DOI: 10.1007/978-981-15-2341-0
Yi Wang
Kristian Martinsen
Tao Yu
Kesheng Wang Editors
Advanced
Manufacturing
and Automation IX
Lecture Notes in Electrical Engineering
Volume 634
Series Editors
Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli
Federico II, Naples, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán,
Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore,
Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute for Technology,
Karlsruhe, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid,
Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität
München, Munich, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA,
USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martin, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra,
Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Laboratory, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston
North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University,
Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Junjie James Zhang, Charlotte, NC, USA
The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments
in Electrical Engineering - quickly, informally and in high quality. While original research
reported in proceedings and monographs has traditionally formed the core of LNEE, we also
encourage authors to submit books devoted to supporting student education and professional
training in the various fields and application areas of electrical engineering. The series covers
classical and emerging topics concerning:
• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS
For general information about this book series, comments or suggestions, please contact [email protected].
To submit a proposal or request further information, please contact the Publishing Editor in
your country:
China
Jasmine Dou, Associate Editor ([email protected])
India, Japan, Rest of Asia
Swati Meherishi, Executive Editor ([email protected])
Southeast Asia, Australia, New Zealand
Ramesh Nath Premnath, Editor ([email protected])
USA, Canada:
Michael Luby, Senior Editor ([email protected])
All other Countries:
Leontina Di Cecco, Senior Editor ([email protected])
** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex,
SCOPUS, MetaPress, Web of Science and Springerlink **
Editors

Yi Wang
School of Business, Plymouth University, Plymouth, UK

Kristian Martinsen
Department of Manufacturing and Civil Engineering, NTNU, Gjøvik, Norway

Tao Yu
Shanghai Second Polytechnic University, Shanghai, China

Kesheng Wang
Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Sør-Trøndelag Fylke, Norway
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface
IWAMA 2019 takes place at Plymouth University, UK, 21–22 November 2019,
organized by Plymouth University, Norwegian University of Science and
Technology and Lingnan Normal University. The programme is designed to
improve manufacturing and automation technologies for the next generation
through discussion of the most recent advances and future perspectives and to
engage the worldwide community in a collective effort to solve problems in
manufacturing and automation.
The workshop focuses on the transformation of present factories towards reusable, flexible, modular, intelligent, digital, virtual, affordable, easy-to-adapt, easy-to-operate, easy-to-maintain and highly reliable “smart factories”. Therefore, IWAMA 2019 has mainly covered five topics in manufacturing engineering:
1. Industry 4.0
2. Manufacturing Systems
3. Manufacturing Technologies
4. Production Management
5. Design and optimization
All papers submitted to the workshop have been subjected to strict peer review
by at least two expert referees. Finally, 84 papers have been selected to be included
in the proceedings after a revision process. We hope that the proceedings will not
only give the readers a broad overview of the latest advances, and a summary of the
event, but also provide researchers with a valuable reference in this field.
On behalf of the organization committee and the international scientific com-
mittee of IWAMA 2019, I would like to take this opportunity to express my
appreciation for all the kind support, from the contributors of high-quality keynotes
and papers and all the participants. My thanks are extended to all the workshop
organizers and paper reviewers, to Plymouth University and NTNU for the financial
support and to co-sponsors for their generous contribution. Thanks are also given to
Jian Wu, Jin Yuan, Yun Chen, Bo Chen and Tamal Ghosh, for their hard editorial
work of the proceedings and arrangement of the workshop.
Yi Wang
Chair of IWAMA 2019
Organization
Secretariat
Jian Wu
Tamal Ghosh
Contents
Industry 4.0
Knowledge Discovery and Anomaly Identification for Low
Correlation Industry Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Zhe Li and Jingyue Li
A Method of Communication State Monitoring for Multi-node
CAN Bus Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Lu Li-xin, Gu Ye, Li Gui-qin, and Peter Mitrouchev
Communication Data Processing Method for Massage Chair
Detection System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Lixin Lu, Yujie Jin, Guiqin Li, and Peter Mitrouchev
Development of Data Visualization Interface in Smart Ship System . . . 219
Guiqin Li, Zhipeng Du, Maoheng Zhou, Qiuyu Zhu, Jian Lan, Yang Lu,
and Peter Mitrouchev
Feature Detection Technology of Communication Backplane . . . . . . . . . 227
Guiqin Li, Hanlin Wang, Shengyi Lin, Tao Yu, and Peter Mitrouchev
Research on Data Monitoring System for Intelligent Ship . . . . . . . . . . . 234
Guiqin Li, Xuechao Deng, Maoheng Zhou, Qiuyu Zhu, Jian Lan,
Hong Xia, and Peter Mitrouchev
Research on Fault Diagnosis Algorithm Based on Multiscale
Convolutional Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Xiaolong Li, Lilan Liu, Xiang Wan, Lingyan Gao, and Qi Huang
Proactive Learning for Intelligent Maintenance in Industry 4.0 . . . . . . . 250
Rami Noureddine, Wei Deng Solvang, Espen Johannessen, and Hao Yu
An Introduction of the Role of Virtual Technologies and Digital
Twin in Industry 4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Mohammad Azarian, Hao Yu, Wei Deng Solvang, and Beibei Shu
Model Optimization Method Based on Rhino . . . . . . . . . . . . . . . . . . . . . 267
Mengyao Dong, Zenggui Gao, and Lilan Liu
Construction of Equipment Maintenance Guiding System
and Research on Key Technologies Based on Augmented Reality . . . . . 275
Lingyan Gao, Fang Wu, Lilan Liu, and Xiang Wan
Manufacturing System
Common Faults Analysis and Detection System Design
of Elevator Tractor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Yan Dou, Wenmeng Li, Yang Ge, and Lanzhong Guo
Balanced Maintenance Program with a Value Chain Perspective . . . . . 317
Jon Martin Fordal, Thor Inge Bernhardsen, Harald Rødseth,
and Per Schjølberg
Construction Design of AGV Caller System . . . . . . . . . . . . . . . . . . . . . . 325
Zhang Xi, Wang Xin, and Yuanzhi Xu
A Transfer Learning Strip Steel Surface Defect Recognition
Network Based on VGG19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Xiang Wan, Lilan Liu, Sen Wang, and Yi Wang
Visual Interaction of Rolling Steel Heating Furnace Based
on Augmented Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Bowen Feng, Lilan Liu, Xiang Wan, and Qi Huang
Design and Test of Electro-Hydraulic Control System
for Intelligent Fruit Tree Planter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Ranguang Yin, Jin Yuan, and Xuemei Liu
Lean Implementing Facilitating Integrated Value Chain . . . . . . . . . . . . 358
Inger Gamme, Silje Aschehoug, and Eirin Lodgaard
Developing of Auxiliary Mechanical Arm to Color Doppler
Ultrasound Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Haohan Zhang, Zhiqiang Li, Guowei Zhang, and Xiaoyu Liu
The Importance of Key Performance Indicators that Can
Contribute to Autonomous Quality Control . . . . . . . . . . . . . . . . . . . . . . 373
Ragnhild J. Eleftheriadis and Odd Myklebust
Manufacturing Technology
Effect of T-groove Parameters on Steady-State Characteristics
of Cylindrical Gas Seal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Junfeng Sun, Meihong Liu, Zhen Xu, Taohong Liao, and Xiangping Hu
Simulation Algorithm of Sample Strategy for CMM Based
on Neural Network Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Petr Chelishchev and Knut Sørby
Digital Modeling and Algorithms for Series Topological
Mechanisms Based on POC Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
Lixin Lu, Hehui Tang, Guiqin Li, and Peter Mitrouchev
Optimization of Injection Molding for UAV Rotor Based
on Taguchi Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Xiong Feng, Zhengqian Li, and Guiqin Li
Assembly Sequence Optimization Based on Improved
PSO Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Xiangyu Zhang, Lilan Liu, Xiang Wan, Kesheng Wang, and Qi Huang
Influence of Laser Scan Speed on the Relative Density and Tensile
Properties of 18Ni Maraging Steel Grade 300 . . . . . . . . . . . . . . . . . . . . 466
Even Wilberg Hovig and Knut Sørby
Production Management
The Innovative Development and Application of New Energy
Vehicles Industry from the Perspective of Game Theory . . . . . . . . . . . . 535
Jianhua Wang and Junwei Ma
Survey and Planning of High-Payload Human-Robot Collaboration:
Multi-modal Communication Based on Sensor Fusion . . . . . . . . . . . . . . 545
Gabor Sziebig
Research on Data Encapsulation Model for Memory Management . . . . 552
Lixin Lu, Weixing Zhao, Guiqin Li, and Peter Mitrouchev
Research on Task Scheduling Design of Multi-task System
in Massage Chair Function Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
Lixin Lu, Leibing Lv, Guiqin Li, and Peter Mitrouchev
About the Editors
Kristian Martinsen took his Dr. Ing. degree at the Norwegian University of Science and Technology (NTNU) in 1995, with the topic “Vectorial Tolerancing in Manufacturing”. He has 15 years of experience from the manufacturing industry. He is a professor at the Faculty of Engineering, Department of Manufacturing and Civil Engineering, of the Norwegian University of Science and Technology (NTNU), and is the manager of the Manufacturing Engineering research group in this department. He is a corporate member of the International Academy for Production Engineering and a member of the High-Level Group of the EU technology platform for manufacturing, MANUFUTURE. He is the manager of the Norwegian national infrastructure for manufacturing research laboratories, MANULAB, and the international coordinator for the Norwegian Centre for Research-based Innovation SFI MANUFACTURING. He has published many papers in international journals and conferences. His major research areas are within the fields of measurement systems, variation/quality management, tolerancing and industry 5.0.
Performance Analysis of Ball Mill Liner Based on DEM-FEM Coupling
Z. Xu et al.
1 Introduction
As important equipment in the field of mineral processing, ball mills play a crucial role in the normal operation of the national economy [1–3]. For ball mills, wear and tear of the lifting bars is the main cause of failure of the mill liner. According to historical data [4], wet ball mills in Chinese metal mines consumed more than 110,000 tons of liners in 2004. Meanwhile, green environmental protection is the mainstream trend of world economic development, which adds new requirements on equipment efficiency to save energy. Therefore, selecting a suitable wear-resistant material and designing a reasonable liner structure are of great importance for the ball mill.
Two biological characteristics of the clam shell and the ostrich toe give them excellent wear and impact resistance [5]. These characteristics are closely related to the surface topography, hierarchy, and materials of the organism. The unit shape of the wear-resistant part of the ostrich toe is a spherical-crown-type convex body. This part of the toe is in contact with the ground and experiences frequent abrasive wear with sand. By measuring the cross section, it is found that the ratio of the height, width and adjacent spacing of the bottom of the convex body is about 5:3:1. Similarly, the ratio of the intercostal groove, rib width and rib height can be obtained from the surface morphology of the clam shell, and it is about 4:3:1.
In this paper, a ball mill prototype with a cylinder size of 600 mm × 400 mm is used as the research platform, and the size of the incoming ore is 4 mm–12 mm. The proportional relationships between the different parameters of the extracted biometrics, discussed in Sect. 2, are used to design lifting bars with different characteristic surfaces. According to the relationship between the stripes and the direction of motion of the cylinder, the stripes are divided into horizontal stripes and vertical stripes. The three-dimensional model of the lifting bar with the stripe feature is shown in Fig. 1. In the lifting bar model, the width of the horizontal and vertical stripes is M = 6 mm, the height is H = 2 mm, and the spacing between stripes is L = 8 mm. At the same time, according to the wear-resisting characteristics of ostrich toes, a lifting bar with convex features on the surface is designed. Each unit of the convex hull feature has a diameter of M = 6 mm and a height of H = 2 mm, and the spacing between units is L = 8 mm. The designed lifting bar with the convex hull feature is shown in Fig. 2.
Fig. 1. Lifting bar with stripe features
Fig. 2. Lifting bar with convex hull feature
The parameters of the mill in the simulation are as follows: the filling rate is 0.4 [6]; the diameters of the selected steel balls are 40 mm, 30 mm and 20 mm, matched to the grinding medium using the equal-number method; the material of the steel balls is ZGMn13; and the selected mill rotation rate is 65%.
Table 1. The mass distribution of ore and grinding medium of each particle in the simulation

Particle size (mm)   Category          Shape                                      Mass (kg)
4                    Ore               Sphere, Tetrahedron                        8.568
7.5                  Ore               Sphere, Tetrahedron, Hexahedron, Diamond   11.424
12                   Ore               Sphere, Tetrahedron                        8.568
20, 30, 40           Grinding medium   Sphere                                     216
The outlines of the particles are limited to spheres, tetrahedrons, hexahedrons, and
diamonds. The mass distribution of particles and the shape in the simulation model are
shown in Table 1.
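For orientation, the equal-number method mentioned above can be illustrated with a short calculation: the same count of balls is assumed for each of the three diameters, and that count follows from the total medium mass of 216 kg in Table 1. The steel density used below is a nominal value and is an assumption for illustration, not a figure from the paper.

```python
import math

# Illustrative sketch of the "equal number method" for sizing the ball charge.
# Assumption: the same number N of balls is used for each diameter, and the
# density of the ball steel is taken as ~7800 kg/m^3 (nominal, not from the paper).
DENSITY = 7800.0                      # kg/m^3, assumed
DIAMETERS = [0.040, 0.030, 0.020]     # m (40 mm, 30 mm, 20 mm balls)
TOTAL_MEDIUM_MASS = 216.0             # kg, total grinding medium mass from Table 1

# Mass of a single ball of each size
single_masses = [DENSITY * math.pi / 6.0 * d**3 for d in DIAMETERS]

# Equal-number method: N * (m40 + m30 + m20) = total medium mass
n_per_size = TOTAL_MEDIUM_MASS / sum(single_masses)

for d, m in zip(DIAMETERS, single_masses):
    print(f"d = {d*1e3:.0f} mm: one ball = {m:.3f} kg, "
          f"mass of this size = {n_per_size * m:.1f} kg")
print(f"approx. number of balls per size: {n_per_size:.0f}")
```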
Fig. 3. The motion of materials in the ball mill at different time points: (a) static accumulation state after blanking; (b) the motion state of the material at 2.9 s
The static analysis of the liner is based on the DEM-FEM coupling method. The forces obtained in the EDEM software and its simulation files are imported into the finite element module, so the meshing in the coupling analysis is consistent with the discrete element simulation. In this way, a more accurate solution can be obtained.
The wear of the lifting bar is related to the impact of the material and the medium on the lifting surface. Therefore, the equivalent stress distribution on the surface of the lifting bar is analyzed in detail. Figures 4, 5, 6 and 7 show the equivalent stress distribution of the ball mill cylinder model with the four different surface-feature lifting bars in a certain period. The results in Fig. 4 illustrate that the stress on the smooth-surface lifting bar is most concentrated in the joint portion between the lifting bar and the liner substrate during a certain period of mill operation. Secondly, stress concentration occurs where the lifting surface of the bar meets the upper surface. This is because the contacts between the material and the medium are most severe in these places, and the stresses at both ends of the lifting bar are also higher than the stress in the middle of the bar. It can be seen from Fig. 5 that the equivalent stress on the lateral-stripe-feature lifting bar is mainly concentrated in the region adjacent to the lifting bar and the liner base, and there is no obvious stress concentration. The stress between the bionic features is lower than that on the bionic features. Judging from the stress and its distribution, the values are smaller than those of the smooth-surface feature.
Figure 6 shows the stress distribution of the lifting bar with the longitudinal stripe feature. The stress is mainly distributed at the end of the longitudinal feature of the lifting bar and near the liner substrate. This is related to the geometric dimension of the end of the feature and the size of the material. Since particles stay between the bionic feature and the liner substrate, the stress becomes larger there. In general, the stress value of the longitudinal-feature lifting bar is smaller than that of the smooth surface. In Fig. 7, the distribution of the overall stress is uniform compared with the other three lifting bars. At the same time, the stress value on the feature unit is smaller than that of the lateral-feature bar. These results show that the stress-relieving effect of the convex hull type bionic feature is better than that of the other bionic features in the simulation. However, on the matrix between the features, there is little difference from the vertical-stripe lifting bar or even the smooth-surface lifting bar, so the protective effect on the substrate is limited. To study the wear zone distribution of each lifting bar during the grinding process more accurately, the “Geometry Bin” option of the EDEM post-processing module is used. The concentrated areas of the liner and lifting bars are divided into two parts, and the areas are numbered as shown in Fig. 8.
In the post-processing module of EDEM, the “Archard Wear” option in the geometry section is selected in “File Export – Results Data”, since this option evaluates the amount of wear based on the accumulated energy. The trend can also verify the cumulative energy received by the liner and the lifting bar in this simulation. The accumulated energy received by the liner and the lifting bar is plotted after the ball mill models with different surface features have operated for a period, and this result can be used to compare the amount of wear in the same area intuitively. The relationship is shown in Fig. 9.
Fig. 8. The divided area of the liner and the lifting bars
Fig. 9. The wear distribution of specified area in different liner and lifting bars
As shown in Fig. 9, for the liner on which the smooth-surface lifting bar is mounted, the main wear position is located between the upper surface of the lifting bar and the non-lifting surface, i.e., between region 9 and region 14. The amount of wear in region 9 is 5.19E−7 mm. This is because of the right-angled trapezoidal liner used in this simulation: stress concentration appears at the common edge of the non-lifting surface (region 9) and the upper surface. At the same time, the transition region between the upper surface and the lifting surface (region 13) undergoes relatively intense friction during the grinding process as the material is raised to the highest point, which is why the amount of wear there is correspondingly high. The wear in areas 10 to 12 is also relatively large because, between the moment the upper surface contacts the material and the moment the material begins to drop, the material does more work on these locations than on the lifting surface, so the amount of wear is slightly larger than in areas 13 to 15. The wear of the other areas is significantly lower than that in areas 9 to 14.
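The "Archard Wear" quantity exported above is an accumulated-energy estimate of wear, and the regional comparison of Fig. 9 can in principle be reproduced with Archard's law, W = k·F_n·s/H (wear volume proportional to normal load times sliding distance over hardness). The sketch below is only a generic illustration of that per-region bookkeeping; the wear coefficient, hardness and per-region load/sliding data are placeholders, not values from this simulation.

```python
# Minimal sketch of per-region wear bookkeeping with Archard's law:
#   V = k * F_n * s / H
# where k is a dimensionless wear coefficient, F_n the normal load on the region,
# s the accumulated sliding distance and H the surface hardness.
# All numerical inputs below are placeholders for illustration only.

ARCHARD_K = 1e-4          # assumed wear coefficient (dimensionless)
HARDNESS_PA = 2.0e9       # assumed liner hardness, Pa

# Hypothetical accumulated (normal load, sliding distance) pairs per region number,
# standing in for the quantities accumulated for each "Geometry Bin".
region_data = {
    9:  (1500.0, 12.0),   # N, m
    10: (1200.0, 10.0),
    13: (1400.0, 11.0),
    14: (1300.0, 10.5),
}

wear_volume = {
    region: ARCHARD_K * f_n * s / HARDNESS_PA   # m^3
    for region, (f_n, s) in region_data.items()
}

for region, v in sorted(wear_volume.items(), key=lambda kv: -kv[1]):
    print(f"region {region}: wear volume ~ {v:.3e} m^3")
```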
6 Conclusions
In this paper, the stress distribution of the lifting bar and its surrounding area during the
grinding process is analyzed. At the same time, the wear areas of the lifting bar of
different surface features are compared. The results are summarized as follows:
(1) During the grinding process, the stress of the lifting bar is mainly concentrated in the region adjacent to the lifting bar and the liner, and the values at the two ends are larger than in the intermediate part. The smooth-surface lifting bar shows obvious stress concentration during the grinding process, and the concentrated area is the widest among the four lifting bars. A stress relaxation effect occurs in all three different bionic feature units, which reduces the equivalent stress of the lifting bar to a certain extent and improves its wear resistance.
(2) Although the stress on the convex hull type bionic feature unit is significantly reduced, its effect on releasing the equivalent stress in the lifting bar base is lower than that of the horizontal stripe. The order of the stress-relieving effect of the bionic feature units, from high to low, is horizontal stripe, convex hull feature and vertical stripe.
(3) During the grinding process, the wear of the liner surface and the lifting surface of the right-angled trapezoidal lifting bar is significantly higher than that of the liner between the lifting bars, and the most severely worn areas are mainly concentrated near the common edge of the upper surface and the lifting surface.
(4) Compared with the smooth-surface lifting bar, the liner with the bionic feature lifting bar has better wear resistance, and the order of the wear resistance of the liner, from low to high, is horizontal stripes, vertical stripes, and convex hull.
(5) The horizontal-stripe bionic feature lifting bar can extend the replacement period of the mill liner and avoid mill downtime caused by inconsistent replacement periods of the liner and the lifting bar. It can also reduce the number of liner replacements and extend the liner life cycle.
Acknowledgement. The work was fully supported by the Youth Project of Science and
Technology Department of Yunnan Province (No. 2017FD132). We gratefully acknowledge the
relevant organizations.
References
1. Zhao, M., Lu, Y., Pan, Y.: Review on the theory of pulverization and the development of
pulverizing equipment. Min. Metall. 10(2), 36–41 (2001)
2. Zhang, G.: Current status and development of crushing and grinding equipment. Powder
Technol. 4(3), 37–42 (1998)
3. Belov, Brandt: Grinding. Liaoning People’s Publishing House, Liaoning (1954)
4. Li, W.: Market and production of wear-resistant steel parts. In: Proceedings of the Annual
Meeting of Yunnan Wear-Resistant and Corrosion-Resistant Materials 2004, Kunming,
pp. 7–12 (2004)
5. Cao, Z., Wang Dapeng, G.: Research progress of Marsh’s mother-of-pearl and seawater
pearls. J. South. Agric. Sci. 40(12), 1618–1622 (2009)
6. Zhang, X., Dong, W., Zhou, H., et al.: Numerical simulation of wear resistance of ball mill
lifting bars with biomimetic characteristics. Nonferrous Metals (Mineral Process.) 6, 56–62
(2017)
Performance of Spiral Groove Dry Gas
Seal for Natural Gas Considering
Viscosity-Pressure Effect of the Gas
Abstract. Centrifugal compressors used for transporting natural gas are usually
equipped with dry gas seals. The working medium of the seal is usually the
delivered gas, that is, natural gas. In this paper, the natural gas viscosity-pressure
equation is derived from the Pederson mixed gas viscosity model and Lucas
viscosity-pressure model, and the real gas property of natural gas is expressed by
Redlich-Kwong equation. The gas film pressure governing equations proposed
by Muijderman for narrow grooves are modified and solved for the seal faces.
The influences of natural gas viscosity-pressure effect on the sealing charac-
teristics, such as leakage rate and opening force, of spiral groove dry gas seal are
analyzed. Results show that the viscosity-pressure effect has significant influ-
ence on spiral groove dry gas seal. This effect reduces the leakage rate but
increases the opening force, compared to the situation without considering the
viscosity-pressure effect. With the pressure up to 4 MPa, the viscosity-pressure
effect of natural gas is weak and negligible. As the pressure increases, the
viscosity-pressure effect increases. At 12 MPa, the relative deviations of leakage
rate and opening force caused by the viscosity-pressure effect are respectively
−30.6% and 1.65%. Therefore, the analyses indicate that the viscosity-pressure
effect of natural gas needs to be considered when used in high pressure situation.
1 Introduction
In natural gas long-distance pipelines, the compressors used for transporting natural gas
are usually equipped with dry gas seals as their shaft end seals. The working medium of
the seal is usually the delivered gas, that is, natural gas. Typically, the natural gas in the
pipeline is a mixture of different gases, and its components is different from the natural
gas sources. The physical property is different from each other. Viscosity is an
important physical property for dry gas seal, and this property of the natural gas is
closely related to gas components, temperature and pressure. In general, when the
isothermal flow is assumed, the viscosity of the natural gas is a function of composition
and pressure.
Daliri et al. [1] analyzed the viscosity variation with pressure to obtain squeeze film characteristics using a modified Reynolds equation and the Stokes microcontinuum theory. Lin et al. [2] analyzed the effects of viscosity-pressure dependency and studied squeeze films between parallel circular plates lubricated by a non-Newtonian couple stress fluid. According to their results, viscosity-pressure dependency raises the load capacity and lengthens the approaching time of the plates. As to the viscosity-pressure effect on the dry gas seal, Song et al. [3] analyzed the effect of the viscosity-pressure behaviour of nitrogen on the sealing performance using the Lucas model. Their results show that high pressure has significant effects on the opening force, the leakage rate and the gas pressure at the spiral groove root radius. However, nitrogen is a pure gas and does not involve the viscosity relationship of a mixed gas such as natural gas.
As to the spiral groove dry gas seal, the Pederson mixed gas viscosity model and
Lucas viscosity-pressure model are used to express the natural gas viscosity-pressure
effect, and the real gas property of natural gas is expressed by Redlich-Kwong equa-
tion. The gas film pressure governing equations proposed by Muijderman for narrow
grooves are modified and solved for the dry gas seal faces. The dry gas sealing
characteristic parameters such as the opening force and leakage rate are obtained.
2 Model Description
2.1 Geometry Model
The structural model of the spiral groove dry gas seal and geometric model of seal face
are shown in Fig. 1. In the geometric model, ri and ro are the inner and outer radii of
the sealing ring, respectively, and rg is the radius at the root of the spiral groove; ω is the angular velocity of the sealing ring; pi and po are the inlet and outlet pressures; and α is the helix angle.
Fig. 1. Structural model of the spiral groove dry gas seal (a) and geometric model of seal face (b)
$$\eta_o(p_o, T_o) = \eta_0\,\frac{1 + a_1 p_r^{1.3088}}{a_2 p_r^{a_5} + \left(1 + a_3 p_r^{a_4}\right)^{-1}} \qquad (1)$$
Equation (1) is substituted into the Pederson mixed gas viscosity expression [5], which yields the model of the natural gas viscosity-pressure effect:
$$\eta_{mix} = \left(\frac{T_{c,mix}}{T_{c,o}}\right)^{-1/6}\left(\frac{p_{c,mix}}{p_{c,o}}\right)^{2/3}\left(\frac{\bar{M}_{mix}}{M_o}\right)^{1/2}\frac{\alpha_{mix}}{\alpha_o}\,\frac{1 + a_1 p_r^{1.3088}}{a_2 p_r^{a_5} + \left(1 + a_3 p_r^{a_4}\right)^{-1}}\,\eta_0 \qquad (2)$$

$$\bar{M}_{mix} = 1.304\times10^{-4}\left[\left(\sum_{i=1}^{c} n_i M_i^2 \Big/ \sum_{i=1}^{c} n_i M_i\right)^{2.303} - \left(\sum_{i=1}^{c} n_i M_i\right)^{2.303}\right] + \sum_{i=1}^{c} n_i M_i$$
where {a_i, i = 1, …, 5} are correction factors, and p_c and T_c are the critical pressure and critical temperature obtained from the literature [5], respectively.
$$p = \frac{RT}{V - b} - \frac{a}{T^{0.5}\,V(V + b)} \qquad (3)$$
$$pV = ZRT \qquad (4)$$
$$\rho = pM/(ZRT) \qquad (6)$$
The density of the natural gas from Eqs. (5) and (6) is expressed as:

$$\rho_{mix} = \frac{pM/(RT)}{\left[\dfrac{N}{2} + \sqrt{\left(\dfrac{N}{2}\right)^{2} + \left(\dfrac{M}{3}\right)^{3}}\right]^{1/3} + \left[\dfrac{N}{2} - \sqrt{\left(\dfrac{N}{2}\right)^{2} + \left(\dfrac{M}{3}\right)^{3}}\right]^{1/3} + \dfrac{1}{3}} \qquad (7)$$
where a_i and a_j are pure-substance parameters, y_i and y_j are the mole fractions of the pure substances i and j in the mixture, and k_ij is the binary interaction coefficient of the pure substances i and j. These parameters can be found in the literature [6].
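For illustration, the compressibility factor Z in Eq. (4) can be obtained by solving the cubic that results from combining Eqs. (3) and (4); Eq. (7) appears to be the closed-form (Cardano) root of that cubic. The sketch below instead solves the cubic numerically and returns the density of Eq. (6). It is a minimal sketch only: the mixture parameters a, b and M must be supplied from the mixing rules and critical properties discussed above, and the example values in the final comment are generic placeholders.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def rk_density(p, T, a, b, M):
    """Density from the Redlich-Kwong EOS (Eqs. (3)-(6)).

    p : pressure, Pa          T : temperature, K
    a : RK attraction parameter of the mixture (Pa m^6 K^0.5 / mol^2)
    b : RK co-volume of the mixture (m^3/mol)
    M : molar mass of the mixture (kg/mol)

    The RK equation recast in the compressibility factor Z gives the cubic
        Z^3 - Z^2 + (A - B - B^2) Z - A B = 0,
    with A = a p / (R^2 T^2.5) and B = b p / (R T).
    """
    A = a * p / (R**2 * T**2.5)
    B = b * p / (R * T)
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    # keep the largest real root (vapour-like solution)
    z = max(r.real for r in roots if abs(r.imag) < 1e-9)
    return p * M / (z * R * T)          # Eq. (6): rho = p M / (Z R T)

# Example call with placeholder mixture parameters (illustrative only):
# rho = rk_density(p=4e6, T=300.0, a=3.22, b=2.98e-5, M=0.0175)
```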
$$\frac{dp}{dr} = \frac{6\,\eta_{mix}\,S_t\,R\,T}{\pi h^{3}\rho_{mix}}\cdot\frac{1}{r} \qquad (8)$$
St is the mass flow rate of the gas passing through the sealing surface; h and h1 are, respectively, the film thickness of the non-groove and groove areas, and they satisfy the relationship h1 = h + t, where t is the depth of the spiral groove; ω is the angular velocity of rotation of the sealing ring; g1, g5 and g7 are the spiral groove coefficients, which can be obtained from the literature [7].
$$p|_{r=r_i} = p_i, \qquad p|_{r=r_o} = p_o$$
The pressure distribution p(r) of the end face film is obtained, and the end face opening force F is obtained by integrating over the entire end face:

$$F = \int_{r_i}^{r_o} p(r)\,2\pi r\,dr \qquad (10)$$
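A rough numerical sketch of how Eqs. (8) and (10) can be combined: for a trial mass flow rate S_t, Eq. (8) is integrated radially from r_i to r_o, S_t is adjusted (here by bisection) until the computed outer-boundary pressure matches p_o, and the opening force then follows from Eq. (10). The viscosity and density are passed in as functions of pressure (e.g. Eqs. (2) and (7)). This simplified version ignores the groove coefficients g1, g5, g7 and the groove/land film-thickness distinction, so it is only an assumption-laden illustration, not the authors' solution procedure.

```python
import numpy as np

def pressure_and_opening_force(r_i, r_o, h, T, p_i, p_o, mu, rho, n=400):
    """Sketch of the radial integration of Eq. (8) and the force integral Eq. (10).

    mu(p)  : gas viscosity as a function of pressure (e.g. Eq. (2))
    rho(p) : gas density as a function of pressure (e.g. Eq. (7))
    h      : film thickness (groove terms are neglected in this sketch)
    Assumes p_o > p_i; returns (r, p(r), opening force F).
    """
    R_GAS = 518.3          # J/(kg K), assumed specific gas constant of natural gas
    r = np.linspace(r_i, r_o, n)

    def integrate(S_t):
        # Explicit (Euler) radial march of dp/dr from Eq. (8)
        p = np.empty_like(r)
        p[0] = p_i
        for k in range(1, n):
            dpdr = (6.0 * mu(p[k-1]) * S_t * R_GAS * T
                    / (np.pi * h**3 * rho(p[k-1]) * r[k-1]))
            p[k] = p[k-1] + dpdr * (r[k] - r[k-1])
        return p

    # Bisection on the mass flow rate S_t so that p(r_o) matches p_o.
    # The bracket [0, 1] kg/s is a placeholder and may need widening.
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if integrate(mid)[-1] < p_o:
            lo = mid
        else:
            hi = mid
    p = integrate(0.5 * (lo + hi))
    F = np.trapz(p * 2.0 * np.pi * r, r)     # Eq. (10)
    return r, p, F
```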
Fig. 2. The comparison between the current data and reference data [9, 10] (left panel: current data vs. the NIST database; right panel: current data vs. Ref. [9]; both plotted against pressure P/MPa)
The internal pressure boundary condition is 0.1013 MPa, and the external pressure po is 0.6 MPa, 4 MPa, or 12 MPa, respectively. The sealing performance is calculated at different film thicknesses. The results are shown in Figs. 3 and 4.
The viscosity-pressure effect reduces the leakage rate. The reason is that as the pressure increases, the viscosity increases and the gas flow decreases, which results in a decrease in the leakage rate. The fact that |E2| is greater than |E1| indicates that the viscosity-pressure effect has a stronger influence on the real-gas spiral groove dry gas seal than under the ideal gas assumption. When the pressure reaches 12 MPa, the averages of E1 and E2 are −28.622% and −30.6%, respectively. The results show that the viscosity-pressure effect influences the leakage rate of the dry gas seal.
5 Conclusions
For the spiral groove dry gas seal of a centrifugal compressor conveying natural gas, the natural gas viscosity-pressure effect is analyzed based on the narrow groove theory of the spiral groove. The conclusions of the present research are as follows: (1) The viscosity-pressure effect reduces the gas leakage rate but increases the opening force. (2) Up to 4 MPa, the natural gas viscosity-pressure effect is weak. As the pressure increases, the viscosity-pressure effect increases. (3) At 12 MPa, the relative deviations of the leakage rate and opening force caused by the viscosity-pressure effect are respectively −30.6% and 1.6472%. The viscosity-pressure effect of natural gas needs to be considered in high-pressure situations.
References
1. Daliri, M., Jalali-Vahid, D.: Investigation of combined effects of rotational inertia and
viscosity-pressure dependency on the squeeze film characteristics of parallel annular plates
lubricated by couple stress fluid. J. Tribol.-Trans. 137(3), 1–23 (2015)
2. Lin, J.-R., Chu, L.-M., Liang, L.-J.: Effects of viscosity-pressure dependency on the non-
newtonian squeeze film of parallel circular plates. Lubr. Sci. 25(1), 1–6 (2013)
3. Song, P., Ma, A., Xu, H.: The high pressure spiral groove dry gas seal performance by
considering the relationship of the viscosity and the pressure of the gas. In: 23rd
International Conference on FLUID SEALING, pp. 36–72 (2016)
4. Poling, B.E., Prausnitz, J.M., John, P.O.C., et al.: The Properties of Gases and Liquids.
Mcgraw-Hill, New York (2001)
5. Zhang, S., Ma, I., Xu, Y.: Natural Gas Engineering Handbook, pp. 32–44. Petroleum
Industry Press, Beijing (2016). (in Chinese)
6. Chen, Z., Gu, Y., Hu, W.: Chemical Thermodynamics, Third edn. pp. 184–185. Chemical
Industry Press, Beijing (2011). (in Chinese)
7. Muijderman, E.A.: Spiral Groove Bearings, pp. 17–21. Springer, New York (1966).
Bathgate R.H. Trans.
8. National Institute of Standards and Technology: NIST Chemistry Webbook [EB/OL]
9. Sun, H., Wang, J.: Design Manual for Flowmeter Measurement Throttling Device, Second
edn., p. 4. Chemical Industry Press, Beijing (2005). (in Chinese)
10. Deng, J., Song, F., Chen, J.: Research the simulation software of Realpipe-gas in natural gas
long-distance pipelines. Oil Gas Storage Transp. 9(30), 659–662 (2011). (in Chinese)
11. Sun, X., Song, P.: Analysis on the performance of dry gas seal in centrifugal compressor for
transporting natural gas. J. Drain. Irrig. Mach. Eng. 1(36), 55–62 (2018). (in Chinese)
Analysis of Residual Stress for Autofrettage
High Pressure Cylinder
1 Introduction
Domestic researchers have also done a lot of work on the autofrettage model of the high pressure cylinder. Huang [5] proposed an autofrettage model considering the material strain-hardening relationship and the Bauschinger effect, based on the actual tensile and compressive stress-strain curves of the material, plane strain, and a modified yield criterion. Yuan [6, 7] studied the effects of the end conditions of the vessel and the material parameters on the residual stress. Based on the third strength theory, Zhu [8] analyzed and demonstrated, using combined graphical and analytical methods, the theoretical relations among the equivalent stress of the total stresses at the elastoplastic juncture, the depth of the plastic zone and reverse yielding, and the load-bearing capacity of an autofrettaged cylindrical pressure vessel. On the basis of a nonlinear kinematic hardening model, Fu and Yang [9] presented a nonlinear combined hardening model, which is used to describe the mechanical properties of an autofrettaged barrel and to calculate the residual stress distribution. Based on the energy dissipation method, Yang [10] derived the fatigue damage and fatigue life functions of composite thick-walled cylinders.
The PEEQ (equivalent plastic strain) diagram of the thick-walled cylinder is shown in Fig. 5. As the autofrettage pressure increases, the equivalent plastic strain gradually increases. Plastic strain begins to appear when the autofrettage pressure is 550 MPa; below 800 MPa the plastic strain increases slowly, while above 800 MPa it starts to increase sharply. When the autofrettage pressure increases to 1200 MPa, the cylinder begins to enter the fully plastic state.
The working load of the water jet high pressure cylinder studied in this paper is 420 MPa, and the maximum stress under the working load of the high pressure cylinder without autofrettage treatment is 699.6 MPa. The maximum stress after autofrettage treatment is shown in Fig. 6. It can be seen from the figure that the maximum working stress begins to decrease once the autofrettage pressure exceeds 550 MPa. When the autofrettage pressure increases to 800 MPa, the working stress drops to a minimum of 495.6 MPa. If the autofrettage pressure continues to increase, the working stress begins to rise again. Therefore, it can be concluded that the working stress takes its lowest value near an autofrettage pressure of 800 MPa.
It can be found that the optimal autofrettage pressure calculated by the formula method is quite different from that calculated with the bilinear hardening (BLH) model. Therefore, in the engineering design process, it is not advisable to simulate the autofrettaged high-pressure cylinder using the BLH model.
The radial distribution of the maximum tangential residual stress is shown in Fig. 7, where green, blue, yellow, and brown represent the tangential residual stress distributions at 800 MPa, 850 MPa, 900 MPa, and 950 MPa, respectively. It can be seen that the maximum tangential residual stress occurs at the inner wall. Along the radial direction, the residual stress first decreases sharply with increasing wall depth and, after a wall depth of 7 mm, enters a relatively gentle phase.
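For orientation only, the classical closed-form solution for an ideal elastic-perfectly-plastic, Tresca-yielding thick cylinder (no Bauschinger effect, no reverse yielding) gives the residual hoop stress by superposing the fully loaded autofrettage field and the elastic unloading field. This is the textbook idealization that the simplified models above approximate, not the Huang-type model used in the finite element analysis, and the radii, yield strength and plastic-zone radius in the example are placeholders rather than the dimensions of the cylinder studied here.

```python
import numpy as np

def residual_hoop_stress(r, a, b, c, sigma_y):
    """Residual hoop stress of an ideal elastic-perfectly-plastic (Tresca)
    autofrettaged cylinder, assuming no Bauschinger effect or reverse yield.

    a, b : inner and outer radii;  c : radius of the plastic zone after autofrettage.
    """
    # Autofrettage pressure needed to push the plastic front to radius c
    p_a = sigma_y * (np.log(c / a) + (b**2 - c**2) / (2.0 * b**2))

    # Hoop stress at full autofrettage pressure
    sig_theta_load = np.where(
        r <= c,
        sigma_y * (1.0 + np.log(r / c) - (b**2 - c**2) / (2.0 * b**2)),  # plastic zone
        sigma_y * c**2 / (2.0 * b**2) * (1.0 + b**2 / r**2),             # elastic zone
    )

    # Purely elastic (Lame) hoop stress removed when the pressure p_a is released
    sig_theta_unload = p_a * a**2 / (b**2 - a**2) * (1.0 + b**2 / r**2)

    return sig_theta_load - sig_theta_unload

# Example with placeholder geometry (not the cylinder of this paper):
r = np.linspace(0.020, 0.040, 200)                 # m
res = residual_hoop_stress(r, a=0.020, b=0.040, c=0.028, sigma_y=900e6)
print(f"residual hoop stress at the bore: {res[0]/1e6:.1f} MPa (negative = compressive)")
```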
The working stress diagram is shown in Fig. 8. The orange curve is the maximum working stress under the ideal elastoplastic model. It can be seen that when the autofrettage pressure is less than 550 MPa, the material shows no strengthening effect; when it is greater than 550 MPa, the working stress enters a significant drop interval. The blue curve is the maximum working stress under the autofrettage material model. The two curves approximately coincide below 750 MPa and begin to separate above 750 MPa. Because of the Bauschinger effect, the autofrettage effect is weakened, so the influence of the Bauschinger effect must be considered in the design process; otherwise, significant deviations will result.
4 Conclusions
(1) By simplifying the water jet high pressure cylinder model, the parameters of the autofrettage model are determined, and the distribution of residual stress is calculated. The value of the optimal autofrettage pressure is obtained.
(2) Huang's model is simplified to facilitate finite element analysis calculations. Based on Abaqus software, a three-dimensional finite element parametric model of the high-pressure cylinder is established, and a method of simulating the autofrettage process of the high-pressure cylinder with multiple load steps and the field-variable method is given. Finite element simulations with the BLH model and the autofrettage model are carried out. The maximum residual stress and working stress under different autofrettage pressures are analyzed, the optimal autofrettage pressure is obtained by comparison, and the distribution of the tangential residual stress along the radial direction under the optimal autofrettage pressure is obtained. The simulation results show that the BLH model does not consider the Bauschinger effect and the reverse yielding effect, so its results deviate greatly from the real situation, whereas the results of the autofrettage model more accurately reflect the distribution of residual stress.
References
1. Parker, A.P.: Autofrettage of open-end tubes—pressures, stresses, strains, and code
comparisons. J. Press. Vessel Technol. 123(271), 8 (2001)
2. Bhatnagar, R.M.: Modelling, validation and design of autofrettage and compound cylinder.
Eur. J. Mechan. 39, 17–25 (2013)
3. Gibson, M.C.: Determination of Residual Stress Distributions in Autofrettaged Thick-
Walled Cylinders. Cranfield University, Cranfield (2008)
4. Peng, X., Balendra, R.: Application of a physically based constitutive model to metal
forming analysis. J. Mater. Process. Technol. 145(2), 180–188 (2004)
5. Huang, X.P., Cui, W.: Effect of bauschinger effect and yield criterion on residual stress
distribution of autofrettaged tube. ASME J. Press. Vessel Technol. 128(2), 212–216 (2006)
6. Yuan, G.: Analysis of residual stress for autofrettaged ultrahigh pressure vessels. Zhongguo
Jixie Gongcheng/China Mech. Eng. 22(5), 536–540 (2011)
7. Yuan, G.: Numerical analysis of residual stress and strength of autofrettage high pressure
cylinder finishing. Mech. Des. Manuf. (03), 229–232 (2015)
8. Zhu, R.: Study on autofrettage of cylindrical pressure vessels. J. Mech. Eng. 46(6), 126–133
(2010)
9. Fu, S., Yang, G.: A nonlinear combined hardening model for residual stress analysis of
autofrettaged thick-walled cylinder. Acta Armamentarii (07) (2018)
10. Yang, Z.Y., Qian, L.F.: Research on fatigue damage of composite thick wall cylinder. Chin.
J. Appl. Mech. 30(03), 378–383 (2013)
Study on the Detection System for Electric
Control Cabinet
Abstract. The electric control cabinet of a massage armchair, as its control unit, has a crucial influence on the normal operation of the massage armchair, so manual detection of the electric control cabinet should be replaced by an efficient intelligent detection system to improve the reliability of the detection results. By combining the characteristics of the electric control cabinet with several detection methods widely used at present, this paper proposes a detection system for the electric control cabinet of the massage armchair to meet the urgent demand for precise and efficient detection of electric control cabinets, with LabVIEW as the software foundation and ADAM data acquisition modules as the hardware support.
1 Introduction
2 System Introduction
The system mainly consists of a detection module and a data analysis module. The detection system of the electric control cabinet mainly aims to confirm whether the control functions of the cabinet are normal. Therefore, this paper regards the electric control cabinet as a “black box”, so less attention is paid to its internal structure and electric control principle, and the emphasis is placed on the output signals generated by the electric control cabinet when given input signals are applied. From this perspective, all the electric control cabinets of massage armchairs share a common characteristic regardless of type: the input and output signals of the electric control cabinet both consist of analog signals and control signals.
The upper computer of this system is programmed in LabVIEW; it controls the operation of the electric control cabinet through an RS-232-to-TTL module in an industrial personal computer (IPC), and it collects the pressure value and analog voltage and controls the peripherals through the RS-485 bus and Advantech ADAM modules. As shown in Fig. 1, the control system drives the data acquisition modules through the bus and implements the control of the detection system and the data management in LabVIEW.
3 Detection Module
Generally, different signal pathways will be used to enter the controller according to
different signal types, roughly including digital input, digital output, analog input,
analog output, technology/frequency inputs and other modules. As shown in the Fig. 2,
the most common method for system structure is to control the ADAM module (I/O
module from Advantech Co., Ltd) on the whole network of RS485 through IPC and
RS232/RS485 conversion modules. Each ADAM module is connected to RS485 and
operate independently without any data exchange between modules to achieve data
transfer and data receive.
In this detection system, the output signals of the electric control cabinet include the pressure value, analog voltage and digital quantities, which can be acquired through the I/O modules. Therefore, the structure of a distributed data acquisition and control system built on the RS-485 bus is very appropriate for this system.
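As a concrete illustration of the distributed RS-485 acquisition described above, the sketch below polls one analog-input module over a serial line using pyserial. The "#AA" ASCII read command follows the convention commonly used by ADAM-4000-series modules, but the exact command set, module address, port name and framing are assumptions that must be checked against the module manual and the actual configuration.

```python
import serial  # pyserial

# Minimal sketch of polling an analog-input module on an RS-485 network through
# an RS-232/RS-485 converter. Port name, baud rate, module address and the
# "#AA" ASCII read command are assumptions; verify them against the module manual.
PORT = "COM3"          # hypothetical serial port of the IPC
ADDRESS = "01"         # hypothetical module address on the RS-485 bus

def read_analog_inputs(port=PORT, address=ADDRESS):
    with serial.Serial(port, baudrate=9600, timeout=1.0) as bus:
        bus.write(f"#{address}\r".encode("ascii"))     # assumed "read all channels" command
        reply = bus.read_until(b"\r").decode("ascii").strip()
        # A typical reply concatenates one signed reading per channel,
        # e.g. ">+1.2345+0.0000..."; parsing is left schematic here.
        return reply

if __name__ == "__main__":
    print(read_analog_inputs())
```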
For the detection of the airbags, the output pressure of the air pump is measured with an air gauge, and the working condition of each airbag is judged from the high or low levels controlling the solenoid valves. For the detection of the movements, a set of load resistances in the detection system is used to simulate the working condition of the electric control cabinet, so that the output voltage under the simulated load can be measured and it can be judged whether the corresponding function meets the requirements. Finally, after the detection is completed, the data are stored in a local database to generate a report, and the system is reset for the next detection.
The system processes the data measured by a barometer to determine whether the air pump function of the electric control cabinet is normal. However, owing to the interference of various external factors, the collected signals inevitably contain burrs. Therefore, the signal should be filtered to suppress sudden disturbances caused by abrupt interference or by the sensor itself. When the sampling rate of the ADC is higher than the actual demand and the controller has enough time to acquire the sensor several times, the median filtering method is generally adopted, which essentially determines the filter output by minimizing the sum of the absolute errors.
Its mathematical principle is as follows: let a set of observations be $\{x_i\}\ (1 \le i \le N)$ and find the best approximation $x$ of $\{x_i\}$ that minimizes the sum of the absolute errors. Then

$$Q = \sum_{i=1}^{N} |x - x_i| \qquad (1)$$

$$\frac{dQ}{dx} = \frac{d}{dx}\sum_{i=1}^{N} |x - x_i| = \frac{d}{dx}\sum_{i=1}^{N}\left[(x - x_i)^2\right]^{1/2} = \sum_{i=1}^{N}\frac{x - x_i}{|x - x_i|} = \sum_{i=1}^{N}\operatorname{sign}(x - x_i) = 0 \qquad (2)$$
To satisfy Eq. (2), and thus minimize Q in Eq. (1), x should take the value of {x_i} that lies in the middle position when the observations are sorted by size, i.e., the median. In this system, the ADC sampling rate is set to 4.4 kHz. After each data acquisition ends, a DMA interrupt is generated, and the sensor data of the different channels are copied to the correspondingly labelled elements of the array ADC1_ConvertedValue; the elements of the array are updated again after every 8 conversions. At this point, the array is placed into a two-dimensional buffer as one column of a two-dimensional array. The buffer uses a first-in, first-out (FIFO) structure, so the five latest 8-channel sensor values are always stored in the buffer and refreshed at a frequency of 4.4 kHz. According to the control instruction from the host computer, when the controller needs to collect and filter data for different channels, the program calls the filter. If the buffer is not yet full, the filter directly returns the collected 8-channel values; otherwise, it performs 5 cycles of bubble sorting on the 5 data values of each of the 8 rows and returns the median value of each row. The data are stored in the global variable ADC1_filterValue after a data acquisition is completed. The exact delay is based on the acquisition frequency given in the instruction, and the filter is called again after that time has elapsed, until all points are collected.
The data acquisition frequency differs between detection functions. When the data sampling rate is low, each new element in the buffer is obtained by taking the median of five continuously sampled values, so the element time interval is shorter and the pressure sampling value is more accurate. When the sampling frequency is 4.4 kHz, one column of data is added to the filter buffer each time and the median is extracted on a rolling basis, which does not limit the sampling rate while preserving the filtering effect. The following figures compare the collected pressure waveform before and after filtering: Fig. 3 shows the original acquired waveform, and Fig. 4 shows the result after the filter is applied. It can be seen that the filter removes the pulse interference well and makes the extraction of subsequent features much smoother.
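The buffering-and-median logic described above can be summarized as: keep the last five samples of every channel and return the per-channel median once the buffer is full. The sketch below mirrors that behaviour in Python for clarity (the real filter runs as interrupt-driven firmware on the controller); the channel count and buffer depth are taken from the description above, and everything else is schematic.

```python
from collections import deque

N_CHANNELS = 8     # sensor channels copied from ADC1_ConvertedValue
DEPTH = 5          # five latest samples per channel, as described above

# One FIFO buffer per channel; deque(maxlen=...) drops the oldest sample
# automatically, mimicking the rolling two-dimensional buffer.
buffers = [deque(maxlen=DEPTH) for _ in range(N_CHANNELS)]

def push_samples(samples):
    """Store one new 8-channel acquisition (called after each DMA interrupt)."""
    for ch, value in enumerate(samples):
        buffers[ch].append(value)

def filtered_values():
    """Return the median of the last DEPTH samples for every channel.

    If a buffer is not yet full, the latest raw value is returned for that
    channel, matching the "buffer not full -> return collected value" behaviour.
    """
    out = []
    for buf in buffers:
        if len(buf) < DEPTH:
            out.append(buf[-1])
        else:
            out.append(sorted(buf)[DEPTH // 2])   # median of 5
    return out

# Example: a spike on channel 0 is suppressed once the buffer is full.
for sample in ([10, 0, 0, 0, 0, 0, 0, 0],
               [11, 0, 0, 0, 0, 0, 0, 0],
               [400, 0, 0, 0, 0, 0, 0, 0],   # burst disturbance
               [12, 0, 0, 0, 0, 0, 0, 0],
               [11, 0, 0, 0, 0, 0, 0, 0]):
    push_samples(sample)
print(filtered_values()[0])   # -> 11, the spike is rejected
```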
5 System Development
LabVIEW, the development environment used for this detection system, has its own unique advantages. Firstly, building a human-computer interface is easier and faster than in other languages. Secondly, LabVIEW provides serial communication controls that allow researchers to develop serial communication programs rapidly. Besides, its functions offer great convenience to users. This system combines LabVIEW with a distributed system to build a highly efficient and practical detection system in a simple way.
Modular software design is used for the programming of the control system, with the software structure shown below (Fig. 5):
In the design and development of this system, the VISA serial communication controls of LabVIEW are used to connect to the hardware equipment, and the ADAM communication protocol is used to let the upper computer control the peripheral equipment, such as reading the output value of the air pump collected through the air pressure sensor. Meanwhile, the ADAM modules provide current and voltage measurement functions, so the current and voltage of the analog load resistances can be measured directly, achieving precise and highly efficient data acquisition (Fig. 6).
The upper computer program is mainly divided into four parts: ADAM operation, function detection, cabinet command and custom control. Each part includes the corresponding VI modules. A modular design is adopted: each function detection item of the electric control cabinet is implemented as a module (a subroutine). The advantage of this is that when new types of electric control cabinet are added, the corresponding program modules can be reused as long as the hardware support is satisfied, which is convenient for later maintenance.
6 Results
Based on the requirement analysis of the electric control cabinet of the massage armchair, this paper proposes a detection system structure for the electric control cabinet based on a distributed network. LabVIEW is adopted as the upper computer to control the electric control cabinet and read the feedback signals through the serial port, and distributed control of each ADAM module is achieved through the RS-485 bus, so that the detection system has good real-time responsiveness and disturbance-rejection capability. In order to detect whether the electric control cabinet can drive a motor normally, load resistances are used to simulate the internal resistance of the motor under working conditions, which makes the detection procedure convenient and efficient, guarantees the accuracy of the detection results and facilitates use. The detection system for the electric control cabinet of the massage armchair proposed in this paper provides an integrated environment for subsequent intelligent production (Fig. 7).
References
1. Wang, D.: Application of automated inspection in the era of “Industry 4.0”. Sci. Technol.
(12), 109 (2016)
2. Chen, J.J.: Industrial automation technology in various engineering fields. Silicon Val. 5, 129
(2010)
3. Su, D.D.: Research and Design of Adaptive Control System of Massage Chairs. Anhui
University of Technology (2017)
4. Yang, S.X., Wang, X., Wu, D.J.: The design of test system for power plant control panel.
Mob. Power Veh. (01), 19–20+35 (2011)
5. Lan, J.: Design of Automatic Test System for Electrical Cabinet of Metro Vehicle Control
System. University of Science and Technology, Nanjing (2014)
6. Mao, L., Sun, D.: A method of pressure sensor dynamic digital filter. Sens. Instrum. 24(12–1),
127–128 (2008)
7. Tian, Y.: Development of electronic control function inspection system for mining equipment.
Ind. Miner. Autom. 37(8), 165–167 (2011)
8. Ping, Y.B.: Research on Control System Based on RS-485 Network. Southwest Jiaotong
University (2003)
Effects of Remelting on Fatigue Wear
Performance of Coating
1 Introduction
Flame spray welding (powder welding) is a method to make a coating by heating a self-
fluxing alloy powder on surface with an oxy-acetylene flame or other heat source. With
this method, a spray coating on the surface of metal substrate could be obtained. The
method is well to reduce the pores and oxide slag in sprayed layer. And, a fusion layer
between metal substrate and coating would be generated and improve the compactness
and bonding strength of the coating greatly. Moreover, the surface of workpiece will
achieve an excellent corrosion resistance. Due to its performance and wear resistance
and ability to withstand higher stresses, it is widely used in aerospace, mechanical
engineering, petrochemical and other fields.
In recent decades, many scholars [1–5] have studied the safety and reliability of the interface of powder flame spray weldments, whereas the fatigue performance and wear resistance of flame spray weldments subjected to different remelting times have been studied little. In this study, 40Cr was used as the metal substrate and Ni60A nickel-based self-fluxing alloy powder as the coating material. After flame spray welding, flame remelting was applied. In the flame remelting process the flame temperature was kept constant, and experiments were carried out to study the effect of the remelting time. In previous studies, the bending behavior and torsion fatigue properties of samples after various remelting-time treatments were worked out [6–8]. The main task of this study is to investigate the wear resistance of the treated coating surface. Through parametric study and analysis, the optimization of process parameters, the basic theory of wear resistance and the prediction of fatigue wear life can be explored.
2 Experimental Procedure
2.1 Preparation of Spraying Material and Coating Samples
The spraying material used in this study is Ni60A alloy powder, a kind of self-fluxing alloy powder. With this material, no external welding flux is necessary during flame spray welding. Moreover, the alloy composition provides deoxidation and slagging behaviour during remelting, which greatly improves the wetting performance. Meanwhile, a low-melting-point alloy that is well metallurgically bonded can be obtained. The melting point of 1050–1100 °C ensures that only the sprayed coating layer melts during remelting, with no influence on the substrate metal. The hardness of the coating layer was measured as 55–65 HRC. A good solid-state flow property was also observed, and the particles are spherical with a granularity of 200. The chemical composition in mass percent is 13.7% Cr, 2.96% B, 4.40% Si, 2.67% Fe, 0.60% C, and the balance Ni.
Before flame spray welding, the surface of the substrate metal was derusted and
degreased. Then, the coating layer is prepared by sandblasted and roughened before
flame thermal spraying. The process parameters are shown in Table 1 [9].
The overall process consisted of surface pretreatment, preheating, pre-spraying and powder spraying. After spraying, the sample was machined into a standard specimen of Φ24 mm × 7.9 mm (thickness) with a coating thickness of 1 mm. The samples were then processed by flame remelting. A constant flame temperature with four different remelting times was used: 2 min, 5 min, 10 min and 12 min. After remelting, the samples were cooled slowly to room temperature in asbestos powder.
rubber ball sample, a Si3N4 ceramic ball of Φ10 mm with an amplitude of 1 mm was used. A test load of 30 N at 20 Hz was applied for 30 min. The three-dimensional morphology of the wear scar of the sample coating was measured with a non-contact three-dimensional surface profilometer (ADE, USA). With this profilometer, the wear volume of the coating could be calculated.
Fig. 1. Micro wear morphology of coating after different post-fusing: (a) 2 min; (b) 5 min;
(c) 10 min; (d) 12 min
Fig. 2. Wear volume of coating after different post-fusing: (a) 2 min; (b) 5 min; (c) 10 min;
(d) 12 min
remelting for 10 min began to fluctuate after 200 s, while in the period from 800 s to 1200 s the friction coefficient dropped back to a low value and then showed a fluctuating growth. To this end, the mean friction coefficients under the different conditions were calculated. The mean friction coefficient after remelting for 2 min was 0.661842, after 5 min it was 0.7253625, after 10 min it was 0.6818065, and after 12 min it was 0.6893495. Hence, the order of the friction coefficients is 2 min < 10 min < 12 min < 5 min, and the friction coefficient of the coating remelted for 2 min was the smallest.
Fig. 3. Friction coefficient of coating after different post-fusing: (a) 2 min; (b) 5 min; (c) 10 min;
(d) 12 min
Secondly, Fig. 3 shows that the friction coefficient is relatively stable during the first 200 s of the test. Although the mean value fluctuated slightly, this indicates that the friction coefficient is mainly determined by the coating structure in this period, and the influence of strength and defects on the friction coefficient is small. The mean friction coefficient of each coating layer in this period can be obtained by calculation. The mean friction coefficient of the coating remelted for 2 min is 0.505657, for 5 min it is 0.52132, for 10 min it is 0.41205, and for 12 min it is 0.526977. Hence, the order of the friction coefficients is 10 min < 2 min < 5 min < 12 min. Comparing this with the wear volumes of the coatings, the friction coefficient of the coating remelted for 2 min is smaller than that of the coating remelted for 5 min.
This indicates that there is no absolute correspondence between wear resistance and friction coefficient. Hence, it is necessary to study the microscopic morphology of each coating layer.
According to the wear formula proposed by Czichos [10], the wear rate remains constant during the stable wear period, and the wear volume is a function of time, W = Ct, where W is the wear volume, t is the wear time and C is a constant. Therefore, the wear volume versus time curve in the stable wear period can be fitted to obtain the constant C, and the fatigue wear life of thermally sprayed parts under the same conditions can be predicted. The product can then be repaired or replaced in time, ensuring its reliability and economy.
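As a minimal illustration of this life-prediction step, the sketch below (Python with NumPy) fits the linear wear law W = Ct to hypothetical measurements from the stable wear period and extrapolates the time at which an assumed allowable wear volume is reached; the data values and the allowable limit are illustrative only and are not taken from this paper.

import numpy as np

# Hypothetical measurements from the stable wear period (illustrative values).
t = np.array([600.0, 900.0, 1200.0, 1500.0, 1800.0])   # wear time, s
W = np.array([0.012, 0.018, 0.024, 0.031, 0.036])       # wear volume, mm^3

# Least-squares fit of W = C*t (a line through the origin).
C = np.sum(W * t) / np.sum(t * t)

# Predict the fatigue wear life for an assumed allowable wear volume.
W_allow = 0.5                 # mm^3, assumed service limit
t_life = W_allow / C          # predicted wear life, s

print(f"fitted wear rate C = {C:.3e} mm^3/s, predicted life = {t_life:.0f} s")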
5 Conclusions
(1) Under the same test conditions, the wear volume is directly related to the overall wear resistance of the coating: the smaller the wear volume, the better the overall wear resistance.
(2) The friction coefficient is directly related to the frictional wear resistance of the coating: a smaller value means better frictional wear resistance, but it does not indicate the overall wear resistance of the coating.
(3) The wear resistance is best when the remelting time is appropriate; when the remelting time is insufficient or too long, the wear resistance is poor.
(4) Fitting the wear volume versus time curve during the stable wear period yields the constant C in the formula W = Ct, so the fatigue wear life of the thermally sprayed part can be predicted under the same conditions.
References
1. Zhang, X.C., Xu, B.S., Wang, Z.D., Tu, S.T.: Failure mode and fatigue mechanism of laser-
remelted plasma-sprayed Ni alloy coating in rolling contact. Surf. Coat. Technol. 205, 3119–
3127 (2011)
2. Berger, L.M., Lipp, J., Spatzier, J., Bretschneider, J.: Dependence of the rolling contact
fatigue of HVOF-Sprayed WC-17%Co hardmetal coatings on substrate hardness. Wear 271,
2080–2088 (2011)
3. Wang, S.Y., Li, G.L., Wang, H.D., et al.: Influence of remelting treatment on rolling
contact fatigue performance of NiCrBSi coating. Trans. Mater. Heat Treat. 32(11), 135–139
(2011)
4. Wang, G., Sun, D., Wang, Y., et al.: Cavitation properties of Ni-based coatings deposited by
HVAF and plasma cladding technology. Trans. Mater. Heat Treat. 28(6), 109–113 (2007)
5. Akebono, H., Komotori, J., Shimizu, M.: Effect of coating microstructure on the fatigue
properties of steel thermally sprayed with Ni-based self-fluxing alloy. Int. J. Fatigue 30, 814–
821 (2008)
6. Zhao, Z., Li, X., Li, Y., et al.: The analysis of fatigue properties and the improvement of
process for plunger with different post-fused thermal spray. Trans. Mater. Heat Treat. 33(s1),
92–95 (2012)
7. Zhao, Z., Li, Y., Li, X.: Effects of remelting time on fatigue life and wear performance of
thermal spray welding components. Trans. Mater. Heat Treat. 34(7), 169–174 (2013)
8. Zhao, Z., Li, X., Li, Y., et al.: Effect of remelting processing on fatigue properties of Ni
based PM alloy parts with thermal spraying coating. Powder Metall. Technol. 1, 3–7 (2012)
9. Li, X., Zhao, Z.: The investigation and practice of flame thermal spray technology on piston.
J. Lanzhou Polytech. Coll. 12(2), 9–11 (2005)
10. Yang, W., Wu, Y., Hong, S., et al.: Microstructure, friction and wear properties of HVOF
sprayed WC-10Co-4Cr coating. J. Mater. Eng. 46(5), 120–125 (2018)
Design of Emergency Response Control System
for Elevator Blackout
1 Introduction
The traditional rescue method requires manually turning the traction machine, which takes a long time. In order to rescue trapped passengers from the car in time, the emergency response control system came into being. When the mains power supply is normal, the elevator emergency response control system charges its internal storage battery. Once the power grid is cut off or the elevator has an electrical fault, the system first isolates the external power grid from the elevator operation control system, and then inverts the energy stored in the battery and outputs it to the frequency converter of the control cabinet, which drives the traction machine so that the car slowly levels at the nearest floor and releases the passengers. Unlike the time-consuming and labor-intensive manual operation, the elevator emergency response control system enables self-rescue after a power failure, reduces the time passengers are trapped and ensures their safety.
At present, there are two kinds of elevator power-failure emergency rescue devices on the domestic market. The first type installs an electric brake-release device: when the elevator loses power, the device opens the traction machine brake independently of the control cabinet so that the car drifts to the nearest floor, and professional rescuers then come to open the door and release the trapped passengers. The second type installs an uninterruptible power supply in the elevator control cabinet: when the elevator loses power, it supplies power to the control cabinet so that the car automatically levels at a floor and opens the door. This kind of rescue device is more convenient but not yet widely used [1] (Fig. 1).
Fig. 1. Structure of the elevator emergency response control system: charging circuit, storage battery, DC converter, three-phase inverter, brake and traction machine.
This design is an emergency response control system. The system must be able to monitor the power supply of the elevator at all times and, when the power is cut off, react in time and cooperate with the inverter of the elevator controller to drive the traction machine. During normal operation of the elevator, the emergency response control system must not interfere with the operation control system. The rated voltage of the inverter used to drive the elevator traction machine is usually 380 V AC. The controller used in this design must provide low-speed self-rescue, automatic identification of the running direction and power-failure rescue. After a power failure, the controller identifies the load condition of the car and starts the emergency power supply to slowly run the elevator to the leveling zone and open the door in the most energy-saving way, thus safely and quickly rescuing the passengers trapped in the car.
system, a phase sequence relay needs to be installed, which mainly performs phase sequence detection and phase-loss protection. When a power failure occurs, the elevator controller stops operating. In order to realize automatic leveling and release of the car, a storage battery is required to supply power to the control cabinet. While its power supply is guaranteed, an emergency operation start signal is output to the frequency converter of the control cabinet so that it operates, the traction machine is driven to move the car, and the door system is controlled to open and release the passengers. Since a battery is required during a power failure, a charging circuit and a saturation discharging circuit must be included in the circuit for normal power supply. After the emergency rescue, the emergency system needs to enter the standby state and wait for the power supply to return to normal, which requires the controller to output a rescue stop signal to stop the system [2].
In this experiment, the NICE3000new series integrated elevator controller is selected as the inverter part of the control cabinet. Its rated voltage is 380 V AC. After the emergency system outputs 380 V AC, the controller can perform low-speed self-rescue, controlling the car to slowly level at a floor and then open the door.
In addition to the safety switch circuit, the safety circuit also includes the door lock circuit. When all the door electrical interlocking switches are closed, the controller's main board receives the signal that the safety circuit is working normally. Since the safety switch of the phase sequence relay is part of the series safety switch circuit, when the elevator loses power the phase sequence relay stops working and its safety switch opens, making the safety circuit invalid. In the emergency operation stage, the safety circuit can only be completed when the safety switch of the phase sequence relay is closed, and only then can the controller control the emergency operation of the elevator. Therefore, the emergency system in this design must output not only the two-phase 380 V alternating current and the emergency operation start signal to the control cabinet inverter, but also the phase sequence short-connection signal.
The microcomputer control module takes a single-chip microcomputer as its core, and its working voltage is generally 24 V DC. When the elevator is running normally, the charging power supply module supplies 24 V DC to the power detection terminal of the microcomputer control module; after recognition by the single-chip microcomputer, the elevator power supply is confirmed to be normal. The phase sequence relay module has three input points to detect the three-phase electricity, and one of its normally open contacts is connected to the power detection terminal of the microcomputer module in series with the power supply circuit. When the elevator loses power, the phase sequence relay stops working, its normally open contact in the power supply circuit opens, the power detection terminal no longer detects 24 V DC, and the single-chip microcomputer identifies and confirms that the elevator has lost power. Therefore, the criterion by which the microcomputer control module judges whether the elevator has lost power is whether 24 V DC is detected at the power detection terminal [3] (Fig. 3).
Fig. 3. The microcomputer control module, the phase sequence relay module and the charging power supply module
The charging power supply outputs 24 V DC to the power supply detection terminal of the microcomputer control board. The MCU recognizes that the elevator power supply is normal, so the K1 and K3 terminals output no signal and the K1 and K3 coils are not energized. The K1 and K3 normally closed contacts remain closed to maintain circuit connectivity, while the K1 and K3 normally open contacts remain open, keeping the system in standby, so that no start signal or phase sequence short-connection signal is issued; the K3 normally closed contact stays closed so that the power supply control terminal continues to monitor the power supply.
The 48 V output terminal of the charging power supply charges the battery, and QF1 (the 48 V air switch) is switched on to protect against a short circuit of the inverter. When, at the end of a charging period, the battery approaches saturation, a signal is sent from the K2 terminal of the control board, the action coil of the K2 relay is energized, and the K2 single-pole double-throw switch disconnects the charging circuit so that the battery discharges through the series resistance; after a period of time the battery is charged again. Because the KM contactor coil is energized, the KM normally closed contacts at the output end of the inverter are open, isolating the power supply [4, 5].
When the elevator loses power, the coil of the KM contactor loses power, which causes the normally open contacts of KM to open and the circuit to be disconnected. The phase sequence relay stops working, its normally open contact in the power supply circuit opens, and there is no 24 V DC input at the power detection terminal. The single-chip microcomputer identifies and confirms the elevator blackout, the K1 and K3 terminals output signals, and the action coils of the K1 and K3 relays are energized. Their normally closed contacts open to isolate the system from the mains, so as to prevent a sudden restoration of mains power from actuating the KM contactor during emergency operation.
During this period, the microcomputer module is supplied with 24 V DC by the battery, and QF2 (the 24 V air switch) is closed. The K1 and K3 normally open contacts close, the battery outputs 48 V DC to the inverter, QF1 is closed, and the inverter outputs 380 V AC (the KM contactor has dropped out because its coil has lost power) to the NICE3000new frequency converter of the elevator control cabinet. The K1 normally open contacts close and output the emergency operation start signal and the phase sequence short-connection signal. In the safety circuit, the normally open contact of the phase sequence relay closes and the safety circuit works normally. The controller drives the traction machine so that the car slowly levels at the nearest floor and releases the passengers. During this period, because the K3 normally closed contacts are open, even if the phase sequence relay operates, the microcomputer module will not detect the power signal, which ensures the stability of the emergency operation.
After the rescue, the controller outputs an emergency stop signal to the MCU for identification. The MCU then de-energizes the K1 and K3 relays, and the system enters the standby state to wait for the elevator power supply to be restored.
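The control sequence described above can be summarized as a simple decision routine. The sketch below is only an illustration of that logic in Python; the signal names, the boolean interface and the returned command set are assumptions made for illustration and do not correspond to an actual NICE3000new or MCU interface.

# Hypothetical sketch of the emergency-control decision logic described above.
def emergency_controller(power_24v_detected, battery_ready, leveling_signal):
    """Return the relay/output commands for one control cycle."""
    outputs = {"K1": False, "K3": False, "start_signal": False,
               "phase_seq_short": False, "standby": True}
    if power_24v_detected:
        # Mains normal: K1/K3 stay de-energized, system remains in standby and charges.
        return outputs
    if not leveling_signal and battery_ready:
        # Blackout detected: isolate the mains, feed the inverter, start the emergency run.
        outputs.update({"K1": True, "K3": True, "start_signal": True,
                        "phase_seq_short": True, "standby": False})
    # Once the leveling (stop) signal arrives, the relays are released and standby resumes.
    return outputs

# Example: mains lost, battery charged, car not yet at a landing.
print(emergency_controller(False, True, False))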
emergency operation signals to the main board terminal X20 of the control cabinet, and at the same time provides the phase sequence short-connection signal to the phase sequence relay contacts of the safety circuit, and the elevator enters emergency operation. At the end of the emergency operation the elevator is at the leveling position; the leveling-sensor photoelectric switch X1 receives the leveling signal, which is output to the emergency system as the stop signal. The system then enters the standby state and waits for the power supply to resume (Fig. 5).
During emergency operation, 380 V AC power is output from the inverter to the elevator control cabinet. After the K1 relay acts, 24 V DC is detected at the X20 terminal of the integrated controller, contacts 11 and 14 of the phase sequence relay in the safety circuit are short-connected, and the frequency converter drives the traction machine.
5 Summary
The emergency system is used in conjunction with the frequency converter in the elevator control cabinet. When an abnormal elevator power supply is detected, the system provides 380 V AC to the controller and sends a start signal, and the elevator car slowly levels at the nearest floor and opens the door. After the operation, the controller feeds back a stop signal to the system, which then waits for the power supply to be restored. Nowadays, most of the converters in elevator control cabinets are integrated controllers, which are available in many varieties, are powerful, and provide the low-speed self-rescue function.
References
1. Wang, G., Zhang, G., Yang, R., et al.: Robust low-cost control scheme of direct-drive gearless
traction machine for elevators without a weight transducer. IEEE Trans. Ind. Appl. 48(3),
996–1005 (2012)
2. Weiss, G., Felicito, N.R., Kaykaty, M., et al.: Weight-transducerless starting torque
compensation of gearless permanent-magnet traction machine for direct-drive elevators.
IEEE Trans. Ind. Electron. 61(9), 4594–4604 (2014)
3. Rashad, E.M., Radwan, T.S., Rahman, M.A.: A maximum torque per ampere vector control
strategy for synchronous reluctance motors considering saturation and iron losses. In:
Conference Record of the Industry Applications Conference, 2004. IAS Meeting, vol. 4,
pp. 2411–2417. IEEE (2004)
4. Wang, A., Wang, Q., Jiang, W.: A novel double-loop vector control strategy for PMSMs
based on kinetic energy feedback. J. Power Electron. 15(5), 1256–1263 (2015)
5. Zhang, Y.B., Pi, Y.G.: Fractional order controller for PMSM speed servo system based on
Bode’s ideal transfer function. 173(6), 110–117 (2014)
Effect of Cerium on Microstructure
and Friction of MoS2 Coating
Abstract. MoS2 coatings with different cerium contents (0.5%, 1%, 2%, 4%) were prepared on the titanium alloy Ti811 by a mixing method. The surface microstructure and metallographic structure of the MoS2 coatings were characterized by scanning electron microscopy (SEM), and the friction coefficients and wear textures of the four kinds of MoS2 coating were analyzed. The results show that an increase of the cerium content can refine the microstructure of the MoS2 coating, thereby inhibiting the growth of the crystal grains, improving the wear resistance and lowering the friction coefficient. The minimum friction coefficient is 0.055 at 1% Ce. With a further increase of the Ce content, Ce agglomerates in the MoS2 coating, which degrades its friction-reducing property and wear resistance.
1 Introduction
Tribology [1] is an applied discipline developed in the 1940s that focuses on friction, wear and lubrication caused by relative motion between contact surfaces. Fretting wear can not only cause seizure, looseness and increased noise between contact surfaces but may also cause cracks on the surface of the part, which greatly reduces the service life of the device [2].
As a convenient surface treatment technology, bonded solid lubricant coatings (such as MoS2) can reduce the friction and wear of parts [3–5]. Alnabulsi, Luo et al. studied the fretting friction properties of MoS2 and analyzed the friction mechanism [6, 7]. Zhu et al. tested the frictional properties of the MoS2 coating surface under tangential and radial loads and compared the damage characteristics of the coating under the two different loads [8].
In this paper, the MoS2 coatings with different Cerium contents (0.5%, 1%, 2%,
4%) were prepared based on the titanium alloy (Ti811) by mixing method. The surface
2.1 Matrix
The matrix material is the titanium alloy Ti811, which is widely used in aerospace. It is characterized by good thermal stability, strong corrosion resistance and high specific strength, and is mainly chosen for higher-performance aircraft engine compressor components. In this study, the Ti811 samples were machined to Φ22 mm × 7 mm, given a double annealing treatment, and mechanically polished to a surface roughness of Ra = 0.6 μm. The chemical composition and properties of Ti811 are shown in Tables 1 and 2.
80%, the spray pressure is 0.2–0.4 MPa, the spray angle is 70°–90°, and the thickness is 8–15 μm.
(d) Place the sample at room temperature for 1–2 h, then put it into the oven and heat it gradually. The curing schedule is 130 °C for half an hour, then 220 °C for 0.5 h to 1 h.
The pictures of the four coating samples are shown in Fig. 1.
3 Experiment
The fretting friction and wear test is carried out on self-made fretting friction and wear
tester. The schematic diagram of the fretting friction test device is shown in Fig. 2. The
motor drives the table to swing, and the three-axis micro force sensor is mounted on the
upper test ball to measure the friction during the sliding process (Fig. 3).
In this test, the test parameters are: motor speed 60 r/min, test load 50 N, reciprocating amplitude 200 μm and test period 2000 cycles; each group of tests was repeated 3 times and the average value was taken.
will gradually stabilize and fluctuate within a certain fixed range. The friction coefficient of the 0.5% Ce coating is the highest, 0.062–0.064, and that of the 1% Ce coating is the lowest, 0.056–0.058. As the content increases further, the friction coefficient does not decrease and remains basically between 0.056 and 0.064.
Combined with the microstructure, MoS2 containing Ce can reduce the friction coefficient of the surface. The friction coefficient is related to the distribution of Ce in MoS2: the more uniformly Ce is distributed inside MoS2, the lower the friction coefficient. However, with a further increase of the Ce content, Ce accumulates in MoS2, which reduces the friction-reducing characteristics. A Ce content of 1% allows Ce to be uniformly distributed in MoS2, thus yielding the lowest coefficient of friction.
Considering the microscopic wear morphologies of the four coatings, the MoS2 coating with Ce can alleviate wear. A coating with only a small amount of Ce does not show a markedly better anti-wear effect than the plain MoS2 coating. As the Ce content increases, the distribution of Ce in the MoS2 coating becomes more and more uniform, and the friction characteristics are effectively improved. As the Ce particle content increases further, Ce particles begin to build up inside the coating, forming hard particles. These hard particles fall off during the movement, increase the damage to the surface of the friction pair, and prevent a good lubricating film from forming on the surface.
5 Conclusion
(1) Cerium can improve the structure of the dry-film coating. The plain MoS2 coating has a flat particle shape; after adding 1% Ce, the coating structure is refined and uniform, with no obvious particle agglomeration.
(2) The addition of cerium to the MoS2 dry film can improve the friction and wear properties of the coating, making it more wear-resistant and friction-reducing. Comparing the fretting friction tests of dry films with different cerium contents, the coating with 1% Ce added showed the best wear resistance of the MoS2 dry-film coating.
References
1. Stachowiak, G., Batchelor, A.W.: Engineering tribology. Butterworth-Heinemann, Oxford
(2013)
2. Zhu, M., Xu, J., Zhou, Z.: Alleviating fretting damages through surface engineering design.
China Surf. Eng. 20(6), 5–10 (2007)
3. Asl, K.M., Masoudi, A., Khomamizadeh, F.J.M.S., et al.: The effect of different rare earth
elements content on microstructure, mechanical and wear behavior of Mg-Al–Zn alloy. Mater.
Sci. Eng.: A 527(7–8), 2027–2035 (2010)
4. Yan, M., Zhang, C., Sun, Z.J.A.S.S.: Study on depth-related microstructure and wear property
of rare earth nitrocarburized layer of M50NiL steel. Appl. Surf. Sci. 289, 370–377 (2014)
5. Li, B., Xie, F., Zhang, M., et al.: Study on tribological properties of Nano-MoS2 as additive in
lubricating oils. Lubr. Eng. 39(9), 91–95 (2014)
6. Alnabulsi, S., Lince, J., Paul, D., et al.: Complementary XPS and AES analysis of MoS3 solid
lubricant coatings. Microsc. Microanal. 20(S3), 2060–2061 (2014)
7. Luo, J., Zhu, M., Wang, Y., et al.: Study on rotational fretting wear of bonded MoS2 solid
lubricant coating prepared on medium carbon steel. Tribol. Int. 44(11), 1565–1570 (2011)
8. Zhu, M., Zhou, H., Chen, J., et al.: A comparative study on radial and tangential fretting
damage of molybdenum disulfide-bonded solid lubrication coating. Tribology 22(1), 14218
(2002)
A Machine Vision Method for Elevator
Braking Detection
1 Introduction
As special equipment, an elevator can quickly carry out vertical transportation, pro-
viding a great convenience for transportation. Elevator has a safety system to ensure
safe operation, and the brake is an important device of the elevator safety system. It can
effectively prevent and control the occurrence of accidents such as slipping and falling,
which greatly improves the safety of elevator operation and enables the elevator to stop
at exactly the position set by the program.
Braking force is an important technical index of a brake, whose structural schematic diagram is shown in Fig. 1. The field measurement of braking force basically relies on manual measurement. The general practice is to load the car with a certain load; after the elevator runs smoothly, the power supply of the traction machine is cut off and the brake engages at the same time; the distance the car travels until it stops completely is the braking distance. The manual measurement procedure is tedious and its accuracy is poor. This paper presents a machine vision method for elevator braking force detection, which can automatically calculate the braking distance, simplify the braking detection process and improve the detection accuracy.
As shown in Fig. 1, the braking distance is the distance the car runs during the test, which can be calculated from the diameter of the traction wheel and its rotation angle during the test. Therefore, the main task of this paper is to propose a machine vision method to detect the rotation angle of the traction wheel during the test. After obtaining the rotation angle of the traction wheel, the length of wire
rope movement can be calculated using the diameter of the traction wheel, and that is the braking distance. Calculating the rotation angle between two images is an image registration problem. Image registration refers to the geometric alignment of two or more images of the same scene taken at different times, from different fields of view or with different imaging modes.
Image registration technology is widely used in medical image processing, remote sensing image processing and computer vision. The main image registration methods are methods based on image grayscale [1], e.g. mutual information, methods based on the fast Fourier transform [2], and methods based on image features, e.g. feature points and edge detection [3]. Ofverstedt et al. [4] proposed an affine registration framework based on a combination of intensity and spatial information, which is symmetric and requires no intensity interpolation; this method shows stronger robustness and higher accuracy than commonly used similarity measures. Zhu et al. [5] proposed a robust non-rigid feature matching method based on geometric constraints, in which the non-rigid feature matching is transformed into a maximum likelihood (ML) estimation problem; the experimental results show that the method performs well. Alahyane et al. [6] proposed a new fluid image registration method based on the lattice Boltzmann method (LBM). Chen et al. [7] proposed a multistage optimization method based on a two-step Gauss-Newton method to minimize the continuously differentiable functions obtained by discretizing the image registration model. Wu et al. [8] proposed an aerial image registration algorithm based on a Gaussian mixture model (GMM), established an image registration model based on the GMM, and solved the transformation matrix between two aerial images.
This paper presents a method for detecting the rotation angle of the traction wheel using the Fourier transform. The paper is organized as follows: Sect. 1 is the introduction, Sect. 2 introduces the basic theory used in this paper, Sect. 3 verifies the proposed method with an example, and the conclusions are given in Sect. 4.
2 Basic Theory
The Fourier transform is employed to detect the rotation angle between two images, where $\mathrm{Re}[\cdot]$ and $\mathrm{Im}[\cdot]$ denote the real and imaginary parts of the function, and $r$ denotes the distance from a point on a radius of the power spectrum to the circle center, $r \in [0, R]$. By superimposing the pixel values of the points on the radius at each angle $\theta_0$, the main direction of each image can be determined, and the rotation angle between the two images is

$\theta = \theta_2 - \theta_1 \qquad (5)$

where $\theta_1$ and $\theta_2$ are the main directions of the two images.
The rotation angle of Fig. 2 calculated using the proposed method is 30.2755°, while the actual rotation angle is 30°. It can be seen that the error of the proposed machine vision method is very small and can meet the requirements of many engineering calculations.
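To make the procedure concrete, the following sketch (Python, using NumPy only) illustrates one way to implement the spectrum-based main-direction search described above: it sums the magnitude-spectrum values along radii over a range of candidate angles, takes the angle with the largest sum as the main direction of each image, and returns the difference of the two main directions as the rotation angle. The function names, image arrays and sampling resolution are assumptions for illustration, not the authors' implementation.

import numpy as np

def main_direction(img, n_angles=360, n_radii=200):
    # Centered magnitude spectrum of the image.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = np.array(spec.shape) // 2
    R = min(cy, cx) - 1
    radii = np.linspace(1, R, n_radii)
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sums = []
    for a in np.deg2rad(angles):
        # Superimpose the pixel values of the points on the radius at this angle.
        xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, spec.shape[1] - 1)
        ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, spec.shape[0] - 1)
        sums.append(spec[ys, xs].sum())
    return angles[int(np.argmax(sums))]

def rotation_angle(img1, img2):
    # Rotation angle = difference of the main directions of the two spectra.
    return main_direction(img2) - main_direction(img1)

Calling rotation_angle(img1, img2) on two grayscale frames returns the estimated rotation in degrees.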
Figure 4 shows the first and last frames of a video of an elevator brake test, from the beginning of braking to a complete stop. Due to space limitations, not all frames are listed; only the rotation angle between the first frame and the last frame is calculated. It should be noted that the traction wheel rotates less than one full revolution during this period. For the convenience of measuring the traction wheel rotation angle and determining the main direction of the images, a black reflective strip was attached to the traction wheel during the experiment.
In order to reduce the influence of the image background and improve the calculation accuracy of the rotation angle, only the central part of the images is captured, as shown in Fig. 5.
Using the proposed method, the Fourier transforms of the images are computed and the spectrum diagrams are drawn, as shown in Fig. 6.
As shown in Fig. 6, there is a line of higher brightness through the center of the image, which is the main direction of the image. The calculated rotation angle of the traction wheel is 48.2197°, while the actual measured rotation angle is 47.5°, so the error is 0.7197°. When the traction wheel diameter is known, Formula (6) can be used to calculate the braking distance:

$L = \pi\theta D/360 \qquad (6)$

where $D$ is the traction wheel diameter and $\theta$ is the rotation angle in degrees.
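As a quick numerical illustration of Formula (6), for an assumed traction wheel diameter of D = 0.6 m (an illustrative value, not taken from the paper), the measured rotation angle gives

$L = \pi \times 48.2197 \times 0.6 / 360 \approx 0.25\ \mathrm{m}.$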
4 Conclusion
An elevator braking force detection method based on machine vision is presented in this paper. The Fourier transform is used to transform the image from the spatial domain to the frequency domain; the power spectrum of the image is then calculated, the main direction of the image is determined, and finally the rotation angle between two images is calculated. The proposed method can reduce the workload of elevator braking force detection and improve the detection accuracy.
References
1. Zitová, B., Flusser, J.: Image registration methods: a survey. Image Vis. Comput. 21(11),
977–1000 (2003)
2. Reddy, B.S., Chatterji, B.N.: An FFT-based technique for translation, rotation, and scale-
invariant image registration. IEEE Trans. Image Process. 5(8), 1266–1271 (1996)
3. Wang, K., Shi, T., Liao, G.: Image registration using a point-line duality based line matching
method. J. Vis. Commun. Image Represent. 24(5), 615–626 (2013)
4. Ofverstedt, J., Lindblad, J., Sladoje, N.: Fast and robust symmetric image registration based
on distances combining intensity and spatial information. IEEE Trans. Image Process. 28(7),
3584–3597 (2019)
5. Zhu, H., Zou, K., Li, Y., et al.: Robust non-rigid feature matching for image registration using
geometry preserving. Sensors 19(12), 1746–1752 (2019)
6. Alahyane, M., Hakim, A., Laghrib, A., et al.: A lattice Boltzmann method applied to the fluid
image registration. Appl. Math. Comput. 349, 421–438 (2019)
7. Chen, K., Grapiglia, G.N., Yuan, J., et al.: Improved optimization methods for image
registration problems. Numer. Algorithms 80(2), 305–336 (2019)
8. Wu, C., Wang, Y., Karimi, H.R.: A robust aerial image registration method using Gaussian
mixture models. Neurocomputing 144, 546–552 (2014)
Remote Monitoring and Fault Diagnosis
System and Method for Traction Elevator
1 Introduction
With the rapid growth of the number of elevators, high loads, heavy traffic and long service cycles have become common, and the number of old elevators is surging. Due to the expansion of the number and scope of elevators in use, faults, which have become an important hidden danger in production safety, have entered a stage of frequent occurrence: the time between faults has been significantly shortened. According to statistics for elevators that have been running for 5–10 years, an elevator experiences an average of 36.5 mechanical and electrical failures every year and 3.3 accidents that cause great harm to equipment and personal safety, such as over-travel to the top and passenger entrapment. At present, there are nearly 700 elevator manufacturers in China, and thousands of elevator installation, transformation and maintenance units with hundreds of thousands of employees. In order to gain a share of the limited market, practices such as vicious maintenance competition, bad money driving out good, inadequate maintenance work and grabbing of resources have appeared, resulting in a chaotic maintenance market and poor-quality maintenance, which directly affect the safe operation of elevators. These factors have a great impact on elevator safety, and elevator accidents and malfunctions occur from time to time.
The traditional approach to elevator safety is as follows: first, an elevator monitoring center is set up and computer control technology and elevator remote monitoring based on network communication technology are applied; then sensors are used to collect elevator operation data, and abnormal data are analyzed by a microprocessor. Such a system can monitor the elevators in the network 24 h a day without interruption, analyze and record the running condition of each elevator in real time, and calculate the failure rate automatically from the fault records. Using GPRS network transmission, public telephone line transmission, LAN transmission and RS-485 communication, it forms a comprehensive elevator management platform that can realize elevator fault alarms, rescue of trapped people, daily management, quality assessment, hidden danger prevention and other functions. This approach solves the problem of knowing when, where and what happened and how the accident developed as soon as it occurs, but it is a treatment applied after the fault has occurred and cannot reduce the occurrence of faults.
Related failure: the equipment parameters are intermittently or permanently beyond the range specified in the technical standards.
Unrelated failure: due to the failure of the test instrument or other reasons, the collected data exceed the specified value.
Fatal failure: a failure that can lead to significant loss of personal safety and property. For such failures, the cause is identified and the change of the performance deviation is predicted; according to the failure trend and the predicted equipment life, a benchmark is established so that a corresponding warning can be issued.
By means of vibration detection, modal analysis is carried out: the physical coordinates in the system of linear constant-coefficient differential equations are transformed into modal coordinates, from which the modal parameters of the system are derived.
The fundamental equation of modal analysis is

$M\ddot{x} + C\dot{x} + Kx = f(t)$

where $M$, $C$ and $K$ are the mass, damping and stiffness matrices of the vibration system, and $x$, $\dot{x}$ and $\ddot{x}$ are the displacement, velocity and acceleration vectors. For an undamped system, the free-vibration equation is

$M\ddot{x} + Kx = 0$
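To make the step from these equations to the modal parameters explicit (a standard result of modal-analysis theory, added here for completeness rather than taken from the paper), assuming harmonic free vibration $x = \varphi\sin(\omega t)$ and substituting it into the undamped equation gives the generalized eigenvalue problem

$\left(K - \omega^{2} M\right)\varphi = 0,$

whose eigenvalues $\omega_i^{2}$ and eigenvectors $\varphi_i$ yield the natural frequencies and mode shapes, i.e. the modal parameters of the system.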
The traction elevator operation monitoring system can carry out real-time monitoring of the elevator status and issue fault warnings or alarms, effectively ensuring the safety of elevator operation and reducing the possibility of elevator failure. The traction elevator operation monitoring method is as follows. In the remote monitoring and fault diagnosis system for the traction elevator, temperature sensors are installed on the traction wheel and the motor brake, acceleration sensors are installed on the lift car and the traction machine, a photoelectric sensor is installed on the car door, and the microprocessor and the elevator monitoring center are installed on the top of the elevator. The sensors are connected to the elevator microprocessor, and the microprocessor is connected to the elevator monitoring center. The microprocessor comprises a central processing unit, a signal acquisition module, a power module and a storage module. The signal acquisition module receives the signal data transmitted by the acceleration sensors, the photoelectric sensor and the temperature sensors and is connected to the central processing unit, which analyzes the signal data and determines whether the parameters of each sensor are within the specified range; the power supply module and the storage module are each connected to the central processing unit (Fig. 1).
The central processing unit includes a first discriminator. After the motor and the motor brake are started, when the photoelectric sensor has a signal, the first discriminator judges whether the acceleration signal data from the acceleration sensor exceed the specified range; if so, the central processing unit sends an alarm signal. The central processing unit also includes a second discriminator. After the motor and the motor brake are started, when there is a signal from the photoelectric sensor, the system calculates the real-time speed and position of the elevator car from the acceleration sensor data and determines whether the slip between the traction wheel and the traction rope exceeds the specified range. If the data exceed the specified range, the central processing unit sends an alarm signal; otherwise, the signal data are stored in the corresponding storage module. The second discriminator is also used when the photoelectric sensor has no signal but the acceleration sensor produces signal data: in that case the elevator's safety gear is activated and the central processing unit sends an alarm signal. The central processing unit also includes a third discriminator, which determines whether the temperature exceeds the specified range according to the temperature data collected by the temperature sensors. The acceleration sensor is a MEMS triaxial acceleration sensor. The monitoring center includes computers that receive data from the microprocessors, and the data are displayed graphically on the computer screen.
The temperature sensors are installed on the traction wheel and the motor brake, the acceleration sensors on the car and the traction machine, and the photoelectric sensor on the car door; the elevator is also equipped with a microprocessor used for the analysis of abnormal data, so as to realize predictive early fault diagnosis. The central processing unit of the elevator microprocessor is connected to each sensor and collects the temperature and acceleration data. It then determines whether the data are within the specified ranges, thus monitoring and analyzing the health of the elevator, and the microprocessor sends the monitoring and analysis results to the monitoring center.
When the motor and the motor brake are started and the photoelectric sensor has a signal, the first discriminator of the central processing unit sends an alarm signal if the acceleration signal data from the acceleration sensor exceed the specified range. The second discriminator of the central processing unit calculates the instantaneous speed and position of the elevator car from the acceleration signal data and determines whether the slip between the traction wheel and the traction rope exceeds the specified range; if it does, the central processing unit sends an alarm signal, otherwise the signal data are stored in the storage module. When the photoelectric sensor has no signal but the acceleration sensor produces signal data, the elevator's safety gear is activated and the central processing unit sends an alarm signal at the same time. The third discriminator of the central processing unit determines whether the temperature data are outside the specified temperature range (Fig. 2).
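The three discriminators described above amount to simple threshold checks on the acquired signals, with the car speed and position obtained by numerically integrating the acceleration. The sketch below illustrates this logic in Python under assumed thresholds and signal names; it is not the authors' implementation, and where the traction-wheel travel comes from is left as an assumed input.

import numpy as np

# Assumed limits for illustration only.
ACC_LIMIT = 1.5        # m/s^2, allowed car acceleration
SLIP_LIMIT = 0.05      # m, allowed wheel/rope slip
TEMP_LIMIT = 80.0      # deg C, allowed brake/traction-wheel temperature

def check_cycle(acc, dt, door_signal, wheel_travel, temp):
    """Return a list of alarms for one monitoring cycle."""
    alarms = []
    # First discriminator: acceleration range check while the car is running.
    if door_signal and np.max(np.abs(acc)) > ACC_LIMIT:
        alarms.append("acceleration out of range")
    # Second discriminator: integrate acceleration to speed and position,
    # then compare the car travel with the traction-wheel travel (slip check).
    speed = np.cumsum(acc) * dt
    car_travel = np.sum(speed) * dt
    if door_signal and abs(car_travel - wheel_travel) > SLIP_LIMIT:
        alarms.append("traction slip out of range")
    # Car moves although the door photoelectric sensor reports no signal.
    if not door_signal and np.max(np.abs(acc)) > 0.1:
        alarms.append("unintended car movement: trigger safety gear")
    # Third discriminator: temperature range check.
    if temp > TEMP_LIMIT:
        alarms.append("temperature out of range")
    return alarms

print(check_cycle(np.array([0.2, 0.5, 0.3]), 0.01, True, 0.0001, 65.0))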
4 Conclusion
Soil Resistance Computation and Discrete
Element Simulation Model of Subsoiler
Prototype Parts
Abstract. Subsoiling can break the plow pan, thicken the tillage layer and increase crop yield. However, subsoiling faces the problems of large tillage resistance and serious wear of the soil-engaging parts, which greatly increase its cost. In order to reduce the tillage resistance, the active lubrication method is selected for study. According to the structure of a curved subsoiler shank, a simple sample is designed and machined, and a theoretical method for calculating its working resistance is proposed. The discrete element software EDEM is used to simulate the working condition of the sample, and the sample is tested in an indoor soil bin and in the field. By comparing the simulation results with the real tests, the correctness of the theoretical calculation and the simulation model is verified, and the drag reduction effect of the active lubrication operation mode is demonstrated.
1 Introduction
In order to study the drag reduction effect of the active lubrication mode, a sample was designed and machined in this paper. A theoretical analysis of the sample and an EDEM discrete element simulation model were established, and their correctness was verified by experiments, which also demonstrated the drag reduction effect of the active lubrication mode.
In this paper, the shank of the curved subsoiling shovel is selected as the prototype of the sample. The prototype is optimized by a bionic method: by imitating the corrugated shape of the earthworm body surface and the ripples on the back of the head and body of the scorpion, three back holes were designed on the side of the sample, arranged vertically in a straight line [2]. This arrangement of the back holes is the one found experimentally by Kou to give the best drag reduction effect. In this paper, an internal pipeline is designed in the sample, and orifices are placed in the foremost back hole in the forward direction of the sample, as shown in Fig. 1. The orifices are evenly distributed in the back hole and connected to the pipe inside the sample.
3 Force Analysis
In this paper, based on previous experience and calculation methods, combined with Kostritsyn's calculation method, a theoretical analysis of the force on the sample is carried out. When Kostritsyn studied the cutting force of soil, he divided subsoiler-type cutters into two basic shapes: the front edge with a sharp angle is called the wedge edge, and the parallel edge is called the side edge [3]. The stress state of the sample is analyzed, and its forces are shown in Fig. 4.
Combined with Jingfeng Bai's calculation method for the subsoiler, the formulas for calculating the resistance of the sample can be obtained as follows [4]:

$F_1 = 2N_1\sin\dfrac{\alpha}{2} + \mu N_1\cos\dfrac{\alpha}{2} + \mu N_2 \qquad (1)$

$N_1 = K_{el}\,\dfrac{2S_1 L_0\cos\!\left(\dfrac{\alpha}{2}+\theta\right)}{d'} \qquad (2)$

$N_2 = K_{el}\,S_2\cos\dfrac{\alpha}{2} \qquad (3)$
where
F1—total resistance of the sample (N)
T—positive force on the blade (N)
P—drag component acting in the normal direction (N)
N1—normal force of the soil on the wedge edge (N)
N2—normal force of the soil on the side edge (N)
α—wedge edge angle of the sample (50°)
μ—coefficient of friction between the sample and the soil (0.6)
Kel—stress due to elastic deformation of the soil (4500 N/m²)
S1—wedge edge area of the sample (0.003542 m²)
S2—side edge area of the sample (0.0106 m²)
d′—sample width (0.015 m)
h—working depth of the sample (0.15 m)
L0—average deformation of the soil
θ—friction angle between the soil and metal (40°).
When the subsoiler works, taking into account the complexity of the soil environment and the influence of the operating speed on the resistance, the resistance is considered to include an additional term containing an undetermined coefficient, calculated as

$F_0 = k\,h\,d'\,v^2 \qquad (4)$
where
k—undetermined coefficient
v—operating speed (0.5 m/s).
According to previous tests, the undetermined coefficient is taken as 120, and the average deformation of the soil is taken as L0 = 0.0089 m.
By combining formulas (1) to (4) and substituting the corresponding data, the total force on the sample is obtained:

$F_2 = F_1 + F_0 = 911.55\ \mathrm{N} \qquad (5)$

In summary, the total resistance of the sample is 911.55 N, and its direction is opposite to the direction in which the sample advances.
4.3 Simulation
In the Geometries branch of the pre-processing module of the EDEM2018 software, the sample geometry in IGS format is imported into EDEM, and the working depth of the sample is set to 150 mm by adjusting its position relative to the soil bin. The forward speed of the subsoiling shovel is set to 0.5 m/s. The initial state of the soil model simulation in EDEM is shown in Fig. 5.
In the EDEM solver module setup, following previous experience and the related literature, the fixed time step is set to 33% of the Rayleigh time step and the data are saved every 0.0001 s. In the simulation, the lubricated and unlubricated operations are represented by modifying the coefficient of rolling friction between the soil and the sample: the larger friction coefficient corresponds to operation without lubrication, and the smaller friction coefficient corresponds to operation with the lubricating medium applied to the sample. After several simulations, the resistance data in the steady state after the sample has entered the soil were analyzed, and the working resistance of the sample obtained after processing the data is shown in Table 2.
It can be seen from the table that when the rolling friction coefficient is 0.6, the working resistance of the sample is 961.36 N, which corresponds to the resistance without lubrication; when the coefficient is 0.2, the working resistance is 830.59 N, which corresponds to the resistance with lubrication.
5 Test
In order to carry out the sample experiments conveniently and quickly, and to adjust the experimental scheme in real time, a simple indoor soil bin test bench was built. The design drawing is shown in Fig. 7, the test bench in Fig. 8, and the measurement of the parameters in Fig. 9.
After constructing the test bench, installing the rack and filling the soil bin with soil, the physical parameters of several fields in the Huanghuaihai area were collected. Water was added and the soil in the bin was compacted so that its physical property parameters were consistent with the field parameters.
Fig. 10. Indoor soil bin. Fig. 11. South school test field. Fig. 12. Huanghuaihai corn innovation center.
The basic parameters of the soil bin and the field during the test are shown in Table 3.
Fig. 13. Laboratory test result. Fig. 14. South school test result. Fig. 15. Test result of Huanghuaihai corn innovation center.
Figures 13, 14 and 15 show the test results of the multi-site test. Figure 13 shows
the results of the lubrication drag reduction test conducted on the indoor soil test bench.
The average resistance of liquid lubrication test is 750.73 N, and the average working
resistance is 889.55 N under the condition of no liquid lubrication. The drag reduction
rate is 15.61%. Figure 14 shows the test in the school test field. During the test, the soil
moisture content is small and the soil is dry. In the case of liquid lubrication, the
average operating resistance of the sample is 574.49 N. In the case of no liquid
lubrication, the average operating resistance is 778.45 N. The drag reduction rate is
26.2%. Figure 15 shows the sample test in Huanghuaihai corn innovation center. The
average resistance of liquid lubrication test is 835.11 N, and the average working
resistance is 924.40 N under the condition of no liquid lubrication, and the drag
reduction rate is 9.63%. At this time, the center’s field is in a state shortly after
irrigation, and the soil moisture content is high.
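The drag reduction rates quoted above follow directly from the measured mean resistances. Writing the relation explicitly (the formula is standard and is added here only for clarity),

$\eta = \dfrac{F_{\mathrm{dry}} - F_{\mathrm{lub}}}{F_{\mathrm{dry}}} \times 100\%, \qquad \eta_{\mathrm{indoor}} = \dfrac{889.55 - 750.73}{889.55} \times 100\% \approx 15.61\%.$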
6 Discussion
According to the test data and the comparison with the theoretical calculation, the error between the theoretical calculation and the indoor test results is 2.4%, and the error with respect to the Huanghuaihai field test results is 1.4%. Therefore, the theoretical calculation method is considered feasible. Meanwhile, comparing the simulation with the experiments, the error of the simulated resistance is also within the acceptable range. Therefore, the simulation model is considered correct, and it is feasible to adjust the rolling friction coefficient in the simulation to represent the liquid-lubricated condition.
7 Conclusions
In this paper, the active lubrication drag reduction method was chosen for research and a sample was machined. A theoretical method for calculating the sample resistance was proposed, and a simulation model that can reproduce the working resistance of the sample was established. The correctness of the theoretical calculation method and the simulation model was verified by experiments, which also demonstrated the drag reduction effect of the active lubrication operation mode.
Acknowledgements. This work was supported by the National Key R&D Program of China (2017YFD0701103-3) and the Key Research and Development Plan of Shandong Province (2018GNC112017, 2017GNC12108).
References
1. Zhang, J., Zhai, J., Ma, Y.: Design and experiment of biomimetic drag reducing deep
loosening shovel. J. Agric. Mach. 45(04), 141–145 (2014)
2. Kou, B.: Resistance reduction by bionic coupling of earthworm lubrication function. Master’s
thesis, Jilin University, pp. 19–24 (2011)
3. Gill, W.R., Vanden Berg, G.E.: Soil Dynamics in Tillage and Traction. China Agricultural
Machinery Publishing House, Beijing (1983)
4. Bai, J.: Analysis of Anti-Drag Performance for Vibrating Bionic Subsoiler. Northwest
Agriculture and Forestry University, Xianyang (2015)
5. Deng, J., Hu, J., Li, Q., Li, H., Yu, T.: Simulation and experimental study of deep loosening
shovel based on EDEM discrete element method. Chin. J. Agric. Mech. 37(04), 14–18 (2016)
6. Deng, J.: Simulation and Experimental Study of the Subsoiler Tillage Resistance Based on
Discrete Element Method. Heilongjiang Bayi Agricultural University, Daqing (2015)
7. Hu, J.: Simulation analysis of seed-filling performance of magnetic plate seed-metering device
by discrete element method. Trans. Chin. Soc. Agric. Mach. 45(2), 94–98 (2014)
8. Wang, X.: Calibration method of soil contact characteristic parameters based on DEM theory.
J. Agric. Mach. 48(12), 78–85 (2017)
9. Liu, X., Du, S., Yuan, J., Li, Y., Zou, L.: Analysis and experiment on selective harvesting
mechanical end-effector of white asparagus. Trans. Chin. Soc. Agric. Mach. 49(04), 110–120
(2018)
Simulation Analysis of Soil Resistance of White
Asparagus Harvesting End Actuator Baffle
Parts Based on Discrete Element Method
Abstract. In this paper, the baffle in the end effector for white asparagus harvesting is taken as the research object, and five different baffles were designed. The discrete element method was used to analyze the resistance of the different baffles when entering the soil. The simulation results are as follows. The resistance of the five baffles depends on the penetration depth and speed and is a quadratic function of these two factors. When the penetration depth of the baffle is less than 6 cm and the penetration speed is 0.2 m/s, the resistance of the inclined cylindrical baffle is the smallest; at speeds of 0.4 m/s and 0.6 m/s, the rectangular inclined sheet baffle has the least resistance. When the penetration depth is greater than 6 cm, the resistance of the triangular plate baffle is the smallest regardless of the speed. This study can provide a theoretical basis for the optimization of the structural parameters of white asparagus harvesting end effectors.
1 Introduction
White asparagus is a perennial herb with high nutritional value and anti-cancer health effects [1]. China is a major producer of white asparagus, but the harvest is still mainly done by manual picking, and there is no machinery for harvesting white asparagus in China. The harvest time of white asparagus is morning or evening and is relatively concentrated, so the manual harvesting workload is large and the efficiency is low. The requirement for high-efficiency, low-damage harvesting of white asparagus has become the bottleneck restricting the development of China's asparagus industry [2].
The mechanical harvesting of white asparagus can be divided into the following steps: the harvesting mechanism is inserted into the soil, the root of the white asparagus is cut, and the whole white asparagus is taken out [3]. The harvesting machine has a special baffle or clamping structure that prevents the white asparagus from falling during harvesting. These structures increase the resistance of the harvesting mechanism when it enters the soil, thereby increasing the energy consumption.
EDEM is a software package that uses discrete element technology to simulate and analyze the interaction of particles and particle clusters. It builds up macroscopic objects from microscopic particles and assigns mechanical properties to the contacts between the particles. It adopts an advanced DEM algorithm and can simulate the particle assemblies reliably. EDEM has numerous applications in industry and agriculture, such as the study of cohesive and non-cohesive soils, crop harvesting and screening during crop harvesting, and the design and optimization of various knives and shovels [4–6].
In this paper, the five baffle structures in the self-designed white asparagus harvesting end effector are analyzed by the discrete element method. Through simulation, the variation of the resistance of the different baffles with the speed and depth at which the baffle enters the soil is obtained, and the baffle that experiences the least soil resistance is ultimately identified. This study can provide a theoretical basis for the structural parameter optimization of white asparagus harvesting end effectors.
In this paper, five kinds of baffle structure models are designed with Solidworks. The
various models are shown in Fig. 1, and are represented by A, B, C, D, and E,
respectively. All baffles are attached to the same shelf. The A baffle is composed of two triangular hollow sheets. The B baffle is welded from two stainless steel cylinders at the same angle as that of the A baffle. The C baffle is formed by bending two stainless steel cylinders. The three baffles A, B and C have the same length in the horizontal direction. The D baffle and the E baffle are each welded from a stainless steel plate and a cylinder at the same angle. The width of the steel plate in the A and D baffles is 2 mm and the thickness is 1 mm. The stainless steel cylinders in the B, C and E baffles are all 2 mm in diameter. The distance between the uppermost and lowermost portions of the five baffles in the vertical direction is the same. In the following sections, the five types of baffles are denoted by their corresponding uppercase letters.
Fig. 1. The five baffle structure models A, B, C, D and E.
In EDEM, the particle–particle interaction forces, the particle–boundary interaction forces and the forces acting on the particles themselves are generally analyzed using different contact models [7]. In this study, the 'Hertz-Mindlin with bonding' model in EDEM 2018 was used to set up the soil model for the simulation. The parameters that need to be set for this model are the normal and shear stiffness per unit area, the critical normal and shear stress, and the bonded-disk radius [8]. The model can be used to simulate bonding and fracture problems in which the bonds between particles are broken by external forces, and it is used in this paper to simulate soil bonding and fracture. The simulation parameters of the model are shown in Table 1.
The particle radius is 1.25 mm and the particle size ratio fluctuates between 0.8 and
1.2. The parameters of the contact model are shown in Table 2. The speed of the baffle
is set to 0.2 m/s, 0.4 m/s and 0.6 m/s, and the direction is set to the direction of gravity
acceleration. The depth of the baffle into the soil is 12 cm, and the simulation time step is set to 30% of the Rayleigh time step.
The discrete element model in the simulation process is shown in Fig. 2.
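As an illustration of the time-step setting, the sketch below computes the Rayleigh time step for the stated particle radius and then takes 30% of it, following common DEM practice. The soil density, shear modulus and Poisson's ratio used here are placeholder assumptions, since the values of Tables 1 and 2 are not reproduced in this text.

```python
import math

# Assumed soil-particle properties (placeholders; the actual values are given
# in Tables 1 and 2 of the paper and are not reproduced here).
particle_radius = 1.25e-3   # m, as stated in the text
density = 1850.0            # kg/m^3, assumed
shear_modulus = 1.0e6       # Pa, assumed
poisson_ratio = 0.3         # dimensionless, assumed

# Rayleigh critical time step for a DEM simulation.
t_rayleigh = (math.pi * particle_radius * math.sqrt(density / shear_modulus)
              / (0.1631 * poisson_ratio + 0.8766))

# The simulation time step is specified as a percentage of the Rayleigh step.
time_step = 0.30 * t_rayleigh
print(f"Rayleigh time step: {t_rayleigh:.3e} s, simulation time step: {time_step:.3e} s")
```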
Fig. 2. Discrete element model of the five baffles (A–E) in the simulation process.
(Figure: resistance force (N) of baffles A–E versus time (s) at an entry speed of 0.2 m/s.)
When the baffles have just entered the soil, the forces on the three baffles A, B and C are approximately equal and significantly larger than the resistance of the D and E baffles. When the baffles are half-way into the soil, the force on the B baffle increases significantly, reaching 13 N. The other four baffles are subjected to forces between 10 N and 11 N, and the difference in resistance does not exceed 1 N. Therefore, when the speed is 0.2 m/s, the D and E baffles can be used to reduce the resistance of the end effector entering the soil.
(Figure: resistance force (N) of baffles A–E versus time (s) at an entry speed of 0.4 m/s.)
In the first half of the simulation, the B and C baffles are subjected to the largest forces, the force on the A baffle is intermediate, and the D and E baffles are less stressed. In the second half of the simulation, the force on the B baffle increases significantly, and the resistance of the D and E baffles exceeds that of the A baffle within 0.2 s. The B and E baffles are subjected to a maximum force of 17 N, whereas the A baffle is subjected to the least force, about 14 N. Therefore, when the speed is 0.4 m/s, the A baffle can be selected.
(Figure: resistance force (N) of baffles A–E versus time (s) at an entry speed of 0.6 m/s.)
When the speed is 0.6 m/s, the overall trend of the force on each baffle is the same as at 0.4 m/s, but the spacing between the force curves is clearly larger than at 0.4 m/s, and this trend becomes more pronounced as the depth into the soil increases. The force on the B baffle reaches 23.5 N, while the force on the A baffle is only 18 N. Analysis of the force curves at the three speeds shows that when the depth of the baffle entering the soil is less than 6 cm, the resistance of the D and E baffles is approximately the same and is always smaller than the resistance of the other baffles.
(Figure: resistance force (N) versus soil depth (cm) for baffles A–E at speeds of 0.2, 0.4 and 0.6 m/s.)
When the depth of the baffle in the soil is greater than 6 cm, the resistance of the B baffle increases rapidly with increasing speed. The maximum resistance of the B baffle is 24 N, while that of the other baffles is about 20 N. When the soil depth is between 6 cm and 12 cm, the resistance of the remaining four baffles does not differ by more than 1 N, but the resistance of the different baffles changes with different trends. As the speed and the depth of entry into the soil increase, the slope of the force curves of the C, D and E baffles becomes larger and larger, whereas the force on the A baffle increases more slowly. The preceding single-variable analysis shows that the relative magnitude of the resistance of the baffles changes at a soil depth of 6 cm. Therefore, the forces at soil depths of 6 cm and 12 cm are used to select the baffles for different depths and speeds. When the soil depth is less than 6 cm, the force curves of the D and E baffles do not intersect the force curves of the other baffles and always lie below them; Table 3 can be used to select the baffles at the different speeds. When the soil depth is greater than 6 cm, the resistance of the A baffle increases most slowly, and Table 4 shows that its resistance is the smallest, so the A baffle is subjected to the least force regardless of the speed.
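The abstract and conclusions describe the resistance as a quadratic function of the entry depth and speed. A minimal sketch of how such a response surface could be fitted to the simulated forces is given below; the depth–speed–force samples are hypothetical placeholders, not the actual simulation output.

```python
import numpy as np

# Hypothetical (depth [cm], speed [m/s], force [N]) samples of the kind read
# off the force curves discussed above; the real simulation data are not
# reproduced here.
depth = np.array([3, 6, 9, 12] * 3, dtype=float)
speed = np.repeat([0.2, 0.4, 0.6], 4)
force = np.array([4.0, 7.5, 9.5, 11.0,
                  6.0, 11.0, 14.0, 17.0,
                  8.0, 14.0, 19.0, 23.5])

# Least-squares fit of F ~ a0 + a1*d + a2*v + a3*d^2 + a4*v^2 + a5*d*v
X = np.column_stack([np.ones_like(depth), depth, speed,
                     depth**2, speed**2, depth * speed])
coeffs, *_ = np.linalg.lstsq(X, force, rcond=None)
print("fitted coefficients a0..a5:", np.round(coeffs, 3))
```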
5 Conclusions
The resistance of the five baffles depends on the depth and speed at which they enter the soil and is a quadratic function of these two factors. When the depth in the soil is less than 6 cm and the speed is 0.2 m/s, the E baffle experiences the least resistance. At speeds of 0.4 m/s and 0.6 m/s, the D baffle experiences the least resistance. When the depth in the soil is greater than 6 cm, the A baffle experiences the least resistance regardless of the speed.
Acknowledgements. This work is supported by the Key R&D Plan of Shandong Province (2017GNC12110, 2017GNC12108), by the National Natural Science Foundation of China (51675317) and by the National Key R&D Program of China (2017YFD0701103-3).
References
1. Huang, Z.: Key points of cultivation techniques of white asparagus in Dongshan County,
Fujian Province. Agric. Eng. Technol. 39(05), 77+85 (2019)
2. Chen, D., Wang, S., Wang, X.: Analysis on the status quo and development of mechanized
harvesting technology of asparagus. J. China Agric. Univ. 21(04), 113–120 (2016)
3. Du, S.: Optimization design and experimental study of selective harvesting end effector for
white asparagus. Shandong Agricultural University (2018)
4. Fang, H., Ji, C., Chandio, F.A., Guo, J., Zhang, Q., Chaudhry, A.: Analysis of soil movement
behavior in rotary cultivating process based on discrete element method. Trans. Chin. Soc.
Agric. Mach. 47(03), 22–28 (2016)
5. Wang, J., Wang, Q., Tang, H., Zhou, W., Duo, T., Zhao, Y.: Design and experiment of deep
burying and stalk returning device for rice straw. Trans. Chin. Soc. Agric. Mach. 46(09), 112–
117 (2015)
6. Zhang, Q., Liao, Q., Yu, W., Liu, H., Zhou, Y., Xiao, W.: Optimization and experiment of the
surface of the ditching plow of the rape direct seeding machine. Trans. Chin. Soc. Agric.
Mach. 46(01), 53–59 (2015)
7. Wu, Y., Bai, X., Zhang, S., Zhai, J.: Discrete element analysis of material movement in rice
mill based on discrete element EDEM. Cereal Process. 43(02), 52–55 (2018)
8. Wang, X., Yue, B., Gao, X., Zheng, Z., Zhu, R., Huang, Y.: Simulation and experiment of
soil disturbance behavior when different heights of shingling shovel are installed. Trans. Chin.
Soc. Agric. Mach. 49(10), 124–136 (2018)
Simulation and Experimental Study of Static
Porosity Droplets Deposition Test Rig
1 Introduction
In the field operation of the sprayer, the penetration and deposition of droplets are
important indicators to measure the performance or the appropriateness of the
parameters. The deposition and infiltration of droplets in the canopy of crops has a
major impact on the control of pests and diseases as well as pesticide pollution. In order
to improve the control effect, avoid environmental pollution and reduce the amount of
pesticides used, it is particularly important to study the deposition distribution of
droplets in plant canopies.
At present, droplet penetration and deposition in crops are mainly studied using computational fluid dynamics (CFD) simulation and field tests on real plants. Dekeyser et al. used artificial trees to compare the deposition quality of seven orchard spray application techniques [1]. Tonggui Wu et al. proposed a three-dimensional
structural index that can represent the optical porosity of multi-row tree-shaped wind-
breaks, and applied it to wide-width tree-shaped windbreaks, revealing the relationship
between the protective effect of tree windbreaks and optical porosity [2]. Cian James
Desmond et al. found that an accurate assessment of the porosity of the canopy, and
specifically the variability with height, improves simulation quality regardless of the
turbulence closure model used or the level of canopy geometry included [3]. Hong et al.
established a comprehensive CFD model to predict displacement of pesticide spray
droplets discharged from an air-assisted sprayer, depositions onto tree canopies, and off-
target deposition and airborne drift in an apple orchard [4, 5]. Endalew et al. proposed a
new CFD integration method for airflow and droplet deposition simulation of airflow
assisted spray, simulating orchard canopy targets, assisting airflow, complex interactions
of air and mist flow, and proposing a random deposition model. This model was used for
droplet deposition of leaves and verified the model to improve the design characteristics
of nebulizer and the calibration of operating parameters to improve spray efficiency and
reduce environmental impact [6–8].
As one of China’s important economic crops, cotton needs to be sprayed with
pesticides during the growth process. As the cotton grows to the middle and late stages,
the occlusion of the leaves is very serious, and the internal porosity is small, which is
not conducive to the uniform deposition of droplets. In this paper, cotton is used as the
research object and the research on spatial droplet deposition based on static porosity of
cotton is carried out.
This paper only considers the static porosity of the plant population; the porosity is assumed not to change during spraying. The canopy profile of a group of cotton plants is taken to be identical to the canopy profile of an individual plant, with the same porosity. The static porosity test device for cotton was established as follows.
(1) Layering of the cotton plants. Four cotton plants with a certain degree of interaction between branches and leaves were selected as the droplet deposition test area. Along the vertical direction, the plants were divided into three layers, namely the top layer, the middle layer and the lower layer. The height of each layer above the ground is shown in Fig. 1.
(2) Porosity measurement of the cotton plants. The porosity of each layer of the cotton plants in (1) is assumed to be 60%, 50% and 35%, respectively.
(3) Arrangement of the porosity of each layer in the test device. The porosity of each layer is represented by 9 cm diameter discs placed in a square area of 1 m × 1 m (see the sketch after this list). According to the porosities assumed above, for a porosity of 60% the number of discs is 64 and the actual calculated porosity is 59.28%; for a porosity of 50% the number of discs is 81 and the actual calculated porosity is 48.47%; for a porosity of 35% the number of discs is 100 and the actual calculated porosity is 36.38%. The discs on each layer are evenly dispersed over the square area.
(4) Construction of space droplet deposition test device based on static porosity.
Aluminum profiles were used to build a three-layer disc placement platform to
ensure that the heights of each layer from the ground were 0.9 m, 0.59 m and
0.28 m respectively. Finally, a static porosity similarity test device for cotton
population plants was obtained, as shown in Fig. 2.
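The "actual calculated porosity" values quoted in step (3) follow directly from the disc geometry; the short sketch below reproduces them under the stated assumption of 9 cm diameter discs in a 1 m × 1 m area.

```python
import math

disc_diameter = 0.09        # m
plot_area = 1.0 * 1.0       # m^2 (1 m x 1 m square area)
disc_area = math.pi * (disc_diameter / 2.0) ** 2

# Porosity = open fraction of the square area not covered by discs.
for n_discs in (64, 81, 100):
    porosity = 1.0 - n_discs * disc_area / plot_area
    print(f"{n_discs:3d} discs -> porosity {porosity:.2%}")
# Prints approximately 59.28 %, 48.47 % and 36.38 %, matching the text.
```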
(1) The test site is the Spray Performance Laboratory of Shandong Agricultural University, and the effect of natural wind is not considered during the test, as shown in Fig. 3. The porosity is adjusted by changing the spacing between the wires.
(2) Setting of the sampling points. According to the porosity of each layer in the prediction model, different sampling points were selected on the front of the discs. Water-sensitive paper of 3 cm × 4 cm was used as the droplet deposition carrier and was fixed to the discs with paper clips. The papers were placed randomly at the sampling points and numbered on the back.
(3) Spray test. Water was sprayed instead of pesticide. By adjusting the duty cycle, the nozzle flow rate was set to 0.37 L/min. Once the nozzle sprayed droplets in a stable state, the mobile platform was controlled to pass through the spray zone at a speed of 0.65 m/s.
(4) Collection of the water-sensitive paper. After spraying was complete, the water-sensitive paper was allowed to dry and then placed in a sealed bag for post-treatment. The above steps were repeated, and the combination of porosity conversion devices was adjusted according to the test scheme. Each test combination was repeated three times.
(5) The DepositScan software was used to process the water-sensitive paper, and the analysis results were exported to Excel to obtain the unit deposition on each paper. The average deposition per unit area of each layer is calculated according to Eq. (1), and the deposition of the whole layer is calculated with Eq. (2):

\bar{q} = \frac{q_1 + q_2 + q_3 + \cdots + q_m}{m}    (1)

Q = \bar{q} \cdot S    (2)

where
m — number of sampling points per layer
\bar{q} — average deposition over all sampling points in the layer (µl/cm²)
S — total area of the layer (cm²)
Q — total deposition of the layer (µl).
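A minimal sketch of Eqs. (1) and (2) is given below; the unit-deposition readings and the layer area used in the example are hypothetical placeholders, not measured values.

```python
def layer_deposition(unit_depositions, layer_area_cm2):
    """Average unit deposition (Eq. 1, in uL/cm^2) and total layer deposition
    (Eq. 2, in uL) from the readings of one layer's water-sensitive papers."""
    q_bar = sum(unit_depositions) / len(unit_depositions)   # Eq. (1)
    return q_bar, q_bar * layer_area_cm2                    # Eq. (2)

# Hypothetical DepositScan readings (uL/cm^2) for one layer and an assumed
# 1 m x 1 m layer area (10 000 cm^2); the real values are not listed here.
q_bar, Q = layer_deposition([0.45, 0.52, 0.38, 0.41], 10_000.0)
print(f"q_bar = {q_bar:.3f} uL/cm^2, Q = {Q:.1f} uL")
```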
For the above nine sets of porosity combinations, according to the relationship
between different cotton canopy morphology and porosity, the corresponding canopy
shapes of the nine devices are shown in Fig. 4.
Fig. 5. CFD simulation process of static porosity droplet deposition test device
During the solution process, the droplets are sprayed at the initial position for 50 time steps (0.5 s in total) so that the drops fall to the ground. Then the calculation domain moves for 145 time steps (1.45 s in total) at a speed of 0.65 m/s, after which the particle spraying is completed and the motion stops. The remaining airborne particles continue to be calculated until all particles have settled, which completes the calculation. Over the whole spray movement, the total spraying time of the droplet particles was 1.95 s.
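For reference, the two phases add up to the stated total, and the stated speed and duration imply the traverse length of the calculation domain:

t_{total} = 0.5\ \text{s} + 1.45\ \text{s} = 1.95\ \text{s}, \qquad s = 0.65\ \text{m/s} \times 1.45\ \text{s} \approx 0.94\ \text{m}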
Q = q \cdot t \cdot 10^{6}    (3)

where
Q — total amount of spray during the spray process (µL)
q — single nozzle flow rate (L/min)
t — duration of the spray process (s).
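Substituting the values used in the simulation (a nozzle flow rate of 0.37 L/min over 1.95 s) and converting the flow rate to litres per second reproduces the total quoted below:

Q = \frac{0.37\ \text{L/min}}{60\ \text{s/min}} \times 1.95\ \text{s} \times 10^{6}\ \mu\text{L/L} \approx 12\,025\ \mu\text{L}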
The calculation results show that the total injection volume in the simulation process is 12 025 µL. In the simulation, the deposition amount of droplets in each layer is shown in Table 2.
Table 2. Statistical table of droplet deposition of each layer in the process of simulation
Porosity assemblage Droplet deposition per layer (µl)
Upper layer Middle layer Lower layer
1 4266.68 3472.28 881.6
2 4145.7 3342.44 2176.58
3 3855.67 2732.95 1132.56
4 5098.16 2562.66 1497.09
5 5094.98 2794.13 1237.62
6 5094.58 2292.17 1186.24
7 5587.17 2562.66 1090.65
8 5570.52 2232.16 1277.60
9 5557.00 2110.45 697.60
The deposition data obtained in laboratory tests are processed and the deposition
per layer is expressed as a percentage of the deposition per layer relative to the total jet
volume. The percentage of deposition in each layer and two kinds of test errors are
shown in Table 3.
Table 3. Percentage of droplet deposition in each layer obtained by simulation (SL) and experiment (Exp), and the relative error (ERR)
S/N Upper layer (%) Middle layer (%) Lower layer (%)
SL Exp ERR SL Exp ERR SL Exp ERR
1 35.48 30.48 14.09 28.88 26.38 8.80 7.33 5.27 28.10
2 34.48 29.78 13.63 27.79 25.38 8.69 18.10 15.12 16.44
3 32.06 26.96 15.91 22.72 23.20 11.13 9.41 8.11 13.86
4 42.40 38.80 8.49 21.31 19.55 5.26 12.45 12.14 2.51
5 42.37 38.27 9.68 23.23 24.15 8.96 10.29 10.27 0.21
6 42.37 38.17 9.91 19.06 15.89 10.13 9.86 9.99 13.16
7 46.46 43.16 7.10 21.31 17.02 10.74 9.07 7.63 15.82
8 46.32 43.52 6.04 18.56 14.24 12.53 10.62 9.87 7.07
9 46.21 42.21 8.65 17.55 12.63 10.92 5.80 4.47 23.01
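The definition of the error column is not stated explicitly; one definition that approximately reproduces the tabulated ERR values is the relative deviation of the experiment from the simulation, sketched below for combination 1.

```python
def relative_error_percent(simulated, experimental):
    """Relative deviation of the experimental value from the simulated one (%)."""
    return abs(simulated - experimental) / simulated * 100.0

# Combination 1 from Table 3: (SL, Exp) for the upper, middle and lower layers.
for layer, (sl, exp) in zip(("upper", "middle", "lower"),
                            [(35.48, 30.48), (28.88, 26.38), (7.33, 5.27)]):
    print(f"{layer:6s}: ERR = {relative_error_percent(sl, exp):5.2f} %")
# Gives roughly 14.1 %, 8.7 % and 28.1 %, close to the tabulated 14.09, 8.80 and 28.10.
```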
It can be seen from Table 3 that the simulation results are basically consistent with the test results. Only for combinations 1 and 9 do the errors of the lower-layer deposition reach 28.10% and 23.01%, respectively, which are mainly caused by errors of the test system; the remaining errors are within 16.5%. This shows that the simulation basically reflects the actual spray situation and that the droplet deposition distribution test device with similar static porosity can be used to analyze the spatial distribution of droplet deposition.
5 Conclusions
This paper describes the density of the plant canopy using porosity and designs a spatial droplet deposition test device that reproduces the static porosity of the plant canopy. Taking cotton as an example, a static porosity deposition test device was set up and a spray deposition test was carried out. At the same time, the spatial distribution of droplets under different porosities was obtained through CFD simulation. The results show that the static porosity droplet deposition test device can be used to analyze the spatial distribution of droplet deposition.
Acknowledgements. This work was supported by the National Natural Science Foundation of
China (51475278) and the Agricultural Machinery Equipment R&D Innovation Project of
Shandong Province (2018YF002).
References
1. Dekeyser, D.: Spray deposition assessment using different application techniques in artificial
orchard trees. Crop Protect. 64(10), 187–197 (2014)
2. Wu, T.: Relationships between shelter effects and optical porosity: a meta-analysis for tree
windbreaks. Agric. For. Meteorol. 259(9), 75–81 (2018)
3. Desmond, C.J.: A study on the inclusion of forest canopy morphology data in numerical
simulations for the purpose of wind resource assessment. J. Wind Eng. Ind. Aerodyn. 126
(03), 24–37 (2014)
4. Hong, S.-W.: CFD simulation of airflow inside tree canopies discharged from air-assisted
sprayers. Comput. Electron. Agric. 149(06), 121–132 (2018)
5. Hong, S.-W., Zhao, L., Zhu, H.: CFD simulation of pesticide spray from air-assisted sprayers
in an apple orchard: tree deposition and off-target losses. Atmos. Environ. 175(03), 109–119
(2018)
6. Endalew, A.M.: Modelling pesticide flow and deposition from air-assisted orchard spraying in
orchards: a new integrated CFD approach. Agric. Forest Meteorol. 150(10), 1383–1392
(2010)
7. Endalew, A.M.: A new integrated CFD modelling approach towards air-assisted orchard
spraying—part I: model development and effect of wind speed and direction on sprayer
airflow. Comput. Electron. Agric. 71(02), 128–136 (2009)
8. Endalew, A.M.: A new integrated CFD modelling approach towards air-assisted orchard
spraying—Part II: validation for different sprayer types. Comput. Electron. Agric. 71(02),
137–147 (2009)
Effect of Heat Treatment on the Ductility
of Inconel 718 Processed by Laser Powder
Bed Fusion
1 Introduction
grained compared to its cast and wrought counterparts [3], with traces of microseg-
regation, MC carbides and Laves-phase as a result of the LPBF process [4–8].
Several studies have been conducted to investigate the influence of different heat
treatment procedures on the microstructure and mechanical properties of Inconel 718
processed by LPBF. Aydinöz et al. [4] investigated the effect of, amongst others, solution annealing (1000 °C/1 h, air cooling) and ageing (720 °C/8 h, furnace cool to 621 °C/2 h, 621 °C/8 h), and reported an elongation at break of approximately 10% at
room temperature. Chlebus et al. [5] conducted experiments with an identical ageing
scheme, but with different annealing temperatures prior to ageing. The reported elon-
gation at break for specimens built parallel to the build direction and solution annealed
at 1100 °C is 19 ± 2% at room temperature. Hovig et al. [2] reports elongation at
break of 5 ± 2% at room temperature for specimens manufactured parallel to the build
direction, solution annealed at 980 °C/1 h, and aged following the same scheme as
Aydinöz et al. and Chlebus et al. Hot isostatic pressing (HIP) at 1160 °C prior to
ageing increased the elongation at break up to 9 ± 4%. Schneider et al. [9] compared
the mechanical properties of 10 different heat treatment variations, with reported
elongation at break ranging from 15.44 ± 2.00% to 34.34 ± 1.52% depending on the
heat treatment condition. Amongst the aged conditions, stress relief at 1066 °C/1.5 h,
solution treatment at 954 °C/1 h, followed by ageing at 720 °C/8 h and 620°C/10 h,
produced the highest elongation at break with a reported value of 21.96 ± 0.37%.
Common for all the mentioned studies are yield strengths upwards of 1000 MPa, and in
some cases exceeding 1300 MPa.
When conducting tensile tests at elevated temperatures, the yield strength, UTS and elongation at break are reduced compared to testing at room temperature. Trosch
et al. [3] compared the tensile properties of cast, forged and additively manufactured
Inconel 718 tested at room temperature, 450 °C and 650 °C. The material was heat treated by solution treatment at 980 °C/1 h followed by ageing at 720 °C/8 h and 620 °C/8 h. The elongation at break dropped from 20.4% at room temperature to 14.2% at 650 °C for the LPBF specimens built parallel to the build direction. The elongation at break of the LPBF specimens was reported to drop more at 650 °C, relative to room temperature, than that of the forged and cast samples, which is attributed to the presence of δ-phase within the grains.
Zhang et al. [10] conducted a study in which three different levels of δ-phase were present in the material. The material was tested at 950 °C, and the condition without δ-phase displayed the highest elongation at break. As the δ-phase content increased, the elongation at break was significantly reduced: increasing the level of δ-phase from 0% to 3.79% reduced the elongation at break by 20%, and further increasing it from 3.79% to 8.21% reduced the elongation at break by an additional 20%. The yield strength and UTS were not as greatly affected by the alteration of the δ-phase content.
This study focuses on understanding the mechanisms that influence the elongation
at break, in order to increase the ductility while maintaining acceptable strength.
2 Experimental Method
Twenty tensile specimens were manufactured out of Inconel 718 powder using an EOS
GmbH EOSINT M280. The processing parameters are denoted as In718 Performance
2.1 by the vendor, and argon shielding gas was used with a Grid Nozzle type 2200
5501. The specimen geometry is shown in Fig. 1, with the specimen orientation with
respect to the build direction indicated.
Fig. 1. Specimen geometry. All dimensions in mm, except for roughness (lm).
The chemical composition of the powder feedstock as given by the material vendor
is shown in Table 1.
Table 1. Chemical composition of the Inconel 718 powder feedstock as supplied by the material
vendor.
Ni Cr Nb Mo Ti Al Co Cu C Fe
wt–% 50–55 17–21 4.75–5.5 2.8–3.3 0.65–1.15 0.2–0.8 <1.0 <0.3 <0.08 Bal.
The tensile specimens were tested in an MTS3 tensile machine with a 100 kN load
cell at a displacement rate of 0.6 mm/min, at either room temperature, 550 °C, 600 °C,
or 650 °C (induction heating).
All the Inconel 718 samples display a microstructure typical for LPBF. The
microstructure is cellular, with fine grains in a narrow area close to the laser scan
trajectory, and equiaxed grains growing between the laser paths. This complies well
with the findings of e.g. Rodgers et al. [11].
In the stress relieved condition, the microstructure is decorated with needle-shaped δ-phase in the interdendritic region, with additional δ-phase on the grain boundaries, and Al-oxides distributed across the microstructure, as shown in Fig. 2(a).
Table 2. Mechanical properties of Inconel 718 tested at 550 °C and 600 °C, heat treated
according to HT1. The values in parenthesis are the reported expected values by the material
supplier tested at 650 °C [12].
Rp0.2 (MPa) UTS (MPa) Elongation at break (%)
550 °C
Mean 1101 (1010 ± 50) 1267 (1210 ± 50) 11.49 (20 ± 3)
Std. Dev. 4.61 9.01 0.96
600 °C
Mean 1076 (1010 ± 50) 1220 (1210 ± 50) 11.53 (20 ± 3)
Std. Dev. 2.59 9.55 0.33
The tensile strength properties in HT1 are about 30–40 MPa lower when tested at
600 °C compared to 550 °C. The elongation at break in HT1 did not seem to be
affected by the different testing temperatures. The elongation at break of 11.5% is comparable to values reported in the literature [4], but it is not satisfactory when compared to the material data sheet provided by the material supplier, which suggests an elongation at break of 20 ± 3% [12]. The material
in HT1 was subjected to a stress relief prior to the recommended heat treatment,
however. The influence of the stress relief on the microstructure is discussed later in
this section.
Figure 2(b) shows the microstructure of Inconel 718 after HT1. δ-phase was observed on the grain boundary, an Al-rich spherical phase was observed within the grains, and evidence of Laves-phase was observed on the grain boundaries. Some δ-phase was observed within the grains as well.
Fig. 2. SEM micrographs of stress relieved (a) and HT1 (b) Inconel 718. The arrows indicate (1) interdendritic δ-phase, (2) grain boundary δ-phase, (3) Al-oxides, and (4) irregular particles, believed to be Laves phase, carbides, or inclusions caused by powder impurities or introduced during the sample preparation.
Table 3. Mechanical properties of Inconel 718 heat treated according to HT2.a tested at room
temperature and 650 °C.
Rp0.2 (MPa) UTS (MPa) Elongation at break (%)
Room temperature
Mean 1160 1402 32.2
Std. Dev. 10.10 10.10 0.50
650 °C
Mean 978 1098 25.6
Std. Dev. 0.9 3.0 2.8
Table 4. Mechanical properties of Inconel 718 heat treated according to HT2.b tested at room
temperature and 650 °C.
Rp0.2 (MPa) UTS (MPa) Elongation at break (%)
Room temperature
Mean 1188 1425 33.0
Std. Dev. 4.5 4.46 0.3
650 °C
Mean 1047 1164 24.1
Std. Dev. 16.8 7.93 0.5
The elongation at break in HT2.a and HT2.b far exceeds the reported values in the
literature [2, 4, 5, 9], while at the same time keeping a satisfactory yield strength. The
heat treatment procedure was based on findings by Schneider et al. [9] in order to
maximize the elongation at break. In HT2.a and HT2.b the material was not subjected
to a stress relief prior to solution annealing, and the ageing temperature was lower
compared to HT1. HT2.b was solution annealed at a slightly higher temperature
(1065 °C) compared to HT2.a (1010 °C). The higher temperature seems to increase
both strength and ductility. Figure 3 shows the microstructure after HT2.a and HT2.b
heat treatments, respectively. δ-phase was observed both on the grain boundaries and within the grains. The Al-rich phase is also observed, but no Laves-phase was detected. Solution treatment at a higher temperature reduces the amount of δ-phase observed.
Fig. 3. Microstructure of Inconel 718 with heat treatment HT2.a (a) and HT2.b (b). The arrows indicate (1) δ-phase (white) and (2) Al-oxide with Ni3Nb phase (black dots).
4 Discussion
Based purely on the tensile results in this study, it is apparent that HT2 is superior to
HT1 with respect to elongation at break. There is one main difference between the two
heat treatment procedures; HT1 was stress relieved prior to solution annealing and
ageing.
The elongation at break after HT1 of 11.5% at 550 °C and 600 °C is comparable to
the findings of Aydinöz et al. [4], where the material was solution annealed at 1000 °C
prior to ageing. Similar results are reported by Hovig et al. [2], where the material was
solution annealed at 980 °C prior to ageing. In the reported cases from the literature
with solution annealing temperatures above 1000 °C the elongation at break for tensile
specimens built parallel to the build direction in LPBF is in the region of 20% or higher
[5, 9]. The elongation at break after HT2.a and HT2.b is 25.6% and 24.1% at 650 °C,
which compares favourably to previous studies with similar solution annealing tem-
peratures [5, 9].
In an effort to explain why a change in solution annealing temperature can more
than double the elongation at break, an isothermal transformation diagram (TTT dia-
gram) is consulted. The TTT diagram for Inconel 718 published in Hovig et al. [2]
shows that unwanted phases such as δ and σ form at temperatures between 650 °C and 1000 °C. δ-phase can form if the material is held at temperatures between 900 °C and 1000 °C in excess of one hour, and grain-boundary δ can form after just minutes at the same temperatures. In order to form σ-phase, the material must be held at temperatures in excess of 660 °C for extended periods of time (>1000 h). The high amount of δ-phase in the stress relieved condition (Fig. 2(a)) is likely a result of the slow cooling rate. Solution treatment at a sufficiently high temperature reduces the content of δ-phase [13], which can be seen in the reduction of δ in HT2.b compared to HT2.a (Fig. 3).
In other words, when solution annealing (or stress relief) is performed at temperatures below 1000 °C, unwanted phases such as δ are allowed to form. The same would be the case if the cooling rate from temperatures above 1000 °C is not sufficiently high. This is supported by the microscope image in Fig. 2(a), where a high density of both interdendritic and grain-boundary δ is evident. Most of the interdendritic δ-phase dissolves during the solution annealing that follows the stress relief in HT1, but the microscope analysis still shows traces of δ-phase within the grains (Fig. 2(b)). There is still evidence of δ-phase, especially on the grain boundaries, in HT2 as well (Fig. 3), but to a far lesser extent. Zhang et al. [10] demonstrated how an increased δ-phase content reduces the elongation at break, which supports the findings in this study. In order to increase the elongation at break, the amount of δ-phase, especially within the grains, should be reduced through careful selection of heat treatment parameters.
In an effort to increase the ductility of Inconel 718 processed by LPBF the effects of
different heat treatment procedures have been investigated. The following conclusions
can be drawn from this work:
LPBF Inconel 718 subjected to stress relief or solution annealing at temperatures below 1000 °C, or with insufficient cooling rates from higher temperatures, exhibits an unsatisfactory elongation at break. This is attributed to excessive formation of δ-phase.
The heat treatments denoted HT2.a and HT2.b resulted in elongations at break of 32.2% and 33.0% at room temperature and 25.6% and 24.1% at 650 °C, respectively.
This heat treatment procedure more than doubled the elongation at break at high
temperature compared to HT1, while maintaining satisfactory strength.
Acknowledgements. The authors would like to thank Tronrud Engineering, Hønefoss, Norway
for supplying the Inconel 718 test specimens and performing the stress relief on the samples
denoted HT1. Furthermore, thanks are due to SWEREA KIMAB, Stockholm, Sweden for heat
treatment and tensile testing, and to Hydro Innovation & Technology, Finspång, Sweden for
microstructure investigations.
This study is financed by the Norwegian research council, grant number 256623.
References
1. Wang, X., Gong, X., Chou, K.: Review on powder-bed laser additive manufacturing of
Inconel 718 parts. Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf. 231(11), 1890–1903 (2016)
2. Hovig, E.W., Azar, A.S., Grytten, F., Sørby, K., Andreassen, E.: Determination of anisotropic mechanical properties for materials processed by laser powder bed fusion (2019)
3. Trosch, T., Strößner, J., Völkl, R., Glatzel, U.: Microstructure and mechanical properties of
selective laser melted Inconel 718 compared to forging and casting. Mater. Lett. 164, 428–
431 (2016)
4. Aydinöz, M.E., Brenne, F., Schaper, M., Schaak, C., Tillmann, W., Nellesen, J., Niendorf,
T.: On the microstructural and mechanical properties of post-treated additively manufactured
Inconel 718 superalloy under quasi-static and cyclic loading. Mater. Sci. Eng., A 669, 246–
258 (2016)
5. Chlebus, E., Gruber, K., Kuźnicka, B., Kurzac, J., Kurzynowski, T.: Effect of heat treatment
on the microstructure and mechanical properties of Inconel 718 processed by selective laser
melting. Mater. Sci. Eng., A 639, 647–655 (2015)
6. Li, X., Shi, J.J., Wang, C.H., Cao, G.H., Russell, A.M., Zhou, Z.J., Li, C.P., Chen, G.F.:
Effect of heat treatment on microstructure evolution of Inconel 718 alloy fabricated by
selective laser melting. J. Alloy. Compd. 764, 639–649 (2018)
7. Moussaoui, K., Rubio, W., Mousseigne, M., Sultan, T., Rezai, F.: Effects of selective laser
melting additive manufacturing parameters of Inconel 718 on porosity, microstructure and
mechanical properties. Mater. Sci. Eng. A 735, 182–190 (2018)
8. Popovich, V.A., Borisov, E.V., Popovich, A.A., Sufiiarov, V.S., Masaylo, D.V., Alzina, L.:
Impact of heat treatment on mechanical behaviour of Inconel 718 processed with tailored
microstructure by selective laser melting. Mater. Des. 131, 12–22 (2017)
9. Schneider, J., Lund, B., Fullen, M.: Effect of heat treatment variations on the mechanical
properties of Inconel 718 selective laser melted specimens. Addit. Manuf. 21, 248–254
(2018)
10. Zhang, S.-H., Zhang, H.-Y., Cheng, M.: Tensile deformation and fracture characteristics of
delta-processed Inconel 718 alloy at elevated temperature. Mater. Sci. Eng. A 528(19),
6253–6258 (2011)
11. Rodgers, T.M., Madison, J.D., Tikare, V.: Simulation of metal additive manufacturing
microstructures using kinetic Monte Carlo. Comput. Mater. Sci. 135, 78–89 (2017)
12. EOS GmbH: EOS NickelAlloy IN718 (2014). http://ip-saas-eos-cms.s3.amazonaws.com/
public/4528b4a1bf688496/ff974161c2057e6df56db5b67f0f5595/EOS_NickelAlloy_IN718_
en.pdf. Accessed 16 May 2019
13. Zhang, H.Y., Zhang, S.H., Cheng, M., Zhao, Z.: Microstructure evolution of IN718 alloy
during the delta process. Proc. Eng. 207, 1099–1104 (2017)
Comparative Study of the Effect of Chord
Length Computation Methods in Design
of Wind Turbine Blade
Abstract. Design of wind turbine blade is the most important step in devel-
oping efficient non-conventional energy converters in order to tackle today’s
energy crisis scenario. In horizontal axis wind turbines in particular, the blades are the most important parts of the turbine and need an optimized design, since they have a direct relation with the output performance. In addition, the turbine blade is a critical part in terms of manufacturing cost, accounting for about 15–20% of the total wind turbine plant cost. One of the main design parameters in the geometry of wind turbine blades is the chord length. This parameter affects not only the performance, but also the blade structural stiffness. A closer review of the
literature shows that there is no commonly accepted formula for calculation of
chord length distribution. Thus, this paper presents a comparative study of the formulas used by different researchers for the calculation of the chord length distribution along the blade. Upon evaluating the available formulas, methods combining the works of different researchers for designing horizontal axis wind turbine blades are also included, and the methods are assessed in terms of chord length distribution, manufacturing complexity of the blade, weight of the blade and power output obtained. Each method is compared for the same airfoil and the same input parameters such as radius, wind speeds, number of blades, tip speed ratio and materials. The effect of the chord length on the performance of the wind turbine is computed with a dedicated software tool (Qblade) and analyzed.
Keywords: Horizontal axis wind turbine · Wind turbine blade · Chord length distribution · Qblade
1 Introduction
Currently, wind turbine industries are highly interested in studies on ways to increase the efficiency of wind turbines, because more and more electric power needs to come from renewable sources due to increasing environmental concerns and rising oil prices. In addition, concerns over dwindling fossil-fuel resources have forced the search for alternative energy sources such as wind energy, which has proved to be an important source of non-polluting and renewable energy; special focus is given to wind energy due to its advantages [1] compared with fossil fuels. The reliability, sustainability and risks related to wind energy
applications have been studied by BoroumandJazi et al. [2]. As reported in [3], wind
turbines as electric power generators are also found to be attractive even for moderate
wind speeds. When designing a wind turbine, the blade design is extremely important.
For design of horizontal axis wind turbine blades, in particular, the information or
parameters required are the variation of the rotor blade chord and blade twist as a
function of the blade radius, as well as the airfoil section shapes used for the rotor and
their corresponding aerodynamic characteristics. The chord length is a basic parameter that has a great effect on the performance of wind turbines. In particular, the chord length distribution should provide maximum power output with a low blade weight. The chord length distribution varies from root to tip as a function of each segment of the blade, as illustrated in Fig. 1.
The chord length affects the performance of the wind turbine blade as well as the
blade structural stiffness. In the design process of the blade, primarily, the blade
parameters of each section are obtained, and then the main parameters such as chord
length and torsion angle are calculated. Several researchers have used several formulas
for calculating the chord length distribution [4–6]. El-Okda [7] collected a total of ten different methods for calculating the chord length distribution along the rotor blade, with and without considering wake rotation, under design methods of horizontal axis wind turbine rotor blades, and concluded that the Schmitz method for optimum flow angle and chord distribution is the simplest and most straightforward result of BEM optimization analysis.
Due to the observed variations of approaches to calculate the chord length distri-
bution, this paper focuses on a comparison study of a total number of 19 different chord
length calculation methods used in designing horizontal axis wind turbine blade. The
methods are evaluated based on output power, torque, manufacturing cost, and weight
of the blade for the same material. Moreover, the Qblade software is used to compare
the performance of each chord length calculation formula in blade designing for the
same airfoil having the same input parameters like angle of attack, lift coefficients,
wind speeds and other parameters which are common for all the considered cases.
To study the chord length calculation methods for design of horizontal axis wind
turbine (HAWT) blade, the following parameters are used for each of the methods.
The design data are a rated wind speed Vrated = 12.62 m/s, a blade pitch fixed at an angle θcp = −2°, a rotor radius R = 4.95 m, a hub radius rhub = 0.15 m and a number of blades B = 3. According to the range of tip speed ratios of high-speed wind turbines, λ = 7 is selected for all considered formulas. The NACA 4415 airfoil is selected because it has desirable features at the effective angle of attack of each radial segment of the blade, i.e. the ratio CL/CD is a maximum. For NACA 4415, α = 7° is found to be the optimum angle of attack according to the design data in the Qblade software, with the corresponding lift coefficient CL,opt(7°) = 1.20423 and drag coefficient CD,opt(7°) = 0.01162 [8]. The local tip speed ratio λr is given as a function of the arbitrary radius r as follows:

λr = λ · r / R    (1)
Table 1. Different formulas for calculating chord distribution and twist angle (the detailed expressions for each entry are given in the cited references)

No.  Reference(s) for the chord length distribution formula
1    Raju [9] and Edon [10]
2    Ingram [6]
3    Burton et al. [11]
4    Jamieson [12]
5    Hau [5], Schubel and Crossley [13]
6    Hansen [4]
7    Duran [14]; Manwell et al. [15]
8    Manwell et al. [15], without considering wake rotation
9    Gundtoft [16]
10   DNV [17]
11*  Maalawi [18]
12*  Shen et al. [19] and Maalawi [18]
13*  Maalawi [18] and El-Okda [7]
14*  Maalawi [18] and El khchine [20]
15   Biadgo and Aynekulu [21]
16   El-Okda [7] and Biadgo and Aynekulu [21]
17   El khchine [20] and Biadgo and Aynekulu [21]
18   Shen et al. [19] and Biadgo and Aynekulu [21]
19   Raut et al. [22]
* The formulas given in 11–14 include tip losses, F, and drag is not omitted.
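To make the comparison concrete, the sketch below evaluates one widely used optimum-chord relation, the ideal rotor with wake rotation from Manwell et al. [15] (the source cited for entry 7 in Table 1), for the design data given above. It is an illustration only; the exact expression used for each table entry should be taken from the cited references.

```python
import numpy as np

# Design data used for all methods in this study.
R, r_hub, B = 4.95, 0.15, 3       # rotor radius (m), hub radius (m), blade count
tsr, C_L = 7.0, 1.20423           # tip speed ratio and optimum lift coefficient

def chord_ideal_rotor(r):
    """Optimum chord of the ideal rotor with wake rotation (Manwell et al. [15])."""
    lam_r = tsr * r / R                          # local speed ratio, Eq. (1)
    phi = (2.0 / 3.0) * np.arctan(1.0 / lam_r)   # optimum flow angle (rad)
    return 8.0 * np.pi * r * (1.0 - np.cos(phi)) / (B * C_L)

radii = np.linspace(r_hub, R, 10)
for r, c in zip(radii, chord_ideal_rotor(radii)):
    print(f"r = {r:5.2f} m   chord = {c:6.3f} m")
```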
Fig. 2. Comparative graphical distribution of C/R vs r/R for references in Table 1: (a) 1–8,
(b) 15, 17 and 18, (c) 11, 12 and 14, (d) 13 and 16
Figure 2(a) indicates that the chord length distributions of the methods listed in the figure give small values compared to the other methods. The method used by Burton et al. [11] gives a chord length with a linear distribution, i.e. all values are the same along the blade, like a beam, so the chord distribution is constant everywhere; this reduces the power output of the wind turbine. The Gundtoft [16] method (Fig. 2(b)) also gives an approximately linear chord distribution. A linear chord distribution is not recommended, because blade sections of equal chord located at a larger radius experience higher normal force and torque. This can cause failure of wind turbine parts such as the hub and also reduces the performance of the wind turbine.
The chord length distribution formulas or methods for reference numbers 11–18 in Table 1 depend on the tip–hub loss factor F; in formulas 11 to 14 tip losses, F, are included and drag is not omitted. Comparing the results of those four chord length distributions, the values for the combined relation of Maalawi [18] and El-Okda [7] are very high, while the other three formulas give almost the same results. On the other hand, small chord length distributions are obtained by combining the work of Maalawi [18] with El khchine [20], as shown in Fig. 2(c) and (d).
The formulas from 15 to 18 include tip losses, F, but the drag force is omitted. When the results of these four chord length distributions are compared, the combined work of El-Okda [7] and Biadgo and Aynekulu [21] gives very high values, while the other three are almost the same. By combining the formulas used by Shen et al. [19] and Biadgo and Aynekulu [21], however, a small chord length distribution is obtained (Fig. 2(b) and (d)).
Almost all formulas from 11–18 give a high chord length near the root (r/R ≤ 0.2) of the blade, which makes the manufacturing cost of this type of blade design very high. Primarily, the mould used for fabricating the blades would have a complicated shape and would thus be more expensive. Secondly, a large increase in the chord over 0.1 ≤ r/R ≤ 0.4 adds considerable weight to the blade where there is very little contribution to power generation. Finally, the size of the inboard rotor section adds considerable weight to the rotor blade, which affects the cost of every major component that makes up a wind turbine. For example, an increase in blade weight requires a stronger drive shaft, gearbox, tower and foundation, which ultimately add to the cost of a new wind turbine [8].
for most of them. It is thus observed that the chord length must be reduced or the twist angle increased. The blades are modelled using the Qblade software for each of the methods, and the blade shapes for selected formulas are given in Fig. 4. As can be observed, the shapes of many of them are very different from each other.
It is further observed from Fig. 4 that some portions of the shapes are beam-like, which may pose a great risk to the life of the wind turbine parts. To model the twist angle, each method is optimized with the same optimum lift/drag ratio. Furthermore, many of the models have a tapered shape. As the bending stresses are reduced toward the tip of a cantilevered beam, blades tapered toward the tip are expected to be lighter than straight blades. The chord lengths obtained by the combined methods in formula nr. 13 (Maalawi [18] and El-Okda [7]), the combined methods in formula nr. 16 (El-Okda [7] and Biadgo and Aynekulu [21]) and formula nr. 9 (Gundtoft [16]) are observed to be very large. It was therefore decided to disregard these shapes from further study of their performance, as they are not suitable for designing the blade.
Weight Comparison: The weight of the blade, which is closely related to the chord length, affects the power output of the wind turbine. As given in Table 3, the weight of each wind blade model is compared using the same material for each method. The materials used for the shell and the internal structure are polyurethane 20GF 65D and foam, respectively. As the tabulated values show, the methods used in the different studies give different chord length distributions, and hence the power outputs obtained from them show large variations. The analysis also shows that the weight of the blade increases with increasing chord length, while a low blade weight is favoured for high power output.
Table 3. Comparison of weight of each model for polyurethane 20GF 65D/foam material
No. 1 2 3 4 5 6 7 8
Weight [kg] 28.353 119.032 1.601 313.453 5217.690 3.231 42.261 106.198
No. 10 11 12 13 14 16 18 19
Weight [kg] 42.261 4682.62 4682.78 4509.87 273.925 1216.79 1174.76 0.4809
Observing the conducted analysis closely, the formulas used by Duran [14] and Manwell et al. [15], Ingram [6], Manwell et al. [15] without considering wake rotation, and Jamieson [12] may be recommended from the point of view of low blade weight, blade shapes that do not need any modification for design, the power output obtained, and relative ease of manufacturing near the root.
4 Conclusion
The wind turbine blade is considered the most critical component in the wind turbine system, because both the manufacturing cost of the blades and their maintenance cost represent significant portions of the total manufacturing and maintenance costs, respectively. The chord length is a basic parameter of the wind turbine blade that highly influences the manufacturing cost, the complexity of the blade shape, the general performance of the wind turbine, the reduction of blade weight, etc.
The present work is carried out to compare the methods used to calculate chord length
based on chord length distribution, manufacturing complexity of the blade, weight of
the blade and power output obtained. The comparison for each method is based on the
same airfoil, input parameters like radius, wind speeds, number of blades, tip speed
ratio, the same material, etc.
The results indicate that for many of the methods the chord length distribution is very large. The blade shapes obtained from some of them are complicated, especially around the root, and some of the models have unexpected shapes that would make the manufacture of the wind turbine blade very costly. Since fabricating blades of irregular shape is expensive, the observed large increase in the chord over the range 0.1 ≤ r/R ≤ 0.4 would add considerable weight to the blade where there is very little contribution to the wind turbine power generation. The weight of the blade is also very different for each method. This variation affects the cost of every major component of the turbine, such as the drive shaft, gearbox, tower and foundation, which ultimately add to the purchase cost. In general, the variations in chord length distribution affect the performance of the whole wind turbine.
References
1. Mahmoud, N.H., El-Haroun, A.A., Wahba, E., Nasef, M.H.: An experimental study on
improvement of Savonius rotor performance. Alex. Eng. J. 51, 19–25 (2012)
2. BoroumandJazi, G., Rismanchi, B., Saidur, R.: Technical characteristic analysis of wind
energy conversion systems for sustainable development. Energy Convers. Manag. 62, 87–94
(2013)
3. Karamanis, D.: Management of moderate wind energy coastal resources. Energy Convers.
Manag. 52, 2623–2628 (2011)
4. Hansen, M.O.L., Johansen, J.: Tip studies using CFD + and comparison with tip loss
models. Wind Energy 7(4), 343–356 (2004)
5. Hau, E.: Wind Turbines, Fundamentals, Technologies, Application, and Economics.
Springer, Heidelberg (2006)
6. Ingram, G.: Wind turbine blade analysis using the blade element momentum method. http://www.dur.ac.uk/g.l.ingram/download/wind_turbine_design.pdf. Accessed 18 Aug 2019
7. El-Okda, Y.M.: Design methods of horizontal axis wind turbine rotor blades. Int. J. Ind.
Electron. Drives 2(3), 135–150 (2015)
8. Corke, T., Nelson, R.: Wind Energy Design. CRC Press, Boca Raton (2018)
9. Raju, B.K.: Design optimization of a wind turbine blade. Master thesis, The University of
Texas at Arlington (2011)
10. Edon, M.: Meter wind turbine blade design. Internship Report, Folkecenter for Renewable
Energy (2007)
11. Burton, T., Sharp, D., Jenkins, N., Bossanyi, E.: Wind Energy Handbook. Wiley, New York
(2001)
12. Jamieson, P.: Innovation in Wind Technology. Wiley, London (2011)
13. Schubel, P.J., Crossley, R.J.: Wind turbine blade design. Energies 5(9), 3425–3449 (2012)
14. Duran, S.: Computer-aided design of horizontal-axis wind turbine blades. Master thesis,
Middle East Technical University (2005)
15. Manwell, J.F., McGowan, J.G., Rogers, A.L.: Wind Energy Explained: Theory, Design and
Application, 2nd edn. Wiley, London (2009)
16. Gundtoft, S.: Wind turbines. Technical report, University of Aarhus, Denmark (2009)
17. DNV/Risø: Guide Lines for Design of Wind Turbines. Wind Energy Department, Risø, 2nd
edn. ISBN 87-5502870-5 (2002)
18. Maalawi, K.: Special Issues on Design Optimization of Wind Turbine Structures (2011).
http://www.intechopen.com/books/wind-turbines/special-issues-on-designoptimization-of-
wind-turbinestructures. Accessed August 2018
19. Shen, W.Z., Mikkelsen, R., Sørensen, J.N., Bak, C.: Tip loss corrections for wind turbine
computations. Wind Energy 8(4), 457–475 (2005). https://doi.org/10.1002/we.153
20. El khchine, S., Sriti, M.: Improved blade element momentum theory (BEM) for predicting
the aerodynamic performances of horizontal axis wind turbine blade (HAWT). Tech. Mech.
38(12), 191–202 (2018)
21. Biadgo, A.M., Aynekulu, G.: Aerodynamic design of horizontal axis wind turbine blades.
FME Trans. 45, 647–660 (2017). https://doi.org/10.5937/fmet1704647m
22. Raut, S., Shrivas, S., Sanas, R., Sinnarkar, N., Chaudhary, M.K.: Simulation of micro wind
turbine blade in Q-blade. Int. J. Res. Appl. Sci. Eng. Technol. 5(IV), 256–262 (2017)
23. Qblade website. http://www.q-blade.org/. Accessed 18 Aug 2019
24. Marten, D., Wendler, J., Pechlivanoglou, G., Nayeri, C.N., Paschereit, C.O.: QBLADE: an
open source tool for design and simulation of horizontal and vertical axis wind turbines. Int.
J. Emerg. Technol. Adv. Eng. 3(3), 264–269 (2013)
On Modelling Techniques for Mechanical
Joints: Literature Study
Abstract. Most mechanical joints are in one way or another exposed to wear and tear, corrosion and fatigue, and are likely to fail over time. If the contacting surfaces in a connection are exposed to tangential loading due to vibrations, small-amplitude displacements called fretting can be induced at the surface and might result in crack nucleation and possible propagation. The review and analysis reported herein are based on a study of a wide range of literature reported during the last 25 years, and they look into which methods, techniques and tools have been used in those studies, including laboratory and full-scale testing,
especially around fretting fatigue issues in interference fit joints, and pre-tension
issues in pre-loaded bolts. In modelling of mechanical joints, the main tech-
niques can be categorized under either stochastic (or probabilistic) type, where
the random variation is usually based on fluctuations observed in historical data
for a selected period of time using standard time-series techniques, or deter-
ministic techniques, with analytical and numerical models and methods, which
are widely applied. Analytical techniques typically include models such as Newton's laws of motion, Laplace transformation methods, Lagrange differential equations, D'Alembert's principle of virtual work, Hamilton's principle, Fourier series and transforms, Duhamel's integral and Lamé's equations. Today, computers are widely used to find approximate solutions to complex engineering
problems by combining the above-mentioned methods. A large number of these
combined methods, also referred to as numerical methods, fall under finite
element analysis (FEA). One way to reduce fretting issues in mechanical con-
nections could be to increase the research and investigation effort on expanding
pin solutions, which are different from the traditional press and shrink fit
methods.
1 Introduction
In most mechanical systems, there are needs for power transmission and connection
techniques using various shapes and dimensions, and the most familiar transmission
systems might be of electrical or mechanical type. Among those, spline, dovetail,
riveted, interference fitted, pre-stressed and open-hole pin/bolt connections are the
typical mechanical joint types [1].
This literature study aims to look deeper into which contact modelling techniques
and tools have been used in previous studies and investigations of the damage mechanisms that occur in connections of mechanical systems. The goal of using joints and
other fasteners in many engineering and mechanical structures is to transfer moments
and forces through frictional contact surfaces into substructures. When there is a lack of
information about certain parameters, such as contact stiffness, damping, surface
roughness quality, preload etc., the frictional contact interfaces introduce uncertainty in
the dynamic properties of assembled structures, and then measurement, quantification and modelling of this uncertainty become important [2].
Different mechanical connection techniques are used for joints of mechanical
systems including the following:
– Interference fit, also known as press fit or friction fit, is a fastening method where
two parts, typically circular cross section shape, are pushed together (one into the
other) and kept in place due to the pressure between them.
– Shrink fit is an interference fit technique where typically the inner part is being
cooled down (shrinks), or the inner surface of the outer part is being heated up, to
make the fit possible. After assembly, the temperature-affected part will adjust its
size back to original size, and there will be an interference fitted connection.
– The mechanical radial expansion by companies like Bondura and Expander creates
the interference fit slightly different from both the “normal” press fit and shrink fit
techniques, and one of its advantages is the possibility to easily control and adjust
the fit level over time.
– A typical open-hole connection is a connection with a cylindrical pin in a joint with an operation tolerance equal to the installation tolerance. In this type of
connection, the pin is “loose” within the bore and has the possibility to move
relative to the bore during operation, as moveable joints in cranes or heavy
machinery.
– A pre-loaded bolt typically consists of the shank, bolt head and a threaded part with a nut. The bolt is typically tightened by torquing or hydraulic tensioning of the shank, to reach a pre-defined tension force in the bolt shank.
• Newton’s three laws of motion, which are the foundations for classical mechanics
and Newton’s second law in particular, to describe the motions of macroscopic
objects.
• Laplace transformation method, which can be used for calculating the response of a
system to a variety of force excitations, including periodic and non-periodic
responses. This method can treat discontinuous functions with no difficulty and it
automatically takes into account the initial conditions [5]. The Laplace transform is
an integral transform named after its French inventor Pierre-Simon Laplace
(1749–1827).
• Lagrange’s equations are differential equations in which one considers the energies
of the system and the work done instantaneously in time, and obtains the equation
of motion in generalized coordinates approaching the system from the analytical
dynamics point of view [5].
• Application of D’Alembert’s principle, which is an extension of the principle of
virtual work and stated as “the virtual work performed by the effective forces
through infinitesimal virtual displacements compatible with the system constraints
is zero”. The principle of virtual work is essentially a statement of the static or
dynamic equilibrium of a mechanical system [5].
• Hamilton’s principle is the most important and powerful variational principle in
dynamics, and it derives from the generalized D’Alembert’s principle. The varia-
tional principle views the motion as a whole from the beginning to the end.
• Fourier series and transform is a tool that breaks a waveform into an alternate
representation, characterized by sines and cosines, and it shows that any waveform
can be re-written as the sum of sinusoidal functions [5].
• The Duhamel’s integral or convolution integral, or sometimes referred to as the
superposition integral, is used to find the total response x(t) to an arbitrary input [5].
• Lamé’s equations are typically used for finding hoop and longitudinal stresses in a
thick-walled cylinder.
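Several of the studies discussed below (e.g. [6]) combine Lamé's equations with FE models. As a minimal sketch of how these closed-form stresses are evaluated, the snippet below computes the radial and hoop stresses across the wall of a pressurized thick-walled cylinder; the geometry and pressure are illustrative values only, not taken from any of the cited papers.

```python
import numpy as np

def lame_stresses(r, r_i, r_o, p_i):
    """Radial and hoop stress (Pa) at radius r in a thick-walled cylinder
    with inner radius r_i, outer radius r_o and internal pressure p_i (Lame)."""
    c = p_i * r_i**2 / (r_o**2 - r_i**2)
    sigma_r = c * (1.0 - r_o**2 / r**2)      # radial stress
    sigma_theta = c * (1.0 + r_o**2 / r**2)  # hoop (circumferential) stress
    return sigma_r, sigma_theta

# Illustrative hub: 25 mm bore radius, 50 mm outer radius, 80 MPa contact pressure
r = np.linspace(0.025, 0.050, 6)
sr, st = lame_stresses(r, r_i=0.025, r_o=0.050, p_i=80e6)
for radius, s_r, s_t in zip(r, sr, st):
    print(f"r = {radius*1e3:5.1f} mm  sigma_r = {s_r/1e6:7.1f} MPa  sigma_theta = {s_t/1e6:7.1f} MPa")
```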
Analytical models and methods are widely applied in studies and investigations of problems related to mechanical joints, and are often combined with other
methods, like numerical methods and experimental tests or real case studies. An
analytical model can be formulated as deterministic when the input values are worst-
case values.
Analytical modelling techniques can be considered as the basis of the development
of computer simulation of mechanical joints such as in finite element methods (FEM).
Examples of analytical models and methods in previous investigations include:
a. Analysis of a shrink-fit failure on a gear hub/shaft assembly was conducted by an
analytical approach, in combination with applying a Finite Element method. In the
analytical approach, the well-known Lamé's equations were applied, and the FEM
was implemented using ABAQUS [6].
b. An analytical and experimental study has been completed to assess the effectiveness
of a direct cold expansion technique on the fatigue strength of fastener holes, where
ANSYS 3-D Finite Element was used to carry out the analysis. The fatigue spec-
imens were made from 6.32 mm thick aluminum alloy (grade 7075-T6). An
oversized pin was pushed through the hole to create the cold expansion necessary
[7, 8].
Table 1. (continued)
Type of study – Objective/methodology used – Ref.
– Numerical FE (ABAQUS): To verify simulations by comparing the results of several analyses. For instance, crack growth in a shrink-fit assembly subjected to rotating bending was studied using two FE methods, and the results were compared with each other and with results from the literature [14]
– Numerical 2D FE axisymmetric model (ANSYS): The stress and deformation in a shrink-fitted hub-shaft joint have been analysed using the finite element method (two-dimensional axisymmetric model, ANSYS) and compared with the results from Lamé's equations, and they were found to be in good agreement [15]
– Numerical FE (ADINA): An interference assembly and fretting wear analysis of a hollow shaft were studied using FE, in combination with an analytical approach. The modelling and simulation work was intended to compute the equivalent contact stresses due to interference assembly of the shaft and identify the fretting damage under different loading cases, levels of interference and friction [16]
– Numerical FE (ABAQUS): Predicting fretting fatigue is a big challenge both for experimental tests and numerical simulation. In an attempt reported by Sabelkin and Mall, however, an FE model was developed using a four-node plane strain quadrilateral element to compute the local relative slip on the contact surface from measured global relative slip away from the contact surface. The results of the FE simulation were compared with test results measured using extensometers [17]
– Continuum Damage Mechanics (CDM in ABAQUS): CDM-based numerical simulations are used to investigate the fatigue damage evolution at a fastener hole, which is treated by cold expansion or with an interference pin, in combination with fatigue experiments. The test specimen was made of Al alloy 7075-T6, and the cold expansion level ranged from 2%–6% [18]
– Numerical 3D elastic FEM (ANSYS): In a similar problem area, the effect of interference fit on fretting fatigue in a single pinned plate in Al 7075-T6 was studied using FEM, together with experimental tests with 1%, 1.5%, 2% and 4% degree of interference fit [19]
– Numerical 2D FE (ABAQUS): Analysis of the fretting conditions in pinned connections of AA7075-T6 aluminum alloy sheet with aluminum and steel pins is also reported, using an FE model to evaluate the local mechanical parameters that control fretting wear damage [20]
Fretting problems and other challenges are well known and remain continuous issues in many industries when it comes to mechanical joints, although numerous investigations have been conducted over many years. In general, newer investigations build on the results of earlier investigations combined with new and often better techniques, test equipment and improved FE tools and methods, in addition to improved materials and palliatives/surface protection techniques.
Numerical simulations with finite element models, in combination with laboratory tests and/or analytical approaches, appear to be well-known, widely applied and efficient methods when investigating contact problems in mechanical joints. ABAQUS and ANSYS seem to be the most applied codes in the papers reviewed. In certain cases, simulations are also compared to the results of damage analysis on real equipment.
The mechanical radial expansion, or expanding pin technology [30] mentioned earlier, e.g. by companies like Bondura Technology and Expander, is not mentioned in any of the investigations found in this literature study. The mechanical radial expansion creates the interference fit slightly differently from both the “normal” press fit and shrink fit techniques, and it is possible to control and adjust the fit level over time. Further research and investigation will be necessary to clarify how efficient an expanding pin solution can be compared to existing solutions, like the traditional press fit and shrink fit.
4 Conclusion
Fretting problems and other challenges are sources of concern in most mechanical industries, but also in many other industries that experience vibrations and relative micro-movements between adjacent material surfaces. In addition to continuing the investigations, it is imperative to improve material qualities, palliative methods, test techniques and the way the parts of a joint are connected. New methods, such as the mechanical radial expansion technique, have been on the market for many years, but are possibly not well enough known among researchers. The mechanical radial expansion technique should be applied more in future investigations on how to reduce fretting fatigue and other related issues in interference fitted joints. Its advantages when it comes to installation, retrieval, elimination of the need for heating/cooling, and the possibility to quickly adjust the interference fit level make this solution worth investigating further.
The finite element analysis software tools are improving continuously, and in
combination with increasing processing capacities, new developments in laboratory
equipment, and correct comparison of numerical simulations with analytical results and
laboratory specimen/real life equipment damages, it should be possible to give con-
tinuous feedback to the FE analysis and achieve a continuous improvement of the
results from the FE tools.
References
1. De Pauw, J.: Experimental research on the influence of palliatives in fretting fatigue. Ghent
University, Faculty of Engineering and Architecture, Ghent, Belgium (2016)
2. Jalali, H., Khodaparast, H., Madinei, H., Friswell, M.I.: Stochastic modelling and updating
of a joint contact interface. Mech. Syst. Signal Process. 129, 645–658 (2019)
3. Mignolet, M.P., Song, P., Wang, X.Q.: A stochastic Iwan-type model for joint behavior
variability modeling. J. Sound Vib. 349, 289–298 (2015)
4. Amir, Y., Govindarajan, S., Iyyanar, S.: Bolted joints modeling techniques, analytical,
stochastic and FEA comparison. In: ASME Proceedings of International Mechanical
Engineering Congress and Exposition, (IMECE), Houston, USA (2012)
5. Dukkipati, R.V.: Solving Vibration Analysis Problems Using MATLAB. New Age
International Ltd. (2007)
6. Truman, C.E., Booker, J.D.: Analysis of a shrink-fit failure on a gear hub/shaft assembly.
Eng. Fail. Anal. 14(4), 557–572 (2007)
7. Chakherlou, T.N., Vogwell, J.: The effect of cold expansion on improving the fatigue life of
fastener holes. Eng. Fail. Anal. 10(1), 13–24 (2003)
8. Chakherlou, T.N., Vogwell, J.: A novel method of cold expansion which creates near-
uniform compressive tangential residual stress around a fastener hole. Fatigue Fract. Eng.
Mater. Struct. 27(5), 343–351 (2004)
9. Stefanou, G.: The stochastic finite element method: past, present and future. Comput.
Methods Appl. Mech. Eng. 198(9–12), 1031–1051 (2009)
10. Nikfam, M.R., Zeinoddini, M., Aghebati, F., Arghaei, A.A.: Experimental and XFEM
modelling of high cycle fatigue crack growth in steel welded T-joints. Int. J. Mech. Sci. 153–
154, 178–193 (2019)
11. Zeng, D., Zhang, Y., Lu, L., Lang, Z., Zhu, S.: Fretting wear and fatigue in press-fitted
railway axle: a simulation study of the influence of stress relief groove. Int. J. Fatigue 118,
225–236 (2019)
12. Makino, T., Kato, T., Hirakawa, K.: Review of the fatigue damage tolerance of high-speed
railway axles in Japan. Eng. Fract. Mech. 78(5), 810–825 (2011)
13. Abbas, F., Majzoobi, G.H.: An investigation into the effect of elevated temperatures on
fretting fatigue response under cyclic normal contact loading. Theoret. Appl. Fract. Mech.
93, 144–154 (2018)
14. Gutkin, R., Alfredsson, B.: Growth of fretting fatigue cracks in a shrink-fitted joint subjected
to rotating bending. Eng. Fail. Anal. 15(5), 582–596 (2008)
15. Özel, A., Temiz, Ş., Aydin, M.D., Şen, S.: Stress analysis of shrink-fitted joints for various
fit forms via finite element method. Mater. Des. 26(4), 281–289 (2005)
16. Han, B., Zhang, J.: Interference assembly and fretting wear analysis of hollow shaft. Sci.
World J. 2014, 919518 (2014)
17. Sabelkin, V., Mall, S.: Investigation into relative slip during fretting fatigue under partial slip
contact condition. Fatigue Fract. Eng. Mater. Struct. 28(9), 809–824 (2005)
18. Sun, Y., Hu, W., Shen, F., Meng, Q., Xu, Y.: Numerical simulations of the fatigue damage
evolution at a fastener hole treated by cold expansion or with interference fit pin. Int.
J. Mech. Sci. 107, 188–200 (2016)
19. Mirzajanzadeh, M., Chakherlou, T.N., Vogwell, J.: The effect of interference-fit on fretting
fatigue crack initiation and DK of a single pinned plate in 7075 Al-alloy. Eng. Fract. Mech.
78(6), 1233–1246 (2011)
20. Iyer, K., Hahn, G.T., Bastias, P.C., Rubin, C.A.: Analysis of fretting conditions in pinned
connections. Wear 181–183(Part 2), 524–530 (1995)
21. Hirakawa, K., Kubota, M.: On the fatigue design method for high-speed railway axles. In:
Proceedings of the Institution of Mechanical Engineers, Part F (2001)
22. Luke, M., Burdack, M., Moroz, S., Varfolomeev, I.: Experimental and numerical study on
crack initiation under fretting fatigue loading. Int. J. Fatigue 86, 24–33 (2016)
23. Alfredsson, B.: Fretting fatigue of a shrink-fit pin subjected to rotating bending: experiments
and simulations. Int. J. Fatigue 31(10), 1559–1570 (2009)
24. Kubota, M., Kataoka, S., Kondo, Y.: Effect of stress relief groove on fretting fatigue strength
and index for the selection of optimal groove shape. Int. J. Fatigue 31(3), 439–446 (2009)
25. Song, C., Shen, M.X., Lin, X.F., Liu, D.W., Zhu, M.H.: An investigation on rotatory
bending fretting fatigue damage of railway axles. Fatigue Fract. Eng. Mater. Struct. 37(1),
72–84 (2014)
26. Chakherlou, T.N., Mirzajanzadeh, M., Abazadeh, B., Saeedi, K.: An investigation about
interference fit effect on improving fatigue life of a holed single plate in joints. Eur. J. Mech.
A/Solids 29(4), 675–682 (2010)
27. Chakherlou, T.N., Taghizadeh, H., Aghdam, A.B.: Experimental and numerical comparison
of cold expansion and interference fit methods in improving fatigue life of holed plate in
double shear lap joints. Aerosp. Sci. Technol. 29(1), 351–362 (2013)
28. Ferjaoui, A., Yue, T., Wahab, M.A., Hojjati-Talemi, R.: Prediction of fretting fatigue crack
initiation in double lap bolted joint using continuum damage mechanics. Int. J. Fatigue 73,
66–76 (2015)
29. Chakherlou, T.N., Shakouri, M., Akbari, A., Aghdam, A.B.: Effect of cold expansion and
bolt clamping on fretting fatigue behavior of Al 2024-T3 in double shear lap joints. Eng.
Fail. Anal. 25, 29–41 (2012)
30. Bondura Technology AS homepage. www.bondura.no. Accessed 14 Sept 2019
Research on Magnetic Nanoparticle Transport
and Capture in Impermeable Microvessel
1 Introduction
magnetic field, i.e. the additional magnetic field effect. The analysis results show that as the distance from the magnet to the blood vessel decreases, the magnetic field, magnetic force, and capture rate become larger, and the magnetic particles can be trapped near the vessel wall to be captured. Therefore, in microvessels a few centimeters from the magnetic field, it is feasible to use non-invasive magnetic drug nanoparticles as transport particles to target the diseased tissue.
2 Theoretical Model
When the magnetic drug nanoparticles move in the blood, the viscous resistance of
the blood fluid can be obtained according to the Stokes formula. At the same time, the
reaction force of the magnetic particles on the fluid is equal and opposite [7]:
f = 6π ηf Rp (v − vf)   (1)

where ηf represents the fluid viscosity; Rp the magnetic particle radius; v the magnetic particle velocity; and vf the fluid velocity.
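As a rough numerical illustration of Eq. (1), the sketch below evaluates the Stokes drag on a single nanoparticle; the blood viscosity, particle radius and slip velocity are placeholder values chosen only for the example, not parameters of this study.

```python
import math

def stokes_drag(eta_f, r_p, v_particle, v_fluid):
    """Viscous (Stokes) force per Eq. (1): f = 6*pi*eta_f*R_p*(v - v_f)."""
    return 6.0 * math.pi * eta_f * r_p * (v_particle - v_fluid)

eta_blood = 3.5e-3   # Pa*s, assumed blood viscosity
r_p = 100e-9         # m, assumed particle radius
f = stokes_drag(eta_blood, r_p, v_particle=1.0e-3, v_fluid=0.0)
print(f"Stokes drag ~ {f:.3e} N")   # ~6.6e-12 N for these placeholder values
```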
It is assumed that the distribution of magnetic particles in the microvessels is
isotropic, and the distribution function f ðy; x; v; tÞ is used to describe the distribution
state of the magnetic particles. When the magnetic particle distribution is in equilib-
rium, the influence of the time parameter t on the distribution function can be
neglected, and f ðy; x; v; tÞ is denoted as f ðy; x; vÞ.
The first term in the formula represents the viscous force; the second term repre-
sents the pressure gradient; and the third term represents the viscous resistance of the
magnetic particles to the fluid.
Fay = μ0 Vp f(Ha) Ms² R⁴mag (y + d) / {2[(y + d)² + x²]³}   (4)

Fax = μ0 Vp f(Ha) Ms² R⁴mag x / {2[(y + d)² + x²]³}   (5)
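The sketch below simply evaluates Eqs. (4)–(5) as reconstructed above; every parameter value (magnet radius Rmag, particle volume Vp, the factor f(Ha), saturation magnetization Ms, distance d and the field point) is a placeholder for illustration, not a value taken from this study.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def external_magnetic_force(x, y, d, r_mag, v_p, f_ha, m_s):
    """Axial (Fax) and radial (Fay) force components per Eqs. (4)-(5)
    as reconstructed in the text."""
    denom = 2.0 * ((y + d)**2 + x**2)**3
    common = MU0 * v_p * f_ha * m_s**2 * r_mag**4
    f_ay = common * (y + d) / denom
    f_ax = common * x / denom
    return f_ax, f_ay

# Placeholder values only: 100 nm particle, 5 mm magnet radius, d = 20 mm
v_p = 4.0 / 3.0 * np.pi * (50e-9)**3
fax, fay = external_magnetic_force(x=1e-3, y=0.0, d=20e-3,
                                   r_mag=5e-3, v_p=v_p, f_ha=3.0, m_s=4.5e5)
print(fax, fay)
```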
Then the additional magnetic field strength produced by the magnetic particle located at position rn, evaluated at position r (both position vectors), is [6]:

δHf(r) = [χ a³/(χ + 3)] { 3[Ha(rn)·(r − rn)](r − rn)/|r − rn|⁵ − Ha(rn)/|r − rn|³ }   (6)
Since every two magnetic particles interact, the magnetic field force at the position (y, x) is actually the sum of the additional magnetic forces generated by the magnetic particles at the other N − 1 positions. The specific expressions are:

Ffy(y, x) = ∫ Ffy(y, x; yi, xi) f(yi, xi, v) v² dyi dxi dv   (7)

Ffx(y, x) = ∫ Ffx(y, x; yi, xi) f(yi, xi, v) v² dyi dxi dv   (8)
∂f/∂t + v ∂f/∂r + v ∂f/∂x + (Fa/m) ∂f/∂v + (Ff/m) ∂f/∂v = 0   (9)

where Fa = √(Fay² + Fax²) and Ff = √(Ffy² + Ffx²).
Microvascular inlet and outlet boundary conditions:

4π ∫₀^∞ f(y, x, v, t)|x=0, x=l v² dv = n₀
and the axial position is defined within the length of the applied magnet. The capture rate expression is as follows:

CE = ∫∫∫_capture region f(y, x, v) v² dv dy dx / ∫₀^l ∫₋R^R ∫₀^∞ f(yi, xi, v) v² dv dyi dxi   (10)

where the capture region is bounded radially by yce and axially by xce, which indicate the radial and axial maximum values at which the magnetic particles are captured, respectively.
Fig. 2. Magnetic field components along the axis of a microvessel
Fig. 3. Magnetic force components on a magnetic nanoparticle
As can be seen from Figs. 2 and 3, the extreme value of the radial component Bay of
the magnetic induction occurs at the center position (x/Rmag = 0) of the cylindrical magnet, and the extreme value of the axial component Bax appears at the edge position of the magnet (x/Rmag = 1). The axial component Fax of the external magnetic force is
similar to the axial component Bax of the magnetic induction, but the extreme values are
opposite, which indicates that the external magnetic force is not only related to the
magnitude of the magnetic field, but also related to the magnetic field gradient. The radial
component Fay of the external magnetic field force is the dominant force for capturing the
magnetic particles, and the extreme position appears at the center of the magnet.
Through the control variable method, the change of the external magnetic field
force when the distance between the magnet and the blood vessel is 20 mm, 30 mm
and 40 mm respectively is discussed. The result is shown in Fig. 4:
Fig. 4. External magnetic force when d = 20 mm, 30 mm, 40 mm. (a) Fax; (b) Fay
It can be seen from Fig. 5 that the closer the magnet is to the blood vessel, the
larger the external magnetic force, the greater the attraction to the magnetic particles,
and the better the capturing effect. Therefore, in order to obtain greater magnetic force
and better capture effect, the closer the magnet is to the blood vessel, the better.
Fig. 5. Magnetic nanoparticles distribution along the microvessel. (a) axial direction; (b) radial direction
obtained from the formula (10). The result is shown in Fig. 6. It can be seen from the
trend of the curve that the closer the magnet is to the blood vessel when other con-
ditions are constant, the higher the efficiency of capturing the magnetic particles.
Fig. 6. The influence of distance d between magnet and blood vessel on capture efficiency
4 Conclusion
In this paper, a mathematical model for studying the transport mechanism and capture
efficiency of magnetic drug nanoparticles in non-permeable wall microvessels was
established. Through theoretical analysis and numerical simulation, the conclusions are
as follows:
(1) When other conditions are constant, the smaller the distance from the magnet to
the blood vessel, the greater the magnetic field and magnetic field force, the
stronger the attraction to the magnetic particles, and the higher the capture effi-
ciency of the magnetic particles.
(2) Most of the magnetic particles will be concentrated near the vessel wall after
being captured. Since this model considers non-permeable wall microvessels,
particles arriving at the vessel wall will be bounced back into the blood, so it is
more reasonable to define the radial position to be captured within the interval
near the vessel wall when calculating the magnetic particle capture rate.
(3) When other conditions are constant, the capture efficiency of the magnetic par-
ticles increases as the distance from the magnet to the blood vessel decreases. The
closer the position of the magnet is to the diseased tissue, the higher the capture
rate of the magnetic particles, and the more drug molecules are released, the better
the therapeutic effect.
References
1. Mondal, A., Shit, G.C.: Transport of magneto-nanoparticles during electro-osmotic flow in a
micro-tube in the presence of magnetic field for drug delivery application. J. Magn. Magn.
Mater. 442, 319–328 (2017)
2. Jafarpur, K., Emdad, H., Roohi, R.: A comprehensive study and optimization of magnetic
nanoparticle drug delivery to cancerous tissues via external magnetic field. J. Test. Eval.
47(2), 681–703 (2019)
3. ChiBin, Z., XiaoHui, L., ZhaoMin, W., et al.: Implant-assisted magnetic drug targeting in
permeable microvessels: comparison of two-fluid statistical transport model with experi-
ment. J. Magn. Magn. Mater. 426, 510–517 (2017)
4. Shaw, S.: Mathematical model on magnetic drug targeting in microvessel. Magn. Magn.
Materials 83 (2018)
5. Wang, S., Zhou, Y., Tan, J., et al.: Computational modeling of magnetic nanoparticle
targeting to stent surface under high gradient field. Comput. Mech. 53(3), 403–412 (2014)
6. Mikkelsen, C., Fougt, H.M., Bruus, H.: Theoretical comparison of magnetic and
hydrodynamic interactions between magnetically tagged particles in microfluidic systems.
J. Magn. Magn. Mater. 293(1), 578–583 (2005)
7. Morrison, F.A.: An Introduction to Fluid Mechanics. Cambridge University Press,
Cambridge (2013)
8. Hehl, F.W., Obukhov, Y.N.: Foundations of Classical Electrodynamics: Charge, Flux, and
Metric. Springer, Heidelberg (2012)
9. Jones, T.B.: Electromechanics of Particles. Cambridge University Press, Cambridge (2005)
10. Kazem, S., Rad, J.A., Parand, K.: Radial basis functions methods for solving Fokker-Planck
equation. Eng. Anal. Bound. Elem. 36(2), 181–189 (2012)
11. Cao, Q., Han, X., Li, L.: Numerical analysis of magnetic nanoparticle transport in
microfluidic systems under the influence of permanent magnets. J. Phys. D Appl. Phys.
45(46), 465001 (2012)
Analysis of Drag of Bristle Based on 2-D
Staggered Tube Bank
Abstract. In this paper, a 2-D staggered tube bank of bristle pack is established
to examine the effect of flow on bristle pack, and Gambit is used to generate the
mesh. The resistance of the bristle in the transient condition is analyzed using
simulation with Fluent. The resistance of the brush seal includes friction drag,
pressure drag, and interference drag. Results show that the pressure drag plays
the leading role owing to the shape of the bristle, and pressure drag increases
slowly in the axial direction but increases significantly in the end row. The drag
grows gradually with increasing pressure. The results also show that when the bristle gap decreases, the drag in the front rows increases more slowly but decreases significantly in the end row.
1 Introduction
Brush seal is a contact seal, and it has been widely applied to aero-engine and gas
turbine. Due to its excellent sealing performance, the brush seal can reduce leakage effectively. Compared to a labyrinth seal, the leakage rate of a brush seal is only 5%–10% of that of the labyrinth seal, and the leakage mainly occurs in the small gaps between the bristles. The inhomogeneity of the bristle pack leads to a self-sealing effect. Research on the brush seal develops in two directions, experimental measurement and numerical simulation, and numerical simulation has become an important approach for studying the performance of the brush seal. The staggered tube bank model can be used to analyze the bristles' pressure drop and the flow medium; therefore, it is an important model for analyzing the performance of the brush seal.
Air passing through the bristle pack can be treated as flow around a cylinder. When air passes through the bristle pack, drag is produced. Depending on how the drag is generated, the drag of the brush seal can be divided into friction drag, pressure drag, and interference drag. Because the bristles are blunt bodies, the pressure drag is much greater than the friction drag. Therefore, the main analysis is focused on the pressure drag.
Dai et al. [1] analyze the pressure distribution and the velocity of the flow field in
the compact cross-sectional tube bank. Huang et al. [2] create a 3-D bristles model to
study both the flow and temperature fields. Kang et al. [3] analyze the entry number of
brush seals built on the 2-D staggered tube bank model. Liu et al. [4] analyze the flow
resistance of the brush seal based on a 2-D staggered arrangement of elliptical tube
bank model. Fuchs et al. [5] analyze both the 2-D and 3-D brush seal models without the front and back plates, and they found that when the bristle gap δ = 0.008 mm (that is, ½ of the bristle diameter in the literature), the leakage rate of the 2-D model was the most consistent with the experimental data. In this paper the 2-D staggered tube bank arrangement is used to analyze the effects of the pressure drop and the bristle gap on the bristle drag.
2 Model
SD = ST = d + δ   (1)

SL = (√3/2)(d + δ)   (2)

where ST and SL are the radial spacing and the axial spacing, respectively; SD is the slant spacing; d is the diameter of the bristle, and δ is the gap between bristles.
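The geometric relations (1)–(2) are straightforward to evaluate; the short sketch below computes the three spacings for an assumed bristle diameter and gap (illustrative values, not the paper's).

```python
import math

def staggered_spacing(d, delta):
    """Radial (S_T), slant (S_D) and axial (S_L) spacing of the staggered
    tube bank per Eqs. (1)-(2): S_D = S_T = d + delta, S_L = sqrt(3)/2*(d + delta)."""
    s_t = d + delta
    s_d = s_t
    s_l = math.sqrt(3.0) / 2.0 * (d + delta)
    return s_t, s_d, s_l

# Assumed values: 0.07 mm bristle diameter, 0.008 mm gap
s_t, s_d, s_l = staggered_spacing(d=0.07e-3, delta=0.008e-3)
print(f"S_T = S_D = {s_t*1e3:.3f} mm, S_L = {s_l*1e3:.3f} mm")
```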
3 Mesh Generation
Since the size of the brush seal is at the micron level, the mesh has to be very dense. In
this paper the mesh is generated using Gambit. The zone of the bristle pack is the
important domain in the simulation, so the mesh of this zone requires very high quality.
According to Tan’s test of the grid independence, the node of the cylinder surface is 220
[6]. The bounding layer is four. The mesh of the bristle is the unstructured mesh, and it
must ensure that the mesh between the gap is greater than 2. However, since the zone of
the upstream and the downstream is not the essential computational domain, sparser
structure grid can be used. The final structure of grid used here is presented in Fig. 2.
4 Computing Method
∂ρ/∂t + ∂(ρu)/∂x + ∂(ρv)/∂y = 0   (3)
where p is the gas pressure; ρ is the gas density; u and v are the axial velocity and the radial velocity, respectively; μ is the viscosity of the fluid; CP is the specific heat; k is the heat transfer coefficient of the fluid; T is the temperature, and grad stands for the gradient.
The gas medium is an ideal-gas, and satisfies the ideal gas state equation,
p = ρRT   (7)
Re = ρud/μ = 2085   (8)
where Re is the Reynolds number. The flow through the bristle pack is treated as unsteady flow around a circular cylinder. This state belongs to the subcritical range, 300 < Re < 3 × 10^5. The boundary layer is laminar, but flow separation can occur around the cylinder and a Karman vortex street can be formed; these phenomena influence the downstream flow [7]. Therefore, the standard k-ε model is selected in this paper.
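A brief sketch of the Reynolds-number check described above follows; the density, velocity and viscosity are placeholder values (the paper itself reports Re ≈ 2085 for its operating condition).

```python
def reynolds_number(rho, u, d, mu):
    """Re = rho*u*d/mu, Eq. (8), based on the bristle diameter d."""
    return rho * u * d / mu

def cylinder_flow_regime(re):
    """Classify cylinder cross-flow; 300 < Re < 3e5 is the subcritical range
    (laminar boundary layer with separation and a Karman vortex street)."""
    if re < 300:
        return "below the subcritical range"
    if re < 3e5:
        return "subcritical"
    return "critical or supercritical"

re = reynolds_number(rho=2.4, u=125.0, d=0.1e-3, mu=1.8e-5)  # assumed values
print(f"Re = {re:.0f} -> {cylinder_flow_regime(re)}")
```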
4.3 Solver
In this paper, the standard software Fluent is used for conducting the simulation, and
the two-dimensional pressure-based solver is chosen. As discussed above, the working
medium is an ideal gas. Furthermore, the standard k-ε model is selected, with the viscosity following the Sutherland formula. Since the Karman vortex street influences the flow
condition of the downstream, the analysis uses the transient state.
Fluent is based on the finite volume method. The staggered mesh calculates and
stores the pressure and velocity components on the nodes in different mesh systems.
For the 2-D simulation model, the p, u, and v are stored in three different mesh systems.
The calculation first uses the SIMPLE scheme for the steady solution, and the result is the initial value for the transient simulation. Since PISO is superior to both SIMPLE and SIMPLEC in the transient state, it is adopted for the transient simulation, and the time step is 10^-4. The reference area is the frontal area, based on the diameter of the bristle.
The upstream boundary is a pressure inlet, with values of 0.201325 MPa, 0.301325 MPa, 0.401325 MPa, 0.501325 MPa or 0.601325 MPa. The downstream boundary is a pressure outlet with a value of 0.101325 MPa. The top and bottom margins are symmetric boundary conditions. The surface of the bristle is a no-slip wall.
When there is a relative velocity between a body and the fluid, a force from the fluid acts on the body. The drag is the component of this force parallel to the direction of the relative velocity.
Pressure drag depends on the pressure difference between the front and back of the body. Usually, unless the value of Re is very low, the separation phenomenon is inevitable in the flow around a pure bluff body; therefore, the pressure drag plays the leading role in the drag. The pressure drag can be calculated by the following equation,

FP = ∫A p cosθ dA   (9)

where p is the pressure acting upon the area element dA; FP is the pressure drag; θ is the angle between the normal of dA and the flow direction, as shown in Fig. 3.
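To make the use of Eq. (9) concrete, the sketch below integrates a sampled surface-pressure distribution around one bristle to obtain the pressure drag per unit length. The pressure samples would normally be extracted from the Fluent solution; the analytic distribution used here is only a placeholder.

```python
import numpy as np

def pressure_drag_per_length(p_theta, radius):
    """Evaluate Eq. (9), F_P = ∮ p*cos(theta) dA, over a cylinder surface.
    theta is measured from the upstream (flow-facing) stagnation point, and
    p_theta holds pressure samples (Pa) at equally spaced angles over 0..2*pi.
    Returns the pressure drag per unit bristle length (N/m)."""
    n = len(p_theta)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    d_theta = 2.0 * np.pi / n
    return float(np.sum(p_theta * np.cos(theta) * radius * d_theta))

# Placeholder distribution: higher pressure on the windward side, lower in the wake
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
p = 1.5e5 + 0.5e5 * np.cos(theta)                   # Pa, illustrative only
print(pressure_drag_per_length(p, radius=0.05e-3))  # ~ pressure amplitude * pi * R
```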
6 Discussion
The effect of Δp on the drag is shown in Fig. 4. The drag increases with Δp and with the number of rows. When Δp is higher than 0.2 MPa, the drag in the end row increases significantly. The reason might be that the bristles are blunt bodies, and the pressure drag plays a leading role in the drag of the bristles [8]. The pressure drop at the end row of the bristles is higher than in the other rows, as illustrated in Fig. 5, so the drag in the end row increases dramatically; in the end row the drag reflects the pressure drop. The decline of pressure leads to stress concentration and friction between the back plate and the bristles. A multistage brush seal can be adopted to decrease the pressure in the end row.
Fig. 4. Drag (N) of each bristle row for Δp = 0.1–0.5 MPa, where Δp is the pressure difference between upstream and downstream
Fig. 5. Pressure distribution along axial gap of bristle under pressure differentials
Fig. 6. The drags in the front rows (A) and the end row (B)
When Δp is lower than 0.3 MPa, the resistance reduces with decreasing δ. However, the rate of rise increases when δ decreases, as shown in Fig. 6(A). Figure 6(B) shows that the drag in the end row can be cut down by reducing δ, since a small δ increases the turbulence intensity. The uniformity of the pressure distribution improves with decreasing δ [10]. The pressure in the front rows increases only slightly, and the turbulent flow reduces the pressure drag at a low Δp [9]. When Δp increases, the pressure in the front rows increases noticeably and the effect of the turbulent flow weakens. Therefore, the increase of the drag in the front rows is more observable with reducing δ.
7 Conclusion
In this paper, the drag of bristle based on 2-D staggered tube bank is analyzed using
numerical simulation with Fluent. From the results, the following conclusions can be drawn:
(1) The drag increases when the pressure difference increases. Due to the shape of the bristles, the reduction of the pressure drop is reflected by the drag.
(2) The reduction of the gap between bristles can decrease the drag in the end row, while the drag increase in the front rows is not significant. Due to the reduction of the gap, the rate of increase is quicker with increasing pressure difference.
Acknowledgement. The current research has been supported by National Natural Science
Foundation of China (grant no. 51765024).
References
1. Dai, W., Liu, Y.: Numerical simulation of fluid flow across compact staggered tube array.
Manuf. Autom. 33(2), 107–110 (2011). (in Chinese)
2. Huang, S., Suo, S.F., Li, Y., et al.: Flows in brush seals based on a 2-D staggered tube
bundle model. J. Tsinghua Univ. (Sci. Technol.) 56(2), 160–166 (2016). (in Chinese)
3. Kang, Y., Liu, M., Kao-Walter, S., et al.: Predicting aerodynamic resistance of brush seals
using computational fluid dynamics and a 2-D tube banks model. Tribol. Int. 126, 9–15 (2018)
4. Liu, J., Liu, M., Kang, Y., et al.: Numerical simulation of the flow field characteristic in a
two-dimensional brush seal model (2018)
5. Fuchs, A., Oskar, J.: Numerical investigation on the leakage of brush seals. In: Proceedings
of Montreal 2018 Global Power and Propulsion Forum (2018)
6. Tan, Y., Liu, M., Kang, Y., et al.: Investigation of brush seal vortex separation point based
on 2-D staggered tube bundle mode. J. Drain. Irrigat. Mach. Eng. 35(7), 602–608 (2017). (in
Chinese)
7. Jiang, K., Zhang, D., Qi, Y., et al.: Study on the characteristics of flow around cylinder at
subcritical reynolds number. Ocean Eng. Equip. Technol. 4(1), 37–42 (2017). (in Chinese)
8. JSME, Zhu, B., et al.: Fluid Mechanics. Peking University Press (2013). (in Chinese)
9. Shigenao, M., Wang, S., et al.: Heat Transfer. Peking University Press (2011). (in Chinese)
10. Kang, Y., Liu, M., et al.: Numerical simulation of pressure distribution in bristle seal for
turbomachinery. J. Drain. Irrigat. Mach. Eng. 36(5), 58–63 (2018). (in Chinese)
Design of Large Tonnage Lift Guide Bracket
Abstract. Due to the great improvement of industrial automation level and the
rapid growth of economy, the demand for large tonnage lift is increasing day by
day. Large tonnage lifts usually have carrying capacity up to thousands of tons,
some even tens of thousands of tons. As one of the main components of fixing
guide rail, the car guide rail bracket transfers the force of the whole lift to the
shaft wall, which is very important for the deformation of the whole guide rail
system. The stiffness and strength of the car guide rail bracket have a certain
impact on the stability of the lift, and its size determines the cost of lift production, so it is necessary to save material while ensuring safe and reliable operation. The dimensions and structure of the guide bracket are designed; then the structural dimensions of the product are changed through the study of the optimization design method. Finite element analysis is used to check whether the guide bracket meets the standard requirements, the feasibility of the design scheme is verified by calculation and software simulation, and the comprehensive performance is improved while the material is reduced at the same time.
1 Introduction
In the rapid development of modern production environment, large tonnage lifts need
to be used in more and more occasions. In other words, it is the continuous devel-
opment of large tonnage lifts that gives the possibility of the development of modern
heavy industry. The rated load of large tonnage lift is very large, so the maximum
effective area of the corresponding car is large, which can meet the needs of most
factories with special cargo. The lift guide rail bracket is mainly composed of the main
parts of the bracket, the guide rail clamp plate and the fastening elements. A guide
bracket can only use the same fastening element, usually using fastening bolts as the
fastening element of the guide bracket. The guide rail bracket fixes the guide rail on the
wall of the shaft, thus transferring the force of the whole lift to the wall. In addition, the
guide rail bracket can also make the adjacent guide rail installed according to the set
distance, so as to provide sufficient strength support for the fixed guide rail [1]. In the
process of lift design, production, manufacture and installation, people often concen-
trate most of their attention on the guide rail, thus ignoring the guide rail bracket, with the result that the finally produced guide rail bracket cannot meet the national standards, or even leads to potential safety hazards. Whether a suitable guide bracket is chosen affects the whole guide system, sometimes threatening the safety and reliability of the lift and making the lift unable to run smoothly [2].
According to different uses, guide brackets can be divided into the following three
categories: car guide bracket, counterweight guide bracket, car and counterweight
sharing guide bracket. According to different structural forms, guide brackets can be
divided into two categories: integral structure and combined structure. According to
different shapes, guide brackets can be divided into the following three categories:
mountain guide bracket, angle guide bracket and frame guide bracket. The guide rail of
the lift is mainly fixed by the guide rail bracket, so the strength of the guide rail bracket
plays a decisive role in the safe and reliable operation of the lift. In the actual installation process, the guide bracket of the lift car may follow the guide rail and cause unexpected torsion and synchronous rotation of the guide plate relative to the guide rail; at the same time, the traditional guide plate can no longer adapt to the movement caused by the adjustment and calibration of the guide. Therefore, it is necessary to design a different kind of rail bracket, so that it can
not only adapt to different specifications and sizes of the rail installation, but also
facilitate the dynamic fixing of the rail in the installation process [3]. We are paying too
much attention to the selection and design of the guide rail, but not to the bracket of the
guide rail. Once the guide bracket of the lift is unable to provide sufficient support, the
quality of the lift will not meet the national standards, resulting in complaints from
customers, loss of potential market, and even security risks. As for the design of guide
bracket, most manufacturers choose the steel frame structure according to the char-
acteristics of the rigid frame structure itself. First of all, steel has high strength, high
density and good mechanical properties. Secondly, steel is an ideal elastic material with
uniform quality. At the same time, the quality of steel products should be strictly
controlled in production, and the stress situation in the project should be most con-
sistent with the mechanical calculation results, so that the running reliability can be
high [4]. The strength of lift guide bracket directly affects the stability of operation, and
its number is determined by the height of the floor. The higher the floor, the larger the
number, so the cost must be considered. In order to achieve a balance between the two, the structure needs to be optimized and allocated reasonably.
GB 7588-2003 10.1.2.2 maximum allowable deformation of “T” type guide: (a) For the
car (counterweight) guide rail with safeties, when the safeties operate, the two direc-
tions are 5 mm; (b) For the car (counterweight) guide rail without safeties, the two
directions are 10 mm. The allowable deformation of guide bracket can adopt the
allowable deformation of guide, that is, when the safety is in action, it is 5 mm in both
directions. Each guide shall have no less than two guide brackets with spacing less than
or equal to 2.5 m. Under special circumstances, measures shall be taken to ensure that
the installation of the guide meets the bending strength requirements specified in GB
7588-2003 [5]. The car guide rail bracket belongs to statically indeterminate steel
frame, which can be calculated by force method. Force method requires that the
original statically indeterminate steel frame be transformed into a statically determinate base (primary structure), and that additional unknown forces be added to the statically determinate base [6]. Then the displacements or deformations of the statically determinate base at the redundant constraints under the action of the loads and unknown forces are listed [7]. In addition,
their constraints, i.e. the compatibility conditions of deformation, can meet the con-
ditions of the original statically indeterminate steel frame. The complementary equa-
tions including load and unknown forces are transformed into the deformation
compatibility conditions by physical equation. Then the unknown forces can be
obtained by solving these equations, and the other unknown forces can be obtained by
static equilibrium equation [3]. Using the formula of simple statically indeterminate
rigid frame in “mechanical design manual” [8], the bending moments of guide brackets
under the action of Fy and Fx are calculated respectively by superposition method, and
then superimposed and summarized. Figure 1 shows the type P car guide bracket.
Fig. 1. (a) Structural drawing of car guide bracket (b) Fy calculation (c) Fx calculation
k = I2 H/(I1 L) = H/L,   N1 = k + 2 = H/L + 2,   N2 = 6k + 1 = 6H/L + 1

With the guidance force Fy applied at the mid-span of the crossbeam (P = Fy, a = b = L/2), the rigid frame formulas give the corner bending moments:

MA = MD = Fy L²/[8(H + 2L)]

MB = MC = Fy L²/[4(H + 2L)]
From the composite bending moment diagram of the guide rail bracket, it can be seen that the maximum bending moment of the car guide rail bracket occurs at C:

MC = Fy L²/[4(H + 2L)] − 3Fx H/[2(6H + L)]
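As a numerical check of the reconstructed Fy-load-case expressions above, the sketch below evaluates the corner moments using the preliminary bracket dimensions quoted later in the paper (L = 500 mm, H = 1000 mm) and the guidance force Fy = 2959.6 N from Eq. (2); it illustrates the formulas rather than reproducing the paper's own worked figures.

```python
def corner_moments_fy(fy, h, l):
    """Corner bending moments of the guide bracket frame under the guidance
    force Fy applied at mid-span (as reconstructed in the text):
    M_A = M_D = Fy*L^2 / (8*(H + 2L)),  M_B = M_C = Fy*L^2 / (4*(H + 2L))."""
    m_ad = fy * l**2 / (8.0 * (h + 2.0 * l))
    m_bc = fy * l**2 / (4.0 * (h + 2.0 * l))
    return m_ad, m_bc

# Preliminary design: bracket 500 mm long, 1000 mm high; Fy from Eq. (2)
m_ad, m_bc = corner_moments_fy(fy=2959.6, h=1.0, l=0.5)
print(f"M_A = M_D = {m_ad:.1f} N*m,  M_B = M_C = {m_bc:.1f} N*m")
```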
The car guide bracket belongs to the three-time statically indeterminate rigid frame,
on which the guidance force of the guide rail Fx and Fy acts. The formulas for cal-
culating the bending moment of car guide bracket based on the rigid frame calculating
formulas in “mechanical design manual” [8] have been derived, and the load-bending
moment diagram MFx and MFy of Fx and Fy can be drawn separately on the basis of the
formulas for calculating the bending moment of the car guide. In order to obtain the
displacement of the guide support under the action of Fx and Fy , a unit force is applied
at the statically determinate base action points of Fx and Fy respectively, and the unit
moment diagrams MPFx and MPFy of the unit force acting on the statically determinate
base are drawn at the same time. In addition, a statically indeterminate rigid frame has many statically determinate bases to choose from. If a statically determinate base that simplifies the multiplication of the unit moment diagram and the load moment diagram is selected, the calculation can be simplified.
Calculating the displacement of the guide bracket under Fy action
The load bending moment diagram and unit bending moment diagram of the car
guide bracket under the guidance force Fy are shown in Fig. 2. When calculating the displacement in the direction of Fy action, the area of the unit bending moment diagram MPFy is multiplied by the corresponding centroid position coordinates in the load bending moment diagram MFy by the diagram multiplication method [6].
Because the area of the unit bending moment diagram and the corresponding
centroid position coordinate in the load bending moment diagram are located on the
same side of the baseline, the displacement is positive, that is, the direction of action is
the same as that of Fy .
Calculating the displacement of the guide bracket under Fx action
The load bending moment diagram and unit bending moment diagram of the car
guide bracket under the guidance force Fx are shown in Fig. 3. Similarly, when cal-
culating the displacement in the direction of Fx action, the area of the unit bending
moment diagram MPFx is multiplied by the corresponding centroid position coordinates
in the load bending moment diagram MFx by the diagram multiplication method [6].
In this paper, a car guide rail bracket is designed for a large tonnage lift with load of
9500 kg. The crossbeam and side beams of the bracket are made of 80*80*8 angle
steel, E = 2.06 × 10^11 Pa, I = 11.21 × 10^-8 m^4, W = 3.13 × 10^-6 m^3, σb = 370 MPa,
Q = 9500 kg, P = 7000 kg.
Calculating the action force of the guidance force on the Y axis with safety activated:

Fx = k1 gn (Q xq + P xp)/(n h) = 2.0 × 9.8 × (9500 × 1.4/8 + 7000 × 1.4/10)/(2 × 20) = 1414.88 N   (1)
Calculating the action force of the guidance force on the X axis with safety activated:

Fy = k1 gn (Q yq + P yp)/(n h/2) = 2.0 × 9.8 × (9500 × 1.6/8 + 7000 × 1.6/10)/(2 × 20/2) = 2959.6 N   (2)
σ1 = MC/W = 419/(3.13 × 10^-6) Pa = 133.87 MPa < 165 MPa   (4)
In conclusion, the preliminary design size of the type P guide rail bracket, that is, 500 mm long and 1000 mm high, can meet the requirements of a large tonnage lift with a load of 9500 kg.
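A short sketch reproducing the numbers used in the check above: it evaluates the guidance force of Eq. (2) and the bending stress check of Eq. (4) with the values reported in the text (MC = 419 N·m, W = 3.13 × 10⁻⁶ m³, allowable stress 165 MPa).

```python
def guidance_force(k1, gn, q, yq, p, yp, n, h):
    """Guidance force per Eq. (2): Fy = k1*gn*(Q*yq + P*yp) / (n*h/2)."""
    return k1 * gn * (q * yq + p * yp) / (n * h / 2.0)

def bending_stress_check(m_c, w, sigma_allow):
    """Bending stress check per Eq. (4): sigma = M_C / W must stay below the
    allowable stress."""
    sigma = m_c / w
    return sigma, sigma <= sigma_allow

fy = guidance_force(k1=2.0, gn=9.8, q=9500, yq=1.6 / 8, p=7000, yp=1.6 / 10, n=2, h=20)
sigma, ok = bending_stress_check(m_c=419.0, w=3.13e-6, sigma_allow=165e6)
print(f"Fy = {fy:.1f} N")                         # 2959.6 N, as in Eq. (2)
print(f"sigma = {sigma/1e6:.2f} MPa, ok = {ok}")  # 133.87 MPa < 165 MPa
```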
In order to further optimize the designed car guide rail bracket, considering saving
material and space, the original design of guide rail bracket was changed into 350 mm
long and 1000 mm high (in order to ensure sufficient strength, the size involved could
not be reduced too much), and a reinforcing bar (300 mm long) was added to each of
the two bracket feet. The material is 80 × 80 × 8 angle steel, as shown in Fig. 4 below.
According to the metal structure of the lift guide rail bracket, the original wire frame model is established using Pro/E software. A beam idealization is created with angle steel of section 80 × 80 × 8, and steel is defined as the guide bracket material. According to the design calculation above, the load on the Y axis of the guide bracket is 1414.88 N and the load on the X axis is 2959.6 N. According to the actual structure, the lower support angle steel of the guide rail bracket is fixed (in practice it is fixed to the civil work) [9, 10], and rotation and translation constraints of the supporting angle steel are defined in the X, Y and Z directions. All loads are then applied to the guide bracket according to its load diagram. Finally, the fully defined model of the guide bracket is established as shown in Fig. 5.
The deformation of the metal structure of the guide rail bracket after loading is
shown in the following Fig. 6. The maximum displacement of the bracket is 0.19 mm
and arises on the upper bracket plate on the side.
The stress of the metal structure of the lift guide bracket after loading is shown in
Fig. 7 below. The figure shows that the maximum stress is 31.84 MPa, which occurs at
the contact point between the bracket foot and the bracket plate.
5 Conclusions
The basic dimensions of the guide rail bracket are preliminarily determined by referring
to the design calculation book. The finite element analysis of the newly designed guide
bracket is carried out by using Pro/E software. The displacement, stress, static stiffness
and static strength are checked. The optimized guide bracket is not only more stable,
but also conforms to the current development trend of green economy and environ-
mental protection. The design of guide bracket is optimized, which not only improves
the stability of the product, but also saves material and installation space. By adding
reinforcing ribs, the original length and height are shortened, and the improved design is safer, more stable and greener.
References
1. Koe, S., Imrak, C.E., Targit, S.: Design parameters and stress analysis of elevator guide rail
brackets. China Elevat. 12–15 (2012)
2. Shi, L., Le, S., Lu, Q.: Elevator guide selection, design and analysis of guide bracket
spacing. Technol. Innov. Appl. 14–15 (2015)
3. Chen, H.: A calculation example of box beam rail bracket of elevator. China Elevat. 35–41
(2013)
4. GB/T7025.2-2008 Lifts-Main specifications and the dimensions arrangements for its cars,
wells and machine rooms- Part 2: Lifts of class IV
5. GB 7588-2003 Lift manufacture and installation safety norm
6. Liu, H.: Material Strength, 5th edn. Higher Education Press, Beijing (2010)
7. Jiang, X.: Design of marine elevator car frame. In: International Workshop of Advanced
Manufacturing and Automation. Lecture Notes in Electrical Engineering, pp. 583–591.
Springer, Singapore (2018)
8. Wen, B.: Mechanical Design Manual. Machinery Industry Press, Beijing (2010)
9. http://www.chinaelevator.org. Accessed 21 May 2019
10. GB/T10060-2011 Lift installation and acceptance specification
Stress Analysis of Hoist Auto Lift Car Frame
Abstract. Compared with the traditional auto ramp, the auto lift can save 80%
of the building area and more than double the car turnover efficiency. The auto lift referred to here is a non-commercial auto lift for carrying passenger autos, one of the cargo lifts in the catalogue of special equipment in GB 7024-2008. The auto lift shaft structure should be strong enough to bear not only the load of an ordinary lift but also the load produced by the loading and unloading of autos. The size and structural type of the car are specially designed to facilitate the access of autos, and the selected frame beam system is checked, mainly on whether the bending stress stays within the maximum allowable stress of the selected material. Finally, a three-dimensional model of part of the structure is established and analyzed by the finite element method, including analysis of the front wheel of the auto entering the car, analysis of the auto fully inside the car, analysis of the car frame compressing the car buffer, and stress analysis of the main components.
1 Introduction
As we all know, lifts can be divided by use into passenger lift, cargo lift, passenger lift, bed lift, residential lift, dumbwaiter lift, marine lift, sightseeing lift and auto lift; by running speed into low speed, medium speed, high speed and ultra-high speed lift; by driving method into direct current, alternating current, hydraulic, pinion and rack, spiral and linear motor driven lift; into machine room lift and machine-room-less lift; and by traction structure into geared and gearless tractor lift [1]. The development of human beings is in an infinite
process, while the development of cities is in a limited space. The three-dimensional of
urban architecture and the three-dimensional of urban traffic correspond to each other.
In order to solve the three-dimensional traffic problem in the city, the auto lift has come
to reality from the dream. Auto lift is a special lift to solve the problem of auto vertical
transportation. It has been widely used in 4S stores, repair workshops and large
shopping malls with parking lot on the building roof. Its standard load is 3000 kg,
5000 kg. The speed is 0.25 m/s and 0.5 m/s [2, 3].
The sheave traction ratio, also called the rope ratio, refers to the ratio of the circumferential speed of the traction sheave, namely the rope moving speed, to the vertical lifting speed of the car. When the movable sheave weight, the sliding friction between the wire rope and sheave, and the friction inside the sheave are ignored, the input and output power should be the same in the ideal case [4]; since the lifting weight is far greater than the weight of the movable sheave on a large tonnage elevator, the labor-saving ratio can be approximately regarded as inversely proportional to the traction ratio. In addition, the winding method of the steel wire rope is closely related to the
elevator machine room type. The rated load of the lift car frame is 3200 kg, the rated
operating speed is 0.5 m/s, and the guide rail type is the symmetrical arrangement of
two guide rails.
Referring to GB/T 7025.2-2008 [5], it can be seen that the car width is the same as
the length of the car inlet and outlet, which is very helpful for the entry and exit of the
auto and the handling of the parts. As shown in Fig. 1, 1 represents the safety device,
b1 = b2 represents the same width and length of the inlet and outlet of the car, and d1
represents the depth of the car.
The dimensions of actual elevator cars are taken as a reference for the size and type to be designed. The height of the car and the height of the entrance are both 2500 mm. According to the design requirement of minimum shaft size, the schematic diagram of the auto elevator car frame is obtained in Fig. 2.
The car is composed of a car frame, a car bottom tray, car walls and a car roof guardrail device. The car frame is the supporting structure of the whole elevator: it bears the dead weight P of the car system (including the travelling cable) and the rated design load Q, and it is the direct guarantee of elevator safety. Because the car frame supports its own weight and the various loads, materials with good rigidity and strength must be chosen. In general, 1.2–1.5 mm thick steel plate is press-formed into channel sections and combined to form the car roof, and the car wall material is roughly the same as that of the car roof. The car bottom generally consists of a frame and a bottom plate; to keep the car bottom stable, the frame is usually welded from channel steel and angle steel, with a 3–6 mm thick steel plate (hairline steel bottom) added on top. Generally, the car frame is made of section steel or cold-roll-formed steel plate, connected by bolts or welded into a statically indeterminate rigid beam frame. Channel steel 25a is the first choice; according to the "Non-standard Mechanical Design Manual" [6], its theoretical mass is 27.410 kg/m. The upper beam length is L = car clear width + 2 × straight beam leg width + stiffening plate thicknesses = 2500 + 112 × 2 + 8 + 24 = 2756 mm, so

$$M_{ubeam} = 2 \times 2756 \times 10^{-3}\,\mathrm{m} \times 27.410\,\mathrm{kg/m} = 151.084\,\mathrm{kg}$$
The vertical beams use 22b I-steel, whose theoretical mass is 36.524 kg/m [7]. The vertical beam length is L = car height + car bottom thickness + bottom beam height + upper beam height + return sheave radius + car top thickness + car floor thickness = 2500 + 120 + 200 + 250 + 520/2 + 8 + margin = 3394 mm, therefore

$$M_{vbeam} = 2 \times \left(3394 \times 10^{-3}\,\mathrm{m} \times 36.524\,\mathrm{kg/m}\right) = 247.925\,\mathrm{kg}$$

The material is generally Q345 (16Mn), a low-alloy high-strength steel; for some lightly loaded car frames, low-carbon steel Q235 is also chosen. The material of the lower beam is the same as that of the upper beam. The length of the lower beam of the car frame then gives

$$M_{lbeam} = 2 \times 2508 \times 10^{-3}\,\mathrm{m} \times 22.637\,\mathrm{kg/m} = 113.547\,\mathrm{kg}$$
In this design the car frame is centre-guided on two rails and the car depth is 3170 mm. To keep the car bottom balanced, the lower beam is designed as a welded assembly consisting of 4 transverse beams and 2 longitudinal beams; the longitudinal beam length is 3170/2 = 1585 mm, so
$$M_{bracket} = 2 \times 6\,\mathrm{m} \times 12.059\,\mathrm{kg/m} + 7 \times 2.5\,\mathrm{m} \times 12.059\,\mathrm{kg/m} + 445.9302\,\mathrm{kg} = 892.113\,\mathrm{kg}$$
The mass of the remaining attachments is 1072.4348 kg, so the total car weight is P = 2953.567 kg.
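A minimal Python sketch of the beam-mass tally described above (linear density × length, doubled for the paired members); the helper name and rounding are illustrative, not part of the original design calculation:

```python
# Hypothetical tally of car-frame beam masses (kg), following the values quoted above.
def beam_mass(length_mm: float, linear_density_kg_per_m: float, count: int = 2) -> float:
    """Mass of `count` identical beams of the given length and linear density."""
    return count * (length_mm / 1000.0) * linear_density_kg_per_m

m_upper = beam_mass(2756, 27.410)      # channel steel 25a upper beams
m_vertical = beam_mass(3394, 36.524)   # 22b I-steel vertical beams
m_lower = beam_mass(2508, 22.637)      # lower beams
print(round(m_upper, 3), round(m_vertical, 3), round(m_lower, 3))
```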
Upper beam check: when the lift starts, the acceleration acts on the wire rope and produces a traction force K:

$$K = \frac{(P + Q + M_{Trav})(g_n + a)}{r} + M_{SRcav}\,(g_n + r \cdot a)$$

where $M_{Trav}$ is the travelling cable weight and $M_{SRcav}$ is the suspension wire rope weight on the car side. F1 and F2 are the support reactions, $F_1 = F_2 = \frac{qL}{2} = 30183.2408\,\mathrm{N}$, and

$$M_{max} = -\frac{q}{2}\left(\frac{L}{2}\right)^2 + \frac{qL}{2}\cdot\frac{L}{2} = \frac{qL^2}{8}$$
K1 is the progressive safety gear impact factor; according to the lift standard compilation and GB 7588-2003 [2], K1 = 2.0. The lower beam is channel steel 20a with bending section modulus $W = 178 \times 10^{-6}\,\mathrm{m^3}$, so the bending stress is [7]:

$$\sigma_1 = \frac{M_{max}}{2W} = \frac{37894.7906\,\mathrm{N\cdot m}}{2 \times 178 \times 10^{-6}\,\mathrm{m^3}} = 106.446\,\mathrm{MPa}$$

$$n = \frac{\sigma_b}{\sigma_1} = \frac{470\,\mathrm{MPa}}{106.446\,\mathrm{MPa}} = 4.415 > 2.0$$

so the lower beam meets the safety requirement.
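A minimal Python sketch of this bending check with the values quoted above; the function and variable names are illustrative:

```python
# Bending stress in the paired lower beams and the resulting safety factor.
def bending_check(m_max_nm: float, w_m3: float, sigma_b_mpa: float, beams: int = 2):
    sigma_mpa = m_max_nm / (beams * w_m3) / 1e6   # bending stress in MPa
    return sigma_mpa, sigma_b_mpa / sigma_mpa     # stress and safety factor n

sigma, n = bending_check(37894.7906, 178e-6, 470.0)
print(f"sigma = {sigma:.3f} MPa, n = {n:.3f}")    # ~106.446 MPa, n ~ 4.415 > 2.0
```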
To ensure sufficient section area and bending section modulus, 22b I-steel is chosen for the vertical beams, with $A = 46.528 \times 10^{-4}\,\mathrm{m^2}$. The tensile force and stress of the vertical beams under normal conditions are

$$F = \frac{(P + Q)g}{2} = 30183.2461\,\mathrm{N}, \qquad \sigma_1 = \frac{F}{A} = \frac{30183.2461\,\mathrm{N}}{46.528 \times 10^{-4}\,\mathrm{m^2}} = 6.487\,\mathrm{MPa}$$

$$M_{Trav} = 0.5\,H q_t = 0.5 \times 18 \times 1.56 = 14.04\,\mathrm{kg}, \qquad F_m = \frac{0.5\,g\,M_{Trav}}{h} = 20.27\,\mathrm{N}$$

With the allowable stress of the I-steel increased to 520 MPa, n = 520 MPa / 64.267 MPa = 8.091 > 7, so the safety requirement is met.
The return sheave shaft is a symmetrical structure; its maximum diameter corresponds to the return sheave hub, and two tapered roller bearings are located by shaft shoulders. The bearing model is C2914, which can bear a certain axial load when the return sheave moves axially. Because the return sheave axle is a mandrel, it bears a bending moment when the return sheave rotates [9]:

$$F_1 = F_2 = F_3 = F_4 = \frac{(P + Q)\,g}{4} = 15091.6231\,\mathrm{N}$$

$$W = \frac{\pi d^3}{32} \approx 0.1 d^3 = 51.2 \times 10^{-6}\,\mathrm{m^3}, \qquad \sigma_{ca} = \frac{1260.15053\,\mathrm{N\cdot m}}{51.2 \times 10^{-6}\,\mathrm{m^3}} = 24.612\,\mathrm{MPa}$$

According to "Mechanical Design" [10], material 20Cr is selected with $[\sigma_{-1}] = 60\,\mathrm{MPa}$; since $\sigma_{ca} \le [\sigma_{-1}]$, the return sheave axle meets the strength requirement.
The auto car frame is the load-bearing component and must have sufficient strength, so a mechanical calculation is carried out to ensure that the car frame has sufficient strength and safety coefficient. The car of an auto lift is usually relatively large, so the structure of the load-carrying car frame is more complex than that of a lift with equal rated load. A simplified beam model is difficult to use for checking the forces and would give a large calculation error; therefore, the car frame is usually checked by the finite element method.
The arrangement contains four guides for guiding and a 4:1 traction structure. The main structure is shown in Fig. 3. The main structural components are the upper beam, lower beam, upper crossbeam, lower crossbeam, door machine beam, straight beams, upper crossbeam oblique braces, upper beam oblique braces, pull rods and the car bottom (a welded composition of the car bottom beams and the car bottom plate). In the finite element software ANSYS, the elements are selected as follows: the pull rods as 3D link element LINK180, the car bottom plate as 3D shell element SHELL181, and the other beams as 3D beam element BEAM189 (with the beam section shapes defined).
In the external load of the car frame, the weight of the car walls and car roof is 1100 kg, and the contact area of each auto wheel on the car bottom plate is 200 mm × 200 mm. When loading on the car bottom plate, the distance between the left and right auto wheels is 1600 mm and the distance between the front and rear auto wheels is 2500 mm. The stresses of the main components of the car frame are as follows (Figs. 4, 5, 6, 7 and 8):
Fig. 4. Constraints and loads when auto front wheel just entering the car
Fig. 5. Stress of pull rod when auto front wheel just entering the car
Fig. 6. Stress of main components when auto front wheel just entering the car
Fig. 7. Stress of pull rod when auto entered into the car
Fig. 8. Stress of main components when auto entered into the car
Table 1 gives the maximum stress of the main components of the car frame when the auto front wheels are just entering the car and when the auto has fully entered the car. From the maximum stress and the material of each beam, the safety factor of the main components under these working conditions is obtained. The beams of the car frame meet the requirements of these working conditions.
Table 1. Stress on main components when the auto front wheel is just entering the car (1) and when the auto has entered the car (2)

Main components  | Material | Maximal stress 1 | Safety factor 1 | Maximal stress 2 | Safety factor 2
Pull rod         | Q235     | 18 MPa           | 17.1            | 31.7 MPa         | 9.7
Upright beam     | Q345     | 23.9 MPa         | 17.8            | 33.1 MPa         | 12.8
Front upper beam | Q345     | 25.3 MPa         | 16.8            | 57.7 MPa         | 7.4
Rear upper beam  | Q345     | 4.41 MPa         | 96.4            | 6 MPa            | 70.8
Front lower beam | Q345     | 1.56 MPa         | 269             | 15.7 MPa         | 27
Rear lower beam  | Q345     | 0.78 MPa         | 544             | 3.4 MPa          | 125
Upper crossbeam  | Q345     | 44.8 MPa         | 9.5             | 71.9 MPa         | 5.9
Lower crossbeam  | Q345     | 15.4 MPa         | 27.6            | 26.3 MPa         | 16.2
Crossbeam brace  | Q235     | 30.4 MPa         | 10.1            | 85 MPa           | 7.4
Upper beam brace | Q235     | 8.28 MPa         | 37.2            | 26.2 MPa         | 10.9
Table 2 gives the maximum stress of the main components of the car frame when the car stalls downward and the safety gear is activated. The safety factor of the main components under this working condition is obtained from the maximum stress and the material of each beam. The beams of the car frame meet the requirements of this working condition.
Table 2. Stress of main components with the car stalling downward and the safety gear activated

Main components  | Material | Maximal stress | Safety factor | Load type | Impact coefficient
Pull rod         | Q235     | 27.7 MPa       | 6.6           | Dynamic   | 2
Upright beam     | Q345     | 23.3 MPa       | 10.9          | Dynamic   | 2
Front upper beam | Q345     | 0.5 MPa        | 510           | Dynamic   | 2
Rear upper beam  | Q345     | 1.6 MPa        | 159           | Dynamic   | 2
Front lower beam | Q345     | 4.0 MPa        | 63            | Dynamic   | 2
Rear lower beam  | Q345     | 0.5 MPa        | 510           | Dynamic   | 2
Upper crossbeam  | Q345     | 11.9 MPa       | 21.4          | Dynamic   | 2
Lower crossbeam  | Q345     | 22.9 MPa       | 11.1          | Dynamic   | 2
Crossbeam brace  | Q235     | 21.8 MPa       | 8.5           | Dynamic   | 2
Upper beam brace | Q235     | 2 MPa          | 92.5          | Dynamic   | 2
5 Conclusions
From the checking calculations of the auto lift car frame under various working conditions, it can be concluded that the car frame in this paper meets the requirements of use. The finite element method has great advantages in checking complex car frame structures and offers higher accuracy than the traditional simplified calculation. Comparing the safety factors of the different working conditions, some structural beams still have room for optimization; targeted structural optimization based on the calculated results can reduce the self-weight of the car frame and thus the production cost.
References
1. National Elevator quality supervision and Inspection Center (Guangdong). Elevator standard
compilation. China Standard Press, Beijing (2012)
2. GB 7588-2003 Elevator manufacture and installation safety norm
3. GB 25856-2010 Freight elevator manufacture and installation safety norm
4. Jiang, X.: Vibration analysis and simulation of traction inclined elevator. In: International
Workshop of Advanced Manufacturing and Automation, pp. 221–224. Atlantis Press,
Manchester (2015)
5. GB/T7025.2-2008 Lifts-Main specifications and the dimensions arrangements for its cars,
wells and machine rooms- Part 2: Lifts of class IV
6. Cen, J.: Non Standard Mechanical Design Manual. National Defense Industry Press,
Changsha (2008)
7. Liu, H.: Material Strength, 5th edn. Higher Education Press, Beijing (2010)
8. GB/T10060-2011 Elevator installation and acceptance specification
9. Guo, L.: Design of large tonnage elevator sheave block. In: International Workshop of
Advanced Manufacturing and Automation, pp. 207–217. Springer, Changshu (2017)
10. Pu, L.: Mechanical Design, 8th edn. Higher Education Press, Beijing (2006)
Research on Lift Fault Prediction
and Diagnosis Based on Multi-sensor
Information Fusion
Abstract. Most current fault diagnosis methods are based only on already-collected data and therefore cannot predict faults in time; moreover, measurement information from a single sensor cannot fully and accurately reflect the working status of a lift, causing uncertainty and inaccuracy in fault diagnosis. A lift fault diagnosis algorithm based on D-S data fusion is proposed. Multi-sensor fusion is used to initialize the basic reliability distribution of each sensor according to the membership degree of each diagnostic category, the data collected by each sensor are taken as evidence bodies, and the final diagnosis is obtained by data fusion. Experiments on lift fault diagnosis show that the proposed method can predict faults correctly and in time, overcome the uncertainty and inaccuracy of single-sensor fault diagnosis, and improve the accuracy of lift fault diagnosis and prediction.
1 Introduction
With the rapid increase in the number of lifts and of lift accidents, real-time and effective fault prediction and diagnosis is of great significance for ensuring the safe and reliable operation of lifts, and it has become one of the key topics in lift research in recent years [1, 2]. Because the sensors installed on lifts have low intelligence and introduce measurement errors, the detection information is uncertain and incomplete. In addition, current lift prediction and diagnosis systems are often based on already-collected sensor data and therefore have a certain delay, while lifts have a strong real-time demand [3, 4]. By the time the fault symptom data are collected, the lift may already have failed, which greatly threatens passenger safety. Therefore, lift fault prediction and diagnosis should not be limited by the delay of sensor data samples.
Based on the above factors, a fault diagnosis method based on neural network prediction and D-S data fusion is proposed. The method predicts the likely sensor data at the next moment with a neural network, fuses the data by D-S evidence theory, and obtains the fault diagnosis result from the predicted data of multiple sensors, overcoming the delay of the traditional method. Based on the historical data of all sensors received by the system, lift fault diagnosis is carried out by data fusion.
Using the multiple sensors of an on-line intelligent detection system for data fusion can eliminate the uncertainty of single-sensor detection, improve the reliability of the detection system, obtain more accurate knowledge of the detected object, help inspectors make effective judgments on lift status, and reasonably determine the verification time and plan. This work therefore establishes a distributed on-line multi-sensor system and applies information fusion technology to the dynamic intelligent detection of lifts, so as to provide a reasonable evaluation of their operating status. Traditional lift monitoring sensors mostly adopt mechanical structures and are limited by them, with inherent defects such as low accuracy, a small dynamic response range, difficulty in low-frequency detection and difficulty in distinguishing background noise. With the development of lift intelligence, the requirements that networked detection must meet are constantly increasing.
When the load and position of the lift change, the natural frequency of the lift system changes as shown in Fig. 1: the X axis is the lift load, the Y axis is the car position, and the Z axis is the natural frequency. When the load increases, the natural frequency decreases, but the decrease slows down as the load grows. The influence of car position on the natural frequency is more complex: the natural frequency decreases with increasing car height and changes symmetrically with height.
Fig. 1. Natural vibration frequency with the car load and position change
3 D-S Theory
D-S theory is applied here following the general method proposed by Basir et al. in 2007 for directly calculating the basic trust assignment function from multi-sensor measurements. The Basir method embodies the idea of generating evidence from the difference between the measured and theoretical values of the sensor data [8]. However, this method does not consider the general situation with more sensors, and it is difficult to reflect the relative evidence information between sensors through the difference of sensor data alone. More importantly, for many faults it is not suitable to compute the evidence directly from sensor measurements with D-S theory [9].
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_q \end{bmatrix} = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{q1} & x_{q2} & \cdots & x_{qn} \end{bmatrix} \qquad (1)$$
$$d_{kj} = \begin{cases} \left[\displaystyle\sum_{j=1}^{n_k} \left|s_{kj} - x_{ji}\right|^p\right]^{1/p}, & k = 1 \\[6pt] \left[\displaystyle\sum_{i=1}^{n_u} \left|s_{kj} - x_{ji}\right|^p\right]^{1/p}, & k = 2, 3, \ldots, M \end{cases} \qquad j = 1, 2, \ldots, N \qquad (2)$$
$$D = \begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1q} \\ d_{21} & d_{22} & \cdots & d_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ d_{p1} & d_{p2} & \cdots & d_{pq} \end{bmatrix} \qquad (3)$$
The smaller the distance $d_{kj}$, the greater the possibility of judging the object to be in class j according to the kth sensor's information. Hence the definition

$$m_{kj} = \frac{1}{d_{kj}} \qquad (4)$$

followed by normalization so that $\sum_{j=1}^{Q} m_{kj} = 1$.
$$M = \begin{bmatrix} m_{11} & m_{12} & \cdots & m_{1Q} \\ m_{21} & m_{22} & \cdots & m_{2Q} \\ \vdots & \vdots & \ddots & \vdots \\ m_{P1} & m_{P2} & \cdots & m_{PQ} \end{bmatrix} = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_P \end{bmatrix} \qquad (5)$$

$$M_k = \left( m_{k1}, m_{k2}, \ldots, m_{kQ} \right) \qquad (6)$$
Thus $M_k$ can be used as the reliability values of the kth sensor for state recognition, and the theory is optimized and improved on this basis. Extracting fault signals from multi-source signals is the first condition for lift fault prediction and diagnosis. Within the fault recognition framework, fault characteristic parameters are chosen and the basic trust assignment function is established; the combination rules are then used to compute the fault evidence, and finally the fault is predicted and judged from the fusion results. If multi-source evidence is influenced by subjective factors, large differences in form, high conflict and poor real-time performance result, which negatively affects the fusion process and the rationality of the results. Using the above data characteristic values, which are highly objective, real-time, and easy to implement, analyze and adjust, the evidence can be calculated automatically from the multi-source information data.
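As a minimal illustration of Eqs. (2)–(6), the sketch below turns the distances between sensor measurements and class templates into a normalized basic reliability assignment; the array shapes and numeric values are illustrative assumptions:

```python
import numpy as np

def basic_reliability(measurements, templates, p=2):
    """measurements: (K, n) array, one row per sensor; templates: (Q, n) class templates.
    Returns a (K, Q) matrix of normalized basic reliability values (Eqs. (2)-(6))."""
    # Minkowski p-distance of each sensor's measurement to each class template (Eq. (2)).
    d = np.array([[np.sum(np.abs(s - t) ** p) ** (1.0 / p) for t in templates]
                  for s in measurements])
    m = 1.0 / (d + 1e-12)                    # Eq. (4): closer templates get more reliability
    return m / m.sum(axis=1, keepdims=True)  # normalization: each row sums to 1

sensors = np.array([[0.9, 0.1, 0.3], [0.8, 0.2, 0.4]])   # two sensors, three features
classes = np.array([[1.0, 0.0, 0.3], [0.0, 1.0, 0.6], [0.5, 0.5, 0.9], [0.2, 0.8, 0.1]])
print(basic_reliability(sensors, classes))               # rows: sensors, columns: states
```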
The vibration signal of a lift changes with its running state, and so do its time-domain indexes. There are many time-domain indexes of a vibration signal, such as the kurtosis index, peak value, pulse index, square-root amplitude, waveform index, effective (RMS) value, mean value and margin index [10]. For different types of fault signals, different time-domain indicators show different advantages. For example, the kurtosis index, impulse index and margin index are more suitable for impulse-type faults: at the beginning of the fault these indexes change significantly, but as the fault gradually worsens their sensitivity declines and their stability is poor, so they only apply to early faults. The peak value is the maximum of the signal over a time interval; it is sensitive to instantaneous impact faults and has a good diagnostic effect. The mean value reflects the average signal amplitude and is relatively stable. The mean square root represents the average energy of the signal, and the peak amplitude works better for slowly time-varying signals.
According to the four different states of the car trajectory, an identification framework is set up as {Normal, X-direction vibration signal, Y-direction vibration signal, wear signal}, and whether there is conflict between pieces of evidence is determined before evidence fusion; if there is conflict, a BP neural network is used to improve the evidence combination. To validate the effectiveness of the algorithm, vibration sensors and microphones are used to collect vibration and noise signals in the four different states of the car. In the experiment, multiple sets of data are collected in each state and multiple points are sampled in each group; three of the groups are randomly selected as the training set of the neural network and the remaining two groups as the test set. The specific sensor installation locations depend on the selected lift. The specific route and scheme for extracting fault features are as follows [11].
a. Fault feature extraction of vibration signals
According to the non-stationary character of the vibration signals, eight time-domain statistical parameters (RMS, peak, skewness, kurtosis, peak index, margin index, impulse index and waveform index) and the energy of the first eight IMF components obtained by EEMD decomposition are selected as characteristic parameter space 1, as sketched below.
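A minimal sketch of the eight time-domain statistics of parameter space 1 (the EEMD energy features would be appended in the same way); the function below is an illustrative reconstruction, not the authors' implementation:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(x: np.ndarray) -> np.ndarray:
    """RMS, peak, skewness, kurtosis, peak/margin/impulse/waveform indexes of one frame."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    mean_abs = np.mean(np.abs(x))
    sra = np.mean(np.sqrt(np.abs(x))) ** 2          # square-root amplitude
    return np.array([
        rms, peak, skew(x), kurtosis(x),
        peak / rms,          # peak (crest) index
        peak / sra,          # margin index
        peak / mean_abs,     # impulse index
        rms / mean_abs,      # waveform index
    ])

frame = np.sin(np.linspace(0, 20 * np.pi, 2048)) + 0.1 * np.random.randn(2048)
print(time_domain_features(frame))
```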
b. Fault feature extraction of noise signals
Wavelet packet analysis yields the low- and high-frequency characteristics of the noise signal. In this paper the collected noise signal is first de-noised with the wavelet packet function wpdencmp and then decomposed with a three-layer wavelet packet using coif5 as the basis function, giving eight frequency bands; the energy of each band is extracted as characteristic parameter space 2 (a sketch of the band-energy computation follows).
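The paper uses MATLAB's wavelet packet functions; as an equivalent hedged sketch, the band energies can be computed with PyWavelets as follows (the de-noising step is omitted and the input frame is a placeholder):

```python
import numpy as np
import pywt

def band_energies(signal: np.ndarray, wavelet: str = "coif5", level: int = 3) -> np.ndarray:
    """Relative energy of the 2**level frequency bands of a wavelet packet decomposition."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")          # 8 terminal nodes for level = 3
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / energies.sum()

noise = np.random.randn(4096)          # placeholder for a de-noised microphone frame
print(band_energies(noise))            # characteristic parameter space 2 (8 values)
```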
c. Construction of the neural network model
The structure of the BP neural network is determined by the characteristics of the input and output data. For BP sub-networks 1 and 2 the feature parameter space has dimension 16, and for BP sub-network 3 it has dimension 8; there are four classes of states to be identified. The BP algorithm is improved by using additional momentum and a variable learning rate.
Fig. 3. Neural network based flow chart of evidence theory diagnosis and prediction
Before evidence fusion, whether there is conflict between the pieces of evidence is judged. If there is conflict, the BP neural network method is used to improve the combination of the conflicting evidence; otherwise the evidence is fused according to the Dempster combination rule. The fusion process is shown in Fig. 4. From the normalized basic reliability distribution of the neural network diagnosis results, the fused diagnosis result of the network output is obtained with the Dempster combination rule and a decision method based on basic probability assignment, as sketched below.
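As a minimal illustration of the fusion step, the sketch below applies Dempster's combination rule to two basic probability assignments restricted to the four singleton states (a simplification; the full rule also handles compound focal sets):

```python
import numpy as np

def dempster_combine(m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    """Dempster's rule for two BPAs defined on the same singleton hypotheses.
    m1, m2: 1-D arrays summing to 1; returns the combined, renormalized BPA."""
    joint = np.outer(m1, m2)
    agreement = np.diag(joint)            # mass where both bodies support the same state
    k = joint.sum() - agreement.sum()     # conflict mass
    if np.isclose(k, 1.0):
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    return agreement / (1.0 - k)

# Example: two sensors' normalized reliabilities over {Normal, X-vib, Y-vib, Wear}
m_vibration = np.array([0.55, 0.25, 0.15, 0.05])
m_noise     = np.array([0.50, 0.20, 0.20, 0.10])
print(dempster_combine(m_vibration, m_noise))
```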
5 Conclusions
This research can provide new theory and technology for early lift monitoring and has positive significance for ensuring lift safety; it has also been applied to the analysis and fault diagnosis of simulated data and part of the actual data. The comprehensive research and engineering application of monitorability design theory will help advance condition monitoring and fault diagnosis of lift mechanical systems and improve the acquisition and identification of weak signals in their fault diagnosis. How to adapt the threshold to the characteristics of the data stream is the next step of the experimental research.
References
1. Jiang, T., Liu, G.: Lift safety monitoring and early warning information platform based on
internet of things. China J. Constr. Mach. 162–167 (2015)
2. Li, J., Li, L., Guo, X., Liu, L., Fang, J.: Design and application of service platform for
emergency rescue and disposal of lift. J. Saf. Sci. Technol. 133–138 (2016)
3. Lin, C., Wang, X., He, B., Li, Z.: Lift condition monitoring device based on current features.
Autom. Inform. Eng. 6–10 (2013)
4. Wang, J., Zhang, L.: Lift operation monitoring system based on wireless sensor network.
Sens. World, 31–34 (2012)
5. Jiang, X.: Research on vibration control of traction elevator. In: International Industrial
Informatics and Computer Engineering Conference, pp. 2144–2147. Atlantis Press (2015)
6. Jiang, X.: Research on intelligent elevator control system. Adv. Materials Res. 605–607,
1802–1805 (2012)
7. GB T 31821-2015 Specifications for discard of the main parts of elevators
8. Han, C., Zhu, H., Duan, Z.: Multi-source Information Fusion, 2nd edn. Tsinghua University
Press, Beijing (2010)
9. Han, D., Yang, Y., Han, C.: Advances in DS evidence theory and related discussions.
Control Decis. 1–11 (2014)
10. Guo, L., Jiang, X.: Research on horizontal vibration of traction elevator. In: International
Workshop of Advanced Manufacturing and Automation, pp. 131–140. Springer, Singapore
(2018)
11. Xu, J., Xu, L., Wang, H., Zheng, B., Tang, B.: Condition monitoring of elevators based on
vibration analysis. J. Mech. Electr. Eng. 279–283 (2019)
Design of Permanent Magnet Damper
for Elevator
1 Introduction
With the increase in the number of elevators and their years of use, the demands on the reliability of elevator safety systems have become more and more exacting. In general, an elevator safety system includes speed limiters, safety gears, rope grippers and dampers. Once the elevator fails, the safety devices (speed limiter, safety gear, rope gripper) ensure the safety of the elevator car. If all three safety devices fail, the car will eventually fall; the damper installed in the ground is then called the last "safety line" of the elevator and is one of its necessary safety devices [1].
The dampers currently used in elevators come in two main forms: energy-storage and energy-dissipating [2, 3]. The energy-storage damper is the spring damper, which cushions the car through the spring force; the spring returns to its original shape after the motion ends, and this is the most commonly used buffering method. Energy-dissipating dampers are generally hydraulic dampers, which cushion the impact through the pressure of hydraulic oil [4]. A permanent magnet reducer uses a permanent magnet to induce an eddy current field on the surface of a conductor; the two magnetic fields couple with each other and generate a force opposite to the motion, thereby achieving deceleration. This feature is widely used in brakes, buffers and so on [5, 6].
In this paper, a magnetic eddy current damper is designed. The deceleration effect of different magnetic field combinations is analyzed, as are the time and distance of deceleration at different speeds. Finally, the magnet spring is also simulated.
The structure of the elevator permanent magnet damper is shown in Fig. 1(a). It contains two parts: eddy current permanent magnet reducers on both sides and a permanent magnet spring under the car bottom; the 3D model is shown in Fig. 1(b).
As shown in Fig. 1, the permanent magnet damper contains magnets and conductors. The permanent magnets are installed on the inspection layer, the bottom of the car and the sidewall of the well, while the conductors are mounted on the outer wall of the car; a further permanent magnet is placed at the bottom. Under normal operation the car speed is low, the eddy current field in the conductor is weak and the force is small, so the elevator can work normally. When a disaster occurs and the car suddenly falls, its speed keeps increasing; the conductor, falling with the car, comes into proximity with the magnets before reaching the bottom, cuts the magnetic field lines of the permanent magnets at high speed and induces an eddy current on the conductor surface. The two magnetic fields couple with each other and generate a force opposite to the direction of motion, forcing the car to slow down. The isotropic permanent magnet at the bottom provides support and eventually brings the car to a stop.
To verify the feasibility of the scheme, a permanent magnet buffer simulation device was designed and manufactured. Its main structural parameters are: weight 30 kg, car size 300 mm × 300 mm × 300 mm, and car travel 1200 mm.
Figure 6 shows how the buffer force changes over the buffer distance for 16 different combinations. The force produced by the permanent magnets and conductors in the different combinations increases to different degrees as the acting distance increases. The maximum value is 1500 N for combination a–b', but it is unstable. Overall, the rise of a–c' and b–c' is relatively flat and meets the requirements for use. Considering the processing and installation costs, the combination b–c' (isotropic magnets, multiple vertically uniform distribution) is used here. The magnetic field distribution during motion is shown in Fig. 7.
Fig. 6. The magnet force vs. displacement
Fig. 7. The magnetic field of magnet and conductor
Fig. 8. Different speeds versus time
Fig. 9. Stopping time with the speed
distribution and magnetic field size, the distance between the magnetic field size and
the coupled magnetic field is simulated and the results are shown in Fig. 10.
Fig. 10. The magnetic field of the magnet spring
Fig. 11. The spring force vs. distance
Figure 11 shows the repulsive force generated by five sets of magnets of different diameters as the spacing increases. For any contact area the repulsive force decreases as the spacing increases, following a roughly exponential decline: the initial drop is pronounced, the later stage tends to level off, and when the spacing reaches a certain value the repulsive force practically disappears. Increasing the contact area is the best way to increase the repulsive force. It can be seen from the figure that when the coupling area reaches 50 mm, the maximum repulsive force at a spacing of 5 mm reaches 1800 N (about 180 kg), which is much larger than the 30 kg requirement. In the actual elevator a certain assembly error must be allowed for; taking it as 20 mm, the figure shows that a contact area of 40 mm already meets the use requirements.
5 Conclusion
(1) A permanent magnet elevator damper is designed: the car is decelerated by the coupling between the side eddy current field and the permanent magnets, and the suspension of the elevator car is realized by the permanent magnet spring installed at the bottom.
(2) The eddy current reducer has been optimized and various combinations designed; the eddy currents generated by the different combinations are compared, and the best arrangement of permanent magnets and conductors is selected.
(3) The stopping time of the eddy current buffer at different falling speeds is analyzed. The falling speed of the car decreases non-linearly: the higher the speed, the more pronounced the deceleration and the longer the deceleration takes; the stopping time is essentially linear in the initial speed.
(4) The permanent magnet spring installed at the bottom of the car is analyzed, and permanent magnets of different diameters are simulated. The supporting force of the permanent magnet spring decreases non-linearly with increasing spacing, and the stronger the magnetic field, the larger the supporting force.
References
1. Husmann, J.: Elevator car frame vibration damping device. Google Patents (2005)
2. Uchida, T., Sato, S., Nakagawa, T.: A proposal of control method for an elevator emergency
stop device with a magnetic rheological fluid damper. IEEE Trans. Magn. 50(11), 1–4 (2014)
3. Qiong, Z.: Application of piston accumulator in oil buffer. Fluid Power Transm. Control 3,
25–28 (2006)
4. Yao, R., Zhu, C., Zhan, Y., et al.: Impact research on the oil buffer with air accumulator.
Zhendong yu Chongji/J. Vib. Shock 25(1), 153–155 (2006)
5. Gulec, M., Aydin, M., Lindh, P., et al.: Investigation of braking torque characteristic for a double-stator single-rotor axial-flux permanent-magnet eddy-current brake. In: 2018 XIII International Conference on Electrical Machines (ICEM), pp. 793–797. IEEE (2018)
6. Chen, Q., Tan, Y., Li, G., et al.: Design of double-sided linear permanent magnet eddy current
braking system. Prog. Electromagnet. Res. 61, 1–73 (2017)
Design and Simulation of Hydraulic Braking
System for Loader Based on AMESim
Abstract. The loader is developing towards heavy haul, high speed and intelligence, which places higher requirements on its braking system. The power braking system is widely used in engineering equipment because of its superior braking performance and reliability. Based on an analysis of the working principle of the hydraulic braking system, this paper establishes its mathematical model, builds a simulation model on the AMESim simulation platform, and carries out a simulation analysis. The results show that the pressure rise time is 0.1 s from the moment the friction disc contacts the brake disc and the pressure drop time is 0.1 s when the brake is released; the brake pressure rise can be divided into three stages; the accumulator can perform about 4 brakings before the next filling; and the piston reaches its limit position of 0.8 mm within 0.1 s.
1 Introduction
accumulator in the parking brake circuit enters the parking brake through the parking brake valve and pushes the piston backwards to compress the spring in the parking brake, thus separating the friction disc from the brake disc. In a shoe brake system, the piston of the parking brake cylinder is reset by the spring tension, and the pull rod of the parking brake is driven so that the shoe opens and presses tightly on the brake disc for braking. When the parking brake is released, the high-pressure oil flows from the accumulator into the parking brake cylinder and drives the piston back to compress the spring in the parking brake cylinder, relaxing the pull rod of the shoe-type parking brake so that the brake shoe and the brake disc separate [6].
$$Q_s = \frac{dV_a}{dt} \qquad (3)$$
$$\frac{dP_s}{dt} = \frac{1}{A_a^2}\left(m_a \frac{d^2 Q_S}{dt^2} + B_a \frac{dQ_S}{dt}\right) + \frac{n P_a}{V_a} Q_s \qquad (5)$$
Substituting $V_a = V_{a0}\left(\dfrac{P_{a0}}{P_a}\right)^{1/n}$ into (5),

$$\frac{dP_s}{dt} = \frac{1}{A_a^2}\left(m_a \frac{d^2 Q_S}{dt^2} + B_a \frac{dQ_s}{dt}\right) + \frac{n P_a}{V_{a0}}\left(\frac{P_a}{P_{a0}}\right)^{1/n} Q_s \qquad (6)$$
Formula (6) is the relationship between the outlet pressure of accumulator and the
outlet flow rate.
(2) Flow rate of fluid in the moving cylinder
Neglecting the influence of the pipelines from the accumulator to the brake valve and from the brake valve to the brake cylinder, and assuming the oil is incompressible and that the pipeline, components and their interfaces are fully sealed, Eq. (7) represents the flow of oil from the accumulator through the brake valve to the brake cylinder.
$$Q_S = C_d A_{b1}\,\mathrm{sign}(P_s - P_L)\sqrt{\frac{2\left|P_s - P_L\right|}{\rho}} \qquad (7)$$
where sign is the symbolic function:

$$\mathrm{sign}(a) = \begin{cases} -1 & (a < 0) \\ 0 & (a = 0) \\ 1 & (a > 0) \end{cases}$$
The brake valve is a spool (sliding) valve whose control orifice is a circular hole, i.e. a non-rectangular orifice: the area gain of the orifice is non-linear and the flow area is a circular-segment (arcuate) area, calculated as [8]:

$$A_y = \frac{d^2}{4}\left[\cos^{-1}\left(1 - \frac{2y}{d}\right) - \left(1 - \frac{2y}{d}\right)\sqrt{1 - \left(1 - \frac{2y}{d}\right)^2}\,\right] \qquad (9)$$
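A minimal numeric sketch of Eqs. (7) and (9); the discharge coefficient, orifice diameter, opening and oil density used below are illustrative assumptions:

```python
import math

def segment_area(y: float, d: float) -> float:
    """Flow area of a circular orifice of diameter d uncovered to height y (Eq. (9))."""
    u = 1.0 - 2.0 * y / d
    return (d ** 2 / 4.0) * (math.acos(u) - u * math.sqrt(1.0 - u ** 2))

def orifice_flow(p_s: float, p_l: float, area: float, cd: float = 0.7, rho: float = 870.0) -> float:
    """Oil flow through the brake valve orifice (Eq. (7)); sign follows the pressure drop."""
    dp = p_s - p_l
    return cd * area * math.copysign(math.sqrt(2.0 * abs(dp) / rho), dp)

a = segment_area(y=1.5e-3, d=6e-3)                     # 1.5 mm opening of a 6 mm hole
print(a, orifice_flow(p_s=136e5, p_l=20e5, area=a))    # m^2, m^3/s
```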
$$\frac{dv_b}{dt} = \frac{1}{m_b}\left[K_i\left(X_i - X_b\right) - K_f\left(X_{f0} + X_b\right) - P_L A_b - B_b v_b\right] \qquad (12)$$

$$\frac{dX_b}{dt} = v_b \qquad (13)$$
$$\frac{dP_L}{dt} = \frac{\beta_e}{V_L}\left[Q_s - Q_t - A_L v_L - C_e P_L\right] \qquad (14)$$

$$\frac{dv_L}{dt} = \frac{1}{m_L}\left[P_P A_L - B_L v_L - K_L\left(X_{L0} + X_L\right)\right] \qquad (15)$$

$$\frac{dX_L}{dt} = v_L \qquad (16)$$
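As an illustration only, the sketch below integrates the brake-cylinder piston dynamics of Eqs. (15) and (16), as reconstructed above, under a constant supply pressure; every parameter value is invented for the example and does not come from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper): piston mass, area, damping,
# return-spring stiffness and preload, and a constant supply pressure.
m_L, A_L, B_L, K_L, X_L0 = 0.5, 1.5e-3, 800.0, 2.0e5, 2.0e-3
P_P = 50e5  # Pa

def piston(t, y):
    x_L, v_L = y
    dv = (P_P * A_L - B_L * v_L - K_L * (X_L0 + x_L)) / m_L   # Eq. (15)
    return [v_L, dv]                                           # Eq. (16)

sol = solve_ivp(piston, (0.0, 0.2), [0.0, 0.0], max_step=1e-4)
print(f"final displacement: {sol.y[0, -1] * 1e3:.2f} mm")
```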
According to the working principle of the hydraulic braking system and the mathe-
matical model mentioned above, the simulation model of the hydraulic braking system
is established, as shown in Fig. 4.
Fig. 5. Input signal (displacement of the ejector rod under the pedal)
Fig. 6. Brake pressure response curve in the brake cylinder
The brake pressure curve in the brake cylinder is shown in Fig. 6. The maximum braking force of the brake cylinder is 91242.4 N. From the moment the friction pad contacts the brake disc, the pressure rise time is 0.1 s, and the pressure drop time is 0.1 s when the brake pedal is released. When the brake pedal is pressed, the brake pressure rise is divided into three stages (Fig. 5).
The pressure curve of the accumulator is shown in Fig. 7. The pressure of the accumulator decreases from 136 bar to 102.72 bar after one braking. Under ideal conditions, after the accumulator is filled with oil, it can therefore supply about 4 brakings before the next filling.
The displacement curve of the piston in the brake cylinder is shown in Fig. 8. It can
be seen from the figure that the limit displacement of the piston is 0.8 mm, and it
reaches the limit position in 0.1 s, which takes a very short time.
5 Conclusions
Based on the design and analysis of the working principle of the loader's hydraulic braking system, this paper establishes a mathematical model, builds a simulation model on the AMESim simulation platform, and studies the reliability and fast response of the system. The results provide a useful reference for the design and research of braking systems for construction machinery such as loaders.
References
1. Han, W., Lu, X., Yu, Z.: Braking pressure control in electro-hydraulic brake system based on
pressure estimation with nonlinearities and uncertainties. Mech. Syst. Sign. Process. 131(15),
703–727 (2019)
2. Li, H.C., Yu, Z.P., Xiong, L., Han, W.: Hydraulic control of integrated electronic hydraulic
brake system based on lugre friction model. SAE Technical Paper No. 2017-01-2513 (2017)
3. Zhong, L.: Hydraulic Drive Principle. Fault Diagnosis and Elimination of Construction
Machinery. Machinery Industry Press, Beijing (2018)
4. Xinwei, Z., Hong, Z.: Dynamic characteristic simulation of valve-controlled hydraulic
system. Mech. Electr. Eng. 28(1), 47–50 (2011)
5. Yinding, L., Pengwei, Q., Lei, W.: Design and analysis of loader hydraulic system based on
FluidSIM. Equipment Manuf. Technol. 5, 16–20 (2018)
6. Todeschini, F., Corno, M., Panzani, G., Fiorenti, S., Savaresi, S.M.: Adaptive cascade control
of a brake-by-wire actuator for sport motorcycles. IEEE/ASME Trans. Mechatron. 20(3),
1310–1319 (2015)
7. Yu, Z.P., Han, W., Xiong, L., Xu, S.Y.: Hydraulic pressure control system of integrated-
electro-hydraulic brake system based on byrnes-isidori normalized form. J. Mech. Eng.
52(22), 92–100 (2016)
8. Lv, C., Wang, H., Cao, D.P.: High-precision hydraulic pressure control based on linear
pressure-drop modulation in valve critical equilibrium state. IEEE Trans. Ind. Electron.
64(10), 7984–7993 (2017)
Dynamic Modeling and Tension Analysis
of Redundantly Restrained Cable-Driven
Parallel Robots Considering Dynamic
Pulley Bearing Friction
1 Introduction
friction between the cable and the pulley cannot be simply described by the Coulomb friction model. When the end-effector moves slowly or the direction of motion changes rapidly, the pre-sliding range between the cable and the pulley is affected [8], so the Coulomb friction model is not suitable for high-precision control [9]. It is therefore necessary to introduce a dynamic friction model to describe the friction. In this paper, based on the Coulomb friction model and the Dahl friction model [10], a mathematical model of the friction is established, the dynamic equations of the RRPR considering dynamic pulley bearing friction are derived, and the influence of various parameters on the tension and friction is analyzed.
2 Dynamic Modeling
Because of the cable actuation, CDPRs differ from rigid-link parallel-actuated robots in dynamic modeling [11]. For simplicity, a 6-DOF RRPR model in which the end-effector is connected to the base through 8 cables is considered, as shown in Fig. 1. Cable i (i = 1, 2, ..., 8), after being paid out by its motor, is routed over the fixed pulley Bi and connected to the end-effector. The coordinate systems and the parameters defining the system are also indicated.
Fig. 1. Schematic of the 8-cable RRPR: base pulleys B1–B8, end-effector attachment points A1–A8, end-effector weight meg, global frame O-XYZ and end-effector frame Oe-XeYeZe
The dynamic equation of the end-effector, based on force screw theory, can be written as follows:

$$M(X)\ddot{X} + N(X, \dot{X}) - W_g - W_e = -J^{\mathrm{T}} T \qquad (1)$$

where $M(X)$ is the mass term, $N(X, \dot{X})$ is the velocity term, $W_g = m_e\,g$ is the generalized gravitational term, and $W_e$ is the external load term of the end-effector.
Fig. 2. Cable tensions Tj and Tj+1 at pulley j, the resulting normal force FN,j, the friction on the pulley and the wrap angle
As shown in Fig. 2, the cable tensions on the two sides of pulley j are Tj and Tj+1. Based on Coulomb friction, the cable tension Tj can be expressed as follows:

where $F_{N,j}$ represents the normal force on the pulley, which can be derived from the law of cosines:

$$F_{N,j} = \sqrt{T_{j+1}^2 + 2 T_{j+1} T_j \cos(\alpha_j) + T_j^2} \qquad (3)$$
where $\alpha_j$ is the wrap angle of the cable. According to Eq. (3), the friction depends on the wrap angle and the cable tension. Because the end-effector moves, the angle $\alpha_j$ must be calculated from the kinematics. Since the angles of the vector triangle sum to 180°, the wrap angle is $\alpha_j = 90^\circ + \delta$, with

$$\delta = \arctan\left(\frac{L_{i,x}}{L_{i,y}}\right) \qquad (5)$$
where $L_{i,x}$ and $L_{i,y}$ are the components of the ith cable length vector along the X and Y directions, respectively.
In combination with Eqs. (3) and (4), the Coulomb friction model is

$$F_{c,j} = \mu_j \sqrt{\eta_j^2 + 2\eta_j \cos(\alpha_j) + 1}\; T_j = \lambda_j T_j \qquad (6)$$

where $F_{c,j}$ is the Coulomb friction acting on the jth pulley. According to Eq. (6), the dynamic equation based on the Coulomb friction model is

$$M(X)\ddot{X} + N(X, \dot{X})\dot{X} = W_e + W_g + J^{\mathrm{T}}\left[1 + \lambda_j\,\mathrm{sgn}(v_j)\right] T \qquad (7)$$
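A minimal numeric sketch of the pulley friction terms, following Eqs. (3), (5) and (6) as reconstructed above; the tension, tension ratio, friction coefficient and cable-direction components are illustrative values:

```python
import numpy as np

def coulomb_pulley_friction(T_j: float, eta: float, alpha: float, mu: float):
    """Normal force (Eq. (3)) and Coulomb friction (Eq. (6)) at one pulley.
    eta is the tension ratio T_{j+1}/T_j, alpha the wrap angle in radians."""
    T_j1 = eta * T_j
    F_N = np.sqrt(T_j1**2 + 2.0 * T_j1 * T_j * np.cos(alpha) + T_j**2)
    lam = mu * np.sqrt(eta**2 + 2.0 * eta * np.cos(alpha) + 1.0)
    return F_N, lam * T_j

# Wrap angle alpha = 90 deg + delta, with delta from the cable direction (Eq. (5)).
delta = np.arctan2(0.4, 1.2)                      # illustrative L_ix, L_iy
F_N, F_c = coulomb_pulley_friction(T_j=500.0, eta=0.8, alpha=np.pi / 2 + delta, mu=0.06)
print(F_N, F_c)
```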
Fig. 3. Cable tension and Coulomb friction versus time for friction coefficients μ = 0.04, 0.06, 0.08, compared with the no-friction case
Fig. 4. Cable tension and Coulomb friction versus time for failure coefficients η = 0.8, 0.6, 0.4
The Coulomb friction changes abruptly when the speed direction of the cable changes, because the Coulomb friction model does not take into account the pre-sliding between the cable and the pulley. Since this causes a large error when the cable speed direction changes suddenly, the Dahl friction model is introduced to improve it.
where $F_{d,j}$ is the Dahl friction between the cable and the pulley, $\sigma$ is the contact stiffness coefficient determining the size of the pre-sliding region, $x$ is the relative displacement, and $\beta$ determines the shape of the pre-sliding region (generally $\beta = 1$). The dynamic equation based on the Dahl friction model can be expressed as

$$M(X)\ddot{X} + N(X, \dot{X})\dot{X} = W_e + W_g + J^{\mathrm{T}}\left[1 + \left(1 - e^{-\sigma x_j}\right)\lambda_j\,\mathrm{sgn}(v_j)\right] T \qquad (10)$$
Equation (10) indicates that the dynamic equation based on the Dahl friction model is a non-linear system of equations, which is solved here by the least squares method.
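As an illustration of this solution step only, the sketch below solves the Dahl-modified tension distribution of Eq. (10) in a least-squares sense; the structure matrix, wrench, friction parameters and the regularization used to resolve the cable redundancy are all invented for the example:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
J_T = rng.normal(size=(6, 8))            # illustrative 6x8 structure matrix (transpose of J)
w = 50.0 * rng.normal(size=6)            # required end-effector wrench (illustrative)
lam = np.full(8, 0.05)                   # Coulomb factors lambda_j
sigma, x, v = 200.0, np.full(8, 1e-3), np.ones(8)   # Dahl stiffness, pre-slide, cable speeds

def residual(T):
    # Effective tension transmitted to the end-effector, per the bracket in Eq. (10),
    # plus a small regularization that resolves the actuation redundancy.
    T_eff = (1.0 + (1.0 - np.exp(-sigma * np.abs(x))) * lam * np.sign(v)) * T
    return np.concatenate([J_T @ T_eff - w, 1e-3 * (T - 100.0)])

sol = least_squares(residual, x0=np.full(8, 100.0), bounds=(0.0, np.inf))
print(sol.x)                             # non-negative cable tensions (N)
```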
Fig. 5. Cable tension, Dahl friction and cable speed of cable 1 at low end-effector speed for contact stiffness σ = 100, 200, 300, compared with the no-friction and Coulomb-friction cases
Fig. 6. Cable tension, Dahl friction and cable speed of cable 1 at high end-effector speed for contact stiffness σ = 100, 200, 300, compared with the no-friction and Coulomb-friction cases
Figure 5 shows the effect of the contact stiffness coefficient σ on the cable tension and the Dahl friction. The blue, green and purple lines correspond to σ = 100, 200 and 300, respectively, and the red circles mark the changes of cable tension when the speed direction changes. Figure 6 shows the effect of σ on the tension and Dahl friction of cable 1 at high end-effector speed. Comparing Figs. 5 and 6, the Dahl friction at high speed is closer to the Coulomb friction, and the effect of σ on the Dahl friction is not obvious there. Whether the end-effector is in a low- or high-speed state, the Dahl friction model describes the change of friction at speed reversals better than the Coulomb friction model, so that the cable tension transitions smoothly.
4 Conclusion
Increases in the friction coefficient μ and the failure coefficient η increase the cable tension and the Coulomb friction. The contact stiffness coefficient σ has an obvious influence on the Dahl friction and the cable tension when the end-effector moves at low speed: both increase with σ. When the end-effector moves at high speed, σ influences the Dahl friction and cable tension only slightly. In conclusion, the Dahl friction model better describes the change of friction when the speed direction changes and thus smooths the transition of the cable tension.
References
1. Fortincôté, A., Faure, C., Bouyer, L., Mcfadyen, B.J., Mercier, C., Bonenfant, M., et al.: On the design of a novel cable-driven parallel robot capable of large rotation about one axis (2018)
2. Bak, J.H., Hwang, S.W., Yoon, J., Park, J.H., Park, J.O.: Collision-free path planning of
cable-driven parallel robots in cluttered environments. Intell. Serv. Robot. 12(3), 243–253
(2019)
3. Albus, J., Bostelman, R., Dagalakis, N.: The nist robocrane. J. Robot. Syst. 10(5), 709–724
(2010)
4. Tjahjowidodo, T., Ke, Z., Dailey, W., Burdet, E., Campolo, D.: Multi-source micro-friction
identification for a class of cable-driven robots with passive backbone. Mech. Syst. Sign.
Process. 80, 152–165 (2016)
5. Pott, A.: Pulley friction compensation for winch-integrated cable force measurement and
verification on a cable-driven parallel robot. In: IEEE International Conference on Robotics
and Automation. IEEE (2015)
6. Miyasaka, M., Matheson, J., Lewis, A., Hannaford, B.: Measurement of the cable-pulley
Coulomb and viscous friction for a cable-driven surgical robotic system. In: IEEE/RSJ
International Conference on Intelligent Robots and Systems (2015)
7. Heo, J.M., Choi, S.H., Park, K.S.: Workspace analysis of a 6-DOF cable-driven parallel
robot considering pulley bearing friction under ultra-high acceleration. Microsyst. Technol.
23(7), 2615–2627 (2017)
8. Xin, D., Yoon, D., Okwudire, C.E.: A novel approach for mitigating the effects of pre-
rolling/pre-sliding friction on the settling time of rolling bearing nanopositioning stages
using high frequency vibration. Precis. Eng. 47, 375–388 (2017)
9. Chen, H., Pan, Y.C.: Dynamic behavior and modelling of the pre-sliding static friction. Wear
242(1), 1–17 (2000)
10. Dahl, P.R.: A solid friction model. Aerospace Corp El Segundo Ca (1968)
11. Merlet, J.P.: Parallel Robots. Kluwer, Dordrecht (2006)
Industry 4.0
Knowledge Discovery and Anomaly
Identification for Low Correlation
Industry Data
1 Introduction
failure and normal records. According to our analysis results, the applied data-driven
model could not separate the failure records from the nonfailure records. We assume
that the sensor data have no strong correlation to the failures. We generated a new
parameter to indicate failure conditions and further validated and confirmed our
assumption that available sensor data is not strong enough to capture the phenomenon
of failures. How to extract valuable information and discover useful knowledge from low-correlation data is in fact a common dilemma in many practical applications [5]. To fill this gap, we propose a method to discover knowledge and identify anomalies
for low correlation data. The applied data-driven model is constructed through a fully
connected deep neural network since this structure has an excellent performance in
discovering information and knowledge about failures [6]. The idea is to make the
constructed neural network learn the behavior of collected samples and output the
severity degree of each sample. The severity degree can indicate the difference of
performance between individual record and all other records. From the severity degree,
we can identify anomalous records. Through analyzing the sensor data of the identified
anomalous records, we could acquire knowledge about which sensor data could be
possible indicators or predictors of failures. Although the study is based on data from only one piece of equipment, the knowledge acquired from the case study can help us derive hypotheses to be validated on other similar equipment.
The rest of this paper is organized as follows: Sect. 2 explains the process of data
correlation analysis and validation. Section 3 details the applied method for knowledge
discovery and anomaly identification for low correlation data. Discussion and con-
clusions are summarized in the last section of this paper.
The data used in this research are collected from the equipment of an industrial equipment manufacturer1. The primary datasets leveraged during the analysis include two parts: fault records and sensor data from the monitoring system. However, the sensor data are mainly collected to help the user understand the working condition of the equipment, not to indicate fault information. The target of our study is to discover the potential correlation between the two databases and knowledge about impending failures.
1 Due to a Non-Disclosure Agreement, we are not allowed to give detailed information about the equipment and the company in this paper.
observation units for further analysis. The leveraged monitoring data mainly include a
total number of starts, running time, load, voltage, and so on.
There are 538 records from 2009 to 2017 in the provided fault dataset. The col-
lected data includes 315 days and 367 timestamps. Several failures may happen in one
timestamp, and several timestamps may be recorded in one day. During the sampling
period, most of the recorded failures are about faults in multi-hoisting.
To integrate monitoring data and failure information, we used the recorded
timestamps in each database as connections. Since both monitoring data and failure
information are necessary for fault identification, we included only the records which
have been recorded in both datasets as observations in the analysis.
The number of valid records is 2931 after the merge, in which 307 records are
labeled as failures, and 2624 ones are not labeled as failures. As mentioned above, the
number of collected timestamps in monitoring dataset and failure dataset are 2931 and
367, respectively. However, there are 60 records in fault dataset which are not recorded
in monitoring dataset. Thus, we discarded these 60 records without monitoring infor-
mation in the analysis.
To improve the performance of data mining and avoid potential inconvenience, we
applied standard normalization to adjust values measured on different scales to a
notionally common scale. The parameter $P_i$ after normalization is $P_i'$, as shown in Eq. (1):

$$P_i' = \frac{P_i - P_i^{mean}}{P_i^{std}} \qquad (1)$$

Here, $P_i^{mean}$ and $P_i^{std}$ are the mean value and standard deviation of the parameter $P_i$, respectively.
During normalization, we found that the standard deviations of seven parameters
are zero or very close to zero, which means those parameters rarely changed during the
sampling period. We removed these parameters from the merged dataset since constant
values hold no meaning for condition monitoring.
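A minimal pandas sketch of this preprocessing step, dropping near-constant columns and applying Eq. (1) column-wise; the column names and the near-zero threshold are illustrative assumptions:

```python
import pandas as pd

def preprocess(df: pd.DataFrame, eps: float = 1e-8) -> pd.DataFrame:
    """Drop near-constant sensor columns, then apply Eq. (1) column-wise."""
    std = df.std()
    kept = df.loc[:, std > eps]                      # remove parameters that never change
    return (kept - kept.mean()) / kept.std()

raw = pd.DataFrame({"load": [1.2, 1.5, 1.1, 1.4],
                    "voltage": [400.0, 401.0, 399.0, 400.5],
                    "spare_flag": [0.0, 0.0, 0.0, 0.0]})   # constant -> dropped
print(preprocess(raw))
```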
As mentioned above, since the currently available monitoring data are used for operation management, there is probably no direct connection between the available monitoring data and the failure information. Therefore, our first research step focused on answering whether the collected monitoring data are sensitive or strong enough to predict impending failures.
impending failure condition and 2624 records which are not labeled as failures. To balance the number of samples in both classes, we expanded the number of failure samples by repeatedly using them during the training stage.
The applied data-driven model is established as a fully connected deep neural network with seven layers; Leaky ReLU is used as the activation function of the hidden layers and SoftMax as the activation function of the output layer. Since the dimension of the inputs is 35 (42 selected parameters minus seven constant ones), the numbers of nodes in the layers of the constructed network are 64, 32, 32, 16, 16, 8 and 2 (i.e., 64 nodes in the first layer, 32 in the second and third, 16 in the fourth and fifth, 8 in the sixth, and 2 in the last layer), so that the input data can be learned and represented smoothly. We selected Adam as the optimizer and categorical cross-entropy as the loss function because of their broad applicability. The maximum number of training epochs and the dropout rate are 2000 and 0.3, respectively, to avoid overfitting, and the batch size is 32 to accelerate training. Figure 1 shows the training and validation error over the training epochs, and Fig. 2 illustrates the prediction result, in which values above 0.5 on the y-axis can be considered identified failures.
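A minimal Keras sketch of the classifier described above, under the stated settings (layer widths, Leaky ReLU, softmax output, Adam, categorical cross-entropy, dropout 0.3, batch size 32); this is an illustrative reconstruction, not the authors' code:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_failure_classifier(input_dim: int = 35, dropout: float = 0.3) -> keras.Model:
    """Fully connected network with widths 64-32-32-16-16-8 and a 2-node softmax output."""
    inputs = keras.Input(shape=(input_dim,))
    x = inputs
    for width in (64, 32, 32, 16, 16, 8):
        x = layers.Dense(width)(x)
        x = layers.LeakyReLU()(x)
        x = layers.Dropout(dropout)(x)
    outputs = layers.Dense(2, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_failure_classifier()
# model.fit(X_train, y_train_onehot, epochs=2000, batch_size=32, validation_split=0.2)
```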
Since SoftMax is used as the final output layer, records with higher values on the y-axis are more likely to indicate a failure condition. According to the result, only some extreme normal and failure conditions can be identified; most records have prediction results close to 0.5, which means they cannot be separated on the basis of the sensor data. We assume the main reason is that the current sensor data are not sensitive enough to capture the phenomena of failures, or not strong enough to identify impending failures.
result in Fig. 3, the collected parameters are sensitive enough to the failures, and the
data-driven model can identify the failure conditions. The results in Fig. 3 confirm that the original sensor data cannot capture the phenomenon of impending failures directly.
As the available sensor data is not sensitive enough to capture the phenomena of
failures, we tried to leverage the data to discover possible useful knowledge hidden in
the data. Since most of the records are collected without failures, i.e., only 307 in 2931
records are labeled as a failure, our core idea is anomaly identification. The idea is to
make a data-driven model learn behaviors of the equipment first. The second step is to
give scores to each record to describe the degree of difference from other records. The
high-level analysis process is shown in Fig. 4.
During data calibration, we labeled all the failure records as “1” and the nonfailure
ones as “0”. Therefore, the records with higher scores are more likely to have
impending failures or anomaly condition since their behaviors are different from others.
The target is to identify records with abnormal behaviors [7], i.e., records whose sensor data differ greatly from those of other records. The anomalous records may or may not have been labeled as failures in the original dataset.
The applied data-driven model for anomaly identification is very similar to the failure identification model: a deep neural network with seven fully connected layers. The difference is that the final layer is replaced with a regression output that evaluates the anomaly degree and outputs a severity degree. Figure 5 shows the evaluation results of the trained network with a hyperbolic tangent (Tanh) as the activation function of the final output layer.
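Only the head differs from the classifier sketched earlier; a hedged sketch of the severity-regression variant follows, where the mean-squared-error loss is our assumption since the paper does not state the regression loss:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_severity_regressor(input_dim: int = 35, dropout: float = 0.3) -> keras.Model:
    """Same 64-32-32-16-16-8 trunk, but a single tanh output giving the severity degree."""
    inputs = keras.Input(shape=(input_dim,))
    x = inputs
    for width in (64, 32, 32, 16, 16, 8):
        x = layers.Dense(width)(x)
        x = layers.LeakyReLU()(x)
        x = layers.Dropout(dropout)(x)
    severity = layers.Dense(1, activation="tanh")(x)   # severity degree of each record
    model = keras.Model(inputs, severity)
    model.compile(optimizer="adam", loss="mse")         # assumed regression loss
    return model
```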
During the evaluation phase, the first 307 records are the ones with failures, and the remaining records are the nonfailure ones. In Fig. 5, the red line indicates the actual severity degree, and the blue line represents the prediction result. According to the evaluation result, the severity degree of most records is around 0.2, so we selected 0.22 as the threshold. Twenty-nine of the 2931 records are identified as anomalous by the analysis; among these 29 records, 15 are labeled as nonfailure and 14 as failure in the original dataset. According to the results, all records can be divided into four categories:
Category 1. 2609 nonfailure records are also identified by our data model as normal.
Those samples represent normal behaviors without failures.
Category 2. 293 records, which are recorded as failures in the original dataset, are not identified as anomalous by our data-driven model. The model cannot find any difference between these records labeled as failures and the records labeled as nonfailures. We analyzed the sensor data of many of these records in detail, compared them with the nonfailure ones, and found that their sensor data are very similar. As these records cover 95% of all the records labeled as failures, their sensor data may hide some interesting correlations between the sensor data and the failures. Such correlations probably exist in some data but are not statistically significant, which probably explains why our initial analysis reported in Sect. 2 did not find strong correlations between the sensor data and the failures.
Category 3. 14 records, which are recorded as failures in the original dataset, are identified by our data-driven model as anomalies. These samples are recorded as failures and have very different sensor data from the other records, so they are captured by the model. These 14 records are the really interesting ones to analyze further, because they may contain implicit knowledge that can potentially explain the reasons for the failures and help identify real indicators in the sensor data for predicting failures. We therefore analyzed their sensor data in detail. Four of the 14 records share the condition that one or several vital parameters, such as "total running time," "remaining safe working period of the brake in percentage," and "total number of starts," are recorded with extremely high or low values; such unusual values indicate sensor failures of the equipment. Six of the records are identified as anomalies because the "actual loads" of the equipment are higher than the average value while the values of other parameters differ from those of most samples with high actual loads, so the data-driven model identifies them as anomalies. Among these 6 records, 4 have high values of "actual load" while one or several "line voltages" are lower than or close to the average values; as these 4 records are also labeled as failures in the original dataset, they may make one out of many possible reasons for the failures stand out. We can thus hypothesize that "the target equipment under high load without sufficient line voltage is more likely to have an impending failure." For the remaining 4 records, visual inspection could not find an obvious abnormality in any individual sensor parameter value. As each record has 35 sensor parameters, it is possible that some complex combination of parameter values makes them very different from the other records; more domain knowledge and more data are needed to understand these 4 records in depth.
Category 4. 15 records, which are not categorized as failures in the original dataset, are identified as abnormal by our data-driven model. By analyzing the sensor data of these 15 records in depth, we found that 4 of them show sensor failures that were not recorded or noticed by the users. However, due to unknown reasons, these sensor errors did not lead to actual failures, or the actual failures were overlooked. Nine records have high “actual loads,” like the 4 high-load failure records in Category 3. Although these 9 records are not labeled as failures in the original dataset, their sensor data are very similar to those of the 4 failure records with high “actual loads,” which is probably why they are also identified as abnormal by our data-driven model. Again, for 2 records we cannot figure out how their individual sensor parameter values differ from the other records. We need more in-depth domain knowledge and more data to explain why our data-driven model classifies these 2 records as abnormal.
According to the result of failure identification in Sect. 2, the currently available sensor data are not strong enough to predict failures. Thus, we shifted our research focus to anomaly identification and proposed a method to evaluate the severity degree by
comparing the behavior of each record with the records recorded as nonfailures. As shown in Sect. 3, we first established a data-driven model to evaluate the severity degree of each record. The core idea is to train the model to learn the behaviors of the majority. The evaluated severity can thus indicate the degree of anomaly of each record compared with all other records; a record with high severity is more likely to show anomalous behavior.
The proposed anomaly identification method can identify anomalous behaviors of the target equipment and obtain hypotheses about machine faults from a low-correlation data environment. In this case study, our approach filtered out most of the records that are labeled as failures in the original dataset but whose sensor data cannot be differentiated from those of the nonfailure records. Our approach managed to find 4 records that have extremely high or low sensor values and are labeled as failures; these 4 records indicate that sensor error is probably one reason for failure. Our approach also highlighted another 4 records that have high “actual load” but low “line voltages” and are labeled as failures. These 4 records suggest that the parameters “actual load” and “line voltages” can possibly be used as indicators for predicting some categories of failures.
The limitation of the study is that the proposed method has only been applied and tested with data from one piece of equipment. We therefore need to validate the method and the hypotheses identified in this study with data from several similar types of equipment.
Acknowledgment. The work described in this article has been conducted as part of the research project CIRCit (Circular Economy Integration in the Nordic Industry for Enhanced Sustainability and Competitiveness), which is part of the Nordic Green Growth Research and Innovation Programme (grant number 83144) and is funded by NordForsk, Nordic Energy Research, and Nordic Innovation.
References
1. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for industry 4.0-based
manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
2. Lee, C.K.M., Zhang, S.Z., Ng, K.K.H.: Development of an industrial Internet of things suite
for smart factory towards re-industrialization. Adv. Manuf. 5(4), 335–343 (2017)
3. Bodrow, W.: Impact of Industry 4.0 in service oriented firm. Adv. Manuf. 5(4), 394–400
(2017)
4. Li, Z., Wang, Y., Wang, K.-S.: Intelligent predictive maintenance for fault diagnosis and
prognosis in machine centers: industry 4.0 scenario. Adv. Manuf. 5(4), 377–387 (2017).
https://doi.org/10.1007/s40436-017-0203-8
5. Khan, A., Turowski, K.: A survey of current challenges in manufacturing industry and
preparation for industry 4.0. Paper presented at the Intelligent Information Technologies for
Industry, Cham (2016)
6. Li, Z., Wang, Y., Wang, K.: A deep learning driven method for fault classification and
degradation assessment in mechanical equipment. Comput. Ind. 104, 1–10 (2019)
7. Chandola, V., Banerjee, A., Kumar, V.: Anomaly detection: a survey. ACM Comput. Surv.
41(3), 15 (2009)
A Method of Communication State Monitoring
for Multi-node CAN Bus Network
1 Introduction
In distributed control systems, CAN is generally used for communication between the master and slave controllers. However, because of the large number of CAN nodes, cable fatigue, aging and damage are inevitable as the usage time of the CAN bus system increases.
In order to improve the communication performance of CAN bus networks and monitor them effectively, much research has been done in China and abroad. Matsudaira [1] proposed an over-sampling, adaptive pre-emphasis technique to achieve complete recovery of distortion caused by long transversals in a multi-node topology. In a car driving control system, Wagh [2] studied inter-node communication based on the CAN protocol to realize the monitoring and control of vehicle speed, temperature, seat belt and battery in case of emergency. Barranco [3–5] introduced an active star topology named CANcentrate, in which all nodes communicate with the bus through a hub module, preventing one malfunctioning node from affecting the communication of other nodes and providing higher fault sensitivity and fault tolerance. Long [6] designed a simple, reliable and versatile multi-node CAN bus communication protocol: slave nodes report their state information to the master node in real time, and after receiving the information the master node uploads it or operates on the slave node accordingly, which can well meet the requirements of a monitoring system. Zhang [7] applied node monitoring to a greenhouse control system, in which the monitoring host checks the state of each node regularly; if a node is found to have failed to respond several times, the node is considered closed, any data sent to the node is stopped, and an alarm is given to the operator.
If a node of a multi-node CAN bus network breaks down, the staff must check all the nodes one by one to find the fault, which is inconvenient for troubleshooting. In order to facilitate maintenance and optimization, a method of communication state monitoring for multi-node CAN bus networks is proposed in this paper, which can quickly find the faulty node and greatly facilitates system maintenance and optimization.
the main node will be alerted through the PC. However, the disadvantage of having the main node poll the state of the child nodes one by one is that it takes a long time.
[Flowchart: generate error information and send it to the upper computer.]
In this paper, the main node sends an inquiry to the child nodes one by one, and each child node replies immediately when it receives the inquiry. The main node records the reply information of all child nodes and sends it to the PC for judgment.
In order to respond to the query of the master controller immediately, the software architecture of the slave controller judges the received instructions in a loop in the main function and executes the corresponding actions; the instructions are received in the CAN receive interrupt function and passed to the main function through global variables.
The block diagram of the CAN receive interrupt function is shown in Fig. 3. First, the structure variable CanRxMsg is defined to store the data ID, check code and eight bytes of data. The system function CAN_Receive is then called to copy the data in the buffer FIFO0 into the structure; the eight bytes of instruction data are assigned to the global variable, and the flag variable start is set. The main function loops to determine whether the variable start is set. If it is, the controller has received a frame of instruction data and executes the corresponding action.
[Fig. 3. Block diagram of the CAN receive interrupt function: define the structure variable CanRxMsg, copy the received frame, and set the flag.]
The main program diagram of the master controller is shown in Fig. 4. The first step is to set the ID of each node in the CAN network: the ID of the master node is set to 0x08, and the IDs of the seven sub-nodes are set to 0x01, 0x02, 0x03, 0x04, 0x05, 0x06 and 0x07 respectively. Next, an eight-bit variable named error is created as the CAN error status flag. Bits 0–6 of error represent the communication states of slave controllers 1–7 respectively, where 0 indicates normal communication and 1 indicates abnormal communication. One instruction frame is then sent to each of the seven slave controllers. If feedback is received, the communication is normal; otherwise the communication is abnormal and the corresponding bit is set to 1. All possible conditions can be combined by bitwise operations.
After sending the instructions to the seven slave controllers separately, a final value
of error is generated according to the feedback of each slave controller. This is the
value that is sent to the upper computer and contains all the error information.
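As a compact illustration of this polling-and-flagging logic (independent of the microcontroller firmware), the sketch below accumulates the error bits in the way described above; send_query and wait_reply are hypothetical stand-ins for the master's CAN transmit and receive routines.

```python
# Sketch of the master's polling logic only; send_query() and wait_reply() are
# hypothetical stand-ins for the firmware's CAN transmit/receive routines.
def poll_slaves(send_query, wait_reply, timeout_ms=10):
    error = 0x00                            # bit i-1 describes slave i (0 = normal)
    for slave_id in range(0x01, 0x08):      # seven slave controllers, IDs 0x01..0x07
        send_query(slave_id)                # one instruction frame per slave
        if not wait_reply(slave_id, timeout_ms):
            error |= 1 << (slave_id - 1)    # no feedback: mark slave as abnormal
    return error                            # e.g. slaves 6 and 7 silent -> 0x60
```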
[Fig. 4. Main program flow in the master controller: for each slave controller i = 1…7, send a control instruction and wait for feedback; if no feedback is received, set the corresponding bit of error (0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40); otherwise the communication condition is normal.]
The upper computer is implemented by LabVIEW on the computer, and the state of
seven slave controllers is judged according to the value of error in the received data.
There are two schemes here.
(1) Judge one by one according to the value of error
The traditional way is to use a conditional structure to judge one by one, as shown in Fig. 5, but there are 2⁷ = 128 possible communication states in the system and the upper computer programming workload would be huge, so it is obviously inappropriate to judge case by case according to the value of error.
(2) Extract the erroneous bit information and display it directly in the dialog box
The error message is displayed in the dialog box in the format “block X communicates abnormally”. Bits 0–6 of error correspond to the numbers 1–7 respectively, and the bit information of error is extracted directly: if a bit of error is 1, the corresponding number is filled into X. The advantage of this approach is that it takes only 8 judgments to cover all the cases, which is 120 fewer than the one-by-one judgment.
The upper computer block diagram is shown in Fig. 6. First, the value of error is converted into a binary string, which is then converted into a byte array and reversed; each element of the array is then judged, and if it is 1, the number corresponding to that bit is displayed in the message of the dialog box.
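The actual upper computer is implemented in LabVIEW; purely as an illustration of the bit-extraction logic (eight checks instead of 128 case branches), an equivalent sketch in Python might look like this:

```python
# Mirror of the upper computer's bit-extraction logic (the real implementation
# is a LabVIEW block diagram): eight checks instead of 128 case branches.
def abnormal_blocks(error):
    messages = []
    for bit in range(7):                    # bits 0..6 correspond to blocks 1..7
        if error & (1 << bit):
            messages.append("block %d communicates abnormally" % (bit + 1))
    return messages

print(abnormal_blocks(0x60))                # bits 5 and 6 set -> blocks 6 and 7
```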
5 Experimental Results
Using the CAN bus network of this article, if slave controllers 6 and 7 are disconnected, the upper computer displays the result shown in Fig. 7.
The master controller detects that the communication of the 6th and 7th boards is abnormal, so 0x20 | 0x40 = 0x60 is assigned to error and sent to the upper computer. The upper computer first converts 0x60 to binary and obtains 1100000, then converts it to an unsigned byte array and reverses it, obtaining the array {48, 48, 48, 48, 48, 49, 49}. In ASCII, 48 represents the character 0 and 49 represents the character 1, indicating that the communication state of the 6th and 7th boards is abnormal, which is displayed in the dialog box. The test results show that this method can effectively locate the faulty node.
6 Conclusion
Aiming at the maintenance and optimization of distributed systems based on the CAN bus, a method of communication state monitoring for multi-node CAN bus networks is proposed, which can quickly locate faulty nodes. The method can be applied to any CAN-bus-based distributed system that uses a PC as the upper computer.
References
1. Matsudaira, N., Chen, C., Ohtsuka, S., et al.: An over-sampling adaptive pre-emphasis
technique for multi-drop bus communication system. In: IEEE International Symposium on
Circuits and Systems, pp. 2338–2341. IEEE (2016)
2. Wagh, P.A., Pawar, R.R., Nalbalwar, S.L.: Vehicle speed control and safety prototype using
controller area network. In: 2017 International Conference on Computational Intelligence in
Data Science, pp. 1–5 (2017)
3. Barranco, M., Proenza, J., Navas, G.R., Almeida, L.: An active startopology for improving
fault confinement in CAN networks. IEEE Trans. Ind. Inf. 2, 78–85 (2006)
4. Barranco, M., Proenza, J., Almeida, L.: Quantitative comparison of the error-containment
capabilities of a bus and a star topology in CAN networks. IEEE Trans. Ind. Electron. 58(3),
802–813 (2011)
5. Barranco,M., Proenza, J.: Towards understanding the sensitivity of the reliability achievable
by simplex and replicated star topologies in CAN. In: 2011 IEEE 16th Conference on
Emerging Technologies & Factory Automation(ETFA), pp. 1–4. IEEE (2011)
6. Long, L., Xu, H.: Communication protocol design of CAN bus real-time monitoring system.
Chin. Instrum. 02, 105–2852 (2015)
7. Zhang, K., Fang, Z.: Development of CAN protocol for greenhouse control system. Ind.
Instrum. Autom. 01 (2007). 1000-0682
Communication Data Processing Method
for Massage Chair Detection System
1 Introduction
With the steady development of the world economy, the health industry is growing rapidly. People are paying increasing attention to their own health, and health products are emerging one after another. Among them, the massage chair industry, which combines traditional Chinese medicine methods with modern technology, is becoming more and more popular, so the detection of massage chairs is becoming increasingly important. Hiyamizu proposed
a massage chair detection technology based on human sensory sensors to determine the
massage effect of the massage chair by measuring the temperature changes of the
human skin during the massage process in real time [1]. Teramae proposed to acquire
the characteristics of the brain electrical signals of the subjects during the massage
process to evaluate the discomfort of the massage chair with acupressure [2]. Jaafar
proposed a detection technique based on evaluating the characteristics of the subject's body to assess the effect of the massage chair on the subject's health [3]. Jin proposed a massage chair detection technology based on
EMG (electromyography signal) of the massage site [4]. This technique is quite
feasible, but the degree of response varies from person to person, resulting in unstable
test results. Different from the above massage chair detection technology, Sun proposed
a remote control and fault detection system based on neural network model to realize
automatic fault detection of multifunctional massage chair [5]. This detection technique
pioneered the use of a uniform sample to replace the subject’s role in the detection
process, facilitating the formation of uniform detection criteria. Zhang proposed the
design and implementation of a variety of communication methods based on the
massage chair detection system [6]. Finally, it was verified that the detection system
could operate normally according to the design flow and communication protocol.
This paper mainly analyzes the data flow of the detection system, studies the implementation of the master-slave information interaction of the detection system from the marking and recognition of information frames, and proposes a method of disassembling and reorganizing long messages for their remote transfer. The factors affecting information collision in the process of bus information transmission are analyzed, and an optimal frame space that can reduce the collision probability of information is proposed.
The detection system places a robot covered with detection points on the massage chair to be detected, acquires data through the detection-point sensors on the robot body, and submits the data to the upper processor for processing. The system is mainly composed of 9 functional modules: 7 data acquisition modules, 1 peripheral control module and 1 flow control module. The data acquisition module is based on an intelligent terminal and has an independent microprocessor and sensor interface, so it can handle its work independently. The peripheral control module is responsible for most of the peripheral drive and part of the signal acquisition of the detection system. The flow control module is based on a non-preemptive real-time kernel and is responsible for controlling and coordinating the detection system; each module works according to the specified process. The flow control module has data interaction with the data acquisition and peripheral control modules, including the control instruction flow, storage instruction flow, control instruction feedback flow, storage instruction feedback flow, state feedback flow and data packet flow. Figure 1 shows the data interaction between the functional modules of the detection system; the flow control module is the core of the entire detection system communication network.
The CAN bus provides non-destructive bus arbitration based on frame priority, so node messages compete for bus access without being destroyed. At the same time, the bus uses short-frame transmission to reduce the time data spends on the bus and to improve the reliability and stability of data transmission without compromising the real-time nature of communication. The detection system adopts CAN bus communication to construct its internal communication network, realizing one-to-many interaction from the host module to the slave modules and one-to-one information exchange from a slave module to the host module in the form of
message broadcast, and does not allow information exchange between slave modules.
In a CAN fieldbus-based communication network, the master and slave modules act as
bus nodes, and can actively send broadcast messages to the bus at any time, and other
nodes on the bus can listen to the broadcast messages.
Fig. 1. Data flow between the functional modules of the detection system
The detection system consists of one host module and eight slave modules. Each slave module is equipped with four mechanical DIP switches, whose key value is read through GPIO and used as the module MAC address. By changing the DIP switch settings, the detection system can assign up to 16 different MAC addresses to slave modules. The MAC address of each slave module must be strictly unique, and arbitrarily changing a MAC address is not allowed.
The detection system CAN bus messages adopt the standard data frame type, and the arbitration field in the message structure provides an 11-bit frame identifier used to indicate the priority of the current information frame: the smaller the value of the identifier, the higher the priority. Beyond providing the bus contention priority, the frame identifier is also used to describe the source or destination node of the information frame. For a bus message sent by the host module, the frame identifier indicates the MAC address of the slave module expected to receive the message frame, while a bus message sent by a slave module carries the sender's MAC address as its frame identifier. The slave module MAC address acquisition process is shown in Fig. 2.
When a CAN bus node receives data, it can listen to all the information on the bus, including information that the node does not need, so it is necessary to filter the bus information frames. The CAN bus provides an identifier filter to filter bus messages and saves valid messages, via the filter, into a three-level-deep mailbox FIFO waiting for interrupt processing. The detection system application configures the CAN bus filter group to operate in identifier mask mode, sets a 32-bit filter identifier, and masks the desired message frame identifier through key-bit masking. The filter compares the frame identifier of each detected bus message with the expected identifier; if they are consistent, the message is judged valid and stored in the FIFO. Otherwise, no response is made to the message and the bus continues to be listened to.
The filtering and identification of bus broadcast information is shown in Fig. 3. The node first determines whether the frame identifier of the bus message is the same as the expected filter identifier. If it is different, the message is judged invalid for this node and is directly discarded, and the node continues to listen to the bus. Otherwise, the information is valid for the node, the message is stored in the FIFO mailbox, and the message information is received and processed through the ISR.
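The acceptance test performed by an identifier-mask filter reduces to a simple bitwise comparison. The sketch below illustrates that logic only; the concrete filter value and mask are illustrative assumptions, not the system's configuration.

```python
# Identifier-mask filtering reduced to its core test: a frame is accepted when
# every identifier bit selected by the mask matches the configured filter value.
# The 11-bit mask and the example addresses are illustrative assumptions.
def passes_filter(frame_id, filter_id, mask):
    return (frame_id & mask) == (filter_id & mask)

MASK_ALL_BITS = 0x7FF                       # compare all 11 identifier bits
assert passes_filter(0x05, filter_id=0x05, mask=MASK_ALL_BITS)       # accepted
assert not passes_filter(0x03, filter_id=0x05, mask=MASK_ALL_BITS)   # discarded
```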
The master-slave information interaction of the detection system uses the standard data frame format provided by the CAN bus for message transmission, that is, short frames that can carry data segments of at most 8 bytes. However, the transmission of long messages is also essential. A long message must be disassembled into short frames of 8 bytes and then transmitted continuously; frames shorter than 8 bytes are automatically padded with zeros.
The reassembly process of a long message is shown in Fig. 4, where the received short frames are processed sequentially. First, the first frame of the long message is identified according to the message storage protocol format and stored in the container corresponding to its category, and the flag of that long-message type attribute is set to 1. The subsequent content of the long message is stored in the corresponding container according to the message type. When the end-of-message frame is detected, the corresponding type flag is set to 2 to notify the application that the message has been reassembled.
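The disassembly and reassembly can be illustrated with a short sketch. The framing details below (no explicit first-frame header or end marker) are simplified assumptions; the real protocol marks the message type and the end-of-message frame as described above.

```python
# Sketch of the long-message handling described above: split a payload into
# 8-byte CAN data segments with zero padding, then rebuild it on the receiving
# side. The first-frame/end-frame marking of the real protocol is omitted here
# and the known total length is used instead, which is a simplification.
def disassemble(payload, frame_len=8):
    frames = []
    for start in range(0, len(payload), frame_len):
        chunk = payload[start:start + frame_len]
        frames.append(chunk.ljust(frame_len, b"\x00"))   # zero-pad short frames
    return frames

def reassemble(frames, total_len):
    # concatenate in arrival order and strip the padding using the known length
    return b"".join(frames)[:total_len]

message = b"massage chair detection system long message example"
frames = disassemble(message)
assert all(len(f) == 8 for f in frames)
assert reassemble(frames, len(message)) == message
```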
The priority arbitration of the CAN bus essentially provides a real-time guarantee for high-priority messages. Under low network load, the probability of information collision on the CAN bus is small and the real-time performance of the network is very good. As the network load increases, the probability of information collision on the bus increases. If only the underlying CAN functionality is used, the real-time performance of low-priority message transmission will be affected, which may cause the nodes sending these low-priority messages to exit the bus because of repeated transmission errors.
The detection system implements master-slave information interaction based on the
CAN bus. The longest transmission period Ti from the transmission of the bus message
information frame i to the processing completion mainly includes several time periods
as shown in Fig. 5.
The CAN bus can transfer at a rate of up to 1 Mb/s, in which case the communication distance can reach 40 m. The communication distance of the lower-computer CAN bus system does not exceed 15 m, which satisfies the communication distance requirement at the highest transmission rate. The faster the communication rate, the shorter the time a message spends on the bus and the less likely it is to be interfered with; at the same time, a higher rate increases the instability of the CAN bus system, and the bus is more easily disrupted when a node crashes or accesses the bus abnormally. Taking bus transmission rates B of 250 kbps, 400 kbps, 500 kbps and 1 Mbps, the relevant transmission time parameters of information frame i at the different transmission rates are obtained as shown in Table 1.
Table 1. Relationship between bus transmission rate and information frame transmission time
Bus transfer rate B: 250 kbps | 400 kbps | 500 kbps | 1 Mbps
Longest information frame period Ti: 1132 µs | 762 µs | 590 µs | 320 µs
Longest transmission time Ri: 541 µs | 338 µs | 270 µs | 135 µs
Longest blocking time Bi: 541 µs | 338 µs | 270 µs | 135 µs
The detection system adopts a CAN bus transmission rate of 500 kbps, so that the bus does not collapse due to the sudden suspension of a bus node, while the transmission time of a message on the physical link is kept as short as possible, thereby reducing the probability of data interference. In the worst case, two consecutive message frames are received and processed by the same node module. The frame space Fi is the longest blocking time Bi plus the time Ii + Ci reserved by the system for data processing, namely:
Fi = S (Bi + Ii + Ci)    (1)
where S is the safety factor, with a value in the interval [1, 2]. The detection system CAN bus adopts a safety factor of 1.5 and a frame space of 500 µs, which can effectively avoid bus information collision and ensure the real-time performance of all information.
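As a quick consistency check of formula (1) with the quoted values (a sketch, not part of the original analysis): at 500 kbps the longest blocking time Bi is 270 µs, so a 500 µs frame space with S = 1.5 leaves roughly 63 µs for the reserved processing time Ii + Ci.

```python
# Quick numeric check of formula (1), Fi = S * (Bi + Ii + Ci), using the values
# quoted in the text; Ii + Ci is treated as the unknown.
S = 1.5            # safety factor adopted by the detection system
Bi = 270e-6        # longest blocking time at 500 kbps (Table 1), in seconds
Fi = 500e-6        # frame space adopted by the detection system, in seconds
reserved = Fi / S - Bi                                  # implied Ii + Ci
print("implied Ii + Ci = %.1f us" % (reserved * 1e6))   # about 63.3 us
```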
7 Conclusion
This paper studies the implementation of the master-slave information interaction of the detection system from the perspective of information frame marking and recognition, and proposes a new data processing method. Through this method, the efficiency, accuracy and real-time performance of information transmission can be effectively improved, making the overall speed and quality of transmission better. The method changes the way data is transmitted and received and can also be extended to other control applications that use the CAN bus.
References
1. Hiyamizu, K., Fujiwara, Y., Genno, H., et al.: Development of human sensory sensor and
application to massaging chairs. In: Proceedings 2003 IEEE International Symposium on
Computational Intelligence in Robotics and Automation. Computational Intelligence in
Robotics and Automation for the New Millennium (Cat. No. 03EX694), Kobe, Japan, vol. 1,
pp. 140–144 (2003)
2. Teramae, T., Kushida, D., Takemori, F., et al.: Estimation of feeling based on EEG by using
NN and k-means algorithm for massage system. In: Proceedings of SICE Annual Conference
2010, Taipei, pp. 1542–1547 (2010)
3. Jaafar, H., Fariz, A., Ahmad, S.A., Yunus, N.A.Md.: Intelligent massage chair based on blood
pressure and heart rate. In: 2012 IEEE-EMBS Conference on Biomedical Engineering and
Sciences, Langkawi, pp. 514–518 (2012)
4. Jin, H., Shen, L., Song, J.: Massage effect evaluation of massage chairs based on EMG
signals. Packag. Eng. 35(02), 28–31 (2014)
5. Sun, L., Wang, D.: The development of fault detection system based on LabVIEW. In: 2018
5th International Conference on Electrical and Electronic Engineering (ICEEE), Istanbul,
pp. 157–161 (2018)
6. Zhang, H., Deng, T., Li, G.: Design and implementation of multiple communication methods
in massage chair detection system. Inf. Syst. Eng. (01), 35–37 (2019)
Development of Data Visualization Interface
in Smart Ship System
Guiqin Li1, Zhipeng Du1, Maoheng Zhou1, Qiuyu Zhu1, Jian Lan1, Yang Lu2, and Peter Mitrouchev3
1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai 200072, China
[email protected]
2 No.2 Qingdao Road New Hi-Tech Industry Park, Changshu, Jiangsu, China
3 University of Grenoble Alpes, G-SCOP, 38031 Grenoble, France
[email protected]
1 Introduction
Classification Society DNV proposed “Shipping 2020” in 2012 and “2020 Shipping” in 2015, adding two major technological development trends: “Mixed Ship Propulsion” and “Connectivity” [2]. In September 2015, Lloyd's Register of Shipping (LR), the QinetiQ Group and the University of Southampton jointly released the Global Marine Technology Trends 2030 (GMTT 2030) report, which listed smart ships as one of 18 major marine technologies [3]. The rapid development of intelligent ship technology has also led the Chinese shipbuilding industry to pay attention to the application of the Internet of Things, big data and intelligent services in the shipbuilding field. In 2015, the China Classification Society (CCS) formulated the “Smart Ship Code” based on the experience of domestic and foreign intelligent ship applications and the future development direction of ships, which clearly stipulates specific requirements for intelligent ships in terms of hull, engine room, navigation, energy efficiency control, cargo management, data integration platforms, etc. [4]. On November 28, 2017, the “Dazhihao”, praised by the domestic media as the world's first smart ship, was officially delivered; this “iDolphin” 38,000-ton intelligent bulk carrier was financed and built by China State Shipbuilding Corporation and jointly developed by many intelligent ship research and development units at home and abroad [5].
Data visualization technology is a means of graphically transforming collected data, information and knowledge into visual representations using computer graphics, human-computer interaction and computer-aided tools, so that information is conveyed more efficiently and clearly [6]. The data visualization process can be regarded as the process of transforming and displaying a data stream; the basic process model is shown in Fig. 1. The mission of data visualization is to present the value of the data clearly and intuitively so that it can really play a role in decision support. Data visualization focuses on top-down processing of data, information and knowledge. The goal is to create charts that clearly and effectively communicate key features in combination with aesthetics, enabling deeper insight into otherwise inconspicuous data sets [7].
This article focuses on the front-end view-layer page design of the platform on the WEB server. The end user is provided with a view page for accessing the WEB server side; this view page displays relevant ship driving state information, and from the real-time monitoring information it provides, the user can learn the state of the relevant equipment of each system of the ship, thus achieving intelligent control of the ship. The platform front-end page is composed of specific functional modules, which are responsible for displaying uploaded data, uploading commands, command feedback, etc. Users can select the corresponding operations according to their own needs.
(1) User information management is divided into LEVEL 1, LEVEL 2 and LEVEL 3. Users can perform different operations according to their permission level: the higher the permission, the wider the range of operations. The main function for users with permission level LEVEL 3 is to operate on the historical data of each system and device of the ship, such as querying, deleting and modifying historical data. Users with permission level LEVEL 2 can, in addition to the LEVEL 3 functions, view the real-time data display of each device and set alarm values according to the actual situation. Users with the highest permission level, LEVEL 1, can perform any operation within the page, such as adding, deleting, querying and changing the information of any user.
(2) Ship driving state management is divided into ship condition management, fault
handling, and configuration management. Ship condition management consists of
real-time ship status information, including basic status, cabin status, equipment
operating status, and safety status. The basic state mainly queries the ship speed,
engine speed, engine torque, total mileage, fuel consumption, water temperature, lubricating oil and other information. The cabin state is used to monitor the air pressure, temperature and flammable gas concentration in each cabin of the ship. Equipment operating status monitoring includes information such as equipment air pressure, temperature, hydraulic pressure and normal operation. Safety status monitoring includes alarms and other conditions exceeding the set thresholds. Fault handling mainly records the fault number, fault classification, fault occurrence time and fault content description when the ship encounters a fault; such information makes it easy for users to analyze and locate the ship's faults.
Configuration management is used to maintain the ship status management data
items and fault query items, and record the operation behavior of the equipment,
including information such as the operator, operation time and operation content.
(3) Terminal management is divided into user terminal information display and user
terminal information management. The user terminal information is mainly used
to display the current terminal device number, name, model, specifications, basic
parameters, related configuration equipment, production date, manufacturer,
maintenance record, and service life. User terminal information management is
mainly used to add, delete, check, and change the operational information of the
terminal device.
(4) OPC UA (OPC Unified Architecture) is a new generation of OPC. OPC UA does
not rely on programming languages and operating systems to run applications, it
can run regardless of the manufacturer and platform. The system unifies different
protocols into OPC UA standard protocols and performs unified information
modeling and secure transmission. OPC UA uses a semantics-based and service-
oriented architecture. With a unified architecture and mode, it can realize hori-
zontal information integration such as data acquisition and device interoperability
at the bottom of the device, and vertical information integration from device to
SCADA to MES, device and cloud. As long as the upper layer software is
designed according to the OPC UA specification, the device data can be obtained
without knowing the communication protocol of the underlying device. Data modeling and transmission use the OPC UA standard: acting as an OPC UA server, the terminal converts the different protocols into a unified information model description and then transfers the data from the collection terminal to the data interface server through the OPC UA transmission standard. Finally, the software uniformly stores the data in the database.
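As a minimal sketch of how upper-layer software might read one value through OPC UA, assuming the python-opcua library; the endpoint address and node identifier below are illustrative assumptions, not the system's real configuration.

```python
# Reading one value from the terminal's OPC UA server, assuming the python-opcua
# library; the endpoint address and node identifier are illustrative assumptions.
from opcua import Client

client = Client("opc.tcp://192.168.1.10:4840")           # hypothetical endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=Ship.Engine.Speed")   # hypothetical node id
    print("engine speed:", node.get_value())
finally:
    client.disconnect()
```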
The platform front-end function pages are optimized and implemented based on the MVC design pattern, the platform architecture design and the functional requirements analysis. The back-end webpage logic of the system is mainly implemented with the Django framework. The main data of the front-end pages is displayed by dynamic loading; according to the user's specific access operation, it is divided into partial and overall dynamic loading.
Compared with the early front-end pages, the foreground data is loaded more flexibly and front-end page code rewrites are reduced. In the specific implementation, we mainly adopt the Django back-end framework, the Bootstrap front-end framework, JS plug-in libraries and AJAX technology, and use an ORM (Object Relational Mapping) framework to perform the various database operations with simpler statements. Figure 4 shows the data visualization interface of the intelligent ship system.
management, fault query, and terminal management sub-block information. The related JS file also adds a click event to these specific sub-function modules: when the user clicks on a module, the data of the related function is displayed in the functional area through dynamic loading. The function display area realizes the display of the various status modules in ship state management and further refines the specific states of ship management. The specific status information is displayed in the main function area, and different variable names and IDs are selected through the drop-down box so that the corresponding status information is switched and displayed. Figure 4 shows the real-time display of the current parameter information, which mainly converts the various types of information data in the database into dashboard form using the view plug-in in Highcharts. When the data in the dashboard exceeds the threshold set for the variable, an alarm box pops up and the data is stored in the history information table.
(3) Ajax asynchronous dynamic submission
In the form submission process, the system partially adopts the Ajax asynchronous submission mode. Ajax can interact with the server in the background and load content asynchronously, refreshing parts of the page without reloading it. This significantly improves the performance of the business server, and information is written to the database faster. The specific code is as follows: this section of Ajax code implements the submission of the user registration form; the url attribute overrides or specifies the ‘action’ attribute of the form, the type attribute specifies the form submission method, and the data attribute specifies the specific parameter values. After submission, the callback function is called to feed the result back to the user.
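The client-side Ajax snippet referred to above is not reproduced here. As a complementary sketch, the server side of such a submission could be a small Django view returning JSON to the Ajax callback; the URL wiring, field names and use of Django's built-in User model are assumptions for illustration.

```python
# A minimal Django view that an Ajax registration submission could post to;
# the URL wiring, field names and use of Django's built-in User model are
# assumptions for illustration (CSRF handling is omitted for brevity).
from django.contrib.auth.models import User
from django.http import JsonResponse
from django.views.decorators.http import require_POST

@require_POST
def register(request):
    username = request.POST.get("username", "").strip()
    password = request.POST.get("password", "")
    if not username or not password:
        return JsonResponse({"ok": False, "msg": "missing fields"})
    if User.objects.filter(username=username).exists():
        return JsonResponse({"ok": False, "msg": "user already exists"})
    User.objects.create_user(username=username, password=password)
    # the Ajax success callback displays this message to the user
    return JsonResponse({"ok": True, "msg": "registration successful"})
```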
4 Conclusions
References
1. Burmeister, H.-C., Bruhn, W., Rødseth, Ø.J., Porathe, T.: Autonomous unmanned merchant
vessel and its contribution towards the e-navigation implementation: the MUNIN perspective.
Int. J. e-Navig. Marit. Econ. 1, 1–13 (2014)
2. Rolls-Royce Testing Drone Technology for Unmanned Cargo Ships (2014). www.rolls-royce.
com
3. Global Marine Technology Trends 2030. Lloyd’s Register, QinetiQ and University of
Southampton (2015)
4. Smart ship specification. China Classification Society (2015)
5. Liu, C., Chu, X., Xie, S., Yan, X.: Review and prospect of ship intelligence. Ship Eng. 38(3),
77–84 (2016)
6. Zhang, L.: The visualization of topologies and the fast metamorphosis of polyhedral models
in 3D. Master’s thesis, Jiangnan University (2007)
7. Sun, B.: Research and implementation of data visualization based on microsoft newest graph
system WPF and silverlight. Master’s thesis, Northeast Normal University (2009)
Feature Detection Technology
of Communication Backplane
1 Introduction
Since 2010, the total volume of China's telecom services has generally increased. With the development of mobile communications and 5G, the proportion of revenue from mobile communications is expected to continue to rise. The communication base station is the basis for the operation of the entire communication network. Given the requirements of the automation industry, it is of great significance to use image recognition methods to automatically inspect the backplanes of communication base stations [1, 2].
Wang studied the sharpening of digital images, using a one-direction first-order differential and a non-directional first-order differential algorithm to sharpen the image [3]. Cheng and Wei studied the use of the K-Means clustering algorithm to merge similar colors of images in order to reduce the number of color types and improve processing efficiency [4]. Yang studied digital image enhancement processing methods [5]. In terms of deep learning, Xing [6] and Wang, Zhou and Yang [7] separately studied methods of using convolutional neural network algorithms to extract image features and enrich the image, which greatly improved the accuracy of image recognition. Ma studied a deep learning algorithm for handwriting recognition based on convolutional neural networks [8].
2 System Design
The whole system is designed to realize the feature detection of the communication
base station backplane, including image acquisition, image analysis processing and
feedback of processing results. It mainly consists of industrial computer (control sys-
tem), image acquisition system and database system.
The image acquisition system can detect multiple products and needs to meet the following automation indicators: the detection cycle time for a single product is less than 4 min, and the image processing time is less than 45 s. The system can detect and identify the following features on the surface of the product: whether the black pad is pasted, whether the positioning pin is installed, whether the label is present, and whether it is reversed. The detection accuracy for the communication backplane can reach 100%.
The image acquisition system is constructed as shown in Fig. 1 and includes a detection terminal and a central control PLC. The detection terminal can be divided into three parts: the frame, the light source system and the industrial camera. A three-axis coordinate system attached to the frame controls the movement of the CCD camera and the light. A light source selected for the best illumination under the actual conditions assists the CCD camera in capturing the three-dimensional structure precisely.
The workflow of the image acquisition system is shown in Fig. 2. When the product is loaded manually and the start button is pressed, the product is clamped and transported to the detection position. After its position and boundary are detected by the position sensor, a signal is sent to the industrial computer to turn on the light and assist the CCD camera in obtaining a color image of the product. To
enhance the image, the system uses several processing methods: reducing the number of colors in the image, converting it into a gray image, and sharpening it. Feature extraction is performed on the enhanced image, and the image is classified by a machine learning method; the result is marked on the original image, output to the display terminal and saved in the database for later analysis. The above steps are repeated until the detection of the entire product under test is completed.
For the original image captured by the camera, the system first performs pre-processing to remove image interference caused by factors such as illumination, camera parameters and positional orientation during image acquisition, and to enhance certain features in the image.
(2) Calculate the distance between each data point and each cluster center separately:
d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}
The data point with the smallest distance from the cluster center is classified into
the same category as the cluster center.
(3) Calculate the mean of the existing data points of each category, and re-select the cluster center of each category according to the following formula:
C_i = \frac{1}{n_i} \sum_{x_j \in C_i} x_j
(4) Repeat the above steps until the cluster center Ci does not change.
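Applied to the image's color values, as mentioned in the introduction, this clustering merges similar colors. A compact sketch, assuming an OpenCV/NumPy implementation (the number of clusters k and the termination criteria are assumptions), is shown below.

```python
# K-Means color merging on the image's color values, assuming an OpenCV/NumPy
# implementation; the number of clusters k and the termination criteria are
# illustrative assumptions.
import cv2
import numpy as np

def quantize_colors(image_bgr, k=8):
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)   # one color point per pixel
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    merged = centers[labels.ravel()].astype(np.uint8)       # replace pixels by cluster centers
    return merged.reshape(image_bgr.shape)
```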
A grayscale image of the original image can be obtained by changing the color of each pixel to RGB (Gray, Gray, Gray).
For the obtained grayscale image, spatial-domain image enhancement can be applied in the two-dimensional space. The steps of spatial-domain image enhancement are as follows:
(1) Image enhancement
Use the histogram equalization method to convert gray-level intervals with high concentration into a uniform distribution over the entire grayscale range.
(2) Image smoothing
The median filtering method is used to smooth the image in order to eliminate or reduce noise and improve image quality. Median filtering does not blur the edges while eliminating random noise.
(3) Image sharpening
Image sharpening is an important step in image preprocessing that highlights the edges, details and outline of the image. The principle is to enlarge the gray-level difference between the pixels of the edge portions of the image and the surrounding pixels, so that the edges of the image stand out and the sharpness of the image is improved as a whole.
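A minimal sketch of these three spatial-domain steps, assuming an OpenCV implementation on an 8-bit grayscale image; the median kernel size and the sharpening kernel are illustrative choices.

```python
# The three spatial-domain steps on an 8-bit grayscale image, assuming OpenCV;
# the median kernel size and the sharpening kernel are illustrative choices.
import cv2
import numpy as np

def enhance(gray):
    equalized = cv2.equalizeHist(gray)           # spread concentrated gray levels
    smoothed = cv2.medianBlur(equalized, 3)      # remove random noise, keep edges
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(smoothed, -1, kernel)    # emphasize edges and details
```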
Features are then extracted from the enhanced image and classified by a convolutional neural network.
s(i, j) = (X * W)(i, j) + b = \sum_{k=1}^{n_{in}} (X_k * W_k)(i, j) + b
where X is the input and W is the convolution kernel, n_in is the number of input matrices, X_k represents the k-th input matrix, and W_k refers to the k-th convolution kernel matrix.
The convolution kernel is represented by a two-dimensional matrix, and each number in the matrix is called a weight. In the convolution calculation, the convolution kernel matrix is first aligned with the upper-left corner of the image matrix; each weight is multiplied by the corresponding image pixel and these products are summed as one result of the convolution, after which the convolution kernel is shifted to the right by one pixel and the computation is repeated.
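The sliding-window computation described above can be written out directly; the sketch below handles a single input channel with stride 1 and no padding, and the array sizes are only illustrative.

```python
# Direct sliding-window convolution for a single input channel, stride 1,
# no padding; the example arrays are only illustrative.
import numpy as np

def conv2d(x, w, b=0.0):
    kh, kw = w.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    s = np.empty((out_h, out_w), dtype=np.float32)
    for i in range(out_h):
        for j in range(out_w):
            # multiply each weight by the covered pixel, sum, then add the bias
            s[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return s

x = np.arange(25, dtype=np.float32).reshape(5, 5)
w = np.ones((3, 3), dtype=np.float32) / 9.0      # simple averaging kernel
print(conv2d(x, w).shape)                        # (3, 3)
```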
Fig. 4. Image with a dowel pin
Fig. 5. Image without a dowel pin
Fig. 8. Image with correct orientation of the barcode
Fig. 9. Image with wrong orientation of the barcode
It can be seen that the designed system can accurately identify the items to be
detected on the backplane and output the results on the display interface.
5 Conclusions
The communication base station backplane feature detection system is designed with
image enhancement technology to improve the clarity of the image acquisition of the
communication base station backplane. By using deep learning, the characteristics of
the acquired image can be identified and detected with an accuracy of 100%. This
method eliminates the uncertain factors generated during the artificial processing and
improves the efficiency of mechanical automation production and processing.
References
1. Wang, T.: Application and details of computer image recognition technology. Electron.
Technol. Softw. Eng. 19, 128 (2017)
2. Zou, X.: Research on computer image recognition technology and application problems.
Electron. Test 23, 46–47 (2017)
3. Wang, Y.: Sharpening algorithm for digital image processing. Communications 26(02), 275–
276 (2019)
4. Cheng, G., Wei, Y.: Application of color quantization algorithm based on k-means in rock
image preprocessing. J. Xi’an Shiyou Univ. Nat. Sci. Edn. 34(03), 114–119 (2019)
5. Yang, C.: Research and implementation of digital image enhancement technology. Comput.
Program. Skills Maintenance 09, 138–139 (2018)
6. Xing, Z.: Application of convolutional neural network in image processing. Softw. Eng. 22
(06), 5–7 (2019)
7. Wang, M., Zhou, S., Yang, Z., et al.: A brief introduction to deep learning techniques. Autom.
Technol. Appl. 38(05), 51–57 (2019)
8. Ma, L.: Research on deep learning algorithm about handwriting based on the convolutional
neural network. Comput. Sci. Appl. 08(11), 1773–1781 (2018)
Research on Data Monitoring System
for Intelligent Ship
Abstract. Ship data monitoring systems are critical to the safety of ship
operations. They provide important information for the maintenance and man-
agement of equipment in the cabin. The network system of intelligent ship
monitoring system, as well as hardware systems such as computing, storage and
terminal, are designed to provide the underlying hardware support for the
intelligent ship monitoring system. The cloud computing platform and database
cluster are the basic software platforms to provide the operational basis for the
intelligent ship monitoring system. The intelligent ship data monitoring system
includes unified user authentication, data monitoring, data acquisition and
transmission, and warehousing software. The format of unified data interaction
provides intelligent support for ship operational efficiency and navigation safety
through the comprehensive sharing of various data, which makes up for the
deficiencies of traditional ship data monitoring systems and provides a basis for
the deployment and application of the follow-up functions of intelligent ships.
1 Introduction
At present, the ship navigation system, the engine monitoring and remote control system, and maintenance management are relatively independent, have single functions, and keep their information closed; each system is designed independently, the transmission protocols are complex, and integration and interaction are difficult [1]. With the rapid development of shipping technology, the degree of ship informatization has gradually increased. During navigation, it is necessary to reduce the navigation and maintenance costs of ships, improve energy utilization, and optimize navigation routes [2]. The requirements on the degree of integration of ship systems are becoming higher and higher, and ship intelligence has become an inevitable trend in the development of shipbuilding and shipping.
The intelligent ship data monitoring system collects a large amount of data through the underlying data nodes. Based on this information, the operation and management personnel can properly control the equipment to ensure the normal operation of the ship [3]. A ship has a wide variety of specialized systems and a large amount of data processing tasks. A traditional data monitoring system with a C/S structure is responsible for both the presentation layer and the business logic layer; it requires a large amount of software to be installed, the program is difficult to deploy and maintain, upgrades are frequent, and it is difficult to meet the demand. Using the Django framework, this paper builds a smart ship data monitoring system based on the B/S architecture [4], which standardizes data acquisition, transmission and storage. It not only significantly improves the real-time performance, security and maintainability of the system, but also makes centralized monitoring and unified management easy to achieve [5]. At the same time, a cloud computing platform is established so that a large number of otherwise limited computer resources are concentrated and distributed on demand [6]. The remote cloud computing host runs the intelligent ship data monitoring system and can rapidly expand computing power and storage space and provide various software tools as needed.
The network system of the intelligent ship monitoring system, as well as the hardware
systems of computing, storage and terminal, provide the underlying hardware support
for the intelligent ship monitoring system.
The overall design of the hardware system is a three-layer network architecture,
information layer, network layer and application layer. The structure of the system
network communication system is shown in Fig. 2. The information layer collects the
physical information of various devices of the ship through various ship intelligent
sensing devices, and converts various signals into digital information that can be
processed. The network layer realizes the connection between devices and the com-
munication of the system, and the network connection form of the ship is Ethernet. The
application layer includes a workstation, a database server, a storage server, and the
like, and privatizes computing resources and storage resources in a private cloud
manner.
The network topology adopts a dual redundant star-type loop-free Ethernet topol-
ogy, and the network protocol and the virtual private network adopt the international
standard network protocol TCP/IP. Virtualizing the server reduces the interdepen-
dencies between the operating system and the hardware, enabling dynamic adjustment
of server resources on demand across resource pools. The storage network is built using
IPSAN [7], with at least two storage nodes to improve storage reliability and virtualize
with cloud computing technology. Compute nodes use multiple high-performance
computers and are virtualized in a private cloud to meet the dynamic allocation of
resources (Fig. 1).
[Fig. 1. System network structure: core switch, access switches, clients, computers, PACs, field devices and compute nodes.]
The basic software includes the cloud platform and database cluster, which is the basis
for the operation of the entire system.
The cloud computing platform pools the server clusters, performs unified man-
agement and scheduling, and implements flexible deployment of virtual machines
based on resource requirements and service priorities to improve resource utilization.
Build a private ship cloud computing platform, provide IaaS type cloud host or PaaS
type container computing service to the user layer, and all upper layer application
software is built on the platform. The cloud computing platform chose Xen, which is
more mature, as the main virtualization solution [8] for large-scale infrastructure
deployment. IPSAN provides a unified storage solution, and the network design and core equipment connections are dual redundant. For network isolation and IP allocation, devices accessing the network are isolated by dividing VLANs based on ports combined with subnets. The IP addresses of the terminal devices and of the servers, storage and network devices are allocated from the private address space of the internal network, and devices in different subnets and VLANs are interconnected through internal routes and gateways. A schematic diagram of the system architecture of the Citrix XenDesktop desktop virtualization solution is shown in Fig. 3.
The overall database cluster solution is implemented with the MySQL database management system and the MyCAT distributed database middleware [9]. Thanks to the adaptability of MyCAT, different kinds of databases can be accessed smoothly through a unified interface; only the table structure and storage scale of the database need to be specified, and the rest of the work is handled by the cluster. Multiple physical databases can run on different physical machines or on virtual machines of the cloud platform, and the system scale can be expanded or reduced according to actual needs.
The application software platform includes unified user authentication, data monitoring
system, data collection and transmission, and warehousing software. It is the top-level
application that directly interacts with users. It is developed with the B/S architecture and supports applications on the mobile side. The application software platform architecture is shown in Fig. 4.
UA Client can perform real-time data collection and storage, historical data access,
instant alarm information reception, device control and other functions…
(3) Data monitoring system
The intelligent ship data monitoring system reads the state parameters of the equipment from the database and provides the interactive interface to the user, displaying the ship's data in real time on the monitoring interface in various presentation forms and thus realizing the monitoring of the ship's running condition. When an instant message such as an alarm is written to the database, a database trigger invokes the HTTP communication program to push the information to the monitoring software in real time, so that the information is displayed promptly.
The data monitoring system is based on the B/S architecture and is developed using the Django framework. For data with low volume and low update-frequency requirements, the back-end data is requested with Ajax polling [13], which updates parts of the page content without re-requesting the whole page; for large-scale data with high real-time requirements, the back-end transmission uses the WebSocket protocol [14], which keeps a single long-lived connection and thus saves server resources.
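A minimal sketch of the Ajax-polling path described above, assuming a Django app named monitor with a hypothetical Measurement model and a URL routed to this view; these names are not from the paper. The WebSocket path would typically be implemented with Django Channels rather than a plain view.

```python
# Minimal Django view serving JSON for Ajax polling (hypothetical app/model).
from django.http import JsonResponse

from monitor.models import Measurement  # hypothetical model: tag, ts, value


def latest_values(request):
    """Return the most recent samples for one tag as JSON for Ajax polling."""
    tag = request.GET.get("tag", "main_engine_speed")
    rows = (
        Measurement.objects.filter(tag=tag)
        .order_by("-ts")
        .values("ts", "value")[:20]
    )
    return JsonResponse({"tag": tag, "data": list(rows)})
```

The page would call such a view every few seconds and patch only the affected DOM elements, which matches the partial-update behaviour described above.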
(1) Analog signal interface design
Taking the ship's speed and temperature analog signals as an example, the speed is displayed as an instrument panel (gauge) and the temperature as a histogram. When an analog value exceeds its rated threshold, an alarm alerts the user. The user can also select the analog quantity to be monitored from a drop-down box or by searching in the input box, and then monitor the corresponding equipment. The interface design is shown in Fig. 6.
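As a small illustration only (not the system's code), the alarm behaviour described above amounts to a range check against rated thresholds; the tag names and limits below are hypothetical.

```python
# Hypothetical rated ranges per analog tag: (low limit, high limit).
RATED_LIMITS = {"main_engine_speed": (0.0, 120.0), "exhaust_temp": (0.0, 450.0)}


def check_alarm(tag: str, value: float) -> bool:
    """Return True if the analog value lies outside its rated range."""
    low, high = RATED_LIMITS.get(tag, (float("-inf"), float("inf")))
    return not (low <= value <= high)
```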
6 Conclusions
Certain improvements have been made to address the inadequacies of existing ship data monitoring systems. The overall cloud platform architecture based on the ship's private cloud provides a series of services such as virtual machines, virtual desktops, virtual storage and virtual networks, which greatly improves the reliability, security, flexibility and economy of the system and provides solid data and computing resources to support intelligent data analysis. At the same time, the complex and diverse equipment terminals are protocol-converted into a standardized and unified data format, so that ship subsystems with different levels of intelligence can be gradually accessed and integrated to achieve comprehensive monitoring and intelligent management of the ship. Convenient and quick access to the ship's operating data and management is very practical and valuable for shipping personnel.
References
1. Liu, S., et al.: Ship information system: overview and research trends. Int. J. Naval Architect.
Ocean Eng. 6(3), 670–684 (2014)
2. Kaminaris, S.D., et al.: An intelligent data acquisition and transmission platform for the
development of voyage and maintenance plans for ships. In: The Fifth International
Conference on Information, Intelligence, Systems and Applications (IISA 2014). IEEE
(2014)
3. Yang, C.S., Jeong, J.: Integrated ship monitoring system for realtime maritime surveillance.
In: IGARSS 2016 – 2016 IEEE International Geoscience and Remote Sensing Symposium.
IEEE (2016)
4. Li, X., Zhu, M., Zhan, X.: Exploration of the building model about theme_based learning
and shared community based Python add Django. In: Advances in Computer Science and
Education, pp. 129–134. Springer, Berlin (2012)
5. Li, S., Si, Z.: Information publishing system based on the framework of Django. In: China
Academic Conference on Printing & Packaging and Media Technology. Springer, Singapore
(2016)
6. Shen, Z., et al.: Cloud computing system based on trusted computing platform. In: 2010
International Conference on Intelligent Computation Technology and Automation, vol. 1.
IEEE (2010)
7. Lee, H.-J., Lee, K., Won, D.: Protection profile of personal information security system:
designing a secure personal information security system. In: 2011 IEEE 10th International
Conference on Trust, Security and Privacy in Computing and Communications. IEEE (2011)
8. Singhal, R., Bokare, S., Pawar, P.: Enterprise storage architecture for optimal business
continuity. In: 2010 International Conference on Data Storage and Data Engineering. IEEE
(2010)
9. Tsuchlya, M., Mariani, M.P.: Performance modeling of distributed database. In: 1984 IEEE
First International Conference on Data Engineering. IEEE (1984)
10. Shen, H., Fan, H.: A context-aware role-based access control model for web services. In:
IEEE International Conference on e-Business Engineering (ICEBE 2005). IEEE (2005)
11. Liu, S.: Task-role-based access control model and its implementation. In: 2010 2nd
International Conference on Education Technology and Computer, vol. 3. IEEE (2010)
12. Bruckner, D., et al.: An introduction to OPC UA TSN for industrial communication systems.
In: Proceedings of the IEEE (2019)
13. Mahemoff, M.: AJAX design patterns: creating web 2.0 sites with programming and
usability patterns. O’Reilly Media, Inc. (2006)
14. Imre, G., Mezei, G.: Introduction to a WebSocket benchmarking infrastructure. In: 2016
Zooming Innovation in Consumer Electronics International Conference (ZINC). IEEE
(2016)
Research on Fault Diagnosis Algorithm Based
on Multiscale Convolutional Neural Network
1 Introduction
feature fusion is performed along the output channel dimension to mine the rich and varied features of the original signal. In this paper, convolution kernels of three scales are used: the main function of the large convolution kernel is to enlarge the receptive field and extract global features, while the small convolution kernels are mainly used to extract local features. The network structure model is shown in Fig. 1 below.
In the convolutional layer, the convolution kernel convolves the input signal or the feature vector output by the previous layer to generate the corresponding feature map. The most important characteristic of the convolutional layer is weight sharing, which means that the same convolution kernel traverses the input with a fixed stride. Weight sharing effectively reduces the number of parameters of the convolutional layer and avoids overfitting; a nonlinear transformation is then applied through the activation function. The mathematical model is expressed as:
$x_j^l = \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l$   (1)
The batch normalization (BN) layer subtracts the mini-batch mean from the input of the convolutional layer or the fully connected layer and then divides by the mini-batch standard deviation, which is similar to a standardization operation, so the training process can be accelerated.
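Written out (a standard formulation of batch normalization, not reproduced from the paper), the BN transform for inputs $x_i$ in a mini-batch of size $m$ is:

```latex
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
```

where $\epsilon$ is a small constant for numerical stability and $\gamma$, $\beta$ are learnable scale and shift parameters.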
The pooling layer performs the downsampling operation; its main purpose is to reduce the parameters of the neural network. There are two pooling methods, maximum pooling and average pooling, and maximum pooling is used in this paper. The mathematical model of maximum pooling is expressed as:

$P_i^{l+1}(j) = \max_{t \in [(j-1)W+1,\, jW]} q_i^l(t)$   (2)

where $q_i^l(t)$ indicates the value of the t-th neuron in the i-th feature vector of the l-th layer, $t \in [(j-1)W+1, jW]$, $W$ is the width of the pooling region, and $P_i^{l+1}(j)$ indicates the corresponding value of the neuron in the (l+1)-th layer.
The fully connected layer classifies the extracted features. The mathematical expression is:

$z^{l+1}(j) = \sum_{i=1}^{n} W_{ij}^l a^l(i) + b_j^l$   (3)

where $W_{ij}^l$ is the weight between the i-th neuron in the l-th layer and the j-th neuron in the (l+1)-th layer, $z^{l+1}(j)$ is the logit value of the j-th output neuron of the (l+1)-th layer, and $b_j^l$ is the bias from the neurons of the l-th layer to the j-th neuron of the (l+1)-th layer.
Since the output of the neural network model in this paper has multiple categories
of fault types, softmax is selected as the final classifier.
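The paper gives no source code; the following PyTorch sketch only illustrates the first-layer multiscale convolution with channel-wise feature fusion, BN and max pooling described above. The kernel sizes, channel counts, signal length and number of fault classes are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn


class MultiScaleCNN(nn.Module):
    """Sketch of a 1-D multiscale CNN for fault diagnosis (illustrative sizes)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Three parallel first-layer convolutions with different kernel scales:
        # the large kernel enlarges the receptive field (global features),
        # the small kernels capture local features.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(1, 16, k, padding=k // 2),
                          nn.BatchNorm1d(16), nn.ReLU())
            for k in (65, 17, 3)
        )
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.body = nn.Sequential(
            nn.Conv1d(48, 32, kernel_size=3, padding=1),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, num_classes)   # softmax is applied in the loss

    def forward(self, x):                      # x: (batch, 1, signal_length)
        # Feature fusion: concatenate the three branches along the channel axis.
        x = torch.cat([b(x) for b in self.branches], dim=1)
        x = self.pool(x)
        x = self.body(x).squeeze(-1)
        return self.fc(x)                      # logits; use CrossEntropyLoss


# Example: model = MultiScaleCNN(); logits = model(torch.randn(8, 1, 1024))
```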
Fig. 3. Loss function and accuracy of training sets and test sets
The number of iterations in the figure is the number of mini-batches (of size batch_size) processed during network training. It can be seen from the figure that the multiscale convolution and the BN layer effectively accelerate the convergence of the neural network.
The neural network must also ensure the accuracy of the model while converging fast. The algorithm is therefore compared with several current fault diagnosis algorithms, such as the BP neural network (BP-NN), the support vector machine (SVM) and the stacked denoising autoencoder (SDA) (Table 2).
It can be seen from the above table that the algorithm proposed in this paper has the
highest fault recognition accuracy while ensuring convergence speed.
4 Conclusions
The fault diagnosis algorithm based on a multiscale convolutional neural network proposed in this paper requires neither manual feature extraction nor expert knowledge and achieves an "end-to-end" fault diagnosis mode. Moreover, on top of the classical convolutional neural network model, a first layer of multiscale convolution with feature fusion is used to mine the deeper features of the original signal, and a BN layer is added to further improve the convergence speed of the network. The experiments show that the proposed algorithm has higher accuracy and faster convergence speed than the other network models.
Proactive Learning for Intelligent Maintenance
in Industry 4.0
1 Introduction
Maintenance plans and policies are among the most important decisions for all production and manufacturing processes. Companies have been implementing different maintenance activities and strategies in order to improve their overall performance in terms of production costs, waste, flexibility, time and reliability.
The traditional ways of maintenance have evolved over time with the introduction of new technologies. The earliest maintenance activities are known as reactive maintenance, where management or workers deal with problems only when they occur. With the development to a higher maturity level, companies switch to preventive and, eventually, predictive maintenance strategies.
Industry 4.0 refers to the fourth industrial revolution with the introduction of big data,
internet of things (IoT) and cyber-physical system (CPS). Today, many companies are
competing to apply industry 4.0 methodologies in order to enhance their business
performance. Companies and manufacturers are investing significantly in building up
global networks to connect their machinery, factories and facilities in order to enable
efficient communication and application of CPS.
The industrial internet of things (IIoT) in industry 4.0 has become attractive to
many businesses due to the reduction in costs of computations, storage and network
systems by the use of the cloud-computing model. The recent developments of an IoT
framework and the advancements in the sensor technologies have established an
integrated network that tightly connects systems and humans together [1]. IIoT systems
can be effectively used to create operational smart factories in which a higher level of
efficiency can be reached. IIoT objects, i.e., sensors, can be widely used to gather data
in real-time from the field for improving the productivity through advanced automatic
processes [2], improving the safety through a deeper knowledge of workers’ position
[3] and reducing the equipment failure through fast event detection capabilities [4].
Civerchia et al. [5] describe one architecture for an IIoT system, as shown in Fig. 1. Sensors measure the desired data from the environment. The gateway is a node in the network that interacts with the monitoring sensors and the "mind" of the system. Information is then passed wirelessly to the RCSR in the main control room to store data, which can be used for data analysis with advanced algorithms, pattern recognition and data visualization. Finally, the OPC server stores all the data for authorized client access.
In IIoT systems, big data can be analyzed online via a cloud with advanced
analytics at a very high speed, which can then be used by process engineers to obtain
valuable information. As a result, future industry will be able to reach a high intelli-
gence level by being capable of sharing this useful information across different nodes in
the network and reacting to different conditions and events in an optimal way through
CPS.
The concept behind CPS in Industry 4.0 is that they are intelligent systems con-
taining embedded circuits that are connected to their environment. They not only respond to predefined stimuli but are also able to communicate and interact with the surrounding environment. CPSs are networked and are thus able to send and
receive data from different locations. CPS allows the construction of applications that
can autonomously interact with environment and execute actions accordingly. Figure 2
shows the CPS framework for self-maintenance machines.
Finally, it is important to know that the cloud in Industry 4.0 provides everything as a service. Three main categories are given as follows.
• Infrastructure as a Service (IaaS): the hardware and server rooms are provided as a service rather than being purchased.
• Platform as a Service (PaaS): gives access to development languages, libraries, APIs, etc.
• Software as a Service (SaaS): provides users with a shared, web-server-based application instead of a copy of the application hosted on a local private server.
In addition, a central place for data storage is needed, which can be either on-premises or cloud based.
The structure allows data to flow from the production process to the central data storage area, where the data from different systems and devices are gathered. Afterwards, the data is fed into machine learning algorithms for extracting knowledge, features, patterns, classes and relations. After the data has been processed by the machine learning algorithms, the results are sent to dashboards for visualizing the system status and predicting future behavior. In addition, messages or alarms are sent to the respective people at the right time in order to notify them of an event that has happened or is about to happen in the production process. Data also flows in the reverse direction, where the output of the machine learning algorithms can be used as the input for autonomous decision-making. Figure 3 shows the predictive maintenance structure in Industry 4.0. It is an updated structure of the existing models, which only consider the one-way forward flow of information between different levels within a company.
Machine learning algorithms used for predictive maintenance in Industry 4.0 vary in their characteristics and functions. There is no single algorithm type that is applicable to all conditions. Each situation requires a specific algorithm depending on its characteristics, i.e., a known or unknown functional form of the system, labeled or unlabeled data, deterministic or stochastic data, etc. For instance, a good algorithm for detecting sequential or time-related behavior may use recurrent neural networks or even a combination of several algorithms, as done by Yuan et al. [8] in order to predict anomalies in the rotating-bearing behavior of a hydropower plant.
The ability of continuous learning in real-time monitoring allows for reliable appli-
cation of intelligent maintenance systems in manufacturing companies. This requires
CPS to use adaptive learning techniques to continuously update the knowledge base for
data mining.
With the increase in the complexity of machines and systems, root cause analysis for undesirable events or failures becomes a very difficult task. Several factors, e.g., personnel-related tasks and the operating conditions of equipment, may all influence certain outcomes. In this regard, one highlighted benefit of applying predictive maintenance in Industry 4.0 is the ability to discover the underlying patterns in the gathered data and to model the systems using a data-driven approach in order to provide solutions for effective root cause analysis and an automated response.
Our solution suggests the manufacturing process can be continuously improved by
using an adaptive learning predictive maintenance approach with constant updates of
the knowledge base of the system based on detected undesirable or abnormal states,
maintenance personnel information, and troubleshooting data.
The first step is to gather the relevant data that is expected to have an effect on the system. Expert opinions are needed in order to decide which data should be included and how they are measured. The second step requires data cleansing and outlier removal for consistent analysis. In the third step, missing data is estimated by different techniques, e.g., a moving average. In the fourth step, detected faults are gathered and added to the historical data, based on which machine learning algorithms, e.g., the naïve Bayes filter, neural networks, etc., can be used to detect how the outcome is affected by changes in the data. These algorithms can provide helpful solutions owing to their ability to model complex nonlinear processes with a high level of confidence. Finally, the algorithm's performance should be validated before it is applied in real operations.
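Purely as an illustration (the paper provides no implementation), the five steps could be prototyped as follows; the CSV file, column names, 3-sigma outlier rule, moving-average window and the choice of a naïve Bayes classifier are all assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Step 1: gather the relevant data (hypothetical CSV with sensor columns and a
# "fault" label built from detected/historical faults, cf. steps 1 and 4).
df = pd.read_csv("maintenance_history.csv")

# Step 2: data cleansing and simple outlier removal (3-sigma rule).
sensors = ["vibration", "temperature", "pressure"]
for col in sensors:
    mu, sigma = df[col].mean(), df[col].std()
    df = df[(df[col] - mu).abs() <= 3 * sigma]

# Step 3: estimate missing data with a moving average.
df[sensors] = df[sensors].fillna(df[sensors].rolling(5, min_periods=1).mean())

# Step 4: learn how the outcome is affected by the data (naïve Bayes here).
X_train, X_test, y_train, y_test = train_test_split(
    df[sensors], df["fault"], test_size=0.3, random_state=0
)
model = GaussianNB().fit(X_train, y_train)

# Step 5: validate the algorithm before applying it in real operations.
print("validation accuracy:", accuracy_score(y_test, model.predict(X_test)))
```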
Since the reactions of different personnel to different problems are also gathered and stored for analysis, the CPS is able to assign different problems to the right person by mobile or email notifications based on smart decisions. These decisions may consider the expertise, speed and distance, as well as other factors, of each individual.
Fig. 5. Updates of the control limits based on the real-time data (left) and the data measured by
an ultrasonic sensor using a new approach (right).
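Figure 5 concerns updating control limits from live data. A minimal sketch of one common way to do this, assuming Shewhart-style mean ± 3σ limits recomputed over a sliding window of recent readings (an assumption, not the paper's exact procedure), is:

```python
from collections import deque
import statistics


class ControlLimits:
    """Keep mean ± 3*sigma control limits over a sliding window of readings."""

    def __init__(self, window: int = 200):
        self.readings = deque(maxlen=window)

    def update(self, value: float):
        self.readings.append(value)
        mu = statistics.mean(self.readings)
        sigma = statistics.pstdev(self.readings) if len(self.readings) > 1 else 0.0
        return mu - 3 * sigma, mu + 3 * sigma   # (lower limit, upper limit)


# Example: limits = ControlLimits(); lcl, ucl = limits.update(new_sensor_value)
```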
5 Conclusion
New technologies in Industry 4.0 are paving the way for new practices and techniques
to be used in the maintenance operations. The proposed concept of Industry 4.0
maintenance is characterized by the ability to derive knowledge from historical data
and by the ability to learn from new live data in real-time.
To successfully realize the goals of Industry 4.0, equipment in a smart factory should possess self-awareness for current-condition evaluation and future-condition prediction. Thus, the idea of proactive and predictive maintenance becomes extremely important for reducing maintenance time and unnecessary disruptive checkups, improving efficiency by increasing asset life, enhancing the working environment, and creating new revenue streams by supporting product-service systems [9].
One successful example of product-based servitization is the “Total Care” scheme
established by Rolls-Royce [10], where a proactive maintenance strategy has enabled
both parties to lower costs while increasing effectiveness. Technicians are
able to monitor the key performance indicators in aircraft engines and use proactive
maintenance techniques to support a continuous and sustainable value creation process.
This research provides a framework for the implementation of predictive mainte-
nance in Industry 4.0 and describes the structure needed to support predictive main-
tenance. Furthermore, the methodology of adaptive learning is introduced in order to
enable the identification of the root causes and the optimization of alerting notifications.
Last but not least, the paper also shows how proactive learning can help manufacturers optimize their performance by monitoring process variations and updating control limits in order to improve standardization in the system.
Acknowledgement. This research is supported by the OptiLog 4.0 project financed by the
Research Council of Norway (Grant no. 283084).
References
1. Lee, J., Kao, H.A., Yang, S.: Service innovation and smart analytics for industry 4.0 and big
data environment. Procedia Cirp 16, 3–8 (2014)
2. Breivold, H.P., Sandström, K.: Internet of things for industrial automation–challenges and
technical solutions. In: 2015 IEEE International Conference on Data Science and Data
Intensive Systems, 11 December 2015, pp. 532–539. IEEE (2015)
3. Petracca, M., Bocchino, S., Azzarà, A., Pelliccia, R., Ghibaudi, M., Pagano, P.: WSN and
RFID integration in the IoT scenario: an advanced safety system for industrial plants (2013)
4. Wang, J., Zhang, L., Duan, L., Gao, R.X.: A new paradigm of cloud-based predictive
maintenance for intelligent manufacturing. J. Intell. Manuf. 28(5), 1125–1137 (2017)
5. Civerchia, F., Bocchino, S., Salvadori, C., Rossi, E., Maggiani, L., Petracca, M.: Industrial
Internet of Things monitoring solution for advanced predictive maintenance applications.
J. Ind. Inf. Integr. 7, 4–12 (2017)
6. Mobley, R.K.: An Introduction to Predictive Maintenance. Elsevier, Amsterdam (2002)
7. Dhillon, B.S.: Engineering Maintenance: A Modern Approach. CRC Press, Boca Raton
(2002)
8. Yuan, J., Wang, Y., Wang, K.: LSTM based prediction and time-temperature varying rate
fusion for hydropower plant anomaly detection: a case study. In: International Workshop of
Advanced Manufacturing and Automation, 20 September 2018, pp. 86–94. Springer,
Singapore (2018)
9. Ferreiro, S., Konde, E., Fernández, S., Prado, A.: Industry 4.0: predictive intelligent
maintenance for production equipment. In: European Conference of the Prognostics and
Health Management Society, no. June 2016, pp. 1–8 (2016)
10. Yu, H., Solvang, W.D.: Enhancing the competitiveness of manufacturers through Small-scale
Intelligent Manufacturing System (SIMS): a supply chain perspective. In: 2017 6th
International Conference on Industrial Technology and Management (ICITM), 7 March
2017, pp. 101–107. IEEE (2017)
An Introduction of the Role of Virtual
Technologies and Digital Twin in Industry 4.0
Mohammad Azarian(&), Hao Yu, Wei Deng Solvang, and Beibei Shu
1 Introduction
The history of the manufacturing industry has shown that the emergence and application of new technologies is the most important driving force behind its turning points and structural changes. The first industrial revolution occurred thanks to the invention of the steam engine; the use of electricity drove the second industrial revolution by
enabling mass and standardized production. The combination of IT technologies and
electronic devices contributed to the automation in the third industrial revolution [1, 2].
The increasing rate of customization and demand diversity has led factories and
manufacturing companies to become more specialized and smaller in many regions [3].
In order to survive in today’s competitive global market, manufacturers have to focus
on the advances of both technologies and manufacturing theories, which further leads
to the shift to a novel manufacturing paradigm: Industry 4.0.
Since the introduction of Industry 4.0 (I4.0 in this paper) at Hannover fair in 2013,
different manufacturing ideas and technologies have been developed, among which
Intelligent Analytics and Cyber Physical Systems (CPS) are unified as two major
technical drivers of I4.0 [4]. An evaluation of CPS dimensions indicates that it internally encompasses most of the existing technologies, which are regarded as the nine fundamental elements of I4.0 [5]. Furthermore, research related to those nine elements suggests that the Internet of Things (IoT) not only provides widespread connectivity but also forms the basis for boosting the overall intelligence and integration level [6–8]. As a result, CPS and IoT can be considered the two major fundaments of the technological structure of I4.0.
The elements of the CPS mainly contribute to increasing the interoperability and consciousness of a manufacturing system. More precisely, they focus on incorporating all machines and components in a cyber environment and then adding consciousness to the system in order to increase the intelligence level within the unified factory [9]. In this regard, this paper investigates the interoperability aspect and evaluates the confronting challenges. Section 2 presents a structural framework that clearly demonstrates the steps to be taken in order to achieve I4.0. Through a case study, Sect. 3 discusses the role of virtual technologies and simulation. In order to tackle the most significant challenge, Sect. 4 puts forward the concept of the digital twin so that a bi-directional connection between the virtual world and the physical factory can be established. Section 5 concludes the paper.
CPS consists of physical components and machines, which are interconnected. The associated data is collected in an integrated cyber environment, which creates the opportunity for remote control and data processing [10, 11]. Many believe that the implementation of CPS is the key element to achieving I4.0. In a comparison between the current status of manufacturing and the future production system in I4.0, the component, the machine and the production system are the three key factors at the factory level. The advancement of their attributes and technologies provides opportunities for the realization of I4.0 [4]. Based on this comparison, which focuses mainly on the intelligence and consciousness aspects, a structural model is given in Fig. 1. It considers five levels of the implementation of CPS: connection, conversion, cyber, cognition and configuration [12].
levels of CPS within the three stages of intelligence and provides more elaboration of
this aspect. This idea converts the categorical framework of CPS into a 15-step matrix
with almost the same principles, as shown in Fig. 2. The main purpose of this model is
to make a more perceptual roadmap of CPS and to provide a convergent framework for
the realization of I4.0.
The study of the nine technological components of I4.0 highlights simulation and
Augmented Reality (AR) as two of the most important elements. Simulation provides a
virtual, and mostly 3D, environment to establish more opportunities for comparing
different setups and optimizing the overall process. The general purpose of AR is to
create an interface between factory and human or software [5]. A combination of both
technologies leads to a more generic category: Virtual Technology (VT), which may
potentially contribute to both aspects of CPS. With this perspective, VT provides a
functional production system in a virtual environment where all the machines and
components work together in an integrated and intelligent manner. Moreover, VT
focuses on the interoperability aspect and opens up the possibility to test and imple-
ment new ideas, designs and algorithms.
The general attitude towards VT has been discussed in many research works and has emerged in a variety of forms and terms within the scope of I4.0. One study regarding
the intelligence aspect in the manufacturing sectors, mainly SMEs [8], demonstrates the
significance of artificial intelligence for meeting the main features of an intelligent
manufacturing system, i.e., flexibility and reconfigurability. This idea emphasizes the
role of VTs, i.e., Virtual Reality (VR) and Virtual Manufacturing (VM), as the tools to
improve the intelligence level and to achieve I4.0 [8]. Another novel paradigm is
the Virtual Factory (VF), which suggests situating not only a part but the entire factory in software equipped with a simulation module in order to incorporate all the active
elements in a factory. This idea emphasizes some features such as agility, scalability,
and so forth. It considers four main elements for the implementation: reference model,
virtual factory manager, functional modules and integration of knowledge [13]. Thanks
to the possibility of providing a tangible virtual 3D environment, the widespread
domain of VTs, i.e., VR, not only supports the simulation attribute but also discovers
some critical issues of new ideas before the implementation phase. In this regard, one study divides the dimensions of VR into three functional categories or phases:
design, operation management and manufacturing processes [14]. This idea is one of
the most important driving forces for manufacturers to give suggestions to software
developers so that the focus of each simulation software can be specified.
Given the importance of VT in the field of I4.0, a simulation project was carried out in order to demonstrate a unified factory in a virtual environment using the Visual Components 4.1 simulation software (link: https://youtu.be/11Ax0OZUrEU). According to the afore-
mentioned classification, this example focuses on the manufacturing process where all
the equipment including machines, sensors, robots and other technical and generic
facilities are integrated in a 3D graphical environment in order to form a fully auto-
mated factory. The main task of this factory is to produce four cylindrical parts, taking quality control and a reprocessing unit into account. After the packaging process, the products are delivered to the customers in the intended batch size. Figure 3 represents the sequential flowchart of the factory's logic.
The very first matter to be considered in a manufacturing process is the plant layout, which strikingly affects the internal logistics, i.e., the flow of material. In this example, a U-shaped layout is chosen, as shown in Fig. 4(A). The integration and cooperation among modules and equipment is accomplished by Python programming, which is used not only to control and unify the facilities but also to provide the foundation for increasing flexibility and agility. From a logistics point of view, this example offers some noteworthy approaches. One example is the transportation of reprocessed parts, which is performed by an AGV equipped with a universal robot, as shown in Fig. 4(B). This idea assists a company in decreasing some of the cost drivers, e.g., the purchasing cost of conveyors, maintenance expenses, etc. Moreover, it provides companies with opportunities to test different layout configurations.
Fig. 4. (A) Factory layout and flow of material; (B) AGV with a universal robot
An ideal CPS connects all components and machines in order to provide a unified system in a cyber environment. This integrates the real factory with its virtual model in a simulation environment and provides a bi-directional connection and data flow between them in order to control the physical system while, at the same time, reflecting real changes in the virtual environment [15]. The improvements in VTs are capable of providing such a unified system and enable engineers to test and implement new ideas and intelligent algorithms. However, these advances are provided in a virtual environment, and the connection between the real world and the virtual environment is the most significant challenge. A new concept, the Digital Twin (DT), aims to address this issue. It provides a digital representation of the physical components/machines of a real factory in a virtual environment, where real-time and instant synchronization is established between them [16].
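To make the bi-directional synchronization concrete, the following deliberately simplified, dependency-free sketch mirrors a physical machine's state in a virtual model and relays commands back; in practice the two sides would communicate over a protocol such as OPC UA or MQTT, and all class and attribute names here are hypothetical.

```python
class PhysicalMachine:
    """Stand-in for real equipment: exposes sensor readings, accepts commands."""

    def __init__(self):
        self.speed = 0.0

    def read_sensors(self):
        return {"speed": self.speed}

    def apply_command(self, command):
        self.speed = command.get("set_speed", self.speed)


class DigitalTwin:
    """Virtual model mirroring the machine state and relaying commands back."""

    def __init__(self, machine: PhysicalMachine):
        self.machine = machine
        self.state = {}

    def sync_from_physical(self):
        self.state = self.machine.read_sensors()   # real -> virtual

    def send_command(self, command):
        self.machine.apply_command(command)        # virtual -> real
        self.sync_from_physical()                  # keep the mirror consistent


machine = PhysicalMachine()
twin = DigitalTwin(machine)
twin.send_command({"set_speed": 1500.0})
print(twin.state)   # {'speed': 1500.0}
```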
Inspired by the concept of DT, one study suggests that improving simulation software is an essential step in this regard [17]. It proposes a five-tier logic to establish multi-layer simulation software based on the Model-View-ViewModel (MVVM) paradigm and follows a sequence to simulate the DT of a simple process. Although it is only adaptable to a limited number of processes, its optimization feature can be considered a noticeable achievement [17]. Another approach proposes the concept of a Versatile Simulation Database (VSD), which incorporates DT and Virtual Testbeds in order to achieve improved flexibility and functionality through the integration of different types of simulation; driving assistance systems, e.g., automatic parking, are a close instance in this regard [18]. Another industrial work presents a DT of a robotized press-brake processing line of a factory in South Korea using ROS Gazebo simulation, as shown in Fig. 5. In this process, information about the conveyors, parts, machines and robots is received from the sensors and robot controller and is then reflected in the simulation environment. To establish the connection for controlling the physical elements, FlexGui 4.0 offers an online page to control the robots, machines and conveyors [19].
5 Conclusion
In this research, the role of VTs and DT in realizing the concept of I4.0 is thoroughly
discussed. First, the concept of I4.0 is studied, where CPS and IoT are considered as the
fundamental bases of it. Inspired by recent developments in the architecture of CPS, a categorical approach is proposed and elaborated to a higher degree in order to
incorporate existing ideas and provide a unified and convergent framework for the
realization of I4.0.
The study of the fundamental elements of I4.0 has revealed that VTs, e.g., simulation and AR, are among the most crucial technologies for realizing the I4.0 concept; they provide an easy, flexible, visualized and cost-effective way to evaluate and optimize different manufacturing scenarios, decisions and process integration. However, establishing a bi-directional connection between the virtual environment and physical systems is still a significant challenge, which is currently the focus of both academics and practitioners.
This issue represents a huge gap between the current status of manufacturing systems and the ideal condition in I4.0. In this regard, the concept of DT is introduced as the core module to address this challenge. DT provides a real-time and bi-directional connection, visualizes the product and equipment, and offers an integrated interface to control the physical system through the virtual environment. Consequently, DT not only enables the interoperability phase but also provides opportunities for improving consciousness and intelligence.
References
1. Thangaraj, J., Lakshmi Narayanan, R.: Industry 1.0 to 4.0: The Evolution of Smart Factories
(2018)
2. Schwab, K.: The fourth industrial revolution, Currency (2017)
3. Krämer, W.: Mittelstandsökonomik: Grundzüge einer umfassenden Analyse kleiner und
mittlerer Unternehmen [SME Economics: Principles of a comprehensive analysis of SMEs],
München (2003)
4. Lee, J.: Industry 4.0 in big data environment. Ger. Harting Mag. 1, 8–10 (2013)
5. Rüßmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., Harnisch, M.:
Industry 4.0: the future of productivity and growth in manufacturing industries, vol. 9,
pp. 54–89. Boston Consulting Group (2015)
6. Erboz, G.: How to define industry 4.0: main pillars of industry 4.0. In: 7th International
Conference on Management (ICoM 2017), At Nitra, Slovakia (2017)
7. Rahman, H., Rahmani, R.: Enabling distributed intelligence assisted Future Internet of
Things Controller (FITC). Appl. Comput. Inform. 14(1), 73–87 (2018)
8. Huang, T., Solvang, W.D., Yu, H.: An introduction of small-scale intelligent manufacturing
system. In: 2016 International Symposium on Small-scale Intelligent Manufacturing
Systems (SIMS), Narvik, Norway (2016)
9. Qin, J., Liu, Y., Grosvenor, R.: A categorical framework of manufacturing for industry 4.0
and beyond. Procedia Cirp 52, 173–178 (2016)
10. Baheti, R., Gill, H.: Cyber-physical systems. Impact Control Technol. 12(1), 161–166
(2011)
11. Molina, E., Jacob, E.: Software-defined networking in cyber-physical systems: a survey.
Comput. Electr. Eng. 66, 407–419 (2018)
12. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for industry 4.0-
based manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
13. Sacco, M., Pedrazzoli, P., Terkaj, W.: VFF: virtual factory framework. In: 2010 IEEE
International Technology Management Conference (ICE) (2010)
14. Mujber, T.S., Szecsi, T., Hashmi, M.S.: Virtual reality applications in manufacturing process
simulation. J. Mater. Process. Technol. 155, 1834–1838 (2004)
15. Lee, J.: Smart factory systems. Informatik-Spektrum 38(3), 230–235 (2015)
16. Negri, E., Fumagalli, L., Macchi, M.: A review of the roles of digital twin in CPS-based
production systems. Procedia Manuf. 11, 939–948 (2017)
17. Tavares, P., Silva, J.A., Costa, P., Veiga, G., Moreira, A.P.: Flexible work cell simulator
using digital twin methodology for highly complex systems in industry 4.0. In: Iberian
Robotics Conference (2017)
18. Schluse, M., Rossmann, J.: From simulation to experimentable digital twins: simulation-
based development and operation of complex technical systems. In: 2016 IEEE International
Symposium on Systems Engineering (ISSE) (2016)
19. Shu, B., Sziebig, G., Solvang, B.: Introduction of cyber-physical system in robotized press-
brake line for metal industry. In: International Workshop of Advanced Manufacturing and
Automation (2017)
Model Optimization Method Based on Rhino
1 Introduction
With the acceleration of the new digital era characterized by networking, information
and intelligence, the development of modeling and simulation technology has entered
the digital age. The digital twin is an important form of modeling and simulation application in this age. It has been widely used in smart manufacturing, smart factories, intelligent buildings, smart cities and many other fields, showing strong vitality.
Based on the equipment of a smart factory and combined with the key technologies of digital twins, synchronous virtual-real data communication and virtual-real mapping technology are applied to the workshop production line to realize a virtual-real digital simulation of the physical entities of the production line. According to the actual application requirements, a virtual model that completely maps the smart factory can be built, and Unity 3D can be used to realize a virtual roaming display of the running process of the intelligent production line.
However, due to the complexity of the workshop production line, the virtual model may become too large and contain too many texture materials, so that the entire system file is too large, the computer is overloaded during virtual roaming, loading is too slow, and the display may even freeze. It is therefore necessary to optimize the roaming scene and eliminate non-essential models or materials to simplify the system model.
2.2 3D Engine—Unity 3D
Unity 3D is a game engine developed by Unity Technologies. It is an easy-to-use
interactive graphical and comprehensive game development tool that enables us to
create 3D computer games, visual roaming of buildings, and real-time interactive 3D
animation. The game engine supports a variety of system platforms; its applications run not only on PC, Mac and other computer platforms, but also on mobile devices such as Android and iOS, and can be published directly to web pages.
As a good 3D engine, the main features of Unity 3D are as follows:
(1) Excellent visual effects and rendering power. Unity 3D supports the Windows
DirectX 11 graphics API. During the run, using Shader Model 5.0, the high-
precision restore model reveals the details of the model without sacrificing per-
formance. Unity 3D uses gamma-corrected illumination and HDR rendering to
create good lighting effects.
(2) Powerful and concise editor. A powerful internal working set speeds development and allows developers to work on a variety of different applications with as little repetitive work as possible.
(3) Visual operation function. The encoding and testing functions can be performed at
the same time, and the Game window and the Scene window can be played
synchronously, which is convenient for developers to modify in time.
(4) Cross-platform features. Unity 3D has powerful cross-platform capabilities: a single code base can be published to Windows, Mac, iPhone, Android and other platforms with only small modifications.
Because of these characteristics, Unity 3D has a wide range of applications. It has
considerable advantages in realizing the virtual roaming display of the intelligent
production line operation process.
When creating a virtual roaming scene in Unity 3D, there may be too many or too large models and too many texture materials, causing the entire system file to be too large, the computer to be overloaded, loading to be too slow, the rendering rate (frame rate, FPS) to decrease, and even the display to freeze. In this case, the roaming scene needs to be optimized by eliminating some models and materials and simplifying the system model. Since a model affects the scene rendering rate in Unity 3D mainly through its mesh count (the number of triangle faces), the main goal of the model optimization studied in this paper is to reduce the number of mesh faces of the scene model.
Fig. 2. Flowchart for optimizing the model using Rhino’s own tools
The process of optimizing the model using Rhino’s own tools is as follows:
(1) Import the model into Rhino. In the early stage, you need to ensure that the model
file is in the format supported by Rhino, such as: .stp, .igs, .3dm, etc.;
(2) Determine whether the imported model is a mesh. If it is, use the ReduceMesh command directly to reduce the mesh. If not, execute the Mesh command first to convert the NURBS surfaces into mesh surfaces, and then reduce the mesh;
(3) Check the reduced model against the point-line-surface thresholds to determine whether redundant points, lines or surfaces remain. If they do, the model needs to be reconstructed, and the reconstructed NURBS surface model is then converted into a mesh with the Mesh command. If not, the model optimization is complete;
(4) Finally, export the optimized model and ensure that its file format is supported by
Unity 3D.
At this point, if you want to continue optimizing the model, you can only select these meshes individually and run the ReduceMesh command again. To a certain extent, this is time-consuming and labor-intensive work.
Rhino is convenient for designers and includes many auxiliary design plug-ins such as V-Ray, T-Splines, SketchUp, HyperShot, KeyShot, Grasshopper and more. Among them, Grasshopper is a plug-in based on the Rhino platform that generates models by building program algorithms. Simple command combinations can be assembled into a whole logical algorithm, and a desired model can be generated from a single group of elements. Grasshopper also provides script editing based on Visual Basic, C# and Python, and has a number of auxiliary plug-ins that improve work efficiency.
According to Rhino's features and tools, using Rhino to optimize a model requires a metric that measures the extent to which a given reduction will change the shape and topology of the original mesh. A few possible strategies are:
(1) Reduce any mesh, but without exceeding a constraint on some measure of how much the mesh is changed.
(2) Identify the meshes that are at or near "maximum reduction", since any subsequent reduction may result in mesh breakage.
(3) Use some "adaptive reduction" process in which each mesh of the model is reduced as much as possible without changing its shape too much.
According to the above strategies, a Grasshopper script is edited as follows to filter the meshes during selection (a sketch of such a script is given after the list).
(1) Start with no selection;
(2) Traverse all visible meshes in the scene;
(3) For each mesh, calculate its surface area and the number of mesh faces it contains;
(4) Set the parameter "density" to the ratio of the number of mesh faces to the surface area;
(5) If the "density" is higher than a given threshold, the mesh is to be reduced; otherwise it does not need to be reduced;
(6) Perform the ReduceMesh command on the meshes that need to be reduced and bake them to Rhino's "reduction surface" layer; bake the meshes that do not need to be reduced to Rhino's "no reduction" layer.
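The authors' Grasshopper script is not reproduced in the paper; the following is a hedged sketch of the density-based filtering steps above, written with rhinoscriptsyntax so it could run in Rhino's Python editor or a Grasshopper Python component. The density threshold is an assumed value that would be tuned per scene.

```python
import rhinoscriptsyntax as rs

DENSITY_THRESHOLD = 50.0   # faces per unit of surface area (assumed value)

rs.AddLayer("reduction surface")
rs.AddLayer("no reduction")

for mesh_id in rs.ObjectsByType(32) or []:        # 32 = mesh objects
    face_count = rs.MeshFaceCount(mesh_id)
    area_info = rs.MeshArea([mesh_id])            # [count, area, error]
    area = area_info[1] if area_info else 0.0
    density = face_count / area if area > 0 else 0.0

    if density > DENSITY_THRESHOLD:
        # Candidate for reduction; ReduceMesh would then be applied to this
        # layer (step 6 above), e.g. by scripting the _ReduceMesh command.
        rs.ObjectLayer(mesh_id, "reduction surface")
    else:
        rs.ObjectLayer(mesh_id, "no reduction")
```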
After editing the script as in Fig. 3, the meshes can be filtered before reduction. The meshes that need to be reduced end up on the "reduction surface" layer after the ReduceMesh command, and the meshes that do not need reduction are kept on the "no reduction" layer. Therefore, using Grasshopper not only retains the original model, but also makes the whole model optimization process more automated and efficient.
4 Comparative Analysis
of mesh faces can be reduced to maximize the optimization of the large scene model, so that the loading time in Unity 3D is reduced, the delay is low, and the roaming speed is also improved.
5 Conclusion
This paper studies the Rhino-based model optimization method. It mainly describes the process of reducing mesh faces with two kinds of tools: the InstantMeshes and QuadriFlow tools in the Rhino plug-in CreateQuadMesh, and Rhino's own tools. By comparing the same model at the same degree of mesh reduction, the pros and cons of the two kinds of tools are analyzed; it is concluded that Rhino's own tools have a better effect on optimizing models with too many mesh faces and have little effect on the appearance of the model. In addition, in actual operation it was found that reducing the mesh with Rhino's own tools is cumbersome, and an improvement was proposed: the Grasshopper plug-in was used to edit a script that filters the meshes, and on this basis the ReduceMesh command was applied to improve the efficiency of the actual work.
Construction of Equipment Maintenance
Guiding System and Research on Key
Technologies Based on Augmented Reality
1 Preface
scene and the virtual maintenance scene can be integrated; when the real environment of the worker changes, the virtual information presented to the user also changes while remaining consistent. Compared with other maintenance systems, this is more visual, intuitive and real-time, which supports workers in obtaining rich information and assists them in completing their work.
Boeing first applied augmented reality technology to the connection of power
cables and the assembly of wiring wires in aircraft manufacturing. It is reported that the
DART 510 aviation engine can reduce service time by 56% when it is used with the
augmented reality service system [2]. Samit et al. of the Algerian Centre for High
Technology and Development used AR for pump maintenance and developed the
ARPPSM system to solve the maintenance problems of complex photovoltaic pump
systems [3].
In China, Cui et al. [4] built an equipment maintenance guiding system based on augmented reality, adopted a modular design method, and demonstrated it on maintenance tasks such as engine piston damage and clutch plate replacement. Zhao et al. [5] focused on the key technical problems of realizing an augmented reality auxiliary maintenance system; starting from the practical application needs of maintenance, an augmented reality auxiliary maintenance prototype system was constructed, optimized and improved.
Against the above background, this paper carries out research on maintenance guiding technology for the digital equipment of a smart factory, which is of great significance for improving the quality of equipment maintenance and the production efficiency of the factory. Based on an analysis of the advantages of applying augmented reality technology to equipment maintenance, this paper puts forward an equipment maintenance guiding system based on augmented reality and expounds the research content and key technologies involved.
Fig. 1. The framework of equipment maintenance guiding system based on augmented reality
Augmented reality technology superimposes virtual objects onto the physical world. Computer technology is used to build maintenance guiding text, maintenance interpretation voice, 3D virtual models and maintenance guiding animation sequences and to combine them with the physical world, so that the worker can achieve human-computer interaction by gestures and voice and finally complete the virtual-real fused maintenance operation.
(1) The conversion from the virtual space coordinate system to the real space coor-
dinate system is used to determine the position and posture of the virtual object in
the real world.
(2) The conversion from the real space coordinate system to the camera space coor-
dinate system is used to determine the relative position and posture between the
real world and the camera, and the feature points of the video stream are acquired
by the camera in real time and matched to obtain a conversion relationship.
(3) The conversion of the camera space coordinate system to the imaging plane
coordinate system is to correctly display the generated virtual object on the
projection surface to achieve virtual-real fusion.
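Written compactly in standard pinhole-camera notation (a conventional formulation, not taken from the paper), the chain of transformations (1)–(3) that places a virtual object correctly in the camera image is:

```latex
% real-space point X_w -> camera coordinates, with extrinsics R, t estimated
% by the real-time tracking registration
X_c = R\,X_w + t
% camera coordinates -> imaging-plane pixel coordinates via the intrinsics K
% (s is a scale factor)
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K\,X_c,
\qquad
K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
```

A virtual object defined in the virtual coordinate system is first mapped into real-space coordinates (conversion 1) and then rendered with the same R, t and K, so that it aligns with the live camera image.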
maintenance guiding process. The information mainly includes: the text of each maintenance step, that is, the text prompt of each step in the traditional maintenance manual; a prompt on whether the maintenance operation is completed, that is, whether the worker correctly completed the previous maintenance step according to the operation instruction; and a prompt on whether the action is normal, that is, whether the worker has used the correct repair tool for the maintenance operation. Only by combining virtual maintenance animations with comprehensive maintenance information can the worker be guided to complete the correct repair process.
The information acquisition module collects information about the maintenance equipment through the real camera, including image information from various angles, related attribute information, equipment running-status information, the fault database of the equipment, etc.; these are used for offline model training and then by the virtual scene generation module. The virtual scene generation module matches the corresponding virtual maintenance scene from the real-time camera images. The real-time tracking registration module computes the alignment of the virtual and real coordinate systems and the registration of the virtual and real objects by estimating the real-time pose of the camera. The virtual-real fusion display module displays the virtual and real objects in the image plane.
5 Summary
With the advent of the information age, paper manuals are slow to update and difficult to store, and their large amount of complex professional information is difficult to understand, which limits the efficiency of maintenance personnel. Based on an analysis of the drawbacks of traditional maintenance, this paper proposes an auxiliary maintenance method based on augmented reality, gives the system framework and operation flow of equipment virtual-real fusion maintenance guiding, and elaborates the research content and key technologies involved in the method. With this method, maintenance personnel can receive virtual information guidance in real time during actual work and can observe not only the actual maintenance scene but also the superimposed virtual information, unified in the field of view; this helps them complete maintenance tasks better and improves work efficiency.
References
1. Industry 4.0: How augmented reality is transforming manufacturing. Smart Fact. (09), 28–30
(2018)
2. Liu, F., Liu, P., Xu, B.: Key technology research on enhanced reality-enhanced weapon and
equipment maintenance systems. Fly. Missiles (09), 74–80 (2017)
3. Schwald, B., Laval, B.D., Sa, T.O., et al.: An augmented reality system for training and
assistance to maintenance in the industrial context. In: International Conference in Central
Europe on Computer Graphics, pp. 425–432 (2013)
4. Cui, B., Wang, W., Qu, J., Li, Z.: Design and realization of equipment induction maintenance
system based on augmented reality. Firepower Command Control 41(11), 176–181 (2016)
5. Zhao, S.: Augmented reality assisted maintenance key technology research. Hebei University
of Technology (2016)
6. Zeng, Y.: Review and Outlook on augmented reality virtual masking methods. J. Syst. Simul.
(1), 1–10 (2014)
7. Sun, C., Zhang, M., Li, Y., et al.: Human-natured interaction in augmented reality
environment. J. Comput.-Aided Des. Graph. 23(4), 697–704 (2011)
A New Fault Identification Method Based
on Combined Reconstruction Contribution
Plot and Structured Residual
1 Introduction
Various fault identification algorithms have been developed based on fuzzy logic theory, multivariate statistical analysis, artificial intelligence and other types of algorithms. Among them, algorithms based on statistical analysis and its improvements have been widely used. Miller [1] first introduced the contribution plot method to reflect the contribution of each variable to the statistics. Based on this work, various PCA-model-based contribution plot methods have been proposed subsequently, including complete decomposition contribution, partial decomposition contribution, diagonal contribution, angle-based contribution and reconstruction-based contribution.
The contribution plot can be used to visualize the results for the fault variables with a bar chart, which reflects the severity of the fault variables and visualizes the impact of the fault variables on the non-fault variables. The contribution rates of all principal variables are reflected in the contribution plot, which can be used to intuitively observe the statistical contributions and identify abnormal data.
For a single fault variable, the fault sample x can be decomposed into normal and faulty components,

$x = x^* + n_i f_i$   (1)

where $x^*$ is the normal component, $n_i$ is the direction vector of the fault and $f_i$ is the fault amplitude of the fault variable.
$z_i = x - \xi_i f_i \quad (2)$

$c_i^{D} = \left(x\, C^{-0.5}\, \xi_i\right)^{2} \quad (4)$

$\frac{d\, D(z_i)}{d f_i} = 0 \quad (6)$
When a fault occurs, the reconstruction method is utilized to localize it. When the fault direction $\xi_i$ is correctly localized, the fault variable identification is correct if the monitoring measure of the reconstructed sample, $D(z_i)$, is lower than the control threshold. Otherwise, the fault variable identification is incorrect.
For a single variable, the reconstruction contribution [2] can be calculated as

$c_i^{RBC} = \mathrm{index}(x) - \mathrm{index}(z_i) = x\, C^{-1} \xi_i \left(\xi_i^{T} C^{-1} \xi_i\right)^{-1} \xi_i^{T} C^{-1} x^{T} \quad (8)$
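As a rough illustration of Eq. (8), the following NumPy sketch computes single-variable reconstruction-based contributions, assuming the monitoring index is written as index(x) = xᵀMx for a given index matrix M (here M plays the role of C⁻¹ in Eq. (8)); the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def rbc_contributions(x, M):
    """Single-variable reconstruction-based contributions (cf. Eq. (8)).

    x : 1-D sample vector of length m.
    M : (m, m) positive semi-definite matrix defining the monitoring
        index, index(x) = x^T M x (e.g. an inverse covariance matrix).
    The i-th entry returned is (xi^T M x)^2 / (xi^T M xi), where xi is
    the i-th unit vector, i.e. the fault direction of variable i.
    """
    m = x.shape[0]
    rbc = np.empty(m)
    for i in range(m):
        xi = np.zeros(m)
        xi[i] = 1.0                       # fault direction of variable i
        rbc[i] = (xi @ M @ x) ** 2 / (xi @ M @ xi)
    return rbc

# Example usage: the variable with the largest contribution is the fault candidate.
# fault_variable = int(np.argmax(rbc_contributions(x_sample, M_index)))
```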
For multiple variables, the reconstruction contribution [13] can be calculated as,
3 Structured Residual
3.1 The Principle of Structured Residual
From the analysis in the previous section, it can be seen that the fault direction vector $\xi_i$ directly determines the fault localization accuracy. In order to improve the accuracy, it is required to obtain the correct fault direction vector $\xi_i$. In this paper, the fault direction vector $\xi_i$ based on the PCA structured residual is proposed, which is further illustrated as follows.
Since the original residual $t_e = P_e^T x$ represents the deviation of the monitored variables from the principal component subspace (PCS) at each sampling time, it can be used to construct the structured residual. Since the PCS and the residual subspace (RS) are orthogonal complements, the following can be obtained for a system with a fault,
where $x_0$ represents the true variable value without the impact of measurement error and noise, $P_e^T$ represents the fault mapping vector and $f$ represents the incidence matrix. Based on (10) and (11), it can be obtained that

$\phi = P_e^T \xi \quad (12)$
exists for (14), and there exists the following relationship between its rank and the number of rows in $\phi$,
To guarantee that $c_i$ corresponds to the fault indicated by the value 1 in the i-th row of the incidence matrix, the i-th row of the incidence matrix should satisfy the following necessary condition,
Fig. 1. (a) Basic RBC method, (b) PCA-MRCP method, (c) PPCA-RCP method, (d) RCP-SR method
Fig. 2. (a) Basic RBC method, (b) PCA-MRCP method, (c) PPCA-RCP method, (d) RCP-SR method
The fault localization analysis is shown in Fig. 2(a), (b), (c) and (d). Although the traditional RBC method identifies the fault variables as x2, x9 and x21, the localized fault variables are not fully consistent with the actual fault variables, and the trailing (smearing) effects are rather significant. Although the PCA-MRCP method can localize the fault variables, the localization results are inconsistent and affect the determination of the fault variables. The PPCA-RCP method can localize the fault variables accurately, but the trailing problem still exists. The proposed RCP-SR method resolves the drawbacks of the above methods and shows a clear advantage.
6 Conclusion
Acknowledgements. This work is supported by the Special Funds for Science and Technology
Innovation Strategy in Guangdong Province of China (No. 2018A06001).
References
1. Miller, P., Swanson, R.E., Heckler, C.E.: Contribution plots: a missing link in multivariate
quality control. Appl. Math. Comput. Sci. 8(4), 775–792 (1998)
2. Alcala, C.F., Qin, S.J.: Analysis and generalization of fault diagnosis methods for process
monitoring. J. Process Control 21(3), 322–330 (2011)
3. Gertler, J., Cao, J.: Design of optimal structured residuals from partial principal component
models for fault diagnosis in linear systems. J. Process Control 15(5), 585–603 (2005)
4. Hu, Z., Chen, Z., Gui, W.: Adaptive PCA based fault diagnosis scheme in imperial smelting
process. ISA Trans. 53(5), 1446–1455 (2014)
5. Lin, S., Jia, L., Qin, Y.: Research on urban rail train passenger door system fault diagnosis
using PCA and rough set. Open Mech. Eng. J. 8(1), 340–348 (2014)
6. Kerkhof, P.V., Vanlaer, J.: Analysis of smearing-out in contribution plot based fault isolation
for Statistical Process Control. Chem. Eng. Sci. 104(50), 285–293 (2013)
7. Mnassri, B., El Adel, E.M., Ouladsine, M.: Reconstruction-based contribution approaches
for improved fault diagnosis using principal component analysis. J. Process Control 33, 60–
76 (2015)
8. Liu, J., Chen, D.-S.: Fault isolation using modified contribution plots. Comput. Chem. Eng.
61, 9–19 (2014)
9. Wang, Z., Feng, S., Chang, Y.: Improved KPCA fault identification method based on data
reconstruction. J. Northeast. Univ. (Nat. Sci.) 33(4), 500–503 (2012)
10. Guo, X., Liu, S., Li, Y.: Fault detection of multi-mode processes employing sparse residual
distance. Acta Automatica Sinica 45(3), 617–625 (2019)
11. Li, Y., Wu, J., Wang, G.: k-nearest neighbor imputation method and its application in fault
diagnosis of industrial process. J. Shanghai Jiaotong Univ. 49(6), 830–836 (2015)
12. Zhang, C., Gao, X., Xu, T.: Fault detection strategy of independent component–based k
nearest neighbor rule. Control Theory Appl. 35(6), 806–812 (2018)
13. Li, G., Alcala, C.F., Qin, S.J.: Generalized reconstruction-based contributions for output-
relevant fault diagnosis with application to the Tennessee Eastman process. IEEE Trans.
Control Syst. Technol. 19(5), 1114–1127 (2011)
Prediction of Blast Furnace Temperature
Based on Improved Extreme
Learning Machine
Xin Guan
1 Introduction
The iron and steel industry is a pillar industry in China, but at the same time it has gradually become a major source of air pollution and energy consumption. The blast furnace (BF) is the core and major iron-making container [1]. A number of studies have shown that a reasonable BF temperature is key to steady BF operation. Because of the positive correlation between BF temperature and the silicon content in hot metal, the silicon content is commonly used to reflect changes in BF temperature. Taking into account the large time lag and the harsh environment inside the BF, mechanism modeling is impractical. In this case, using data-driven techniques to predict the silicon content in hot metal is a good choice.
In the existing literature, there are many prediction models for silicon content, including linear and non-linear models. Among the linear models, a VARMAX model was developed for the prediction of the silicon content in hot metal [2]. In addition, partial least squares (PLS) [3] and other algorithms [4, 5] have been used in silicon content prediction models. For the non-linear models, the major algorithms used for prediction modeling can be divided into support vector machine (SVM), neural network and extreme learning machine (ELM) approaches. An LSTM-improved RNN was used to establish a prediction model [6]. Moreover, an algorithm combining an RBF neural network with particle swarm optimization was also used for silicon content prediction [7]. Besides SVM, an improved ELM algorithm [8] and an improved multi-layer ELM [9] have also been applied to the prediction of the hot metal silicon content. In recent years, ELM and its variants have been successfully applied in many real-world fields [10–12].
In order to overcome the issues of conventional gradient-based learning algorithms for SLFNs [13], the ELM algorithm was proposed in 2006. Compared with other neural network algorithms, ELM has a faster training speed and universal approximation capability.
The rest of this paper is organized as follows. Section 2 presents overviews of ELM and the flower pollination algorithm and introduces the proposed improved ELM algorithm. The prediction model of blast furnace temperature is presented in Sect. 3. Simulation results are shown in Sect. 4, and the conclusions of this paper are given in Sect. 5.
where $X_i^{t+1}$ and $X_i^{t}$ are the positions of pollen $X_i$ at iterations $t+1$ and $t$, respectively, $g$ is the best solution found among the current solutions, and $L$ is a control parameter representing the random step strength, which obeys a Lévy flight distribution. The parameter $L$ is defined in Eq. 2.
$L \sim \frac{\lambda\,\Gamma(\lambda)\,\sin(\pi\lambda/2)}{\pi}\cdot\frac{1}{s^{1+\lambda}}, \quad (s \gg s_0 > 0) \quad (2)$

where $\Gamma(\lambda)$ is the gamma function.
where $X_j^{t}$ and $X_k^{t}$ are selected randomly from the same population, and $\varepsilon$ is a random number drawn from a uniform distribution on [0, 1].
where the model contains L hidden layer nodes, $\omega_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{iL}]^T$ and $b_i$ are the learning parameters generated randomly, $o_j$ is the corresponding output value of the j-th sample, and $g(x)$ is the activation function. The above Eq. 4 can be expressed in matrix form as follows:

$H\beta = T \quad (5)$

where

$H(\omega_1, \ldots, \omega_L, b_1, \ldots, b_L, x_1, \ldots, x_N) = \begin{bmatrix} g(\omega_1 \cdot x_1 + b_1) & \cdots & g(\omega_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(\omega_1 \cdot x_N + b_1) & \cdots & g(\omega_L \cdot x_N + b_L) \end{bmatrix}_{N \times L} \quad (6)$
is called the hidden layer output matrix. According to least squares theory, $\beta$ can be calculated by Eq. 7:

$\hat{\beta} = H^{+} T \quad (7)$
extent. In order to reduce the number of hidden nodes and improve the generalization ability, the FPA-ELM algorithm, which combines FPA and ELM, is proposed. The new algorithm uses FPA to optimally select the input-layer weights and the biases of the hidden nodes. The learning process of the new algorithm is as follows:
① Initialize all N flowers in the population randomly. Each flower can be represented as a vector:

$F_i = \left[\omega_{11}^{(i)}, \omega_{12}^{(i)}, \ldots, \omega_{1k}^{(i)}, \omega_{21}^{(i)}, \omega_{22}^{(i)}, \ldots, \omega_{2k}^{(i)}, \ldots, \omega_{m1}^{(i)}, \omega_{m2}^{(i)}, \ldots, \omega_{mk}^{(i)}, b_{1}^{(i)}, \ldots, b_{k}^{(i)}\right] \quad (8)$

where $F_i$ represents the i-th flower in the swarm, and $\omega_{mk}^{(i)}$ and $b_k^{(i)}$ are the input layer weights and the biases of the hidden layer, respectively.
② The error between the real values of the samples and the predicted values of the ELM model is taken as the fitness function. Find the best solution g in the initial swarm.
③ Define a switch probability $p \in [0, 1]$.
④ while (t < MaxIteration)
for i = 1:N
⑤ if rand < p
Do global optimization via Eq. 1
⑥ else
Do local optimization via Eq. 3
⑦ end if
end for
⑧ Find the current best solution g
⑨ end while
⑩ The ELM network structure is established with g
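A compact Python sketch of the FPA-ELM loop in steps ①–⑩ is given below. It is only an illustration under assumed settings: a sigmoid activation, the training RMSE as the fitness function, Mantegna's method for generating the Lévy steps, and a flower encoding as in Eq. (8); none of the function or variable names come from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def fpa_elm(X, T, L, n_flowers=20, max_iter=100, p=0.8, lam=1.5, seed=0):
    """Illustrative FPA-ELM: each flower encodes the input weights and hidden
    biases (cf. Eq. 8); fitness is the training RMSE of the ELM whose output
    weights are solved by least squares (cf. Eq. 7)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    dim = L * n + L                                   # L x n input weights plus L biases

    def fitness(flower):
        W = flower[:L * n].reshape(L, n)
        b = flower[L * n:]
        H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))      # hidden layer output matrix
        beta = np.linalg.pinv(H) @ T                  # least-squares output weights
        return float(np.sqrt(np.mean((H @ beta - T) ** 2)))

    def levy_step(size):
        # Mantegna's algorithm for Levy-distributed steps with exponent lam
        sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
                 (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
        u = rng.normal(0.0, sigma, size)
        v = rng.normal(0.0, 1.0, size)
        return u / np.abs(v) ** (1 / lam)

    flowers = rng.uniform(-1.0, 1.0, size=(n_flowers, dim))
    fit = np.array([fitness(f) for f in flowers])
    g = flowers[fit.argmin()].copy()                  # best solution found so far

    for _ in range(max_iter):
        for i in range(n_flowers):
            if rng.random() < p:                      # global pollination (Eq. 1)
                candidate = flowers[i] + levy_step(dim) * (g - flowers[i])
            else:                                     # local pollination (Eq. 3)
                j, k = rng.choice(n_flowers, size=2, replace=False)
                candidate = flowers[i] + rng.random() * (flowers[j] - flowers[k])
            f_cand = fitness(candidate)
            if f_cand < fit[i]:                       # greedy replacement
                flowers[i], fit[i] = candidate, f_cand
        g = flowers[fit.argmin()].copy()
    return g                                          # decode g to build the final ELM
```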
Based on the analysis of the relationship between the parameters and the silicon content [19], we chose eight parameters as the input values of the prediction model: air volume, blast temperature, blast pressure, gas permeability, oxygen enrichment, pulverized coal injection, feed speed and the latest silicon content. The output value is the next silicon content.
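For illustration, a minimal sketch of how such input/output pairs could be assembled from a table of process records is shown below; the column names are hypothetical placeholders for the eight parameters and the silicon content, not identifiers used in the paper.

```python
import numpy as np
import pandas as pd

# Hypothetical column names for the eight input parameters and the silicon content.
FEATURES = ["air_volume", "blast_temperature", "blast_pressure",
            "gas_permeability", "oxygen_enrichment", "coal_injection",
            "feed_speed", "silicon_content"]

def build_dataset(df: pd.DataFrame):
    """Inputs: the eight parameters at time t (including the latest silicon
    content); output: the silicon content at time t + 1."""
    X = df[FEATURES].to_numpy()[:-1]            # samples at time t
    y = df["silicon_content"].to_numpy()[1:]    # next silicon content
    return X, y

# Example split matching the paper: first 400 records for training, 100 for test.
# X, y = build_dataset(records)
# X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:500], y[400:500]
```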
4 Simulation Results
This section describes the simulation results of the proposed prediction model for silicon content. We collected 500 production data records; 400 were used as training data and the other 100 as test data. All simulations were conducted in Matlab R2012. To verify the performance of the proposed prediction model, we compared it with the basic ELM algorithm. Figures 1 and 2 show the simulation results of the basic ELM and the improved ELM, respectively.
It can be seen from the simulation results that for the same input data, the prediction
accuracy of the proposed improved ELM is higher than that of the basic ELM algo-
rithm. Using RMSE as the prediction index, the results are shown in Table 1, which shows that the RMSE of the proposed improved ELM is 0.0307 and that of the basic ELM is 0.0381; the RMSE is thus reduced by 19.4%. The proposed improved ELM therefore has better prediction accuracy and stability.
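The reported relative improvement follows directly from the two RMSE values in Table 1:

```python
# RMSE values reported in Table 1
rmse_basic_elm = 0.0381
rmse_fpa_elm = 0.0307

reduction = (rmse_basic_elm - rmse_fpa_elm) / rmse_basic_elm
print(f"RMSE reduction: {reduction:.1%}")   # -> 19.4%
```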
5 Conclusions
References
1. Liu, L.M., Wang, A.N., Sha, M., Sun, X.Y., Li, Y.L.: Optimal SVM for fault diagnosis of
blast furnace with imbalanced data. ISIJ Int. 51(9), 1474–1479 (2011)
2. Ostermark, R., Henrik, S.: VARMAX-modelling of blast furnace process variables. Eur.
J. Oper. Res. 90(1), 85–101 (1996)
3. Bhattacharaya, T.: Prediction of silicon content in blast furnace hot metal using partial least
squares. ISIJ Int. 45(12), 1943–1945 (2005)
4. Lin, S., Zhi, L.I., Tao, Y.U., et al.: Model of hot metal silicon content in blast furnace based
on principal component analysis application and partial least square. J. Iron Steel Res. 18
(10), 13–16 (2011)
5. Saxén, H., Ostermark, R.: State realization with exogenous variables-A test on blast furnace
data. Eur. Oper. Res. 89(1), 34–52 (1996)
6. Li, Z.L., Yang, C.J., et al.: Research on hot metal Si-content prediction based on LSTM-
RNN. J. Chem. Ind. Eng. (China) 69(3), 992–997 (2018)
7. Liu, J.Y., Zhang, W.: Blast furnace temperature prediction based on RBF neural network and
genetic algorithm. Electron. Meas. Technol. 41(3), 3505–3508 (2018)
8. Zhang, H.G., Yin, Y.X., Zhang, S.: An improved ELM algorithm for the measurement of hot
metal temperature in blast furnace. Neurocomputing 174, 232–237 (2016)
9. Su, X., Zhang, S., Yin, Y., et al.: Prediction model of hot metal temperature for blast furnace
based on improved multi-layer extreme learning machine. Int. J. Mach. Learn. Cybern., 1–14
(2018)
10. Liao, K., Wu, Y.P., Li, L.W., et al.: Displacement prediction model of landslide based on
time series and GWO-ELM. J. Cent. South Univ. 50(3), 129–136 (2019)
11. Jing, H.X., Qian, W., Che, K.: Short-term traffic flow prediction based on grey ELM neural
network. J. Henan Polytech. Univ. 38(2), 102–107 (2019)
12. Dong, Z., Ma, N., Meng, L.: Model improvement for boiler NOx emission based on
DEQPSO algorithm, no. 3, pp. 191–197 (2019)
13. Scardapane, S., Comminiello, D., Scarpiniti, M., Uncini, A.: Online sequential extreme
learning machine with Kernels. IEEE Trans. Neural Netw. Learn. Syst. 26(9), 2212–2220
(2015)
14. Yang, X.S.: Flower pollination algorithm for global optimization. In: Proceedings of the 11th
International Conference on Unconventional Computation and Natural Computation.
Lecture Notes in Computer Science, pp. 240–249. Springer, Orléan (2012)
15. Kaur, A., Pal, S.K., Singh, A.P.: New chaotic flower pollination algorithm for unconstrained
non-linear optimization functions. Int. J. Syst. Assur. Eng. Manag. (2017)
16. Yang, X.S., Karamanoglu, M., He, X.: Multi-objective flower algorithm for optimization.
Procedia Comput. Sci. 18, 861–868 (2013)
17. Xu, X.: Modeling of blast furnace temperature based on improved particle swarm optimizer
and support vector machine, Yanshan University (2015)
18. Wu, J.H.: The analysis on blast furnace smelting process and research on hot metal silicon
content prediction model, Yanshan University (2016)
19. Yan, C.: Hot Metal Temperature Forecast Research based on Quantum Genetic Neural
Network. Northeastern University (2014)
The Economic Dimension of Implementing
Industry 4.0 in Maintenance
and Asset Management
1 Introduction
The fourth industrial revolution is expected to bring substantial changes across industry
sectors and business functions in the upcoming years, and the maintenance function is
no exception [1–5]. Predictive maintenance is often highlighted [5], but change is also
expected related to how maintenance tasks are carried out [6], and to the relationship
between manufacturers and operators of equipment [7].
According to surveys [5] and [8] some companies have already gained good results
from implementing maintenance techniques related to Industry 4.0, while others are
lagging. According to [9] unclear economic benefits are the greatest challenge to
successful implementation of Industry 4.0.
Given the examples of how, for instance smart connected products can change the
competitive environment [7, 10], it can be argued that companies that do not pursue the
potential benefits of Industry 4.0 will be in danger of losing their competitive position
in the future.
According to [11] the focus in literature has mainly been on the technological
aspects of Industry 4.0, and according to [12] many organizations lack the understanding that organizational factors are an important part of Industry 4.0. The importance of
organizational factors is also pointed to in [11] and [13], which both present models for
how this can be considered in order to succeed with a digital transformation.
But one factor that has been covered to only a limited degree in the academic
literature is the economic dimension [2]. This is however an important factor because
digital transformation is not an end, but only a means to generate economic value.
In this paper it is proposed that a concept called sensitivity analysis from a
framework called Value Driven Maintenance (VDM) can be used to get a better
understanding of how the implementation of Industry 4.0 in maintenance and asset
management can help in generating economic value.
In the next section of this paper the VDM sensitivity analysis is presented. In
Sect. 3 a presentation of Industry 4.0 and three main areas of technology innovation
relevant for maintenance and asset management is given together with an overview of
the connection between the technology and economic dimensions. In Sect. 4 a dis-
cussion of how this can form an input to strategies for implementing Industry 4.0 to
maintenance and asset management is given together with an example of how this can
be done. The paper ends with a conclusion in Sect. 5.
According to several authors, the traditional view has been to regard maintenance as
only a cost item and a necessary evil [14–16], but there is a growing understanding that
maintenance should be viewed as a value-adding activity that has an important role in
securing the competitive position of a company [15].
One tool developed with the aim of demonstrating how maintenance and asset
management can deliver economic value is the sensitivity analysis that is part of the
Value Driven Maintenance (VDM) framework developed by Haarman and Delahay
[17, 18].
The VDM sensitivity analysis builds on a framework called Economic Value
Added (EVA) developed by the Stern Stewart Corporation in the 1990s [19]. The basic
premise in this framework is that the paramount goal of any company is to maximize
shareholder value [20]. In the EVA framework a company generates value if its operating profit is higher than the opportunity cost of the capital employed [21]. This is calculated by the following equation [20]:

$\mathrm{EVA} = \mathrm{NOPAT} - \mathrm{WACC} \times \text{Capital employed} \quad (1)$
EVA as a metric is based on accounting data (usually for the past year) and is
therefore backwards looking [21]. To counter this, the concept of value drivers is
defined in the EVA framework as factors that can help create economic value in the
future [20].
Haarman and Delahay have taken the EVA framework and specified a set of value
drivers related to maintenance and asset management. These are: asset utilization; cost
control; safety, health, environment and quality (SHEQ) control and capital allocation.
Capital allocation is again divided into the value drivers: investments, spare parts inventory and lifetime extension [17].
The VDM sensitivity analysis is done by first calculating the change in cash flow
from one percentage point improvement in year t (ΔCFt) for each of the value drivers.
Then the Incremental Present Value (IPV) of the one percentage point improvement for each of the value drivers is calculated over the remaining expected lifetime of the asset by using the discount rate r [17]:
$\mathrm{IPV} = \sum_{t} \frac{\Delta CF_t}{(1 + r)^t} \quad (2)$
The purpose of this is to better understand the relative importance of the different value drivers, which is important input when developing a maintenance strategy to help maximize the economic potential of an asset [17].
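To make Eq. (2) concrete, the short sketch below computes the IPV of a one percentage point improvement for a value driver from a series of yearly cash-flow changes; the numbers and the discount rate in the usage example are placeholders, not values from the paper.

```python
def incremental_present_value(delta_cf, r):
    """IPV = sum over t of dCF_t / (1 + r)^t over the remaining asset lifetime.

    delta_cf : iterable of cash-flow changes dCF_t for years t = 1, 2, ...
    r        : discount rate.
    """
    return sum(cf / (1.0 + r) ** t for t, cf in enumerate(delta_cf, start=1))

# Illustrative only: a driver worth 1.0 per year for 20 years, discounted at 8 %.
# print(incremental_present_value([1.0] * 20, r=0.08))
```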
Figure 1 below shows an example of what the result of a VDM sensitivity analysis may look like. In this example, publicly available data from an oil platform on the Norwegian continental shelf (NCS) has been used. This is a good case in this context because it demonstrates how the sensitivity analysis can be useful where changing conditions are expected. The first year of the analysis is year 2, while the expected asset lifetime is set to 20 years. According to the P2 estimate (the base estimate for future production [22]), the oil platform will reach the end of the production plateau at the end of year four.
The value on the y-axis for each of the value drivers represents the present value of a one percentage point improvement from the base year and throughout the expected lifetime (IPV).
Fig. 1. Illustration of how the relative importance of the IPV for the different value drivers
change as the base year of the VDM sensitivity analysis increases. Inspired by [17].
3 Industry 4.0
Changing to a pay-as-you-go model, instead of investing in assets, will for the operator have the effect that the amount of capital employed goes down while the costs go up. In a traditional product-based model, manufacturers often get a large part of their revenue from selling spare parts and after-market services and therefore have little incentive to make a more reliable product [10]. This changes with a product-as-a-service model, something that may lead to higher availability [8, 31].
In the table below the technology and economic dimensions have been combined
based on the references cited above.
Table 1. The link between technology and economic value potential. + and − indicate whether the technology is expected to have a positive or negative impact on the corresponding value driver. Less strong changes are marked with parentheses. Inspired by [1, 6, 11].

Technology dimension (front-end tech.)   Smart maintenance         Smart work                  Smart products
Base technologies                        Sensors, Big data, IIoT   Mobile solutions, 3D & VR   Same as smart maint.
Economic dimension
  Income:          Availability          +                         (+)                         (+)
  Cost:            Cost control          (+)                       +                           -
                   SHEQ control          +                         +                           0
  Capital charge:  Investments           (-)                       (-)                         +
                   Spare parts           0                         0                           +
                   Lifetime extension    (+)                       0                           +
4 Discussion
Combining the VDM sensitivity analysis presented in Fig. 1 with the overview in
Table 1 can give an important input to strategy for implementing Industry 4.0 in
maintenance and asset management. This is illustrated in Fig. 2 below.
Fig. 2. Example of how the focus of maintenance strategy should change with the relative
importance of the different value drivers, inspired by [17].
As one can see from Fig. 2, the value driver related to availability has the largest IPV (present value of a one percentage point improvement) in the first three years of the analysis. As stressed in [17], this does not say anything about the feasibility of this improvement, or the level of investment needed to achieve it. Consequently, the input from Sects. 2 and 3 does not say how big the economic value of implementing, for instance, smart maintenance will be. This will depend on the current performance level of the company [17], the potential of the technical solutions for smart maintenance that are chosen [8] and the organization's ability to implement and make use of the new technology [13].
But Fig. 2 offers a starting point for where the company in question should begin to
explore potential economic value of implementing Industry 4.0 to maintenance and
asset management. In addition, it illustrates how the relative importance of the value
drivers are expected to change in the future. In the case of smart maintenance, it shows
that the expected fall in production level for the example company indicates that
emphasis should be on mature solutions that can be implemented quickly.
5 Conclusion
The upcoming fourth industrial revolution is, as stated in the introduction, expected to
bring substantial changes to maintenance and asset management and companies that do
not adapt will be in danger of losing their competitive position.
Companies do, however, struggle to identify the economic benefits of implementing Industry 4.0 in maintenance and asset management [8]. Some of this is related to companies regarding maintenance only as a cost center and, because of that, not taking proper account of how the new technological innovations have the potential to generate economic value for the company as a whole [25].
In this paper an example is given of how the VDM sensitivity analysis, combined with an understanding of three areas of technology innovation related to Industry 4.0, can give valuable input to where companies should focus their search for economic value when implementing Industry 4.0 in maintenance and asset management. This can, combined with an understanding of the relevant organizational factors, be important input for practicing managers in the maintenance and asset management domain when making strategies to prepare for the fourth industrial revolution.
References
1. Kagermann, H., Helbig, J., Hellinger, A., Wahlster, W.: Recommendations for implementing
the strategic initiative INDUSTRIE 4.0: securing the future of German manufacturing
industry; final report of the Industrie 4.0 Working Group. Forschungsunion (2013)
2. Roda, I., Macchi, M., Fumagalli, L.: The future of maintenance within industry 4.0: an
empirical research in manufacturing. In: Moon, I.L., Gyu, M., Park, J., Kiritsis, D., von
Cieminski, G. (eds.), pp. 39–46. Springer International Publishing, Cham (2018)
3. Fordal, J.M., Rødseth, H., Schjølberg, P.: Initiating industrie 4.0 by implementing sensor
management–improving operational availability. In: Wang, K., Wang, Y., Strandhagen, J.O.,
Yu, T. (eds.) International Workshop of Advanced Manufacturing and Automation, pp. 200–
207. Springer, Changzhou (2019)
4. Rødseth, H., Schjølberg, P., Marhaug, A.: Deep digital maintenance. Adv. Manuf. 5, 299–
310 (2017)
5. Staufen, A.G.: German Industry 4.0 Index 2018. Staufen.AG (2018)
6. Frank, A.G., Dalenogare, L.S., Ayala, N.F.: Industry 4.0 technologies: implementation
patterns in manufacturing companies. Int. J. Prod. Econ. 210, 15–26 (2019)
7. Porter, M., Heppelmann, J.: How smart, connected products are transforming companies.
Harvard Bus. Rev. 93, 97–114 (2015)
8. Haarman, M., de Klerk, P., Decaigny, P., Mulders, M., Vassiliadis, C., Sijtsema, H., Gallo,
I.: Predictive maintenance 4.0 - beyond the hype: PdM 4.0 delivers results. Pricewater-
houseCoopers and Mannovation (2018)
9. Geissbauer, R., Schrauf, S., Koch, V., Kuge, S.: Industry 4.0 – Opportunities and Challenges
of the Industrial Internet. PricewaterhouseCoopers (2014)
10. Porter, M., Heppelmann, J.: How smart, connected products are transforming competition.
Harvard Bus. Rev. 92, 64–88 (2014)
11. Akkermans, H., Besselink, L., van Dongen, L., Schouten, R.: Smart moves for smart
maintenance (2016)
12. Schuh, G., Anderl, R., Gausemeier, J., Hompel, M.T., Wahlster, W. (eds.): Industrie 4.0
Maturity Index Managing the Digital Transformation of Companies (Acatech Study).
Herbert Utz Verlag, Munich (2017)
13. Kane, G.C., Palmer, D., Phillips, A.N., Kiron, D., Buckley, N.: Aligning the organization for
its digital future. MIT Sloan Manag. Rev. (2016). Deloitte University Press
14. Kumar, U., Galar, D., Parida, A., Stenström, C., Berges, L.: Maintenance performance
metrics: a state-of-the-art review. J. Qual. Maint. Eng. 19, 233–277 (2013)
15. Simões, J.M., Gomes, C.F., Yasin, M.M.: Changing role of maintenance in business
organisations: measurement versus strategic orientation. Int. J. Prod. Res. 54, 3329–3346
(2016)
16. Smit, K.: Maintenance Engineering and Management. Delft Academic Press, Delft (2014)
17. Haarman, M., Delahay, G.: VDM(xl) Value Driven Maintenance & Asset Management.
Mainnovation (2016)
18. Haarman, M., Delahay, G.: Value Driven Maintenance & Asset Management. Managing
Aging Plants (2018)
19. Otley, D.: Performance management: a framework for management control systems
research. Manag. Account. Res. 10, 363–382 (1999)
20. Young, S.D., O’Byrne, S.F.: EVA and Value-Based Management: A Practical Guide to
Implementation. McGraw-Hill, New York (2001)
21. Zimmerman, J.L.: Accounting for Decision Making and Control. McGraw-Hill/Irwin,
Boston (2011)
22. Etherington, J., Pollen, T., Zuccolo, L.: Comparison of Selected Reserves and Resource
Classifications and Associated Definitions. Society of Petroleum Engineers (2005)
23. Diez-Olivan, A., Del Ser, J., Galar, D., Sierra, B.: Data fusion and machine learning for
industrial prognosis: trends and perspectives towards Industry 4.0. Inf. Fusion 50, 92–111
(2019)
24. Tao, F., Qi, Q., Liu, A., Kusiak, A.: Data-driven smart manufacturing. J. Manuf. Syst. 48,
157–169 (2018)
25. Vogl, G.W., Weiss, B.A., Helu, M.: A review of diagnostic and prognostic capabilities and
best practices for manufacturing. J. Intell. Manuf. 30, 79–95 (2019)
26. Elia, V., Gnoni, M.G., Lanzilotto, A.: Evaluating the application of augmented reality
devices in manufacturing from a process point of view: an AHP based model. Expert Syst.
Appl. 63, 187–197 (2016)
27. DIN/DKE: GERMAN STANDARDIZATION ROADMAP Industrie 4.0 Version 3. (2018)
28. Visintin, F.: Photocopier industry: at the forefront of servitization. In: Lay, G. (ed.)
Servitization in Industry, pp. 23–43. Springer, Cham (2014)
29. Grubic, T., Jennions, I.: Do outcome-based contracts exist? The investigation of power-by-
the-hour and similar result-oriented cases. Int. J. Prod. Econ. 206, 209–219 (2018)
30. Grubic, T., Jennions, I., Baines, T.: The interaction of PSS and PHM - a mutual benefit case
(2009)
31. Visnjic, I., Jovanovic, M., Neely, A., Engwall, M.: What brings the value to outcome-based
contract providers? Value drivers in outcome business models. Int. J. Prod. Econ. 192, 169–
181 (2017)
Manufacturing System
Common Faults Analysis and Detection System
Design of Elevator Tractor
1 Elevator Tractor
Elevator tractor is also known as the main engine of elevator. It provides power to
make the car of elevator move relative to the weight device. Elevator tractor can be
divided into AC motor and DC motor according to the driving motor. At present, AC
motor is commonly used. Elevator tractor can be divided into gearless tractor and
geared tractor according to whether there are reducers or not. The geared tractor drives
the motor power to the traction wheel through the reducer drive. The gearless tractor
drives the motor power directly to the traction wheel.
According to the nature of elevator tractor faults, the common fault phenomena and their causes can be divided into three categories:
Some parts are over-worn or aged. Incorrect use management or inadequate routine maintenance means there is no pre-detection, so unreliable, defective or abnormal parts are not replaced or repaired, and ultimately the defects expand further or even cause damage [1].
During the use of the elevator, fasteners relax or even loosen naturally due to normal elongation or vibration. If, in daily maintenance, the equipment is not checked and maintained according to the relevant standards and plans, the natural elongation of some parts is not adjusted and controlled in time, and the tightness and correct position of the moving parts are not guaranteed. This results in an abnormal meshing state between components and ultimately leads to damage to the elevator.
Poor or insufficient lubrication of mechanical parts causes abnormal operation of the rotating parts, resulting in heat generation and wear until the parts seize, which leads to abnormal working failure of the moving parts.
Collecting data is the primary task of the traction machine detection system. The most faithful vibration signal can be collected by selecting a suitable vibration detection point, which is generally chosen at a rigid support point. In this design, based on the vibration of the traction machine, the base of the traction machine is selected as the detection point (Fig. 1).
Fig. 2. Program block diagram of main interface of elevator traction machine detection system.
The real-time acquisition interface receives the original vibration signal of the elevator traction machine detected by the hardware of the detection system. It mainly displays the original vibration signal waveform and the differential/single-ended mode, number of samples, magnification, channel entrance and sampling frequency parameters; clicking "start storage" stores the data in a file on the computer. The channel entrance is consistent with the channel entrance of the acquisition card (Fig. 3).
The noise in the elevator operation environment is severe; it affects the vibration signal of the elevator traction machine and leads to signal distortion. FIR filtering is selected to eliminate the noise: it lets the desired frequency components of the vibration signal pass through and attenuates the unwanted frequency components.
The vibration signal of the elevator traction machine is a low-frequency signal, and
the FIR low-pass filter is selected. The amplitude-frequency characteristic curve of the
FIR low-pass filter of the elevator traction machine is shown in the following Fig. 4.
The frequency represented by f2 is the upper limit cut-off frequency of the FIR low-pass
filter. If the signal frequency is lower than f2, it can pass, but if it is higher than f2, it
cannot pass [3].
As shown in Table 1, the rated frequency of the elevator is 25.5 Hz, so the pass band of the FIR filter is set between 0 and 25 Hz. As shown in the filter information interface in Fig. 5, the pass band is 0–25 Hz and the type is low-pass; the collected waveform is then denoised with this filter.
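A minimal SciPy sketch of an FIR low-pass filter with the 0–25 Hz pass band described above is given below; the sampling frequency and filter order are assumptions for illustration, not values stated in the paper.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 1000.0          # assumed sampling frequency of the acquisition card, Hz
cutoff = 25.0        # upper cut-off frequency f2 of the low-pass filter, Hz
numtaps = 101        # assumed number of FIR filter taps

# Design the FIR low-pass filter (Hamming window by default).
taps = firwin(numtaps, cutoff, fs=fs, pass_zero="lowpass")

def denoise(vibration_signal: np.ndarray) -> np.ndarray:
    """Attenuate frequency components above f2 in the raw vibration signal."""
    return lfilter(taps, 1.0, vibration_signal)
```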
In the design of the elevator traction detection system, the filtered vibration signal of the elevator traction machine is an irregular waveform, which can be represented as a superposition (or subtraction) of sinusoidal waveforms of different frequencies; the characteristic values of the vibration signal can then be obtained from the superposition or subtraction of the characteristic values of these sinusoidal components.
The frequency domain function (the image function of the Fourier transform) is as follows (Fig. 6).

$F(\omega) = \mathcal{F}[f(t)] = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt \quad (1)$

The time domain function (the original function of the Fourier transform) is as follows.

$f(t) = \mathcal{F}^{-1}[F(\omega)] = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega \quad (2)$
The frequency domain analysis interface can display the phase spectrum as well as the maximum and minimum values and their positions. After the signal is filtered, the size and position of the maximum and minimum values are obtained by processing the image function of the fast Fourier transform (Figs. 7 and 8).
Whether there is a vibration fault, or the machine is safe, is judged by comparing the maximum value, minimum value and peak-to-peak value from the frequency domain analysis, and the peak-to-peak value from the time domain analysis, with the corresponding alarm values. The alarm values are set on the basis of the three characteristic values measured many times on safe elevators. As shown in the figure, the maximum value of 3.07578 is less than the alarm maximum of 3.5, the minimum value of −3.14143 is greater than the alarm minimum of −3.5, and the peak-to-peak value of 0.488289 is less than the alarm peak-to-peak value of 0.5. The elevator traction machine is therefore judged to be safe, consistent with the safety signal lamp in the figure.
When the maximum value of 4 is greater than the alarm maximum of 3.5, the minimum value of −4.14146 is less than the alarm minimum of −3.5 and the peak-to-peak value of 0.51 is greater than 0.5, the vibration fault lamp flashes and a traction machine fault is judged.
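The alarm logic described above can be summarised in a few lines. The sketch below derives the maximum and minimum from the filtered signal's spectrum and the peak-to-peak value from the time-domain signal, and compares them with the alarm thresholds quoted in the text; it is a hedged illustration of the comparison, not the system's actual implementation.

```python
import numpy as np

ALARM = {"max": 3.5, "min": -3.5, "peak_to_peak": 0.5}  # alarm values from the example

def check_traction_machine(filtered_signal: np.ndarray) -> bool:
    """Return True if the traction machine is judged safe (sketch only)."""
    spectrum = np.fft.rfft(filtered_signal)
    maximum = float(np.max(spectrum.real))        # frequency-domain maximum
    minimum = float(np.min(spectrum.real))        # frequency-domain minimum
    peak_to_peak = float(np.ptp(filtered_signal)) # time-domain peak-to-peak value

    return (maximum < ALARM["max"]
            and minimum > ALARM["min"]
            and peak_to_peak < ALARM["peak_to_peak"])
```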
By storing the signal waveforms and data, experience and reference material for traction machine fault detection can be provided to subsequent elevator traction machine detection operators.
References
1. Salman, H.E., Yazicioglu, Y.: Flow-induced vibration analysis of constricted artery models
with surrounding soft tissue. J. Acoust. Soc. Am. 142(4), 1913–1925 (2017)
2. Li, X.Z., Zhang, X., Zhang, Z.J., et al.: Experimental research on noise emanating from
concrete box-girder bridges on intercity railway lines. Proc. Inst. Mech. Eng. Part F: J. Rail
Rapid Transit 229(2), 125–135 (2015)
3. Deng, Y.Q., Xiao, X.B., He, B., et al.: Analysis of external noise spectrum of high-speed
railway. J. Cent. South Univ. 21(12), 4753–4761 (2014)
Balanced Maintenance Program
with a Value Chain Perspective
1 Introduction
Fig. 1. Six stages in the development towards Smart Maintenance, adopted from [6, 7].
Although predictive maintenance has been demonstrated with remaining useful life (RUL) predictions supporting maintenance planning activities [7], further development of predictive maintenance is expected. In fact, predictive maintenance seems to
fall short of its possibilities in order to deliver what it promises [8]. A possible
explanation for this is that the maintenance system cannot rule out all operating errors.
To cope with this challenge, it is of interest to explore other alternative modules for
predictive maintenance. In addition to the module of RUL it is also possible to apply
anomaly detection as a module for predictive maintenance [9], along with the use of AI
in maintenance programs for making them adaptable, which is one focus area in this
article. Figure 1 can also describe the maturity levels for maintenance programs, where
stage 1 concerns the integration of maintenance programs into computerized mainte-
nance management systems (CMMS), and, on the other end, stage 6 presents the use of
AI in maintenance programs for enabling adaptability.
A definition of value chain is presented by [20]: “The value chain is a tool to dis-
aggregate a business into strategically relevant activities. This enables identification of
the source of competitive advantage by performing these activities more cheaply or
better than its competitors. Its value chain is part of a larger stream of activities
carried out by other members of the channel-suppliers, distributors and customers.”
Until recently, maintenance has been seen as a cost center, but findings from [22, 23] prove that maintenance is a profit-generating function. Further, maintenance is said to have a significant impact on capacity, quality, costs, environment and safety [22], and the introduction of Smart Maintenance is assumed to increase this impact [4]. The authors of [22–24] also suggest further investigating the relationship between maintenance and overall organizational performance to provide a more holistic view of maintenance performance benefits. In summary, the definition of the value chain and the findings proving maintenance to be value-adding support the potential of seeing maintenance as an activity in the value chain.
Based on experiences obtained from a process company, the internal and external
value chain effects of six maintenance program objectives have been acknowledged.
This underpins the relationship between maintenance and value chain, and, in terms of
Smart Maintenance, adaptability is one desirable objective of a maintenance program.
Figure 2 shows the external and internal value chain effects of six maintenance pro-
gram objectives: cost, dependability, health, safety and environment (HSE), adapt-
ability, quality, and effectiveness.
Fig. 2. Internal and external value chain effects of maintenance program objectives.
Experience obtained from a process company shows that most CMMS today do not provide the ability for maintenance programs to be individualized and balanced to the degree that they could be. Currently, in most maintenance programs, data from, for example, condition monitoring, vibration, temperature, speed, level, flow and pressure measurements can be integrated into the maintenance program. In case of deviations in these data, several maintenance programs can create an error report that maintenance personnel can follow up with a work order. This can be seen as corrective maintenance. However, there is a need and desire to move to predictive maintenance, as performing maintenance before an error occurs will improve plant cost-effectiveness and increase availability.
The following proposed concept addresses the abovementioned challenges by utilizing ML to create a balanced maintenance program. The concept will provide
decision-support for how the maintenance program should be adjusted, based on the
parameters “machine anomalies” and “machine load.” Combining these parameters
will enable adaptability in the maintenance program, meaning a more flexible,
dimensioned and predictive maintenance is possible. In terms of Fig. 1, the concept is
positioned in stage 6: Adaptability. Figure 3 shows the concept for balancing a
maintenance program with ML, based on machine load and machine anomalies.
The concept will have the existing maintenance program as a basis. Thus, the
original maintenance program will be the preference for normal conditions. The
parameter machine anomalies will take current, and expected, technical condition into
account, and give a high or low indication. High level of machine anomalies indicates a
need for maintenance, while a low level indicates the opposite. To calculate this
parameter, data from the machine, along with sensor data, historical data, and work
orders, will be gathered in a data lake. Further, ML will be used to process these data
and give a high or low indication. The parameter machine load will consider current,
and expected, machine load. High level machine load indicates that the machine is
either running over its design capacity, or that something in the process increases the
load, for example raw materials out of specification. Low level machine load indicates
that the machine is running under its design capacity. Data from production planning
and process control will be the main source for calculating level of machine load.
Four different actions are possible. First, “reduced maintenance” can be beneficial if
machine anomalies and machine load are both at low levels, meaning the need for
maintenance is lower than under normal conditions. Second, “maintenance call-out” is
proposed for low machine load combined with high level of machine anomalies.
A machine that runs under its design capacity and still has numerous anomalies indicates a need for an extra inspection to find the root cause. Third, "increased
maintenance” is proposed if both machine anomalies and machine load are at a high
level, indicating the need for maintenance is higher than under normal conditions.
Fourth, "control process" is suggested when the machine load is high and the level of machine anomalies is low. This control can be to check whether the high machine load is due to over-production, or whether other elements in the production are causing the extra load.
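The mapping from the two parameters to the four actions can be expressed as a simple decision rule. The sketch below is a schematic of the decision-support logic only (the level labels are illustrative), not the ML implementation itself.

```python
def suggest_action(machine_anomalies: str, machine_load: str) -> str:
    """Map high/low levels of machine anomalies and machine load to one of
    the four proposed maintenance actions."""
    if machine_anomalies == "low" and machine_load == "low":
        return "reduced maintenance"
    if machine_anomalies == "high" and machine_load == "low":
        return "maintenance call-out"
    if machine_anomalies == "high" and machine_load == "high":
        return "increased maintenance"
    if machine_anomalies == "low" and machine_load == "high":
        return "control process"
    return "follow original maintenance program"   # normal conditions / unknown

# Example: a machine running under design capacity with many anomalies.
# print(suggest_action("high", "low"))   # -> "maintenance call-out"
```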
The four different actions can be linked to Fig. 2, and the evaluation of different measures may be seen against how they will affect the value chain internally and externally. For example, if the machine load is high and the suggested action is to control the process, an evaluation of dependability must be done: the high machine load must be weighed against the risk of reduced process reliability and a dependable delivery. This gives a more holistic view of how maintenance affects the value chain.
4 Conclusion
The aim of this article was to investigate how maintenance programs can benefit from
including a value chain perspective, and to present a concept for balanced maintenance
programs in asset intensive industrial plants.
Predictive maintenance and adaptability in maintenance programs are discussed,
and their linkage to the six development stages towards Smart Maintenance is pre-
sented. Current challenges with maintenance programs are that they are static, inflex-
ible, and inefficient. The concept presented in this article addresses these challenges by
utilizing ML to create a balanced maintenance program. The concept will provide
decision-support for how the maintenance program should be adjusted, based on the
parameters machine anomalies and machine load. The value chain perspective for
Smart Maintenance is also discussed, and six maintenance program objectives, based on experience from a process company, and their effect on the internal and external value chain are presented. This gives a more holistic view of the maintenance function,
and challenges the original value chain concept presented by Michael Porter. Figure 4
provides an overview of challenges, work, and results presented in this article.
References
1. Fordal, J.M., Rødseth, H., Schjølberg, P.: Initiating industrie 4.0 by implementing sensor
management – improving operational availability. In: International Workshop of Advanced
Manufacturing 2018, Advanced Manufacturing and Automation VIII. Lecture Notes in
Electrical Engineering, vol. 484, pp. 200–207. Springer, Changzhou (2019)
2. Chan, F.T.S., Lau, H.C.W., Ip, R.W.L., Chan, H.K., Kong, S.: Implementation of total
productive maintenance: a case study. Int. J. Prod. Econ. 95, 71–94 (2005)
3. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the strategic
initiative INDUSTRIE 4.0 (2013)
4. DIN: German Standardization Roadmap - Industry 4.0 (2018)
5. Frank, A.G., Dalenogare, L.S., Ayala, N.F.: Industry 4.0 technologies: implementation
patterns in manufacturing companies. Int. J. Prod. Econ. 210, 15–26 (2019)
6. Schuh, G., Anderi, R., Gausemeier, J., Ten Hompel, M., Washlster, W.: Industrie 4.0
maturity index. Managing the Digital Transformation of Companies (Acatech STUDY)
(2017)
7. Rødseth, H., Schjølberg, P., Marhaug, A.: Deep digital maintenance. Adv. Manuf. 5, 299–
310 (2017)
8. Staufen: Industry 4.0 - German Industry 4.0 Index 2018: A study from Staufen AG and
Staufen Digital Neonex GmbH (2018)
9. Nur Adi, T., Wahid, N., Sutrisnowati, R., Choi, Y., Bae, H., Seo, C.S., Jeong, S.H., Seo,
T.Y.: Cloud-based predictive maintenance framework for sensor data analytics (2018)
10. Standards Norway: Risk based maintenance and consequence classification Z-008, pp. 14.
Standards Norway (2017)
11. Biteus, J., Lindgren, T.: Planning flexible maintenance for heavy trucks using machine
learning models, constraint programming, and route optimization. SAE Int. J. Mater. Manuf.
10, 306–315 (2017)
12. Lindgren, T., Biteus, J.: Expert guided adaptive maintenance. In: European Conference of
the Prognostics and Health Management Society, Nantes, France, 8th–10th July (2014)
13. Lindgren, T., Warnquist, H., Eineborg, M.: Improving the maintenance planning of heavy
trucks using constraint programming. In: ModRef 2013: The Twelfth International
Workshop on Constraint Modelling and Reformulation, Uppsala, Sweden, 16th September
2013, pp. 74–90. Université Laval (2013)
14. Prytz, R., Nowaczyk, S., Rögnvaldsson, T., Byttner, S.: Predicting the need for vehicle
compressor repairs using maintenance records and logged vehicle data. Eng. Appl. Artif.
Intell. 41, 139–150 (2015)
15. Cholasuke, C., Bhardwa, R., Antony, J.: The status of maintenance management in UK
manufacturing organisations: results from a pilot survey. J. Qual. Maint. Eng. 10, 5–15 (2004)
16. Basak, D.: Integrating maintenance activities and quality assurance in a research and
development (R&D) system. Qual. Assur. J. 10, 249–254 (2006)
17. Susto, G.A., Schirru, A., Pampuri, S., McLoone, S., Beghi, A.: Machine learning for
predictive maintenance: a multiple classifier approach. IEEE Trans. Ind. Inform. 11, 812–
820 (2015)
18. Al-Mudimigh, A.S., Zairi, M., Ahmed, A.M.M.: Extending the concept of supply chain: the
effective management of value chains. Int. J. Prod. Econ. 87, 309–320 (2004)
19. Porter, M.E.: Competitive Advantage - Creating and Sustaining Superior Performance. Free
Press, New York (1985)
20. Walters, D., Lancaster, G.: Implementing value strategy through the value chain. Manag.
Decis. 38, 160–178 (2000)
21. Ambrosini, V., Bowman, C.: How value is created, captured and destroyed. Eur. Bus. Rev.
22, 479–495 (2010)
22. Alsyouf, I.: The role of maintenance in improving companies’ productivity and profitability.
Int. J. Prod. Econ. 105, 70–78 (2007)
23. Maletič, D., Maletič, M., Al-Najjar, B., Gomišček, B.: The role of maintenance in improving
company’s competitiveness and profitability: a case study in a textile company. J. Manuf.
Technol. Manag. 25, 441–456 (2014)
24. Rødseth, H., Fordal, J.M., Schjølberg, P.: The journey towards world class maintenance with
profit loss indicator. In: International Workshop of Advanced Manufacturing 2018,
Advanced Manufacturing and Automation VIII. Lecture Notes in Electrical Engineering,
vol. 484, pp. 192–199. Springer (2019)
Construction Design of AGV Caller System
Abstract. To realize call requests from shop-floor stations to the AGV (Automatic Guided Vehicle), the caller came into being. However, existing wireless physical callers are mostly based on a microcontroller and are hard to extend, so they cannot reflect the information exchanged between the station and the AGV. A caller and CCS (Control Center System) software realizing the basic call function were designed in this paper. Moreover, the caller can display the task status, queue situation and vehicle status fed back by the CCS, and the CCS has a location management function. In the experiment, a workshop caller system was constructed as proposed in this paper to verify its validity and stability. The experimental results show that the flexibility of the AGV caller system is greatly enhanced compared with a traditional caller system. Finally, the validity and stability of the AGV caller system are verified by experiments.
1 Introduction
loaded with the wireless module. The caller software is based on the Android system and can be installed on any Android mobile device, giving it great portability.
Fig. 2. Caller interface screenshot: 1-connection state, 2-AGV state, 3-pause/go on button, 4-IP
configuration button, 5-Destination, 6-withdraw button, 7-caller number
These two modules are constrained by the AGV battery level and jointly control whether the AGV performs tasks. When the AGV battery is below the bad level or there is no task, the AGV will automatically charge at the charging post. On the contrary, priority is given to the task when the battery is above the good level.
4. Message log
The connection/disconnection status of each device and the task completion status are output as a log and saved as a txt file for subsequent data analysis.
3 Experiment
The steps to set up the experimental environment of the AGV caller system are as
follows:
1. Setting up the LAN so that the AGV controller, caller, and CCS are in the same
LAN.
2. Arranging the charging pile to prepare for automatic charging.
3. Making the factory map. The AGV controller uses the controller of Serer Robotics.
The controller can process the lidar scan data to obtain a map of the surrounding environment; build the paths between the stations through the teaching method;
establish the task, and the task’s name corresponds to the matching module in CCS;
save all tasks to the AGV controller. Figure 5 is a map of the experimental envi-
ronment of the AGV system, including stations and paths.
4. Since CCS is the server of TCP/IP communication, CCS should be opened before
configuration of caller. After CCS establishes a connection with AGV, AGV per-
forms an automatic charging task firstly. Next, the stock is initialized in CCS based
on the number of stations established in the AGV controller.
5. After the caller is connected to the network, configure the IP by entering the address and port of the CCS. As soon as the connection with the CCS is established, the caller can send tasks (a minimal sketch of this exchange is given below).
The experimental environment is shown in Fig. 6.
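Since the caller acts as a TCP/IP client of the CCS, a minimal sketch of one call request could look as follows; the message format, IP address and port are assumptions for illustration, as the paper does not specify the protocol payload.

```python
import socket

def send_call_task(ccs_ip: str, ccs_port: int, caller_id: int, destination: str) -> str:
    """Open a TCP connection to the CCS and send one call request.

    The plain-text message format used here is hypothetical; the real
    system defines its own payload between the caller and the CCS.
    """
    with socket.create_connection((ccs_ip, ccs_port), timeout=5.0) as sock:
        request = f"CALL;caller={caller_id};dest={destination}\n"
        sock.sendall(request.encode("utf-8"))
        reply = sock.recv(1024)              # e.g. task status / queue position
    return reply.decode("utf-8")

# Example (illustrative address set via the caller's IP configuration button):
# print(send_call_task("192.168.1.10", 9000, caller_id=3, destination="Station A"))
```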
The caller system runs continuously for 2 h, and 10 callers send tasks out of order.
The AGV performs call tasks and charging tasks in order without any abnormality. The
status of the task is displayed correctly on the caller interface. Log file is automatically
output after the CCS is shut down.
4 Conclusion
This paper focuses on the caller and the CCS, where the caller realizes the basic call function. Moreover, the caller can display the task status, queue situation and vehicle status fed back from the CCS. As for the problem of stock, a "one-to-many" approach was proposed to extend the meaning of destination. Finally, the feasibility and reliability of the proposed scheme were verified by experiments.
Acknowledgements. The authors thank the National Natural Science Foundation of China (51205243) and the National Key Research and Development Plan (2016YFC0302402) for financial support.
A Transfer Learning Strip Steel Surface Defect
Recognition Network Based on VGG19
Abstract. Strip steel surface defects are of many types and have complex gray-level structures. Existing machine vision-based image detection technology still suffers from low recognition efficiency and poor generalization in strip steel defect detection, while image detection technology based on deep learning needs large amounts of image data to train the networks. A typical multi-class, small-sample dataset with low-quality pixels cannot support training a deep neural network from scratch: traditional convolutional neural networks have a low recognition rate on small samples and poor generalization on large samples. Combining deep learning and transfer learning, this paper proposes a transfer learning strip steel defect recognition network based on VGG19. The frozen pre-trained layers of VGG19 are not trained, and the learning rates are set according to the actual role of each network layer. The convergence speed and accuracy of the model are both taken into account, and the recognition rate and generalization ability on small-sample data are greatly improved. On the NEU surface dataset 2, the recognition accuracy of our model is 97.5%, which is much higher than that of traditional machine learning algorithms. Moreover, the network model in this paper requires neither data preprocessing nor model parameter adjustment, nor manual design of the classifier. It is a simple and effective method for identifying the surface defects of strip steel and has practical value in the surface recognition of other products.
Keywords: Strip steel surface defects · Multi-classes & small samples · Deep learning · Transfer learning
1 Introduction
Strip steel is one of the main products of the steel industry and an indispensable raw material for the aerospace, shipbuilding, automobile, machinery manufacturing and other industries. The quality of the strip directly affects the quality and performance of the final product. In the strip steel manufacturing process, due to various factors such as raw materials, rolling equipment and processing technology, different types of defects such as cracks, crusting, roll marks, holes, skin delamination, pitting, inclusions, oxide scale, scratches and welds occur on the surface of the strip steel. These defects not only affect the appearance of the product but also reduce its corrosion resistance, wear resistance and fatigue strength. Surface defects are one of the important factors affecting the quality of strip steel. According to statistics, more than 60% of the quality objection incidents concerning domestic strip steel products are caused by surface defects [1]. Therefore, during rolling it is very important to find these defects in time, adjust the control parameters and classify the strip steel into different grades to improve the quality of the strip steel products.
Traditional steel companies test the surface quality of strip steel by manual visual inspection and flash-light detection, integrating information on the severity of all defects with the help of probability and statistics and the testers’ experience to form a comprehensive quality report. Because this method is limited by the detection means and affected by the testers’ subjective judgment, real-time detection is poor and the rates of missed and false detections are high. For surface defect identification of products moving at high speed on a conveyor belt, the human eye can detect only about 60% of defects, and even in the best case the product width should not exceed 2 m and the moving speed should not exceed 30 m/s [2]. The most difficult task is to find the exact location of the defects and determine their types [3].
With the development of industrial technology, methods that replace the naked eye with sensors gradually appeared, such as eddy current testing, capacitance testing and ultrasonic inspection. These methods can detect not only surface defects but also some small internal defects. The maximum detection accuracy can reach 5 × 10−4 mm3, which greatly improves on the low accuracy of naked-eye inspection. However, due to the limitations of their detection principles, these methods cannot detect all products and all defect types: the defects of the tested products must differ significantly from the defect-free regions to be detected effectively.
After the 1970s, machine vision became applicable to steel plate inspection thanks to the development of CCD cameras and the rapid progress of image processing technology. Machine vision is a non-contact, non-destructive testing technology. It offers high resolution, strong classification ability, low susceptibility to environmental electromagnetic fields, a large working distance, high measurement accuracy and low cost, and has become the mainstream in current research. The mainstream classification algorithms for machine-vision-based surface defect recognition include decision trees, the KNN algorithm, genetic algorithms, neural networks, support vector machines and so on [4]. However, when identifying and classifying defect images with many categories, complex shapes and overlapping class boundaries, the classifiers are often built as multiple algorithms connected in series and in parallel, and the robustness and real-time performance of such algorithms are poor [5]. Especially when the image contains noise or a textured background, real defect edges may be missed because of the interference, which cannot meet the requirements of on-line surface defect detection [6].
Model prediction accuracy and error are closely related. Since each class in the NEU surface strip steel defect dataset 2 is balanced, accuracy is an appropriate indicator for evaluating the performance of the various algorithms and for comparing their strengths and weaknesses in defect classification; the recall rate and the F1 value need not be considered.
Figure 3 shows the training and validation accuracy of the network. It can be seen from the figure that the convergence and accuracy of the network are poor. On the training set the model over-fits, and on the validation set the accuracy fluctuates strongly. According to the experience of the ImageNet image recognition contest, retraining an effective convolutional neural network requires at least 1000 high-quality images per category. The dataset of this paper contains six types of images with only 300 images per class, 1800 images in total, each a low-quality grayscale image of only about 40.1 KB. The existing strip steel defect images thus have low resolution and few samples: a typical multi-class, small-sample dataset with low-quality pixels. Some types of images (for example Class 1: Inclusions) are also very difficult to distinguish with the naked eye, which makes the data unsuitable for training a deep neural network from scratch.
From the above research, it is not feasible to reconstruct and train an effective strip steel defect identification network with the existing data volume; a new method must be sought.
It can be seen from the network’s accuracy in Fig. 4 that the transfer learning network based on VGG19 is more complex than the previous network model. Although the accuracy and convergence of the network are greatly improved, the validation loss fluctuates strongly in the early stage of training, and occasional increases in loss still appear in the middle stage. Although the loss converges after training, the unstable loss curve indicates a flaw in the network design. A node whose loss suddenly increases during training is known in deep learning as a dead ReLU node. This problem is mainly caused by an excessively large gradient update, which shifts the weights of some ReLU nodes so far that subsequent training no longer affects them; such nodes are effectively permanently dead. Worse, after some batch updates the loss can suddenly rise to infinity and cause training to fail.
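A common mitigation for oversized gradient updates of this kind is to bound the size of each weight update, for example by gradient-norm clipping. The minimal PyTorch sketch below illustrates that idea only; it is not necessarily the remedy adopted in this paper, which relies on frozen layers and region-wise learning rates.

```python
# Hedged sketch: clipping the gradient norm bounds each weight update,
# which is one common way to avoid "killing" ReLU units with a single
# oversized step (not necessarily the remedy used in this paper).
import torch

def train_step(model, batch, labels, optimizer, criterion, max_norm=1.0):
    optimizer.zero_grad()
    loss = criterion(model(batch), labels)
    loss.backward()
    # Rescale gradients so their global norm never exceeds max_norm.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return loss.item()
```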
There are many kinds of machine-learning-based surface defect detection algorithms, such as k-nearest neighbors (KNN), support vector classifiers (SVC), gradient boosting, random forest classifiers, AdaBoost and decision trees. The idea is to manually design and train feature extractors that extract texture features such as contrast, dissimilarity, uniformity and asymmetry from the image data to perform image recognition and classification. Because the feature extractors of machine learning are very sensitive to the data and to parameter settings, they must be designed according to the corresponding dataset and prior knowledge, so data preprocessing and model parameter tuning are key steps in building such models. However, steel plate defects depend strongly on the surface texture, and any preprocessing (such as smoothing or sharpening) changes the texture characteristics. Moreover, careful parameter tuning is very time-consuming, has little theoretical grounding and relies mainly on the experimenter’s experience. As a result, the recognition rates of machine learning models on strip steel defects are relatively low.
The transfer learning network model based on VGG19 can take the original image as input without data preprocessing. The model is highly robust and accurate on small-sample datasets and is insensitive to parameter settings. It is a practical and flexibly deployable method, well suited to the identification of steel surface defects.
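As a rough illustration of this approach, the following PyTorch sketch freezes the early VGG19 convolutional blocks, replaces the classifier head for the six NEU defect classes and assigns region-wise learning rates. The split point, learning-rate values and optimizer settings are illustrative assumptions, not the authors’ reported configuration, and a recent torchvision release is assumed.

```python
# Minimal sketch (not the authors' exact configuration): VGG19 transfer learning
# with a frozen early convolutional base and per-region learning rates.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # six defect classes in the NEU surface dataset

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)

# Freeze the early convolutional blocks (generic edge/texture filters).
for param in model.features[:28].parameters():
    param.requires_grad = False

# Replace the classifier head for six-class defect recognition.
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Region-wise learning rates: small for the remaining conv layers,
# larger for the newly initialized classifier head.
optimizer = torch.optim.SGD(
    [
        {"params": model.features[28:].parameters(), "lr": 1e-4},
        {"params": model.classifier.parameters(), "lr": 1e-3},
    ],
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()
```

The raw grayscale defect images would simply be resized to the VGG input size and replicated to three channels before being fed to this model; no other preprocessing is implied.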
Comparison of various algorithms’ performances
5 Conclusions
This paper combines deep learning with transfer learning and proposes a transfer learning strip steel defect recognition network based on VGG19. The frozen pre-trained layers are not retrained, and the learning rates of the remaining layers are set region by region according to how they are actually used, which takes into account both the convergence speed and the accuracy of the model. The model focuses on learning and responding to the fine edges of the image, which solves the dead ReLU problem and the oscillation of the validation loss. The convolutional neural network greatly improves the recognition rate and generalization ability on small-sample data. Compared with traditional machine learning algorithms, the network model of this paper requires neither data preprocessing and model parameter tuning nor a manually designed classifier. It is a simple and efficient method.
At present, the network model of this paper is only suitable for identifying small-sample image datasets with class balance. In the future, it will be necessary to consider small-sample image datasets with unbalanced classes, together with image data augmentation (such as horizontal or vertical flipping of the ROI, random scaling, random sampling and cropping, and adding various kinds of noise to the original image), in order to move beyond traditional techniques towards automatic segmentation, labeling, autonomous learning and classification of strip surface defects.
References
1. Tian, S.: Research on object detection and classification algorithms for surface defects of steel
plates and strips. University of Science and Technology, Beijing (2019)
2. Srinivasan, K., Dastoor, P.H., Radhakrishnaiah, P., et al.: FDAS: a knowledge-based
framework for analysis of defects in woven textile structures. J. Text. Inst. 83(3), 431–448
(1992)
3. Karayiannis, Y.A., Stojanovic, R., Mitropoulos, P., et al.: Defect detection and classification
on web textile fabric using multiresolution decomposition and neural networks. In: The 6th
IEEE International Conference on Electronics, Circuits and Systems 1999, Proceedings of
ICECS 1999. IEEE (1999)
4. Tang, B., Kong, J., Wang, X., et al.: Steel surface defect recognition based on support vector
machine and image processing. China Mech. Eng. 22(12), 1402–1405 (2011)
5. Zhao, Y.: Research on segmentation of oil contamination region on silicon steel surface based
on superpixels. Northeastern University (2014)
6. Zhao, J., Yan, Y., Liu, W., et al.: A multi-scale edge detection method of steel strip surface
defects online detection system. J. Northeast. Univ. (Nat. Sci.) 31(3), 432–435 (2010)
7. Yosinski, J., Clune, J., Bengio, Y., et al.: How transferable are features in deep neural
networks? Eprint Arxiv 27, 3320–3328 (2014)
Visual Interaction of Rolling Steel Heating
Furnace Based on Augmented Reality
1 Introduction
furnace [2]. However, the production environment of a hot rolling mill is complex, and unfavorable factors such as noise and high temperature prevent the operator from inspecting the state of the heating furnace equipment and of the internal flow field during production. Traditional monitoring methods also require a large number of electronic and monitoring instruments; operators can only read the current device status at the fixed locations where these instruments are installed, which makes it difficult to check and operate the equipment anytime and anywhere, and traditional monitoring information gives no overall picture of the entire heating furnace. Therefore, a system is needed that allows an operator to view the operational status of a device anytime, anywhere and to operate it in a timely manner.
Augmented reality (AR) technology is a product of the rapid development of computer graphics and was developed on the basis of virtual reality technology [3]. It generates virtual objects, images and even sounds by computer graphics but, unlike virtual reality, superimposes the computer-generated virtual objects, scenes, sounds or system prompts onto the real scene, thereby enhancing the real scene and increasing the user’s perception of the real world. In the industrial sector, some developed countries have already applied augmented reality technology. Boeing in the United States has developed head-mounted augmented reality maintenance aids that assist maintenance personnel in wiring work. Sony’s TransVision augmented reality prototype system [4] can display a variety of auxiliary information to the user through a helmet display, including virtual instrument panels, the internal structure of the equipment being repaired and its part drawings. Augmented reality applications in the domestic industrial sector are only just beginning [5].
This paper proposes a visual interaction system for a rolling steel heating furnace realized by augmented reality technology, combined with computational fluid dynamics to simulate the internal flow field of the furnace. Based on the 3D digital model of the hot rolling furnace, the Unity 3D engine is combined with the Vuforia SDK as the augmented reality development platform. The augmented-reality-based visual interaction platform is developed according to the equipment’s operating information and the actual workflow, so that device operation and information review can be carried out through the augmented reality device. Through this platform, operators can monitor the heating furnace anytime and anywhere, which helps improve production efficiency and safety.
2 System Framework
The visual interaction of the rolling steel heating furnace based on augmented reality
described in this paper can be divided into three levels, namely mobile augmented
reality application, AR data source server and heating furnace equipment. The overall
framework of the system is shown in Fig. 1.
control device, the system also provides digital, real-time motion control, offering virtual operation of the digital production line through augmented reality and giving operators the possibility of trial and error, trial operation and test output before actual operation. The equipment motion control module presents the rolling furnace equipment through augmented reality technology; motion simulation of the equipment is very difficult to realize with traditional paper materials, which is exactly where augmented reality adds value. The 3D model of the device is turned into a static augmented reality display model with the Unity 3D engine. Through the animation system (its schematic diagram is shown in Fig. 2) and C# scripting, the device’s actual operating steps under real working conditions are reproduced through augmented reality, enabling motion simulation and practical manipulation of the device.
The visual interaction system for the rolling steel heating furnace based on augmented reality designed in this paper has been successfully applied in the AR laboratory of Shanghai Intelligent Manufacturing and Robot Key Laboratory, solving the problems of equipment data monitoring and maintenance operations in workshop production. The furnace model device is connected through the network, so the real production data stored in the device can be obtained and interactive control can be performed.
By scanning the identification marker with a smartphone, the system displays the device data in real time, including operating data and production data. For abnormal data, it can retrieve historical alarm data to help personnel analyze the cause of the abnormality, and the system issues timely alarm reminders, which helps detect equipment failures and avoid accidents. The mobile phone AR production line display is shown in Fig. 5.
When the system is used on a head-mounted display device to monitor the equipment, the operator can view the current furnace alarm information and operating status in real time and consult the required maintenance manual and operation guidance in a timely manner. The operator can also inspect the internal working state of the furnace and review the whole process of the equipment in detail, which greatly reduces the difficulty and amount of daily operation and inspection work. The visual interactive interface and the device status monitoring on the head-mounted display device are shown in Fig. 6.
Fig. 6. Visual interactive interface and device status monitoring on the head-mounted display device
5 Conclusion
In this paper, Unity 3D is used as the augmented reality development platform, and a visualization and interaction system for a rolling steel heating furnace based on augmented reality is designed. The main functions of the system include AR device control, AR device parameter display and AR virtual scheduling. Augmented reality technology is applied to the actual industrial production process: by integrating sensing technology, the production information of the steel rolling process is read, the running status of the rolling equipment is visible at a glance, and the complex industrial production equipment is monitored and operated in a mixed virtual and real environment. The on-site operator’s real-time data monitoring and interaction with the equipment is no longer subject to site constraints, which reduces the environmental constraints on actual operation and improves the quality and efficiency of the equipment in the production environment. This application research lays the foundation for the future application of augmented reality technology in digital workshops and intelligent production.
References
1. Song, J.: Numerical simulation of pressure distribution in a large steel rolling heating furnace.
Anhui University of Technology (2017)
2. Zhang, S.: Improved design of regenerative heating furnace combustion system based on CFD
simulation. Shanghai Jiaotong University (2012)
3. Zhao, X., Zuo, H., Xu, X.: Research on key techniques of augmented reality maintenance
guiding system. China Mech. Eng. 19(6), 678–682 (2008)
4. Schwald, B., Figue, J., Chauvineau, E.: STARMATE: using augmented reality technology for
computer guide maintenance of complex mechanical elements. In: Proceedings e200:
eBusiness and eWork, pp. 17–19. Venice Cheshire Henbury (2001)
5. Liu, L., Jiang, C., Gao, Z., Wang, Y.: Research on real-time monitoring technology of
equipment based on augmented reality. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T. (eds.)
Advanced Manufacturing and Automation VIII, IWAMA 2018. Lecture Notes in Electrical
Engineering, vol 484. Springer, Singapore (2019)
Design and Test of Electro-Hydraulic Control
System for Intelligent Fruit Tree Planter
Abstract. Aiming at the problems of high labor intensity and low automation in fruit tree planting, this paper designs an electro-hydraulic control system for an intelligent fruit tree planter. The system uses an STM32 as the control unit and, through intelligent control of multi-channel relays and a PWM governor, realizes the automatic operations of digging, fertilizer discharging, fertilizer mixing, backfilling and irrigation. Data on bit rotating speed, system pressure and oil temperature are collected and displayed by interrupt, ADC, IIC and serial communication. A PID control algorithm matches the digging feed to the system pressure, alleviating the problems of high resistance, high system pressure and high oil temperature, and the PID control process is simulated in MATLAB to obtain the pressure-time curve of the feedback operation. Field test results show that the hydraulic system pressure tends to 10 MPa, the oil temperature stays between 50 and 55 °C, and the total planting time is within 3 min; the system greatly reduces labor intensity and improves the quality of planting.
1 Introduction
The process of fruit tree planting involves many steps, and digging, fertilizer discharging, fertilizer mixing, backfilling and irrigation cause great labor intensity, while the degree of automation of existing machinery is low [1]. It is therefore of great significance to realize the automatic operation of fruit tree planting. Moreover, high hydraulic system pressure easily causes increased leakage and high oil temperature, which affect the service life of the machine [2]. In fruit tree planting, digging requires the most energy; soil quality and depth strongly influence the digging resistance, and excessive resistance shows up as a sharp drop in speed and an increase in system pressure and oil temperature, so the digging feed should be adjusted according to the digging resistance and system pressure [3].
In view of the above problems, this paper designs an electro-hydraulic control system for an intelligent fruit tree planter, with an STM32F103ZET6 as the main control unit [4]. Combined with a PID control algorithm that matches the digging feed to the system pressure, it realizes automatic and intelligent fruit tree planting, reduces labor intensity and improves the service life of the machine.
The mechanical structure of the planting system is shown in Fig. 1. After the synchronous oil cylinder extends, the quadrilateral lifting mechanism drives the quadrilateral lifting frame downwards so that the bottom end of the outer cylinder rests on the ground. The contraction of the lifting oil cylinder makes the fertilizer discharging device, the inner cylinder and the screw bits, which are fixed to the lifting frame, move vertically along the lifting slide track. The bottom screw bit digs the hole and raises the soil, the upper screw bit mixes soil and bacterial fertilizer evenly, and the fertilizer discharged by the fertilizer discharging device falls directly into the inner cylinder and comes into direct contact with the soil. After excavation and fertilizer mixing are completed, the contraction of the synchronous oil cylinder places the guide pipe holding the sapling exactly in the middle of the hole. The sapling is then put in place and the baffle is raised to complete the backfilling of soil and bacterial fertilizer.
Fig. 1. Mechanical structure diagram of the planting system 1. screw bit 2. diversion pipe
3. baffle 4. outer cylinder 5. quadrilateral lifting mechanism 6. inner cylinder 7. hydraulic motor
8. fertilizer discharging device 9. lifting slide rod mechanism 10. Vertical lifting frame
11. quadrilateral lifting frame 12. synchronous oil cylinder 13. lifting oil cylinder 14. crawler
chassis frame
used for data collection, and IIC is used for display on the OLED screen. The system intelligently controls the digging feed according to the collected pressure value, so that soil and fertilizer can be mixed evenly in the shortest time. The control flow chart of the planting operation is shown in Fig. 2.
Fig. 2. Control flow chart of the planting operation (start; initialize; detect whether “one-click operation” is pressed; relay operation or manual control of each component back to its original position; emergency stop or continue; single planting operation completed; stop)
y1 = 37.5(x1 − 1)
The measuring range of the pressure sensor is 0–30 MPa, the output voltage is 0–
5 V, and the relationship between pressure y2 and voltage x2 is:
y2 = 6x2
Temperature and pressure are collected through the ADC module, the relationship
between ADC value and voltage x is:
x = 3.3 × ADC / 4096
The detection frequency of the slotted photoelectric switch sensor can reach 1 kHz. A slotted disc with 30 holes is installed on the motor shaft. When the motor rotates, a timer interrupt is used for the timing t (in seconds) and an external interrupt, which has a higher priority, is used for the pulse count z, so the rotational speed in revolutions per minute is:
d = 2z / t
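The conversions above can be gathered into a few helper functions. The sketch below mirrors the formulas in the text; the constants are those given above, and the functions are shown on the host side for illustration rather than as the firmware actually running on the STM32.

```python
# Host-side sketch of the sensor conversions described above.
ADC_RESOLUTION = 4096      # 12-bit ADC
ADC_REF_VOLTAGE = 3.3      # volts
HOLES_PER_REV = 30         # holes in the slotted disc on the motor shaft

def adc_to_voltage(adc_value: int) -> float:
    """x = 3.3 * ADC / 4096."""
    return ADC_REF_VOLTAGE * adc_value / ADC_RESOLUTION

def voltage_to_temperature(x1: float) -> float:
    """y1 = 37.5 * (x1 - 1), oil temperature in deg C."""
    return 37.5 * (x1 - 1.0)

def voltage_to_pressure(x2: float) -> float:
    """y2 = 6 * x2, pressure in MPa (0-30 MPa over 0-5 V)."""
    return 6.0 * x2

def rpm_from_counts(pulse_count: int, elapsed_s: float) -> float:
    """d = 2z/t: with 30 holes per revolution, rpm = 60*z/(30*t) = 2z/t."""
    return 2.0 * pulse_count / elapsed_s
```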
(Hardware block diagram: the temperature sensor and pressure sensor are read by the MCU through the ADC; the slotted photoelectric switch sensor is read through the timer interrupt and external interrupt; the OLED display is driven via IIC communication; and the computer is connected via serial communication.)
cylinder, so as to intelligently adjust the digging feed. The PID feedback control scheme is shown in Fig. 6 [7]. The position-type PID control equation is as follows:
out = Kp·E_n + Kp·(T/Ti)·Σ_{k=0}^{n} E_k + Kp·(Td/T)·(E_n − E_{n−1}) + out_0
Fig. 6. PID feedback control scheme (target pressure → PID controller → duty cycle → relay → solenoid valve → output, with the real-time system pressure fed back)
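A minimal discrete implementation of the position-type PID law above might look as follows. The gains Kp, Ti, Td, the sampling period T and the output limits are illustrative placeholders, not the tuned values used on the planter.

```python
# Position-type PID sketch:
# out = Kp*e + Kp*(T/Ti)*sum(e) + Kp*(Td/T)*(e - e_prev) + out0
class PositionPID:
    def __init__(self, kp, ti, td, sample_time,
                 out0=0.0, out_min=0.0, out_max=1.0):
        self.kp, self.ti, self.td, self.t = kp, ti, td, sample_time
        self.out0 = out0
        self.out_min, self.out_max = out_min, out_max
        self.error_sum = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.error_sum += error                      # accumulated error (integral term)
        out = (self.kp * error
               + self.kp * (self.t / self.ti) * self.error_sum
               + self.kp * (self.td / self.t) * (error - self.prev_error)
               + self.out0)
        self.prev_error = error
        return min(max(out, self.out_min), self.out_max)

# Example: regulate the system pressure towards 10 MPa and use the output
# as a PWM duty cycle (gain values are placeholders).
pid = PositionPID(kp=0.08, ti=2.0, td=0.1, sample_time=0.05)
duty_cycle = pid.update(target=10.0, measured=8.4)
```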
CCR is the comparison value, ARR is the auto-reload value, and the duty cycle is the ratio of the two: the duty cycle is set by the comparison value, while the PWM period is set by the reload value. PSC is the prescaler coefficient of the timer; here it is set to 0, and the PWM frequency is calculated as:
f = 72 MHz / ((ARR + 1)(PSC + 1))
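For example, with the 72 MHz timer clock, PSC = 0 and ARR = 7199, the formula gives a 10 kHz PWM signal. The small sketch below simply restates this relationship and the CCR/ARR duty-cycle ratio described in the text; the example values are assumptions for illustration.

```python
TIMER_CLOCK_HZ = 72_000_000  # 72 MHz timer clock of the STM32F103

def pwm_frequency(arr: int, psc: int = 0) -> float:
    """f = 72 MHz / ((ARR + 1) * (PSC + 1))."""
    return TIMER_CLOCK_HZ / ((arr + 1) * (psc + 1))

def duty_cycle(ccr: int, arr: int) -> float:
    """Duty cycle as the ratio of the comparison value CCR to the reload value ARR."""
    return ccr / arr

print(pwm_frequency(arr=7199, psc=0))  # 10000.0 Hz
```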
The rationality of the design is verified through MATLAB Simulink simulation [8]. The Simulink model is shown in Fig. 7 and the post-processing curve in Fig. 8: with PID control the system pressure converges to the target pressure over time, achieving the expected design goal.
Fig. 9. Multiple valve Fig. 10. Solenoid valve working circuit diagram
The hydraulic transmission circuit is shown in Fig. 11. The left three-position four-way electromagnetic directional valve controls the extension and contraction of the synchronous hydraulic cylinders, whose inlets and outlets are connected in parallel to achieve synchronous movement, while the right one controls the vertical rise and fall of the vertical lifting hydraulic cylinder. A two-position four-way electromagnetic directional valve provides one-way rotation of the hydraulic motor: when the solenoid valve on the right is not energized, the oil returns to the tank from port P to port T; when it is energized, the oil drives the hydraulic motor from port P to port A.
Fig. 11. Hydraulic transmission circuit schematic diagram 1. hydraulic oil tank 2. filter
3. hydraulic pump 4(11). three position four-way electromagnetic change-over valve 5. two-way
hydraulic lock 6. synchronous hydraulic cylinder 7. vertical lifting hydraulic cylinder
8. hydraulic motor 9. speed regulating valve 10. Two position four-way electromagnetic
change-over valve 12. overflow valve 13. cooler
Field experiments were carried out in HeDong District, Linyi; the field test of the machine is shown in Fig. 12, with the machine operating status in Fig. 12(a) and the fruit tree planting effect in Fig. 12(b). The planting time in automatic mode is within 3 min and in manual mode within 3 min 30 s, so the automatic mode is more efficient than the manual mode; the system greatly reduces labor intensity and realizes the automation of fruit tree planting.
Fig. 12. (a) Machine operation status and (b) effect of fruit tree planting in the field
The collected values of temperature and pressure during the operation period are shown in Fig. 13. The average system pressure, shown in Fig. 13(a), tends to 10 MPa, and the average oil temperature, shown in Fig. 13(b), tends to 50–55 °C. Under PID control the system pressure tends to the optimal working pressure and the oil temperature stays in the normal range, which is of great significance for extending the service life of the machine.
Fig. 13. Collection values of temperature and pressure during operation period
5 Conclusions
This paper designs an electro-hydraulic control system for an intelligent fruit tree planter that realizes the automatic operation of fruit tree planting, improving planting speed and quality and reducing labor intensity. A PID control algorithm intelligently adjusts the matching between the digging feed and the system pressure, thereby alleviating the problems of high system pressure and high oil temperature. Field test results show that the hydraulic system pressure tends to 10 MPa, the oil temperature stays between 50 and 55 °C, and the total planting time is within 3 min.
References
1. Liu, P.: Development and experimental study on the recipe of original filling. Shandong
Agricultural University (2017)
2. Tang, K.: Excessive high temperature of oil liquid of machine tool hydraulic system and
preventive measures. Mod. Manuf. Technol. Equip. 2006(06), 68–70+79 (2006)
3. Kudla, L.: Influence of feed motion feature on small holes drilling process. J. Mater. Process.
Technol. 109(03), 236–241 (2001)
4. Sun, Y., Shen, J., et al.: Design and test of monitoring system of no tillage planter based on
cortex M3 processor. Trans. Chin. Soc. Agric. Mach. 49(08), 50–58 (2018)
5. Huang, W., Bian, Y., Feng, Q., et al.: Crop growth parameters monitoring system based on
Cortex M3. J. Agric. Mech. Res. 37(02), 203–205 (2015)
6. Song, S., Ruan, Y., Hong, T., et al.: Self-adjustable fuzzy PID control for solution pressure of
pipeline spray system in orchard. Trans. Chin. Soc. Agric. Eng. 27(06), 157–161 (2011)
7. Wang, Q., Cao, W., Zhang, Z., et al.: Location control of automatic pick-up plug seedlings
mechanism based on adaptive fuzzy-PID. Trans. CSAE 29(12), 32–39 (2013)
8. Qiu, C., Liu, C., Shen, F., et al.: Design of automobile cruise control system based on Matlab
and fuzzy PID. Trans. Chin. Soc. Agric. Eng. 28(06), 197–202 (2012)
Lean Implementing Facilitating Integrated
Value Chain
1 Introduction
2 Theoretical Background
2.1 Integrated Value Chain
The term value chain has been used in a broad range of contexts and meanings [2] and
may be defined as “a system of interdependent activities, which are connected by
linkages” [3]. Organizations often experience difficulties with the handover between
two consecutive process steps, caused by aspects such as minor standardization,
documentation or systemization, the presence of functional silos or different cultures [4,
5]. Furthermore, if infrastructure and written procedures have little flexibility or are
cumbersome, employees may choose to make their own parallel routines and contribute
to unreliable processes. To achieve a well-managed value chain with aligned and
balanced intra-organizational customer demand- and supply capabilities, all value
creating processes must act together [6].
2.2 Lean
Lean in a historical perspective has its roots in the Japanese industrial success after World War II [7, 8], where Toyota adapted, customized and developed production systems to fit Japanese culture and context. Today, lean is mainly known as an operational management strategy derived from the Toyota Production System (TPS) in the early 1980s. These lean manufacturing principles and techniques have later been extended to other contexts.
The core of lean may be described in the following five principles [9]; (1) speci-
fying value creation, (2) identifying the value streams and eliminating waste, (3) cre-
ating flow in the production line from supplier to customer, (4) creating pull by
allowing customer demand to be the driver, and (5) striving to achieve the four pre-
vious principles through a systematic approach towards continuous improvement.
Factors emphasized as important for succeeding with lean include, among others, management commitment, standardization, visualization and employee involvement [10, 11]. These are also referred to as factors influencing value chain integration [5, 12, 13].
3 Research Design
The data presented in this article stem from a larger research project aiming to create new insights on lean in the Norwegian context. Data from five case companies are analyzed and presented in this article. Table 1 describes the case companies and the data collection.
triangulation was obtained [25]. Extensive plant tours were made in all companies to observe visually the effects of lean in the organizations. All interviews were recorded, transcribed and anonymized. To identify issues and themes on how lean influences value chain integration, the data were coded into categories and analyzed in tabular form.
Five case companies were studied to discover in what way lean contributes to achieving integration. Table 2 describes each company’s main approach to lean implementation, including the most commonly used tools.
Table 3. Management
Case company | Tools and activities
A | Clear and demanding
B | Clearly anchored. “This is not something one wins once - it is important that continuity is in place. It is important not to “tick off” prematurely”
C | Focus on creating a common platform and a common language
D | Management-based anchoring over a long time period
E | Initiated by top management – managed as a project. “Has changed the approach along the way. Started with individual processes, but the experience was that we had a one-time effect that was lost on the way. We then focused more on Lean Academy. But the next step is to get a lean culture”
4.2 Standardization
Table 4 presents a summary of the degree of standardization per company. There was a
high degree of use of standardized work processes in company A. In this company the
operators were involved in improvement projects, for example, by improving stan-
dards. The management perceived that having an operator to coach another operator
would have more impact than if for instance, management or an engineer did the same
work. Management in company B emphasized standardization as the foundation for
continuous improvement processes. The company had different types of processes; and
experienced that it was easier to achieve an understanding for the need to standardize
processes where direct control of the product was impossible. The implementation of
team boards was the most important new element at company C. They found that
standardization was easier to accomplish with new employees, as opposed to changing
habits among older employees. Company D had a high degree of standardized systems,
but “we have operators who ask critical questions, - which is positive, but it could be
challenging to establish standard work sets”. In company E, some standards were
available like common regulations and routines. The employees had slightly different
reactions to the process of implementing new standardized systems, “Initially, we made
many changes, but then it stopped a bit. So, we have focused on what we can do
ourselves. But it is difficult to maintain that focus. The overall goal was to create a
sense of community, which is critical to succeed”.
Table 4. Standardization
Case company | Degree of standardization
A | High degree of standardization
B | Unalterable principles – but possibilities for local adjustments. Generally, high degree of standardizing on each production site
C | Standardized roll-out, but a large degree of local adaption of boards
D | High degree of standardization
E | Some standards as common regulations, work procedures and routines
to express their own opinions. When the company was acquired by German owners, they experienced a negative change in employee involvement: a new lean business system was implemented with few employees involved in the process and with minimal possibilities for adjusting the system to the Norwegian context. Company E reported increased employee involvement after introducing lean. Another interesting result is the use of value stream mapping and its positive influence on interaction and collaboration between consecutive process steps. By addressing challenges that affect other process steps, the people involved gain a better understanding that all value creating processes must act together. This is achieved by using cross-functional teams, representing all the affected process steps, that collaborate on improvements to achieve flow in the value chain.
5 Conclusions
In this article, empirical findings from the mapping of five different organizations have been presented, focusing on their experiences with lean and how it influences value chain integration. Comparing the five companies, we notice differences in the degree of lean implementation, but all the organizations had a management that emphasized employee involvement and the importance of visual management. There was varying use of standardization, and autonomy with respect to standards may be a possible threat to value chain integration. The extent of use of visualization tools differed among the companies; they were mostly used by the company with the longest lean experience. Furthermore, this study indicates that the use of cross-functional teams working with value stream mapping across several process steps has a positive influence on collaboration on improvement, helping to achieve streamlined processes and a better understanding that all value creating processes must act together. To enhance process transparency, the study shows a need for a common arena, such as team board meetings, where people working at different process steps can work together on improvement.
Further research should aim for attaining more empirical results from each sector.
Acknowledgements. The research was funded by the Research Council of Norway and the
participating company. Informed consent was obtained from all individual participants included
in the study.
References
1. Walters, D., Rainbird, M.: The demand chain as an integral component of the value chain.
J. Consum. Mark. 7, 465–475 (2004)
2. Feller, A., Shunk, D., Callarman, T.: Value chains versus supply chains. BP trends, pp. 1–7
(2006)
3. Porter, M.E.: Competitive Advantage: Creating and Sustaining Superior Performance. Free
Press, New York (1985)
4. Basnet, C., Wisner, J.: Nurturing internal supply chain integration. Oper. Supply Chain
Manag. 5, 27–41 (2012)
5. Pagell, M.: Understanding the factors that enable and inhibit the integration of operations,
purchasing and logistics. J. Oper. Manag. 22, 459–487 (2004)
6. Stank, T.P., Keller, S.B., Daugherty, P.J.: Supply chain collaboration and logistical service
performance. J. Bus. Logist. 22, 29–48 (2001)
7. Liker, J.K., Hoseus, M.: Toyota Culture, The Heart and Soul of the Toyota Way. McGraw-
Hill, New York (2008)
8. Liker, J.K., Meier, D.: The Toyota Way Fieldbook: A Practical Guide for Implementing
Toyota’s 4P’s. McGraw-Hill, New York (2006)
9. Liker, J.K.: The Toyota Way: 14 Management Principles from the World’s Greatest
Manufacturer. McGraw-Hill, New York (2004)
10. Womack, J.P., Jones, D.T.: Lean Thinking: Banish Waste and Create Wealth for Your
Corporation. Simon & Schuster, New York (1996)
11. Liker, J.K., Morgan, J.M.: The Toyota way in services: the case of lean product
development. Acad. Manag. Perspect. 20, 5–20 (2006)
12. Lindlof, L., Soderberg, B.: Pros and cons of lean visual planning: experiences from four
product development organisations. Int. J. Technol. Intell. Plan. 7, 269–279 (2011)
13. Bowersox, D.J., Closs, D.J., Stank, T.P.: 21st century logistics: making supply chain
integration a reality (1999)
14. Chen, I.J., Paulraj, A.: Towards a theory of supply chain management: the constructs and
measurements. J. Oper. Manag. 22, 119–150 (2004)
15. Lawrence, P.R., Lorsch, J.W., Garrison, J.S.: Organization and environment: managing
differentiation and integration. Division of Research, Graduate School of Business
Administration, Harvard University Boston, MA (1967)
16. Netland, T.H., Schloetzer, J.D., Ferdows, K.: Implementing corporate lean programs: the
effect of management control practices. J. Oper. Manag. 36, 90–102 (2015)
17. Lodgaard, E., Aschehoug, S.H., Gamme, I.: Barriers to continuous improvement:
perceptions of top managers, middle managers and workers. In: Procedia CIR: 48th
Conference on Manufacturing Systems, CIRP CMS 2015 (2015)
18. Mintzberg, H., Quinn, J.B., Ghoshal, S.: The Strategy Process, European edn. Prentice Hall
(1995)
19. Malone, T.W., Crowston, K.: The interdisciplinary study of coordination. ACM Comput.
Surv. 26, 87–119 (1994)
20. Bateman, N., Philp, L., Warrender, H.: Visual management and shop floor teams–
development, implementation and use. Int. J. Prod. Res. 54, 1–14 (2016)
21. Bititci, U., Cocca, P., Ates, A.: Impact of visual performance management systems on the
performance management practices of organisations. Int. J. Prod. Res. 54, 1–23 (2015)
22. Shah, R., Ward, P.T.: Defining and developing measures of lean production. J. Oper. Manag.
25, 785–805 (2007)
23. Sederblad, P.: Scanias produktionssystem: en framträdande modell i Sverige, Liber (2013)
24. Kvale, S.: Det kvalitative forskningsintervju. Gyldendal akademisk (1997)
25. Yin, R.K.: Case Study Research: Design and Methods. Sage Publications (2014)
26. Ayers, D.J., Gordon, G.L., Schoenbachler, D.D.: Integration and new product development
success: the role of formal and informal controls. J. Appl. Bus. Res. (JABR) 17, 133–148
(2011)
Developing of Auxiliary Mechanical Arm
to Color Doppler Ultrasound Detection
With the rapid development of ultrasound medicine and the expansion of its clinical applications, the workload of ultrasound departments has increased dramatically. Ultrasound doctors work for long periods in unphysiological postures, with the body in an overloaded state, and during clinical diagnosis they are also exposed to physical, chemical and biological factors such as light sources, infection, radiation, noise and mental strain [1, 2]. These can result in various occupational injuries that seriously affect the lives of ultrasound doctors, and a new solution is needed to alleviate or resolve this situation [3].
In 1999, the French professor Pierrot and his team first used an industrial robot (PA-10) to overcome the limitations of manual ultrasound scanning [4]. Similar auxiliary robot systems for ultrasonic detection have since gradually entered clinical diagnosis, and many domestic and international scholars and engineers have put effort into developing auxiliary mechanical arm systems for Color Doppler Ultrasound detection [5]. This study aims to develop a robot mechanism
Health care workers need to hold an ultrasonic probe throughout the procedure. Because the epidermis and body fat attenuate the ultrasound, for heavy patients with more body fat the operator often needs to press the subcutaneous fat apart and use cross-cutting, vertical-cutting, oblique-cutting and other manipulations to examine the patient’s internal organs. These operations often require the Color Doppler Ultrasound operator to expend considerable physical strength to complete a day’s work [6].
A commonly used Color Doppler Ultrasound probe is shown in Fig. 1; it is an abdominal cavity detection probe used to examine the internal organs of the abdominal cavity. The operator uses the coordination of the shoulder and elbow joints to position the probe over the detection area, and the cooperation of the forearm and wrist to realize swinging, rotation, pressing and other operations, so as to obtain clear monitoring or capture of images of the internal organs of the abdominal cavity. Especially for patients with more body fat (obese patients), the operator needs to exert strong pressure to spread the subcutaneous fat so that the ultrasound can penetrate the skin, reach the internal organs and return to the probe for detection of the organ state.
The operation areas of the patient’s body for the Color Doppler Ultrasound probe can be summarized mainly as: under the right rib, under the xiphoid, under the left rib and within the right rib. Each position targets different organs, and there are specific requirements on the manual operation method, the applied force and the use of the probe [6].
The freedom distribution of the human forearm and wrist has mainly three degrees of freedom: the rotational freedom of the forearm, the pitching freedom of the wrist and the deflection freedom of the wrist. Through the interaction of these three degrees of freedom, the previously introduced actions of swinging, turning and cutting can be realized. Substituting a mechanical arm for the medical operator in ultrasound testing requires these forearm and wrist degrees of freedom to be reproduced exactly on the mechanical arm.
Therefore, for the design of the auxiliary mechanical arm for Color Doppler ultrasound equipment, this study adopts six degrees of freedom plus an end clamping freedom. The degrees of freedom of the mechanical arm are designed to simulate the operator’s shoulder, elbow and wrist freedoms, while the clamping freedom simulates the freedom of the hand. The end gripper of the mechanical arm integrates multiple force sensors to detect the force fed back to the gripper by the Color Doppler Ultrasound probe, so as to achieve force control at the probe tip and avoid discomfort caused by the mechanical arm during the examination.
During the examination, in order to prevent respiratory movement artifacts from affecting the image, the patient is required to hold his or her breath, and the operator must not press too hard; otherwise areas other than the lesion (such as the skin or chest wall) may also be shown in red and be mistaken for high stiffness. By contrast, the force intensity can be adjusted well by monitoring the force sensing data of the mechanical arm.
The AI control system of the mechanical arm analyzes the position, coordinate and operation state information collected by the visual inspection system, and resolves the motion trajectory of the arm joints and the motion mode of the end joint, so as to complete the overall movements of the mechanical arm and achieve the specific operation of the ultrasonic probe.
As a mechanical arm modeled on the human arm, its joint configuration completely mimics the way degrees of freedom are distributed in a human arm. The shoulder of the mechanical arm has two degrees of freedom, rotation and deflection; the elbow has two degrees of freedom, rotation and deflection; and the wrist has two degrees of freedom, rotation and deflection. The joints adopt a modular design: the structural form and the transmission scheme of each rotating joint are the same, while, according to the load on each joint, the specific structural dimensions and configuration parameters differ slightly (Fig. 2).
The deflection joints of the mechanical arm are side-mounted and driven directly by a servo motor with an integrated harmonic reducer; the average accuracy error of the harmonic reducer is no more than 10 arc minutes, so the combination of servo motor and harmonic reducer fully meets the positioning accuracy requirements of the end probe.
The rotating joints of the mechanical arm are directly connected and likewise driven by a servo motor with an integrated harmonic reducer; the coaxiality between the two arm segments is ensured by high-precision rolling bearings at both ends, which realizes a stable connection between the joints.
The end clamping device of the mechanical arm is fitted with force sensors; the cooperation of the force sensors and the visual sensors achieves autonomous control of the mechanical arm and prevents secondary injury to the patient caused by the arm’s movements.
Fig. 3. Scene simulation of the auxiliary mechanical arm for Color Doppler ultrasound
Nine force sensors are installed between the ultrasonic probe and the end effector to detect the contact status between the probe and the human body, as shown in Fig. 4.
Fig. 4. Indication of integrated position of the probe in a Color Doppler ultrasound equipment
coordinates and the Euler angles of the centroid of the rigid body are used as the generalized coordinates.
The process is the dynamic course of the mechanical arm moving from its initial position to the visually positioned first detection position on the human body. At first the mechanical arm is in the initial position; after receiving the motion signal, the arm lifts up and retracts as a whole, then the first joint rotates 90° towards the patient and the second joint extends fully, so that the end probe moves exactly to the patient’s examination area. During the inspection, the end of the mechanical arm is supervised through image vision and the probe, which are used to determine the pressing force of the probe; once the force exceeds the threshold of any of the detection points, the mechanical arm returns to the origin along the initial path to prevent it from harming the patient.
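The safety behaviour described here can be sketched as a simple threshold check over the nine force readings. The sensor interface, threshold value and retract routine below are hypothetical placeholders for whatever the real control software exposes, not the system’s actual API.

```python
# Hedged sketch of the force-limit safety check: if any of the nine probe
# force sensors exceeds its threshold, the arm retracts along the initial path.
FORCE_LIMIT_N = 15.0  # illustrative threshold, not the clinical value

def check_probe_forces(forces_n, arm):
    """forces_n: iterable of nine sensor readings F1..F9 in newtons.
    arm: controller object assumed to expose retract_along_initial_path()."""
    forces = list(forces_n)
    if len(forces) != 9:
        raise ValueError("expected nine force sensor readings")
    if any(f > FORCE_LIMIT_N for f in forces):
        arm.retract_along_initial_path()   # hypothetical safety routine
        return False                       # abort the current scan step
    return True                            # safe to continue pressing
```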
In the simulation results above, the horizontal axis is the timeline and the vertical axis is the joint hinge torque; from top to bottom the curves are the second, third and fifth joint torques. Decomposing the examination movements of the medical personnel shows that the pressing movements during detection depend mainly on the corresponding degrees of freedom of the second, third and fifth joints of the mechanical arm.
Analyzing the simulation results, the second joint, as the pitching joint at the root of the mechanical arm, simulates the deflection freedom of the shoulder in the human arm; it mainly bears the weight of the arm body and the feedback force at the probe end, and therefore bears the largest torque among the three joints. The torque of the second joint gradually decreases during the retraction process because the arm’s center of gravity moves closer to the second joint, and then gradually increases in the subsequent extension process; the second joint torque simulation results are consistent with the theoretical analysis of the actual process.
The third joint of the mechanical arm simulates the deflection freedom of the human arm; it mainly bears the weight of the forearm of the mechanical arm and the feedback force at the probe end, so the torque borne by the third joint is smaller than that of the second joint. During the overall retraction of the mechanical arm, the torque of the third joint also first increases and then decreases, but because the third joint is closer to the end of the arm, the range of torque fluctuation is smaller; the third joint torque simulation results are consistent with the theoretical analysis of the actual process.
The fifth joint of the mechanical arm simulates the deflection freedom of the wrist in the human arm; it bears the feedback force at the probe end, and its absolute torque value is between 10 and 20 mNm, which is relatively low. The torque at the fifth joint mainly comes from the load and from the angle between the end of the fifth joint and the direction of gravity, and this end angle is affected by the positional relationship between the arm and the elbow, which in turn depends on the overall state of the mechanical arm.
4 Conclusions
The simulation results lead to three conclusions. First, to better detect the contact status between the ultrasonic probe and the human body, nine force sensors are installed between the ultrasonic probe and the end effector. Second, at the start-up of the arm joints, cusp points appear in the corresponding force curves, which calls for strengthening the mechanical structure of the robot arm at the joint positions. Third, under the premise of meeting the requirements of the medical task, adjusting the duration of the start-up phase so as to reduce the start-up acceleration can effectively improve the force state of the mechanical arm.
References
1. Li, F.: Occupational hazards and protective measures of ultrasound doctor. J. Hebei United
Univ. 14(3), 406–407 (2012)
2. Huang, P., Huang, F., Zhao, B., et al.: Attention must be paid to the injury of practicing
ultrasound doctors. J. Chin. Ultrasound Med. 26(12), 1141–1143 (2010)
3. Zhang, J., Li, R., Chen, W., et al.: Research on robot operating system-based robot-assisted
ultrasound scanning system. J. Biomed. Eng. Res. 37(4), 382–387 (2018)
4. Pierrot, F., Dombre, E., Dégoulange, E., et al.: Hippocrate: a safe robot arm for medical
applications with force feedback. Med. Image Anal. 3(3), 285–300 (1999)
5. Swerdlow, D.R., Cleary, K., Wilson, E., et al.: Robotic arm - assisted sonography: review of
technical developments and potential clinical applications. AJR Am. J. Roentgenol. 208(4),
733 (2017)
6. Rusu, R.B.: Semantic 3D object maps for everyday manipulation in human living
environments. KI - Künstliche Intelligenz 24(4), 345–348 (2010)
The Importance of Key Performance
Indicators that Can Contribute
to Autonomous Quality Control
1 Introduction
Quality control in manufacturing has been built on the same principles over the last decades, and many companies still rely on manual inspection, identifying non-conformances and applying corrective actions. Process data and quality planning are covered in many different ISO standards and are defined as requirements, specifications, guidelines or characteristics that can be used consistently to ensure that materials, products, processes and services are fit for their purpose [4, 5]. Such material is available in databases today; the question is therefore how to find suitable methods to structure and apply this information for further best practices in industrial settings.
Another important source of information for a data-driven quality vision is structured key performance indicators (KPIs), which reflect feedback from customers and organizations; such performance indicators can give an excellent direction for an organization’s quality control [6, 7].
This includes the ability of management and operators to influence the value creation for the customer, but also to enhance the equipment on site and the product delivered to the customer. This form of smart products and services gives us the new smart factories and brings the view of quality into a new digital development [8].
Quality systems that control the plant in real time by monitoring the operating equipment are crucial. For real-time monitoring, different types of data on key components need to be collected, including temperature, vibration, water level, pressure and flow.
This paper presents some important performance indicators for achieving autonomous quality control. It shows in which way Zero Defect Manufacturing (ZDM) methods are important for making use of harvested data, both for the process and for the product (equipment in use) and for the enhancement of new products.
The paper further relates ZDM to the set-up of data collection and data mining and to analytics for smart manufacturing. Section 3 gives a detailed list of some available performance indicators and their role in creating new hardware, sensors and software for the prediction of faults, fault propagation and quality non-conformances in the production line, with emphasis on planning, management and operation for connected production. Finally, some conclusions and advice are given in Sect. 5.
Tables 1 and 2 below show some contextual data taken from the harvested KPI workflow described in an EU research project, where the KPIs are linked to processes and products. Similar lists can be found in the literature and in standards [11]. They give an overview of some standard control KPIs for planning and management and, in a second table, for connected production, which together can lead to a smart manufacturing system.
Table 1. Possible proposed KPIs for planning and management for autonomous quality in smart manufacturing.
Autonomous quality control loops | Planning and management for monitoring and real-time control services
Plug & produce equipment | ZDM production facility
Augmented human centred decision control loop services | Critical process monitoring; process analysis and root cause (spiralling marks); visual inspection of tools and parts
Multistage deep analysis control loop services | Reduction of production cost, defective parts, rework time, waste and lead time; number of defects and repairs per total volume manufactured; frequency of system or component failure
ZDM orchestration & simulation-based composition control loops | Production optimization with aggressive machine configuration; costs of tests performed in connection to the manufacturing line; digital twin that can simulate the whole production in real time, closing the loop
ZDM embedded intelligence and real time control loop services | Reduction of defective parts, rework time and waste; number of scraps per volume of multiple-times-inspected components
Table 2. Possible proposed KPI's for Connected Production for autonomous quality in smart manufacturing.

Autonomous quality control loops | Connected production: for maintenance operations in smart factories
Plug & produce equipment | ZDM production facility
Augmented human centred decision control loop services | Number of times a specific part of a machine has failed; total cost of maintenance on the manufacturing line | Predictive quality; predictive maintenance on-line
Multistage ZDM deep analysis control loop services | Digital twin of the machine in real time; simulation at planning stages | Number of scraps per volume manufactured; scrap reduction; rework reduction
ZDM orchestration & simulation-based composition control loops | Adaptive machining system integrated in an open cloud and data analytics solution | Holistic system that builds on real-time data mining in production, by Big Data and ZDM over the whole value chain
An aggregation engine combines streams of data into single values using a selected
aggregation function. Built-in aggregation functions include options like count (*), sum,
mean, median, P95, first/last values, min/max, etc. The engine takes time range, aggre-
gation function, compare groups, and filter conditions as inputs. It figures out which
columns to scan based on the stored and virtual columns used in the query (Fig. 1).
Fig. 1. An engine of knowledge, patterns and data, where the use of quantitative and qualitative detection based on risk evaluation criteria gives an autonomous selection of KPI's for autonomous quality.
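To make the behaviour of such an aggregation engine concrete, the following minimal Python sketch (an illustration only, not the system described here; the record fields, function names and sample values are hypothetical) shows how a stream of records could be reduced to a single value given a time range, an aggregation function and a filter condition:

```python
from statistics import mean, median

# built-in aggregation functions: name -> reduction over the selected values
AGGREGATIONS = {
    "count": len,
    "sum": sum,
    "mean": mean,
    "median": median,
    "min": min,
    "max": max,
    "first": lambda values: values[0],
    "last": lambda values: values[-1],
}

def aggregate(records, column, func, time_range, predicate=lambda r: True):
    """Reduce a stream of dict records to a single value.

    records    : iterable of dicts, each carrying a 'timestamp' key
    column     : which (stored or virtual) column to scan
    func       : one of the keys in AGGREGATIONS
    time_range : (start, end) pair filtering on 'timestamp'
    predicate  : optional extra filter condition
    """
    start, end = time_range
    values = [r[column] for r in records
              if start <= r["timestamp"] <= end and predicate(r)]
    return AGGREGATIONS[func](values) if values else None

# example: mean vibration of a key component over a time window
stream = [
    {"timestamp": 1, "sensor": "vibration", "value": 0.12},
    {"timestamp": 2, "sensor": "vibration", "value": 0.19},
    {"timestamp": 3, "sensor": "temperature", "value": 71.0},
]
print(aggregate(stream, "value", "mean", (1, 3),
                predicate=lambda r: r["sensor"] == "vibration"))  # 0.155
```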
In many recent research papers the stages of pre-processing, feature collection and algorithm selection are included; however, the preparation and effort of collecting and structuring data acquisition are complex, and new ways to structure KPI's in combination with data processing are an interesting research approach. In earlier stages this has often been a post-process of performance assessment, and the detection of faults and their propagation has been discovered only after the product is finished. With the introduction of AI and deep learning into the ZDM domain, different perspectives on machine learning and the data provided by AI algorithms are described as descriptive, diagnostic, predictive and prescriptive [11], which describe the current state, why something happened, what will happen, and a desired status of goals. The goal is thus to achieve an autonomous decision-making process to assure the quality of production processes and the related output in an autonomous way.
5 Conclusions
This means that an operator needs more domain skills, value chain insight and technological learning about what is happening and what will happen if something goes wrong, because operators will be key in deciding what should be done, making decisions and taking operational actions.
Acknowledgements. The work is supported by KPN CPS Plant, which is granted by the Research Council of Norway (grant no. 267752).
References
1. American Society for Quality (ASQ): What are quality standards. https://asq.org/quality-resources/learn-about-standards. Accessed 08 Dec 2019
2. CEN: ISO 9000:2015 Quality Management System Requirement. CEN, Brussels (2015)
3. Eleftheriadis, R., Myklebust, O.: A quality pathway to digitalization in manufacturing thru
zero defect manufacturing practises. In: IWAMA. Atlantis Press, Manchester (2016)
4. Rødseth, H., Eleftheriadis, R.J.: Successful asset management strategy implementation of
Cyber Physical systems
5. Porter, M., Heppelmann, J.E.: How smart, connected products are transforming competition. Harv. Bus. Rev. 92, 23 (2014)
6. Mogos, M.F., Eleftheriadis, R.J., Myklebust, O.: Enablers and disablers of Industry 4.0: results from a survey of industrial companies in Norway. In: CIRP, Elsevier, Ljubljana (2019)
7. Psarommatis, F., May, G., Dreyfus, P., Kiritsis, D.: Zero defect manufacturing: state-of-the-
art review, shortcomings and future directions in research. Int. J. Prod. Res. 20, 1–17 (2019)
8. Schlegel, P., Briele, K., Schmitt, R.H.: Autonomous data-driven quality control in self-
learning production systems. In: Advances in Production Research, pp. 679–689 (2019)
9. Kissflow Software Company: Is a Workflow Engine the Same as a Business Rule Engine?
Kissflow. https://kissflow.com/workflow/workflow-enginge-businessrule-engine-diference.
Accessed 03 Jan 2019
10. Appelbaum, D., et al.: Impact of business analytics and enterprise systems on managerial
accounting. Int. J. Account. Inf. Syst. 25, 29–44 (2017)
11. Rad, J.S., Zhang, Y., Chen, C.: A novel local time-frequency domain feature extraction
method for tool condition monitoring using S-transform and genetic algorithm. Int. Fed.
Autom. Control 47, 3516–3521 (2014)
12. Wang, J., et al.: Deep learning for smart manufacturing: methods and applications. J. Smart
Manuf.: Methods Appl. 48, 144–156 (2018)
13. Crosby, P.: Quality is Free. McGraw-Hill, New York (1979)
14. Deming, W.E.: Out of the Crisis: Quality, Productivity and Competitive Position. MIT Press, Cambridge (1982)
15. Kaplan, R.S., Norton, D.P.: The Balanced Scorecard: Translating Strategy into Action. Harvard Business School Press, Boston (1996)
16. Halpin, J.F.: Zero Defects: A New Dimension in Quality Assurance. McGraw-Hill, New York (1966)
Collaborative Fault Diagnosis Decision
Fusion Algorithm Based on Improved
DS Evidence Theory
1 Introduction
$H = \{1, 2, 3, 4, 5, 6\}$. The recognition framework is a collection of targets for all situations, which are independent and mutually exclusive, thereby turning abstract problems into mathematical problems.
(1) Basic probability assignment function
The power set $2^{H}$ of the elements in $H$ represents the possible combinations of targets, where any element is called a focal element of $H$. Assuming that $H$ is the recognition framework, the target problem can be represented by the function $m: 2^{H} \to [0, 1]$, which must satisfy the following:
① The basic probability of an impossible event is 0, i.e. $m(\emptyset) = 0$.
② The sum of the basic probabilities of all the elements in $2^{H}$ is 1, i.e. $\sum_{A \subseteq H} m(A) = 1$.
The difference between $m(A)$ and $Bel(A)$ is mainly that $m(A)$ means that the confidence is assigned only to the subset $A$, while $Bel(A)$ represents the sum of the confidence relating to all subsets of $A$.
(3) Likelihood function
The likelihood function represents the degree of trust that $A$ is not false, i.e. the measure of the uncertainty as to whether $A$ is possible, namely:
$$Pl: 2^{H} \to [0, 1], \qquad Pl(A) = \sum_{B \cap A \neq \emptyset} m(B), \quad A \subseteq H \qquad (2)$$
$Pl(A)$ is the sum of the BPAs that do not support subsets of $A^{c}$, i.e. $Pl(A) = 1 - Bel(A^{c})$.
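As an illustration of these definitions (not taken from the paper; the BPA values below are invented), the belief and plausibility of a subset $A$ can be computed directly from a basic probability assignment over the frame $H = \{1, \ldots, 6\}$, confirming in passing that $Pl(A) = 1 - Bel(A^{c})$:

```python
def bel(bpa, a):
    """Belief of A: sum of the masses of all non-empty focal elements B ⊆ A."""
    return sum(m for b, m in bpa.items() if b and b <= a)

def pl(bpa, a):
    """Plausibility of A: sum of the masses of focal elements B with B ∩ A ≠ ∅."""
    return sum(m for b, m in bpa.items() if b & a)

# hypothetical BPA over the frame H = {1, ..., 6}; the masses sum to 1
bpa = {
    frozenset({1}): 0.4,
    frozenset({2, 3}): 0.3,
    frozenset(range(1, 7)): 0.3,   # mass assigned to the whole frame
}
A = frozenset({1, 2})
A_c = frozenset(range(1, 7)) - A
print(bel(bpa, A))        # 0.4
print(pl(bpa, A))         # 1.0
print(1 - bel(bpa, A_c))  # Pl(A) = 1 - Bel(A^c) = 1.0
```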
where $\min(m_i(h_k), m_j(h_k))$ is the smaller BPA of the two pieces of evidence and $\max(m_i(h_k), m_j(h_k))$ is the larger. Therefore, $a_{ij}(k)$ has a value greater than 0 and less than or equal to 1. Given a limiting value $P$, if $a_{ij}(k) < P$, it means that the two pieces of evidence are not close, which can be expressed as:
$$a_{ij}(k) = \begin{cases} a_{ij}(k), & a_{ij}(k) \geq P \\ 0, & a_{ij}(k) < P \end{cases} \qquad (5)$$
where $a_{ij}(k)$ represents the closeness between $m_i(h_k)$ and $m_j(h_k)$, but does not express the closeness between $i$ ($j$) and the other $m_n(h_k)$ in the recognition framework. The closeness of the propositions between $m_i(h_k)$ and the other evidence is $a_{i1}(k), a_{i2}(k), \ldots, a_{in}(k)$.
A matrix can be used to visually express the closeness between the individual pieces of evidence for the propositions:
$$A = \begin{bmatrix} 1 & a_{12}(k) & \ldots & a_{1n}(k) \\ a_{21}(k) & 1 & \ldots & a_{2n}(k) \\ \ldots & \ldots & 1 & \ldots \\ a_{n1}(k) & a_{n2}(k) & \ldots & 1 \end{bmatrix} \qquad (6)$$
In Eq. (6) the closeness between a piece of evidence and itself is 1, and the basic probability assignment function $m_i(h_k)$ of a certain piece of evidence $i$ can be assessed by analyzing the closeness between one particular proposition in the matrix $A$ and the other evidence propositions, i.e. $A_{sum}(i) = a_{i1}(k) + a_{i2}(k) + \ldots + a_{in}(k)$.
Pieces of evidence $i$ and $j$ have the same closeness to the same proposition $h_k$, i.e. $a_{ij}(k)$ equals $a_{ji}(k)$, which satisfies the symmetry of matrix $A$. As the values in matrix $A$ are always positive, the rules of linear algebra dictate that matrix $A$ must have eigenvalues $\lambda$ ($\lambda > 0$) and corresponding eigenvectors $R$, namely:
$$AR = \lambda R \quad (\lambda > 0) \qquad (7)$$
For some piece of evidence, the closeness of the proposition reflects the credibility of the evidence associated with the proposition. Thus, the weight of the evidence for a proposition can be expressed by its closeness. The weight of a proposition, $w_i(h_k)$, can be expressed as follows:
$w_i(h_k)$ can be obtained from Eq. (8), but should also satisfy:
$$\sum_{i=1}^{n} w_i(h_k) = 1 \qquad (9)$$
$$W = AC \qquad (10)$$
$$W = \lambda P \qquad (11)$$
The matrix $P$ in Eq. (11) includes $p_1(k), p_2(k), \ldots, p_n(k)$. Subsequently, the weight of the proposition can be obtained according to the following solution matrix:
$$W_i(h_k) = \frac{p_i(k)}{p_1(k) + p_2(k) + \ldots + p_n(k)} \qquad (12)$$
The weight $w_i(h_k)$ of each probabilistic BPA function $m_i(h_k)$ for the proposition can be calculated in turn, according to Eq. (12). To solve $w_i(h_k)$, $p_1(k), p_2(k), \ldots, p_n(k)$ must be obtained first. The matrix $P$ can be obtained by multiple transformations using the rules of linear algebra. In this paper, we use a membership function with a normal distribution:
$$u(x) = e^{-\left(\frac{x_i(k) - a}{b}\right)^2}, \quad (a > 0,\ b > 0) \qquad (13)$$
The function $x_i(k)$ in Eq. (13) can represent the BPA function $m_i(h_k)$ of the proposition, where $a$ is the mean and $b$ is the variance. The above formula can be substituted into Eq. (13):
$$p_i(k) = e^{-\left(\frac{x_i(k) - a}{b}\right)^2}, \quad (a > 0,\ b > 0) \qquad (14)$$
where $a = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i(k)$ and $b = s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} \left(x_i(k) - \bar{x}\right)^2}$.
$p_i(k)$ can be obtained and substituted into Eq. (12). Subsequently, the weight $w_i(h_k)$ of the evidence for the proposition can be obtained. The BPA is then recalculated and, finally, the new BPA can be used to carry out the decision fusion.
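The following Python sketch illustrates the overall flow described above under simplifying assumptions: the evidence values are invented, the BPAs are restricted to singleton hypotheses, and the per-proposition weights are computed with the normal membership function of Eqs. (13)-(14) rather than the full eigenvector procedure, so it is a rough stand-in for, not a reproduction of, the paper's algorithm. It re-weights each BPA by its closeness-based weight and then fuses the evidence with Dempster's rule:

```python
import math
from itertools import product

def membership_weights(values):
    """Per-proposition weights from Eqs. (13)-(14): a normal membership
    function around the mean of the BPAs, normalised as in Eq. (12)."""
    n = len(values)
    a = sum(values) / n
    b = math.sqrt(sum((x - a) ** 2 for x in values) / (n - 1)) or 1e-9
    p = [math.exp(-((x - a) / b) ** 2) for x in values]
    total = sum(p)
    return [pi / total for pi in p]

def dempster(m1, m2):
    """Dempster's combination rule restricted to BPAs over singleton hypotheses."""
    conflict, fused = 0.0, {}
    for (h1, v1), (h2, v2) in product(m1.items(), m2.items()):
        if h1 == h2:
            fused[h1] = fused.get(h1, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {h: v / (1 - conflict) for h, v in fused.items()}

# three invented evidence sources over faults h1, h2, h3
evidence = [
    {"h1": 0.6, "h2": 0.3, "h3": 0.1},
    {"h1": 0.5, "h2": 0.4, "h3": 0.1},
    {"h1": 0.0, "h2": 0.9, "h3": 0.1},   # conflicting sensor
]

# re-weight each BPA per proposition, then renormalise each piece of evidence
weighted = [dict(m) for m in evidence]
for h in ["h1", "h2", "h3"]:
    for m, w in zip(weighted, membership_weights([e[h] for e in evidence])):
        m[h] *= w * len(evidence)        # keep the overall scale comparable
weighted = [{h: v / sum(m.values()) for h, v in m.items()} for m in weighted]

fused = weighted[0]
for m in weighted[1:]:
    fused = dempster(fused, m)
print(fused)                             # fused support for each fault
```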
It can be seen from Table 3 that the accuracy of the decision fusion is greatly
improved when compared with traditional DS evidence theory and Yager’s method.
When compared with Sun Quan’s method, the improvement in the accuracy of the
decision fusion is about 22% and it is about 19% higher than Deng Yong’s method.
Thus, the proposed algorithm improves DS accuracy to a significant degree.
Figure 1 shows the change in the degree of support for the three faults in the identification framework $H = \{h_1, h_2, h_3\}$ according to the amount of evidence, with $u$ indicating the degree of support.
It can be seen from Fig. 1 that traditional DS evidence theory always assigns a
degree of support of 0 to $h_1$ when dealing with conflicting information. This is obviously not consistent with common sense, indicating that traditional DS evidence theory
is not able to deliver a correct decision fusion result in the face of highly conflicting
evidence. If Yager’s method is compared with traditional DS evidence theory, it simply
discards the conflicting information when it appears. In other words, it is assigned to
mðuÞ and no other processing is performed, so the correct decision fusion result cannot
be obtained. Sun Quan’s method and Deng Yong’s method obtain a decision fusion
result, but the accuracy is low and a large proportion of evidence is assigned to mðuÞ.
By contrast, the algorithm proposed in this paper effectively overcomes the problem of
conflicting evidence. The larger the number of diagnosis nodes, the greater the degree
of support for fault h1 , according to the algorithm proposed in this paper. In comparison
to other algorithms, it effectively overcomes the disruption of decision fusion caused by
conflicting information and can converge rapidly.
5 Conclusions
In this paper, the collaborative fault diagnosis decision fusion model based on DS
evidence theory has been discussed. Its limitations have been noted and an improved
DS evidence theory-based algorithm has been proposed. An example simulating use of
the algorithm was used to verify the performance of the algorithm in comparison to
other related algorithms. The results have shown that the improved algorithm performs
better and can obtain more accurate decision results.
Acknowledgements. This work is supported by the Special Funds for Science and Technology
Innovation Strategy in Guangdong Province of China (No. 2018A06001).
References
1. Huang, D., Chen, C., Zhao, L., Sun, G., Ke, L.: Hybrid collaborative diagnosis method for rolling bearing composite faults. J. Univ. Electron. Sci. Technol. China 47(6), 853–863 (2018)
2. Ge, J., Liu, Q., Wang, Y., Xu, D., Wei, F.: Gearbox fault diagnosis method supporting the
combination of tensor and KNN-AMDM decision. J. Vib. Eng. 31(6), 1093–1101 (2018)
3. Gai, W., Xin, D., Wang, W., Liu, X., Hu, J.: A review of data fusion and decision making
methods in situational awareness 40(5), 21–25 (2019)
4. Yager, R.R.: Belief structures, weight generating functions and decision-making. Fuzzy
Optim. Decis. Mak. 16(1), 1–21 (2016)
5. Tajeddini, M.A., Aalipour, A., Safarinejadian, B.: Fusion method for bearing faults
classification based on wavelet denoising and dempster-shafer theory. Iran. J. Sci. Technol.-
Trans. Electr. Eng. 439(2), 295–305 (2019)
6. Ali, T., Dutta, P.: Methods to obtain basic probability assignment in evidence theory. Int.
J. Comput. Appl. 38(4), 46–51 (2013)
7. Cui, J., Li, B., Li, Y.: Conflict evidence combination based on the applicable conditions of
Dempster combination rule. J. Inf. Eng. Univ. 16(1), 59–65 (2015)
8. Carlson, J., Murphy, R.R.: Use of Dempster-Shafer conflict metric to detect interpretation
inconsistency. Comput. Sci. 38(4), 46–51 (2012)
9. Deng, Y., Wang, D., Li, Q., Zhang, Y.: A new method of evidence conflict analysis. Control
Theory Appl. 28(6), 839–844 (2011)
10. Wang, A., Zhang, L.: Grid fault diagnosis algorithm based on selection criteria and
closeness. Control Decis. 31(1), 155–159 (2016)
11. Denoeux, T., Li, S., Sriboonchitta, S.: Evaluating and comparing soft partitions: an approach
based on Dempster-Shafer theory. IEEE Trans. Fuzzy Syst. 26(3), 1231–1244 (2018)
12. Zhang, N., Yang, Y., Wang, J., et al.: Identifying core parts in complex mechanical product
for change management and sustainable design. Sustainability 10(12), 4480–4494 (2018)
Hybrid Algorithm and Forecasting Technology
of Network Public Opinion Based on BP
Neural Network
Abstract. Based on a hybrid algorithm and a BP neural network, the forecasting technology of network public opinion is studied. After a detailed introduction of the research background and significance, the concept and development of the hybrid algorithm are analysed in detail. A user network public opinion classification model under the clustering algorithm is established, and a data mining experiment on Sina Weibo users' network public opinion is then conducted. Using an improved BP neural network algorithm, the users are clustered in an unsupervised way and the results are evaluated; finally, the users' Internet public opinion is divided into six categories. The characteristic preferences of each user group are analysed, and recommendations are provided for operation recommendation and promotion.
1 Introduction
Internet public opinion is a public opinion that regards the Internet as a carrier of com-
munication. It is a new form of expression of public opinion. With the continuous
popularization of China’s Internet, the Internet has become an important channel for the
public to express their opinions [1]. Netizens can express their attitudes, opinions, and
sentiment toward various social issues through the Internet. Internet public opinion is characterized by anonymity and openness, among other features. Given these characteristics, how to realize the preprocessing of web information, public information collection, classification, clustering, and hotspot acquisition are all key steps in the prediction of Internet public opinion [2]. How reasonably network public opinion information is collected and organized will affect the accuracy of forecasting Internet public opinion. As a branch of
public opinion research, Internet public opinion integrates the knowledge of journalism,
computers, and sociology. Nowadays, netizens, as the main body of online public
opinion, mainly use instant messaging tools, BBS, search engines, news follow-ups, post
bars, microblogs, and e-mails to transmit their wishes [3]. As is well known, the media participate in the dissemination of public sentiment: with each additional medium the sentiment spreads faster, and the greater the number of media channels provided by the Internet, the more rapidly online public opinion spreads [4]. Surveys show that the timing of media coverage can affect the speed of obtaining public opinion information, and the propagation speed is directly proportional to the Internet penetration rate. For Internet public opinion, as a new type of public opinion, determining the source of data and obtaining information from it is a prerequisite for prediction [5]. Therefore, the timely collection of information on Internet public opinion, analysis of its development trend, and accurate identification and screening of Internet public opinion hotspots are of great practical significance for strengthening network monitoring and correctly guiding public opinion on the Internet.
China began to study Internet public opinion in 2005. As of 2015, there were nearly 5000 related publications, mainly in the fields of computing, education, journalism and communication. This shows that although related research on Internet public opinion started late, the number of related studies has been increasing rapidly with the growing attention paid by scholars in recent years. Regarding the development of the basic theory of Internet public opinion: it is a new discipline, but many scholars have carried out relevant research. Liu Yi is the first writer to publish
books on Internet public opinion in China and discusses in detail the theoretical and
practical applications of Internet public opinion [6]. Xu and others believe that the
analysis of Internet public opinion should start with the content and empirical evidence
it contains [7]. Gu Mingyi analyzes the development mode of Internet public opinion.
He analyzes the development trend of Internet public opinion from three perspectives:
media, public opinion upgrading and audience. Regarding the network public opinion forecasting index system: Internet public opinion itself is a complex and intersecting system, and the amount of Internet public opinion data is huge. After data collection, a lot of work is still needed to process the data to meet different needs. There are many kinds of indicators which affect network public opinion; they are complex and changeable, and the demand for network public opinion analysis and the purpose of monitoring differ. In order to predict Internet public opinion, it is therefore necessary to set up a relevant index system for Internet public opinion [8].
3 Methodology
Currently, cluster analysis has been applied in many industries in multiple fields,
including business intelligence, retail, information search, medicine, and security. In
business intelligence, a large number of users can be grouped, and users in the group
are similar in behavioral characteristics. Such group processing can provide accurate
target users for the development of marketing activities. In terms of information search, documents of the same type can be clustered; when users query a document in that class, they can also be offered other documents from the same cluster.
Clustering analysis is an important process for user classification in the research of
social network users’ public opinion behavior characteristics. By clustering the interest
characteristic data of different users, the classification of users with similar interest
features is realized, and the interest characteristics of all kinds of users are analyzed.
At present, researchers in various fields have proposed many clustering methods and algorithms, but it is difficult to enumerate them all. Partition-based methods and grid-based methods, each of which has its typical and commonly used algorithms, are shown in Fig. 1.

Fig. 1. Structure diagram of clustering methods: partition-based methods (K-means, K-medoids, PAM, CLARA, CLARANS, nearest neighbor clustering) and grid-based methods (COBWEB, CLASSIT).
The initial cluster centers are found by equidistant segmentation. First, the distance between all points in the data set is calculated using the Euclidean distance formula (1), and the two points farthest apart are found. Here, any two points in the data set are denoted $v$ and $w$, the data points have dimensions indexed by $i$, and the distance between the two points is $d$:
$$d(v, w) = \sqrt{\sum_{i=1}^{n} (v_i - w_i)^2}, \quad i = 1, \ldots, n \qquad (1)$$
Then, the cutting distance of each dimension is calculated. The formula is shown in (2); it is mainly calculated from the known number of clusters $k$:
$$d_{cut}^{i} = \frac{v_i - w_i}{k - 1}, \quad i = 1, \ldots, n,\ k > 2 \qquad (2)$$
In Eq. (2), $d_{cut}^{i}$ is the cutting distance in dimension $i$, $i$ is the dimension of the sample point, and $k$ is the number of clusters. If $k = 2$, no such calculation is needed: the initial cluster centers are simply the two points farthest apart. If $k > 2$, the initial cluster centers need to be calculated using the cutting distance. Since each dimension needs to be calculated independently, the cutting distance must be computed in each dimension, and the cutting distances of the dimensions are independent of each other. The formula is shown in (3).
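A minimal Python sketch of the initial-center selection described above (formula (3) is not reproduced here; the data is synthetic and the helper name is hypothetical), covering the farthest-pair search of Eq. (1) and the cutting-distance step of Eq. (2):

```python
import numpy as np

def initial_centers(data, k):
    """Pick initial cluster centers by equidistant segmentation: find the two
    farthest points (Eq. 1) and, for k > 2, split the segment between them
    into k - 1 equal steps per dimension (Eq. 2)."""
    diff = data[:, None, :] - data[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))        # pairwise distances, Eq. (1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    v, w = data[i], data[j]
    if k == 2:
        return np.vstack([v, w])
    d_cut = (w - v) / (k - 1)                      # cutting distance, Eq. (2)
    return np.vstack([v + step * d_cut for step in range(k)])

rng = np.random.default_rng(0)
points = rng.random((50, 3))                       # synthetic feature vectors
print(initial_centers(points, 4))
```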
However, for Sina Weibo user data there is no prior clustering experience, so it is not possible to determine the optimal number of classes in advance; the optimal k value needs to be determined by clustering the data multiple times and examining the clustering effect. In order to deal with this problem, the cluster number k is evaluated by the cluster validity DBI index to determine the optimal cluster number k. The DBI (Davies-Bouldin Index) is an evaluation index for non-fuzzy clustering based on the principles of geometric operations. Its evaluation is based on the closeness of data points within the same cluster and the degree of dispersion between different clusters. Intra-class similarity is positively correlated with the index size, and similarity between classes is inversely related to the index size. When the distance between data points in a cluster is smaller and the distance between clusters is larger, the DBI value is smaller, indicating that the differences between clusters are large and the differences within clusters are very small. Therefore, the smaller the DBI index for different cluster numbers, the closer that cluster number is to the number of clusters in the dataset itself.
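A hedged sketch of this model-selection step, using scikit-learn's K-means and Davies-Bouldin implementations on synthetic feature vectors (the paper's own improved algorithm and its real Weibo features are not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(42)
X = rng.random((500, 10))          # placeholder user-interest feature vectors

best_k, best_dbi = None, np.inf
for k in range(2, 11):                             # k = 2 .. k_max = 10
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    dbi = davies_bouldin_score(X, labels)          # smaller is better
    if dbi < best_dbi:
        best_k, best_dbi = k, dbi
print(best_k, best_dbi)
```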
The above content gives a method for clustering Weibo users online according to their interests. By referring to the domain classification system of Sina Weibo's well-known users, the users' Internet sentiment is divided into ten categories. These ten categories of network public opinion contain almost all of the users' points of interest and are used as analysis factors to analyze the interest characteristics of each class of users, providing support and evidence for follow-up operation recommendation and promotion. When analyzing the public opinion characteristics of the user network, if there is a linear relationship between two analysis factors, then the two factors describe very similar or closely connected topics. Therefore, whether there is a linear relationship between two analysis factors is very important for the study. The Pearson correlation test was used to calculate the correlation coefficient of each pair of analysis factors. If the correlation between two analysis factors is weak, the value of the Pearson correlation coefficient is within 0.3, and there is no strong correlation between the factors. Therefore, these ten categories of user interest analysis factors can be used as factors for user analysis after classification.
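A small illustration of this screening step, using SciPy's Pearson correlation on invented interest-factor scores (the 0.3 threshold follows the text above; the factor names are placeholders):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
factors = {f"factor_{i}": rng.random(200) for i in range(1, 11)}  # 10 factors

names = list(factors)
for a_idx in range(len(names)):
    for b_idx in range(a_idx + 1, len(names)):
        r, _ = pearsonr(factors[names[a_idx]], factors[names[b_idx]])
        if abs(r) >= 0.3:       # flag strongly correlated analysis factors
            print(names[a_idx], names[b_idx], round(r, 3))
```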
This section uses the characteristics of Weibo user network sentiment extracted from the UR-LDA model above as the input data set and processes the data according to the improved method in this chapter. The clustering parameters of the BP neural network algorithm are set during the experiment: because the network public opinion of the microblog users has been classified into 10 categories, the maximum number of user clusters is k_max = 10; k starts from 2 and increases by 1 in each cycle until k = 10. Through the implementation of the clustering algorithm, the initial center is randomly generated, and the value of the evaluation index is calculated at the end of clustering for each k value. The experimental environment is Matlab R2011. The results are shown in Figs. 3 and 4.
Fig. 3. The DBI value for each number of clusters.
Fig. 4. Comparison trend diagram of actual value and simulated value.
From the experimental results, it can be concluded that when k = 6, the DBI value
is 16, which is the number of clusters that meet the requirements. Therefore, when
using k-means clustering, micro-blog users will cluster according to 6 categories. The
above improved BP neural network algorithm is used to cluster the data of micro-blog
users’ Internet public opinion. Taking the 5000 micro-blog users’ Internet public
opinion topics in the third chapter as classified data sets, the users are divided into 6
categories. The dataset is shown in Table 1.
The distance between each cluster center is also calculated, as shown in Table 2.
According to the data in Table 2, there are significant differences in the distance
between the 6 final cluster centers. The distance between center 2 and center 3 reached
0.580, which is the center of the biggest difference. The distance between center 1 and
center 5 is the smallest, 0.264. According to the result of user classification, the
improved clustering algorithm is evaluated. Cluster F was used to evaluate the results.
The F evaluation index combines the recall ratio (Recall) and the precision ratio (Precision). The clustering results are evaluated by recall and precision, which not only measures the clustering results accurately but also allows the clustering effect to be observed simply and intuitively.
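As a sketch of how such an evaluation can be computed when reference class labels are available, the function below uses one common definition of the clustering F measure (each class is matched to its best-fitting cluster); it is an illustration, not necessarily the exact formulation used in the paper:

```python
def cluster_f_measure(true_labels, cluster_labels):
    """F measure of a clustering against reference classes:
    F(class) = max over clusters of 2*P*R/(P+R), weighted by class size."""
    classes = set(true_labels)
    clusters = set(cluster_labels)
    n = len(true_labels)
    total_f = 0.0
    for c in classes:
        class_idx = {i for i, t in enumerate(true_labels) if t == c}
        best_f = 0.0
        for k in clusters:
            cluster_idx = {i for i, g in enumerate(cluster_labels) if g == k}
            overlap = len(class_idx & cluster_idx)
            if overlap == 0:
                continue
            precision = overlap / len(cluster_idx)
            recall = overlap / len(class_idx)
            best_f = max(best_f, 2 * precision * recall / (precision + recall))
        total_f += (len(class_idx) / n) * best_f
    return total_f

# toy example with 3 reference classes and 3 clusters
truth = ["a", "a", "a", "b", "b", "c", "c", "c"]
clusters = [0, 0, 1, 1, 1, 2, 2, 2]
print(round(cluster_f_measure(truth, clusters), 3))  # 0.875
```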
Experiments show that the improved BP neural network method has improved the
recall, precision and F value compared with the traditional BP neural network algo-
rithm. It proves that the improved BP neural network algorithm is feasible and can
better achieve the clustering of micro-blog users’ network public opinion.
After completing the clustering analysis of BP neural network, micro-blog user
network public opinion obtained by crawler is divided into six categories. And the six
categories of interest variables are calculated. According to the results, a general
explanation is given to these six categories first, and then the characteristics of interest
variables of micro-blog users after classification are introduced. In terms of recom-
mendation and promotion, different treatment strategies are proposed for each category
of users. To facilitate observation and analysis, we will cluster the results and generate
the average values of the 6 attribute values of each class, as shown in Table 3.
By discretizing the result data after clustering, the users' interest preferences become more intuitive. By defining ranges for the interest variables, we define weak interest as [0–0.1], medium interest as [0.1–0.2], and strong interest as [0.3–1].
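A small illustration of this discretization step (the bin edges follow the text above; the range between 0.2 and 0.3 is not explicitly assigned in the text and is treated as medium here by assumption, and the sample averages are invented):

```python
def interest_level(value):
    """Map an average interest score to a qualitative level. Bin edges follow
    the text: weak [0, 0.1], medium (0.1, 0.2]; strong starts at 0.3, and the
    unassigned (0.2, 0.3) range is treated as medium here by assumption."""
    if value <= 0.1:
        return "weak"
    if value < 0.3:
        return "medium"
    return "strong"

# invented per-cluster averages standing in for the Table 3 values
cluster_means = {"life fashion": 0.18, "reading": 0.15,
                 "sports": 0.05, "finance": 0.34}
print({topic: interest_level(v) for topic, v in cluster_means.items()})
```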
From the previous table, all six categories of users show interest in the two categories "life fashion" and "reading". This indicates that these two types of interest are widespread among general users. "Life fashion" mainly includes fashion brands and bloggers, shopping, and trendy internet-celebrity micro-blogs, covering current fashion trends. "Reading" mainly includes literary reading, life encyclopaedias, inspirational "chicken soup for the soul" content, and various skill-related micro-blogs, covering reading, literary resonance, and the need for practical knowledge. These two kinds of interest are closely related to people's daily lives, and the medium level of interest shows that micro-blog users are concerned about their own lives and constantly improve their quality and knowledge. Therefore, in the process of recommending pushes, some representative micro-blog users in "life fashion" and "reading" can be recommended in addition to "people you may know". Apart from these two kinds of interest variables, each category performs differently on the other kinds of interest variables, and this performance is not universal.
5 Conclusions
The improved BP neural network algorithm is used to cluster the users in an unsu-
pervised way, and the results are evaluated. Finally, the characteristics of each user
group’s network public opinion are analyzed. Suggestions for operation recommen-
dation and promotion are provided. Based on the existing research results and methods,
when extracting the characteristics of Sina micro-blog users’ online public opinion,
considering the characteristics of user behavior information, the interaction charac-
teristics between users are taken into account. The interest of users’ positive reactions
and their side reactions are different, and a UR-LDA based user network public opinion
feature extraction model is proposed. The results are evaluated. The user data extracted
from the topic of interest feature is clustered using the improved BP neural network.
Six clusters of similar user clusters are obtained, and good clustering results are
obtained, and the classification results are analyzed. The recommendation and pro-
motion of six types of users is only an introduction and analysis of sample data. There
is still a lot of work to be done to achieve the full promotion of micro-blog.
Acknowledgements. This work was financially supported by Fund Project of XJPCC (13QN11,
2016AF024 and 2017CA018).
References
1. Nie, F., Zhang, P.: Study on prediction and early warning model of public opinion basing on
K-harmonic means and particle swarm optimization. Inform. Res. 55(102), 111–115 (2017)
2. Wei, J., Zhu, H., Song, R., et al.: Link prediction analysis of internet public opinion transfer
from the individual perspective. New Technol. Libr. Inform. Serv. 36(52), 847–853 (2016)
3. Yan, W.U., Huang, Y., Wei, W.U., et al.: Research on college internet public opinion
prediction based on hybrid artificial neural network. J. Gannan Norm. Univ. 10(78), 441–445
(2016)
4. Lin, L., Wei, D.: Research and application of the internet public opinion assessment model
about government. J. Chongqing Univ. Sci. Technol. 32(12), 110–112 (2016)
5. Chen, Y., Amp, I., Center, N.: Application of internet public opinion monitoring system in
campus network. Comput. Knowl. Technol. 9(46), 958–964 (2016)
6. Fukui, K.: Internet public opinion and a new risk on copyright. J. Inform. Process. Manag. 59
(85), 41–44 (2016)
7. Jin, Y., Xu, H.: On the system construction of government’s governance of internet public
opinion in the era of big data era. J. Party School Tianjin Comm. CPC 36(74), 471–473
(2018)
8. Wang, N., Zhang, W., Niu, L.: Emotion prediction of public sentiment based on ARIMA and
BP neural network model. Electron. Sci. Technol. 5(62), 210–218 (2016)
Applying Quality Function Deployment
in Smart Phone Design
1 Introduction
In 2018, approximately 1.4 billion smartphones were sold worldwide [1]. In a market
landscape where designs are getting more and more outlandish, such as Huawei’s
foldable screens [2], the design process in smartphone production is central to the
success of a smartphone life-cycle. This paper aims to solve the decision-making
problem of how to set about designing a new smartphone. For context, the paper will
take on the perspective of Samsung and how they can tackle product design of a new
flagship smartphone to rival that of Apple’s.
Samsung is currently the world’s biggest smartphone manufacturer [3].This is owed
to the extensive range of smartphones they offer; from the premium S10, to the entry
level J6 [4]. The decision-making problem of how to best design a new phone is a
plausible scenario.
More specifically, the solution to this decision-making problem is how Samsung
can utilise Quality Function Deployment [QFD], focussing on the House of Quality
[HOQ], to analyse a range of variables that can affect smartphone design. Although
there are many different solutions to the decision-making problem, HOQ is the area of
focus for this paper. The paper aims to: define HOQ, critically evaluate its effective-
ness, and continually apply its usefulness to the context of Samsung’s decision-making
problem.
Moreover, the structure of the paper is as follows. The paper will begin with a
review of existing literature on the HOQ, where the concept will be defined, and
differing concepts discussed. Within the literature review, the current research land-
scape on the HOQ will be identified. Following the literature review, the paper will
then discuss the relevance of the HOQ to Samsung’s decision-making problem. HOQ
will be critically evaluated using appropriate theory and research findings. Lastly, a
conclusion will summarise key points discussed and include suggestions for Samsung
with regards to the use of HOQ to tackle its decision-making problem. To begin, it is
key to define the HOQ.
2 Literature Review
A definition. Hauser [5] details the HOQ to be one of four houses as part of Quality
Function Deployment [QFD]. Hauser explains how the HOQ is used to ‘understand the
voice of the customer and to translate it into the voice of the engineer.’ [p. 61]. The
makeup of the HOQ consists of matrixes between: customer requirements [CR]; design
requirements [DR]; and the DRs themselves. Each matrix uses a scoring system to
show: the importance of each CR; relationships between CR and DR; and the rela-
tionship between the DRs themselves. Ultimately, the outcome of completing the HOQ
is to give engineers – or whoever is designing a product – a ‘blueprint’ of design
features and their relation to one another, all in the context around customer needs.
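To make the scoring mechanics concrete, the short Python sketch below shows how CR importance ratings and CR-DR relationship scores are typically combined into a priority value for each design requirement. It is an illustration only: the CRs, DRs and scores are invented and do not represent Samsung's or Hauser's actual data.

```python
# Hypothetical customer requirements (CR) with importance ratings 1-5
cr_importance = {"long battery life": 5, "high quality screen": 4,
                 "many ways to take photos": 3}

# Hypothetical CR-DR relationship scores (9 = strong, 3 = medium, 1 = weak)
relationships = {
    "long battery life":        {"bigger battery": 9, "efficient display": 3},
    "high quality screen":      {"efficient display": 9, "OLED panel": 9},
    "many ways to take photos": {"multi-lens camera": 9, "bigger battery": 1},
}

# DR priority = sum over CRs of (importance x relationship score)
dr_priority = {}
for cr, weight in cr_importance.items():
    for dr, score in relationships[cr].items():
        dr_priority[dr] = dr_priority.get(dr, 0) + weight * score

for dr, score in sorted(dr_priority.items(), key=lambda kv: -kv[1]):
    print(f"{dr}: {score}")
```

A higher total signals where design effort and resources are best focused, which is the decision-support role the HOQ plays in the discussion that follows.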
As the HOQ was implemented more and more into manufacturing of various
products over time, the concept was picked apart and challenged. One ongoing theme is
how the CRs of the HOQ will be decided. It is up to the QFD team of a firm to
construct the HOQ, which opens up a level of subjectivity. If subjective inputs into the
HOQ are not accurate, the whole process will be flawed, and the final product can
misrepresent CRs. This underlying problem can be adjectivized as ‘fuzzy’. Fuzzy data,
fuzzy databases, or fuzzy concepts are data that is uncertain and can vary according to
context or conditions [6].
Kim et al. [7] produced a mathematical process which best helps define fuzzy data
that surrounds HOQ. The aim of the article was to tackle the issue of fuzzy data and
implement an extra step people could take when producing the HOQ to ensure variables
remained constant and representative of CRs. When concluding, Kim et al. said their
works ‘could allow the design team to mathematically consider trade-offs among the
various performance characteristics and the inherent fuzziness in the system.’ [p. 517].
A big part of Kim et al.’s process was utilising customer competitive analysis – how
competitor’s design features compare to that of your own in relation to CRs. However,
other research shows solely using customer competitive analysis is insufficient on its
own.
Klochkov et al. [8] suggest that those constructing the HOQ should consider other
factors as well as competitor analysis. They suggest considering how customers graded
the satisfaction of their needs. Chen and Weng [9] supported this notion too, explaining
how solely using competitor analysis would be difficult in cases where a new product
was being developed – there would be no competitors to analyse. Chen and Weng’s
approach is a step forward in the process of creating the HOQ as their approach puts the
relationships between CRs, DRs, and among DRs themselves into ‘linguistic terms’
[p. 569] which accounts for ‘uncertainty in the stage of product design’ [p. 569].
3 Critical Evaluation
One of the biggest benefits relevant to Samsung in producing a new flagship smart-
phone is that of the potential competitive advantage gained [5]. As the HOQ ‘links the
voice of the customer to the product design attributes.’ [10, p. 343], Samsung will have
knowledge of features the new smartphone should have that customers want. This can
create a competitive advantage if Samsung sees a new trend emerging. For example,
one CR may be a variety of ways to take photos. Then, the designers can create a DR of
a back camera with multiple interchangeable lenses, meeting the CR. If a rival, such as
Apple, has not seen this emerging trend, Samsung’s new smartphone will give them a
competitive edge, encouraging new smartphone sales.
However, smartphone manufacturers that rival Samsung will most likely be con-
ducting ongoing research into customer requirements. Meaning, any emerging trends
may be picked up by them too and any competitive advantage to Samsung could be
minimised. On the other hand, this does not rule out the usefulness of the HOQ. Chen et al. [11] shed light on the human factor of the HOQ. Samsung's smartphone designers can
demonstrate the human factor by interpreting the CRs differently to rivals and out-
putting this into their DRs. As such, a competitive edge can still be gained so long as
the CRs are still met. In contrast to this, the HOQ's human factor can also tarnish its
usefulness.
The human factor can hinder the HOQ’s usefulness [12]. All aspects of the tradi-
tional HOQ involve the human factor which brings on the issue of subjectivity. In just
one example, there is no set rulebook on how to score each matrix; especially for a new
product, like Samsung’s new smartphone. Meaning, when it comes to scoring the
relationship between DRs, the scoring may be incorrect. This is just one example and
this human factor can be repeated throughout the HOQ process. As a result, Samsung’s
new smartphone may not meet its CRs and be deemed unsuccessful. This could be
detrimental to Samsung’s performance as it opens up a gap for rival firms to move into.
Although the human factor can be seen on Samsung’s sides, it can also be on the
customer’s side too through the fuzzy nature.
CRs can be fuzzy [13]. Samsung could construct the best HOQ possible, but if the
CRs are difficult to interpret, the knock-on effect is huge. For smartphones, this can be
easily done due to the market being a mass market. One common notion heard from the
consumers in the smartphone market is the requirement for longer battery life. To the
designers, there are many DRs that can meet this CR: bigger battery; smaller phone;
and lower quality screen are all DRs that can be implemented. But this can easily conflict with another CR [DRs of lower quality screens when CRs are high quality screens]. Even though Samsung is the market leader in smartphones, their design team can only be as good as their market research. If CRs are not clear enough, the end product may
not meet customer needs. Particularly with smartphones, the HOQ can have many CRs
and DRs that conflict [14]. It poses significant pressure on Samsung to navigate CRs
and DRs to come up with a new smartphone that compromises on the two yet satisfies
CRs to the best possible end result.
Nonetheless, the benefits of Samsung using the HOQ overshadow the drawbacks.
The HOQ does solve the problem of how to begin designing a new smartphone.
The HOQ shows tradeoffs between design requirements [15] which aids manufacturers
at Samsung to know what is going to be affected if they change a design feature of a
smartphone. Seeing this can save money and help the smartphone be a success. It saves
money as less prototypes will have to be built in order to see how DRs relate to one
another. Also, manufacturers can navigate the design process using the HOQ as a point
of reference. They won’t have to keep conducting research and meetings to discuss
each step in the design process. Time and money are saved dearly when using the
HOQ.
As the HOQ shows you where efforts should be focused, resources can be allocated
efficiently [16]. In designing a new smartphone, Samsung can see from the HOQ where
efforts should be focused. For example, if the biggest CR is a good quality screen,
Samsung knows they should invest the time and money into developing a top quality
new screen. A big part of the design process is allocating resources efficiently.
The HOQ solves this decision-making problem as it signals what customers want most
and what DR meets this CR. Table 1 summarises the main findings and contrasts.
Evidently, it is clear that the HOQ is most suitable for solving Samsung’s decision-
making problem. The benefits of the HOQ discussed address various issues that may
arise during the design process of a new smartphone, such as cost. Implementing the
HOQ at Samsung can help ensure their next flagship smartphone meets CRs to the best
of their ability. With competitors such as Apple in close competition, Samsung should
be considering any opportunity to maintain their market lead, such as the HOQ.
4 Conclusion
Throughout this paper, the HOQ has been discussed thoroughly to help solve Sam-
sung’s decision-making problem. The concept was defined in its earliest version and
the HOQ has been critically analysed. An ongoing theme has been fuzzy data. Seen in
both older literatures and new, fuzzy data has been highlighted as an Achilles heel to
the HOQ. From both CRs to DRs, there is not a set formula for constructing either,
creating an opportunity for subjectivity. Future research into the HOQ could be set on
better managing the fuzzy nature of the HOQ; managing the fuzzy nature could be
difficult as the application of the HOQ changes constantly.
Similarly, the human factor is also a drawback of using the HOQ. The paper has
discussed the human factor, and a cure to this weakness is emerging. A new dynamic
HOQ uses digital algorithms for data in the HOQ compared to manually interpreting
data. As discussed, this dynamic HOQ looks set to be the driver of future digital
revolutions. Samsung can utilise this as certain CRs relate to digital features of
smartphones. However, for other CRs, along with users of HOQ that are not concerned
with digital, a new dynamic HOQ is not suitable. As such, the human factor needs to be
further researched to ensure it does not undermine the usefulness of the HOQ.
As a whole, the paper does support the use of the HOQ to solve Samsung’s
decision-making problem. The benefits simply outweigh the drawbacks. The oppor-
tunity to see CRs and gain a competitive advantage is invaluable to Samsung, in such a
competitive market. Without the HOQ, it is difficult to see how Samsung can optimise
their smartphone design process. From the initial creation of the HOQ, Samsung can
begin to quantify the resources they need to allocate throughout the design process.
Any conflicting DRs can be spotted early on and accounted for, saving on both time
and money.
As a final recommendation, Samsung should utilise the HOQ to its fullest potential
and take advantage of any new developments, such as the dynamic HOQ. The draw-
backs discussed should be accounted for and future research should be conducted on
how to minimise and manage these issues.
References
1. Bradshaw, T.: Are foldable phones more than just a gimmick? Financial Times, 26 February
2019. https://www.ft.com/content/a21b8dae-3941-11e9-b72b-2c7f526ca5d0. Accessed Apr
2019
2. Bradshaw, T.: Huawei unveils Mate X foldable phone at Mobile World Congress. Financial
Times, 24 February 2019. https://www.ft.com/content/d5e7aaf2-3840-11e9-b856-
5404d3811663. Accessed Apr 2019
3. Jung-a, S.: Samsung Electronics warns of weak earnings after 30% profit drop. Financial
Times, 31 January 2019. https://www.ft.com/content/b241f728-24fa-11e9-8ce6-
5db4543da632. Accessed Apr 2019
4. Samsung: Our Smartphones. Samsung (2019). https://www.samsung.com/uk/smartphones/
all-smartphones/. Accessed Apr 2019
5. Hauser, J.R.: How Puritan-Bennett used the house of quality. Sloan Manag. Rev. 34, 61–70
(1993)
6. IGI Global: What is Fuzzy Database. IGI Global (2008). https://www.igi-global.com/
dictionary/introduction-trends-fuzzy-logic-fuzzy/11718. Accessed 28 Apr 2019
7. Kim, K.-J., Moskowitz, H., Dhingra, A., Evans, G.: Fuzzy multicriteria models for quality
function deployment. Eur. J. Oper. Res. 121, 504–518 (2000)
8. Klochkov, Y., Klochkova, E., Volgina, A., Dementiev, S.: Human factor in quality function
deployment. In: 2016 Second International Symposium on Stochastic Models in Reliability
Engineering, Life Science and Operations Management (SMRLO), pp. 466–468 (2016)
9. Chen, L.-H., Weng, M.-C.: A fuzzy model of exploiting quality function deployment. Math.
Comput. Model. 38, 559–570 (2003)
10. Verma, R., Maher, T., Pullman, M.: Effective product and process development using
quality function deployment. School of Hotel Administration Collection, pp. 339–354
(1998)
11. Chen, A., Dinar, M., Gruenewald, T., Wang, M., Rosca, J., Kurfess, T.R.: Manufacturing
apps and the dynamic house of quality: towards an industrial revolution. Manuf. Lett. 13,
25–29 (2017)
12. Olewnik, A., Lewis, K.: Limitations of the house of quality to provide quantitative design
information. Int. J. Qual. Reliab. Manag. 25, 125–146 (2007)
13. Bouchereau, V., Rowlands, H.: Methods and techniques to help quality function
deployment. Benchmarking: Int. J. 7, 8–20 (2000)
14. Poel, I.V.: Methodological problems in QFD and directions for future development. Res.
Eng. Design 18, 21–36 (2007)
15. Vonderembse, M.A., Raghunathan, T.: Quality function deployment’s impact on product
development. Int. J. Qual. Sci. 2, 253–271 (1997)
16. Silva, F.L., Cavalca, K.L., Dedini, F.G.: Combined application of QFD and VA tools in the
product design process. Int. J. Qual. Reliab. Manag. 21, 231–252 (2004)
17. Hauser, J.R., Clausing, D.: The house of quality. The Harvard Business Review, May 1988.
https://hbr.org/1988/05/the-house-of-quality. Accessed 25 Apr 2019
eQUALS: Automated Quality Check System
for Paint Shop
1 Introduction
The main contribution of eQUALS to the state of the art is the usage of a multi-
modal approach, which joins the sensitive capabilities of a collaborative robot and
external sensing from the manufacturing line.
Being a contact application, another challenge is not damaging the measurement device or the car body. This is closely related to performing the process in movement, but it raises additional issues, such as guaranteeing that the applied force can be measured. Again, the sensitive capabilities of new collaborative robots are the key to achieving this: the application can actually measure the force applied during the measurement [13].
Finally, regarding cost, there are two main groups of solutions in the market to
control paint features: manual-based and automatic ones. In the first group, we identify
devices such as [14–16]. They are inexpensive devices, intended to be used by hand, although many of them provide an SDK to handle and invoke measurements automatically.
On the other hand, there are automatic solutions, based on the usage of controllers
and probes, so the probes are mounted on a robot or similar equipment. These systems
are usually more expensive, but help to partially automate the control process.
Examples are the Fisher models “MMS INSPECTION” or “MP0/MP0R SERIES”
[17]. eQUALS aims to use non-expensive solutions to allow a rapid integration in the
factory.
3.3 Phase II
Using the concepts analysed through the virtual validation, a further phase II will develop a final inspection system composed of a number of robots, where each robot in the system has specific inspection points to measure with one or more measurement
devices. In this phase, any robot, PLC, device, etc. could be adapted to be used in this
solution.
In this second phase, some additional features have been identified as relevant,
taking into account manufacturing constraints:
– Flexibility: eQUALS will be developed so the execution of specific inspections to
specific references will be performed.
– User friendliness: a GUI will allow easy interaction, maintainability and additional
operations launching, such as system calibration.
– Data storage: Phase II will include a data monitoring system to store process
execution data.
– The communication PLC-PC will be adapted to specific needs of the OEM, so
ongoing solutions are used. An example is using OPC, OPC-UA, MQTT or ROS as
messaging tools.
4 Results
During the execution of the proof of concept, the main validation points have been
assessed:
• Force applied during measurement: the tests have confirmed that it is possible to
maintain the applied force between 20 N and 40 N. The low value is the minimum
necessary by the measurement device to perform the measurement. The high value
is a confidence range, which is considered as enough to keep safe both car and
device.
• Line speed limit: it has been assessed that the current approach is able to inspect in
movement at a max line speed of 60 cars per minute. Above this threshold, the
necessary approximation procedure is insufficient to achieve a fine measurement
position.
• Measurement time: it is highly related to the measurement device. However, there is
a previous approximation step which requires 4 s. This means that, in an average
1 min cycle time, no more than 10 inspection points can be performed.
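A rough Python sketch of the cycle-time reasoning behind the last point (the 60 s cycle and 4 s approach step are the values reported above, while the per-point measurement time is an assumed placeholder, since it depends on the device):

```python
def max_inspection_points(cycle_time_s=60.0, approach_time_s=4.0,
                          measurement_time_s=1.5):
    """Estimate how many inspection points fit into one line cycle, given a
    fixed approximation (approach) step per point plus the device measurement
    time. measurement_time_s is an assumed value used only for illustration."""
    per_point = approach_time_s + measurement_time_s
    return int(cycle_time_s // per_point)

print(max_inspection_points())  # roughly 10 points in an average 1 min cycle
```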
5 Conclusions
In this paper, a new automated system to perform paint quality assessment has been
presented.
A multimodal system using sensitivity data from the robot has been developed in
order to accurately measure the force applied with the measurement device on the car
body. This has been key to allowing measurement without damaging the device or the car, which is especially important in aspect zones.
Offline robotics programming has been used in order to simulate number and
position of robots to make the system able to inspect all the control points in a short
cycle time.
Finally, the application of this process in movement has represented the main
challenge. Measuring in movement has provided interesting knowledge which can be
transferred into other robotic applications, such as bead forming or welding, that are
currently developed only when the car is stopped.
The second phase of eQUALS will demonstrate the application of this new
knowledge into a real in-line automotive scenario. In the future, eQUALS will serve as
a basis for further robotic developments regarding in-movement applications, paint
quality assessment, and similar applications.
Acknowledgments. The authors want to thank the ESMERA project (European SMEs Robotic
Applications), which has received funding from the European Union Horizon 2020 Research and
Innovation program, under grant agreement No. 780265 and supports the work of this project.
References
1. ESMERA project website. http://www.esmera-project.eu/. Accessed 30 July 2019
2. Moore, J.R.: Automotive paint application. In: Wen, M., Dušek, K. (eds.) Protective
Coatings. Springer (2017)
3. Svejda, P.: Paint shop design and quality concepts. In: Streitberger, H., Dossel, K. (eds.)
Automotive Paints and Coatings. Wiley (2008)
4. Javad, J., Alborzi, M., Felor, G.: Car paint thickness control using artificial neural network
and regression method. J. Ind. Eng. Int. 7(14), 1–6 (2011)
5. Pires, J.N., Godinho, T., Ferreira, P.: CAD interface for automatic robot welding
programming. Ind. Robot: Int. J. 31(1), 71–76 (2004)
6. Jones, M.G., Erikson, C.E., Mundra, K.: U.S. Patent No. 6,521,861. U.S. Patent and
Trademark Office, Washington, DC (2003)
7. Wang, Y., Chen, T., He, Z., Wu, C.Z.: Review on the machine vision measurement and
control technology for intelligent manufacturing equipment. Control Theory Appl. 32(3),
273–286 (2015)
8. Taniguchi, N.: Current status in, and future trends of, ultraprecision machining and ultrafine
materials processing. CIRP Ann. 32(2), 573–582 (1986)
9. Ferraguti, F., Pertosa, A., Secchi, C., Fantuzzi, C., Bonfè, M.: A methodology for
comparative analysis of collaborative robots for industry 4.0. In: 2019 Design, Automation
and Test in Europe Conference and Exhibition, pp. 1070–1075. IEEE (2019)
10. Chawda, V., Niemeyer, G.: Toward torque control of a KUKA LBR IIWA for physical
human-robot interaction. In: 2017 IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS), pp. 6387–6392. IEEE (2017)
11. Fanuc force sensor product website. https://www.fanuc.eu/es/en/robots/accessories/robot-
vision/force-sensor. Accessed 29 July 2019
12. Pérez, L., Rodríguez, Í., Rodríguez, N., Usamentiaga, R., García, D.: Robot guidance using
machine vision techniques in industrial environments: a comparative review. Sensors 16(3),
335 (2016)
13. Magrini, E., Flacco, F., De Luca, A.: Control of generalized contact motion and force in
physical human-robot interaction. In: 2015 IEEE International Conference on Robotics and
Automation (ICRA), pp. 2298–2304. IEEE (2015)
14. Elcometer website. https://www.elcometer.com. Accessed 29 July 2019
Abstract. Case-based reasoning (CBR) has been applied to all walks of life as
an emerging intelligent fault diagnosis method. For the existing fault situation,
the feature weights are mainly given by experts, which has the disadvantages of strong subjectivity and low accuracy of the retrieval results. A case retrieval algorithm based on mixed weights (CRAMW) is therefore proposed. Firstly, the entropy weight method is used to obtain the objective weights of the feature attributes of each fault case, and the mixed attribute weights are obtained by combining these objective weights with expert experience. Secondly, the similarity between cases is calculated using the mixed weights and the Euclidean distance, so as to find the case in the case base that is most similar to the case to be tested. Finally, a simulation of the algorithm for engineering equipment fault diagnosis is designed. The simulation results show that the algorithm has higher resolution and accuracy than traditional algorithms. Applying the algorithm to equipment fault diagnosis supports the smooth progress of maintenance work and improves equipment support efficiency.
1 Introduction
With the continuous enhancement of China’s military strength and the gradual advancement of military informatization, the complexity of various military equipment has continuously increased, and the requirements for equipment support have become higher and higher. Equipment fault diagnosis technology is one of the important links of equipment support. It provides equipment maintenance personnel with fault information, ensures the smooth operation of equipment support, enables the equipment to be restored to operational status as soon as possible, and lays a solid foundation for the ultimate victory of the war.
Due to the complexity of the battlefield environment and the particularity of
weapons and equipment, it is not only difficult to establish an accurate fault diagnosis
model, but also difficult to effectively analyze the input and output signals. Artificial intelligence has machine learning capability, and several case-based reasoning fault diagnosis methods based on artificial intelligence have been proposed. The core idea of CBR (Case-Based Reasoning) is to use past problem-solving experience to solve emerging problems: through case retrieval, the case most similar to the new situation is found in the historical case database [1]. CBR consists of four basic processes: case retrieval, case reuse, case revision, and case retention.
CBR-based fault reasoning has been widely used in various industries. As early as 1971, Kling [2] proposed a memory network model and a case retrieval algorithm, which is considered the first program to develop analogical information and use it to solve problems. Jaime [3–5] proposed the concepts of transformational analogy and derivational analogy in case-based reasoning, and developed the PRODIGY system in 1991. In recent years,
domestic and foreign scholars have conducted a lot of research on fault diagnosis and
case retrieval. Baogang [6] constructed a case-based-reasoning intelligent diagnosis model for airborne missiles. Yong [7] applied the case-based reasoning method to the fault diagnosis of aerospace measurement and control equipment, effectively improving the efficiency of equipment fault diagnosis. Mingju [8] applied the formal concept to the fault case retrieval process and proposed an improved similarity calculation method. Bin [9] proposed an overall similarity calculation method based on edit distance.
Most of the above-mentioned documents directly give the weight of the fault case
characteristics by expert experience, and there are some disadvantages such as strong
subjectivity and low accuracy of fault case retrieval. Based on this, an equipment fault
case retrieval algorithm based on hybrid weights is proposed, which is applied to the
equipment fault case retrieval to improve the accuracy and resolution of the search
results.
2 CBR-Based Fault Diagnosis

The CBR-based fault diagnosis method essentially searches the library of historical fault cases, finds a case similar to the fault to be diagnosed, and then diagnoses the faulty equipment after appropriate case revision. In the CBR-based fault diagnosis method, case retrieval
is its most important step, and the choice of case retrieval algorithm directly affects the
accuracy of fault diagnosis. CBR retrieval usually calculates the degree of similarity between each feature attribute of the problem case and of the fault case to obtain the local similarity LS, and then calculates the global similarity GS from the weight and local similarity of each feature attribute. Finally, the most similar historical case is chosen according to the magnitude of the global similarity. The formula is as follows:
GS(X, Y) = \sum_{j=1}^{n} W_j LS_j(X, Y)    (1)
where X and Y represent two different cases, n is the number of case feature attributes, and W_j is the weight of the j-th feature attribute.
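To make the use of Eq. (1) concrete, the following Python sketch (not from the paper; the specific local-similarity function is an illustrative assumption) computes the global similarity as a weighted sum of per-attribute local similarities.

```python
# Sketch of Eq. (1): global similarity as a weighted sum of local similarities.
# The local similarity used here (1 minus the normalized absolute difference)
# is an illustrative assumption, not specified at this point in the paper.

def local_similarity(x_j, y_j, value_range=1.0):
    """Local similarity of one feature attribute, assumed to lie in [0, 1]."""
    return 1.0 - abs(x_j - y_j) / value_range

def global_similarity(x, y, weights):
    """GS(X, Y) = sum_j W_j * LS_j(X, Y), cf. Eq. (1)."""
    assert len(x) == len(y) == len(weights)
    return sum(w * local_similarity(xj, yj) for w, xj, yj in zip(weights, x, y))

# Example: two cases with three normalized attributes and weights summing to 1.
print(global_similarity([0.2, 0.5, 0.9], [0.3, 0.5, 0.7], [0.5, 0.3, 0.2]))
```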
3 CRAMW Algorithm
3.1 Determination of Attribute Weights Based on the Entropy Weight Method
In the past, methods for weighting case attributes have mainly included the domain expert scoring method, the analytic hierarchy process, and the comprehensive evaluation method. In these methods the weights are mainly given by experts based on their experience; although the principle is simple and easy to understand, the results are subjective. The entropy weight method based on information entropy can be used to determine the objective weight of each symptom attribute of a case, which can further improve the accuracy of fault diagnosis [10]. The calculation steps of the entropy weight method are as follows:
(1) Build the fault symptom matrix Q of m fault cases and n symptom attributes:

Q = (q_{ij})_{m×n}, i = 1, 2, ..., m; j = 1, 2, ..., n.    (2)

(2) Normalize each column of the fault symptom matrix Q to obtain the normalized matrix B:

b_{ij} = q_{ij} / \sqrt{\sum_{i=1}^{m} q_{ij}^2}    (3)

(3) Calculate the information entropy H_j of the j-th symptom attribute:

H_j = -(1/\ln m) \sum_{i=1}^{m} f_{ij} \ln f_{ij}    (4)

where

f_{ij} = b_{ij} / \sum_{i=1}^{m} b_{ij}    (5)

(4) Calculate the entropy weight w_j of the j-th symptom attribute:

w_j = (1 - H_j) / \sum_{j=1}^{n} (1 - H_j), with \sum_{j=1}^{n} w_j = 1    (6)
The final failure symptom attribute weighting result given in this paper consists of
two parts of weights. The comprehensive weight calculation method is as follows:
W_j = W_j^{(1)} W_j^{(2)} / \sum_{j=1}^{n} W_j^{(1)} W_j^{(2)}    (7)
where W_j^{(1)} is the objective weight calculated from the fault symptom data using the information entropy weight method, and W_j^{(2)} is the subjective weight given directly by expert experience.
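The weight calculation of Eqs. (2)–(7) can be sketched as follows (Python/NumPy; the function names and the small example matrix are illustrative assumptions, not the paper's case data, and all symptom values are assumed positive).

```python
import numpy as np

def entropy_weights(Q):
    """Objective weights from the entropy weight method, Eqs. (2)-(6).
    Q is an m x n fault-symptom matrix (m cases, n symptom attributes)."""
    B = Q / np.sqrt((Q ** 2).sum(axis=0))        # Eq. (3): column normalization
    F = B / B.sum(axis=0)                        # Eq. (5)
    m = Q.shape[0]
    H = -(F * np.log(F)).sum(axis=0) / np.log(m) # Eq. (4); assumes all f_ij > 0
    return (1 - H) / (1 - H).sum()               # Eq. (6)

def mixed_weights(w_objective, w_expert):
    """Eq. (7): W_j = W_j^(1) W_j^(2) / sum_j W_j^(1) W_j^(2)."""
    prod = np.asarray(w_objective) * np.asarray(w_expert)
    return prod / prod.sum()

# Illustrative example with 4 cases and 3 symptom attributes.
Q = np.array([[0.2, 0.8, 0.5],
              [0.4, 0.7, 0.1],
              [0.9, 0.6, 0.3],
              [0.1, 0.9, 0.7]])
w_obj = entropy_weights(Q)
w_exp = np.array([0.5, 0.2, 0.3])                # subjective (expert) weights
print(mixed_weights(w_obj, w_exp))
```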
In the case library, the i-th case c_i is represented by its feature attribute values x_{i1}, x_{i2}, x_{i3}, ..., x_{in}.
The similarity matching algorithm based on Euclidean distance is the most classic
and practical when performing similarity calculation. The complete failure case simi-
larity calculation process should include two steps:
First, calculate the Euclidean distance between the symptom values of the case to be diagnosed and of each case in the case library:

d = \sqrt{\sum_{j=1}^{n} (x_{ij} - x_{0j})^2}    (9)
Second, convert the weighted distance D(C_0, C_i), obtained by weighting the distance terms with the mixed weights W_j, into the similarity:

sim(C_0, C_i) = 1 / (1 + D(C_0, C_i))    (11)
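A minimal sketch of this two-step similarity computation follows (Python/NumPy). Since Eq. (10) is not reproduced above, the weighting of the squared differences by the mixed weights W_j inside the distance is an assumption based on the description in the abstract.

```python
import numpy as np

def weighted_distance(x0, xi, weights):
    """Weighted Euclidean distance between the case to be diagnosed (x0)
    and a library case (xi); assumed form using the mixed weights W_j."""
    x0, xi, w = map(np.asarray, (x0, xi, weights))
    return np.sqrt((w * (xi - x0) ** 2).sum())

def similarity(x0, xi, weights):
    """Eq. (11): sim(C0, Ci) = 1 / (1 + D(C0, Ci))."""
    return 1.0 / (1.0 + weighted_distance(x0, xi, weights))

# Rank the library cases by similarity to the case to be diagnosed.
case_library = np.array([[0.2, 0.8, 0.5], [0.4, 0.7, 0.1], [0.9, 0.6, 0.3]])
x0 = np.array([0.3, 0.75, 0.45])
W = np.array([0.4, 0.35, 0.25])                  # mixed weights from Eq. (7)
sims = [similarity(x0, c, W) for c in case_library]
print(sorted(enumerate(sims, start=1), key=lambda t: -t[1]))
```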
4 Simulation Verification

In order to verify the feasibility of the mixed-weight equipment fault case retrieval algorithm, this paper uses different case retrieval algorithms to diagnose a certain piece of engineering equipment. Fifteen typical cases (C1–C15) are selected as the case library for the case to be diagnosed, and each case has 9 symptom attributes (V1–V9). The symptoms of the fault to be diagnosed and of the fault cases are shown in Table 1.
The attribute weight vector determined by the expert scoring form is (0.25, 0.025,
0.025, 0.1, 0.2, 0.05, 0.05, 0.1, 0.2), and the attribute weight vector determined by the
entropy weight method is (0.0695, 0.0629, 0.0685, 0.0857, 0.1651, 0.1108, 0.1390,
0.1482, 0.1502), the attribute blending weight vector determined by the expert scoring
and entropy weight method is (0.1453, 0.0131, 0.0143, 0.0717, 0.2761, 0.0463,
0.0581, 0.1239, 0.2512). After normalizing the data in Table 1, the Expert Weighted Euclidean Algorithm (EWEA), the Entropy-Euclidean Algorithm (EEA) and the proposed CRAMW algorithm are used to perform similarity calculations on the equipment failure cases.
Table 2 gives the fault diagnosis results of different similarity algorithms. Figure 1
shows the comparison results of different similarity algorithms.
It can be seen from Table 2 and Fig. 1 that the results obtained by different sim-
ilarity algorithms are consistent, that is, the fault case to be tested has the highest
similarity with Case C13 . By comparison, the CRAMW algorithm has higher resolution
than other algorithms.
In this paper, “accuracy” is defined as the number of retrieved cases whose similarity is not less than 0.9 divided by the total number of cases retrieved. Different similarity algorithms and different cases to be diagnosed are selected for comparison experiments. The total number of cases retrieved is the number of cases with similarity
not less than 0.85. When the number of cases in the library is N = 15, the retrieval results are shown in Table 3; when N = 30, the retrieval results are shown in Table 4. As can be seen from the tables, the CRAMW algorithm has higher accuracy than the other algorithms. Meanwhile, the total number of cases retrieved by the CRAMW algorithm is smaller than that of the other two algorithms, so that, for the same retrieval effect, redundancy among the retrieved cases is effectively avoided and the case retrieval time is shortened. In addition, as the number of cases in the case library increases, the case retrieval capability also improves continuously.
Table 3. Comparison of case retrieval accuracy between different case retrieval algorithms at N = 15

Algorithm | Total number of cases retrieved | Retrieved cases with Sim ≥ 0.9 | Accuracy | Maximum similarity of retrieved cases
EWEA algorithm | 9 | 5 | 0.556 | 0.9459
EEA algorithm | 10 | 5 | 0.4 | 0.9430
CRAMW algorithm | 8 | 5 | 0.625 | 0.9462

Table 4. Comparison of case retrieval accuracy between different case retrieval algorithms at N = 30

Algorithm | Total number of cases retrieved | Retrieved cases with Sim ≥ 0.9 | Accuracy | Maximum similarity of retrieved cases
EWEA algorithm | 24 | 17 | 0.708 | 0.9442
EEA algorithm | 28 | 15 | 0.536 | 0.9321
CRAMW algorithm | 19 | 7 | 0.789 | 0.9514
Using the CRAMW algorithm proposed in this paper, the fault case data of Table 1 are retrieved, and the top 8 cases ranked by similarity to the case to be diagnosed are C13 > C12 > C15 > C1 > C6 > C10 > C3 > C9. The retrieved cases are then modified and adjusted accordingly to obtain the optimal diagnosis strategy for the case to be diagnosed, so that the equipment can return to its normal state as soon as possible and contribute to the final victory of the war.
5 Conclusions
In order to improve the accuracy of equipment fault case retrieval, this paper proposes
an equipment fault case retrieval algorithm based on hybrid weights. The entropy
weight method is used to calculate the objective weight of the case symptom attribute,
which is combined with the subjective weight determined by expert experience, and the similarity between the case to be diagnosed and the cases in the fault case library is then calculated using the Euclidean distance. The algorithm overcomes the subjectivity of determining weights solely from expert experience. The calculation results show that, for a given case library size, the algorithm has higher resolution than the other two algorithms, and that for different case library sizes its case retrieval accuracy is higher. Applying it to equipment fault diagnosis allows equipment to be repaired in time, which is conducive to the smooth implementation of equipment support work.
References
1. Bingyang, L., Liting, H., Tingmei, Z.H.: Fault diagnosis system for locomotive based on case-based reasoning. J. Wuhan Univ. Technol. (Inf. Manage. Eng. Edn.) 37(01), 38–42 (2015)
2. Robert, E.K.: A paradigm for reasoning by analogy. Artif. Intell. 2(2), 147–178 (1971)
3. Jaime, G.C.: A computational model of analogical problem solving. In: The 7th International
Joint Conference on Artificial Intelligence, pp. 147–152. Morgan Kaufmann Publishers Inc,
San Francisco (1981)
4. Jaime, G.C.: Derivational analogy and its role in problem solving. In: The Third AAAI
Conference on Artificial Intelligence (AAAI 1983). AAAI Press, Washington, pp. 64–69
(1983)
5. Jaime, G.C., Oren, E., et al.: PRODIGY: an integrated architecture for planning and learning.
SIGART Bull. 2(4), 51–55 (1991)
6. Baogang, L.: Design of aviation missile fault intelligent diagnosis model based on CBR.
Ordnance Autom. 34(03), 13–17 (2015)
7. Yong, L., Haichao, L., Xizhong, G., Bingguang, Z.H.: Fault diagnosis of space TT&C
equipment based on case-based reasoning. Telecommun. Technol. 57(02), 236–242 (2017)
8. Mingju, F., Yun, Z.H., Zhengguo, X.: Case retrieval for faults of steam turbines based on
formal concept. J. Shandong Univ. Sci. Technol. (Nat. Sci. Edn) 36(04), 24–30 (2017)
9. Bin, S.H., Shuyu, Z.H.: A case-based reasoning method for fault diagnosis of CNC machine
tools based on edit distance. Chin. J. Constr. Mach. 15(04), 359–364 (2017)
10. Barnum, H., Barrett, J., Clark, L.O., et al.: Entropy and information causality in general
probabilistic theories. New J. Phys. 3, 1–32 (2010)
Cascaded Organic Rankine Cycles (ORCs)
for Simultaneous Utilization of Liquified
Natural Gas (LNG) Cold Energy
and Low-Temperature Waste Heat
Abstract. Liquified Natural Gas (LNG) is a good way to transport natural gas
from suppliers to end consumers. LNG contains a huge amount of cold energy
due to the energy consumed in the liquefaction process. Generally, the LNG cold
energy is lost during the regasification process at the receiving terminal. Power
generation with LNG as the heat sink is an energy-efficient and environment-
friendly way to regasify LNG. Among different kinds of power generation
technologies, Organic Rankine Cycle (ORC) is the most promising power cycle
to recover LNG cold energy. ORC has been widely used to convert low-
temperature heat into electricity. If low-temperature waste heat and LNG cold energy are utilized simultaneously, the efficiency of the whole system can be improved significantly. However, due to the large temperature difference between the low-temperature waste heat source and LNG, a one-stage ORC cannot exploit the waste heat and LNG cold energy efficiently. Therefore, a cascaded
ORC system is proposed in this study. The optimization of the integrated system
is challenging due to the non-convexity and non-linearity of flowsheet and the
thermodynamic properties of the working fluids. A simulation-based optimiza-
tion framework with Particle Swarm Optimization algorithm is adopted to
determine the optimal operating conditions of the integrated system. The maxi-
mum unit net power output of the integrated system can reach 0.096 kWh per
kilogram LNG based on the optimal results.
Keywords: Organic Rankine Cycles · LNG cold energy · Waste heat · Process optimization
1 Introduction
2 Process Description
The layout of the proposed cascaded ORC system is illustrated in Fig. 1. The higher
temperature cycle converting waste heat into electricity is called Top Cycle (TC) and
the lower temperature cycle utilizing LNG cold energy is called Bottom Cycle (BC).
Low-temperature waste heat acts as the heat source in the top ORC, and the conden-
sation heat is the heat source of the bottom ORC. LNG is pumped to the evaporation
pressure, which is a key variable in the system. And then the LNG evaporates in the
condenser of the bottom ORC acting as the heat sink. Since the temperature of LNG is
still below ambient temperature after evaporation, LNG is heated up by seawater. It is
assumed that the LNG is heated up to 10 °C by the seawater in this study. To improve
the utilization efficiency of the low-temperature waste heat, a natural gas superheater is set between the waste heat stream and the LNG, as shown in Fig. 1. Therefore, the top cycle mainly focuses on recovering the low-temperature waste heat and the bottom cycle aims at recovering the LNG cold energy. In this study, the LNG is assumed to be used for a power plant, and the waste heat is assumed to be treated flue gas at 150 °C. Since
the LNG mass flowrate is assumed to be 1620 kg/h, the mass flowrate of flue gas (mostly CO2) should be 9509 kg/h based on the combustion of natural gas. The
composition of LNG is the same as that in [7]. Due to the different operating tem-
perature ranges of the top cycle and bottom cycle, working fluid should be chosen
carefully for the top and bottom cycle, respectively. The top cycle is like a conventional
ORC for low-temperature waste heat recovery. Based on a new pinch based working
fluid selection study [9] and the waste heat conditions, R600 is chosen as the working
fluid for the top cycle in this work. For the bottom cycle, the condensation temperature
of the working fluid should be as close as possible to the temperature of LNG. Based on
the saturation temperature at 1 bar, R1150 has the lowest saturation temperature among
22 working fluids investigated by Yu et al. [7]. Therefore, R1150 is chosen as the
working fluid for the bottom cycle. The integrated process is simulated in Aspen HYSYS, and the Peng-Robinson equation of state is chosen for the thermodynamic property calculations of the working fluids and LNG.
3 Process Optimization
Table 1. The lower and upper bounds and optimal values of independent variables
Variables Lower bound Upper bound Optimal value
Condensation pressure of TC (bar) 1 5 1.37
Evaporation pressure of TC (bar) 5 34.1 14.69
Working fluid flowrate of TC (kmol/h) 5 50 18.43
Condensation pressure of BC (bar) 1 5 3.24
Evaporation pressure of BC (bar) 10 45.5 38.96
Working fluid flowrate of BC (kmol/h) 10 50 32.17
LNG evaporation pressure (bar) 5 150 79.52
Heat load of TC evaporator (kW) 50 400 181.69
Heat load of natural gas superheater (kW) 0 150 77.20
4 Results and Discussion

The optimal values of the independent variables are listed in Table 1. The maximum
power output of the integrated system is 155.5 kW. Since the mass flowrate of LNG is
assumed as 1620 kg/h, the unit power output is 345.6 kW with the LNG flowrate being
1 kg/s. Therefore, the power output is 0.096 kWh/kg (LNG based metrics). Compared
with the electricity consumed during the liquefaction process, the power output of the
system is still quite low. The Logarithmic Mean Temperature Difference (LMTD) of
top cycle evaporator, top cycle condenser, bottom cycle condenser, and natural gas
superheater are 20.44 °C, 11.85 °C, 19.72 °C, and 15.67 °C respectively. The LMTD
of the top cycle evaporator is larger than other heat exchangers. The final temperature
of waste heat is 44.17 °C, which means that the waste heat recovery ratio is very high.
The top cycle power output, bottom cycle power output and natural gas expander
power output are 29.56 kW, 36.88 kW and 102.30 kW respectively. It is clear that the
expansion of natural gas generates 61% of the total power output. However, the LNG
pump consumes 10.07 kW pumping work. The direct expansion of natural gas is an
effective way to recover the LNG cold energy as well. These are the optimal operating
conditions obtained from the PSO algorithm with a maximum of 100 generations. If the
population size and the iteration numbers are increased, better results could be
obtained. There is still space to improve the efficiency of the system. Other than the
cascaded ORC system, series ORC system could probably result in higher efficiency.
However, series ORC system is out of the scope of this paper but will be investigated in
the future work.
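For illustration only, a simplified particle swarm optimization loop over the nine decision variables of Table 1 might look like the sketch below (Python/NumPy). The objective here is a dummy placeholder, since in the actual framework each evaluation calls the Aspen HYSYS flowsheet simulation; the swarm size, coefficients and random seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Decision-variable bounds taken from Table 1 (pressures in bar, flowrates in
# kmol/h, heat loads in kW), in the same order as the table rows.
lower = np.array([1, 5, 5, 1, 10, 10, 5, 50, 0], dtype=float)
upper = np.array([5, 34.1, 50, 5, 45.5, 50, 150, 400, 150], dtype=float)

def net_power(x):
    """Placeholder objective. In the paper this is the net power output
    returned by the Aspen HYSYS simulation of the cascaded ORC flowsheet."""
    return -np.sum((x - 0.5 * (lower + upper)) ** 2)   # dummy surrogate

def pso(n_particles=30, n_generations=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(lower, upper, (n_particles, lower.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([net_power(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_generations):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)               # respect variable bounds
        vals = np.array([net_power(p) for p in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

print(pso())
```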
5 Conclusion
In this study, a cascaded ORC system to recover low-temperature waste heat and LNG
cold energy is simulated and optimized. Cascaded ORC system can improve LNG cold
energy recovery efficiency. R600 and R1150 are chosen as the working fluids for the
top and bottom cycle respectively since they have appropriate critical properties in their
corresponding operating temperature range. Process optimization is performed based
on a simulation-based optimization framework, which adopts PSO as the optimization
algorithm. The optimal operation conditions of the system are derived based on this
optimization framework. The maximum net power output is 343.2 kW with the LNG
flowrate being 1 kg/s, which is equivalent to 0.096 kWh/kg. The rigorous techno-
economic optimization of the integrated energy system and series ORC system will be
investigated in future work.
References
1. Liu, H., You, L.: Characteristics and applications of the cold heat exergy of liquefied natural
gas. Energy Convers. Manage. 40, 1515–1525 (1999)
2. Sung, T., Kim, K.C.: LNG cold energy utilization technology. In: Zhang, X., Dincer, I. (eds.)
Energy Solutions to Combat Global Warming, pp. 47–66. Springer, Cham (2017)
3. Patil, V.R., Biradar, V.I., Shreyas, R., Garg, P., Orosz, M.S., Thirumalai, N.C.: Techno-
economic comparison of solar organic Rankine cycle (ORC) and photovoltaic (PV) systems
with energy storage. Renew. Energy 113, 1250–1260 (2017)
4. Wang, E., Zhang, H., Fan, B., Liang, H., Ouyang, M.: Study of gasoline engine waste heat
recovery by organic rankine cycle. Adv. Mater. Res. 383, 6071–6078 (2012)
5. Drescher, U., Brüggemann, D.: Fluid selection for the organic rankine cycle (ORC) in
biomass power and heat plants. Appl. Therm. Eng. 27, 223–228 (2007)
6. Yu, H., Feng, X., Wang, Y., Biegler, L.T., Eason, J.: A systematic method to customize an
efficient organic rankine cycle (ORC) to recover waste heat in refineries. Appl. Energy 179,
302–315 (2016)
7. Yu, H., Kim, D., Gundersen, T.: A study of working fluids for organic rankine cycles
(ORCs) operating across and below ambient temperature to utilize liquefied natural gas
(LNG) cold energy. Energy 167, 730–739 (2019)
8. Lin, W., Huang, M., He, H., Gu, A.: A transcritical CO2 Rankine cycle with LNG cold
energy utilization and liquefaction of CO2 in gas turbine exhaust. J. Energy Res. Technol.
131, 042201 (2009)
9. Yu, H., Feng, X., Wang, Y.: A new pinch based method for simultaneous selection of
working fluid and operating conditions in an ORC (Organic Rankine Cycle) recovering
waste heat. Energy 90, 36–46 (2015)
10. Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory. In: Proceedings of
the Sixth International Symposium on Micro Machine and Human Science MHS, pp. 39–43.
IEEE (1995)
11. Pavão, L.V., Costa, C.B.B., Ravagnani, M.A.S.S.: Heat exchanger network synthesis
without stream splits using parallelized and simplified simulated annealing and particle
swarm optimization. Chem. Eng. Sci. 158, 96–107 (2017)
12. Javaloyes-Antón, J., Ruiz-Femenia, R., Caballero, J.A.: Rigorous design of complex
distillation columns using process simulators and the particle swarm optimization algorithm.
Ind. Eng. Chem. Res. 52, 15621–15634 (2013)
13. Liu, H., Zhang, H., Yang, F., Hou, X., Yu, F., Song, S.: Multi-objective optimization of fin-
and-tube evaporator for a diesel engine-organic Rankine cycle (ORC) combined system
using particle swarm optimization algorithm. Energy Convers. Manage. 151, 147–157
(2017)
14. Braimakis, K., Preißinger, M., Brüggemann, D., Karellas, S., Panopoulos, K.: Low grade
waste heat recovery with subcritical and supercritical organic rankine cycle based on natural
refrigerants and their binary mixtures. Energy 88, 80–92 (2015)
Manufacturing Technology
Effect of T-groove Parameters on Steady-State
Characteristics of Cylindrical Gas Seal
Keywords: Gas cylinder film seal · T-groove parameters · Gas film stiffness · Leakage
1 Introduction
In the recent development of sealing technology, the new and advanced gas film seal technology is gaining more attention in industrial gas turbines and aviation engines [1], since, compared with other technologies, it offers lower leakage, wear, and energy consumption, together with longer life expectancy and simpler, more reliable operation [2]. Ma et al. carried out a series of studies on the cylindrical gas seal
and obtained results of gas film reaction force, film stiffness, and friction torque and
seal leakage of the seal [3]. The dynamic characteristics of the cylindrical gas seal were
also studied by the perturbation method [4]. Ma et al. further compared the perfor-
mance of several common spiral groove forms and found that the cylindrical gas film
seal is more suitable for sealing the key parts of the aero engine [5].
As a new structure of cylindrical gas seal, the T-groove cylindrical gas seal has received relatively little research attention. The T-groove cylindrical gas seal is therefore studied in this paper, focusing on the influence of the T-groove parameters on the steady-state characteristics of the cylindrical gas film using the control variable method. This research plays an important role in guiding the design and application of cylindrical gas seals.
2 Model
Fig. 1. The sketch of the expansion diagram of T-groove cylinder gas film seal
Figure 1 illustrates the sketch of the expansion diagram of T-groove cylinder gas film
seal. Sealing medium is air, and the detailed description of each parameter is presented
in Table 1.
3 Results
Fig. 4. Effect of number of T-groove on the maximum pressure and the film stiffness (a) and the
leakage (b)
Fig. 5. Effect of groove depth on the maximum pressure and the film stiffness (a), and the
leakage (b)
The impact of groove depth on the leakage is shown in Fig. 5(b). It can be seen from the figure that the leakage reaches its minimum value when the groove depth is also about 20 µm. This is because, with increasing groove depth, the hydrodynamic pressure effect is enhanced and the leakage is therefore reduced. However, when the depth exceeds 20 µm, a negative pressure zone is formed when gas flows through the groove, and the formation of this negative pressure zone prevents the hydrodynamic pressure effect from increasing further. Meanwhile, the sealing gap between the moving ring and the stationary ring increases as the groove depth increases. Therefore, the leakage begins to increase when the groove depth exceeds about 20 µm.
Fig. 6. Effect of groove width ratio on the maximum pressure and the film stiffness (a), and the
leakage (b)
Figure 6(b) shows the impact of the groove width ratio (γ) on the leakage. The leakage increases as the groove width ratio increases. This is because increasing the groove width ratio enlarges the sealing gap in the circumferential direction, which leads to higher leakage.
4 Summary
The effect of several parameters of the T-groove parameters on the sealing performance
is investigated, and from the analysis, the following conclusions can be drawn:
1. The film stiffness increases as the number of grooves increases, and it tends to stabilize when the number of grooves reaches 22. However, as the number of grooves increases, the leakage decreases. These results indicate that, when the other parameters are constant, there is an optimal number of grooves at which both the leakage and the film stiffness reach stable values.
2. With increasing groove depth, the film stiffness exhibits a maximum value and the leakage a minimum value. These two extreme values are achieved simultaneously at a groove depth of about 20 µm. This shows that, when the other parameters are constant, there is an optimal groove depth at which the leakage is reduced to its minimum value and the film stiffness rises to its maximum value.
3. The film stiffness increases with the increase of the groove width ratio. When the
groove width ratio is 0.5, the film stiffness tends to be stabilized, and when the
groove width ratio is 0.6, the maximum pressure reaches the maximum value. The
leakage increases when the groove width ratio increases. This is because increasing
of the groove width ratio causes increase of the sealing gap in the circumferential
direction, which leads to higher leakage.
Acknowledgement. This research was fully supported by the National Natural Science Foun-
dation of China (No. 51765024) and the Youth Project of Science and Technology Department of
Yunnan Province (No. 2017FD132). We gratefully acknowledge the relevant organizations.
References
1. Liu, J., Zhang, Z.: Prospect of aero engine power transmission system in the 21st century. J. Aerosp. Power 16(2), 108–114 (2001). (in Chinese)
2. Cai, R., Gu, B., Song, P.: Process Equipment Sealing Technology, 2nd edn. Chemical
Industry Press, Beijing (2006). (in Chinese)
3. Ma, G., Xi, P., Shen, X., Hu, G.: Analysis of quasi-dynamic characteristics of compliant floating ring gas cylinder seal. J. Aerosp. Power 25(5), 1190–1196 (2010). (in Chinese)
4. Gang, M., Guangzhou, X.U., Shen, X.: Design and analysis for spiral grooved cylindrical
gas seal structural parameter. Lubr. Sealing 32(4), 127–130 (2007)
5. Nakane, H., Maekawa, A.: The development of high-performance leaf seals. J. Eng. Gas
Turbine Power 126(3), 42–350 (2004)
6. Ma, G., Sun, X.-J., Luo, X.-H., He, J.: Numerical simulation analysis of steady-state
properties of gas face and cylinder film seal. Beijing: J. Beijing Univ. Aeronaut. Astronaut.
40(4), 439–443 (2014)
7. Ma, G., Cui, X., Shen, X., Hu, G.: Analysis of performance and interface structure of
cylindrical film seal. J. Aeronaut. Power 26(1), 2610–2615 (2011)
8. Ma, G., Sun, X., He, J., Shen, X.: Simulation analysis of gas face and cylinder film seal by parametric modeling. Lubr. Sealants 38(7), 8–11 (2013)
9. Jing, X.U.: Analytical and Experimental Investigations on the End Face Deformation Mechanisms for High Pressure Spiral Grooved Dry Gas Seals. Zhejiang University of Technology, Hangzhou (2014)
10. Wang, X., Liu, M., Hu, X., Sun, J.: The Influence of T groove layout on the performance
characteristic of cylinder gas seal. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T.
(eds) Advanced Manufacturing and Automation VIII. Lecture Notes in Electrical Engineer-
ing, vol. 484, pp. 701–707 (2018)
Simulation Algorithm of Sample Strategy
for CMM Based on Neural Network Approach
Abstract. This paper proposes the algorithm for analyses of sample strategies.
The back propagation artificial neural network approach is employed to approx-
imate CMM measurements of the circular features of aluminum workpieces machined by a milling process. The discrete data are transformed into continuous
nondeterministic profiles. The profiles are used for simulation to estimate the
maximum possible error in different sample strategies for various diameters.
1 Introduction
Coordinate Measuring Machines (CMMs) play an important role in the part inspection
and quality control [1]. One of the important parameters in measuring strategy with
CMM is the sample size of measuring points. The sample-point measurements provide
discrete coordinates of the workpiece surface. The point coordinates are used to assess whether a form or dimension deviation is inside or outside the tolerance limits. The optimal choice of discrete points is also related to the applied evaluation methods and tolerance types [2, 3]. The reliability and quality of CMM sample
assessment depends on density and location of measured points [4]. Thus, the
inspection is often a compromise between a required time, cost, and the measuring
uncertainty.
The result of measuring inspection depends on manufacturing process errors as
well. Mesay et al. [5] have classified the process error into systematic and random
components. In another paper, Qimi, Mesay et al. [6] have estimated the frequency of
systematic errors by use of Fourier analyses. Other authors have investigated the
measuring uncertainty due to the sample size based on approximation of aperiodic
deterministic profile with Fourier series [7, 8].
Moschos et al. [9] suggested a Bayesian regularized artificial neural network (BRANN) model trained with a relatively small sample size to predict the variability of a large data sample. Other authors determined an optimal inspection sample size based on measuring errors approximated by an ANN for various machining processes and nominal sizes [10].
The state of the art in Artificial Neural Networks (ANNs) is based on our level of understanding of the function of biological neurons [11]. One of the most important advantages of an ANN is that it can mimic processes with an unknown relation between input and output data. In the case of limited information about a complex process, the ANN can provide a relatively precise solution based on limited experimental data. The artificial network is composed of differently connected artificial neurons, named processing elements (PEs). The PEs are connected into input layers, hidden layers, and output layers to create the artificial network [12]. A multilayer ANN can include many layers, but to reduce the computation time, most commercial systems do not exceed two layers. It is important to notice that the final solution of an ANN is not unique, but is one that satisfies the minimal error requirements.
A multilayer feed-forward back-propagation (BP) ANN [13] was created in the MATLAB program environment (Fig. 1). The design of the BP ANN includes a number of steps [14], such as: preparing and pre-processing the training data; creating a network structure; configuring the network; initializing weights and biases; and training, validating and testing the network. Let us have a look at each of these steps in detail.
We denote u_k as the network input and R_k as the target; thus we have a network with one input and one output. In order to achieve better accuracy, we apply a deep learning strategy in this work. There are two hidden layers, with 260 neurons in the first and 12 neurons in the second layer. The number of layers and neurons was chosen by a trial and error procedure.
In our case, we utilize the tan-sigmoid transfer function (tansig) for both hidden layers. The tansig function has the following form:

y = f(a) = 2 / (1 + e^{-2a}) - 1    (1)
where the argument a is a summation of the weights w_{ij} and biases s_i (threshold values), given by the following expression:

a = \sum_{i=1}^{n} w_{ij} u_j - s_i    (2)
Fig. 1. A 1-260-12-1 feed forward ANN architecture for approximation of the measuring
profiles
The network performance is evaluated with the mean squared error (MSE):

MSE = (1/m) \sum_{i=1}^{m} (y_i - t_i)^2    (3)

where y_i is the network output, t_i is the corresponding target value, and m is the number of training samples.
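The sketch below (Python/NumPy, not the authors' MATLAB Neural Network Toolbox implementation) illustrates Eqs. (1)–(3) for the 1-260-12-1 architecture: tansig hidden layers, a linear output, and the MSE measure; the randomly initialized weights stand in for trained parameters, and the toy radius profile is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def tansig(a):
    """Eq. (1): y = 2 / (1 + exp(-2a)) - 1 (equivalent to tanh(a))."""
    return 2.0 / (1.0 + np.exp(-2.0 * a)) - 1.0

# Randomly initialized 1-260-12-1 network; in the paper the weights and biases
# are obtained by training the network (biases play the role of the thresholds
# s_i in Eq. (2)).
W1, b1 = rng.normal(size=(260, 1)), rng.normal(size=(260, 1))
W2, b2 = rng.normal(size=(12, 260)), rng.normal(size=(12, 1))
W3, b3 = rng.normal(size=(1, 12)), rng.normal(size=(1, 1))

def forward(u):
    """Forward pass: u is a 1 x N row of angle inputs, returns 1 x N radii."""
    a1 = tansig(W1 @ u + b1)        # first hidden layer (260 neurons)
    a2 = tansig(W2 @ a1 + b2)       # second hidden layer (12 neurons)
    return W3 @ a2 + b3             # linear output layer

def mse(y, t):
    """Eq. (3): mean squared error between network output y and target t."""
    return np.mean((y - t) ** 2)

phi = np.linspace(0, 2 * np.pi, 480, endpoint=False).reshape(1, -1)
targets = 50.0 + 1e-3 * np.sin(3 * phi)            # toy radius profile (mm)
print(mse(forward(phi), targets))
```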
3 Model Implementation
The radius variable R_k of each measured point (X_k, Y_k) is calculated relative to the circle centre (X_c, Y_c):

R_k = \sqrt{(X_k - X_c)^2 + (Y_k - Y_c)^2}    (4)

The value of the radial angle was set as u_k = 2πk/480, k = 0, ..., 479, where k is the index number of the measured points. A few examples of the estimated radius variable versus the angle are shown in the polar plots in Fig. 2.
These estimated profiles were approximated with the ANN model. An example of an approximated nondeterministic profile is illustrated in Fig. 3. The lowest graph of Fig. 3 shows the approximation error at each particular point; the total range of the fit errors is within ±0.9 × 10^-4 mm for this particular profile. The nondeterministic profile is equivalent to a continuous function, which provides an opportunity to simulate measuring strategies based on real measurements under perfect repeatability conditions.
4 Simulation Procedure
In order to estimate the maximum measuring error due to the sample size, an additional
procedure was developed. A common practice in CMM measuring is using the sample
consisting of equally distributed measuring points and the LSC as the default method.
An example of the simulation, using a five-point sample (n = 5), is illustrated in Fig. 4. A sample of n equally distributed points is taken from the ANN profile (Sect. 3), and the n-point sample is rotated clockwise over m = 10^3 iterations. In each iteration the sample is rotated by the angular step s = 2π/(n·m). When the position of the first point p_1 is defined, the other (n - 1) sample points (p_2, p_3, ..., p_n) are determined uniquely with equal spacing 2π/n. The sample of n radius values r_k^{ANN} (k = 1, ..., n) is generated from the trained network at the corresponding uniform point locations.
Fig. 3. The continuous nondeterministic profile approximated with ANN (D_3 = 100 mm)
where a “best fit” least-squares circle of the simulated points, with circle centre (u_c, v_c), was found from the following system:

u_c \sum_k u_k^2 + v_c \sum_k u_k v_k = (1/2) (\sum_k u_k^3 + \sum_k u_k v_k^2)
u_c \sum_k u_k v_k + v_c \sum_k v_k^2 = (1/2) (\sum_k v_k^3 + \sum_k v_k^2 u_k)    (7)
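A small sketch of the least-squares circle fit of Eq. (7) follows (Python/NumPy); centring the coordinates about their mean before solving the 2×2 system is an assumption consistent with the standard LSC formulation.

```python
import numpy as np

def lsc_fit(x, y):
    """Least-squares circle through points (x, y); solves the 2x2 system (7)
    in the coordinates u = x - mean(x), v = y - mean(y) (assumed centring)."""
    u, v = x - x.mean(), y - y.mean()
    A = np.array([[np.sum(u * u), np.sum(u * v)],
                  [np.sum(u * v), np.sum(v * v)]])
    b = 0.5 * np.array([np.sum(u ** 3) + np.sum(u * v ** 2),
                        np.sum(v ** 3) + np.sum(v * v * u)])
    uc, vc = np.linalg.solve(A, b)
    xc, yc = uc + x.mean(), vc + y.mean()            # fitted circle centre
    rho = np.sqrt((x - xc) ** 2 + (y - yc) ** 2)     # radius of each point
    return xc, yc, rho

# Example: a slightly noisy 5-point sample on a nominal 50 mm radius circle.
phi = np.linspace(0, 2 * np.pi, 5, endpoint=False)
r = 50.0 + 1e-3 * np.random.default_rng(2).normal(size=5)
xc, yc, rho = lsc_fit(r * np.cos(phi), r * np.sin(phi))
print(xc, yc, rho.max() - rho.min())
```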
The circle centre (x_c, y_c) and the radius values ρ_k are calculated in each iteration. Then the radius variation range of the n-point sample for each particular location is estimated by the residual Δρ = ρ_max - ρ_min. Eventually, after all iterations are completed, the smallest estimated radius variation range Δρ_min for the particular sample size n is determined. The maximum estimation error δ_max due to the sample size n is calculated as δ_max = ΔR^{ANN} - Δρ_min, where ΔR^{ANN} = R^{ANN}_max - R^{ANN}_min is the precise radius variation range estimated from the 480 variables simulated with the continuous virtual profile.
Fig. 4. The five-point sample: p_1, p_2, ..., p_5 – the measured points; ρ_1, ρ_2, ..., ρ_5 – the estimated radius variables; ref.lsc – reference least squares circle; r_1^{ANN} – a radius of the original reference circle; (X_c, Y_c) – a reference circle centre based on 480 points; (x_c, y_c) – a new circle centre based on 5 points.
The simulation procedure described in Sect. 3 was applied with different sample sizes
from 5 to 400 measuring points, and for 9 circle sections with nominal diameter from
40 mm to 500 mm. The final simulation results are tabulated in Table 1.
Table 1. The maximum estimated error δ_max due to the sample size n for various diameters D_i

Sample size n | 5 | 15 | 30 | 60 | 93 | 150 | 200 | 300 | 400
D_1 = 40 mm* | 11.2 | 7.5 | 5.0 | 4.3 | 3.1 | 2.8 | 2.5 | 1.3 | 0.7
D_2 = 80 mm | 10.2 | 5.4 | 5.0 | 4.1 | 2.5 | 1.9 | 1.6 | 1.1 | 0.5
D_3 = 100 mm | 7.1 | 4.9 | 4.0 | 3.1 | 2.5 | 2.0 | 1.3 | 0.7 | 0.4
D_4 = 150 mm | 21.4 | 11.5 | 9.1 | 7.3 | 6.6 | 6.6 | 4.5 | 2.1 | 0.8
D_5 = 200 mm | 15.1 | 4.4 | 2.0 | 0.9 | 0.7 | 0.3 | 0.2 | 0.1 | 0.1
D_6 = 250 mm | 30.2 | 7.8 | 2.6 | 1.9 | 1.1 | 0.7 | 0.7 | 0.4 | 0.0
D_7 = 300 mm | 22.2 | 8.7 | 5.9 | 3.7 | 2.1 | 1.4 | 1.3 | 1.0 | 0.8
D_8 = 400 mm | 21.5 | 8.7 | 4.1 | 1.8 | 1.6 | 1.1 | 0.7 | 0.4 | 0.2
D_9 = 500 mm | 24.3 | 10.6 | 7.8 | 7.8 | 5.0 | 3.2 | 3.2 | 1.9 | 0.8
*δ_max is given in µm
The graphical interpretation of the results (see Fig. 5a) shows that the relation between the maximum estimated error δ_max and the sample size n has a nonlinear, asymptotic behavior. This behavior appears relatively predictable. However, the relation between the maximum estimated error and the diameter size for a given sample size does not follow a clear trend when a five-point sample is used (see Fig. 5b). The maximum estimated error for different diameters varies between 7.1 µm and 30.2 µm. In our tests, the maximum estimated error is up to 6.6 µm for the 93-point sample size, and up to 2.1 µm for 300 measuring points.
Fig. 5. The relationship of the maximum estimated error with the sample size and the diameter size of the circle profiles: (a) the maximum estimated error vs sample size for diameters D_1, D_2, ..., D_9; (b) the maximum estimated error vs diameter size for samples of 5, 93 and 300 points
6 Conclusion
According to the simulation results, the error due to the sample size can be a significant
contributor to the measurement uncertainty and thus it must be considered in the
measuring strategy for CMM.
The simulation procedure presented in this paper is a new algorithm for estimating the maximum error due to the number of measuring points. As shown with the test pieces, the diameter size is not the main factor for defining the sample strategy.
The presented ANN approach can be adapted to profile forms generated by any
machine operations. The approximated nondeterministic profile can be further used as
the continuous function for other simulations regarding the sample strategy, alignment,
filtration methods and measuring uncertainty estimation.
References
1. De Chiffre, L.: Geometrical Metrology and Machine Testing. DTU Mech. Eng. (2015)
2. Summerhays, K.D.: Optimizing discrete point sample patterns and measurement data
analysis on internal cylindrical surfaces with systematic form deviations. Precis. Eng. 26(1),
105–121 (2002)
3. Changcai, C.: Research on the uncertainties from different form error evaluation methods by
CMM sampling. Int. J. Adv. Manuf. Technol. 43(1), 136–145 (2009)
4. Moroni, G.: Coordinate measuring machine measurement planning. Springer, London
(2010)
5. Desta, M.T.: Characterization of general systematic form errors for circular features. Int.
J. Mach. Tools Manuf 43(11), 1069–1078 (2003)
6. Qimi, J.: A roundness evaluation algorithm with reduced fitting uncertainty of CMM
measurement data. J. Manuf. Syst. 25(3), 184–195 (2006)
7. Cho, N.: Roundness modeling of machined parts for tolerance analysis. Precis. Eng. 25(1),
35–47 (2001)
8. Ruffa, S.: Assessing measurement uncertainty in CMM measurements: comparison of
different approaches. Int. J. Metrol. Qual. Eng. 4, 163–168 (2013)
9. Papananias, M.: A novel method based on Bayesian regularized artificial neural networks for
measurement uncertainty evaluation. In: EUSPEN Proceedings of the 16th International
Conference of the European Society for Precision Engineering and Nanotechnology,
Nottingham, UK, pp. 97–98. EUSPEN (2016)
10. Zhang, Y.F.: A neural network approach to determining optimal inspection sampling size for
CMM. Comput. Integr. Manuf. Syst. 9, 161–169 (1996)
11. Grossberg, S.T.: Studies of the Mind and Brain. Reidel Press, Dordrecht (1982)
12. Wang, K.: Applied computational intelligence in intelligent manufacturing systems. In:
International Series on Natural and Artificial Intelligence, vol. 2, 2nd edn. Advanced
Knowledge International, Adelaide (2007)
13. Jones, W.: Back-propagation: a generalized delta learning rule. Byte 12(11), 155–162 (1987)
14. Beale, M.H.: Neural Network Toolbox, User guide (2017)
15. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters.
SIAM J. Appl. Math. 11, 431–441 (1963)
Digital Modeling and Algorithms for Series
Topological Mechanisms Based on POC Set
1 Introduction
of the mechanism as a symbolic polynomial form, together with algebraic operation functions between the symbolic polynomials; however, this method is complicated, which is not conducive to computer reading and recognition. In this paper, according to the symbolic conventions of mechanism science, a digital matrix is proposed to represent the order of the f motion pairs in the mechanism and the orientation relationships between them. At the same time, a 2 × 6 matrix is established to represent the POC set of the mechanism [11]; the first row and the second row represent the number of translational elements and the number of rotational elements of the mechanism, respectively. An automatic generation algorithm for the mechanism POC set is then established to realize automatic analysis of the mechanism POC set. The automatic analysis of the series mechanism POC set is realized by programming, and the correctness of the program is verified by an example.
J_ii—diagonal element, indicating the type of the motion pair (R or P pair), J_ii = R or P;
f—number of motion pairs in the mechanism;
N_ij—orientation relationship between the i-th and j-th motion pairs
A plane substring is a sub-series mechanism in which two or more motion pairs are
connected in series. Therefore, the 10 kinds of plane substrings which are commonly
used are stored in the computer, so that when the mechanism POC set is automatically
analyzed, it can be identified and called. The common motion subtypes, orientation
relationship matrix C and POC set matrix of 10 plane substrings are shown in Table 2.
(1) For a single-degree-of-freedom motion pair, the POC set matrix structure is 2 × 1; (2) for a series mechanism with a degree of freedom less than 6, the POC set matrix structure is 2 × f (f is the number of motion pairs); (3) the first motion pair of a plane substring is not necessarily the first motion pair of the series mechanism, and may be the second, third or fourth motion pair of the series mechanism. The rules for judging the various possible situations are as follows:
① The two-degree-of-freedom plane substrings correspond to the numbers G21, G22 and G23 in Table 2, that is, R//R, R⊥P or P⊥R. If their first motion pair is also the first motion pair of the series mechanism, the POC set matrix is the same as in Table 2. If their first motion pair is the second motion pair of the series mechanism, the rotating-pair and moving-pair elements in the POC set matrix are shifted to the right by one column and the first column is filled with 0. If their first motion pair is the third motion pair of the series mechanism, the rotating-pair and moving-pair elements in the POC set matrix are shifted to the right by two columns and the first and second columns are filled with 0. If their first motion pair is the fourth motion pair of the series mechanism, the rotating-pair and moving-pair elements in the POC set matrix are shifted to the right by three columns and the first, second and third columns are filled with 0.
As shown in Fig. 1, R4//R5 is a plane substring of the series mechanism, and its first rotating pair R4 is the fourth motion pair of the series mechanism; the POC set matrix of the plane substring R4//R5 is therefore

P_{R//R} =
[ 0 0 0 1 0 0 ]
[ 0 0 0 1 0 0 ]    (3)
In the main diagonal direction, the Eq. (3) is decomposed into three third-order
matrices and four second-order matrices, as shown by the dashed boxes in Fig. 3,
which are defined as L31, L32, L33 and L21, L22, L23, respectively.
The bitwise OR operation means that the elements are added together; if the sum of
the moving element or the rotating element is equal to 3, then ti = 3, ri = 3. The bitwise
OR of this tandem mechanism is:
P = P31 + P24 =
[ 2 0 0 0 0 0 ]   [ 0 0 0 1 0 0 ]   [ 3 0 0 0 0 0 ]
[ 1 0 0 0 0 0 ] + [ 0 0 0 1 0 0 ] = [ 1 0 0 1 0 0 ]

The output is: P =
[ 3 0 0 0 0 0 ]
[ 1 0 0 1 0 0 ]
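A hedged sketch of these operations follows (Python/NumPy): shifting a plane-substring POC matrix to the right according to the position of its first motion pair, and combining POC matrices by element-wise addition with saturation at 3. The unshifted form of the R//R substring and the exact saturation rule are inferred from the example above and should be treated as assumptions.

```python
import numpy as np

def shift_poc(poc, start_pos):
    """Shift a plane-substring POC matrix (2 x 6) to the right by
    (start_pos - 1) columns, padding with zeros, cf. rule (1) above."""
    shift = start_pos - 1
    shifted = np.zeros_like(poc)
    if shift < poc.shape[1]:
        shifted[:, shift:] = poc[:, :poc.shape[1] - shift]
    return shifted

def combine_poc(p_a, p_b):
    """'Bitwise OR' of two POC matrices: element-wise addition, saturating a
    row at [3 0 0 0 0 0] once a full 3D translation/rotation set is reached."""
    p = p_a + p_b
    for row in range(2):
        if p[row].sum() >= 3:                 # t_i = 3 or r_i = 3
            p[row] = 0
            p[row, 0] = 3
    return p

# Example from the text: P31 = [2 0 0 0 0 0; 1 0 0 0 0 0] plus the R//R
# substring starting at the fourth motion pair (unshifted form assumed).
P31 = np.array([[2, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0]])
P_RR = np.array([[1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0]])
P24 = shift_poc(P_RR, 4)
print(combine_poc(P31, P24))    # -> [[3 0 0 0 0 0], [1 0 0 1 0 0]]
```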
5 Conclusion
(1) Based on the theory of POC set, a digital description matrix of mechanism
topology is proposed, which not only includes the type of motion pair and axis
orientation of the constituent mechanism, but also facilitates the identification and
analysis of the computer.
(2) A POC set matrix of the series mechanism is established; the POC set of the series mechanism is represented by a 2 × 6 matrix. The output motion information is complete, including not only the translation/rotation type and dimension of the relative motion output but also the characterization of the axis orientation or reference of the output motion; the geometric meaning is clear, and computer calculation, storage and query are convenient;
(3) Our work converts the end-of-chain POC set operations into algebraic additions of matrices of the same dimension by extracting the branched planar substrings and spherical substrings in order.
References
1. Dobrjanskyj, L., Freudenstein, F.: Some applications of graph theory to the structural
analysis of mechanisms 89(1), 153 (1967)
2. Sohn, W.J., Freudenstein, F.: An application of dual graphs to the automatic generation of
the kinematic structures of mechanisms, 392–398 (1986)
3. Olson, S.T., Francis, A.M., Sheffer, R., et al.: Parallel mechanisms of high molecular weight
kininogen action as a cofactor in kallikrein inactivation and prekallikrein activation
reactions. Biochemistry 32(45), 12148 (1993)
4. Chakarov, D., Parushev, P.: Synthesis of parallel manipulators with linear drive modules.
Mech. Mach. Theory 29(7), 917–932 (1994)
5. Dasgupta, B., Mruthyunjaya, T.S.: The Stewart platform manipulator: a review. Mech.
Mach. Theory 35(1), 15–40 (2000)
6. Li, S., Dai, J.: Topological description of planar mechanism based on Assur rod group
elements. J. Mech. Eng., 8–13 (2011)
7. Li, S., Dai, J.: The composition principle of the metamorphic mechanism based on the
extended Assur rod group. J. Mech. Eng., 22–30 (2010)
8. Ding, H., Huang, Z.: Motion chain topology diagram and automatic generation of feature
description based on loop characteristics. J. Mech. Eng., 40–43 (2007)
9. Han, Y., Ma, L., Yang, T., et al.: Research on mechanism type of parallel robot based on VB
programming. J. Agric. Mach., 139–142 (2007)
10. Liao, M., Liu, A., Shen, H., et al.: Symbol derivation method for azimuth feature set of
parallel mechanism. J. Agric. Mach., 395–404 (2016)
11. Yang, T.: Robotic Mechanism Topology Design. Science Press (2012)
Optimization of Injection Molding for UAV
Rotor Based on Taguchi Method
1 Introduction
The rapid rise of multi-rotor UAV drives the development of drone-related industries.
The drone rotor is the most delicate part, with the highest replacement rate [1]. The
winding forming is one of the most common methods of making drone rotors. However, this method can only produce small batches and cannot achieve high precision, and the cost is relatively high for complicated structures. Though the compression molding method is also used to form fiber-reinforced parts, it is known for high cost and low efficiency. Plastic injection molding is one of the most important methods applied for forming plastic products in industry. The mechanical properties of the materials are not degraded by this method. Moreover, the method is very efficient and suitable for mass production.
Some domestic researchers have also studied fixed-wing forming. Wang Xiadan [2] optimized the fixed wings of carbon fiber-reinforced drones by using orthogonal experiments and a BP neural network. Wang Xiadan, Li Linyang, et al. [3, 4] studied the optimal gate of the UAV fixed-wing injection molding process, which had obvious effects on fixed-wing molding. However, the blades of multi-rotor UAVs have not received enough attention. In this paper, we performed numerical simulations of the drone rotor using Moldflow software. Experiments were performed on combinations of process parameters based on a four-level L16 Taguchi orthogonal array.
Four factors determine the final quality of injected parts: mold design, part design, material, and process parameters such as injection temperature, mold temperature, and injection time. Thousands of experiments would be required if all combinations of them were tested, which would extend the product development cycle. The Taguchi method has advantages for this kind of multi-factor optimization problem: it uses a special design of orthogonal arrays to explore the whole parameter space with a small number of experiments. Optimal process parameters can be found using Taguchi experiments, which greatly reduces the production cost and yields better product quality [6]. In this study, an L16 orthogonal array experiment was conducted to find the optimum levels of the process parameters. Firstly, the total deformation (index Td) of the part was selected as the experiment index, with the deformation caused by shrinkage (index Sd) and by fiber orientation (index Od) as auxiliary analysis indices. An L16 orthogonal array was created for 16 finite element analyses of the rotor blade using five process parameters: melt temperature (factor A), mold temperature (factor B), injection time (factor C), holding pressure (factor D) and holding time (factor E). These correspond to the process parameters of concern in actual production. The ranges of the five process parameters are given in Table 2. When determining the value ranges of the process parameters, the optimal values recommended by the Moldflow material library were considered, and the other process parameters were selected based on experience in the plastic injection molding industry.
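To illustrate how such an L16 experiment is typically analysed, the sketch below (Python/NumPy) computes the mean response of each factor level and ranks the factors by range. The design matrix and deformation values are random placeholders, not the paper's Table 2 or measured results.

```python
import numpy as np

rng = np.random.default_rng(3)
# levels[i, f] = level index (0-3) of factor f in run i; placeholder design,
# not the actual L16(4^5) orthogonal array used in the paper.
levels = rng.integers(0, 4, size=(16, 5))
factors = ["melt temp (A)", "mold temp (B)", "inj. time (C)",
           "hold press. (D)", "hold time (E)"]
# Total deformation Td measured in each run (placeholder values, mm).
td = rng.uniform(0.5, 0.8, size=16)

def main_effects(levels, response):
    """Mean response per level of each factor, plus the range (max - min),
    which is commonly used to rank factor influence."""
    means = np.full((levels.shape[1], 4), np.nan)
    for f in range(levels.shape[1]):
        for lv in range(4):
            mask = levels[:, f] == lv
            if mask.any():
                means[f, lv] = response[mask].mean()
    ranges = np.nanmax(means, axis=1) - np.nanmin(means, axis=1)
    return means, ranges

means, ranges = main_effects(levels, td)
for f, name in enumerate(factors):
    best = int(np.nanargmin(means[f]))       # level minimizing deformation
    print(f"{name}: best level {best}, range {ranges[f]:.3f}")
```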
Fig. 3. Deformation of the parts under the optimal process. Fig. 4. Deformation comparison chart
6 Conclusion
The injection molding method for UAV rotor blade molding is put forward to meet the
manufacturing requirements. The results of the experiments show that the main factors
affecting the deformation are uneven shrinkage and fiber orientation. The melt tem-
perature has the greatest influence on the deformation of the rotor blade, followed by
the injection time and holding pressure, and the effect of mold temperature and holding
time is the smallest. The optimal molding process parameters of the rotor are as
follows: melt temperature 300 °C, mold temperature 115 °C, injection time 0.5 s,
holding pressure 110% of filling pressure, and holding time 8 s. Under the optimal
parameters combination, the maximum warping deformation of the rotor blade is
0.517 mm, which is 25.7% lower than the initial results of simulation. It is proved that
injection molding is suitable for manufacturing UAV rotor blades.
References
1. Han, X., Liu, J., Wang, X., Lu, B., Yang, J., Liao, Y., Li, F.: Design and application of carbon
fiber composites on general aviation aircraft. Dual Use Technol. Prod. (07), 8–11 (2015)
2. Wang, X.: Injection molding and process parameters optimization for carbon fiber reinforced unmanned aerial vehicle fixed wing. Xi’an University of Science and Technology (2018)
3. Yu, Y., Wang, X., Li, L., Lu, Y.: Optimization design for gate of UAV fixed-wing based on
MPI. Plast. Sci. Technol. 45(12), 87–91 (2017)
4. Yu, Y., Wang, X.: Optimization of injection molding process for fixed wing of unmanned
aerial vehicle based on BP neural network. Plast. Sci. Technol. 45(09), 74–78 (2017)
5. Amiruddin, H., Mahmood, W.M.F.W., Abdullah, S., Mansor, M.R.A., Mamat, R., Alias, A.:
Application of Taguchi method in optimization of design parameter for turbocharger vaned
diffuser. Ind. Lubr. Tribol. 69(3), 409–413 (2017)
6. Xu, C., Zhou, J.: Mold design and process optimization of automobile clip injection molding.
Plastics (01), 92−96+101 (2019)
7. Anugraha, R.A., Wiraditya, M.Y., Iqbal, M., Darmawan, N.M.: Application of Taguchi
method for optimization of parameter in improving soybean cracking process on dry process
of tempeh production. In: IOP Conference Series: Materials Science and Engineering, vol.
528, no. 1 (2019)
Assembly Sequence Optimization Based
on Improved PSO Algorithm
1 Introduction
Assembly is a crucial link in the process of product manufacturing, accounting for 20%
of the total manufacturing cost and 50% of the total production cycle [1]. Assembly
sequence planning (ASP) is to obtain the optimal assembly sequence of products under
certain constraints. It is a typical NP-hard problem. Reasonable optimization of
assembly sequence can not only reduce the accumulation time of parts and improve the
production line balance, but also have important significance for reducing product cost
and improving production efficiency.
With the increasing complexity of products, the number of parts that need to be assembled increases, and the solution space suffers from combinatorial explosion [2].
Therefore, intelligent optimization algorithm has been widely used in solving assembly
sequence optimization. Xie et al. [3] applied the ant colony algorithm to solve the ASP problem. Zeng et al. [4] proposed an improved firefly-algorithm ASP method and constructed the objective function with the number of tool and part interferences in the assembly sequence as the evaluation index. Zhang [5] used an immune algorithm to overcome the premature convergence of particle swarm optimization (PSO) when solving the ASP problem. Somay’e [6] studied the solution space distribution of the ASP problem and proposed a breakout local search (BLS) algorithm based on the approximately uniform distribution of near-optimal solutions in the solution space.
In existing swarm intelligence algorithms, the diversity of the population is poor, and the performance depends strongly on the quality and size of the initial population. In this paper, the interference matrix is established under the condition that the geometric constraints of the product are satisfied. Combined with co-evolution theory, a population initialization method based on Feigenbaum iteration is proposed to improve the quality of the initial population. The existing particle swarm optimization algorithm easily falls into local optima when solving the ASP problem, and its convergence speed is not ideal. To solve this problem, a new inertia weight adjustment function based on the sigmoid function is proposed, which improves the convergence speed while ensuring convergence accuracy and quickly generates the optimal assembly sequence. The improved PSO (IPSO) algorithm is validated by an assembly example, and the experimental results are compared with the basic PSO algorithm.
The remainder of this paper is organized as follows. Section 2 establishes the mathematical model of assembly sequence optimization. Section 3 describes in detail the improvement of the PSO algorithm and its performance verification. Section 4 verifies the practicability and superiority of the improved algorithm through an example. Finally, the conclusions are given in Sect. 5.
2 Problem Statement
2.1 Constraints
The interference matrix is used to describe the geometric constraints of parts in all
assembly directions, so the interference matrix can be expressed as:
IM_k =
[ I_{11k}  I_{12k}  ...  I_{1nk} ]
[ I_{21k}  I_{22k}  ...  I_{2nk} ]
[   ...      ...    ...    ...   ]
[ I_{n1k}  I_{n2k}  ...  I_{nnk} ]    (1)
where k ∈ {x, y, z} denotes the assembly direction and I_ijk is a binary variable: if part p_i interferes with part p_j when assembled in direction k, then I_ijk = 1; otherwise I_ijk = 0. In particular, I_iik = 0.
The main function of the interference matrix is to determine the next part p_i to assemble and its assembly direction. Assuming that m parts have already been assembled, the temporary sub-assembly can be expressed as X_sub = [p_1, p_2, …, p_m]; whether part p_i can be assembled smoothly depends on the value of I_ik, which is calculated by formula (2):

$$I_{ik} = I_{i1k} \vee I_{i2k} \vee \cdots \vee I_{imk} \qquad (2)$$
where ∨ is the Boolean OR operation. If I_ik = 0, the part does not interfere with any assembled part in the assembly direction; otherwise, the part interferes with at least one part in that direction.
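As a minimal illustration (not the authors' implementation), the feasibility check of formula (2) can be written as a Boolean OR over the parts already assembled; the interference matrix values and part indices below are hypothetical examples:

```python
import numpy as np

def can_assemble(IM_k, candidate, assembled):
    """Check whether part `candidate` can be installed along direction k.

    IM_k      : (n, n) 0/1 interference matrix for one assembly direction k,
                IM_k[i, j] = 1 if part i interferes with part j in direction k.
    candidate : index of the part to be installed next.
    assembled : indices of the parts already in the sub-assembly X_sub.

    Returns True when I_ik = OR_j IM_k[candidate, j] equals 0 (formula (2)).
    """
    I_ik = 0
    for j in assembled:                     # Boolean OR over assembled parts
        I_ik = I_ik or IM_k[candidate, j]
    return I_ik == 0

# Hypothetical 3-part example: part 2 interferes with part 0 in direction k.
IM_k = np.array([[0, 0, 0],
                 [0, 0, 0],
                 [1, 0, 0]])
print(can_assemble(IM_k, candidate=2, assembled=[0, 1]))  # False
print(can_assemble(IM_k, candidate=1, assembled=[0]))     # True
```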
The connection matrix CM = (C_ij)_{n×n} and the support matrix SM = (S_ij)_{n×n} are used to express the assembly stability relationship. The elements C_ij and S_ij represent the connection type and the support relationship between p_i and p_j, respectively:
$$C_{ij} = \begin{cases} 2, & \text{stable connection} \\ 1, & \text{contact connection} \\ 0, & \text{disconnection} \end{cases} \qquad S_{ij} = \begin{cases} 1, & p_i \text{ can support } p_j \text{ stably} \\ 0, & p_i \text{ cannot support } p_j \text{ stably} \end{cases}$$
$$f_1 = \sum_{i=1}^{n} I_{ik} \qquad (3)$$
If the assembly directions of p_{l,i+1} and p_{l,i} in the sequence are the same, the assembly does not need to change direction and Q_{l,i} = 0; otherwise Q_{l,i} = 1, as expressed in formula (5):

$$Q_{l,i} = \begin{cases} 0, & d_{l,i+1} = d_{l,i} \\ 1, & d_{l,i+1} \neq d_{l,i} \end{cases} \qquad (5)$$
The number of times that the sequence needs to change direction to complete the assembly is then:
$$f_2 = \sum_{i=1}^{n} Q_{l,i} \qquad (6)$$
In addition, the rules for judging whether an assembly operation is stable are as follows: (1) C_ij = 2 for some j ∈ [1, i−1] ⇒ stable; (2) C_ij = 0 ⇒ unstable; (3) stable if C_ij = 0 or C_ij = 1 when S_ij = 1, j ∈ [1, i−1]; (4) unstable if C_ij = 0 or C_ij = 1 when S_ij = 0. f_3 is the number of unstable operations among all assembly operations.
The objective of the assembly sequence optimization problem studied in this paper is to minimize the number of assembly direction changes while maximizing stability and the number of feasible assembly operations under geometric constraints, i.e. the weighted fitness

$$f = k_1 f_1 + k_2 f_2 + k_3 f_3$$

is minimized, where k_1, k_2 and k_3 are the weights of the three objective functions, each ranging from 0 to 1 with k_1 + k_2 + k_3 = 1. To balance the importance of each factor, k_1 = 0.35, k_2 = 0.25 and k_3 = 0.4 are taken.
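A compact sketch of the weighted fitness evaluation follows; it assumes the weighted-sum form reconstructed above and leaves the counting of f1, f2 and f3 for a concrete sequence out of scope:

```python
def fitness(f1, f2, f3, k1=0.35, k2=0.25, k3=0.4):
    """Weighted ASP objective: f1 counts interfering (infeasible) operations,
    f2 counts assembly-direction changes, f3 counts unstable operations;
    smaller is better."""
    assert abs(k1 + k2 + k3 - 1.0) < 1e-9   # weights must sum to one
    return k1 * f1 + k2 * f2 + k3 * f3

# Hypothetical sequence with no interference, 2 direction changes, 1 unstable step.
print(fitness(f1=0, f2=2, f3=1))  # 0.9
```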
3 Algorithmic Design
Inertia weight is used to control the search behavior of particles in the search space. In order to improve the accuracy of the search and reduce the possibility of PSO falling into local optima, the method of adjusting the inertia weight is improved.
Consider the sigmoid function:

$$f(v) = \frac{1}{1 + e^{-av}} \qquad (11)$$
For the sigmoid function in formula (11), when av < −10, f(v) ≈ 0; conversely, when av > 10, f(v) ≈ 1. Based on the above analysis, an inertia weight adjustment method is proposed, defined as formula (12):

$$\omega(t) = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{1 + e^{-0.08\left(t - \frac{1}{2} t_{\max}\right)}} \qquad (12)$$
where ω_max and ω_min represent the maximum and minimum inertia weight, respectively, t is the current iteration number and t_max is the maximum number of iterations.
For the inertia weight adjustment method of formula (12), with ω_max = 0.9 and ω_min = 0.4, the inertia weight is close to 0.9 in the initial stage of evolution and gradually approaches 0.4 in the final stage. In the intermediate stage, the inertia weights calculated by formula (12) remain within (0.4, 0.9) throughout the evolution process, which is consistent with the conclusion in [7] that PSO performs better when the inertia weight lies in the range [0.4, 0.95].
Figure 1 compares several inertia weight adjustment curves, in which ω0, ω1, ω2, ω3 and ω4 denote the inertia weight decreasing linearly with the number of iterations, decreasing as a convex function, decreasing as a concave function, decreasing exponentially, and decreasing with the proposed method, respectively. Comparing the curves, in the initial stage of particle evolution the proposed curve keeps a larger value, which helps PSO avoid falling into local optima and lets it search the whole space, favoring the selection of more suitable seeds. In the later stage of particle evolution, by contrast, the weight is kept in a small numerical range, which guarantees the convergence accuracy of the algorithm.
Fig. 1. Comparison of several inertia weight adjustment curves ω0–ω4 (ω(t) versus iteration number t)
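The sigmoid-based schedule of formula (12) can be sketched as below, alongside the conventional linear decrease for comparison; since formula (12) was reconstructed from the text, the midpoint t_max/2 should be treated as an assumption:

```python
import numpy as np

def omega_sigmoid(t, t_max, w_max=0.9, w_min=0.4):
    """Sigmoid-based inertia weight: close to w_max early, close to w_min late."""
    return w_max - (w_max - w_min) / (1.0 + np.exp(-0.08 * (t - 0.5 * t_max)))

def omega_linear(t, t_max, w_max=0.9, w_min=0.4):
    """Conventional linearly decreasing inertia weight, for comparison."""
    return w_max - (w_max - w_min) * t / t_max

t = np.arange(0, 1001)
print(omega_sigmoid(t[[0, 500, 1000]], 1000))  # approx. [0.9, 0.65, 0.4]
print(omega_linear(t[[0, 500, 1000]], 1000))   # [0.9, 0.65, 0.4]
```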
The acceleration factors are controlled such that c_1 decreases linearly with time and c_2 increases linearly:

$$c_1(t) = \left(c_{1,\min} - c_{1,\max}\right)\frac{t}{t_{\max}} + c_{1,\max}, \qquad c_2(t) = \left(c_{2,\max} - c_{2,\min}\right)\frac{t}{t_{\max}} + c_{2,\min} \qquad (13)$$
In order to verify the superiority of the improved PSO algorithm, the following two test functions are optimized using both the basic PSO and the improved PSO algorithms.
(1) Rosenbrock function

$$f_1(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2\right], \qquad -2.048 \le x_i \le 2.048 \qquad (14)$$

(2) Griewank function

$$f_2(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1, \qquad -600 \le x_i \le 600 \qquad (15)$$
Both test functions have the global optimum min(f) = f(0, …, 0) = 0, and the more variables there are, the harder it is for PSO to converge. The parameters of the standard PSO are: learning factor 1.5, inertia weight 0.8, variable dimension 10, particle number 30, and maximum iteration number 1000. For a fair comparison, only the inertia weight of the improved PSO is set according to formula (12); the other parameters are the same as for the standard PSO. The optimization results are shown in Table 1, and Fig. 2 shows the iterative optimization processes.
Fig. 2. Iterative process diagram for Rosenbrock function (top) and Griewangk (bottom)
function optimization by PSO (left) and IPSO (right)
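For readers who wish to reproduce this comparison, a minimal Python sketch of the benchmark is given below. It is an illustrative re-implementation rather than the authors' MATLAB code: a single learning factor c = 1.5 is used for both cognitive and social terms as in the parameter list above, positions are simply clipped to the search bounds, and the time-varying acceleration factors of formula (13) are not included.

```python
import numpy as np

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def pso(func, bound, dim=10, n_particles=30, t_max=1000, c=1.5,
        inertia="fixed", seed=0):
    """Basic PSO (fixed inertia 0.8) vs. IPSO (sigmoid-decreasing inertia, formula (12))."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([func(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for t in range(t_max):
        if inertia == "fixed":
            w = 0.8
        else:  # sigmoid schedule of formula (12)
            w = 0.9 - 0.5 / (1 + np.exp(-0.08 * (t - 0.5 * t_max)))
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c * r1 * (pbest - x) + c * r2 * (gbest - x)
        x = np.clip(x + v, -bound, bound)
        f = np.array([func(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return pbest_f.min()

for name, func, bound in [("Rosenbrock", rosenbrock, 2.048),
                          ("Griewank", griewank, 600.0)]:
    print(name, "PSO:", pso(func, bound, inertia="fixed"),
          "IPSO:", pso(func, bound, inertia="sigmoid"))
```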
4 Examples Verification
4.1 Experimental Settings
The practicability of the proposed algorithm is verified by an example of a seat assembly. The assembly structure, consisting of 12 parts, is shown in Fig. 3.
The parameters of the algorithm are as follows: the learning factor is 1.5, the inertia weight is 0.8, the dimension of the variables is 10, the number of particles is 30, and the maximum number of iterations is 200. The algorithm is implemented in MATLAB R2014a, and the simulations are run on a 64-bit Windows 7 computer with an Intel(R) Core(TM) i5-4460 CPU @ 3.20 GHz and 8.00 GB of memory.
[Figure: average fitness of the IPSO and PSO algorithms versus the number of iterations]
It can be seen that the improved PSO algorithm converges at about the 68th generation with an average fitness value of 1.3; the corresponding assembly sequence is:
p1 → p11 → p5 → p10 → p4 → p6 → p2 → p8 → p3 → p7 → p9 → p12
By contrast, the basic PSO algorithm converges to a solution of 1.41 only in the 110th generation. The improved PSO algorithm converges faster and with higher accuracy, while the basic PSO algorithm tends to fall into local convergence. Therefore, the solution quality, performance and efficiency of the improved PSO algorithm are significantly better than those of the basic PSO algorithm.
The analysis shows that the above assembly sequence planning meets the assembly requirements and the needs of engineering applications.
5 Conclusions
References
1. Mrashid, M.F.F., Hutabarat, W., Tiwari, A.: A review on assembly sequence planning and
assembly line balancing optimisation using soft computing approaches. Int. J. Adv. Manuf.
Technol. 59(1–4), 335–349 (2012)
2. Wang, K., Wang, Y.: How AI affects the future predictive maintenance: a primer of deep
learning. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T. (eds.) Advanced Manufacturing
and Automation VII, IWAMA 2017. Lecture Notes in Electrical Engineering, vol. 451,
pp. 1–9. Springer, Singapore (2018)
3. Xie, L., Fu, Y.L., Ma, Y.L.: Assembly sequence generation strategy based on ant colony
algorithms. J. Harbin Univ. Technol. 38(2), 180–183 (2006)
4. Zeng, B., Li, M.F., Zhang, Y.: Assembly sequence planning based on improved firefly
algorithms methods. Comput. Integr. Manuf. Syst. 20(4), 799–806 (2014)
5. Zhang, H.Y., Liu, H.J., Li, L.Y.: Research on a kind of assembly sequence planning based on
immune algorithm and particle swarm optimization algorithm. Int. J. Adv. Manuf. Technol.
71(5), 795–808 (2014)
6. Ghandi, S., Masehian, E.: Breakout local search (BLS) method for solving the assembly
sequence planning problem. Eng. Appl. Artif. Intell. 39(3), 245–266 (2015)
7. Wang, T., Li, Q.Q.: Parallel evolutionary algorithm based on spatial contraction. China Eng.
Sci. 5(3), 57–61 (2003)
Influence of Laser Scan Speed on the Relative
Density and Tensile Properties of 18Ni
Maraging Steel Grade 300
Abstract. Laser powder bed fusion (LPBF) enables tool makers to design tools
with complex geometries and internal features. To exploit the possibilities
provided by LPBF it is necessary to understand how the processing parameters
influence the properties of the end-product. This study investigates the effect of
laser scan speed on relative density and tensile properties of 18Ni300 in the as-
built condition. The results show that there is a relatively wide processing
window which gives satisfactory relative density and tensile properties. Fur-
thermore, it was shown that the scan speed which produced the highest relative
density in this study did not provide satisfactory tensile properties, indicating
that processing parameters cannot be established based on relative density
measurements alone.
1 Introduction
18Ni maraging steel grade 300 (18Ni300) is a precipitation hardening tool steel with
excellent mechanical properties in the aged state, which is easy to machine in the
solution annealed state [1]. The high strength and hardness of the material makes it
suited for tooling in applications such as aluminium casting, plastic injection molding,
and extrusion applications [2, 3]. Tooling components are excellent candidates for laser
powder bed fusion (LPBF) processing, since the geometry is often complex, and can
benefit from possibilities enabled by LPBF, such as conformal cooling channels and
vent slots for casting applications [4–6].
In order to efficiently process the material with LPBF it is necessary to identify
processing parameters which results in high density microstructures and satisfactory
mechanical properties. Several authors have investigated the effect of processing
parameters on the density of 18Ni300 processed by LPBF [7–10], focusing on the
effect of laser power, P, laser scan speed, v, hatch spacing, h, and layer thickness, t, on
the relative density of the end result. A commonly used parameter relating the laser parameters to each other is the volumetric energy density, E_d = P/(vht). The validity of using volumetric energy density to explain the response of LPBF materials to changes in the processing parameters has been questioned, however [7, 11]. Suzuki et al. suggest using P·v^(−1/2) to correlate laser parameters with material density [7]. This
model does not account for hatch spacing, laser spot diameter, and layer thickness,
however. It is not within the scope of this study to derive a model for accurate density
prediction based on laser parameters, and as such only the effect of laser scan speed on
the relative density and tensile properties of 18Ni300 will be investigated.
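As a simple illustration of these parameter relationships, the sketch below evaluates the volumetric energy density E_d = P/(vht) and the P·v^(−1/2) measure of Suzuki et al. for the laser parameters used later in this study (values taken from the Method section):

```python
def energy_density(P, v, h, t):
    """Volumetric energy density E_d = P / (v * h * t) in J/mm^3."""
    return P / (v * h * t)

P = 180.0        # laser power, W
h = 0.105        # hatch spacing, mm
t = 0.030        # layer thickness, mm
for v in range(600, 750, 25):              # scan speed, mm/s
    Ed = energy_density(P, v, h, t)
    suzuki = P / v ** 0.5                  # P * v^(-1/2) correlation [7]
    print(f"v = {v} mm/s: E_d = {Ed:.1f} J/mm^3, P/sqrt(v) = {suzuki:.2f}")
```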
2 Method
Twelve flat tensile specimens were prepared as blocks in a Concept Laser M2 Cusing (installed 2009) LPBF machine, and then machined to dog-bone specimens with a cross-section of 6 × 6 mm², a reduced section length of 32 mm, and radii of 6 mm. The tensile specimens were processed with a laser power of 180 W, hatch spacing of 105 µm, layer thickness of 30 µm, and scan velocities ranging from 600 mm/s to 725 mm/s in 25 mm/s increments. The scan strategy applied is the 'island' scan strategy by Concept Laser, with 5 × 5 mm² islands with an angular shift of 45° and an XY shift of 1 mm. The tensile specimens were built parallel to the build direction (Z-oriented). In addition to the tensile specimens, cubes with a volume of 10 × 10 × 10 mm³ were prepared for density analysis. The 18Ni300 powder feedstock was supplied by Sandvik Osprey. The chemical composition, as supplied by the material vendor, is listed in Table 1.
The relative density was determined by investigating the cross section of the cubes
with optical microscopy and image manipulation software (similar to the process
described by the current authors in a previous work [11]). The tensile specimens were
tested in an MTS 809 Axial Test System with a 100 kN load cell at room temperature.
The displacement rate was 1 mm/min for all specimens.
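A rough outline of such an image-based porosity measurement is sketched below; it is only illustrative and is not the actual pipeline used, and it assumes a grayscale contrast image in which pores appear darker than the bulk material (scikit-image is assumed to be available):

```python
import numpy as np
from skimage import io, filters   # scikit-image assumed available

def relative_density(image_path):
    """Estimate relative density from a polished cross-section image:
    pixels darker than an Otsu threshold are counted as pores.
    For nearly fully dense samples a calibrated fixed threshold may be
    more robust than Otsu's automatic choice."""
    img = io.imread(image_path, as_gray=True)
    thresh = filters.threshold_otsu(img)
    pore_fraction = np.mean(img < thresh)      # area fraction of dark pixels
    return 100.0 * (1.0 - pore_fraction)       # relative density in %

# Hypothetical usage:
# print(relative_density("cube_600mms_xy.png"))
```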
Density
Figures 1 and 2 show contrast images of the polished cross sections of cubes processed at 600 mm/s and 725 mm/s, respectively. In both images spherical pores are visible, and in Fig. 1 there are no signs of cracks or large irregularly shaped pores, as observed by other authors when the processing parameters are sub-optimal [7, 9]. Furthermore, in Fig. 1 the pores appear to be arbitrarily distributed across the cross section, while at the higher scan speed in Fig. 2 the pores appear to follow a systematic pattern.
Fig. 1. Contrast image of the XY cross-section of a cube processed with v = 600 mm/s. The width and height of the cross-section is 10 × 10 mm².
Fig. 2. Contrast image of the XY cross-section of a cube processed with v = 725 mm/s. The width and height of the cross-section is 10 × 10 mm².
Figure 3 shows the measured relative density with respect to scan speed. As can be
seen, the highest relative density is measured for the sample processed at 725 mm/s,
and the lowest relative density is for the sample processed at 625 mm/s. The lowest
measured relative density is above 99.9%, however, which indicates that there is a large
processing window which results in satisfactory relative density. There appears to be a
steady increase in relative density with the increase of scan speed, except for the scan
speed of 625 mm/s, which should be considered an outlier. Based purely on the
measured relative density, a scan speed of 725 mm/s appears to be favorable.
It is interesting to note that the periodic pattern in the top right, and bottom right,
corners of Fig. 2 appears to be along straight lines with a length of 5 mm at a 45° angle
to the perimeter of the cube. This corresponds to the perimeter width, length, and angle
of an ‘island’ in the scan strategy. During laser melting, the core of each island is
scanned first, going from one corner to the opposite corner in a zig-zag pattern. Once
the core is melted, the contour is scanned as a continuous line along the perimeter of the
island. Based on the observed porosity in Fig. 2 it appears that as the scan speed
increases, and thus the energy input is reduced, the material fails to completely melt
and bond between the core and contour of the islands. A possible reason for this can be
that the energy input is not sufficiently high to give a wide enough melt pool to form a
dense material in this region. Within the islands the density appears to be higher in
Fig. 2 however. In a future work experiments can be conducted to verify this, either by
changing the hatch spacing between the contour and the core, or by modifying the laser
power or scan speed in the contour scan. It appears that the higher scan speed results in
a dense core, but porous perimeter, which leads to unsatisfactory tensile properties, as
will be demonstrated in the next section.
Fig. 3. Measured relative density (%) with respect to scan speed (mm/s).
Tensile Properties
The tensile properties of 18Ni300 with respect to scan speed are shown in Fig. 4. The
tensile properties do not vary significantly as the scan speed increases from 600 mm/s
to 700 mm/s, but at 725 mm/s the tensile properties drop. If the processing parameters
are evaluated on the relative density alone, a scan speed of 725 mm/s appears to give
the best results. For lower scan speeds, when the microstructure is decorated with
arbitrarily spaced pores, the tensile properties are not significantly influenced by a small
change in relative density. The porosity observed at the perimeters of the scanned
islands significantly reduces the tensile properties, however. This is likely due to a
concentration of stress in the porous region as the specimen is being loaded, leading to premature failure.
Fig. 4. Yield strength, ultimate tensile strength (MPa), and elongation at break (%) with respect to scan speed (mm/s).
When the yield strength is plotted against the relative density in Fig. 5 there is no
obvious correlation between relative density and tensile properties. If anything, it
appears that the yield strength drops as the relative density increases.
Fig. 5. Yield strength (MPa) plotted against relative density (%).
4 Conclusions
This work investigates the effect of scan speed on the relative density and tensile
properties of 18Ni300 processed by LPBF. The density results show that there is a wide
processing window which results in relative density of above 99.9%. Furthermore, the
tensile results indicate that an increase in relative density is not necessarily accom-
panied by an increase in tensile properties. On the contrary, the scan speed which
resulted in the highest relative density was accompanied by significantly lower yield
strength, ultimate tensile strength, and elongation at break. Both relative density and
tensile properties were satisfactory for scan speeds between 600 mm/s and 700 mm/s
with the laser parameters used in this study.
Acknowledgements. The authors would like to thank SINTEF Industry, Oslo, Norway for
performing the relative density analysis. This work is funded in part by the Norwegian Research
Council through grant number 248243, and by the TROJAM project in the INTERREG A/ENI
program.
References
1. Fortunato, A., Lulaj, A., Melkote, S., Liverani, E., Ascari, A., Umbrello, D.: Milling of
maraging steel components produced by selective laser melting. Int. J. Adv. Manuf. Technol.
94(5–8), 1895–1902 (2017)
2. Pereira, M.F.V.T., Williams, M., Du Preez, W.B.: Application of laser additive manufac-
turing to produce dies for aluminium high pressure die-casting: general article. S. Afr. J. Ind.
Eng. 23(2), 147–158 (2012)
3. Ahn, D.-G.: Applications of laser assisted metal rapid tooling process to manufacture of
molding & forming tools — state of the art. Int. J. Precis. Eng. Manuf. 12(5), 925–938
(2011)
4. Hovig, E.W., Brøtan, V., Sørby, K.: Additive manufacturing for enhanced cooling in moulds
for casting. In: 6th International Workshop of Advanced Manufacturing and Automation.
Atlantis Press (2016)
5. Brøtan, V., Berg, O.Å., Sørby, K.: Additive manufacturing for enhanced performance of
molds. Procedia CIRP 54, 186–190 (2016)
6. Hovig, E.W., Sørby, K., Drønen, P.E.: Metal penetration in additively manufactured venting
slots for low-pressure die casting. In: Wang, K., Wang, Y., Strandhagen, J.O., Yu, T. (eds.)
Advanced Manufacturing and Automation VII, pp. 457–468. Springer, Singapore (2018)
7. Suzuki, A., Nishida, R., Takata, N., Kobashi, M., Kato, M.: Design of laser parameters for
selectively laser melted maraging steel based on deposited energy density. Add. Manuf. 28,
160–168 (2019)
8. Bai, Y., Yang, Y., Wang, D., Zhang, M.: Influence mechanism of parameters process and
mechanical properties evolution mechanism of maraging steel 300 by selective laser melting.
Mater. Sci. Eng. A 703, 116–123 (2017)
9. Casalino, G., Campanelli, S.L., Contuzzi, N., Ludovico, A.D.: Experimental investigation
and statistical optimisation of the selective laser melting process of a maraging steel. Opt.
Laser Technol. 65, 151–158 (2015)
10. Kempen, K., Yasa, E., Thijs, L., Kruth, J.P., Van Humbeeck, J.: Microstructure and
mechanical properties of Selective Laser Melted 18Ni-300 steel. Phys. Procedia 12, 255–263
(2011)
11. Hovig, E.W., Holm, H.D., Sørby, K.: Effect of processing parameters on the relative density
of AlSi10Mg processed by laser powder bed fusion (2019)
Application of Automotive Rear Axle Assembly
1 Introduction
With the integration and application of new generation information technologies (such
as Cloud Computing, Internet of Things, Big Data, Mobile Internet, Artificial Intelli-
gence, etc.) and manufacturing, countries around the world have successively proposed
their own manufacturing development strategies at the national level, representative
examples are Industry 4.0, Industrial Internet, CPS-based manufacturing, Made in
China 2025 and Internet + Manufacturing, Service Oriented Manufacturing or Service
Manufacturing [1]. As a benchmark in the manufacturing industry, the automotive
industry has a tremendous impact on economic development and social progress.
Automobile assembly is the last stage of automobile manufacturing. The assembly
process of automotive rear axle is an important part of the assembly process of the
whole automobile. Therefore, the assembly quality of the rear axle directly affects the
product quality of the automobile. In the assembly process of automotive rear axle,
many assembly parts are involved, and the assembly process is complicated. In recent
years, with the development of robot technology and new-generation information technology, the use of robots instead of workers in some factories has reduced the labor intensity of workers. However, the large amount of data in the factory has not been effectively utilized, and real-time monitoring of the assembly process and real-time adjustment of assembly tasks have not been implemented.
The concept of digital twin was first proposed by Professor Grieves in 2003 and has
been rapidly developed in recent years. Digital twin is a multi-physical, multi-scale,
multi-probability simulation process that uses historical data and real-time updated data
from sensors to characterize and reflect the full life cycle of physical objects. The
virtual assembly line is established by the 3D modeling method, and the bidirectional
real mapping and real-time interaction between the real assembly line and the virtual
assembly line are realized by digital twin technology, thereby realizing real-time
monitoring of the assembly process. According to the current order, material, and
machine running state, the multi-objective particle swarm optimization algorithm is
used to predict the number of rear axles currently required to be assembled, so as to
achieve optimal production. The key research content of this paper is to realize the real-
time monitoring of the assembly process of automotive rear axle based on digital twin
technology, and provide the basic content for the subsequent in-depth study of the rear
axle assembly line.
Real-time simulation of the assembly process of the automobile rear axle assembly line based on digital twin technology mainly requires breakthroughs in three technical points: lightweight processing of the geometric model of the rear axle assembly line, data acquisition from heterogeneous equipment during assembly execution, and interaction between physical space and virtual space.
The geometric model of the automobile rear axle assembly line has a large number of geometric vertices, patches, and hidden bodies, which makes it difficult to use directly in the development of application systems; it is therefore necessary to lighten the geometric model.
The real-time simulation of the automotive rear axle assembly process by the virtual model requires the model to be driven by data in real time. Therefore, the collection of heterogeneous data becomes a key technology.
The main contribution of this paper is to provide a method for real-time simulation in which the virtual model is driven by the actual assembly line. Therefore, the interaction between physical space and virtual space is the most important technical point (Fig. 1).
Step 3: Correct the reduced UV map according to the position of the UV map before patch reduction, to ensure that the patches are consistent before and after reduction.
Step 4: Finally, the model is exported in fbx format in 3dmax software. The export
setting parameters include geometry parameter setting, scale factor parameter setting,
fbx file format setting, animation setting, and lighting setting [2].
…… …… ……
The real-time simulation system for the automobile rear axle assembly process based on digital twin technology studied in this paper has been effectively applied to the rear axles of the E2XX, A2XX and other models at Shanghai Huayu-intelligent Equipment Technology Co., Ltd. Through the lightweight processing of the assembly line and product models, the collection of heterogeneous data, and the interaction between the physical space and the virtual space, the assembly process planning and simulation of the rear axle in digital space is realized, and the simulation process can be released to wearable devices to guide workers in the assembly of the rear axle. In addition, the simulation of the assembly process is associated with the actual process execution in real time, so that real-time monitoring of the assembly process is realized and the process execution can be optimized through simulation; in particular, the production orders are optimized to form an effective closed-loop assembly.
4 Conclusions
In this paper, the digital twin technology of the rear axle assembly line of the auto-
mobile is studied. The main achievement is to complete the virtual and real syn-
chronous simulation of the assembly process of automotive rear axle, and provide a
way to optimize the actual assembly process for the simulation analysis process.
Through the lightweight processing of the assembly line and product geometric models, the real-time collection of heterogeneous data in the workshop, and research on the interaction mechanism between physical space and virtual space, the basic structure of the digital twin system of the automobile rear axle assembly line is constructed, and real-time monitoring of the entire assembly process is achieved, which lays a foundation for a more intelligent assembly line in the future.
References
1. Tao, F., Zhang, M., Cheng, J.: Digital twin workshop: a new paradigm for future
workshop. Comput. Integr. Manuf. Syst. 23(01), 1–9 (2017)
2. Zhang, X.: Design and implementation of workshop management and control system based
on digital twins. Zhengzhou University (2018)
3. Zhang, P.: Research of Digital Twin Based Assembly Process Planning and Simulation of
General Aircraft Product. Hebei University of Science and Technology (2018)
4. Tao, F., Cheng, J., Cheng, Y.: SDMSim: a manufacturing service supply-demand matching
simulator under cloud environment. Robot. Comput. Integr. Manuf. 45, 34–46 (2017)
5. Zhang, J., Gao, L., Qin, W.: Big-data-driven operation analysis and decision-making
methodology in intelligent workshop. Comput. Integr. Manuf. Syst. 22(05), 1220–1228
(2016)
6. Armendia, M., Cugnon, F., Berglind, L., Ozturk, E., Gil, G., Selmi, J.: Evaluation of machine
tool digital twin for machining operations in industrial environment. Procedia CIRP 82, 231–
236 (2019)
Improvement of Hot Air Drying on Quality
of Xiaocaoba Gastrodia Elata in China
1 Introduction
Gastrodia elata is a perennial herb well known for its treatment effect on dizziness, headache, migraine, infantile convulsion, limb spasm, wind-cold dampness and neurasthenia [1], and has been recorded for medicinal purposes in China for more than 1000 years. In addition, many experiments have shown that it also has certain antioxidant and hypoglycemic effects, and effects against vascular headache and concussion sequelae [2, 3].
In recent years, the market demand for gastrodia elata has gradually increased.
However, wild gastrodia elata has been near extinction due to excessive exploitation. So
artificial cultivation in imitation of the wild environment has emerged on a large scale.
Zhaotong city is one of the main production areas in China: the planting area of gastrodia elata increased from 3400 ha in 2012 to 5367 ha in 2017, and the output value increased from 300 million Yuan in 2011 to 3.97 billion Yuan in 2017 [4–6]. Owing to the special physical geography and climate of XiaoCaoba, such as an average altitude of 1710 m, average annual sunshine of 927.3 h, an annual average temperature of 9.8 °C, and 47% forest coverage, gastrodia elata from XiaoCaoba is well known for its excellent quality [7].
Fresh gastrodia elata is difficult to store for a long time and spoils easily, so drying is the most critical step for preserving its quality during storage. Our team found through field research that coal-powered hot air drying is a widely used method in Xiaocaoba. However, with this method it is difficult to control and adjust the temperature, and heating is uneven at different positions in the local drying room. If the temperature in the drying room is too low, gastrodia elata is prone to decompose; if the temperature is too high, the surface of gastrodia elata develops a large number of folds and the effective components are partially destroyed, resulting in poor quality. In addition, this method needs a lot of manpower and space, and easily results in sulfur residues exceeding the standard.
There are many Chinese studies on the drying of gastrodia elata; the drying methods mainly include hot air drying, wind drying, sunlight drying, oven drying, microwave drying, vacuum drying, vacuum freeze drying, infrared drying and various combined techniques [8–12]. Liu, Wang, Tang et al. compared five methods, including wind drying, sunlight drying, oven drying, vacuum drying and vacuum freeze drying, for drying gastrodia elata. The experimental results showed that the best method was vacuum drying at a drying temperature of 52–58 °C, followed by vacuum freeze drying and sunlight drying [13]. Ji, Ning, Zhang et al. compared sunlight drying, hot air drying, microwave drying, infrared drying, and combined hot air and microwave drying. Based on a comprehensive analysis of the surface appearance, content of active ingredients, production cost and other factors, hot air drying or hot air combined with microwave drying is the preferred method [14].
Therefore, this paper chooses hot air drying to improve the quality of gastrodia elata. One reason is that this method is already widely used in the Xiaocaoba area; another is that electricity-powered hot air drying is clean and safe to operate, and thoroughly solves problems such as easy mildew and excessive sulphide caused by the smoke from burning coal. In addition, hot air drying by electricity can improve the content of the effective components of gastrodia elata and can be applied to large-scale production, which has certain guiding significance for local gastrodia elata processing and improvement.
by electricity-powered hot air dryer in the laboratory. At 9:00 am on January 13, samples
covered with soil were cleaned with tap water at the same time.
Fig. 1. Local drying room
Fig. 2. Electric thermostatic hot air blower dryer
$$p = \frac{m - n}{n} \times 100\%$$

where m and n are the initial weight and the dry weight, respectively.
(3) Gastrodin. Gastrodin is one of the critical effective ingredients of gastrodia elata. According to the Chinese Pharmacopoeia (2015 edition), gastrodin is measured by the high performance liquid chromatography (HPLC) method [17].
(4) Polysaccharide. Polysaccharide is another important effective ingredient of gastrodia elata, and the anthrone-sulfuric acid method is one way to measure it [18]. The polysaccharide content of gastrodia elata is calculated as follows:

$$w = \frac{C \cdot D \cdot f}{m}$$

where C is the glucose concentration of the polysaccharide solution (mg/mL), D is the dilution ratio of the polysaccharide, m is the mass of the raw gastrodia elata material (g), f is the conversion factor, and w is the polysaccharide content of gastrodia elata (%).
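The two formulas above can be illustrated with a small computational sketch; the numerical values are hypothetical examples, not measured data from this study:

```python
def dry_base_moisture(m, n):
    """Dry-base moisture content p = (m - n) / n * 100%,
    with m the initial weight and n the dry weight (same units)."""
    return (m - n) / n * 100.0

def polysaccharide_content(C, D, f, m):
    """Polysaccharide content w = C * D * f / m (%), where C is the glucose
    concentration of the polysaccharide solution (mg/mL), D the dilution
    ratio, f the conversion factor and m the sample mass (g)."""
    return C * D * f / m

# Hypothetical example values, for illustration only:
print(dry_base_moisture(m=120.0, n=30.0))                 # 300.0 (% dry base)
print(polysaccharide_content(C=0.45, D=50, f=2.2, m=0.3))
```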
Fig. 3. (a) The appearance, curling degree and texture of boiled and non-boiled samples dried by the two methods; (c) sections of boiled and non-boiled samples dried in the laboratory; (d) sections of boiled and non-boiled samples dried in the local drying room.
The dry base moisture content of the samples ranges from 1.801 to 4.77. According to the calculated data, the dry base moisture content is largest in the weight range of 80–110 g; above or below this range, the dry base moisture content decreases gradually. The dry base moisture content of the non-boiled samples is higher than that of the boiled ones. Comparing boiled samples of similar weight between the two methods, the dry base moisture content of the samples dried by electric hot air was generally smaller than that of the samples dried by coal-fired hot air, whereas the data of the non-boiled samples dried by the two methods were not clearly distinguished. Therefore, electric hot air drying is better than coal-fired hot air drying for boiled samples.
3.3 Gastrodin
An appropriate amount of gastrodin reference substance was added to a 3% acetonitrile solution, and solutions of different concentrations were injected into the HPLC to obtain the standard curve Area = 17777.57082 × Amt + 8.0580846 (R = 0.99999); the detailed steps follow the Chinese Pharmacopoeia. The experimental data are shown in Table 1, where the lab number denotes the sample number in the laboratory and the local number denotes the sample number in the local drying method.
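Using the reported calibration line, the gastrodin amount corresponding to a measured peak area can be back-calculated as sketched below; the peak area value is hypothetical:

```python
def gastrodin_amount(area, slope=17777.57082, intercept=8.0580846):
    """Invert the HPLC calibration line Area = slope * Amt + intercept."""
    return (area - intercept) / slope

# Hypothetical measurement:
area = 45000.0                      # integrated peak area (arbitrary units)
amt = gastrodin_amount(area)        # amount on the standard-curve scale
print(f"gastrodin amount on the standard-curve scale: {amt:.4f}")
```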
For gastrodia elata of the same weight, the gastrodin content of the boiled samples is significantly higher than that of the non-boiled ones, and for different weights, heavier gastrodia elata has a higher gastrodin content. Comparing the two drying methods for boiled gastrodia elata, the gastrodin contents obtained by electric hot air drying are all higher than 0.20% (according to the Chinese Pharmacopoeia, the gastrodin content of gastrodia elata should not be less than 0.20%). However, except for samples XKD8 and XKD7, hot air drying by coal fails to meet this requirement. Analysis of these two samples suggests that the possible reason is their large volume and thickness. The temperature in the local drying room varies between 60 °C and 30 °C, which is conducive to rapid dehydration due to the temperature difference, while the temperature of the electric constant-temperature dryer is always around 50 °C, which easily causes uneven dehydration inside and therefore many surface folds. Therefore, fresh gastrodia elata weighing less than 130 g is better dried at a constant temperature of 50 °C by electric hot air drying, while heavier tubers should be dried at a variable temperature; the specific process needs to be studied further.
3.4 Polysaccharide
Glucose standard solution (1 mg/mL) volumes of 0, 1, 2, 3, 4, 5 and 6 mL were each made up to constant volume in 50 mL volumetric flasks. Then 1 mL from each flask was placed in a stoppered tube in an ice water bath, and 4 mL of 0.2% anthrone-sulfuric acid reagent was added. The tubes were then placed in a boiling water bath for 10 min, cooled, and kept in the dark for 10 min. The absorbance was measured at 620 nm, and the regression equation y = 30.65x + 0.6617 (R² = 0.9925) was obtained.
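In the same way, the glucose concentration of a test solution can be read off the regression line, as in the sketch below; the absorbance value is hypothetical:

```python
def glucose_concentration(absorbance, slope=30.65, intercept=0.6617):
    """Invert the anthrone-sulfuric acid calibration y = 30.65*x + 0.6617,
    where y is the absorbance at 620 nm and x the glucose concentration."""
    return (absorbance - intercept) / slope

# Hypothetical absorbance reading:
A620 = 1.25
C = glucose_concentration(A620)
print(f"glucose concentration: {C:.4f} (units of the standard curve)")
```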
The experimental data show that the polysaccharide content of all samples is between 16.21% and 16.49%. In general, the polysaccharide content of the boiled samples is slightly higher than that of the non-boiled samples, but the difference is negligible, so there is no significant difference in polysaccharide content between the two methods. The likely reason is that the drying temperature of both methods is below 60 °C, and the loss of gastrodia elata polysaccharides below this temperature is small.
4 Conclusions
In this paper, constant-temperature hot air drying by electricity at 50 °C is compared with the local variable-temperature hot air drying by coal. Although the appearance and the polysaccharide content are not improved, the former method can improve the gastrodin content so that it meets the national standard. Therefore, it can be concluded from this experiment that the 50 °C electric hot air drying method is better than the local hot air drying method. The research group gives the following suggestions for local gastrodia elata drying:
(1) All gastrodia elata should be boiled before drying; the suggested temperature is 90 °C, and the boiling time is 15–20 min for 140–175 g, 10–15 min for 105–140 g, and 8–10 min for 70–105 g.
(2) The local drying room should be redesigned as a fully sealed, temperature-adjustable electric hot air drying room. The drying temperature should be controlled at 50 °C for tubers below 130 g. For gastrodia elata over 130 g, backwater treatment is recommended, but the specific process needs to be studied further.
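The recommendations above can be summarized as a simple lookup, sketched below; it only encodes the suggested practice stated in this section, weights outside the listed ranges are left without a recommendation, and the handling of the overlapping range endpoints is an assumption:

```python
def boiling_time_minutes(weight_g):
    """Suggested boiling time range (min) at 90 degC versus fresh tuber weight."""
    if 140 <= weight_g <= 175:
        return (15, 20)
    if 105 <= weight_g < 140:
        return (10, 15)
    if 70 <= weight_g < 105:
        return (8, 10)
    return None   # outside the studied range; no recommendation given

def drying_mode(weight_g):
    """Constant 50 degC electric hot air drying below 130 g; variable
    temperature (to be studied further) for heavier tubers."""
    return "constant 50 degC" if weight_g < 130 else "variable temperature"

print(boiling_time_minutes(120), drying_mode(120))   # (10, 15) constant 50 degC
print(boiling_time_minutes(160), drying_mode(160))   # (15, 20) variable temperature
```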
Acknowledgements. This work was supported by the Opening Fund of Key Lab of Process
Analysis and Control of Sichuan Universities of China (2017001).
Improvement of Hot Air Drying on Quality of Xiao-Caoba Gastrodia Elata 487
References
1. Su, S.: Herbal Classic of Materia Medica (annotated). Fujian Science and Technology Press,
Fu Zhou (1988)
2. Yang, S., Lan, J., Xu, J.: Research progress of gastrodia elata. Chin. Tradit. Herbal Drugs 31
(1), 66–69 (2000)
3. Kong, X., Liu, T., Guan, J.: Effect of polysaccharide from the Gastrodia elata Blume on
metabolism of free radicals in subacute aging model mice. J. Anhui Univ. Nat. Sci. Ed. 29
(2), 95–99 (2005)
4. Yunnan “zhaotong gastrodia elata” industry development becoming strong. http://yn.
yunnan.cn/html/2018-04/20/content_5172277.htm. Accessed 21 June 2019
5. The investigation of the gastrodia elata industry by National gastrodia conference organizing
committee. http://www.emushroom.net/news/201208/08/11700.html. Accessed 21 June
2019
6. Report on the development of gastrodia elata industry by Zhaotong people’s government.
http://www.ztrd.gov.cn/article/201408/t20140821_1274_1.shtm. Accessed 21 June 2019
7. Xiaocaoba gastrodia elata in Zhaotong City-the world’s original factory. https://www.
taodocs.com/p-50138411.html. Accessed 21 June 2019
8. Qin, J., Zhang, J., Zhou, H.: Influence on the content of gastrodin of different processing
methods. J. Shanxi Univ. Sci. Technol. 23, 76–79 (2005)
9. Ning, Z., Mao, C., Lu, T., et al.: Effects of different processing methods on effective
components and sulfur dioxide residue in Gastrodiae Rhizoma. China J. Chin. Materia Med.
39(15), 2814–2818 (2014)
10. Huang, X., Qi, C., Zhu, Y., et al.: Determination of Gastrodin and Gastrodigenin in fresh and
different processing Gastrodia elata BI. J. ZhaoTong Univ. 39(5), 43–47 (2017)
11. Yong, W., Zhao, Y., Gu, Y.: Effect of different drying methods on quality of Rhizoma
Gastrodiae. Chin. Trad. Pat. Med. 27(6), 673–676 (2005)
12. Tian, Z., Wang, J., Liu, J., et al.: Effects of different processing methods and steamed time on
quality of Zhaotong Gastrodiae rhizoma. Southwest China J. Agric. Sci. 29(7), 1701–1706
(2016)
13. Liu, Y., Wang, C., Tang, B., et al.: Effect of different drying methods on the content of
gastrodin in gastrodia elata. Mod. Trad. Chin. Med. 33(3), 108–109 (2013)
14. Ji, D., Ning, Z., Zhang, X.: Effects of different drying methods on quality of gastrodiae
Rhizoma. China J. Chin. Materia Med. 41(14), 2587–2590 (2016)
15. Yu, M., Zhang, X., Mu, G., et al.: Research progress on the application of hot air drying technology in China. Agric. Sci. Technol. Equip. 8, 14–16 (2013)
16. Du, X.: Xinxing Shipin Ganzao Jishu Ji Yingyong. Chemistry Industry Press, Beijing (2018)
17. National Pharmacopoeia Commission: Chinese Pharmacopoeia, 2015th edn. China Medical
Science and Technology Press, Beijing (2015)
18. Peng, Z.: Research overview of polysaccharide from gastrodia elata. J. Gansu Coll. TCM 25
(4), 49–51 (2008)
Installation Parameters Optimization
of Hot Air Distributor During Centrifugal
Spray Drying
Abstract. The location of hot air distributor plays an important role in the
centrifugal spray drying. Based on response surface optimization and numerical
simulation, the overall desirability of fine powder ratio and outlet air moisture is
used as the response value, and the installation angle of hot air distributor and
the distance between atomizer and hot air distributor are optimized. A better
drying effect was obtained by optimizing parameters of hot air distributor, which
provides guidance for the industrial production of centrifugal spray drying.
Nomenclature
Mi indicator value.
Mmax, Mmin the maximum and minimum values of each indicator.
d1 the normalized value of ratio of fine powder.
d2 the normalized value of outlet air moisture.
1 Introduction
Many parameters, such as the atomizer speed, the hot air inlet temperature and the inlet air volume, affect the centrifugal spray drying effect, and many researchers have studied them. Meanwhile, the hot air distributor, which makes the hot air contact fully and uniformly with the droplets sprayed from the atomizer, also influences the centrifugal spray drying effect, for example by reducing or avoiding sticking or scorching. Li [1] showed that too large or too small an installation angle of the hot air distributor causes serious sticking, which undermines the advantages of spray drying. Gao [2] proposed improving the drying effect by improving the hot air distributor. Yang [3] showed that the tangential velocity provided by the hot air can change the dispersion of the droplets. However, the above studies did not provide specific numerical analyses. In this paper, the effect of the hot air distributor is fully considered: the installation angle and the distance of the hot air distributor are adjusted to change the hot air tangential speed and increase the contact time between the hot air and the droplets. The overall desirability (OD) of fine
increase contact time between hot air and droplets. The overall desirability (OD) of fine
powder ratio and outlet air moisture is defined as the goal to optimize the parameters
based on central composite design and response surface methodology (CCD-RSM),
and the optimal installation angle and the distance of the hot air distributor with better
drying performance are obtained, which plays a guiding role in actual industrial
production.
[Figure: schematic of the centrifugal spray drying tower, showing the hot air inlet, wall, air outlet, particle outlet, air distributor and atomizer (droplet inlet)]
1. Hot air inlet: The hot air, with 290 °C temperature and 2.173 kg/s flow rate cal-
culated based on the heat balance and material balance [7, 8], is composed of 0.051
H2O, 0.201 CO2 and 0.748 N2 [9] and its turbulent flow intensity is 4.33%.
2. Feed liquid inlet: The feed liquid, with a temperature of 30 °C, a flow rate of 0.175 kg/s and a moisture content of 67%, is divided into droplets by the atomizer rotating at 7780 r/min. The droplets obey the Nukiyama-Tanasawa distribution [10] with distribution parameters of 2.8953 and 0, and enter the drying tower at a tangential speed of 85.267 m/s and a radial speed of 18.111 m/s.
3. Hot air outlet: The average static pressure generated by the induced draft fan is
−300 Pa.
4. Particle outlet: The flow rate is 0.
5. Wall boundary: Considering that the insulated wall surface has a small amount of heat loss, the wall is set as a surface with a heat dissipation coefficient of 0.961 W/(m² K).
6. The installation angle of the hot air distributor and the distance between the hot air distributor and the atomizer disk are 0° and 250 mm, respectively.
The numerical simulation results are shown in Table 1.
Table 1. The comparison between numerical analysis and industrial measured data

Item                         | Unit | Industrial data | Simulation results | Relative error/%
Outlet air temperature       | °C   | 140             | 137.40             | 1.86
Ratio of 20–150 µm           | %    | 98              | 97.56              | 0.45
Product mean volume diameter | µm   | 60–70           | 61.23              | –
Particle moisture            | %    | 2               | 2.06               | 3.00
OD calculation [13, 14]: The normalized values of the fine powder ratio and the outlet air moisture are calculated through Eqs. (1) and (2), respectively, based on the Hassan method. Then the OD of the fine powder ratio and outlet air moisture is calculated through Eq. (3).
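Since the bodies of Eqs. (1)–(3) are not reproduced here, the sketch below shows a common Hassan-type desirability calculation consistent with the nomenclature; which normalization direction applies to each indicator, and the use of a geometric mean for the OD, are assumptions, and the numerical values are hypothetical:

```python
import math

def desirability_larger_better(M, M_min, M_max):
    """d = (M - M_min) / (M_max - M_min): equals 1 at the largest (best) value."""
    return (M - M_min) / (M_max - M_min)

def desirability_smaller_better(M, M_min, M_max):
    """d = (M_max - M) / (M_max - M_min): equals 1 at the smallest (best) value."""
    return (M_max - M) / (M_max - M_min)

def overall_desirability(d1, d2):
    """Geometric mean of the two normalized indicators (Hassan-type OD)."""
    return math.sqrt(d1 * d2)

# Hypothetical run: fine powder ratio and outlet air moisture, both treated
# as smaller-is-better and normalized over the design range.
d1 = desirability_smaller_better(M=12.0, M_min=8.0, M_max=20.0)
d2 = desirability_smaller_better(M=6.5, M_min=5.0, M_max=9.0)
print(round(overall_desirability(d1, d2), 3))
```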
The data in Table 3 (angle, distance and OD) is imported into the Design-Expert
10.0.7 software for multivariate binomial fitting. The result is Eq. (4):
The results in Table 4 show that the model is significant, so the method can be used to optimize the process parameters of centrifugal spray drying. The values of R² and R²adj are 0.9016 and 0.8314, respectively, indicating that the model agrees well with the simulation. The p values of B and B² are extremely small, indicating that the installation angle has a significant influence on the results, while A and A² are not significant, indicating that the installation angle has a greater influence on the spray drying effect than the distance.
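The quadratic response-surface fit itself (performed in Design-Expert in this work) can be reproduced with an ordinary least-squares sketch such as the one below; the design points are hypothetical placeholders standing in for the data of Table 3:

```python
import numpy as np

def fit_quadratic_rsm(A, B, OD):
    """Fit OD = b0 + b1*A + b2*B + b3*A*B + b4*A^2 + b5*B^2 by least squares.
    A: distance, B: installation angle (following the factor labels in the text)."""
    X = np.column_stack([np.ones_like(A), A, B, A * B, A ** 2, B ** 2])
    coeffs, *_ = np.linalg.lstsq(X, OD, rcond=None)
    return coeffs

# Hypothetical central-composite-style design points (distance in mm, angle in deg):
A = np.array([200., 200., 240., 240., 220., 220., 220., 220., 220.])
B = np.array([0.,   10.,  0.,   10.,  5.,   5.,   5.,   0.,   10.])
OD = np.array([0.55, 0.70, 0.52, 0.68, 0.80, 0.79, 0.81, 0.60, 0.72])
print(fit_quadratic_rsm(A, B, OD))
```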
The optimization result is that the installation angle is 5.741° and the distance is
219.758 mm. Considering the actual situation, the installation angle is 6° and the
distance is 220 mm.
The optimization results show that the optimized parameters improve the spray drying effect, giving lower particle moisture, a larger ratio of 20–150 µm particles and lower product wear, which contributes to controlling product wear and ensuring particle moisture.
Fig. 3. Air temperature distribution before optimization
Fig. 4. Air temperature distribution after optimization
As can be seen in Figs. 3, 4 and 5, the hot air temperature distribution in the drying tower after optimization provides a more uniform energy field for the droplets, which ensures that the droplets are evenly heated and yields particles with good roundness and moisture content. It can be seen from the particle size distribution that the proportion of small particles decreases, indicating that a lower proportion of fine powder is beneficial for reducing waste and increasing production.
4 Conclusions
1. The influence of the installation position of the hot air distributor on the centrifugal
spray drying effect was verified by the numerical simulation. The reasonable
installation position of hot air distributor is of great significance to the centrifugal
spray drying.
2. The overall desirability (OD) of fine powder ratio and outlet air moisture is defined
as the goal to optimize installation angle and the distance of the hot air distributor
based on CCD-RSM, and the optimal installation angle and the distance are 6° and
220 mm, respectively.
3. The optimization results show that reasonable position of the hot air distributor can
improve the centrifugal spray drying effect and provide guidance for industrial drying.
References
1. Li, Q.: The improvement and optimization of centrifugal spray-drying tower. Dry. Technol.
Equip. 9(5), 268–273 (2011)
2. Gao, Z.L.: Development and application of electric high speed centrifugal spray dryer.
Chem. Eng. 26(3), 58–60 (1998)
3. Yang, S.J., Wei, Y.C., Woo, M.W., Wu, D.: Numerical simulation of mono-disperse droplet
spray dryer under influence of swirling flow. CIESC J. 69(9), 3814–3824 (2018)
4. Xu, J.J., Wang, Z.T., Yuan, K.: Numerical simulation of CFX gas-liquid two-phase flow in
spray drying tower for preparation of FCC catalyst. In: 2014 ANSYS China Technology
Conference, pp. 21–23 (2014)
5. Huang, L.X., Kumar, K., Mujumdar, A.S.: Simulation of a spray dryer fitted with a rotary
disk atomizer using a three-dimensional computational fluid dynamic model. Dry Technol.
22, 1489–1515 (2004)
6. Huang, L.X., Passos, M.L., Kumar, K.: A three-dimensional simulation of a spray dryer
fitted with a rotary atomizer. Dry Technol. 23, 1859–1873 (2005)
7. Wang, B.H., Wang, X.Z.: Two heat balance methods in spray drying process. Dry. Technol.
Equip. 9(2), 76–81 (2011)
8. Liu, G.W.: Spray Drying Technology, pp. 33–36. China Light Industry Press, Beijing (2001)
9. Xu, S.H.: Direct calculation method of flue gas physical properties. J. Suzhou Inst. Silk Text.
E Technol. 19(3), 32–36 (1999)
10. González-Tello, P., Camacho, F., Vicaria, J.M.: A modified Nukiyama-Tanasawa distribu-
tion function and a Rosin-Rammler model for the particle-size-distribution analysis. Powder
Technol. 186, 278–281 (2008)
11. Ma, Y.H., Lu, J.Y., Hu, Z.Z., Wei, S.H.: Preparation of 1-pentene/1-octene/1-dodecene
terpolymer drag reducer by response surface method. CIESC J. 68, 2195–2203 (2017)
12. Wang, X.H., Xia, L.L., Hu, M., Song, Y.: Optimization of extraction process for compound
Tongmai prescription by Box-Behnken response surface methodology combined with multi-
index evaluation. Chin. J. Hosp. Pharm. 37, 712–716 (2017)
13. Wu, W., Cui, G.H., Lu, B.: Optimization of multiple variables: application of central
composite design and overall desirability. Chin. Pharm. J. 35, 532 (2000)
14. Hassan, E.E., Parish, R.C., Gallo, J.M.: Optimized formulation of magnetic chitosan
microspheres containing the anticancer agent, oxantrazole. Pharm. Res. 9, 390–397 (1992)
Wear Mechanism of Curved-Surface Subsoiler
Based on Discrete Element Method
1 Introduction
Subsoiling technology is one of the basic contents of conservation tillage [1]. It can
effectively break the plough pan and deepen the tillage layer, which is beneficial to the
renewal of rain and oxygen in the soil [2]. However, there are lots of problems in
subsoiling preparation, such as high resistance, serious wear of subsoilers, and high
energy consumption, which hinder the development and popularization of subsoiling
technology. Therefore, clarifying the wear mechanism of the subsoiler becomes an important condition for solving these problems. However, the interaction between soil and subsoilers is very complicated and is difficult to analyze with traditional test methods.
With the development of modern science and technology, the discrete element method has been proposed for analyzing the motion laws and mechanical properties of complex granular systems [3]. It can effectively solve some complex dynamic problems in agricultural soil cultivation and provides new ways to explore the relationship between soil particles and agricultural implements.
In this paper, to explore the wear mechanism of the curved-surface subsoiler, the discrete element method is used to simulate the subsoiling process at three working speeds. Using the slicing function in the EDEM post-processing module, the main wear surfaces of the subsoiler are identified. The wear mechanism of the subsoiler is explored with a view to reducing wear, resistance and energy consumption, providing a theoretical basis for the optimal design of subsoilers.
Soil compaction is an indicator of the internal strength of the soil and has a great influence on the subsoiling operation. Consistent compactness between the simulated soil and the actual soil is one of the important conditions for an accurate simulation. In order to reproduce the real soil state as closely as possible, six test points were selected at the test site, and the soil compactness at different depths is shown in Table 1.
Existing soil studies have shown that there are bonding forces between soil particles, which directly affect soil compaction. Therefore, there should also be adhesion between the simulated soil particles [6]. The particles were regarded as viscous bodies, and the Bonding model in the Hertz-Mindlin contact model was used as the final particle contact model [7]. After bonding, bonds are formed to provide the bonding force.
In this paper, two particle factories are set up to produce two layers of soil. The
bottom layer is the plow-bottom layer and the surface layer is the tillage layer. The
particle formation time is 5 s, and a total of 620,000 particles are formed. After 1 s
deposition, the particles are bonded at 6 s to form bonds. The soil model is shown in
Fig. 3.
Enter the corresponding simulation parameters in the software and import the three-
dimensional model of the subsoiler into the discrete element software. The tillage depth
is set to 300 mm, which is in line with the actual subsoiling operation. Finally, simulate
the subsoiling process at three working speeds of 0.6 m/s, 1.0 m/s and 1.5 m/s. The
subsoiling discrete element model is established as shown in Fig. 4.
The backfilling of the soil is not timely, so a confined space will be formed below the
subsoiler tip. It can be seen that the upper surface and front surface of the subsoiler tip
are worn severely, and the lower surface and the rear surface are worn lightly. It is
observed in Fig. 6 that the contact range between soil and subsoiler tip at a working
speed of 1.5 m/s is larger than that at 1.0 m/s and 0.6 m/s, therefore, the higher the
working speed, the more serious wears of subsoiler tips.
As shown in Fig. 7, the soil is in close contact with the front surface of the subsoiler surface, while there is a gap between the back surface and the soil; therefore, the front surface of the subsoiler surface is the wear surface. The gap formed between the soil and the subsoiler is largest at the working speed of 1.5 m/s, while at 0.6 m/s the gap is small. The main reason for the gap is the special geometric shape and working angle of the subsoiler, which cause the soil bulge around the subsoiler to be asymmetric after the operation, an effect that becomes more obvious as the speed increases.
Because of the squeezing and shearing action of the subsoiler handle, the soil moves to both sides, forming contact areas on both sides of the subsoiler. As the speed increases, the front of the subsoiler handle comes into closer contact with the soil and is regarded as the main wear surface. At the same time, the gap between the soil and the back surface of the subsoiler handle increases, the friction surface decreases, and the wear of the subsoiler decreases. This is due to the special curved shape of the subsoiler: the higher the speed, the more the soil in front of the shovel is lifted and the more severe the soil movement. The soil in contact with the subsoiler handle is the tillage-layer soil, whose density is smaller than that of the plough pan, so its movement is more intense. This increases the difference in soil movement between the two sides of the subsoiler handle and ultimately leads to a difference in wear on the two sides.
paint sprayed on the subsoiler has not been rubbed off by the soil. The wear state of the
curved-surface subsoiler is consistent with the results of the discrete element analysis in Sect. 3.1 of this paper, which verifies the validity of the discrete element simulation results.
4 Conclusions
In order to explore the wear mechanism of the curved-surface subsoiler, the various surfaces of the subsoiler are defined in this paper, and the subsoiling process is simulated with discrete element simulation software. Combined with an actual subsoiler that has been working for a long time, the wear surfaces are analyzed, and the following conclusions are drawn.
(1) The upper surface and the front surface of the subsoiler tip, the front surface of the subsoiler surface and the front surface of the subsoiler handle are the friction surfaces of the curved-surface subsoiler. These surfaces are the main wear parts of the subsoiler and can be given particular attention when optimizing for wear reduction.
(2) The wear of the curved-surface subsoiler is related to the working speed. The greater the speed, the closer the contact between the subsoiler and the soil, the greater the force, and the greater the wear. Among the surfaces, the wear of the subsoiler handle correlates most strongly with the change of speed.
Acknowledgements. This work was supported by National Key R&D Program of China
(2017YFD0701103-3) and Key research and development plan of Shandong Province
(2018GNC112017), (2017GNC12108).
Development Status of Balanced Technology
of Battery Management System
of Electric Vehicle
1 Introduction
Compared with other batteries, lithium-ion batteries have the advantages of small volume, light weight, high single-cell voltage, high specific energy, long cycle life, no memory effect, low self-discharge rate and no pollution [1, 2]. However, long-term use makes the performance of the cells in a lithium battery pack inconsistent, which may lead to overcharge, overdischarge, overheating and overcurrent and cause irreparable damage to the battery; in severe cases it may even cause battery explosion and spontaneous combustion [3]. In order to give full play to the excellent characteristics of lithium-ion batteries, many researchers at home and abroad use battery management systems (BMS) to improve battery utilization and life cycle, and solve the problem of inconsistent battery performance through the balancing management technology in the BMS.
This paper mainly summarizes the development and characteristics of equalization technology in battery management systems at home and abroad in recent years. It analyzes the advantages and disadvantages of different equalization techniques in terms of the topology of the equalization circuit and the equalization strategy, and proposes future research directions and the key technologies that need to be addressed for lithium-ion battery equalization.
3 Balance Criteria
At present, the equalization variables used in the equalization control strategy mainly
include terminal voltage, SOC and battery capacity. The equalization methods used are
maximum value method, average value method and fuzzy control method.
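To make the maximum-value and average-value criteria concrete, the following Python sketch selects the cells of a pack that should be balanced from a vector of cell voltages; the 30 mV trigger threshold and the example voltages are hypothetical illustration values, not figures from the literature surveyed here.

# Minimal sketch of the maximum-value and average-value balance criteria.
# The trigger threshold (30 mV) and the example voltages are hypothetical.

def cells_to_balance_max(voltages, threshold=0.03):
    """Maximum-value method: balance every cell that deviates from the
    highest cell voltage by more than the threshold."""
    v_max = max(voltages)
    return [i for i, v in enumerate(voltages) if v_max - v > threshold]

def cells_to_balance_avg(voltages, threshold=0.03):
    """Average-value method: balance every cell whose voltage deviates
    from the pack average by more than the threshold."""
    v_avg = sum(voltages) / len(voltages)
    return [i for i, v in enumerate(voltages) if abs(v - v_avg) > threshold]

if __name__ == "__main__":
    pack = [3.65, 3.71, 3.70, 3.59, 3.70, 3.72]   # example cell voltages (V)
    print("max-value criterion :", cells_to_balance_max(pack))
    print("average criterion   :", cells_to_balance_avg(pack))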
and the influence of the opposite terminal voltage is relatively large, and such a method
cannot be applied in a parallel circuit. Ruihua [10] of Tongji University and others used the terminal voltage as the equalization variable and applied an integrated equalization circuit topology to the active equalization strategy of a series-connected battery pack, avoiding overcharging or overdischarging of the battery and improving the equalization speed.
Remaining Power
The equalization strategy with SOC as the equalization variable is the most researched and is the future development direction. Le and Jianlong [1, 2, 11, 12] proposed balancing strategies aimed at the SOC consistency of single cells. The battery SOC cannot be measured directly; it can only be calculated from parameters such as the current, voltage and temperature of the battery detected by the relevant monitoring circuit. When the difference in the remaining capacity within the battery pack exceeds a theoretical value, balancing is started during charging. At present, SOC estimation methods include the open-circuit voltage method, ampere-hour integration method, neural network method, Kalman filter method, etc., but the accuracy of SOC estimation still needs to be improved, and SOC alone cannot accurately indicate battery overcharge and undercharge. Moreover, as the number of battery cycles increases, the effects of polarization and aging gradually deepen and the internal resistance of the battery becomes large, which hinders accurate estimation of the battery SOC.
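As a minimal sketch of the ampere-hour (coulomb counting) integration method mentioned above, the following Python fragment updates an SOC estimate from sampled current; the nominal capacity, initial SOC, coulombic efficiency and current profile are assumed values, and a practical BMS would additionally correct the estimate with open-circuit voltage or Kalman filtering.

# Coulomb-counting SOC update: SOC(t) = SOC(0) - (1/C_nom) * integral(i dt).
# Capacity, initial SOC, efficiency and the current profile are assumptions.

def update_soc(soc, current_a, dt_s, capacity_ah=50.0, coulomb_eff=0.99):
    """Advance the SOC estimate by one sampling step.
    current_a > 0 means discharge, current_a < 0 means charge."""
    delta_ah = current_a * dt_s / 3600.0           # charge moved in this step (Ah)
    eff = 1.0 if current_a > 0 else coulomb_eff    # efficiency applied on charging only
    soc -= eff * delta_ah / capacity_ah
    return min(max(soc, 0.0), 1.0)                 # clamp to the physical range

if __name__ == "__main__":
    soc = 0.80                                     # assumed initial state of charge
    for current in [10.0, 10.0, -5.0, 20.0]:       # sampled pack current (A), 1 s steps
        soc = update_soc(soc, current, dt_s=1.0)
    print(f"estimated SOC: {soc:.4f}")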
Battery Capacity
The difficulty of accurately measuring battery capacity is the main factor restricting its wide application as an equalization variable. As with SOC estimation, as the number of charge–discharge cycles increases, factors such as battery aging, polarization effects, electrolyte concentration and temperature have a growing influence, so the accuracy of battery capacity measurement is difficult to guarantee; offline estimation of battery capacity is currently the most common method.
The fuzzy control method is a relatively complicated control method, and it is also a development direction for online battery equalization control. Taking the battery consistency parameters as input variables, a nonlinear equalization characteristic model of the battery is established by a correlation algorithm, and the controller output is used to control the equalization voltage, current, time and other parameters. Thi Thu Ngoc Nguyen [13] and others combined a neural network with fuzzy logic control, which is both learnable and adaptive; it can use online measurement data to find the optimal control point and equalize the current between the batteries.
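The idea of fuzzy equalization control can be illustrated with the following sketch, which maps a cell's voltage deviation onto a balancing current through triangular membership functions and weighted-average defuzzification; the membership breakpoints and output current levels are purely illustrative assumptions and do not reproduce the neuro-fuzzy controller of Ref. [13].

# Illustrative Mamdani-style rule evaluation for balancing current selection.
# Membership breakpoints and output currents are hypothetical values.

def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def balancing_current(dv_mv):
    """Map a cell's voltage deviation (mV above the pack average)
    to a balancing current (A) by weighted-average defuzzification."""
    small  = tri(dv_mv, -10.0, 0.0, 30.0)
    medium = tri(dv_mv,  10.0, 40.0, 70.0)
    large  = tri(dv_mv,  50.0, 100.0, 150.0)
    currents = {0.0: small, 0.5: medium, 1.0: large}   # rule outputs (A) and firing strengths
    total = sum(currents.values())
    if total == 0.0:
        return 0.0
    return sum(i * w for i, w in currents.items()) / total

if __name__ == "__main__":
    for dv in (5.0, 45.0, 120.0):
        print(f"deviation {dv:5.1f} mV -> balancing current {balancing_current(dv):.2f} A")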
4 Conclusion
References
1. Dongjin, Y., Janan, L.: Review of lithium ion battery and its online testing technologies.
Chin. J. Power Sources 42(09), 1402–1403+1419 (2018)
2. Mingli, L., Bingtao, Q.: Research and design of battery management system for pure electric
vehicle. Process. Autom. Instrum. 39(09), 21–24 (2018)
3. Jun, T., Cuijun, T.: Safety test and evaluation method of lithium ion battery. Energy Storage
Sci. Technol. 7(06), 1128–1134 (2018)
4. Guopeng, T., et al.: Research progress of power battery equalization. Chin. J. Power Sources
39(10), 2312–2315 (2015)
5. Ying, P., et al.: Research status of equalization technology for series batteries. Electron.
Meas. Technol. 38(08), 21–24+49 (2015)
6. Junfeng, H., et al.: Research of charging equalization circuit and equilibrium strategy for Li-
ion battery series. Chin. J. Power Sources 40(12), 2439–2443 (2016)
7. Shenshuang, Y., Xiangzhong, Q.: Design of equalization control for battery of electric
vehicle. Electron. Des. Eng. 25(22), 154–157+161 (2017)
8. Zhiguo, A., et al.: Design of energy transfer equalization control for electric vehicle power
battery. Comput. Simul. 34(05), 147–150+252 (2017)
9. Kaitao, B., et al.: Distributed energy balancing control strategy for energy storage system
based on modular multilevel. Trans. China Electrotech. Soc. 33(16), 3811–3821 (2018)
10. Ruihua, L., et al.: Voltage equalization optimization strategy for LiFePO4 series-connected
battery packs based on Buck-Boost converter. Electr. Eng. 19(03), 1–7 (2018)
11. Le, Q., et al.: Research on control strategy of energy storage system based on SOC. Coal
Technol. 37(06), 247–249 (2018)
12. Jianlong, H., et al.: Research on equalization strategy of energy storage battery strings based
on SOC. Renew. Energy Resour. 35(12), 1828–1834 (2017)
13. Yoo, H.G., et al.: Neuro-fuzzy controller for battery equalisation in serially connected
lithium battery pack. IET Power Electron. 8(3), 458–466 (2015)
Application Analysis of Contourlet Transform
in Image Denoising of Flue-Cured
Tobacco Leaves
Abstract. Image denoising is one of the most basic and important tasks in image processing when a computer is used for the quality inspection of flue-cured tobacco leaves. The Contourlet transform has the advantages of multiresolution, anisotropy and sparsity. Wavelet denoising, median filtering, mean filtering, Gaussian filtering and Wiener filtering are used to conduct comparative experiments on tobacco leaf images so as to verify the denoising effect of the Contourlet transform. It is shown that the image denoising method based on the Contourlet transform has the advantages of high signal-to-noise ratio and good visual effect when applied to tobacco image denoising, and it is effective and feasible for the image denoising of flue-cured tobacco.
1 Introduction
After the fresh tobacco leaves are picked from the field and sorted according to different
maturity and shape characteristics, their subsequent baking quality can be improved
[1]. Therefore, the tobacco leaf sorting technology based on computer vision is applied.
There will be noise when acquiring the image of tobacco leaves due to dust or stains on their surface, which will have an impact on their identification in severe cases. Therefore, the image of the tobacco leaf needs to be denoised before the feature extraction.
The image denoising can be divided into two kinds: spatial domain denoising and
transform domain denoising based on the actual characteristics of image and the
spectral distribution of noise. Among them, the spatial domain denoising mainly
includes mean filter, Gaussian filter and median filter, while the transform domain
y = x + σe    (1)
where x is the ideal signal, y the observed noisy signal, e the noise and σ the noise variance. The purpose of denoising is to recover the original signal x from the noisy signal y.
For an image of N × N pixels, the original image is set as
f_{i,j},  i, j = 1, 2, …, n;  n ≤ N    (2)
where g_{i,j} is the grayscale at the point (i, j) after the noise is superimposed on the original image, e_{i,j} is the noise at the point (i, j), and i, j are the row and column coordinates of the corresponding pixel.
The Contourlet transform is a multi-scale geometric analysis method with multi-resolution, multi-direction and anisotropy. Compared with the wavelet transform, the coefficient energy of images obtained after the Contourlet transform is more concentrated in different directions and scales, and the image denoising effect is better [7].
In Contourlet-transform image denoising, the number of decomposition levels J is first determined; then the low-frequency coefficients a_0 and the high-frequency coefficients d_0, d_1, …, d_{J−1} are obtained by the Laplacian pyramid and the directional filter bank, respectively. New Contourlet coefficients d̂_t, t = 0, 1, …, J−1, are then obtained by applying a threshold to the coefficients. Finally, the inverse Contourlet transform is applied to a_0 and d̂_0, d̂_1, …, d̂_{J−1}, which yields the estimate f̂_{i,j} of the original f_{i,j}, i.e. the denoised tobacco leaf image.
The choice of threshold and thresholding function is the key to the denoising algorithm. In Contourlet-based denoising, the hard-thresholding algorithm can better preserve the local and edge details of the image, but after reconstruction the image may show hair-like visual artifacts, while soft thresholding gives a smoother denoising effect but easily blurs the edge details. First, the hard-thresholding function is used for denoising, as shown in Eq. (4).
d̂_t = d_t  if |d_t| ≥ δ, and d̂_t = 0 otherwise,   t = 0, 1, …, J−1    (4)
where d_t is the high-frequency coefficient, d̂_t the processed (thresholded) coefficient, and δ the shrinkage threshold.
The denoising effect is directly related to the choice of δ. The larger the selected δ, the more noise is eliminated, but the more high-frequency information in the image is also lost; the smaller the selected δ, the more image information is retained, but the more noise is retained as well. In view of this, Donoho proposed the shrinkage-threshold algorithm in 1994 [8, 9].
δ = σ √(2 ln N_0)    (5)
Although this method provides a principled basis for selecting the shrinkage threshold, it can only obtain an upper limit of the optimal threshold, not the optimal threshold itself. In different scale subbands, the proportion of image and noise information is also different, and the higher the scale, the more obvious this trend. Therefore, a threshold selection method based on multi-scale decomposition is adopted [10].
T_l = σ √(2 ln N_0) · 2^((l−J)/2)    (6)
In the equation, σ is the noise level; N_0 the total number of image pixels; l the scale level; and J the number of decomposition layers. Through the above formula, the threshold can be varied according to the scale of the coefficients, thus achieving adaptive threshold selection.
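A minimal numerical sketch of Eqs. (4)–(6) is given below: it applies hard thresholding with the scale-dependent threshold T_l to a list of per-scale high-frequency coefficient arrays. The Contourlet decomposition itself is assumed to be provided by an external toolbox, and in practice the noise level σ would be estimated, for example from the median absolute deviation of the finest-scale coefficients.

import numpy as np

def scale_threshold(sigma, n_pixels, level, n_levels):
    """Scale-adaptive threshold T_l = sigma * sqrt(2 ln N0) * 2**((l - J)/2), Eq. (6)."""
    return sigma * np.sqrt(2.0 * np.log(n_pixels)) * 2.0 ** ((level - n_levels) / 2.0)

def hard_threshold(coeffs, threshold):
    """Hard thresholding of Eq. (4): keep |d| >= threshold, zero the rest."""
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

def denoise_highpass(subbands, sigma, n_pixels):
    """Apply the scale-dependent hard threshold to every high-frequency subband.
    `subbands` is a list of coefficient arrays, one entry per scale (coarse to fine)."""
    n_levels = len(subbands)
    return [hard_threshold(d, scale_threshold(sigma, n_pixels, l + 1, n_levels))
            for l, d in enumerate(subbands)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in subbands; in practice these come from the Contourlet decomposition.
    subbands = [rng.normal(0.0, 1.0, (64, 64)) for _ in range(3)]
    cleaned = denoise_highpass(subbands, sigma=1.0, n_pixels=256 * 256)
    print([int(np.count_nonzero(c)) for c in cleaned])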
In Fig. 1, it is obvious that most of the noise in the image has been removed after mean filtering, but the details become blurred. The median filter, Gaussian filter, Wiener filter and wavelet denoising methods have similar denoising effects, and better texture details are preserved. The Contourlet-based denoising method not only removes the noise but also performs better than the other methods in preserving the texture details of the tobacco image, which also demonstrates the effectiveness of this method for denoising images of agricultural products.
It can be concluded from Table 1 that the PSNR obtained by using the Contourlet transform for denoising is the highest, followed by the Gaussian filter. The denoising effects of the median filter, Wiener filter and wavelet are similar and slightly lower than that of the Gaussian filter, while the effect of the mean filter is relatively poor.
5 Conclusions
To better realize the image denoising of fresh tobacco leaves, this paper proposes an image denoising algorithm based on the Contourlet transform. Comparison with the other denoising methods shows that Contourlet-transform threshold denoising is an algorithm more suitable for denoising tobacco images.
References
1. Xu, F., Zhang, F., Du, B., et al.: Effects of different fresh leaves classification on tobacco leaf
quality and benefits. J. Anhui Agric. Sci. 41(25), 10429–10432 (2013)
2. Donoho, D.L.: De-noising by soft-thresholding. IEEE Trans. Inf. Theory 41(3), 613–627
(1995)
3. Yang, F., Tian, Y., Yang, L., et al.: Agricultural product image denoising algorithm based on
hybrid wavelet transform. Trans. Chin. Soc. Agric. Eng. 27(3), 172–178 (2011)
4. Do, M.N., Vetterli, M.: Contourlets: a directional multi resolution image representation. In:
IEEE International Conference on Image Processing, Rochester, NY, pp. 357–360 (2002)
5. Cunha, A.L., Zhou, J.P., Do, M.N.: The nonsubsampled contourlet transform: theory, design and applications. IEEE Trans. Image Process. 15(10), 3089–3101 (2005)
6. Song, H., He, D., Han, T.: Contourlet transform as an effective method for agricultural
product image denoising. Trans. Chin. Soc. Agric. Eng. 28(8), 287–292 (2012)
7. Dai, W., Yu, S., Sun, S.: Image de-noising algorithm using adaptive threshold based on
Contourlet transform. Acta Electron. Sinica 35(10), 1939–1943 (2007)
8. Donoho, D.L., Johnstone, I.M.: Ideal spatial adaptation via wavelet shrinkage. Biometrika
81, 425–455 (1994)
9. Donoho, D.L.: Denoising by soft-thresholding. IEEE Trans. Inf. Theory 3, 613–627 (1995)
10. Chang, S.G., Yu, B., Vetterli, M.: Adaptive wavelet thresholding for image denoising and
compression. IEEE Trans. Image Process. 9(9), 1532–1546 (2000)
Monte Carlo Simulation of Nanoparticle
Coagulation in a Turbulent Planar Impinging
Jet Flow
1 Introduction
2 Numerical Methodology
∂ρ/∂t + ∂(ρu_i)/∂x_i = 0    (1)
where u_i is the velocity, p the pressure, ρ the density and μ the viscosity; τ_ij refers to the subgrid-scale (SGS) stress tensor.
dx_{p,i}/dt = u_{p,i}    (3)
du_{p,i}/dt = (3/4) · (ρ/(ρ_p d_p)) · c_D (u_i − u_{p,i}) |u − u_p| + f_s    (4)
where x_{p,i} is the position and u_{p,i} the velocity of the particles, u_i is the velocity of the continuous gas phase and d_p is the diameter of the dispersed particles. The first term on the right-hand side of Eq. (4) denotes the drag force that the carrier flow imposes on the particles; the second term represents the contributions from forces other than the drag force.
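The sketch below shows, under simplified assumptions (Stokes drag, so that the drag term reduces to (u − u_p)/τ_p, and f_s neglected), how the particle equations of motion (3)–(4) can be advanced with an explicit Euler step; all numerical values are hypothetical and the gas velocity would normally be interpolated from the resolved LES field.

import numpy as np

def advance_particle(x_p, u_p, u_gas, dt, d_p, rho_p, mu):
    """One explicit Euler step of Eqs. (3)-(4) in the Stokes-drag limit,
    where the drag term reduces to (u_gas - u_p) / tau_p and f_s is neglected."""
    tau_p = rho_p * d_p ** 2 / (18.0 * mu)      # particle relaxation time
    du_dt = (u_gas - u_p) / tau_p               # drag acceleration, Eq. (4)
    u_new = u_p + dt * du_dt
    x_new = x_p + dt * u_p                      # position update, Eq. (3)
    return x_new, u_new

if __name__ == "__main__":
    # Hypothetical values: a 100 nm particle in an air-like gas within an LES cell.
    x, u = np.zeros(3), np.zeros(3)
    u_gas = np.array([1.0, 0.2, 0.0])           # interpolated gas velocity (m/s)
    for _ in range(1000):
        x, u = advance_particle(x, u, u_gas, dt=1e-9,
                                d_p=1e-7, rho_p=2000.0, mu=1.8e-5)
    print("particle velocity after 1 microsecond:", u)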
3 Algorithmic Implementation
A brief outline of the algorithm of the LES-MC method is given as follows [8]:
(a) Initialization.
5 Conclusions
Acknowledgements. The work is supported by the Sinopec Corp. major research project (Grant
No. 417002-2). The research work is based on the MC method developed by H.M. Liu during her
PhD study at the Department of Mechanical Engineering in the Hong Kong Polytechnic
University.
References
1. Chan, T.L., Liu, Y.H., Chan, C.K.: Direct quadrature method of moments for the exhaust
particle formation and evolution in the wake of the studied ground vehicle. J. Aerosol Sci. 41(6),
553–568 (2010)
2. Yu, M., Lin, J., Chan, T.: Numerical simulation of nanoparticle synthesis in diffusion flame
reactor. Powder Technol. 181, 9–20 (2008)
3. Rodrigues, P., Franzelli, B., Vicquelin, R., Gicquel, O., Darabiha, N.: Coupling an LES
approach and a soot sectional model for the study of sooting turbulent non-premixed flames.
Combust. Flame 190, 477–499 (2018)
4. van de Ven, T.G.M., Kelemen, S.J.: Characterizing polymers with an impinging jet.
J. Colloid Interface Sci. 181, 118–123 (1996)
5. Yu, M., Lin, J., Xiong, H.: Quadrature method of moments for nanoparticle coagulation and
diffusion in the planar impinging jet flow. Chin. J. Chem. Eng. 15(6), 828–836 (2007)
6. Liu, H.M., Chan, T.L.: Differentially weighted operator splitting Monte Carlo method for
simulating complex aerosol dynamic processes. Particuology 36, 114–126 (2018)
7. Liu, H.M., Chan, T.L.: Two-component aerosol dynamic simulation using differentially
weighted operator splitting Monte Carlo method. Appl. Math. Model. 62, 237–253 (2018)
8. Liu, H.M., Chan, T.L.: A coupled LES-Monte Carlo method for simulating aerosol
dynamics in a turbulent planar jet. Int. J. Numer. Methods Heat Fluid Flow (2019). https://
doi.org/10.1108/hff-11-2018-0657
9. Zhao, H., Kruis, F.E., Zheng, C.: A differentially weighted Monte Carlo method for two-
component coagulation. J. Comput. Phys. 229(19), 6931–6945 (2010)
10. Zhou, K., He, Z., Xiao, M., Zhang, Z.: Parallel Monte Carlo simulation of aerosol dynamics.
Adv. Mech. Eng. 6, 1–11 (2014)
Structural Damage Detection of Elevator Steel
Plate Using GNARX Model
Abstract. For elevator steel plates, tiny cracks are difficult to detect, yet crack growth may cause fatal accidents. Thus, it is crucial to detect structural damage in elevator steel plates. With the expression of the GNARX model deduced, the modified Mahalanobis distance least squares (MMDLS) method is proposed for parameter estimation. Then, a structure pruning algorithm based on the parameters' rate of standard deviation (SPRSD) is proposed for structure identification. With experimental data, the GNARX model is applied to structural damage detection for elevator steel plates. The results show that the structural damage detection performance of the GNARX model is better than those of the AR, ARX and GNAR models, which indicates the superiority of the GNARX model when applied to structural damage detection of elevator steel plates.
1 Introduction
As closely related to people’s daily life, elevator safety receives more and more
attention. However, for elevator steel plate, the initial and tiny crack is hard to be
discovered. Yet, crack growth may cause serious failure, sometimes even leading to
fatal disaster. Hence, it is critical to detect, locate, and estimate the extent of the
structural damage.
Generally, structural damage detection can be categorized as local-damage detection and global-damage detection [1]. The local-damage detection techniques [2–4] include dye penetration, magnetic powder, eddy current, radiographic, ultrasonic and strain methods, etc. Their main advantage is that there is no need to develop a specific model or obtain baseline data of the undamaged structure. Thus, local-damage detection is very effective for small and regular structures. However, for large and complex structures in invisible or closed environments, it is very difficult and time-consuming to complete an inspection of the whole structure using local-damage detection methods. To overcome this limitation, vibration-based structural damage detection has been proposed as a global-damage detection technique [5, 6].
According to the modeling strategy of time series analysis, the general linear and nonlinear auto-regressive (GNAR) model takes a zero-mean white noise {a_t} as the input to the system [10]. When one exogenous input {u_t} is known, the GNAR model is converted into the GNARX model with a single exogenous input.
If the system has two exogenous inputs, u_t and v_t, the GNARX model with double inputs can be abbreviated as GNARX(p; s_u, s_v; n_{w,1}, n_{w,2}, …, n_{w,p}; n_{u,1}, n_{u,2}, …, n_{u,p}; n_{v,1}, n_{v,2}, …, n_{v,p}), which is expressed as follows:
x_{t,i,1} = {w_{t−1}, …, w_{t−n_{w,i}}, u_{t−s_u}, …, u_{t−s_u−n_{u,i}+1}, v_{t−s_v}, …, v_{t−s_v−n_{v,i}+1}}    (1)
w_t = Σ_{i_1=1}^{n_{w,1}+n_{u,1}+n_{v,1}} h(i_1) x_{t,1,1}(i_1)
    + Σ_{i_1=1}^{n_{w,1}+n_{u,1}+n_{v,1}} Σ_{i_2=1}^{n_{w,2}+n_{u,2}+n_{v,2}} h(i_1, i_2) x_{t,2,1}(i_1) x_{t,2,1}(i_2) + …
    + Σ_{i_1=1}^{n_{w,1}+n_{u,1}+n_{v,1}} … Σ_{i_p=1}^{n_{w,p}+n_{u,p}+n_{v,p}} h(i_1, …, i_p) Π_{k=1}^{p} x_{t,p,1}(i_k)
    = Σ_{j=1}^{p} Σ_{i_1=1}^{n_{w,1}+n_{u,1}+n_{v,1}} … Σ_{i_j=1}^{n_{w,j}+n_{u,j}+n_{v,j}} h(i_1, …, i_j) Π_{k=1}^{j} x_{t,j,1}(i_k) + a_t    (2)
where n_{w,j} (j = 1, 2, …, p) is the memory step of the jth-order term of the output {w_t}, and n_{u,j} and n_{v,j} (j = 1, 2, …, p) are the memory steps of the jth-order terms of the inputs {u_t} and {v_t}, respectively.
Similarly, Eq. (2) can also be generalized into multi-input systems, which need not
be repeated here.
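To make the structure of Eqs. (1)–(2) more tangible before the detailed term construction below, the small sketch here builds the first- and second-order regressor terms of a single-input GNARX model with short, hypothetical memory lengths and estimates the parameters h by ordinary least squares; the MMDLS parameter estimation and SPRSD pruning of this paper are not reproduced.

import numpy as np

def gnarx_regressors(w, u, t, n_w=2, n_u=2, s_u=1):
    """First-order terms x_{t,1} = [w_{t-1}, ..., w_{t-n_w}, u_{t-s_u}, ...]
    plus all second-order products of those terms (hypothetical small orders)."""
    first = [w[t - k] for k in range(1, n_w + 1)] + \
            [u[t - s_u - k] for k in range(0, n_u)]
    second = [first[i] * first[j]
              for i in range(len(first)) for j in range(i, len(first))]
    return np.array(first + second)

def fit_gnarx(w, u, start=5):
    """Ordinary least-squares estimate of h in w_t = h . x_t + a_t."""
    X = np.array([gnarx_regressors(w, u, t) for t in range(start, len(w))])
    y = w[start:]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u = rng.normal(size=500)                       # hypothetical exogenous input
    w = np.zeros(500)
    for t in range(5, 500):                        # toy nonlinear system to identify
        w[t] = 0.5 * w[t - 1] - 0.1 * w[t - 2] * u[t - 1] + 0.3 * u[t - 2] \
               + 0.05 * rng.normal()
    print(np.round(fit_gnarx(w, u), 3))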
x_{t,i,1} = {w_{t−1}, …, w_{t−n_{w,i}}, u_{t−s_u}, …, u_{t−s_u−n_{u,i}+1}, v_{t−s_v}, …, v_{t−s_v−n_{v,i}+1}}
x_{t,i,2} = {x_{t,i,1}(1)·{x_{t,i,1}(1)}, x_{t,i,1}(2)·{x_{t,i,1}(1), x_{t,i,1}(2)}, …, x_{t,i,1}(m_{i,1})·x_{t,i,1}}
⋮
x_{t,i,i} = {x_{t,i,i−1}(1)·{x_{t,i,i−1}(1)}, x_{t,i,i−1}(2)·{x_{t,i,i−1}(1), x_{t,i,i−1}(2)}, …, x_{t,i,i−1}(m_{i,i−1})·x_{t,i,i−1}}    (3)
Accordingly,
e = [e_1, e_2, …, e_n]^T = w − X ĥ_0    (8)
where ĥ_0 is calculated with Eq. (7); thus e_i is the model residual of the ith sample. The covariance matrix of e is denoted C_e.
C_e = E[(e − E[e])(e − E[e])^T] = (1/k) Σ_{i=1}^{k} e_(i) e_(i)^T    (10)
where diag(Ce, 0) is the leading diagonal of Ce and diag(Ce, l) is the lth diagonal.
Thus, GNARX model parameter estimation with MMDLS is given as follows:
(Flowchart: the procedure is iterated until only one term remains.)
AIC = ln[ (N_m/N)·(1/K) Σ_{i=1}^{K} σ²_{m,i} + (N_f/N)·(1/K) Σ_{i=1}^{K} σ²_{f,i} ] + 2R/N    (13)
where σ²_{m,i} is the variance of the modeling residuals of the ith sample; σ²_{f,i} the variance of the forecasting error of the ith sample; R the number of model parameters; N the sequence length; N_m the modeling sequence length; and N_f the forecasting sequence length.
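A small sketch of the AIC criterion of Eq. (13) is given below; it simply evaluates the formula for candidate models given per-sample residual variances, and all numbers are placeholders rather than experimental values.

import numpy as np

def gnarx_aic(var_model, var_forecast, n_params, n_model, n_forecast):
    """AIC of Eq. (13): log of the weighted mean of modeling and forecasting
    error variances plus the 2R/N complexity penalty."""
    var_model = np.asarray(var_model, dtype=float)
    var_forecast = np.asarray(var_forecast, dtype=float)
    n_total = n_model + n_forecast
    pooled = (n_model / n_total) * var_model.mean() + \
             (n_forecast / n_total) * var_forecast.mean()
    return np.log(pooled) + 2.0 * n_params / n_total

if __name__ == "__main__":
    # Placeholder residual variances for K = 3 samples of two candidate models.
    small_model = gnarx_aic([0.12, 0.10, 0.11], [0.15, 0.14, 0.16], 8, 6000, 2000)
    large_model = gnarx_aic([0.09, 0.08, 0.09], [0.14, 0.13, 0.15], 20, 6000, 2000)
    print(f"AIC small model: {small_model:.4f}, AIC large model: {large_model:.4f}")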
The GNARX model is applied to steel plate structural damage detection. With parameter estimation and structure identification, a suitable model is developed and the model parameters are taken as the feature vector. With the k-nearest neighbours (KNN) algorithm, the steel plates with and without cracks are identified.
where a = (a1, a2, …, an) and b = (b1, b2, …, bn) are two sample data; n is the sample
length.
Accuracy = (1/n) Σ_{i=1}^{n} δ(y_i, c_i)    (15)
where y_i and c_i denote the true category label and the obtained cluster label, respectively; δ is a function that equals 1 if y_i = c_i and 0 otherwise.
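The classification step can be sketched as follows: feature vectors (here random stand-ins for the estimated model parameters) are labelled by a plain k-nearest-neighbour vote and the accuracy of Eq. (15) is evaluated.

import numpy as np

def knn_predict(train_x, train_y, query, k=5):
    """Label a query feature vector by majority vote of its k nearest
    training vectors (Euclidean distance)."""
    dist = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dist)[:k]]
    return np.bincount(nearest).argmax()

def accuracy(y_true, y_pred):
    """Eq. (15): fraction of samples whose predicted label matches the true one."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Stand-in feature vectors: class 0 = undamaged, class 1 = damaged plate.
    train_x = np.vstack([rng.normal(0.0, 1.0, (40, 10)),
                         rng.normal(1.5, 1.0, (40, 10))])
    train_y = np.array([0] * 40 + [1] * 40)
    test_x = np.vstack([rng.normal(0.0, 1.0, (10, 10)),
                        rng.normal(1.5, 1.0, (10, 10))])
    test_y = np.array([0] * 10 + [1] * 10)
    preds = [knn_predict(train_x, train_y, q, k=5) for q in test_x]
    print(f"KNN classification accuracy: {accuracy(test_y, preds):.2%}")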
Fig. 2. Dimension of the damaged steel plate and the location of four measuring points
Fig. 3. The sketch map of experimental facility and the layout of sensors
Fig. 4. One group of acoustic emission data for each steel plate (sound pressure in µBar versus sampling index, 0–8000, at measuring points #1–#4)
x_{t,1}: w_{t−1}, w_{t−3}, w_{t−4}, w_{t−5}, w_{t−7}, w_{t−8}, w_{t−10}, u_{t−s_u−1}, u_{t−s_u−2}, u_{t−s_u−3}
x_{t,2}: w_{t−2}w_{t−3}, w²_{t−3}, w_{t−1}u_{t−s_u−1}, w_{t−2}u_{t−s_u−1}, w_{t−3}u_{t−s_u−1}, w_{t−3}u_{t−s_u−2}, w_{t−4}u_{t−s_u−2}, u²_{t−s_u−2}    (17)
x_{t,3}: w³_{t−1}, w_{t−1}u²_{t−s_u−1}
x_{t,1}: w_{t−1}, w_{t−2}, w_{t−3}, w_{t−4}, w_{t−6}, w_{t−8}, w_{t−10}, u_{t−s_u−2}, u_{t−s_u−3}, u_{t−s_u−5}
x_{t,2}: w²_{t−1}, w²_{t−2}, u²_{t−s_u−1}, w_{t−1}u_{t−s_u−2}    (19)
x_{t,3}: w³_{t−1}, w²_{t−1}u_{t−s_u−1}, w_{t−1}u²_{t−s_u−1}, u³_{t−s_u−1}
To make the feature vectors exactly exhibit the nonlinear characteristics and the
vector dimension of undamaged and damaged steel plate data stay the same, the final
model structures are shown as follows:
The final upper model structure:
x_{t,1}: w_{t−1}, w_{t−3}, w_{t−4}, w_{t−5}, w_{t−7}, w_{t−8}, w_{t−10}, u_{t−s_u−1}, u_{t−s_u−2}, u_{t−s_u−3}
x_{t,2}: w_{t−2}w_{t−3}, w²_{t−3}, w_{t−1}u_{t−s_u−1}, w_{t−2}u_{t−s_u−1}, w_{t−3}u_{t−s_u−1}, w_{t−3}u_{t−s_u−2}, w_{t−4}u_{t−s_u−2}, u_{t−s_u−1}u_{t−s_u−2}, u²_{t−s_u−2}    (20)
x_{t,3}: w³_{t−1}, w_{t−1}u²_{t−s_u−1}
The final bottom model structure:
x_{t,1}: w_{t−1}, w_{t−2}, w_{t−3}, w_{t−4}, w_{t−6}, w_{t−8}, w_{t−10}, u_{t−s_u−2}, u_{t−s_u−3}, u_{t−s_u−5}
x_{t,2}: w²_{t−1}, w²_{t−2}, u²_{t−s_u−1}, w_{t−1}u_{t−s_u−2}    (21)
x_{t,3}: w³_{t−1}, w²_{t−1}u_{t−s_u−1}, w_{t−1}u²_{t−s_u−1}, u³_{t−s_u−1}
With parameter estimation, the model parameters are taken as the feature vectors and the KNN algorithm is applied. The results are listed in Tables 1 and 2. For comparison, the results of the AR, ARX, different-order GNAR and different-order GNARX models are also listed in Tables 1 and 2.
From the above, the following can be obtained:
(1) For steel plate structural damage detection, the classification accuracies of the GNARX models are higher than those of the AR, ARX and GNAR models. This indicates that the GNARX model is suitable for structural damage detection.
(2) The classification accuracy of the GNARX model with SPRSD is the highest. This indicates the effectiveness of SPRSD for GNARX model structure identification.
(3) The classification accuracy of the bottom model is obviously higher than that of the upper model. This indicates that the model closer to the damaged location embodies more damage information.
Table 1. The KNN classification accuracy of different models applied to the upper data of
undamaged and damaged steel plate
Model Maximum accuracy* Mean accuracy**
AR(6) 66.67% 61.11%
GNAR(2;6,1) 65.00% 61.11%
GNAR(3;8,4,1) 70.00% 65.74%
ARX(8,3,53) 68.33% 62.78%
GNARX(2;53;7,3;1,2) 73.33% 67.22%
GNARX(3;53;8,4,1;1,2,1) 75.00% 68.89%
GNARX with SPRSD 75.00% 70.19%
* Maximum KNN classification accuracy over different k values (k = 3, 5, 7, …, 19).
** Mean KNN classification accuracy over different k values (k = 3, 5, 7, …, 19).
Table 2. The KNN classification accuracy of different models applied to the bottom data of
undamaged and damaged steel plate
Model Maximum accuracy* Mean accuracy**
AR(8) 73.33% 68.89%
GNAR(2;6,2) 68.33% 62.59%
GNAR(3;8,1,1) 75.00% 71.67%
ARX(10,3,53) 70.00% 67.04%
GNARX(2;53;10,2;2,2) 85.00% 83.15%
GNARX(3;53;10,2,0;5,2,1) 93.33% 86.11%
GNARX with SPRSD 95.00% 91.48%
5 Conclusions
The expression of the GNARX model is deduced. On the basis of the structural characteristics of the GNARX model, a novel approach for parameter estimation (MMDLS) and structure identification (SPRSD) of the GNARX model is proposed. With the experimental data, structural damage of elevator steel plates is detected by time series models, among which the performance of the GNARX model is obviously better than those of the other models.
In this paper, damage detection of steel plates with a simple structure is studied. For an elevator car, multiple GNARX models can be developed for subsections of the car; a parameter matrix can then be obtained, whose changes can reflect the location and extent of the structural damage of the elevator car. However, this needs further study.
References
1. Ghiasi, R., Fathnejat, H., Torkzadeh, P.: A three-stage damage detection method for large-
scale space structures using forward substructuring approach and enhanced bat optimization
algorithm. Eng. Comput. 35, 1–18 (2019)
2. Janapati, V., Kopsaftopoulos, F., Li, F., et al.: Damage detection sensitivity characterization
of acousto-ultrasound-based structural health monitoring techniques. Struct. Health Monit.
15(2), 143–161 (2016)
3. Souridi, P., Chrysafi, A.P., Athanasopoulos, N., et al.: Simple digital image processing
applied to thermographic data for the detection of cracks via eddy current thermography.
Infrared Phys. Technol. 98, 174–186 (2019)
4. Tabatabaeipour, M., Hettler, J., Delrue, S., et al.: Non-destructive ultrasonic examination of
root defects in friction stir welded butt-joints. NDT E Int. 80, 23–34 (2016)
5. Santos, A., Santos, R., Silva, M., et al.: A global expectation–maximization approach based
on memetic algorithm for vibration-based structural damage detection. IEEE Trans. Instrum.
Meas. 66(4), 661–670 (2017)
6. Loh, C.H., Chan, C.K., Chen, S.F., et al.: Vibration-based damage assessment of steel
structure using global and local response measurements. Earthq. Eng. Struct. Dyn. 45(5),
699–718 (2016)
7. Vahidi, M., Vahdani, S., Rahimian, M., et al.: Evolutionary-base finite element model
updating and damage detection using modal testing results. Struct. Eng. Mech. 70(3), 339–
350 (2019)
8. Vamvoudakis-Stefanou, K.J., Sakellariou, J.S., Fassois, S.D.: Vibration-based damage
detection for a population of nominally identical structures: Unsupervised Multiple Model
(MM) statistical time series type methods. Mech. Syst. Signal Process. 111, 149–171 (2018)
9. Ma, J., Xu, F., Huang, K., et al.: Improvement on the linear and nonlinear auto-regressive
model for predicting the NOx emission of diesel engine. Neurocomputing 207, 150–164
(2016)
10. Huang, R., Xu, F., Chen, R.: General expression for linear and nonlinear time series models.
Front. Mech. Eng. China 4(1), 15–24 (2009)
Production Management
The Innovative Development and Application
of New Energy Vehicles Industry
from the Perspective of Game Theory
Abstract. Since their advent, new energy vehicles have attracted much attention from many parties, but their market performance has not been very competitive. Therefore, in order to make more consumers accept new energy vehicles and thereby improve the national energy consumption structure, how to effectively promote new energy vehicles has become an urgent problem to be solved by the government. This paper regards the marketing of new energy vehicles as a typical game process. By constructing game models for the supply and demand side (automobile manufacturer and consumer), the supply side (automobile manufacturer and competitor), and government and enterprise, the optimal game solutions for the participants in the new energy vehicle market are analyzed. Finally, this paper proposes that consumers should actively clarify their product needs, that auto manufacturers should focus on improving the profitability of the products, and that the government should apply different support policies at different stages.
1 Introduction
Under the impact of the 2008 world financial crisis, oil prices rose, global car sales fell, and auto makers began to reflect on vehicle structure and the upgrading of production technology. At the same time, because of growing public awareness of the exhaustion of traditional energy sources and of increasing pollution in modern society, such as haze and acid rain, the call for ecological environment protection and sustainable development has become more intense. In this multifaceted situation, energy-saving and new energy vehicles have developed rapidly under intensive, government-led technology innovation support and have gradually entered the public view. China is the world's fastest-growing auto market, with more than
23.6 million vehicles sold in 2016. By 2020, China is projected to have around 300
million automobiles, which would surpass the current U.S. fleet of 265 million. Some
consumers are willing to consume for environmental protection because of the dete-
rioration of their living environment, but many other consumers also have many
concerns about new energy vehicles, such as price performance, safety and practicality.
As a result, car manufacturers have intensified their technological research and successively joined the market competition. In addition, automobile manufacturers and the government have different demands. Car manufacturers pursue profit maximization, because shifting from traditional energy vehicles to new energy vehicles requires a large cost investment. The government hopes to bring more social value and implement the concept of sustainable development, so it needs to publicize and advocate new energy vehicles and guide market demand. However, because of the high cost of new energy vehicles, their market price has remained high. Due to price, technology and supporting facilities, the number of consumers willing to buy new energy vehicles is still small, even though the government has introduced preferential subsidy policies. As a product that breaks through the energy restriction, the new energy vehicle is conducive to changing the energy consumption structure of the country, and its marketing promotion is therefore particularly important.
2 Literature Review
Game theory was founded in 1944 by John von Neumann and Oskar Morgenstern. In later studies, foreign scholars (Harsanyi [1], Nash, Selten) continued to enrich the theoretical framework and contributed important achievements such as the Harsanyi transformation and the Nash equilibrium.
In the study of new energy vehicle marketing promotion, the foreign scholar Scott believes that the government's subsidy policy has a positive effect on the enterprise, that is, it stimulates the R&D investment of the enterprise [2]. Chinese scholars generally believe that the development of new energy vehicles is in line with the national conditions of China: it can deal with the impact of the international financial crisis while solving energy and environmental problems, and at the same time it is an important breakthrough point for industrial upgrading and the establishment of strategic emerging industries [3].
Shen [4], Zhang [5], Xu [6] and Luo [7], from the perspectives of product, publicity and marketing strategy, marketing mode and policy support, respectively, pointed out the current problems in the development of new energy vehicles in China and put forward solutions and suggestions. Against the background of the development of new energy vehicles in Japan, Jin [8] put forward optimization strategies for the promotion of new energy vehicles in China by studying the specific practices in, and the factors restricting, new energy vehicle promotion in Japan. Combining market promotion with game theory, Wang and Miao obtained a more reasonable government subsidy mode through a game model between government subsidies and enterprises [9]. Wang and Wang used an evolutionary game model to study the income matrix of the government and related enterprises in the process of new energy vehicle technology research and development, and put forward proposals to promote the technological development of new energy vehicles by analysing the optimal solution [10].
It can be seen that many scholars have proposed solutions to the promotion of new energy vehicles from the aspects of products, prices, channels and promotions. On this basis, this paper tries to establish game models for the promotion of new energy vehicles. From the three perspectives of the supply–demand side, the supply side and government–enterprise relations, this paper analyzes the strategy combinations of the game players and seeks equilibrium strategies, so as to provide countermeasures and suggestions for the market promotion of new energy vehicles.
To sum up, the utilities are assigned and compared in order to find the optimal solutions under different circumstances in the new energy vehicle game. Because of the influence of government policy, it is easy to see that Sc + Uc is greater than Uc or 0, so the best decisions of the two sides are only considered when consumers have a demand for vehicle consumption.
1. If Sc + UC is greater than UC and Sm + Un is greater than Uo, the equilibrium
solution is (consumption, popularizing new energy vehicles).
2. If Sc + UC is larger than UC and Sm + Un is less than Uo, the equilibrium solution
is (consumption, popularizing traditional cars).
Conclusion: when the profits of new energy vehicles are greater than those of traditional vehicles, the cost factors of new energy vehicles and consumers' demand for new energy vehicles have little effect on the optimal result. Under these circumstances, vehicle manufacturers are more inclined to promote new energy vehicles. As new energy vehicles have certain environmental benefits, consumers will be more willing to buy them when their environmental awareness is enhanced.
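As an illustration of how the utilities above are compared, the short sketch below reproduces cases 1–2 as a simple best-response comparison; the numerical values of Sc, Uc, Sm, Un and Uo are hypothetical and chosen only to satisfy the inequalities in the text.

def supply_demand_equilibrium(Sc, Uc, Sm, Un, Uo):
    """Cases 1-2 above: given consumption demand, the consumer compares
    Sc + Uc with Uc, and the manufacturer compares Sm + Un with Uo."""
    consumer = "consume" if Sc + Uc > Uc else "not consume"
    manufacturer = ("popularize new energy vehicles" if Sm + Un > Uo
                    else "popularize traditional cars")
    return consumer, manufacturer

if __name__ == "__main__":
    # Hypothetical utilities: consumer subsidy Sc, consumption utility Uc,
    # manufacturer subsidy Sm, new-energy profit Un, traditional profit Uo.
    print(supply_demand_equilibrium(Sc=2.0, Uc=5.0, Sm=3.0, Un=4.0, Uo=6.0))
    print(supply_demand_equilibrium(Sc=2.0, Uc=5.0, Sm=1.0, Un=4.0, Uo=6.0))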
From the competitors' point of view, the competitors have two options: they can choose to promote new energy vehicles and obtain the profit Un, or they can choose to promote traditional vehicles and obtain a profit of 0. The profit of 0 here means that, when promoting traditional cars, the competitor enjoys no government policy welfare and gains no utility beyond its conventional returns.
From the manufacturers' point of view, the manufacturers also have two options when choosing the product: they can choose to promote new energy vehicles and obtain the profit Un, or they can choose to promote traditional vehicles and obtain a profit of zero.
As rational economic agents, the manufacturers and their competitors will consider the other party's best decision when choosing their own best action.
In summary, because Un is greater than zero, the equilibrium solution is (promote new energy vehicles, promote new energy vehicles).
Conclusion: when the profit of the new energy vehicle industry is higher than that of traditional vehicles, the manufacturer and the competitor will both choose to promote new energy vehicles. However, as new energy vehicles are new products, they face a great challenge when entering the market. Without knowing the other party's action, a rational manufacturer will choose to promote traditional vehicles in order to avoid risk.
From the perspective of the government, the government has two choices in this game. First, it can choose to maintain the order of the vehicle market to ensure its healthy and orderly development. In this case, the revenue of the government is Ug − Cg or Uo − Ug; the specific revenue is determined by the dominant strategy of the other side in the game.
The government can also choose not to maintain the order of the vehicle market and let it develop naturally. In that case, the government's revenue is Un or 0. Un refers to the benefits brought by the increasing market share of new energy vehicles, while 0 means that there is no additional revenue, and no additional cost, when manufacturers promote traditional vehicles and the government does not maintain order in the vehicle market.
From the vehicle manufacturer's point of view, it still has two choices. It can choose to promote the new energy vehicle, whose utility is Un − Cn. This utility is not necessarily positive and depends on the government's decision: if the government's dominant decision is to maintain the vehicle market, the earning of the vehicle manufacturer is Un − Cn, and if the government does not maintain the market, the earning of the vehicle manufacturer that promotes new energy vehicles is still Un − Cn.
The vehicle manufacturer can also choose to promote traditional vehicles, with a profit of −Co or zero. In the same way, the specific utility is determined by the government's dominant decision. If the government chooses to maintain market order, the vehicle manufacturer faces a certain penalty −Co; if the government does not maintain market order, the vehicle manufacturer that promotes traditional vehicles gains zero, meaning that it does not need to bear the payment used to maintain the market and obtains no additional utility.
As rational economic agents, the vehicle manufacturer and the government must first consider the best decision of the other party when choosing their own best action.
To sum up, we compare the utilities in order to find the optimal solution for new energy vehicle promotion under different circumstances.
3. If Un − Cn is greater than zero and Uo − Ug is less than zero, the equilibrium solution is (not to maintain the market, to promote new energy vehicles).
4. If Un − Cn and Uo − Ug are both greater than zero, the equilibrium solution is (not to maintain the market, to promote new energy vehicles).
5. If Un − Cn is less than −Co and Ug is greater than Uo, the equilibrium solution is (not to maintain the market, to promote traditional vehicles).
6. If Un − Cn is less than −Co and Ug is less than Uo, the equilibrium solution is (to maintain the market, to promote new energy vehicles).
7. If Un − Cn lies between −Co and zero and Ug is greater than Uo, the equilibrium solution is (not to maintain the market, to promote traditional vehicles).
8. If Un − Cn lies between −Co and zero and Ug is less than Uo, the situation is special. When the vehicle manufacturer's decision is to promote new energy vehicles, the government's optimal decision is not to maintain market order; when the vehicle manufacturer's decision is to promote traditional vehicles, the government's best decision is to maintain market order. When the government's decision is to maintain market order, the best decision of the vehicle manufacturer is to promote new energy vehicles; when the government's decision is not to maintain market order, the best decision of the vehicle manufacturer is to promote traditional vehicles. Since this is a static game, there is neither a pure-strategy Nash equilibrium nor an optimal solution.
Conclusion: when the profits of new energy vehicles are not as high as those of traditional vehicles, vehicle manufacturers are more willing to focus their
production on their original products. The government, however, has different demands from the vehicle manufacturers: it hopes that market consumption can adjust and upgrade itself, with both vehicle manufacturers and customers turning to new energy vehicles. This can be realized when the profits of new energy vehicles are better than those of traditional vehicles, but in the current market costs must be cut further to ensure that the profits of new energy vehicles are as good as those of traditional vehicles. Therefore, the government should maintain order in the vehicle market so as to ensure the smooth promotion of new energy vehicles.
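The payoff structure described above can also be checked numerically. The sketch below encodes the government–manufacturer payoffs as stated in the text and enumerates the pure-strategy equilibria; with hypothetical numbers satisfying −Co < Un − Cn < 0, Ug < Uo and Un > Ug − Cg, it returns none, reproducing the cycling of best responses in case 8.

# Government-manufacturer game of case 8: -Co < Un-Cn < 0, Ug < Uo and
# Un > Ug - Cg, so best responses cycle and no pure equilibrium exists.
# All numbers are hypothetical, chosen only to satisfy those inequalities.
Un, Cn, Co, Ug, Cg, Uo = 4.0, 5.0, 2.0, 3.0, 1.0, 5.0

payoffs = {  # (government, manufacturer) strategies -> (government, manufacturer) payoffs
    ("maintain",     "new energy"):  (Ug - Cg,  Un - Cn),
    ("maintain",     "traditional"): (Uo - Ug,  -Co),
    ("not maintain", "new energy"):  (Un,       Un - Cn),
    ("not maintain", "traditional"): (0.0,      0.0),
}

def pure_equilibria():
    """Return every strategy pair from which neither player wants to deviate."""
    gov_opts, man_opts = ("maintain", "not maintain"), ("new energy", "traditional")
    found = []
    for g in gov_opts:
        for m in man_opts:
            g_pay, m_pay = payoffs[(g, m)]
            if all(g_pay >= payoffs[(g2, m)][0] for g2 in gov_opts) and \
               all(m_pay >= payoffs[(g, m2)][1] for m2 in man_opts):
                found.append((g, m))
    return found

if __name__ == "__main__":
    print("pure-strategy equilibria:", pure_equilibria() or "none (case 8)")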
4 Conclusions
As a whole, new energy vehicles are not yet mature enough and have not been widely accepted by consumers. The profit from promoting new energy vehicles may not reach the profit from promoting traditional vehicles. As a rational economic agent, a vehicle manufacturer will not take on the social responsibility of promoting new energy vehicles on its own for the sake of the environment. The market demand for new energy vehicles is unknown, and no one else can bear the loss that manufacturers would face in promoting new energy vehicles.
At this point, government intervention is needed to guide the healthy and orderly development of the new energy vehicle industry. Firstly, macro-control measures should be brought to bear on the development of the new energy vehicle market and policies should be effectively implemented. Secondly, the government needs to guide consumer demand, so that vehicle consumers can accept and purchase new energy vehicles.
First, consumers should pay close attention to favourable policies in the new energy vehicle market and seize the opportunity to enjoy welfare subsidies. At the same time, whether they need convenience, safety or other attributes, they should clarify their own demands for the product itself and communicate effectively with vehicle manufacturers through reasonable channels, so as to ensure that the manufactured vehicles can meet their needs. In addition, consumers should improve their environmental awareness and should not leave these matters to the vehicle manufacturers alone. When both supply and demand turn to new energy vehicles, the market development of new energy vehicles can enter a virtuous cycle and jointly improve the energy consumption structure.
Second, when promoting new energy vehicles, vehicle manufacturers should further reduce production costs and channel management costs. Manufacturers should increase investment in technology, break through the limitation of poor battery endurance so that the driving range of new energy vehicles is greatly improved, and reduce the production cost of new energy vehicles through mass production or cooperation with other manufacturers. Vehicle manufacturers must also seize the opportunities of the "Internet plus" era: production and distribution should be as flat as possible in order to respond quickly, save marketing costs and enhance overall competitiveness. Besides product quality, service cannot be ignored. As a new product, the new energy vehicle meets a certain resistance, so manufacturers need to provide good after-sales service and invest in the construction of supporting facilities in the areas covered by the market, including increasing the provision of charging piles and equipment maintenance, in
order to eliminate consumer concerns. Manufacturers should try to avoid price wars with competitors, which are not helpful to the healthy, long-term development of the new energy vehicle industry. They should establish their own brands with correct positioning and pay close attention to consumers' product demands, so as to meet consumers' needs while upgrading and developing the product.
Third, the government should play different roles at different stages of promoting new energy vehicles. At the beginning, the government should guide consumption through publicity and consumer education in the field of new energy, and introduce vehicle-purchase subsidy policies to reduce consumers' cost of use. The government can also adjust the fuel tax policy to raise the cost of using traditional vehicles. At the same time, through technology subsidy policies, automobile manufacturers should be encouraged to carry out technological innovation, cultivate professional talent, strengthen the protection of intellectual property rights and establish a national technical standard system based on China's national conditions, so as to meet the needs of new energy vehicle production. Meanwhile, the government should maintain the market order of new energy vehicles, resolutely prevent unscrupulous manufacturers from committing tax and subsidy fraud, and try not to let "bad money drive out good money". In the middle period of the development of the new energy vehicle market, the government should focus on the construction of service projects that large manufacturers cannot accomplish by themselves, such as public charging piles and maintenance facilities, and purchase new energy vehicles for the public transport sector. Through these various efforts, the government can assist manufacturers and other forces in promoting new energy vehicles in the market.
Acknowledgement. This research was financially sponsored by the fund of the Six Talent Peaks Project in Jiangsu Province (Grant No. JY-001) and the Project of Philosophy and Social Science Research in Colleges and Universities in Jiangsu Province (Grant No. 2017ZDIXM004). We would like to thank the anonymous referees for their insightful comments and suggestions, which led to a significant improvement and better presentation of the paper.
References
1. Harsanyi, J.C.: Games with incomplete information played by “Bayesian” players, the basic
model. Manag. Sci. 14(3), 159–182 (1967)
2. Scott, J.T.: Firm versus industry variability in R&D intensity. NBER Chapters, pp. 233–248
(1984)
3. Sun, L.W.: Development status and countermeasures of new energy vehicles in China. China
Sci. Technol. Inf. 7(7), 135 (2012)
4. Shen, L.: Introduction strategy of new energy vehicle market. Shanghai Mot. 1(1), 37–40
(2009)
5. Zhang, F., Bao, X.J.: Problems and countermeasures of new energy vehicle market
promotion in China. Price Theory Pract. 5(5), 85–86 (2011)
6. Xu, C.X.: New energy vehicles need a new marketing model. China’s Strat. Emerg. Ind.
14(21), 54–55 (2014)
7. Luo, J.: Research on marketing strategy in new energy vehicle. J. Qiqihar Inst. Eng. 5(2),
52–54 (2011)
8. Jin, Y.H.: Reference to Japan’s marketing strategy in new energy vehicle for China.
Northeast Asia Forum 21(3), 105–112 (2012)
9. Wang, H.X., Miao, X.M.: Game research on R&D subsidies of new energy vehicles in
China. Soft Sci. 27(6), 29–32 (2013)
10. Wang, R., Wang, Z.L.: Evolutionary game analysis of R&D process of new energy vehicle.
J. Daqing Norm. Univ. 36(3), 61–66 (2016)
Survey and Planning of High-Payload
Human-Robot Collaboration: Multi-modal
Communication Based on Sensor Fusion
Gabor Sziebig
1 Introduction
2 Related Research
3 Scenarios
In this section, after a short introduction of the proposed architecture, scenarios will be
described, which will highlight the novelty of sensor-fusion based Human-Robot
Collaboration.
System Overview
The Intelligent Factory Space (IFS) concept represents a framework for interaction
between a human and an automated system (e.g. industrial robot, mobile robot or CNC
machine). The IFS is built up of multiple layers, which represent services for the humans in a modular way.
The IFS concept was previously developed by the author [16]; only a very short overview is given here, in order to provide an understanding of the following scenarios and cases.
The IFS architecture is composed of three layers, as illustrated in Fig. 1. The
layers are ordered in a hierarchical manner, mirroring the necessary autonomy and
requirements for the given layer. The combination of these layers forms the Intelligent
Factory Space. The IFS layers offer specific services, in order to increase comfort and
collaboration for the human operators, interacting with the automated machines. To
establish connection between the IFS and humans, a physical interface is proposed,
which can also be seen in Fig. 2. This physical interface is called POLE and placed in
the lowest layer (single element). The available functions and services are also shown
in Fig. 2.
Overall Scenario
The industrial robot is loading/unloading goods from a pallet and places the heavy boxes on a transport conveyor. The operator delivers the pallets using a pallet jack and also removes the emptied pallets. An overview of the scenario is shown in Fig. 3. In the standby case, the operator is outside the theoretical work-zone of the industrial robot and the Pole system projects a green circle around the work-zone of the industrial robot, signalling that everything is fine.
Cases
The operator enters the work-zone of the industrial robot in order to carry out his or her work. As soon as the operator enters the work-zone, the Pole system detects this and notifies the worker that he/she has been recognized by projecting a green circle around the worker, as seen in Fig. 4.
The closer the worker gets to the industrial robot, the more the circle's colour turns toward red, warning the operator that the situation is not comfortable for either the human or the industrial robot, see Fig. 5. The factory management cloud learns the standard behaviour of a worker for the typical execution of the tasks carried out in the safety-critical zones. If there is a deviation, either on the worker's side or on the robot's side, the Pole system can warn the operator and also take countermeasures on the robot's task execution in order to prevent any unwanted event. The Pole system adapts to the natural execution of the task and limits or modifies the industrial robot's path for maximum safety. As the Pole system is designed to provide two-way communication and behaviour learning, it can also detect whether a worker is becoming tired and can even adapt to the capabilities and mood of the given worker who interacts with the industrial robot.
If any additional equipment was used to carry out the task and for some reason remains in the work-zone of the industrial robot, the equipment is highlighted in a similar way to a human being and the operator is warned about the situation.
Fig. 5. Operator in close proximity to the industrial robot; the red circle signals unnecessary proximity to the industrial robot
4 Discussion
5 Conclusion
In this paper, Human-Robot Collaboration scenarios have been introduced. The scenarios are based on the Intelligent Factory Space concept and describe the use of the IFS. It can be stated that such scenarios are not possible under today's safety standards, especially when we would like to use high-payload industrial robots rather than so-called collaborative robots. Today's systems are "stupid"-proof and need changing; with the scenarios
detailed here, this could be a first step toward fair responsibility sharing between the human co-worker and the industrial robot.
Acknowledgements. The work reported in this paper was supported by the centre for research
based innovation SFI Manufacturing in Norway, and is partially funded by the Research Council
of Norway under contract number 237900.
References
1. Youssefi, S., Denei, S., Mastrogiovanni, F.: A real-time data acquisition and processing
framework for large-scale robot skin. Robot. Auton. Syst. 68, 86–103 (2015)
2. IFF: Tactile sensor system, 15 February 2017. http://www.iff.fraunhofer.de/content/dam/iff/
en/documents/publications/tactile-sensor-systems-fraunhofer-iff.pdf
3. Haddadin, S.: Injury evaluation of human-robot impacts. In: IEEE International Conference
on Robotics and Automation ICRA 2008 (2008)
4. https://www.pilz.com/en-INT/eshop/00106002207042/SafetyEYE-Safe-camera-system
5. Szabo, S., Shackleford, W., Norcross, R., Marvel, J.: A testbed for evaluation of speed and
separation monitoring in a human robot collaborative environment. NIST
Interagency/Internal Report (NISTIR) – 7851 (2012)
6. Saenz, J., Vogel, C., Penzlin, F., Elkmann, N.: Safeguarding collaborative mobile
manipulators - evaluation of the VALERI workspace monitoring system. Procedia Manuf.
11, 47–54 (2017)
7. Baranyi, P., Solvang, B., Hashimoto, H., Korondi, P.: 3D Internet for cognitive info-
communication. In: 10th International Symposium of Hungarian Researchers on Compu-
tational Intelligence and Informatics, CINTI 2009, pp. 229–243 (2009)
8. Gleeson, B., MacLean, K., Haddadi, A., Croft, E., Alcazar, J.: Gestures for industry Intuitive
human-robot communication from human observation. In: 8th ACM/IEEE International
Conference on Human-Robot Interaction (HRI), Tokyo, pp. 349–356 (2013)
9. Liu, H., Wang, L.: Gesture recognition for human-robot collaboration: a review. Int. J. Ind.
Ergon. 68, 355–367 (2017)
10. Cao Z., Simon T., Wei S.E., Sheikh, Y.: Realtime multi-person 2D pose estimation using
part affinity fields. In: CVPR (2017)
11. Simon T., Joo H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using
multiview bootstrapping. In: CVPR (2017)
12. Vincze, D., Kovács, S., Gácsi, M., Korondi, P., Miklósi, A., Baranyi, P.: A novel application
of the 3D VirCA environment: modeling a standard ethological test of dog-human
interactions. Acta Polytech. Hung. 9(1), 107–120 (2012)
13. Herath, S., Harandi, M., Porikli, F.: Going deeper into action recognition: a survey. Image
Vis. Comput. 60, 4–21 (2017)
14. Baranyi, P., Nagy, I., Korondi, B., Hashimoto, H.: General guiding model for mobile robots
and its complexity reduced neuro-fuzzy approximation. In: Ninth IEEE International
Conference on Fuzzy Systems. FUZZ- IEEE 2000 (Cat. No. 00CH37063), San Antonio, TX,
USA, vol. 2, pp. 1029–1032 (2000)
15. Shu, B., Sziebig, G., Pieskä, S.: Human-robot collaboration: task sharing through virtual
reality. In: IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics
Society, Washington, DC, pp. 6040–6044 (2018)
16. Reimann, J., Sziebig, G.: The intelligent factory space – a concept for observing, learning
and communicating in the digitalized factory. IEEE Access 7, 70891–70900 (2019)
Research on Data Encapsulation Model
for Memory Management
1 Introduction
One of the important links in the modern industrial manufacturing process is detection.
The detection of industrial products evaluates the product itself, and the results can be
used as the basis for improving the manufacturing process. The present problem, however,
is that the host computer of a real-time detection system must keep a permanent connection
with the lower computer in order to obtain its state and execute tasks such as database
reading and writing, so it is impossible to analyze the detection data of the lower computer
in time. Peng and Zhang each developed a real-time detection system, but the data of the
individual sensors are not encapsulated in a dedicated model [1, 2]. Yang proposed a dynamic
scratch-pad memory management scheme with data pipelining, which can effectively improve
the performance of embedded systems [3]. Stilkerich, on the other hand, proposed a cooperative
memory management method, but it is not suitable for the general industrial production
environment [4]. Sun developed a fault detection device based on a neural network model
whose detection function classifies well [5].
Detection data is generally stored in Flash memory or RAM [6]. Flash memory can
be erased and programmed without removing the memory chip, which makes it a common
way of storing data, but the number of erase cycles allowed per memory unit is limited;
NOR Flash can typically be erased about 100,000 times. If the Flash becomes abnormal,
the written data will be corrupted. However, data in a real-time detection system is
refreshed frequently. To avoid rapid wear of the Flash, one method is to reduce the refresh
frequency of each storage sector, thereby reducing the number of writes per unit and
improving the utilization of the Flash memory capacity [7]. Another method is to store the
data in RAM when the storage requirement is small and there is no need for power-off
retention. RAM is divided into dynamic RAM (DRAM) and static RAM (SRAM). DRAM
stores data in capacitors and must be refreshed periodically, whereas SRAM is a kind of
memory with static access. This paper introduces a memory management method that
stores the data in SRAM.
To solve this problem, this paper proposes a data storage model for memory
management, which realizes the dynamic storage of data. The rest of this paper is
organized as follows: Sect. 2 illustrates the data encapsulation model, the memory
management method is presented in Sect. 3, and the experiment and conclusions are
given in Sects. 4 and 5.
2 Data Encapsulation Model
The detection data is required to be well packed. For an embedded real-time detection
system based on the CAN-bus, the time and frequency of each sensor are variable and the
data is stored in the lower computer in advance. When the data of a certain function is
to be used, the master controller of the lower computer sends an instruction to the slave
controller through the CAN-bus, and the data packets of each sensor are retrieved from the
slave controller according to the function and function attribute codes. Therefore, each
packet contains the code of this function, the frequency and time of data acquisition, and so on.
The data encapsulation model for each detection packet is built as follows; the form of
the encapsulation is shown in Table 1. The data start mark is 0xF0, followed by the function
and attribute codes. Bytes 4–6 are reserved. Byte 7 holds the enable flags of the detection
sensors: if a bit is 1, the value of the corresponding sensor is stored, otherwise it is not.
The acquisition time and the sampling frequency are stored in bytes 8 and 9, respectively.
The sensor values are then stored, and the end mark is 0xF1.
The number of bytes of data stored for a function is

N_d = A \, n_s \, f \, t + N \qquad (1)

where n_s is the number of sensors, f the sampling frequency, t the acquisition time, A the
number of bytes per sensor value (usually two bytes, i.e. A = 2), and N a constant (the fixed
length of the function code, attribute and so on).
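To make the encapsulation concrete, the following Python sketch builds one packet with the field layout described above and checks its length against Eq. (1). The one-byte widths of the function and attribute fields and the resulting fixed overhead N = 10 are illustrative assumptions, not values taken from the paper.

def build_packet(func_code, attr_code, sensor_mask, t, f, samples, bytes_per_value=2):
    # Start mark, function code, attribute code (assumed one byte each)
    packet = bytearray([0xF0, func_code, attr_code])
    packet += bytes(3)                        # bytes 4-6: reserved
    packet += bytes([sensor_mask, t, f])      # byte 7: sensor flags, byte 8: time, byte 9: frequency
    for value in samples:                     # sensor values, A bytes each
        packet += value.to_bytes(bytes_per_value, "big")
    packet.append(0xF1)                       # end mark
    return packet

# Eq. (1): Nd = A*ns*f*t + N; with this layout the fixed overhead N is 10 bytes.
A, ns, f, t = 2, 2, 5, 3
samples = list(range(ns * f * t))             # one value per sensor per sample (dummy data)
pkt = build_packet(0x02, 0x04, 0b00000011, t, f, samples, A)
print(len(pkt), A * ns * f * t + 10)          # both print 70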
The storage model is given in Fig. 1. The memory is divided into fixed-size blocks
(such as 10 KB), according to the size of the packet. The detection data for each
function takes up a block. The codes of the function and function attribute are stored in
the fixed storage location. Each time a packet is retrieved, it is only required to add
10 KB to the previous position. In order to improve memory utilization, the storage
block size should be similar to the maximum packet size.
Fig. 1. Data storage model: each function occupies one fixed-size block (e.g. 10 KB) framed by the start mark F0 and the end mark F1, with the rest of the block unused.
In addition to the automatic detection mode, the detection system also includes a
manual detection mode. When a function is detected manually, the same function code may
store data again. In this fixed-size block model, the subsequent data is not affected even
though the newly stored data has a different length.
3 Memory Management
The detection data is stored in SRAM, a kind of memory with static access that retains
data without a refresh circuit. Memory management is mainly used to manage the allocation
of memory resources while the MCU is running, achieving rapid allocation and reclaiming
memory in due course.
The memory management consists of two parts: a memory pool and a memory management
table. The memory pool is divided into n blocks, so the memory management table also has
n entries. When the allocation function is called, the number of required memory blocks m
is first calculated from the requested memory size. If m consecutive blocks are unoccupied
(i.e. their values in the memory management table are 0), the corresponding entries of the
memory management table are set and the segment is marked as occupied (Fig. 2).
Fig. 2. Memory pool (Block 1, Block 2, …, Block n) and the corresponding memory management table (No. 1, No. 2, …, No. n).
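A minimal first-fit sketch of this scheme in Python is given below. The block size, pool size and the convention of writing the run length m into every occupied table entry are illustrative assumptions, not the authors' firmware.

BLOCK_SIZE = 32                 # bytes per block (illustrative)
N_BLOCKS = 64                   # the pool is divided into n blocks
pool = bytearray(BLOCK_SIZE * N_BLOCKS)
table = [0] * N_BLOCKS          # memory management table: 0 = free, m = part of an m-block segment

def mem_alloc(size):
    m = -(-size // BLOCK_SIZE)              # required number of blocks (ceiling division)
    run = 0
    for i in range(N_BLOCKS):
        run = run + 1 if table[i] == 0 else 0
        if run == m:                        # found m consecutive free blocks
            start = i - m + 1
            for j in range(start, i + 1):
                table[j] = m                # mark the segment as occupied
            return start * BLOCK_SIZE       # first address of the allocated memory
    return None                             # not enough contiguous memory

def mem_free(addr):
    start = addr // BLOCK_SIZE
    m = table[start]
    for j in range(start, start + m):
        table[j] = 0                        # release the segment

addr = mem_alloc(100)                       # needs ceil(100/32) = 4 blocks
print(addr, table[:6])                      # 0 [4, 4, 4, 4, 0, 0]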
The memory allocation method is shown in Fig. 3. When collecting and storing the data of a
function, the lower computer first searches the function list according to the function and
attribute codes. If the same function already exists, the new data overwrites it; otherwise a
new column is added to the array. The required memory space is then calculated. After a
successful allocation, the first address of the allocated memory is stored in the corresponding
column of the function address array, and the function code and related information are
stored in the memory that follows.
Fig. 3. Memory allocation based on the function list (Function_list) and the function address array (Function_addr).
Based on the storage model and the allocation method, the program flow of data storage
and reading in the real-time detection system is shown in Fig. 4. After the function list and
the data address table are initialized, data storage or data reading is selected according
to the instruction received over the CAN-bus.
Fig. 4. Program flow of data storage and reading: match the instruction, read the instruction or data packet, calculate the required data space, and delay according to the sampling frequency.
When data is stored, the instruction contains information such as the function and attribute
codes, sampling frequency, time, sensor number, etc. Retrieving the function list, the
lower computer navigates to the first unused array line and calculates the required
storage space. The memory management table is then traversed to allocate the required
memory space, and the function and attribute codes are stored in the function
array. The information related to the function is stored starting from the allocated memory
header address. According to the acquisition frequency, the data is collected and fil-
tered, and the filtered data is packaged and stored based on the data encapsulation
model. When data is read, the instruction contains only the function and attribute
codes to be read. The lower computer searches the function list until the
function and attribute codes match the instruction. It then reads the description
information in the packet and calculates the packet size. The packet is divided into 8-byte
CAN data frames, which are loaded into the CAN mailbox in turn. Finally, the read state is fed
back.
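The read path can be illustrated with a short Python sketch that splits a stored packet into 8-byte CAN data frames; the frame size follows the text, while the packet bytes here are arbitrary test data.

def to_can_frames(packet, frame_size=8):
    # Divide the packet into consecutive 8-byte CAN data frames
    return [packet[i:i + frame_size] for i in range(0, len(packet), frame_size)]

for frame in to_can_frames(bytes(range(20))):
    print(frame.hex())        # in the real system each frame is loaded into the CAN mailbox in turn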
4 Application Example
The data encapsulation model for memory management has been successfully applied
to the function detection equipment of a massage chair, as shown in Fig. 5. By sensing
the pressure applied by the massage chair to the detection dummy, the pressure signal is
collected. The performance of massage chair is judged by the amplitude and frequency
of the signal.
In this device, the original massage chair feedback data received by the main
control board is stored in the array. Figure 6(a) shows an abnormal frame that lacks an
end mark. In addition, there are other abnormal frames (not listed one by one). An
algorithm for filtering invalid frames is proposed to solve this problem. The effective
feedback frame is shown in Fig. 6(b). It can be seen that the byte length and value of
the stored data are normal, and the number of bytes from F0 to F1 is the same as the
number of bytes in the packet. The utilization of memory is consistent with the actual
situation. The data storage is more reliable and the retrieval is more convenient.
5 Conclusions
In this paper, the storage method for detection data in a real-time detection system is
studied. Reliable storage and retrieval of the data are realized by the encapsulation
model. The data packet is stored in SRAM: on the one hand, this solves the storage-life
problem; on the other hand, because of the small memory, it is only suitable for small and
medium-sized detection systems. The memory management makes it convenient to store and
read data, but it also wastes some resources, which needs further study. The experimental
result supports the correctness and practicability of the data storage model. In a word, the
data encapsulation model for memory management can be widely used in industrial
measurement and control and other fields.
References
1. Peng, D., Zhang, H., Weng, J., Li, H., Xia, F.: Research and design of embedded data
acquisition and monitoring system based on PowerPC and CAN bus. In: Proceedings of the
8th World Congress on Intelligent Control and Automation, pp. 4147–4151. IEEE (2010)
2. Zhang, X., Zhang, J.: Design of embedded monitoring system for large-scale grain granary.
In: 11th International Symposium on Computational Intelligence and Design, pp. 145–148.
IEEE (2018)
3. Yang, Y.: Dynamic scratch-pad memory management with data pipelining for embedded
systems. In: International Conference on Computational Science and Engineering, pp. 358–
365. IEEE (2009)
4. Stilkerich, I., Taffner, P., Erhardt, C.: Team up: cooperative memory management in
embedded systems. In: International Conference on Compilers, Architecture and Synthesis for
Embedded Systems. IEEE (2014)
5. Sun, L., Wang, D.: The development of fault detection system based on LabVIEW. In: 5th
International Conference on Electrical and Electronics Engineering, pp. 157–161. IEEE
(2018)
6. Zhang, H., Kang, W.: Design of the data acquisition system based on STM32. Procedia
Comput. Sci. 17, 222–228 (2013)
7. Wei, P., Yue, L., Liu, Z., Xiang, X.: Flash memory management based on predicted data
expiry-time in embedded real-time systems. In: ACM 2008 Symposium on Applied
Computing, pp. 1477–1481 (2008)
Research on Task Scheduling Design
of Multi-task System in Massage Chair
Function Detection
Abstract. The µC/OS-II system has the advantages of interrupt service, nesting
support, multi-tasking, etc., and has been successfully applied to the massage
chair function detection device. This paper proposes a task scheduling design for
massage chair function detection based on the µC/OS-II real-time kernel, making
rational use of synchronization and communication between tasks according to
the task scheduling principle. It can transfer data and realize massage chair
function detection control. Semaphores are used to achieve synchronization
between related user tasks, while the message mailbox mechanism and global
variables provided by µC/OS-II are used to realize communication between
related user tasks. ECANTOOLS is used to send and receive data and to read the
related feedback reports in order to verify whether the designed massage chair
function detection task scheduling logic meets the requirements.
1 Introduction
In recent years, with the steady development of the economy, massage chairs, as a new
kind of health care product and daily necessity, have become more and more popular in
the market. Therefore, the efficient production and testing of massage chairs are particularly
important.
Hiyamizu et al. [1] proposed a massage chair function detection technology based
on a human sensory sensor, but it involves a certain degree of uncertainty. Nowadays, the
detection of the massage chair is carried out with a liftable humanoid inspection tool.
Zoican et al. [2] addressed the application of task scheduling in embedded systems, Song [3]
presented the design and implementation of a µC/OS-II based real-time operating system,
and Quammen et al. [4] applied a multitasking system to robotics; the three are closely
related. Task scheduling is an important part of an operating system, and for real-time
operating systems it directly affects real-time performance.
The massage chair function detection is based on the µC/OS system framework. The
execution of the application depends on the scheduling of the user tasks by the µC/OS-II
real-time kernel, and the task scheduling strategy is completely controlled by the
application. If the application needs to perform task A at the next moment, task A must
be made the highest-priority task in the ready list.
Each user task of the application is independent, but in order to complete a certain job,
multiple user tasks must maintain certain relationships with each other and form a whole.
In the massage chair detection, synchronization and communication between tasks are
required.
The completion of one task may require the execution result of another task; this kind
of constrained cooperation between tasks is called synchronization. The semaphore
mechanism provided by µC/OS-II can be used to synchronize tasks: as shown in Fig. 1,
semaphores are used to synchronize an ISR with a task, or one task with another.
Fig. 1. Task synchronization with semaphores: OSSemPost is called by an ISR or a task, and OSSemPend by the waiting task.
As shown in Fig. 1, a "key" flag represents the semaphore and indicates the occurrence of
an event. Semaphores used to synchronize tasks need to be initialized to 0; they do not
express a mutual-exclusion relationship.
Fig. 2. Task communication with message mailboxes: OSMboxPost is called by an ISR or a task, and OSMboxPend by the waiting task.
In the function detection of the massage chair, the message mailbox mechanism
provided by µC/OS-II and global variables are used to realize the communication between
related user tasks. Figure 3 shows the scheduling design of the massage chair function
detection logic control based on the synchronization and communication between tasks.
(The figure shows the numbered interactions (1)–(16) among the user tasks Task_Detect_Process, Task_UART_DataVer and Task_CAN1_fb_Handing and the ISRs.)
Fig. 3. Massage chair function detection control task scheduling logic based on the synchro-
nization and communication between tasks
(1) The user task Task_Detect_Process gets the CPU usage right and is in the
running state.
(2) The user task Task_Detect_Process uses the UART send function to send a
frame control instruction to the object under test, enables the UART receive
interrupt, and waits on the mailbox Mbox_UART_fb with a defined timeout.
(3) The user task Task_Detect_Process is suspended because the mailbox is empty
and enters the blocked state. At this point, a UART receive interrupt is triggered,
causing the CPU to enter the ISR.
(4) The UART interrupt service stores the received data byte by byte and counts it.
(5) After the CPU completes the ISR, steps (4) and (5) are repeated until the number of
received data bytes reaches the limit. At this time, the UART receive interrupt
is closed, and the first address of the array *UART_Rev_Data in which the
UART data is stored is placed in the mailbox Mbox_UART_Rev.
(6) The user task Task_UART_DataVer, which has been waiting for the mailbox
Mbox_UART_Rev event, enters the ready state and, because it has the highest
priority in the ready list, immediately enters the running state.
(7) The user task Task_UART_DataVer performs a double redundancy check on
the data pointed to by the pointer to obtain one frame of valid UART
feedback information, and puts a pointer to this information into the mailbox
Mbox_UART_fb.
(8) The user task Task_Detect_Process, which has been waiting on the message mailbox,
immediately enters the ready state; since it has the highest priority among the
ready tasks, it obtains the CPU usage right.
(9) The user task Task_Detect_Process identifies and compares the UART feedback
data stored at the address pointed to by the pointer.
(10) If the UART feedback data is correct, the user task Task_Detect_Process uses
the CAN1 send function to send a function execution instruction to the
lower-level module corresponding to the function.
(11) When the function execution instruction has been completed, the execution result
is fed back, the CAN1 receive interrupt is triggered, and the CPU enters the ISR of
CAN1.
(12) The CPU executes the ISR of CAN1, receives the CAN message, and places a
pointer to the message into the message mailbox Mbox_CAN1_Rev.
(13) The user task Task_CAN1_fb_Handing, which has been waiting for the mailbox
event, enters the ready state; as the ready task with the highest priority it enters
the running state.
(14) The user task Task_CAN1_fb_Handing processes the CAN1 feedback and
marks the result. Regardless of the outcome, the global variable is assigned.
(15) The user task Task_Detect_Process receives the global variable, ends its wait,
and enters the ready state. Because its priority is the highest in the ready list, it
enters the running state.
(16) The user task Task_Detect_Process continues to detect other functions.
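The pend/post pattern in steps (2)–(9) can be mimicked, purely as an analogy, with Python queues standing in for the µC/OS-II mailboxes. This is not the µC/OS-II API, and the frame contents and timeout are invented for illustration only.

import queue, threading

mbox_uart_rev = queue.Queue(maxsize=1)       # stands in for Mbox_UART_Rev
mbox_uart_fb = queue.Queue(maxsize=1)        # stands in for Mbox_UART_fb

def task_uart_dataver():
    raw = mbox_uart_rev.get()                # pend on the received-data mailbox (step 6)
    verified = raw                           # placeholder for the double redundancy check (step 7)
    mbox_uart_fb.put(verified)               # post the verified frame to the feedback mailbox

def task_detect_process():
    threading.Thread(target=task_uart_dataver).start()
    mbox_uart_rev.put(bytes([0xF0, 0x01, 0x02, 0xF1]))   # in the real system the UART ISR posts this (step 5)
    try:
        frame = mbox_uart_fb.get(timeout=1.0)            # pend with a timeout, as in step (2)
        print("feedback frame:", frame.hex())
    except queue.Empty:
        print("wait timed out")

task_detect_process()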
The massage chair massages the corresponding airbags on the liftable humanoid
inspection tool, and the airbags feed back a massage function report. By controlling the
humanoid inspection tool and reading the feedback reports, we can detect the functions of
the massage chair one by one. In theory, this task scheduling logic can meet the
requirements.
Based on the real-time event channel on the CAN bus proposed by Kaiser et al. [6], we
perform a short-frame test of the bus communication and send a MAC address request to
each node module by means of the CAN analyzer. If the receiver feeds back its MAC
address, the node's short-frame communication on the bus is normal.
As shown in Fig. 4, a MAC address request is sent to the 0#–8# node modules, and
the first byte of each feedback frame is the MAC address of the node. After testing, the
contents of the eight feedback frames are correct. After that, the ECANTOOLS
detection tool sends the function execution instruction to the corresponding lower
computer module according to the CAN protocol. For example, here we control the
arms of the liftable humanoid inspection tool and the arm massage function of the massage
chair. As shown in Fig. 5, the command is sent, the corresponding feedback is obtained and
is correct, and the arms retract as shown in Fig. 6. Moreover, the massage function
feedback reports of the arms are as shown in Table 1.
It is known from the above that the task scheduling design of this detection system
for massage chair function detection meets the requirements.
5 Conclusions
In this paper, the task scheduling design of the detection system for massage chair
function detection is studied. The task scheduling logic of the massage chair function
detection control is designed to guarantee, as far as possible, the real-time performance of
the system under a controllable process and to make massage chair detection more
efficient. This scheme can be widely used in industrial testing and other fields.
References
1. Hiyamizu, K., Fujiwara, Y., Genno, H., et al.: Development of human sensory sensor and
application to massaging chairs. In: Proceedings of 2003 IEEE International Symposium on
Computational Intelligence in Robotics and Automation. Computational Intelligence in
Robotics and Automation for the New Millennium (Cat. No. 03EX694), Kobe, Japan, vol. 1,
pp. 140–144 (2003)
2. Zoican, S., Zoican, R., Galatchi, D.: Improved load balancing and scheduling performance in
embedded systems with task migration. In: International Conference on Telecommunication
in Modern Satellite, Cable and Broadcasting Services, pp. 354–357. IEEE (2015)
3. Song, X., Chen, L.: The design and realization of vehicle real-time operating system based on
UC/OS-II. In: 6th International Conference on Networked Computing, Gyeongju, Korea
(South), pp. 1–4 (2010)
4. Quammen, D.J., Kountouris, V.G., Stephanou, H.E., Tabak, D.: Multitasking system for
robotics source. In: Proceedings of the 1989 American Control Conference, 21–23 June 1989,
pp. 2743–2748 (1989)
5. Labrosse, J.J.: Embedded Real-Time Operating System µC/OS-II. Beijing Aerospace
University Press, Beijing (2003)
6. Kaiser, J., Brudna, C., Mitidieri, C.: Implementing real-time event channels on CAN-bus. In:
Proceedings of IEEE International Workshop on Factory Communication Systems, Vienna,
pp. 247–256 (2004)
A Stochastic Closed-Loop Supply Chain
Network Optimization Problem Considering
Flexible Network Capacity
1 Introduction
In today’s global market, the competition is not only between different individual
enterprises but also largely between different supply chains. The effectiveness and
efficiency in handling material flow, information flow and capital flow within a supply
chain will determine the profitability and success of a company. Supply Chain Man-
agement (SCM) aims, through decision-making at both the strategic and the operational
level, at properly managing the different players and flows within a supply chain in order to
maximize the total supply chain surplus or profit [1].
Network design is one of the most essential strategic decisions in SCM, which
formulates the configuration of a supply chain through facility selection and determines
the operational strategies. Traditionally, the design of a supply chain only focuses on
the forward direction from the raw material supplier towards the end customer. However, due
to society-wide concern about environmental challenges, global warming and climate change,
increasing attention has been paid to the value and resource recovery
through reverse logistics activities [2, 3]. Closed-loop supply chain (CLSC) is a new
concept and practice, which combines both traditional forward supply chain and
reverse logistics in order to simultaneously maximize the utilization of resources and
minimize the generation of wastes. Compared with the traditional supply chain network
design, the planning of a CLSC is more complicated due to the involvement of more
players. Furthermore, reverse logistics involves more uncertainties compared to the
forward supply chain [3], and this needs to be appropriately treated in a CLSC network
design problem.
Due to the aforementioned complexity of the CLSC network design problem,
significant efforts have been given in order to develop advanced optimization models
and algorithm for a better decision-making. Yi et al. [4] developed a mixed integer
linear program for minimizing the total cost of a retailer oriented CLSC for the
recovery of construction machinery. The model was solved by an enhanced genetic
algorithm and was validated through a case study in China. Özceylan et al. [5] pro-
posed a linear program for maximizing the total profit generation of an automotive
CLSC. The model was solved by CPLEX solver and was validated by a real world case
study in Turkey. Taking into account of the recovery options, Amin et al. [6] inves-
tigated a tire manufacturing CLSC network optimization problem, which was validated
by a real world case study in Canada.
In addition to the economic incentives from incorporating reverse logistics activi-
ties, some research works considered the overall environmental performance of a
CLSC. Hasanov et al. [7] formulated a mathematical model for the optimization of a
CLSC network design problem considering remanufacturing options. The model aims
at minimizing the total cost and emission cost of greenhouse gas (GHG) through
optimal decision-making on both production planning and inventory management.
Taleizadeh et al. [8] investigated a bi-objective optimization model for CLSC network
design considering the balance between total cost and total CO2 emission. A fuzzy
Torabi-Hassini (TH) method was used to solve the multi-objective optimization
problem.
Due to the complexity of the proposed mathematical models, significant compu-
tational efforts may be required to solve the optimization problems of CLSC network
design. Therefore, several research works have been conducted to develop improved
algorithms. Soleimani and Kannan [9] developed a hybrid genetic algorithm (GA) and
particle swarm optimization (PSO) for improving the computational efficiency of a
multi-period and multi-level CLSC network optimization model. Chen et al. [10]
investigated a location-allocation problem for the CLSC network design of cartridge
recycling, which was solved by an enhanced two-stage GA. Hajipour et al. [11] for-
mulated a non-linear mixed integer program for maximizing the profit generation in
CLSC network design. Two metaheuristics: PSO and greedy randomized adaptive
search procedure (GRASP) were employed to solve the proposed mathematical model.
The treatment of uncertainty within the life cycle of a CLSC is another focus of the
recent modeling efforts. Zhen et al. [12] proposed a two-stage stochastic optimization
model for optimizing the decision-making of facility location and capacity allocation in
a CLSC, and an enhanced Tabu search algorithm was developed to solve the model.
Jeihoonian et al. [13] formulated a two-stage stochastic model for CLSC network
design considering uncertain quality. Mohammed et al. [14] proposed a stochastic
optimization model for minimizing the total cost of CLSC network design. The model
was further incorporated with different carbon policies in order to test their effective-
ness in carbon reduction.
In this paper, we develop a new two-stage stochastic mixed integer program for
CLSC network optimization. Compared with the existing models, the main difference
is that capacity flexibility is taken into account in order to improve the stability and
consistency of the objective values under different scenarios. In addition, the sample
average approximation (SAA) method is used to test the performance of the proposed
mathematical model.
2 Mathematical Model
In this paper, the flexible network capacity is taken into account and is formulated
in the mathematical model. The capacity limitation in a traditional facility location
model may lead to unstable objective values and sub-optimal decisions under a
stochastic environment [3, 15]. For example, because of the rigid capacity constraint,
one more facility may be opened to deal with a small increase in the customer demand in
some scenarios, which results in unreasonable decisions and inefficient use of the opened
capacity. For this reason, the flexible network capacity is formulated as a penalty in the
objective function in order to resolve this problem and generate reasonable decisions. In
practice, the inclusion of flexible network capacity is a more realistic representation of the
decision-making problem of CLSC network design, as it allows different interpretations
under different conditions, e.g., an increase of facility capacity, outsourcing options, the
hiring of temporary or seasonal workers, or even lost sales. In addition, the uncertainty
related to the customer demand in the forward supply chain and to the rate of waste
generation and the quality level in the reverse logistics is taken into consideration and
formulated through stochastic parameters.
\min \ \text{cost} = \Big( \sum_{m=1}^{M} F_m i_m + \sum_{w=1}^{W} F_w i_w + \sum_{c=1}^{C} F_c i_c + \sum_{r=1}^{R} F_r i_r \Big)
  + \sum_{s=1}^{S} U_s \Big[ \sum_{m=1}^{M} P_m \big( Ap_{sm} + \sum_{r=1}^{R} As_{rm} \big) + \sum_{w=1}^{W} P_w \sum_{m=1}^{M} As_{mw}
  + \sum_{c=1}^{C} P_c \sum_{v=1}^{V} As_{vc} + \sum_{r=1}^{R} P_r \sum_{c=1}^{C} As_{cr}
  + \sum_{m=1}^{M} \sum_{w=1}^{W} C_{mw} As_{mw} + \sum_{w=1}^{W} \sum_{v=1}^{V} C_{wv} As_{wv} + \sum_{v=1}^{V} \sum_{c=1}^{C} C_{vc} As_{vc}
  + \sum_{c=1}^{C} \sum_{r=1}^{R} C_{cr} As_{cr} + \sum_{c=1}^{C} \sum_{d=1}^{D} C_{cd} As_{cd} + \sum_{r=1}^{R} \sum_{m=1}^{M} C_{rm} As_{rm}
  + \sum_{m=1}^{M} Pu_m Ap_{sm} + \sum_{v=1}^{V} O_v As_v + \sum_{v=1}^{V} Or_v Ar_{vs}
  + \sum_{d=1}^{D} \sum_{c=1}^{C} P_d As_{cd} \Big] \qquad (1)
Subject to:
D_{sv} \le \sum_{w=1}^{W} As_{wv} + As_v, \quad \forall s, v \qquad (2)

\theta_s D_{sv} \le \sum_{c=1}^{C} As_{vc} + Ar_{vs}, \quad \forall s, v \qquad (3)

Ap_{sm} + \sum_{r=1}^{R} As_{rm} = \alpha \sum_{w=1}^{W} As_{mw}, \quad \forall s, m \qquad (4)

\sum_{m=1}^{M} As_{mw} = \sum_{v=1}^{V} As_{wv}, \quad \forall s, w \qquad (5)

\sum_{v=1}^{V} As_{vc} = \sum_{r=1}^{R} As_{cr} + \sum_{d=1}^{D} As_{cd}, \quad \forall s, c \qquad (6)

\rho_s \beta \sum_{v=1}^{V} As_{vc} = \sum_{r=1}^{R} As_{cr}, \quad \forall s, c \qquad (7)

\gamma \sum_{c=1}^{C} As_{cr} = \sum_{m=1}^{M} As_{rm}, \quad \forall s, r \qquad (8)

Ap_{sm} + \sum_{r=1}^{R} As_{rm} \le Cap_m i_m, \quad \forall s, m \qquad (9)

\sum_{m=1}^{M} As_{mw} \le Cap_w i_w, \quad \forall s, w \qquad (10)

\sum_{v=1}^{V} As_{vc} \le Cap_c i_c, \quad \forall s, c \qquad (11)

\sum_{c=1}^{C} As_{cr} \le Cap_r i_r, \quad \forall s, r \qquad (12)

\sum_{c=1}^{C} As_{cd} \le Cap_d, \quad \forall s, d \qquad (13)

\sum_{v=1}^{V} As_v \le Uo, \quad \forall s \qquad (14)

\sum_{v=1}^{V} Ar_{vs} \le Uro, \quad \forall s \qquad (15)
Objective function (1) minimizes the total cost that is comprised of fixed facility
cost, processing cost, transportation cost, purchasing cost, flexible network capacity
cost and disposal cost. Besides, the model includes 14 constraints. Constraints (2) and
(3) require that the CLSC system be capable of dealing with the customer demands in
both the forward and the reverse direction. Constraints (4) and (5) specify the relationship
between the input and output amount in the forward channels. Constraints (6)–(8)
balance the material flows in the reverse logistics. Constraints (9)–(13) are capacity
requirements of respective facilities. Constraints (14) and (15) give the upper limits of
the flexible network capacity in both forward and reverse logistics. Besides, the
decision variables fulfill their respective binary and non-negative requirements.
3 Algorithm
In a two-stage stochastic program, the first-stage decisions are fixed before the uncertain
parameters are realized, while the second-stage decisions can adapt to those changes and can
be easily altered in order to maximize the system performance. Solving a stochastic
programming model is a complex optimization problem that may require large computational
efforts. In this paper, a sample average approximation (SAA) method is employed in order to
obtain the optimal objective value of a large stochastic optimization problem with a great
number of scenarios.
\min_{x, y \in \Theta} \ f(x, y) := C^T x + \mathbb{E}_P \big[ U(x, \xi(y)) \big] \qquad (16)

\min_{x, y \in \Theta} \ \tilde{f}_Q(x, y) := C^T x + \frac{1}{Q} \sum_{q=1}^{Q} U\big(x, \xi(y^q)\big) \qquad (17)
With the SAA, the optimal objective value is approximated by repeatedly solving a set of
randomly generated smaller problems instead of solving the original problem directly, as
shown in Eq. (17). In this way, the required computational effort remains manageable.
Figure 2 illustrates the algorithmic procedure of the SAA method; for more details of the
solution method, the reader is referred to the works of Verweij et al. [16] and
Kleywegt et al. [17].
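The outer loop of the SAA procedure can be sketched as follows in Python. The names sample_scenarios and solve_sample are placeholders for the scenario generator and for the deterministic-equivalent solve (done with Lingo in this paper), so only the sampling-and-averaging structure is shown.

import numpy as np

def saa_estimate(solve_sample, sample_scenarios, Q=30, M=10, seed=0):
    # Solve M independent problems, each built from Q randomly sampled scenarios,
    # and use the spread of their optimal values to judge solution quality.
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(M):
        scenarios = sample_scenarios(Q, rng)      # draw Q scenarios of the uncertain parameters
        values.append(solve_sample(scenarios))    # optimal value of the Q-scenario problem, Eq. (17)
    values = np.array(values)
    return values.mean(), values.std(ddof=1)      # average optimal value and its variability

Comparing these estimates for increasing Q gives the kind of in-sample stability and optimality-gap information discussed in the computational experiment below.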
In order to illustrate the application of the proposed model for CLSC network opti-
mization, this section presents a computational experiment based on a set of randomly
generated parameters. The stochastic parameters are drawn from uniform distributions over
the respective parameter intervals. Besides, we investigated the performance of three
different sample sizes: 10, 30 and 50 scenarios, respectively. All the optimizations were
performed with the Lingo 18.0 solver. The results are presented in Figs. 3 and 4.
Fig. 3. CV of the total cost, facility operating cost, transportation cost and flexible network
capacity cost.
First, the in-sample stability is tested with the coefficient of variation (CV), obtained as
CV = σ/μ. Figure 3 illustrates the CVs of the total cost as well as of the different cost
components. With the increase of the sample size, the CVs of all relevant cost components
decrease.
Then, the quality of the SAA solutions is tested with the reference example. As
shown in Fig. 4, the optimality gap is reduced significantly with the increase of the sample
size. Compared with 10 scenarios, the optimality gap is decreased by 90.6% when 50
scenarios are used, although in this case the combined standard deviation is increased by
2.51%. Thus, the solution quality of the stochastic optimization problem can be improved
drastically by increasing the sample size. It is noteworthy that the selection of the sample
size is based upon a trade-off between solution quality and the computational effort
required.
5 Conclusions
In this paper, a novel two-stage stochastic mixed integer programming model is formulated
for the network optimization of a single-product multi-echelon CLSC. The model aims at
minimizing the total cost of opening and operating the CLSC through optimal
decision-making on both facility locations and transportation strategies. Compared with
the existing optimization models, this model takes the flexible network capacity into
account and formulates it as a penalty in the objective function. The SAA method is used
to solve the proposed model. The results of the computational experiment show that the
inclusion of the flexible network capacity can significantly improve the in-sample
stability, and that an increase in the sample size improves the solution quality of a large
stochastic optimization problem.
For further improvement of the current research, two suggestions are given. First,
environmental impacts and policies, e.g., different carbon policies or strategies
[3, 14], may be formulated in the CLSC network optimization problem under an
uncertain environment. Second, different alternatives may be tested in order to increase
the network flexibility [18].
Notations
References
1. Chopra, S., Meindl, P.: Supply chain management: strategy, planning, and operation (2016)
2. Yu, H., Solvang, W.D.: A general reverse logistics network design model for product reuse
and recycling with environmental considerations. Int. J. Adv. Manuf. Technol. 87(9–12),
2693–2711 (2016)
3. Yu, H., Solvang, W.D.: A carbon-constrained stochastic optimization model with augmented
multi-criteria scenario-based risk-averse solution for reverse logistics network design under
uncertainty. J. Clean. Prod. 164, 1248–1267 (2017)
4. Yi, P., Huang, M., Guo, L., Shi, T.: A retailer oriented closed-loop supply chain network
design for end of life construction machinery remanufacturing. J. Clean. Prod. 124, 191–203
(2016)
5. Özceylan, E., Demirel, N., Çetinkaya, C., Demirel, E.: A closed-loop supply chain network
design for automotive industry in Turkey. Comput. Ind. Eng. 113, 727–745 (2017)
6. Amin, S.H., Zhang, G., Akhtar, P.: Effects of uncertainty on a tire closed-loop supply chain
network. Expert Syst. Appl. 73, 82–91 (2017)
7. Hasanov, P., Jaber, M., Tahirov, N.: Four-level closed loop supply chain with remanufac-
turing. Appl. Math. Model. 66, 141–155 (2019)
8. Taleizadeh, A.A., Haghighi, F., Niaki, S.T.A.: Modeling and solving a sustainable closed
loop supply chain problem with pricing decisions and discounts on returned products.
J. Clean. Prod. 207, 163–181 (2019)
9. Soleimani, H., Kannan, G.: A hybrid particle swarm optimization and genetic algorithm for
closed-loop supply chain network design in large-scale networks. Appl. Math. Model.
39(14), 3990–4012 (2015)
10. Chen, Y., Chan, F., Chung, S.: An integrated closed-loop supply chain model with location
allocation problem and product recycling decisions. Int. J. Prod. Res. 53(10), 3120–3140
(2015)
11. Hajipour, V., Tavana, M., Di Caprio, D., Akhgar, M., Jabbari, Y.: An optimization model for
traceable closed-loop supply chain networks. Appl. Math. Model. 71, 673–699 (2019)
12. Zhen, L., Sun, Q., Wang, K., Zhang, X.: Facility location and scale optimisation in closed-
loop supply chain. Int. J. Prod. Res. 57, 7567–7585 (2019)
13. Jeihoonian, M., Zanjani, M.K., Gendreau, M.: Closed-loop supply chain network design
under uncertain quality status: case of durable products. Int. J. Prod. Econ. 183, 470–486
(2017)
14. Mohammed, F., Selim, S.Z., Hassan, A., Syed, M.N.: Multi-period planning of closed-loop
supply chain with carbon policies under uncertainty. Transp. Res. Part D: Transp. Environ.
51, 146–172 (2017)
15. King, A.J., Wallace, S.W.: Modeling with Stochastic Programming. Springer, New York
(2012). https://doi.org/10.1007/978-0-387-87817-1
16. Verweij, B., Ahmed, S., Kleywegt, A.J., Nemhauser, G., Shapiro, A.: The sample average
approximation method applied to stochastic routing problems: a computational study.
Comput. Optim. Appl. 24(2–3), 289–333 (2003)
17. Kleywegt, A.J., Shapiro, A., Homem-de-Mello, T.: The sample average approximation
method for stochastic discrete optimization. SIAM J. Optim. 12(2), 479–502 (2002)
18. Yu, H., Solvang, W.D.: Incorporating flexible capacity in the planning of a multi-product
multi-echelon sustainable reverse logistics network under uncertainty. J. Clean. Prod. 198,
285–303 (2018)
Solving the Location Problem of Printers
in a University Campus Using p-Median
Location Model and AnyLogic Simulation
1 Introduction
The location problem of printers in a university campus is to select the optimal locations
from a set of pre-determined candidates so that the accessibility and satisfaction of users
(students and employees in this case) can be improved. Considering the nature of this
problem, it is a service location and network design problem that has already been
extensively focused on and investigated by both researchers and practitioners for more
than half a century. In management science, the basic idea of this problem is to locate a
number offacilities and, meanwhile, to allocate customer demand to different facilities [1].
Over the years, several methods, i.e., mathematical optimization model, analytical hier-
archy process (AHP), geographical information system (GIS), etc., have been developed
and used to support the decision-making of the location problem and network design of
service facility.
2 Method
In this paper, the p-median model is employed to make the optimal decision on the
locations of the printers. Then, AnyLogic is applied to create a realistic simulation
environment and to validate the optimal result obtained.
S.t.

\sum_{j \in J} u_{ij} = 1, \quad \forall i \in I \qquad (2)

u_{ij} \le x_j, \quad \forall i \in I, \ j \in J \qquad (3)

\sum_{j \in J} x_j = p \qquad (4)
Objective function (1) minimizes the total travel distance needed to satisfy all the customer
demands. Constraint (2) assigns each customer to exactly one service facility. Constraint (3)
requires that a customer can only be allocated to an opened facility. Constraint (4) fixes
the number of opened service facilities at p. Constraint (5) is the binary requirement on the
decision variables.
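For reference, the objective and binary requirement referred to as (1) and (5) take the standard p-median form; the demand weights w_i and distances d_ij used here are the usual notation and are assumptions rather than symbols copied from the paper.

\min \ \sum_{i \in I} \sum_{j \in J} w_i \, d_{ij} \, u_{ij} \qquad (1)

x_j, \ u_{ij} \in \{0, 1\}, \quad \forall i \in I, \ j \in J \qquad (5)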
3 Case Study
Combining the p-median location model with agent-based simulation in AnyLogic, we
present a case study of the location problem of printers on the third floor of the main
building at UiT The Arctic University of Norway, Narvik campus. The objective is to
locate five printers so as to minimize the total travel distance. The optimization process
and the result obtained with the p-median model have been given by Yu et al. [2].
In order to simplify the problem, several assumptions are made.
1. Each room is considered as a unique customer demand point and a set of candidate
locations is pre-determined.
2. The users are divided into three groups: academic employees, laboratory employees
and students. The demand for printing services from the different types of users is by
no means identical.
3. The demand is aggregated at the center point of each room.
4. The demand is associated with three influencing factors: type of user, printing
frequency and number of users. Besides, it is also adjusted by the sensitivity to
distance of the different types of users.
5. The distances between the customer locations and the candidate printer locations
are approximated by the Manhattan distance, as illustrated in the sketch after this list.
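The Python sketch below shows, on a toy instance with invented coordinates and demands (not the Narvik floor-plan data), how assumptions 3–5 translate into a computation: demand is aggregated at room centre points, distances are Manhattan, and the best set of p printer locations is found by enumeration.

from itertools import combinations

rooms = {"R1": ((2, 3), 40), "R2": ((8, 1), 25), "R3": ((5, 7), 60)}   # centre point, weighted demand
candidates = {"A": (1, 2), "B": (6, 6), "C": (9, 2), "D": (4, 4)}      # candidate printer locations
p = 2

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def total_distance(opened):
    # each room is served by its nearest opened printer
    return sum(w * min(manhattan(xy, candidates[j]) for j in opened) for xy, w in rooms.values())

best = min(combinations(candidates, p), key=total_distance)
print(best, total_distance(best))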
Figure 2 illustrates the layout of the studied area. The original location plan (red
squares) and the optimal location plan (green squares) are also given in the figure.
Fig. 2. The building layout and the printer locations in the original plan and the optimized plan.
In this paper, both the original and the optimal location plans of printers are
simulated in AnyLogic in order to validate the optimization and visualize the result
under a realistic environment. In this modeling and simulation environment, there are
many independent objects/individuals (students and employees), so an agent-based
approach is used. Compared with the original optimization procedure, several realistic
assumptions are made as follows in order to have a better representation of the real
problem and to generate a more reliable analysis.
1. In order to maintain the consistency, the customer demand estimated by Yu et al. [2]
is used in the simulation for determining the generation of agents (number and
frequency).
2. Instead of aggregating all the customers in the center point of each room, the
customer demand can be generated at a random location within the room in the
simulation.
3. Instead of using the Manhattan distance to calculate the distances between the
customer locations and the candidate locations of printers, the real routes are
defined in the simulation.
4. Comparing the optimization environment from Yu et al. [2] with the current layout
of the building, significant changes in the layout of the area served by the leftmost
printer in Fig. 2 are observed. This part is therefore not taken into account in the
simulation, and only the four-printer scenario is tested.
The flow chart of agent movement is illustrated in Fig. 3. The sources (students and
employees) are randomly generated from respective classrooms or offices. They will go
to the printing area via the real routes defined, use the printers and then move to the
exit. In this case, the goal is to calculate the movement distance of all the agents within
the studied period. In order to simplify the calculation and reduce the simulation time,
we only considered the movement distance in one direction: from the offices or
classrooms to the printers. The distance in the reverse direction (back to the rooms) is
assumed to be the same as that in the forward direction, so it will not influence the
result of the comparison between different location plans. In addition, the movement
speed of all the agents is set to 1 m/s, so the movement distance is directly proportional
to the time consumed in the movement.
Table 1. Performance evaluation of the optimal location plan and the original location plan in
both optimization and simulation environments
Performance evaluation               Optimization   Simulation
                                                    1 month    3 months   5 months
Reduction of total travel distance   10%            −8.06%     −7.08%     −7.36%
Fig. 4. Illustration of the difference between the Manhattan distance (blue) and the actual
distance (green).
5 Conclusions
References
1. Abareshi, M., Zaferanieh, M.: A bi-level capacitated p-median facility location problem with
the most likely allocation solution. Transp. Res. Part B Methodol. 123, 1–20 (2019)
2. Yu, H., Solvang, W.D., Yang, J.G.: Improving accessibility and efficiency of service facility
through location-based approach: a case study at Narvik University College. Adv. Mater.
Res. 1039, 593–602 (2014)
3. Hakimi, S.L.: Optimum distribution of switching centers in a communication network and
some related graph theoretic problems. Oper. Res. 13(3), 462–475 (1965)
4. Wheeler, A.P.: Creating optimal patrol areas using the p-median model. Polic.: Int. J. 42(3),
318–333 (2019)
5. Won, Y., Logendran, R.: Effective two-phase p-median approach for the balanced cell
formation in the design of cellular manufacturing system. Int. J. Prod. Res. 53(9), 2730–2750
(2015)
6. Pamučar, D., Vasin, L., Atanasković, P., Miličić, M.: Planning the city logistics terminal
location by applying the green-median model and type-2 neurofuzzy network. Comput. Intell.
Neurosci. 2016 (2016). http://downloads.hindawi.com/journals/cin/2016/6972818.pdf. Arti-
cle ID: 6792818
7. Taleshian, F., Fathali, J.: A Mathematical model for fuzzy-median problem with fuzzy
weights and variables. Adv. Oper. Res. 2016 (2016). Article ID: 7590492
8. Adler, N., Njoya, E.T., Volta, N.: The multi-airline p-hub median problem applied to the
African aviation market. Transp. Res. Part A Policy Pract. 107, 187–202 (2018)
9. Yu, H., Solvang, W.D.: A comparison of two location models in optimizing the decision-
making on the relocation problem of post offices at Narvik, Norway. In: Proceeding of IEEE
International Conference on Industrial Engineering and Engineering Management, Thailand,
pp. 814–818 (2018)
10. Yang, Y., Li, J., Zhao, Q.: Study on passenger flow simulation in urban subway station
based on anylogic. J. Softw. 9(1), 140–146 (2014)
11. Borshchev, A., Karpov, Y., Kharitonov, V.: Distributed simulation of hybrid systems with
AnyLogic and HLA. Future Gener. Comput. Syst. 18(6), 829–839 (2002)
12. Karpov, Y.G., Ivanovski, R.I., Voropai, N.I., Popov, D.B.: Hierarchical modeling of electric
power system expansion by anylogic simulation software. In: Proceeding of IEEE Power
Tech Conference, Russia, pp. 1–5 (2015)
13. Karaaslan, E., Noori, M., Lee, J., Wang, L., Tatari, O., Abdel-Aty, M.: Modeling the effect
of electric vehicle adoption on pedestrian traffic safety: an agent-based approach.
Transp. Res. Part C Emerg. Technol. 93, 198–210 (2018)
14. Kim, S., Kim, S., Kiniry, J.R.: Two-phase simulation-based location-allocation optimization
of biomass storage distribution. Simul. Model. Pract. Theory 86, 155–168 (2018)
Intelligent Workshop Digital Twin Virtual
Reality Fusion and Application
1 Introduction
With the rapid development of industrial technology and the new generation of
information technology, equipment in various fields such as intelligent workshops and
industrial manufacturing has been upgraded, for example industrial robots, 3D printers
and machining centers, and the integration and intelligence of typical equipment have
been continuously improved [1]. Along with the formation of the information space, a
large number of sensors are needed to collect the various information of complex
equipment and to gather, in real time, the information required for monitoring,
connecting and interacting with each other, so as to realize the intelligence of
the workshop [2]. However, at present, the physical world and the information world of
the workshop are isolated from each other, and the data between them cannot be
transmitted or integrated, which means that interaction and integration in the virtual and
real space cannot be realized and the health and quality management of the entire
workshop cannot be achieved [3].
The digital twin is the best way to achieve smart factory upgrades. Workshop manu-
facturing data has big-data characteristics: it is large-scale, multi-source heterogeneous,
multi-temporal-scale and multi-dimensional. Through the deep integration of the digital
twin and the workshop management system, the multi-source data of the whole workshop
can be obtained and data-correlation fusion modeling can be carried out to realize the
digitization of the workshop. More importantly, the introduction of the digital twin makes
traditional workshop health management more open and expandable. At present, the
research and application of digital fusion modeling based on the digital twin in China is
still immature and lacks implementation experience [4, 5].
This work focuses on health monitoring over the whole life cycle of the work-
shop. It mainly applies multi-source data acquisition and digital fusion to monitor the
health status of the workshop in actual operation, without fully applying every key
technology of the digital twin. Through intelligent data mapping integration and digital
fusion modeling of the heterogeneous equipment data in the workshop, the role of digital
health technology in workshop health monitoring is realized, and a basis is provided for
the subsequent in-depth digitalization of the workshop.
2 System Framework
The digital twin system of the intelligent workshop introduced in this paper mainly
comprises the following modules: multi-source data acquisition, data denoising modeling,
data fusion modeling and data analysis. The overall framework of the system is shown
in Fig. 1.
The main functions of the digital twin modules in the intelligent workshop are as
follows.
(1) Multi-source data acquisition module: This is the lowest-level information building
module of the intelligent workshop and also the most basic module for building the
digital twin. For the data collection of multi-source heterogeneous equipment in
the workshop, the system adopts PLCs, wireless APs and various sensor converters.
Through the construction of the hardware network of the underlying equipment,
real-time information on robot movement, on the processing status of the machining
equipment and on the logistics equipment is obtained. At the same time, the
remaining sensor devices are used to obtain state information such as the temperature
and speed of the actual processing equipment. Since the data collected by different
sensors often appear in different formats, the workshop's multi-source heterogeneous
data is generated.
(2) Data pre-processing module: The massive data collected by the multi-source
acquisition module of the workshop often contain false information due to sudden
events in the workshop or data collection errors, which makes the data inaccurate.
Before the information is used, a Web Service is employed to cluster the massive
workshop data, and erroneous and inaccurate information is denoised and filtered out
to obtain representative information from the workshop data.
(3) Data fusion processing module: For the multi-source data collected in the workshop,
after the false information has been denoised and filtered out, an improved BP neural
network fusion algorithm is adopted to integrate redundant and complementary
information according to certain rules; processing the conflicting data yields an
accurate judgment of the target.
The workshop data, such as the logistics information of specific equipment, is collected,
and the real-time data and information of the first part of the line are finally obtained
through the design and integration of the PLC network technology. The other channels are
arranged in the same way, so that the orderly management of the corresponding data is
completed and the real-time data of the workshop is effectively managed and monitored.
The GAPSOBP algorithm fuses the sensory data collected by the wireless sensor network
through the neural network and combines the collected sensor data with the clustering route.
When applying the GAPSOBP algorithm, it is first necessary to determine the
structure of the BP neural network according to the topology map of the wireless
transmission network. The wireless sensor network forms clusters according to the
LEACH algorithm, and each cluster is regarded as a BP neural network structure. The
number of nodes in a cluster is the number of input-layer nodes of the BP neural network,
and the number of output-layer nodes, i.e. the number of cluster heads, is 1. The number
of hidden-layer nodes is first estimated according to formula (1) and the optimum is then
determined by trial and error.

L = \sqrt{m + n} + a, \quad 0 < a < 10 \qquad (1)

where n is the number of input-layer nodes and m is the number of output-layer nodes.
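A small Python sketch of this sizing step is given below; the input and output counts are example values, and the final choice among the candidates is still made by trial and error as described above.

import math

def hidden_layer_candidates(n_inputs, n_outputs):
    # L = sqrt(m + n) + a with 0 < a < 10, rounded to integer node counts
    base = math.sqrt(n_inputs + n_outputs)
    return sorted({round(base + a) for a in range(1, 10)})

print(hidden_layer_candidates(n_inputs=8, n_outputs=1))   # candidate hidden-layer sizes to try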
The workflow of the GAPSOBP algorithm mainly consists of three phases:
(1) Particle initialization: Firstly, the BP neural network structure is determined. Then
the initial weights and thresholds of the BP neural network are passed to the particle
swarm algorithm to form the initial population. The particle swarm algorithm encodes
the neural network weights and thresholds and initializes the candidate solutions and
particle velocities.
(2) Solving for the optimal BP neural network parameters: The particle swarm opti-
mization algorithm calculates the individual extremum and the group extremum
according to the error fitness function between the actual output of the BP neural
network and the expected output, and updates the particle velocities and positions of
the PSO algorithm. During the particle swarm optimization, the crossover and
mutation operations of the genetic algorithm are added, and the fitness values are then
recalculated to determine whether the termination condition is satisfied. If it is, the
optimization result is passed to the BP neural network for network training; otherwise
the particle velocities and positions continue to be updated iteratively until the
algorithm reaches the termination condition.
(3) Training the BP neural network: The BP neural network takes the optimization result
of the genetic particle swarm algorithm as its initial weights and thresholds and trains
the network, updating the weights and thresholds until the network parameters are
determined; the data fusion processing of the wireless sensor network can then be
performed.
Through the above three stages, the grouped data is regarded as a node, specific rules
and methods are used to fuse the data of the nodes belonging to a specific object, the
representative information of that device object is obtained, and the processing of the
algorithm is completed.
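A compact Python/NumPy skeleton of the three phases is sketched below. It is a simplified stand-in for the GAPSOBP idea, not the authors' implementation: the network is a single-hidden-layer regressor, the error fitness is the mean squared error against the expected output, and the PSO coefficients, crossover rate and mutation rate are arbitrary illustrative values.

import numpy as np

def bp_forward(weights, X, n_in, n_hidden):
    # Unpack a flat particle into one-hidden-layer BP weights/thresholds and run it
    k = n_in * n_hidden
    W1 = weights[:k].reshape(n_in, n_hidden)
    b1 = weights[k:k + n_hidden]
    W2 = weights[k + n_hidden:k + 2 * n_hidden].reshape(n_hidden, 1)
    b2 = weights[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(weights, X, y, n_in, n_hidden):
    # Error fitness: mean squared error between actual and expected output
    return np.mean((bp_forward(weights, X, n_in, n_hidden).ravel() - y) ** 2)

def gapso_init_bp(X, y, n_in, n_hidden, n_particles=30, iters=100, seed=0):
    dim = n_in * n_hidden + 2 * n_hidden + 1
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))           # phase 1: encode weights/thresholds as particles
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y, n_in, n_hidden) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(iters):                                  # phase 2: PSO update with GA operators
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cross = rng.random((n_particles, dim)) < 0.1        # crossover: copy genes from a shuffled mate
        pos = np.where(cross, pos[rng.permutation(n_particles)], pos)
        mutate = rng.random((n_particles, dim)) < 0.02      # mutation: small random perturbation
        pos = pos + mutate * rng.normal(0.0, 0.1, (n_particles, dim))
        fit = np.array([fitness(p, X, y, n_in, n_hidden) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest                                            # phase 3: use as initial BP weights, then train further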
The digital twin system designed in this paper has been successfully applied to the
production workshop of the Shanghai Intelligent Manufacturing and Robot Key Labora-
tory. It satisfies the need for unified collection, management and operation of workshop
information and solves the unified, centralized data management of heterogeneous
equipment in the workshop, providing a digital data foundation for the intelligent
management of the shop floor. The system obtains the representative information of the
equipment through data collection, filtering and fusion processing, and displays the
running data and real-time status of the entire workshop production line, as shown in
Fig. 4: the information collected on the workshop production line is displayed in a fused
manner, and real-time information such as robot motion data and working status is
collected. Through the filtering, denoising and fusion display processing of the collected
multi-source data, accurate representative information of the robot arm can be obtained,
and the working state information, the grasped object and the six-joint rotation angles
shown in the figure are displayed together.
5 Conclusions
This paper builds on the co-authors' "Research on the Virtual Reality Synchro-
nization of Workshop Digital Twin" [6] and carries out in-depth research on data fusion.
Based on health monitoring over the whole life cycle of the workshop, multi-source data
acquisition and digital fusion modeling are applied to monitor the health status of the
workshop in actual operation, without fully applying every key technology of the digital
twin. Through intelligent data mapping integration and digital fusion modeling of the
heterogeneous equipment data in the workshop, the role of digital health technology in
workshop health monitoring is realized, and a basis is provided for the subsequent
in-depth digitalization of the workshop.
References
1. Liu, D., Guo, K., Wang, B., Peng, Y.: Summary and prospect of digital twin technology.
Chin. J. Sci. Instrum. 39(11), 1–10 (2018)
2. Guo, D., Bao, J., Shi, G., Zhang, Q., Sun, X., Weng, H.: Modeling of aerospace structural
parts manufacturing workshop based on digital twinning. J. Donghua Univ. (Nat. Sci. Ed.) 44
(04), 578–585 + 607 (2018)
3. Chen, Z., Ding, X., Tang, J., Liu, Y.: Exploration of production control model of aircraft
assembly workshop based on digital twin. Aeronaut. Manufact. Technol. 61(12), 46–
50 + 58 (2018)
4. Tao, F., Liu, W., Liu, J., Liu, X., Liu, Q., Qu, T., Hu, T., Zhang, Z., Xiang, F., Xu, W., Wang,
J., Zhang, Y., Liu, Z., Li, H., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Yan, F., He, L., Yi, W.,
Min, C.H.: Digital twin and its application exploration. Comput. Integr. Manufact. Syst.
24(01), 1–18 (2018)
5. Jiakai, G.: Digital twins: the best bond to connect the manufacturing physics world and digital
virtual world. Softw. Integr. Circuits 09, 4 (2018)
6. Wu, P., Qi, M., Gao, L., Zou, W., Miao, Q., Liu, L.: Research on the virtual reality
synchronization of workshop digital twin. In: 2019 IEEE 8th Joint International Information
Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, pp. 875–879
(2019)
Harvesting Path Planning of Selective
Harvesting Robot for White Asparagus
Abstract. In order to optimize the harvesting path of a harvesting robot for white asparagus, a global path planning algorithm based on a multi-fork tree is designed according to the location distribution of the harvesting points and the collecting box points, and the optimal harvesting path with the shortest harvesting distance is obtained. On this basis, a path planning algorithm based on sequential harvesting of white asparagus is proposed, which effectively increases the speed of path planning while the resulting harvesting path is not much longer than the optimal path. The simulation results show that both algorithms can effectively improve the harvesting efficiency of white asparagus. With the global path planning algorithm, the moving distance of the end-effector is reduced by 42.83% on average over different numbers of white asparagus. With the path planning algorithm based on sequential harvesting, the average moving distance of the end-effector is reduced by 37.2%, and the real-time performance is very good. The path planning of the white asparagus harvesting process thus has a great impact on improving the harvesting efficiency of white asparagus.
1 Introduction
At present, China is the country with the largest production and export of white asparagus, but its complicated harvesting process is the bottleneck of the development of the white asparagus industry. Manual harvesting is widely used at home and abroad, and no domestic white asparagus harvesting machinery has been reported [1, 2]. Based on the visual positioning and harvesting system of the laboratory's selective harvesting robot for white asparagus, this paper studies the harvesting path planning of the end-effector, which is of great significance for improving the harvesting efficiency of white asparagus [3].
Firstly, the image of the harvesting area is acquired by the machine vision system,
and the white asparagus in the current area is identified and located by the image
processing technology to obtain the position coordinates of the asparagus tips. According to these coordinates, the end-effector moves to the harvest point along the X-axis and Y-axis screws [4–6]. After the white asparagus is clamped and cut, the end-effector moves upwards along the Z-axis screw to bring the white asparagus out of the ground, then moves to the collecting box point to place the white asparagus, and one harvest is completed. The above process is repeated until the last white asparagus in the current harvesting area is harvested, and the robot then enters the next area for harvesting [7–9]. Because the white asparagus is usually irregularly distributed on the ridge surface, harvest path planning is necessary in order to improve harvesting efficiency; the optimal path is the shortest path taken by the end-effector from the starting point of harvesting to the completion of all white asparagus in the current region.
In this paper, the global path planning algorithm with the shortest distance and the
path planning algorithm based on sequential harvesting are proposed, and the simu-
lation analysis of the algorithm shows that the designed algorithm can effectively
improve the harvesting efficiency of white asparagus.
Fig. 1. Schematic diagram of white asparagus harvesting path by global planning algorithm
The coordinates of each point are shown in Fig. 1; the boundary point of the harvesting area is M(x_max, y_max), and the vertical distance from the collecting box point to the boundary of the harvesting area is d. In Fig. 2, the harvesting path marked with the red line in the decision tree is O → A → A1 → B → B1 → C → C1. The distance the end-effector moves is then calculated.
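The following Python sketch illustrates this global planning idea under assumed 2-D coordinates; the decision-tree traversal of Fig. 2 is replaced here by an equivalent brute-force enumeration of every harvest order and every choice of collecting box point after each target, and the shortest end-effector path is kept.

```python
# Brute-force version of the global path planning: evaluate every order
# O -> target -> box -> target -> box -> ... and keep the shortest one.
import itertools
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def global_shortest_path(origin, targets, box_points):
    best_path, best_len = None, float("inf")
    for order in itertools.permutations(targets):
        # each harvested target may be placed at any of the collecting box points
        for boxes in itertools.product(box_points, repeat=len(order)):
            pos, length, path = origin, 0.0, [origin]
            for target, box in zip(order, boxes):
                length += dist(pos, target) + dist(target, box)
                path += [target, box]
                pos = box
            if length < best_len:
                best_path, best_len = path, length
    return best_path, best_len

# Example with three harvest points and two candidate collecting box points (assumed data)
path, d = global_shortest_path((0, 0), [(3, 5), (1, 2), (4, 8)], [(0, 3), (0, 7)])
```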
According to the above method of establishing a path planning decision tree and traversing all feasible paths, the shortest path can be found; however, as the number of white asparagus in the harvesting area increases, it takes a long time to determine the optimal harvesting path. In addition, before the current target is placed, the specific location of the collecting box point has to be calculated according to the location of the next target point, the waiting time of the end-effector is much longer than the harvesting time, and the placement process requires the X-axis and Y-axis screws to move at the same time, so the control of the end-effector is complicated. In view of these characteristics, when the number of white asparagus is relatively large, harvesting in sequence may be adopted. The harvest path planning algorithm of the end-effector is as follows:
(1) After obtaining the position coordinates of each target point in the harvesting area,
all the target points are sorted according to the order of the value of the ordinate y
from small to large.
(2) The end-effector starts from the initial point O and selects the target point with the
smallest y coordinate after sorting (assuming target point A) for collection.
(3) After harvesting, the collecting box point closest to the target point is selected for placement.
(4) Among the remaining target points (excluding target point A, which has already been harvested), calculate the distance between point A and each other target point and select the one closest to point A; if this distance is less than the preset threshold L, that target point is harvested preferentially, otherwise the target points sorted in step (1) are harvested in sequence.
(5) Repeat steps (3) and (4) until the target points of the current area are all harvested.
(6) Record the moving path and calculate the distance the end-effector moves.
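A minimal Python sketch of steps (1)–(6) is given below, with assumed 2-D coordinates for the harvest points and collecting box points; the threshold L and the example coordinates are illustrative.

```python
# Sequential harvesting path: sort targets by y, prefer a nearer target within distance L,
# and place each harvested asparagus at the closest collecting box point.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def sequential_path(origin, targets, box_points, L):
    """Return the visiting order and the total end-effector travel distance."""
    remaining = sorted(targets, key=lambda p: p[1])    # step (1): sort by ordinate y
    path, total, current = [origin], 0.0, origin
    while remaining:
        target = remaining.pop(0)                      # step (2): smallest y first
        total += dist(current, target)
        path.append(target)
        box = min(box_points, key=lambda b: dist(target, b))   # step (3): closest box
        total += dist(target, box)
        path.append(box)
        current = box
        # step (4): prefer the nearest remaining target if it is closer than L
        if remaining:
            nearest = min(remaining, key=lambda p: dist(target, p))
            if dist(target, nearest) < L:
                remaining.remove(nearest)
                remaining.insert(0, nearest)
    return path, total                                 # step (6): record path and distance

# Example with three targets and two collecting box points (assumed data)
path, d = sequential_path((0, 0), [(3, 5), (1, 2), (4, 8)], [(0, 3), (0, 7)], L=2.5)
```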
Fig. 3. Schematic diagram of white asparagus harvesting path based on sequential algorithm
Take the three white asparagus A, B and C in the harvesting area as an example; the harvest path is shown in Fig. 3. First, all the target points are sorted according to the value of the ordinate y from small to large. If the initial planned path after sorting is A → C → B, the distances between A and B and between A and C, d1 and d2 respectively, are calculated; if d1 < L < d2, the harvesting path is adjusted to A → B → C. The size of L is set according to the actual situation. The distance the end-effector moves is then calculated.
By analyzing the data in Table 1, for harvesting areas containing 2, 3, 4 and 5 white asparagus respectively, each case is tested with five different groups of harvesting areas and the group with the highest efficiency among the five is selected. After adopting the path optimization algorithm, the shortest path distance is 48.66%, 54.99%, 54.89% and 52.12% shorter than the longest path distance, respectively, with an average efficiency improvement of 42.83%. It can be seen that the path optimization algorithm can effectively improve the harvesting efficiency of the end-effector.
The results are shown in Table 2. Taking 3 white asparagus as an example, the minimum and maximum values of the movement distance of the end-effector in the five different harvesting areas are 228 cm and 303 cm respectively, because the positions of the white asparagus on the ridge are unevenly distributed and the 5 groups of harvesting areas were randomly selected. Compared with the longest path distance of the global optimal path planning algorithm, the maximum improvement in moving efficiency of the end-effector among the groups is 51.59%, and the average improvement is 41.85%.
In the four cases, the maximum efficiency improvements are 46.27%, 51.59%, 46.85% and 44.78% respectively, and their average is 47.37%. Considering the five different harvesting areas, the average efficiency improvements in the four cases are 31.94%, 41.85%, 40.27% and 34.75% respectively, giving an overall average improvement of 37.20%.
Therefore, the path planning algorithm based on sequential harvesting can effectively
improve the harvesting efficiency of end-effector.
5 Conclusion
In this paper, the harvesting path of the white asparagus harvesting robot is planned, and a global optimal path planning algorithm and a path planning algorithm based on sequential harvesting are proposed. With the global optimal path planning algorithm, the path with the smallest running distance of the end-effector can be obtained. With the path planning algorithm based on sequential harvesting, the harvesting sequence of the target points can be obtained faster, and the operation of the end-effector is simple when the white asparagus is placed at the collecting box point. Simulation results show that the algorithms are effective. Both methods can effectively improve the harvesting efficiency; in particular, when the number of asparagus is large, the path planning algorithm based on sequential harvesting has good real-time performance and can better meet actual needs.
References
1. Lu, B.: Development status and development trend of asparagus industry in China. Shanghai
Vegetables 12(4), 3–4 (2018)
2. Chen, D., Zhang, Q., Wang, S., et al.: Current status and future solutions for asparagus
mechanical harvesting. J. China Agric. Univ. 21(4), 113–120 (2016)
3. Dong, F., Heinemann, W., Kasper, R.: Development of a row guidance system for an
autonomous robot for white asparagus harvesting. Comput. Electron. Agric. 79(2), 216–225
(2011)
4. Li, Q., Hu, T., Wu, C., et al.: Review of end-effectors in fruit and vegetable harvesting robot.
Trans. Chin. Soc. Agric. Mach. 39(3), 175–179 (2008)
5. Yuan, J., Du, S., Liu, X.: A clip-cut white asparagus harvesting device and harvesting
method. ZL201610887545.8
6. Zhang, S., Ai, Y., Zhang, B., Sun, X., Zhang, M., Zhang, T., Song, G.: Path recognition and
control of cigarette warehouse robot based on machine vision. Shandong Agric. Sci. 51(03),
128–134 (2019)
7. Wang, X.: Studies on information acquisition and path planning of greenhouse tomato
harvesting robot with selective harvesting operation. JiangSu University (2012)
8. Chen, J., Wang, H., Jiang, H., et al.: Design of end-effector for kiwifruit harvesting robot.
Trans. Chin. Soc. Agric. Mach. 43(10), 151–154 (2012)
9. Liu, X., Du, S., Yuan, J., Li, Y., Zou, L.: Analysis and experiment on the operation of the
end actuator of the white asparagus selective harvester. Trans. Chin. Soc. Agric. Mach. 49
(4), 110–120 (2018)
10. Yuan, Y., Zhang, X., Hu, X.: Algorithm for optimization of apple harvesting path and
simulation. Trans. CSAE 25(4), 141–144 (2009)
Optimization of Technological Processes
at Production Sites Based on Digital Modeling
Institute of Computer Science and Technology, Peter the Great St. Petersburg
Polytechnic University, St. Petersburg, Russia
{drob,voinov}@ics2.ecd.spbstu.ru
1 Introduction
The main value of modern production is information, the amounts of which have
become too large for a human to process effectively. Technologies are changing faster
than enterprises manage to integrate them; the level of automation is constantly
growing. However, it is not enough only to provide modern equipment for a factory, it
is necessary to ensure the efficiency of its work. This can be achieved by an adequate
analysis of the incoming information and its subsequent processing. At the modern
mechanical engineering site, the main work on the manufacturing of products is carried
out on equipment with computer numerical control, therefore the optimization of the
technological process often comes down to the optimization of the program code for
these machines. At the same time, the work on the analysis and processing of infor-
mation is not always fully automated due to the need to operatively adapt to the
production environment, especially for small enterprises with small-scale production.
In this area, it is necessary to quickly create production plans that can change
depending on the state of the process equipment and manufactured products, and the
implementation of plans should be effectively automated.
Usually, the tasks of operational planning and automated production management
are carried out by the manufacturing execution systems (MES) [1]. They occupy an
intermediate place in the hierarchy of enterprise management systems between the level
of information collection from equipment in workshops done by supervisory control
and data acquisition (SCADA) systems [2] and the level of operations over a large
amount of administrative, financial and accounting information done by enterprise
resource planning (ERP) [3] systems. Nowadays, three solutions are the most popular on the Russian market: the PHOBOS system [4], the YSB.Enterprise.Mes system and the
PolyPlan system [5]. PHOBOS is traditionally used in large and medium-sized
mechanical engineering enterprises. YSB.Enterprise.Mes originated from the wood-
working industry and focuses on the sector of medium and small enterprises. The
PolyPlan system has a smaller set of MES functions, but is positioned as an operational
scheduling system for automated and flexible manufacturing in engineering.
The developed solution given in this work is designed to solve a narrower class of problems: to simplify the technological preparation of production for a small-scale mechanical engineering site, based on the introduction of operative digital modeling and analysis of the technological process of the production site in order to optimize and generate programs for managing and monitoring the production process.
The technologist obtains the necessary parameters from the drawing, reference
catalogues or other documentation. For a number of parameters, ranges of possible
values are specified.
In addition to the surface modules, the modules for the technological process of
manufactured detail, the modules for equipment and gear, the modules for instrumental
adjustment and the modules for measuring instruments are described in a similar way.
Using modular formalization in AWT, the construction of technological blocks (whose modules are processed with the same tool) and technological groups (which divide processing blocks into phases) is automated. As a result, technological routes (TRs) for manufacturing a detail are formed from the technological groups.
All this information is recorded in a specialized database. The information about each technological route contains a list of surface modules with specified parameter values, and the information about each surface module contains a detailed description of the manufacturing operations necessary for its processing, with symbolic parameters. By creating queries to the database, a route with symbolic parameters is formed automatically, on the basis of which a specific detailed route will be created.
In the approach presented here we use the MSC language [6] for the encoding of
the technological route. MSC is a standardized language for describing behaviors using
message exchange diagrams between parallel-functioning objects (CNC, robots). The
diagram example is shown in Fig. 2.
The relative estimate of the route shown in Fig. 4 can be obtained as the sum of the
estimates of each operation on each individual surface module that make up the route,
which is sufficient for ranking alternative solutions on the choice of parameters of the
route. To obtain absolute values, it suffices to use the multiplicative and additive
correction factors obtained on the basis of statistical estimates of the technological
processes of a particular production.
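As a simple illustration of this estimate (the data structures and correction factors below are illustrative assumptions, not the AWT implementation), the relative route estimate can be computed as a sum over the operations of all surface modules and then corrected to an absolute value with multiplicative and additive factors derived from production statistics:

```python
# Route estimate sketch: relative estimate = sum of per-operation estimates over all
# surface modules; absolute estimate = multiplicative/additive statistical correction.
def route_estimate(route, k_mult=1.0, k_add=0.0):
    """route: list of surface modules, each a list of (operation, time_estimate)."""
    relative = sum(t for module in route for _, t in module)
    return k_mult * relative + k_add   # absolute estimate after statistical correction

# Example: two surface modules with two operations each (illustrative values)
route = [[("rough_turning", 4.0), ("finish_turning", 2.5)],
         [("drilling", 1.2), ("threading", 0.8)]]
absolute_time = route_estimate(route, k_mult=1.15, k_add=0.5)
```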
The correct route can be optimized. By changing the parameters of the route within
the allowable ranges and re-calculating the indicators of processing time and cost, the
technologist can get a solution that meets the limitations of the management on a
particular job or get the Pareto-optimal solution [8]. However, it should be noted that
the mentioned optimization is valid provided that the production by the route is carried
out without taking into account the current state and restrictions on the resources of the
production site. Obtaining more realistic estimates is possible with the help of simu-
lation modeling of the distribution of resources for the routes simultaneously performed
at the production site. Theoretical basis for the modeling approach including formal-
ization of the structure of a technological process and technological matrix is described
in detail in previous work of the authors [9]. Shown below in Sect. 4 is a software
module used for graphical representation.
The digital model of the production site simulates the implementation of the production
of different batches of details by different specified technological routes. The site model
is built on the basis of information on the resources of the production site (CNC
machines, transport robots, warehouses, staff etc.) which include amounts of time for
their usages. The size of the batch of details is also associated with the route.
The model uses the method of dynamic priorities to simulate the workload of
resources of the production site and determine the duration of the realization of the
technological process for orders.
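A minimal sketch of such a simulation is shown below; the least-slack dispatch rule, the batch attributes and the machine count are assumptions used only to illustrate how dynamic priorities yield completion times for the routes.

```python
# Dynamic-priority dispatch sketch: the next free machine takes the waiting batch with
# the currently highest priority (here: least remaining slack), which gives the
# completion time of every technological route.
def simulate(batches, n_machines):
    """batches: list of (route_id, processing_time, due_time)."""
    machines = [0.0] * n_machines          # time at which each machine becomes free
    finish = {}
    waiting = list(batches)
    while waiting:
        m = min(range(n_machines), key=lambda i: machines[i])   # next free machine
        now = machines[m]
        # dynamic priority: recomputed at every dispatch from the current time
        waiting.sort(key=lambda b: (b[2] - now) - b[1])          # least slack first
        route_id, proc, _ = waiting.pop(0)
        machines[m] = now + proc
        finish[route_id] = machines[m]
    return finish

# Example: three batches (routes R1-R3) on two CNC machines (illustrative data)
print(simulate([("R1", 5.0, 10.0), ("R2", 3.0, 6.0), ("R3", 4.0, 12.0)], n_machines=2))
```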
The result of simulation modeling is a schedule for the implementation of the
technological process shown in Fig. 5, which provides an estimate of the time to
manufacture a batch of details in accordance with a specific route along with an
estimate of the lead time for all routes shown in Fig. 6. A set of estimates of the time of
execution of the route can be analyzed for the fulfillment of certain criteria and
restrictions characterizing the conditions of the order.
5 Conclusions
The final result of the work is the creation of a software system for the automation of
the preparation and control of the technological processes of the mechanical engi-
neering production site. Currently, a working prototype of the system has been
implemented, on which the following properties have been tested:
1. Ability to quickly adapt to specific production conditions: equipment, resources and
orders.
2. Optimization of the characteristics of specific production processes in accordance
with the selected set of criteria for its success, carried out on-line.
3. Efficiency assessment of the execution time and cost of the order, which is very
important for the small-scale production manager with the flow of orders for small
batches of different products that require different technological routes to be performed.
The platform provides a significant increase in the productivity of the technologist
at the technological preparation phase of production. Total preparation time decreases
to approximately 1 day per order.
Acknowledgements. The work was financially supported by the Ministry of Science and
Higher Education of the Russian Federation in the framework of the Federal Targeted Program
for Research and Development in Priority Areas of Advancement of the Russian Scientific and
Technological Complex for 2014–2020 (14.584.21.0022, ID RFMEFI58417X0022).
References
1. Miklosey, B.: The basics of MES. Assembly 62(3) (2019)
2. Ford, D.: SCADA is dead: Rethink your approach to automation. In: 91st Annual Water
Environment Federation Technical Exhibition and Conference, WEFTEC 2018, pp. 2781–
2785 (2019)
3. Potts, B.: ERP implementation: define what ‘best practice’ means. Plant Eng. 73(3), 10 (2019)
4. “PHOBOS” MES-system. http://www.fobos-mes.ru/fobos-system/fobos-MES-system.html.
Accessed 12 Aug 2019
5. “PolyPlan” MES-system. http://polyplan.ru/index.htm. Accessed 12 Aug 2019
6. Recommendation ITU-T Z.120: Message Sequence Chart (MSC), 11/2000
7. Drobintsev, P., Kotlyarov, V., Letichevsky, A., Selin, I.A.: Industrial software verification
and testing technology. In: CEUR Workshop Proceedings, vol. 1989, pp. 221–229 (2017)
8. Voinov, N., Chernorutsky, I., Drobintsev, P., Kotlyarov, V.: An approach to net-centric
control automation of technological processes within industrial IoT systems. Adv. Manufact.
5(4), 388–393 (2017)
9. Kotlyarov, V., Chernorutsky, I., Drobintsev, P., Voinov, N., Tolstoles, A.: Structural
modelling and automation of technological processes within net-centric industrial workshop
based on network methods of planning. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T.
(eds.) Advanced Manufacturing and Automation VIII. IWAMA 2018. Lecture Notes in
Electrical Engineering, vol. 484, pp. 475–488 (2019)
10. Chernorutsky, I., Drobintsev, P., Kotlyarov, V., Voinov, N.: A new approach to generation
and analysis of gradient methods based on relaxation function. In: 19th IEEE UKSim-AMSS
International Conference on Modelling and Simulation, UKSim 2017, pp. 83–88 (2018)
Smart Maintenance in Asset
Management – Application
with Deep Learning
Abstract. With the onset of digitalization and Industry 4.0, the maintenance function and asset management in a company are transforming towards smart maintenance. An essential application in smart maintenance is to improve the maintenance planning function with better criticality assessment. With the aid of artificial intelligence, it is expected that maintenance planning will provide better and faster decision making in maintenance management. The aim of this article is to develop smart maintenance planning based on principles from both asset management and machine learning. The result demonstrates a use case of criticality assessment for maintenance planning and comprises the computation of an anomaly degree (AD) as well as the calculation of a profit loss indicator (PLI). The risk matrix in the criticality assessment is then constructed from both AD and PLI and will aid the maintenance planner in better and faster decision making. It is concluded that more industrial use cases should be conducted, representing different industry branches.
1 Introduction
The mission of asset management can be comprehended as the ability to operate the
physical asset in the company through the whole life cycle ensuring suitable return of
investment and meeting the defined service and security standards [1]. Further, it is also
stated in ISO 55000 that asset management will realize value from the asset in the
organization where asset is a thing or item that has potential or actual value for the
company [2]. The role of the maintenance function in asset management has been further detailed in the EN 16646 standard for physical asset management, which considers the relationship between operating and maintaining the asset [3]. In particular, it is recommended in this standard that dedicated key performance indicators (KPIs) can be applied in physical asset management. A proposed KPI that improves the integrated
planning process between the maintenance and the production function in asset man-
agement is denoted as profit loss indicator (PLI). This indicator evaluates the different
types of losses in production from an economic point of view. Also, PLI has been
tested in different industry branches such as the petroleum industry [4] and manufac-
turing industry [5].
The onset of digitalization in industry, enabled by breakthrough innovations from Industry 4.0, changes the maintenance capability in the company. The shift is from an "off-line" maintenance function, where data is collected and analyzed manually, towards a digital maintenance function [6], often denoted as smart maintenance [7, 8].
Artificial intelligence (AI) and machine learning, which are a central part of smart maintenance, are considered a fundamental way to process data intelligently. Yet, there is a difference between traditional machine learning and data-driven artificial intelligence [9]; the difference lies in how feature extraction is performed, in manufacturing often referred to as machine learning or advanced manufacturing. In this article, anomaly detection for smart maintenance is investigated in more detail.
Application of AI is also relevant in order to improve plant uptime. Anomalies in mechanical systems usually cause equipment to break down, with serious safety and economic impacts. For this reason, computer-based anomaly detection systems with
high efficiency are imperative to improve the accuracy and reliability of anomaly
detection, and prevent unanticipated accidents [10].
From a smart maintenance perspective, the result of a more digitalized asset
management should also include that maintenance is planned with insight from the
individual equipment in combination with the system perspective of the asset [6]. This need is further supported by empirical studies that point out the necessity for criticality assessment when increasing productivity through smart maintenance planning [8]. In fact, it is regarded as unlikely to achieve optimum maintenance planning without a sound criticality assessment of the physical assets, such as the machines [8].
Smart maintenance has also been denoted with other terms such as deep digital
maintenance (DDM) [11], where application of PLI is of relevance. In DDM it still remains to investigate appropriate scenarios for the planning capabilities in smart maintenance that include anomaly detection and criticality assessment.
The aim of this article is to develop an approach for decision support in smart
maintenance planning based on principles both from asset management and machine
learning-based anomaly detection and criticality assessment.
The remainder of this article is structured as follows: Sect. 2 presents relevant literature on smart maintenance, whereas Sect. 3 demonstrates an essential application in smart maintenance planning where criticality assessment is conducted based on anomaly detection and PLI calculation. Finally, Sect. 4 discusses the results with concluding remarks.
inspired from [17] and [18]. The starting point in this framework is the data source and
includes external data such as inventory data of spare parts from suppliers. In addition,
product data is from the equipment such as condition monitoring data, as well as
enterprise data from computerized maintenance management system (CMMS). All the
raw data sources are then aggregated in multiple formats in a data cloud.
Fig. 1. Value creation with data in smart maintenance framework inspired from [17] and [18].
The raw data is further used for smart data analytics, including both predictive and prescriptive analytics. In the maintenance field, the predictive analytics will
comprise e.g. forecasts of the technical condition of the machine. To ensure value
creation of the physical asset it is also important to include prescriptive analytics that
supports in recommended actions in maintenance planning. This will include anomaly
detection to evaluate the probability of future machine breakdowns. In addition, it is
also necessary to evaluate the consequences of the machine failure. In DDM, the PLI
seems promising for this purpose [11]. To assess the criticality of the machine in
maintenance planning, both the probability and the consequence can be combined in a
risk matrix.
The result of smart data is deeper insight into the business, where, e.g., plant capacity is increased, as well as deeper insight into the partners, where, e.g., spare part supply is improved.
As shown in Fig. 1 and explained above, criticality assessment [8] could be an essential element of prescriptive analytics in our smart maintenance framework. Overall, a criticality assessment should evaluate both the probability and the consequences of each failure of the equipment. We hereby use a demonstration use case to explain our proposed structured approach to criticality assessment in three steps: (1) PLI estimation; (2) anomaly degree calculation; (3) criticality assessment. So far, few companies have collected data that can be used for both PLI estimation and anomaly detection, and we could not obtain both kinds of data from the same machine. The data presented in the case study therefore come from two different machines. However, for demonstration purposes and to explain our ideas, we believe it is acceptable to merge the data and assume that they come from the same machine.
Step 1: PLI estimation
The calculation of the profit loss indicator is based on earlier case studies from both [11] and [5]. The case study considers the malfunction of an oil cooler in a machine center. The malfunction was first observed when the machine center produced scrappage. A quality audit meeting evaluated the economic loss of this scrappage. In addition, maintenance personnel conducted an inspection of the machine center and found that the cause was a malfunction of the oil cooler. The oil cooler was replaced, and the machine center had in total 6 days of downtime. In addition to the scrappage, it was also evaluated that the machine had lost revenue due to the downtime. Table 1 summarizes the different types of losses that occurred due to this malfunction of the machine center.
Table 1. Expected PLI of malfunction of a machine center, based on both [11] and [5].

Situation                        | Type of loss      | PLI value/NOK
Damaged part (scrappage)         | Quality loss      | 120 000
Quality audit meeting            | Quality loss      | 3 500
Maintenance labor costs          | Availability loss | 21 570
New oil cooler                   | Availability loss | 47 480
Loss of internal machine revenue | Availability loss | 129 600
Sum                              |                   | 322 150
When the consequences of the failure have been estimated, the next step is to calculate the anomaly degree (AD) for the physical asset and the industrial equipment.
Step 2: Anomaly degree (AD) Calculation
Figure 2 shows the obtained anomaly degree (AD) of one machine. An increasing AD
will indicate an increasing probability of equipment failure. When maintenance plan-
ning is conducted, updated information about the anomaly degree for each equipment
should be collected and analyzed.
As with the calculation of PLI, the data used for calculating AD come from actual industrial equipment. However, the data are not from a machine center and represent another industry branch. The primary datasets include failure records and
measurement data from the monitoring system. The target is to obtain the anomaly
degree of the equipment by using machine learning based analysis approaches. In the
experiment, we labelled both failure and normal records. Thus, the obtained anomaly
degree can describe the difference between the target observation and normal samples.
Fig. 2. Anomaly degree (AD) of the machine over time (measuring points).
During the experiment to calculate AD, we adjusted the measurement data collected
in different scales to a common scale. Then, we applied standard normalization to pre-
process the raw data. The applied machine learning model is constructed through a
fully connected deep neural network with seven hidden layers. SoftMax is used as the
activation function of the final output layer. Leaky Relu is applied as the activation
functions of the hidden layers. The number of nodes in hidden layers of the constructed
deep neural network is 64, 32, 32, 16, 16, 8, 2, respectively, to train the neural network
smoothly. We selected Adam and categorical cross-entropy as the optimizer and loss
function during the training process. The results in Fig. 2 demonstrate the obtained anomaly degree of the machine from the analysis using the deep neural network, which represents the degradation of the machine's health state over time.
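The following sketch reproduces the described network layout using TensorFlow/Keras (a framework the authors do not name); the input dimension, the two-class labelling and the reading of AD as the predicted failure-class probability are assumptions.

```python
# Anomaly-degree network sketch: seven fully connected hidden layers of
# 64, 32, 32, 16, 16, 8 and 2 nodes with Leaky ReLU, a softmax output,
# and Adam with categorical cross-entropy for training.
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20                      # assumed number of standardized measurement channels
model = keras.Sequential()
model.add(keras.Input(shape=(n_features,)))
for units in (64, 32, 32, 16, 16, 8, 2):
    model.add(layers.Dense(units))
    model.add(layers.LeakyReLU())    # activation function of the hidden layers
model.add(layers.Dense(2, activation="softmax"))   # normal vs. failure records
model.compile(optimizer="adam", loss="categorical_crossentropy")

# X: measurement data rescaled to a common scale and standard-normalized,
# y: one-hot labels built from the labelled normal and failure records
# model.fit(X, y, epochs=50, batch_size=32)
# One way (an assumption) to read off the anomaly degree is the predicted
# probability of the failure class: AD = model.predict(X_new)[:, 1]
```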
Step 3: Criticality assessment
When both the probability and the consequences have been evaluated for a future
malfunction of a machine, the criticality assessment can be performed in a risk matrix.
Figure 3 illustrates a proposed risk matrix in smart maintenance that supports planning of preventive work orders. In the consequence category, the PLI is established for the physical asset and classified as "medium, high". The probability category is evaluated with AD. By trending AD in the risk matrix it is possible to evaluate when a preventive maintenance work order should be executed, and the possible costs and consequences.
The color code follows a traffic-light logic; if the equipment is located in the green
zone, no further actions are necessary. If the equipment is in a yellow zone, it is an
early warning where maintenance actions should be executed. If the equipment is in the
red zone, it is an alarm where immediate maintenance actions should be executed.
In addition to the color-code system, each field in the matrix is marked with a number indicating a priority. The criticality of the machine has a yellow code at the start but will get a red color code if no maintenance actions are performed. When the maintenance planner has several machines being criticality assessed, it becomes possible to prioritize which machine should be maintained first.
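A minimal sketch of this criticality assessment logic is given below; the AD and PLI class thresholds, the colour layout and the priority numbers are illustrative assumptions rather than the values used in the use case.

```python
# Criticality assessment sketch: AD gives the probability category, PLI gives the
# consequence category, and the risk matrix returns a traffic-light colour and a
# priority number for the maintenance planner.
def probability_class(ad):
    return 0 if ad < 0.1 else (1 if ad < 0.2 else 2)          # low / medium / high

def consequence_class(pli_nok):
    return 0 if pli_nok < 100_000 else (1 if pli_nok < 300_000 else 2)

# risk_matrix[probability][consequence] -> (colour, priority); illustrative layout
risk_matrix = [[("green", 9), ("green", 8), ("yellow", 6)],
               [("green", 7), ("yellow", 5), ("red", 3)],
               [("yellow", 4), ("red", 2), ("red", 1)]]

def assess(ad, pli_nok):
    return risk_matrix[probability_class(ad)][consequence_class(pli_nok)]

colour, priority = assess(ad=0.25, pli_nok=322_150)   # -> ("red", 2): plan a work order now
```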
Acknowledgements. The authors wish to thank for valuable input from both the research
project CPS-plant (grant number: 267750), as well as the research project CIRCit – Circular
Economy Integration in the Nordic Industry for Enhanced Sustainability and Competitiveness
(grant number: 83144).
References
1. Schneider, J., et al.: Asset management techniques. Int. J. Electr. Power Energy Syst. 28(9),
643–654 (2006)
2. ISO: ISO 55000 Asset management - Overview principles and terminology. Switzerland
(2014)
3. CEN, EN 16646: Maintenance - Maintenance within physical asset management (2014)
4. Rødseth, H., et al.: Increased profit and technical condition through new KPIs in
maintenance management. In: Koskinen, K.T., et al. (Eds.) Proceedings of the 10th World
Congress on Engineering Asset Management (WCEAM 2015), pp. 505–511. Springer,
Cham (2016)
5. Rødseth, H., Schjølberg, P.: Data-driven predictive maintenance for green manufacturing. In:
Advanced Manufacturing and Automation VI, pp. 36–41. Atlantis Press (2016)
6. Bokrantz, J., et al.: Maintenance in digitalised manufacturing: Delphi-based scenarios for
2030. Int. J. Prod. Econ. 191, 154–169 (2017)
7. Yokoyama, A.: Innovative changes for maintenance of railway by using ICT-to achieve
“smart maintenance”. Procedia CIRP 8, 24–29 (2015)
8. Gopalakrishnan, M., et al.: Machine criticality assessment for productivity improvement:
smart maintenance decision support. Int. J. Prod. Perform. Manage. 68(5), 858–878 (2019)
9. Wang, J., et al.: Deep learning for smart manufacturing: methods and applications.
J. Manufact. Syst. 48, 144–156 (2018)
10. Li, Z., et al.: A deep learning approach for anomaly detection based on SAE and LSTM in
mechanical equipment. Int. J. Adv. Manufact. Technol. 103(1), 499–510 (2019)
11. Rødseth, H., Schjølberg, P., Marhaug, A.: Deep digital maintenance. Adv. Manufact. 5(4),
299–310 (2017)
12. Rødseth, H., Eleftheradis, R.: Successful asset management strategy implementation of
cyber-physical systems. In: WCEAM 2018 (2019)
13. Khuntia, S.R., Rueda, J., Meijden, M.: Smart Asset Management for Electric Utilities: Big
Data and Future (2017)
14. Tamilselvan, P., Wang, P.: Failure diagnosis using deep belief learning based health state
classification. Reliab. Eng. Syst. Saf. 115, 124–135 (2013)
15. DIN, German Standardization Roadmap - Industry 4.0, in Version 3, Berlin (2018)
16. CEN, EN 17007: Maintenance process and associated indicators (2017)
17. Porter, M.E., Heppelmann, J.E.: How smart, connected products are transforming
companies. Harvard Bus. Rev. 93(10), 96–114 (2015)
18. Schlegel, P., Briele, K., Schmitt, R.H.: Autonomous data-driven quality control in self-
learning production systems. In: Proceedings of the 8th Congress of the German Academic
Association for Production Technology (WGP), Aachen, 19–20 November 2018, pp. 679–
689 (2019)
Maintenance Advisor Using Secondary-
Uncertainty-Varying Type-2 Fuzzy Logic
System for Offshore Power Systems
Haitao Sang
1 Introduction
Figure 1 outlines the type-2 fuzzy hidden Markov model for individual offshore power equipment. A regular Markov model has been used to provide a quantitative connection between maintenance and reliability [10]. The states are Di, i = 1, 2, …, N, where D1 denotes the "as good as new" state, D2, D3, …, Dn are the states with different levels of deterioration, and Df is the failed state. The transition rates among the different states form the matrix K. C(t) represents the operational variations and uncertainties of the individual component in the time interval t, and fT2 represents the mapping function of the fuzzy logic system from the input C(t) to the output ΔK(t). Given ΔK(t), the reliability indices, such as MTTF and failure probability, which are obtained directly from the Markov model, can be updated according to the day-to-day operating conditions.
In a fuzzy logic system, the overall effects of uncertainties on reliability are captured by
developing a rule-base expert system based on the available data. The rules are chained
together by a reasoning process known as inference engine. The methods to propagate
the uncertainties among the rules are essential for an inference engine, and are
accomplished by the experts who are well acquainted with the characteristics of the
operation in power systems. The inputs and outputs in a fuzzy logic system are
combined through “IF-THEN” rules given by experts using the fuzzy inference engine
to get the fuzzified output.
In this work, a simpler way to implement the type-2 fuzzy logic is proposed,
namely secondary-uncertainty-varying type-2 fuzzy logic system. The secondary
uncertainty is captured by initializing a group of primary membership functions. As
shown in Fig. 2, at a specific value x = x′ (x ∈ X), the membership functions take on the values wherever the vertical line intersects them. As a result, there is a range of membership values at x = x′, each of which is given by one specific membership function.
Fig. 2. A group of primary membership functions A1–A5 and output fuzzy sets B1–B5, with the upper and lower membership values Uupp(x′) and Ulow(x′) at x = x′.
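A minimal sketch of this secondary-uncertainty-varying idea is given below; the triangular shape of the primary membership functions and their parameters are illustrative assumptions.

```python
# Secondary-uncertainty sketch: a group of primary membership functions is evaluated
# at x = x', and the interval [Ulow(x'), Uupp(x')] spanned by their values captures
# the secondary uncertainty.
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# A group of primary membership functions A1..A5 for one input (e.g. equipment age);
# the parameter values are illustrative.
group = [(0, 10, 20), (2, 12, 22), (4, 14, 24), (6, 16, 26), (8, 18, 28)]

def membership_bounds(x):
    values = [tri(x, *p) for p in group]
    return min(values), max(values)    # Ulow(x'), Uupp(x')

u_low, u_up = membership_bounds(15.0)
```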
system, age, load and time from previous maintenance are the inputs for the transformer, and each input has three membership functions. In addition, three additional uncertainties are superimposed on the three inputs respectively, which are represented by three fuzzy sets. If a type-1 fuzzy system were used to express this uncertainty, it would need 729 rules, while the type-2 fuzzy system needs 87 rules. Therefore, in dealing with uncertainty, type-2 fuzzy logic is superior to type-1 fuzzy logic in terms of computational complexity.
Figure: component age (years) and component conditions (good/normal/bad) over the maintenance intervals.
Fig. 5. Pareto fronts with the impacts of the aging process and its uncertainty: energy not served (MWh/y) and failure cost ($10^3) for the initial operational condition, continuing aging using type-1, and continuing aging and uncertainty using type-2 (schedules S, A1, A2).
Fig. 6. Variation of ENS (MWh/y) at two load points with different maintenance schedules (S, A1, A2).
(b) Impacts of load: The load factor is modeled as an operational variation (Fig. 7),
and there is no uncertainty associated with load. Pareto fronts before and after
considering the varying load are shown in Fig. 8, showing the impacts of load on
the optimization of maintenance schedules. It can be seen that the fuzzy expert
system correctly relates higher load factor to higher ENS.
Fig. 7. Load factor (%) over the maintenance intervals.
(c) Impacts of maintenance information: The time from previous maintenance for
the transformers is given in Fig. 9. The Pareto fronts are plotted to show the
impacts of different operational conditions on the scheduling of maintenance in
Fig. 10. As can be seen in Fig. 10, the longer time from previous maintenance in
intervals 4, 8 and 19 makes the schedule C1 be chosen rather than S, to provide an ENS of 1.36 × 10^5 MWh/y.
Figure 11 shows the variations of ENS due to performing the schedules S and C1.
As expected, the fuzzy system correctly relates the higher ENS with longer time lapse
from previous maintenance and the lower ENS with shorter lapse from previous
maintenance.
Fig. 8. Pareto fronts before and after considering the varying load: operational cost ($10^3) versus energy not served (MWh/y) and versus failure cost ($10^3) (schedules S and B1).
Fig. 9. Time from previous maintenance (months) over the maintenance intervals.
Fig. 10. Pareto fronts including the impacts of different time from previous maintenance and uncertainty (schedules S and C1).
Fig. 11. Variation of ENS (MWh/y) at two load points ((a) load point 1, (b) load point 2) with different maintenance schedules (S and C1).
5 Conclusions
References
1. Wang, K., Wang, Y.: How AI affects the future predictive maintenance: a primer of deep
learning. In: IWAMA 2017. Lecture Notes in Electrical Engineering, vol. 451 (2018)
2. Mo, H., Sansavini, G., Xie, M.: Performance-based maintenance of gas turbines for reliable
control of degraded power systems. Mech. Syst. Sig. Process. 103, 398–412 (2018)
3. Dai, Z., Zhang, T., Liu, X., et al.: Research on smart substation protection system reliability
for condition-based maintenance. Power Syst. Prot. Control 44(16), 14–21 (2016)
4. Zadeh, L.A.: Fuzzy sets. Inf. Control 8, 338–353 (1965)
5. Endrenyi, J.: Reliability Modeling in Electric Power Systems. Wiley, New York (1978)
6. Mohanta, D.K., Sadhu, P.K., Chakrabarti, R.: Fuzzy Markov model for determination of
fuzzy state probabilities of generating units including the effect of maintenance scheduling.
IEEE Trans. Power Syst. 20(4), 2117–2124 (2005)
7. Tanrioven, M., et al.: A new approach to real-time reliability analysis of transmission system
using fuzzy Markov model. Int. J. Electr. Power Energy Syst. 26(10), 821–832 (2004)
8. Zadeh, L.A.: The concept of a linguistic variable and its application to approximate
reasoning I. Inf. Sci. 8(3), 199–249 (1975)
9. Pająk, M.: Modelling of the operation and maintenance tasks of a complex power industry
system in the fuzzy technical states space. In: International Scientific Conference on Electric
Power Engineering. IEEE (2017)
10. Yang, F., Kwan, C.M., Chang, C.S.: Multi-objective evolutionary optimization of substation
maintenance using decision-varying Markov model. IEEE Trans. Power Syst. 23(3), 1328–
1335 (2008)
Determine Reducing Sugar Content in Potatoes
Using Hyperspectral Combined
with VISSA Algorithm
1 Introduction
Figure: hyperspectral image acquisition system — 1. hyperspectral camera; 2. bracket; 3. light source; 4. loading stage; 5. light box; 6. collector; 7. computer.
I = (Is − Id) / (Iw − Id)    (1)
In the formula, I is the corrected image, Is is the original image, Iw is the whiteboard image, and Id is the black image.
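A minimal NumPy sketch of this correction is shown below; the array shapes and the small epsilon guarding against division by zero are assumptions.

```python
# Black/white reference correction of Eq. (1): every pixel of the raw hypercube is
# normalized by the whiteboard and dark images to give relative reflectance.
import numpy as np

def calibrate(I_s, I_w, I_d):
    """I_s: raw image, I_w: whiteboard image, I_d: dark image (same float array shape,
    e.g. rows x cols x bands)."""
    return (I_s - I_d) / (I_w - I_d + 1e-12)   # small epsilon avoids division by zero

# I = calibrate(raw_cube, white_cube, dark_cube)
```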
As shown in Fig. 4(a)–(b), the VISSA method screened out 32 optimal variables related to reducing sugar, and CARS selected 30 characteristic variables. Compared to the full-spectrum results, the number of variables decreased by 84% and 85%, respectively. Obviously, the wavelength points near 430–440 nm, 520–530 nm, 750 nm and 970 nm are chosen by both methods. The fourth and fifth frequency absorption peaks of the O–H and C–H bonds in the chemical structure of reducing sugar near 750 nm are also important wavelength points commonly used in the literature for establishing quantitative models of sugar content. This also proves that CARS and VISSA are very effective characteristic wavelength selection algorithms for the potato sample system. However, comparing the details, it can be found that the CARS algorithm also selects some additional wavelength points, such as those near 810 and 830 nm, which is why the prediction result of the CARS algorithm is slightly worse than that of VISSA.
Fig. 4. The wavelength selected by different methods on the potato. (a) VISSA; (b) CARS.
significance level is high and the model established has good predictability. Therefore,
VISSA-PLS was selected as the prediction model of reducing sugar content in potatoes.
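A minimal scikit-learn sketch of such a VISSA-PLS model is given below; the number of latent variables, the 10-fold cross-validation and the availability of the 32 selected wavelength indices are assumptions.

```python
# VISSA-PLS sketch: a PLSR model is calibrated on the VISSA-selected bands and
# evaluated with a cross-validated root-mean-square error.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def vissa_pls(X, y, selected, n_components=8):
    """X: mean spectra (samples x bands, 400-1000 nm); y: measured reducing sugar content;
    selected: indices of the 32 characteristic wavelengths chosen by VISSA."""
    Xs = X[:, selected]                                  # keep only the VISSA bands
    pls = PLSRegression(n_components=n_components)
    y_cv = cross_val_predict(pls, Xs, y, cv=10)          # cross-validated predictions
    rmsecv = np.sqrt(np.mean((np.ravel(y_cv) - np.ravel(y)) ** 2))
    pls.fit(Xs, y)                                       # final calibration model
    return pls, rmsecv
```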
4 Conclusions
In this paper, hyperspectral imaging technology was used to detect the reducing sugar content of potatoes rapidly and nondestructively. The average spectral data of the samples (400–1000 nm) were obtained, and the VISSA algorithm was used to extract characteristic wavelengths representing the effective spectral information. The 32 wavelength points extracted by VISSA were mostly concentrated between 420–440 nm and 520–530 nm, which accords with the characteristics of the spectral curves. A PLSR regression model of reducing sugar content was established with the 32 characteristic wavelengths as input variables, and the results were better than those of the whole band: the RMSECV was 0.0392 and the RMSEP was 0.0238. The results showed that the VISSA-PLS calibration model based on hyperspectral images has a higher prediction accuracy.
References
1. Cui, H., Shi, G., An, J.: Comparison study on the testing method of the content of reduced
sugar in potato. J. Anhui Agri. Sci. 34(19), 4821–4823 (2006)
2. Horvat, Š., Roščić, M., Horvat, J.: Synthesis of hexose-related imidazolidinones: novel
glycation products in the Maillard reaction. Glycoconjugate J. 16(8), 391–398 (1999)
3. Yang, B., Zhang, X., Zhao, F., et al.: Suitability evaluation of different potato cultivars for
processing products. Trans. Chin. Soc. Agric. Eng. 31(20), 301–308 (2015)
4. Liu, C., Gao, H., Li, A., et al.: Near-infrared model establishment for testing potato tubers
potassium content. Chin. Potato J. 2, 65–68 (2011)
5. Song, J., Wu, C.: Simultaneous detection of quality nutrients of potatoes based on
hyperspectral imaging technique. J. Henan Univ. Technol. 37(1), 60–67 (2016)
6. Helgerud, T., Wold, J.P., Pedersen, M.B., et al.: Towards on-line prediction of dry matter
content in whole unpeeled potatoes using near-infrared spectroscopy. Talanta 143, 138–144
(2015)
7. Chen, Z., Feng, H., Yin, S., et al.: Assessment of potato dry matter concentration using VIS-
SWIR spectroscopy. J. Heilongjiang Bayi Agric. Univ. 30(2), 47–51 (2018)
8. Jiang, W., Fang, J., Wang, S., et al.: Detection of starch content in potato based on
hyperspectral imaging technique. Int. J. Sig. Process. Image Process. Pattern Recogn. 8(12),
49–58 (2015)
9. Chen, M., Chen, Y., Zhang, Y., et al.: Determination of soluble protein in potato by
attenvated total reflection mid-infrared spectroscopy. J. Chin. Cereals Oils Assoc. 33(12),
118–125 (2018)
10. López, A., Arazuri, S., Jarén, C., et al.: Crude protein content determination of potatoes by
NIRS technology. Procedia Technol. 8, 488–492 (2013)
11. Ahmed, R., Daniel, G., Lu, R.: Evaluation of sugar content of potatoes using hyperspectral
imaging. Food Bioprocess Technol. 8(5), 995–1010 (2015)
12. Jiang, W., Fang, J., Wang, S., et al.: Hyperspectral determination of reducing sugar in
potatoes based on CARS. Int. J. Hybrid Inf. Technol. 9(9), 35–44 (2016)
13. Dacal-Nieto, A., Formella, A., Carrión, P., Vazquez-Fernandez, E., et al.: Common scab
detection on potatoes using an infrared hyperspectral imaging system. Image Anal. Process.
6979, 303–312 (2011)
14. Ainara, L., Janos, C.K., Mohammad, G., et al.: Non-destructive detection of blackspot in
potatoes. by Vis-NIR and SWIR hyperspectral imaging. Food Control 70, 229–241 (2016)
15. Huang, T., Li, X., Jin, R., et al.: Non-destructive detection research for hollow Heart of
potato based on semi-transmission hyperspectral imaging and SVM. Spectrosc. Spectral
Anal. 35(1), 198–202 (2015)
16. Wang, C., Li, X., Wu, Z., et al.: Machine vision detecting potato mechanical damage based
on manifold learning algorithm. Trans. Chin. Soc. Agric. Eng. 30(1), 245–252 (2014)
17. Zheng, J., Zhou, Z., Zhong, M., et al.: Chestnut browning detected with near-infrared
spectroscopy and a random-frog algorithm. J. Zhejiang A & F Univ. 33(2), 322–329 (2016)
18. Deng, B., Yun, Y., Liang, Y., et al.: A novel variable selection approach that iteratively
optimizes variable space using weighted binary matrix sampling. Analyst 139, 4836–4845
(2014)
19. Xu, Y., Wang, X., Yin, X., et al.: Visualization spatial assessment of potato dry matter.
J. Agric. Mach. 49(2), 339 (2018)
Game Theory in the Fashion Industry:
How Can H&M Use Game Theory
to Determine Their Marketing Strategy?
1 Introduction
companies are going down the sustainable route, so what else can they do to maintain a large consumer base as well as make a profit? A strong marketing strategy should help them achieve both a large consumer base and high profits.
The structure of this paper is as follows: Sect. 2 presents an overview of the game
theory concept. Section 3 presents the application of game theory in the industry,
Sect. 4 identifies the limitation of game theory as an application and, lastly, Sect. 5
concludes the paper.
2 Literature Review
Game theory started out as an attempt to find solutions for duopoly, oligopoly and
bilateral monopoly problems. Within all these situations, a solution was difficult to
come to as the interests and strategies of the organisations or individuals were
conflicting. Hence why game theory was used in attempts to come to various equi-
librium solutions which is based on rational behaviour of the participants involved.
Companies are now increasingly utilising game theory to assist them in making
high risk/high reward decisions in highly competitive markets. Game theory has been
around for a long time and proven an ability to provide ideal strategic choices in
multiple different situations, companies and industries. This theory is a very useful tool
for predicting what may happen between a group of firms interacting, where the actions
of a single firm can directly affect the payoff of other firms. Each participating player is
a decision maker in the game of business [5]. So, when making a choice or choosing a
strategy, all players, also known as firms, must take in consideration the potential
choices and payoffs caused by other firms. This understanding that is quantified
through payoff calculations allows a company to form their best strategy. A properly
formed game can assist many businesses by reducing business risk. This can be done
by yielding valuable insights into competitors and improving internal alignments
around decision making which maximises strategic utility [6].
Not only is game theory used to gain valuable insight into competition but it can
assist majorly when trying to make strategic decisions in relation to any business
function. For example, game theory is excellent for situations like auctions, product
decisions, bargaining and supply chain decisions etc. By applying game theory to all
these functions that are used in the day to day running of a business it can help the
company make the most strategic choices as you will be able to see all the outcomes.
Peleckis [7] discusses how using game theory is effective in determining equilibrium within a market. It is used during risk analysis to determine the optimal price strategy, number of customers, market share, etc., in order to reduce business risk.
In relation to marketing in the fashion industry, game theory can allow companies to see what type of marketing will benefit them the most [8]. Researchers have debated whether it is possible to apply this theory to solve problems regarding marketing, especially predicting competitive behaviour [9]. The discussion on whether
game theory can be used in marketing then extended to other possibilities. Nash
equilibrium is also a very important concept to consider when referring to game theory
as it refers to a stable state in a game where no player can gain an advantage by
changing their strategy, though this theory is typically used in economics.
The fashion industry already uses game theory but they use it to determine how
customers will buy their clothes. The fashion industry is hard for both buyers and
sellers. Buyers are constantly trying to find good deals on clothes that suit them. Sellers
on the other hand want to move as much inventory as possible at the best price
possible. The solution to both worlds is sales. Sales move high inventory for sellers and
buyers get reasonably prices clothing [10]. This can be seen as using game theory as
they’ve found the best solution for both players. If sellers kept prices high all the time,
buyers wouldn’t buy meaning that inventory will be low. If sellers kept prices low then
inventory movement would increase, however, sellers wouldn’t be making the most out
of their products. So, by having sales every now and then, buyers maintain happy and
as do sellers.
Game theory is also used in this industry to protect designs and explain fashion trends.
For example, every season or phase there is always a trend that every retailer sells until the
next trend passes by. How this works is that the fashion industry is in an oligopolistic
competition with each other. Each firm’s product is typically unique to their own brand
but they all want to maximise profit so they compete by creating products on trend until
equilibrium is achieved. Game theory shows the outcomes and how people will benefit if
the designs are copied. If copying the exact design gives the copier a high incentive to
copy then the designers are more likely to legally protect their designs [11].
If the fashion industry already uses game theory to determine how customers will buy clothes, it is worth asking whether game theory can also determine their marketing. A good marketing strategy plays a very important role in a market where competitors target the same consumers with identical or very similar products. Companies competing in such a market have to choose between two main marketing strategies: product discounting or advertisement expenditure. Expenditure on advertisement can help the brand differentiate the product, amplify consumers’ perception of the product and make sure that consumers are aware of the product’s advantages. Product discounting, by contrast, implies that the product is sold at a cheaper rate and assists the brand in increasing its customer base so that more people buy it. Both strategies have benefits and limitations, and by using game theory a firm should be able to find a balance between the two strategies for optimum payoffs. H&M [4] face the difficulty of finding a suitable strategy, as their products are reasonably priced while they also spend heavily on advertisement, which is why game theory should be able to help them decide which route is best for them.
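To make the choice between the two strategies concrete, the following minimal Python sketch enumerates the pure-strategy Nash equilibria of a hypothetical two-retailer game between “discount” and “advertise”; the payoff numbers are illustrative assumptions only and do not describe H&M or any real competitor.

```python
from itertools import product

# Hypothetical payoffs (retailer_A, retailer_B) for each strategy pair.
# Numbers are illustrative only.
payoffs = {
    ("discount", "discount"):   (2, 2),
    ("discount", "advertise"):  (5, 1),
    ("advertise", "discount"):  (1, 5),
    ("advertise", "advertise"): (4, 4),
}
strategies = ["discount", "advertise"]

def best_response_a(b_strategy):
    """Strategies of A that maximise A's payoff given B's choice."""
    best = max(payoffs[(a, b_strategy)][0] for a in strategies)
    return {a for a in strategies if payoffs[(a, b_strategy)][0] == best}

def best_response_b(a_strategy):
    """Strategies of B that maximise B's payoff given A's choice."""
    best = max(payoffs[(a_strategy, b)][1] for b in strategies)
    return {b for b in strategies if payoffs[(a_strategy, b)][1] == best}

# A pure Nash equilibrium is a pair in which each player best-responds to the other.
equilibria = [(a, b) for a, b in product(strategies, strategies)
              if a in best_response_a(b) and b in best_response_b(a)]
print(equilibria)
```

With these illustrative payoffs the only equilibrium is mutual discounting, even though mutual advertising would leave both firms better off, which is exactly the kind of tension between individual and joint payoffs that game theory is meant to expose.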
There are many limitations to using game theory when making marketing decisions. Many marketers are against using game theory because they believe it is not useful. They believe that, because game theory is very practical, it does not take into consideration managerial insights regarding competition and co-operative decision-making
behaviour [12]. There are many criticisms regarding the usefulness of game theory in making marketing decisions; for example, in marketing the best choice is often not made on the basis of price but on irrational motives such as the emotional connection that the consumer may have [13]. In relation to H&M, a marketing decision that seems logical might not be the best suited, because H&M’s consumers may not be able to relate to or understand the marketing strategy.
While game theory is useful for indicating the outcomes of many different strategic choices, it is unlikely to yield precise solutions to marketing problems. The theory requires choosing the appropriate set of variables or available options, the set of potential outcomes and the objectives assumed by the firms involved. The uncertainty present in each of these elements means that the precision of the resulting choice is limited [14].
One of the main issues with using game theory as a marketing tool is that game theory analyses the behaviour of rational participants, with predictable decisions and easily explained deviations, if any. Marketing’s main reason for existence is to control consumer behaviour, which is typically irrational and can be affected by multiple and usually unidentifiable factors, such as feelings and desires, which cannot be predicted. Game theory also does not consider the marketing department’s role in ensuring that the brand’s image is created and protected. Because of the uncertainty of public opinion, a decision that seems the most rational or logical could be the worst approach in terms of publicity [13]. This means that a marketing idea that seems ideal may not work if it does not fit H&M’s branding, which can cause many issues with consumers.
The hypotheses on which game theory is founded can be seen as far from the realities of this world, which is why game theory may be considered useless in the complex world of marketing. The criticism that keeps coming up regarding the application of game theory in marketing is that it analyses rational players’ behaviour. In marketing, the relation between the price and the quality of goods is not the main reason for a consumer’s purchase. Irrational and intangible factors can sometimes come before physical and price factors. H&M could produce expensive products, but because the products are sustainable or relate to consumers’ desires, consumers will not worry about how much the item costs; in the same way, if a product is very cheap but harms animals in the process or is not good for consumers, they will not purchase it because they do not believe in the product.
Game Theory Benefits
Although there are many marketing practitioners who are against using game theory in marketing, there are some who are in favour of it. Competition is often not handled well by other models: earlier marketing models were mostly optimising and asymmetric because they took the view of ‘a single active decision maker’ [15]. Competitors are often treated as non-reactive, when in reality the opposite is true. Game theory, however, is the ideal model for the interdependence and the interaction effects that exist between competing firms, as it does not assume that the competition will not react but addresses the competition directly and makes it an essential part of the marketing decision [16]. Non-cooperative competition like this links well with Nash equilibrium, as it takes into consideration the ways competitors may go against you, which allows the business to prepare for the worst and use it to its
advantage. This means that when deciding on marketing, a company can see all potential outcomes [17]. H&M are thus able to observe everything that may happen with regard to their competition and to find the strategy best suited to them, considering all the competition.
Bacharach [18] claims that there are many merits to using game theory to determine a marketing strategy. He argues that game theory provides a well-defined set of possibilities for all players, which allows each player to consider every possible result and then choose the best path.
Big data makes game-theoretic attribution feasible today because, in principle, the system becomes more precise each time data is collected on a consumer’s buying journey. It may not be the ultimate solution, as game theory’s main incompatibility with marketing is its assumption of rational decision makers, but it has come a long way [19]. Instead of using typical models such as last-click attribution, game theory can share credit for sales across multiple points of a customer’s purchasing journey. This gives marketers a chance to paint a clearer picture of which activities to do more of and where money can be saved.
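The credit-sharing idea can be illustrated with a Shapley value attribution over marketing touchpoints, a common game-theoretic attribution scheme. In the sketch below the channel names and the coalition values (e.g. expected conversions per subset of channels) are hypothetical stand-ins for quantities that would be estimated from real customer journey data.

```python
from itertools import permutations

channels = ["social", "email", "search"]

# Hypothetical value (e.g. expected conversions) of each subset of channels.
# In practice this would be estimated from observed customer journeys.
coalition_value = {
    frozenset(): 0.0,
    frozenset({"social"}): 10.0,
    frozenset({"email"}): 8.0,
    frozenset({"search"}): 12.0,
    frozenset({"social", "email"}): 20.0,
    frozenset({"social", "search"}): 24.0,
    frozenset({"email", "search"}): 22.0,
    frozenset({"social", "email", "search"}): 30.0,
}

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = frozenset()
        for p in order:
            shapley[p] += value[seen | {p}] - value[seen]
            seen = seen | {p}
    return {p: v / len(orders) for p, v in shapley.items()}

print(shapley_values(channels, coalition_value))
```

The resulting credits sum to the value of the full coalition, so each channel receives a share of the total conversions instead of the last click receiving all of it.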
The main reason why people do not use game theory as a marketing tool is that consumers do not make choices by weighing costs and benefits, but instead by choosing according to how they feel emotionally about the product. This defies game theory because it defies logic. However, as long as the situation can be rationalised, game theory is actually very helpful. Game theory has not been used much in marketing not because it is impossible but because it is a challenge [13]. Whether the outcomes are worth the effort will depend on the individual firm, but marketing is a notoriously competitive world, and using game theory may be just the competitive edge the fashion industry needs.
Lastly, Nash equilibrium focuses on non-cooperative competition, which essentially takes into account the ways in which competitors may stray and go against you. Nash equilibrium is typically what companies want to consider when making strategic decisions in a stable market, since no particular benefit can be gained from drastic changes. This is helpful to H&M in case their competitors are not in a cooperative environment, as it takes into account what other retailers may do to work against them. In H&M’s case, they have many competitors, and it is hard to ensure that all competitors will cooperate so that everyone benefits, as the fashion industry is a hugely competitive market.
5 Conclusion
Ultimately, game theory has been used in many fields and carries both risk and challenge when used to decide marketing strategies, as participants can be irrational. However, by using game theory, H&M can see all possible outcomes and decide which strategy is best suited to them. They can also see the potential actions their competitors may pursue, so on balance H&M is much better off using game theory to decide which route is best for them in terms of marketing strategy. Game theory may work well for them, as it can mimic many real-life situations and can be applied to almost any field.
References
1. Montet, C., Serra, D.: Game Theory and Economics. Palgrave Macmillan, Houndmills
(2003)
2. Ott, U.: International business research and game theory: looking beyond the prisoner’s
dilemma. Int. Bus. Rev. 22(2), 480–491 (2013)
3. Mohammadi, A., et al.: The combination of system dynamics and game theory in analyzing
oligopoly markets. Manage. Sci. Lett. 19(1), 265–274 (2016)
4. H&M.: H&M Group, Vision & Strategy. About.hm.com (2019). https://about.hm.com/en/
sustainability/vision-and-strategy.html. Accessed 29 Apr 2019
5. Azar, O.: The influence of psychological game theory. J. Econ. Behav. Organ. 20(2), 234–
240 (2018)
6. Smith, C., Neumann, J., Morgenstern, O.: Theory of games and economic behaviour. Math.
Gaz. 29(285), 131 (1945)
7. Peleckis, K.: The use of game theory for making rational decisions in business negations: a
conceptual model. Entrepreneurial Bus. Econ. Rev. 3(4), 105–121 (2015)
8. Dufwenberg, M.: Game theory. Wiley Interdisc. Rev. Cogn. Sci. 2(2), 167–173 (2010)
9. Herbig, P.: Game theory in marketing: applications, uses and limits. J. Mark. Manage. 7(3),
285–298 (1991)
10. Mediavilla, M., Bernardos, C., Martínez, S.: Game theory and purchasing management: an
empirical study of auctioning in the automotive sector. In: Umeda, S., (eds.) Advances in
Production Management Systems: Innovative Production Management Towards Sustainable
Growth, pp. 199–206. Springer, Cham (2015)
11. Wong, T.: To copy or not to copy, that is the question: the game theory approach to
protecting fashion designs. Univ. PA Law Rev. 160(04), 1139–1193 (2012)
12. Rivett, P., Wagner, H.: Principles of operations research. Oper. Res. Q. (1970–1977) 21(4),
484 (1975)
13. Dominici, G.: Game theory as a marketing tool: uses and limitations. Elixir Mark. 36, 3524–
3528 (2011)
14. Moorthy, K.: Using game theory to model competition. J. Mark. Res. 22(3), 262 (1985)
15. Chatterjee, K., Lilien, G.: Game theory in marketing science uses and limitations. Int. J. Res.
Mark. 3(2), 79–93 (1986)
16. Özer, O.: Determining the best sales time period for dried figs: a game theory application.
J. Int. Food Agribusiness Mark. 27(2), 91–99 (2015)
17. Possajennikov, A.: Imitation dynamic and nash equilibrium in Cournot oligopoly with
capacities. Int. Game Theory Rev. 05(03), 291–305 (2003)
18. Bacharach, M.: Economics and the Theory of Games. Westview Press, Boulder (1977)
19. Zheng, Z., et al.: Game theory for big data processing: multileader multifollower game-based
ADMM. IEEE Trans. Signal Process. 66(15), 3933–3945 (2018)
Multidimensional Analysis Between
High-Energy-Physics Theory Citation
Network and Twitter
1 Introduction
The first network consists of a subnet of Twitter mentions and retweets related to a given period (continuously updated), consisting of 3657 nodes and 188712 arcs. Each node represents a user (@user_name). If the user @A mentions the user @B or retweets one of @B’s tweets, a directed arc is created from the @A node to the @B node, and not necessarily an arc from @B to @A.
The second network concerns the e-prints of the HEP-Th section of the arXiv archive, from January 1993 until April 2003. It contains 27770 nodes and 352807 arcs. Each node represents an author. If author A cites author B, there is a directed arc from node A to node B, and not necessarily an arc from B to A.
The network of citations in theoretical physics has a giant connected component [2], composed of 27,400 nodes, equivalent to 98.7% of the total network. A few other components disconnected from the central one have been detected. In the Twitter network of mentions and retweets, on the other hand, the giant connected component is composed of 3,656 nodes, which is practically 100% of the total network. Although the number of nodes of the two giant components differs, the percentage ratio confirms the possibility of comparing them. Network analysis produced the following results:
will also be part of the study. In particular, in the in-degree distribution, a greater accumulation of nodes with a degree between 100 and 500 is noticeable, probably due to the presence of texts that are cited more than others.
The red line in the figure marks the power law [5], which corresponds to the evolution of both trends, even if the in-degree distribution follows it most closely.
Analyzing the degree distributions of the Twitter network (Fig. 2), a more evident uniformity of the output arcs is observable. That implies a regular trend with respect to the power law, a factor highlighted by the largest number of nodes having only one arc in output. The effect is underlined by a decrease in the number of nodes as the degree increases. In the in-degree distribution, on the other hand, the trend is not as regular: the largest number of nodes has only one arc in input, but there is a peak corresponding to a group of nodes with between 100 and 500 incoming arcs, as in the citation network. Since the present network is composed of both mentions and retweets, the peak is probably due to two factors: a massive presence of users who are often cited or retweeted (such as “influencers”) or tweets related to trending topics. In the second case, the popularity of a trend (marked with a hashtag #) can act as a “boost” that quickly increases the mentions of a specific user whose tweet was marked as particularly relevant [6].
Both graphs contain a small number of nodes with a very high degree: these are called hubs and denote the presence of a scale-free network [7].
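As a rough illustration of this kind of degree analysis, the following Python sketch uses networkx on a synthetic scale-free directed graph of the same order as the Twitter subnet; the real study would instead load the Twitter or HEP-Th edge lists, for example with nx.read_edgelist, so the graph below is only a stand-in.

```python
from collections import Counter

import networkx as nx

# Synthetic scale-free directed graph as a stand-in for the real networks.
# The real edge lists could be loaded with nx.read_edgelist(path, create_using=nx.DiGraph()).
G = nx.scale_free_graph(3657, seed=42)

in_dist = Counter(d for _, d in G.in_degree())
out_dist = Counter(d for _, d in G.out_degree())

print("nodes with in-degree 1:", in_dist[1])
print("nodes with in-degree >= 100:", sum(c for k, c in in_dist.items() if k >= 100))

# A small number of very-high-degree hubs is the usual signature of a scale-free network.
hubs = sorted(G.in_degree(), key=lambda x: x[1], reverse=True)[:5]
print("top in-degree hubs:", hubs)
```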
Fig. 3. High-energy physics theory citation network: shortest path length. Fig. 4. Twitter mentions and retweets network: shortest path length.
platform, and many of the mentions made over a certain period of time concern a limited number of popular themes (and users) at that time (trending topics). Therefore, assuming that the user @C is a popular user, the probability that both the user @A and the user @B mention @C increases, contributing to a decrease in the average distance.
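The average distance argument can be checked numerically with a sampling estimate of the average shortest path length on the giant component. The sketch below runs on a small random directed graph as a stand-in for the real networks, and the sampling procedure is our own simplification, not the authors’ exact method.

```python
import random

import networkx as nx

def approx_avg_shortest_path(G, n_sources=200, seed=1):
    """Estimate the average shortest path length on the giant weakly connected
    component by running BFS from a random sample of source nodes."""
    giant = G.subgraph(max(nx.weakly_connected_components(G), key=len))
    U = giant.to_undirected()
    rng = random.Random(seed)
    sources = rng.sample(list(U.nodes()), min(n_sources, U.number_of_nodes()))
    total, count = 0, 0
    for s in sources:
        lengths = nx.single_source_shortest_path_length(U, s)
        total += sum(lengths.values())
        count += len(lengths) - 1  # exclude the zero-length path to the source itself
    return total / count

# Demo on a small random directed graph (stand-in for the citation and Twitter graphs).
G = nx.gnp_random_graph(500, 0.01, seed=42, directed=True)
print(approx_avg_shortest_path(G))
```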
The third metric analyzed is the clustering coefficient, which estimates how many of the nodes adjacent to a given node are also related to each other. This index is useful for analyzing the potential dissemination of information between the various nodes. The greater the coefficient, the lower the network’s efficiency in disseminating information, as a high value is a manifestation of greater closure of the network itself. In social networks, where arcs represent an interaction, the clustering coefficient provides an estimate of how closed the group, or community, is with respect to the other nodes in the network. Before scanning the data, the intuitive expectation was a considerable gap between the two coefficients, with the citation network far more closed (and therefore much more passive) than Twitter. On the contrary, scanning the two datasets reveals only a slight discrepancy of a few hundredths, with the social network coefficient even slightly higher: 0.157 for citations against 0.174 for Twitter.
However, before drawing definitive conclusions, a further observation based on the different sizes of the networks was necessary. To this end, the average number of neighboring nodes was analyzed, since it is the first quantity determining the clustering coefficient. The values obtained are 84.673 for Twitter against 25.372 for the HEP-Th citations. Although the average number of neighbors in the social network is more than three times that of the citation network, the clustering coefficients are almost identical; this is a very low value for Twitter, considering the different numerical sizes of the two networks. A direct interpretation of this comparison is precisely the greater fluidity of finding, joining and sharing the flow of information on social networks, compared to other types of networks. In the case of Twitter this flow can be triggered by retweeting or by searching via hashtag. Not to be overlooked is the fact that interaction on Twitter can happen very easily even among non-neighbors on the same “network route”. In fact, information can be found not only through a tweet of a followed account, but also (and very often) by virtue of “threads of discussion” generated by the trending topics of that particular moment.
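The comparison between the average clustering coefficient and the mean number of neighbours can be reproduced with a few lines of networkx; the demo graph below is synthetic, and the real comparison would be run on the HEP-Th citation graph and the Twitter mentions/retweets graph.

```python
import networkx as nx

def cohesion_summary(G):
    """Average clustering coefficient and mean number of neighbours,
    computed on an undirected view of the (possibly directed) graph."""
    U = nx.Graph(G)  # collapse direction and parallel arcs
    avg_clustering = nx.average_clustering(U)
    mean_neighbours = sum(dict(U.degree()).values()) / U.number_of_nodes()
    return avg_clustering, mean_neighbours

# Demo on a small random directed graph; a stand-in for the two real networks.
G = nx.gnp_random_graph(300, 0.05, seed=7, directed=True)
print(cohesion_summary(G))
```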
The shape of the two distributions also confirms what has been observed. The Twitter graph (Fig. 5) appears as a flow of nodes that does not follow a regular line of cohesion, despite the presence of areas characterized by hubs with a greater density of relationships. Reading the graph of the physics citation network (Fig. 6), an apparent cleanliness in the distribution of relational flows is reported, developed with a curvilinear proportionality from left to right (the peak of the number of neighboring nodes lies between 0.1 and 0.01). This index shows how the
coefficient decreases with the increase in the number of neighbors and therefore there is
more openness in the dissemination of information.
Fig. 5. High-energy physics theory citation network: clustering coefficient. Fig. 6. Twitter mentions and retweets network: clustering coefficient.
The relational dimension of Twitter, on the other hand, in addition to being very jagged, does not follow the theoretical logic of the clustering coefficient (more close nodes implying less closure), since most of the aggregated data show that the coefficient increases as the number of close nodes increases. It follows, therefore, that the relational dynamics of retweets and mentions are not mainly based on the principle of “closeness” [11], but privilege other parameters such as the “richness” and “popularity” of the contents of the tweets.
4 Conclusion
In conclusion, it can be inferred that the analysis of the in/out degree, average shortest path length and clustering coefficient fully confirms Milgram’s studies on the “six degrees of separation”, from which the small-world effect is extrapolated. According to this theory, each node is connected to a few other nodes, but it can reach any other node in the network thanks to the presence of hubs.
Directly connected to these phenomena, and widely applicable in other settings (always with the same networks), could be an experiment on the information cascade, according to which the behavior of a node in managing the incoming communication flow is influenced by the degree of similarity of the other neighboring nodes in making choices of this type.
Ultimately, the major difference between the behaviors of the two datasets lies in the regularity, or lack of it, with which information flows are managed. Where the textual citations correctly match the expected values, those of Twitter presumably suffer the influence of external factors that cannot be directly monitored.
References
1. Leskovec, J., Kleinberg, J., Faloutsos, C.: Graphs over time: densification laws, shrinking
diameters and possible explanations. In: ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining (KDD), pp. 149–151 (2005)
2. Klimkova, E., Senkerik, R., Zelinka, I., Sysala, T.: Visualization of giant connected
component in directed network - preliminary study, Mendel, pp. 412–416 (2011)
3. Zhang, J., Luo, Y.: Degree centrality, betweenness centrality, and closeness centrality in
social network. Adv. Intell. Syst. Res. 132, 300–303 (2017)
4. Wang, Y., Ma, H.-S., Yang, J.-H., Wang, K.-S.: Industry 4.0: a way from mass
customization to mass personalization production. Adv. Manuf. 5(4), 311–320 (2017)
5. Barabasi, A.-L., Jeong, H., Neda, Z., Ravasz, E., Schubert, A., Vicsek, T.: Evolution of the
social network of scientific collaborations. Physica 311, 590 (2002)
6. Newman, M.E.J.: Scientific collaboration networks. I. Network construction and funda-
mental results. Phys. Rev. E 64, 016131 (2001)
7. Barabási, A.: Linked: How Everything is Connected to Everything Else and What It Means
for Business, Science, and Everyday Life. Plume, New York (2003)
8. Marvel, S.A., Martin, T., Doering, C.R., Lusseau, D., Newman, M.E.J.: The small-world
effect is a modern phenomenon (2013)
9. Shamai, G., Kimmel, R.: Geodesic distance descriptors, pp. 3624–3632 (2017)
10. Sampaio, C., Moreira, A., Andrade, R., Herrmann, H.J.: Mandala networks: ultra-small-
world and highly sparse graphs. Sci. Rep. 13, 9082 (2015)
11. Okamoto, K., Chen, W., Li, X.-Y.: Ranking of closeness centrality for large-scale social
networks. In: Preparata, F.P., Wu, X., Yin, J. (eds.) Frontiers in Algorithmics, pp. 186–195.
Springer, Heidelberg (2008)
Application of Variable Step Size Beetle
Antennae Search Optimization Algorithm
in the Study of Spatial Cylindrical Errors
1 Introduction
In fact, the evaluation of spatial cylindrical error using the minimum zone method measures the deviation of a cylinder relative to an ideal cylinder, that is, it determines the two best coaxial cylinders containing the measuring points. The measured cylinder will have many spatial measurement points. These measuring points can be contained by different pairs of coaxial cylinders. According to the minimum zone principle, there is an optimal pair of coaxial cylinders containing all measuring points [16]. The spatial cylindrical error is the radius difference between the two coaxial cylindrical surfaces containing the measured points. Here it is expressed as the difference between the maximum distance and the minimum distance from the measured points to the ideal axis. The spatial cylindrical error schematic diagram is shown in Fig. 1.
As shown in Fig. 1, the axis direction of the cylinder is assumed to be (a, b, c), and a plane passing through the coordinate origin and normal to (a, b, c) is constructed. The intersection point between this plane and the axis of the cylinder to be measured is denoted (x0, y0, z0). The expression of the ideal axis of the cylinder to be measured is then
\frac{x - x_0}{a} = \frac{y - y_0}{b} = \frac{z - z_0}{c} \quad (1)
r_i = \sqrt{\frac{A_i^2 + B_i^2 + C_i^2}{a^2 + b^2 + c^2}} \quad (2)
Finally, the objective function of the spatial cylindrical error can be expressed as
f(x_0, y_0, z_0) = \max_i(r_i) - \min_i(r_i) \quad (3)
According to the minimum zone method, the solution of the cylindrical error is the optimization problem of the objective function (3), a non-linear function of the variables (x0, y0, z0).
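A minimal numerical sketch of this objective is given below: the distance r_i from each measured point to the candidate axis is computed as the norm of a cross product, matching Eq. (2) as reconstructed, and the objective of Eq. (3) is the spread max(r_i) - min(r_i). The variable names and the synthetic test points are our own assumptions.

```python
import numpy as np

def cylindricity_error(params, points):
    """Minimum zone objective: difference between the largest and smallest
    distance from the measured points to a candidate axis.
    params = (x0, y0, z0, a, b, c); points is an (N, 3) array of measurements."""
    x0, y0, z0, a, b, c = params
    p0 = np.array([x0, y0, z0])
    d = np.array([a, b, c], dtype=float)
    d = d / np.linalg.norm(d)
    # Distance from a point p to the line p0 + t*d is |(p - p0) x d| for unit d.
    diffs = points - p0
    r = np.linalg.norm(np.cross(diffs, d), axis=1)
    return r.max() - r.min()

# Illustrative call with synthetic points scattered around the z-axis.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 50)
pts = np.c_[10 * np.cos(theta), 10 * np.sin(theta), rng.uniform(0, 100, 50)]
print(cylindricity_error((0, 0, 0, 0, 0, 1), pts + rng.normal(0, 0.01, pts.shape)))
```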
3 Proposed Algorithm
X_L = x + d \cdot \vec{b}/2 \quad (4)
X_R = x - d \cdot \vec{b}/2 \quad (5)
F_L = f(X_L) \quad (6)
F_R = f(X_R) \quad (7)
where d is the sensing (antenna) distance and \vec{b} is a normalized random direction vector.
5. If the number of iterations has not reached the maximum number of iterations, return to step 4. When the maximum number of iterations is reached, all calculations stop, and the result of the final iteration is the error value of the spatial cylinder. The algorithm flow chart is shown in Fig. 2.
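The search loop itself can be sketched as follows; this follows the generic beetle antennae search update of Eqs. (4)-(7) with a simple geometric step-decay rule, and the decay factor, iteration count and other settings are illustrative assumptions rather than the authors’ exact VSBAS parameters.

```python
import numpy as np

def vsbas(objective, x0, step=1.0, step_decay=0.95, step_min=1e-6,
          max_iter=200, seed=1):
    """Variable step size beetle antennae search (illustrative sketch).
    At each iteration the 'beetle' senses the objective at two antennae placed
    along a random direction and moves towards the better side; the step shrinks."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), objective(x)
    for _ in range(max_iter):
        b = rng.normal(size=x.shape)
        b /= np.linalg.norm(b)            # random unit direction
        x_left = x + step * b / 2          # Eq. (4)
        x_right = x - step * b / 2         # Eq. (5)
        f_left, f_right = objective(x_left), objective(x_right)  # Eqs. (6)-(7)
        x = x - step * b * np.sign(f_left - f_right)
        f = objective(x)
        if f < best_f:
            best_x, best_f = x.copy(), f
        step = max(step * step_decay, step_min)  # variable (shrinking) step size
    return best_x, best_f

# Quick self-contained check on a simple quadratic bowl.
print(vsbas(lambda p: float(np.sum((p - 3.0) ** 2)), np.zeros(3)))
# For the cylindricity problem, the objective sketched earlier could be passed in,
# e.g. vsbas(lambda p: cylindricity_error(p, pts), [0, 0, 0, 0, 0, 1]).
```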
GD(P, P^*) = \frac{\sqrt{\sum_{u \in P} d(u, P^*)^2}}{|P|} \quad (13)
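A small sketch of the GD test metric, as reconstructed in Eq. (13), is given below; it assumes d(u, P*) is the Euclidean distance from an obtained point to its nearest reference point and uses the common squared-distance form, which may differ in detail from the authors’ implementation.

```python
import numpy as np

def generational_distance(P, P_ref):
    """GD(P, P*): square root of the summed squared distances from each obtained
    point to its nearest reference point, divided by the number of points."""
    P = np.asarray(P, dtype=float)
    P_ref = np.asarray(P_ref, dtype=float)
    # Pairwise distances between obtained points and reference points.
    dists = np.linalg.norm(P[:, None, :] - P_ref[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    return np.sqrt(np.sum(nearest ** 2)) / len(P)

print(generational_distance([[0.0, 1.0], [1.0, 0.5]], [[0.0, 1.0], [1.0, 0.0]]))
```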
The CMM measurement data in this paper come from the reference; see Table 1 for the original data. The variable step size BAS algorithm is then verified by experiments. The parameters of VSBAS are as follows: population size NP = 100, variable step
Table 2 shows the final results obtained with the secondary annealing teaching-learning-based optimization algorithm (2ATLBO), particle swarm optimization (PSO), and the variable step size beetle antennae search algorithm (VSBAS) designed in this paper, based on the data measured in Table 1. According to Table 2, compared with the other algorithms, the VSBAS algorithm has the highest accuracy and better convergence speed. This shows that the VSBAS algorithm gives better results in solving the problem of spatial cylindrical error evaluation.
Figure 3 shows the iteration curves of the three algorithms mentioned above. As shown in Fig. 3, the VSBAS algorithm has better accuracy and convergence speed than the 2ATLBO and PSO algorithms.
5 Conclusions
In this paper, the BAS is improved and a variable step size BAS algorithm (VSBAS) is designed. The VSBAS algorithm is applied to solve the mathematical model of the spatial cylindrical error. The variable step size method helps the BAS algorithm avoid falling into local optima. The VSBAS algorithm is tested using the GD and IGD metrics, and the test results show that the algorithm has good convergence behaviour and convergence speed. It is found that the VSBAS algorithm is superior to the other algorithms (2ATLBO, PSO) in solution accuracy and convergence speed. At the same time, compared with other algorithms, the VSBAS algorithm has fewer parameters and is more convenient to implement. When the data measured by the coordinate measuring machine are fed into the algorithm for the evaluation of spatial cylindrical error, better results are obtained.
References
1. Kjølle, A.: Mechanical Equipment. Hydropower in Norway, Trondheim, December 2001
2. Mobley, R.K.: An Introduction to Predictive Maintenance, 2nd edn. Butterworth
Heinemann, Boston (2003)
3. Wang, K., Wang, Y.: How AI affects the future predictive maintenance: a primer of deep
learning. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T. (eds.) Advanced Manufacturing
and Automation VII, IWAMA 2017. Lecture Notes in Electrical Engineering, vol. 451,
pp. 1–9. Springer, Singapore (2017)
4. Wang, Y., Ma, H.-S., Yang, J.-H., Wang, K.-S.: Industry 4.0: a way from mass
customization to mass personalization production. Adv. Manuf. 5(4), 311–320 (2017)
5. Bram, J., Ruud, T., Tiedo, T.: The influence of practical factors on the benefits of condition-
based maintenance over time-based maintenance. Reliab. Eng. Syst. Saf. 158, 21–30 (2017)
6. Gao, Z., Sheng, S.: Real-time monitoring, prognosis, and resilient control for wind turbine
systems. Renew. Energy 116(B), 1–4 (2018)
7. Matheus, P.P., Licínio, C.P., Ricardo, K., Ernani, W.S., Fernanda, G.C., Luiz, M.: A case
study on thrust bearing failures at the SÃO SIMÃO hydroelectric power plant. Case Stud.
Therm. Eng. 1(1), 1–6 (2013)
8. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780
(1997)
9. Martin, L., Lars, K., Amy, L.: A review of unsupervised feature learning and deep learning
for time-series modeling. Pattern Recogn. Lett. 42, 11–24 (2014)
10. Subutai, A., Alexander, L., Scott, P., Zuha, A.: Unsupervised real-time anomaly detection
for streaming data. Neurocomputing 262, 134–147 (2017)
A Categorization Matrix and Corresponding
Success Factors for Involving External
Designers in Contract Product Development
1 Introduction
2 Supplier Involvement
3 Methodology
A list of success factors for supplier team involvement and information sharing was derived from the literature. The success factors found in prior research pertain to off-line product development [2, 4] and [7]. The impact of these factors on the success of design supplier involvement in contract development projects was investigated through a survey.
Data was collected via a survey from three groups. Group A are companies which recently had one or more projects that included design supplier integration. Group B are companies that do not integrate design suppliers. Group C consists of consultants, project managers and professionals that are not employed by the design supplier or the buyer. These professionals coordinate development projects that involve design suppliers. The companies selected are involved with the author’s company, so some prior knowledge about their business was available. This article focusses on technology companies based in Norway that do product development. The companies in focus have products with low-volume, single-batch production, i.e. they only make one or a few batches of a product before they change the design. The goal of collecting this data is to investigate whether it is possible to create a model for categorizing the role of the design supplier in contract product development along with corresponding success factors.
4 A Categorization Approach/Matrix
involved, as found in [7], are: arm’s-length and strategic involvement. The development risk concerns the complexity of the project, and thereby the degree of involvement. High risk implies a long development time and a high degree of supplier involvement, meaning that the buyer’s company needs to choose its collaboration partners strategically. Low-risk involvement puts less critical decisions on the supplier, utilizing them more as support.
In this study, we combine the two categorization approaches found in [5] and [6] for project type and supplier involvement respectively. Using the survey data, this article proposes combining the type of project and the degree of risk into a matrix, as shown in Fig. 1. The figure shows a categorization of the roles of the design suppliers in contract development, split into four groups: firstly, a split of the buyer’s needs between capacity and know-how projects; secondly, a split between the degrees of development risk (low and high), which correspond to arm’s-length involvement and strategic involvement of the supplier.
Fig. 1. Design supplier roles based on buyer’s needs and supplier involvement.
The first role, “purchased design capacity”, pertains to projects where the design supplier has responsibility for less critical parts or components. The project lies firmly inside the design supplier’s core working area and they are considered competent in their field. The buyer typically asks for a solution to their problem, and the design supplier delivers the part or component with minimal interaction after the first inquiry. While the part or component in question is designed by the supplier, it may be a standard solution; from the buyer’s side, however, this is a component developed by the supplier.
The second role, “module design specialist”, is somewhat similar in that the supplier-buyer interaction is limited. However, the “module design specialist” role requires more information sharing, as the design supplier provides a custom product that meets the specifications of the buyer. This may be a customized, made-to-order development, where the design supplier customizes one of their standard products to meet the buyer’s needs. The design supplier is considered an expert in the field and has been selected by the buyer for this reason.
The “design team member” role requires a significant amount of information sharing, as the design supplier is involved in the development of a complex system. The design supplier’s responsibility is not for the critical components, but the complex nature of the development requires coordination of information such as product specifications, interfaces and other non-trivial information. The design supplier is considered part of the development team but does not take the lead role in specifying the entire
system. The buyer requires the capacity of the design supplier in order to complete the
project.
The last role is “systems architect”; here the design supplier has special knowledge of the critical sub-systems of the project. The design supplier is included in the development team. Information sharing is critical to the success of the product. Product specification, interfaces, production methods and most of the key decisions concerning the development are made in coordination with the buyer. Often the design supplier and buyer will co-locate in order to maximize coordination and allow for informal information sharing. The design supplier is considered an expert in the field and will design critical parts or components and make technical decisions concerning the development.
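Read as a lookup table, the matrix of Fig. 1 can be written down in a few lines; the role names follow the figure, while the mapping below is our reading of the four role descriptions rather than a formal part of the model.

```python
# Buyer's need ("capacity" or "know-how") and supplier involvement
# ("arm's length" or "strategic") jointly determine the supplier role (Fig. 1).
ROLE_MATRIX = {
    ("capacity", "arm's length"): "purchased design capacity",
    ("know-how", "arm's length"): "module design specialist",
    ("capacity", "strategic"):    "design team member",
    ("know-how", "strategic"):    "systems architect",
}

def supplier_role(buyer_need: str, involvement: str) -> str:
    return ROLE_MATRIX[(buyer_need, involvement)]

print(supplier_role("know-how", "strategic"))  # -> "systems architect"
```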
From the survey, the results for companies not currently involving design suppliers (group B) show that they would consider involving design suppliers in the “purchased design capacity” or “design team member” roles. The respondents generally want to supplement their current activities by outsourcing some of the development of less complex tasks; this is done to free up internal capacity or to compensate for a general lack of project engineers. Cost and quality are important. While they could have involved design suppliers, they have chosen to do the development in-house.
The survey results for companies involving design suppliers (group A) correlate with know-how projects. They want to reduce cost and increase quality by having a specialist perform the development. The reasoning may be that a specialist can perform the task in less time than it would take to develop the expertise in-house. It is not evident how the respondents are distributed between the “module design specialist” and “systems architect” roles.
The professionals’ survey data (group C) show no clear signs of belonging to either the capacity or the know-how projects, which is natural as they have worked in a wide variety of projects, leading to no clearly defined position. However, this article assumes that they were involved in the high-risk roles, “design team member” or “systems architect”, because the professionals are hired to lead projects that have a high degree of technical complexity.
5 Success Factors
To determine the success factors that correlate with each role, this article leans on the research done by [2, 4] and [7]. In [7], success factors are grouped into two sets: relationship structuring factors and asset allocation factors. The asset allocation factors are directly linked to the successful involvement of suppliers on the development team. The relationship structuring factors improve the effect of the asset allocation factors. Using the success factors found and the survey data, this article suggests structuring the factors so that they are associated with the roles presented in Fig. 1. The success factors are shown in Fig. 2 for each design supplier role.
The factors are organized so that arm’s-length development (“purchased design capacity” and “module design specialist”) has less long-term focus. The success factors are organized in such a way that the relationship structuring factors are more prevalent in the strategic involvement roles (“design team member” and “systems architect”).
(Figure 2 lists, per role, success factors such as: co-location; joint agreement on system functions and performance; buyer and supplier management commitment; shared end user requirements; common and linked information systems; supplier is trusted partner on the development team; joint agreement on module function and performance; technology sharing.)
Fig. 2. Success factors for the design supplier roles in contracted development.
Each level in Fig. 2 should include the factors of the levels below, so the systems architect level includes factors from all four groups. For all four roles, the success factors “specify functions and performance”, “coordinate development activities with suppliers” and “formulate communication and information sharing guidelines” are included. The last factor is of special interest, as establishing and formulating information sharing guidelines is directly connected to the quality of the relationship between buyer and supplier. These guidelines should be introduced at the start of a project or collaboration. This ensures that both the buyer and the supplier have agreed on the reporting methods. The buyer and supplier also agree on how information is shared within the development team. Note that in Fig. 2 the factors “joint agreement on module/system functions and/or performance” replace each other in the different supplier roles.
The goal of the model (Figs. 1 and 2) is to increase trust, commitment, information sharing and cooperation so that the design supplier can maximize development performance. The insight into the role of the design supplier, based on the buyer’s needs (project type) and the risk the design supplier takes on, provides decision makers with the ability to assess their allocation of resources. For example, if the project is a capacity project, the design supplier need not allocate their own internal expert. The categorization can also limit ambiguity regarding who should make the critical decisions. The two axes represent the two sides of the relationship: the project type describes the buyer’s needs, while the supplier’s development risk reflects the willingness to take an active role in the development. Pairing the role of the supplier with the corresponding success factors also allows the buyer and supplier to set the parameters of the project effectively, for example co-locating only if the role of the supplier indicates that it is prudent.
6 Conclusion
This article proposes a new model for considering the role of a design supplier in a development team. The degree of development risk and the buyer’s needs identify the role of the design supplier. This article applies the theory of supplier involvement in product development to the field of contract product development.
By using the identified roles, a buyer can facilitate the type of relationship needed for an effective collaboration with a design supplier. A reflected approach to why a certain design supplier is selected, and what kind of role the design supplier has in the project, can help define which success factors need to be in place in order to successfully involve the design supplier in contract product development. The model presented helps decision-makers to facilitate an effective cooperation between a buyer and a supplier, thereby increasing the likelihood of a successfully completed project.
References
1. Alderman, N., Thwaites, A., Maffin, D.: Project-level influences on the management and
organisation of product development in engineering. Int. J. Innov. Manage. 05(04), 517–542
(2001)
2. Johnsen, T.: Supplier involvement in new product development and innovation: taking stock
and looking to the future. J. Purchasing Supply Manage. 15(3), 187–197 (2009)
3. Clark, K.B.: Project scope and project performance: the effect of parts strategy and supplier
involvement on product development. Manage. Sci. 35(10), 1247–1263 (1989)
4. Ragatz, G.L., Handfield, R.B., Petersen, K.J.: Benefits associated with supplier integration
into new product development under conditions of technology uncertainty. J. Bus. Res. 55(5),
389–400 (2002)
5. Wynstra, F., Ten Pierick, E.: Managing supplier involvement in new product development: a
portfolio approach. Eur. J. Purchasing Supply Manage. 6(1), 49–57 (2000)
6. Wagner, S.M., Hoegl, M.: Involving suppliers in product development: Insights from R&D
directors and project managers. Ind. Mark. Manage. 35(8), 936–943 (2006)
7. Ragatz, G.L., Handfield, R.B., Scannell, T.V.: Success factors for integrating suppliers into
new product development. J. Product Innov. Manage. 14(3), 190–202 (1997)
Engineering Changes in the Engineer-to-Order
Industry: Challenges of Implementation
1 Introduction
2 Methodology
This research has been performed as a single in-depth case study in an ETO manu-
facturing company. Under the ETO context, the practices for EC implementation were
investigated. Furthermore, the case studied six EC as multiple embedded units of
research. Efficient EC implementation is affected by factors that contribute to negative
effects. These factors are any circumstance, fact or influence that contributes to ECM
performance. A range of factors that potentially influence EC implementation were
identified through a literature study. The data collection methods included semi-
structured interviews with four project managers, field observations and documentation
review. The final step in the research process was to perform a focus group that was
performed to validate the list of factors. The participants all have leadership positions
from both engineering and project management. The participants’ work experience
ranged from six to over 20 years of relevant experience. The participants were asked to
rate the influence each factor has on the negative effects of EC implementation, on a
scale from one to five (strongly disagree to strongly agree).
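A minimal sketch of how such ratings could be aggregated per factor is shown below; the participant scores are hypothetical and serve only to illustrate the one-to-five scale described above.

```python
import statistics

# Hypothetical 1-5 ratings (strongly disagree .. strongly agree) from the
# focus-group participants, per factor, on its influence on negative effects.
ratings = {
    "Product complexity": [4, 5, 4, 4],
    "EC timing": [5, 5, 4, 5],
    "Intra-organizational integration": [3, 4, 4, 3],
}

for factor, scores in ratings.items():
    print(f"{factor}: mean={statistics.mean(scores):.2f}, "
          f"median={statistics.median(scores)}")
```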
EC refers to modifications to the released structure (fits, forms and dimensions, surfaces, materials, etc.), behavior (stability, strength, corrosion, etc.), function (speed, performance, efficiency, etc.), or the relations between functions and behavior (design principles), or behavior and structure (physical laws) of a technical artefact [9]. EC are categorized as either emergent or initiated [13]. From a simplified point of view, emergent EC are requested to remove errors from a product and initiated EC to enhance it in some way [6]. Furthermore, EC can be distinguished as early, mid-production and late EC in the product development process [14]. This distinction is relevant because the degree of negative impact will vary according to the point in the project at which the EC is requested. As established, EC lead to negative effects such as increased delivery time, reduced profit margins, production schedule disturbances and increased resource allocation [1, 6].
ECM refers to the organization and control of the process of making alterations to
products [6]. ECM involves planning, controlling, monitoring and recording the EC
within many departments of a manufacturing company and systematic means of com-
munication are required. Several researchers have studied the process for ECM
[5, 6, 14, 15]. The following six steps are referred to as the ECM generic process for this
paper:
– Identify and request engineering change
– Identify possible solutions to change request
– Perform an impact evaluation of possible solutions
– Select and approve a solution
– Release and implement engineering change
– Perform post-implementation review
The following factors are identified to influence EC implementation:
1. Product complexity. Refers to the number of components, the number of levels in
the product structure and the number of design interdependencies as defined in the
Bill of Materials [6, 7, 15, 16].
2. Product customization. This refers to the level to which a product accommodates the customer-specific and individual requirements [17].
3. Product innovation. Refers to the introduction of new technology in the product
[14, 16].
4. EC timing. Refers to the project stage at which the EC is requested [14, 16, 18].
5. EC propagation. Refers to the phenomenon by which one single modification to a
component initiates a series of other changes to parts and systems the component
interacts with [13, 16].
6. Intra-organizational integration. This refers to adequate information sharing and
interaction between internal disciplines involved in EC implementation [8, 15, 18].
7. Cross-organizational integration. This refers to adequate information sharing and
interaction across organizations involved in EC implementation [1, 10].
8. Established ECM process. Refers to whether the company has adopted and follows
the activities involved in the ECM generic process [5, 6, 14, 15].
4 Results
The case company provides customized propulsion and position system for advanced
vessels. First, it was found that the company does not have dedicated tools for ECM.
Although the ERP system and the IBM planning system aid the ECM process, they are
only used in their respective departments. The business systems do not have ECM ded-
icated capabilities and do not provide a streamlined workflow nor decision-making
support for EC implementation. Second, project managers are responsible for the ECM
process. Even though they utilize aids, such as a sales configurator and the ERP system,
the EC impact evaluation was found to be very dependent on the experience of the project
manager. Third, EC approval from the customer is done through EC forms exchanged by
email. Lastly, it is difficult to assess the impact evaluation accuracy and implementation
performance since the company does not perform a post-implementation review of EC.
The study selected six EC as multiple units of analysis. The primary selection criterion was disruptive EC with significant negative effects. In addition, the six EC differed in timing (i.e. early, mid-production, and late) and origin (i.e. internal or external). They were selected from different projects and customers in order to have a representative sample. The projects varied in terms of duration and number of engineering hours. In every case, the amount of equipment provided was diverse, as was the degree of product customization. All six EC resulted in delays in delivery times, increased costs or other unwanted consequences. Table 1 provides a summary of the characteristics of the EC mapped during the semi-structured interviews.
Next, the EC in which each factor was identified as influencing EC implementation are described.
1. Product Complexity.
– EC2: The manufacturing complexity of the ring was underestimated and
increased the effort and time required in its manufacturing processes.
2. Product innovation.
– EC2: The adoption of the new solution led to unforeseen complications in the project and in subsequent projects.
– EC5: Since it was the first time this solution was offered, the cost and the hours needed for development were uncertain.
3. Product Customization.
– EC1, EC2, EC3: The high number of engineering hours is an indication of a high level of product customization.
– EC5: The solution required the development of customized software in addition to the customized propulsion equipment. There was no access to the number of engineering hours used in customizing the control panel. Early in the project, the specifications for the engines and the power management system were unknown.
4. EC timing.
– EC1: The timing of the change had the greatest influence on the negative effects, since the ship was already built and the thrusters and tanks were already installed. Hence, the development of pressurized tanks was a more complex solution and had to be implemented by the ship-owner on site.
– EC4: In this case a positive circumstance: the change was identified early, before the project was issued for production, which avoided large increases in cost.
– EC6: The mistake was identified once the equipment was produced and ready for delivery, which increased the negative effects considerably.
5. EC propagation.
– EC1: The EC led to other changes in the ship, the tanks required a pressurized
air source which had to be made available where the tanks were placed.
– EC2, EC6: The implementation of the ring caused additional changes in the
motor frame. Also, technical documentation became outdated.
6. Intra- and cross-organizational integration.
– EC3: The lack of efficient information flow, internally and externally, led to misunderstandings.
– EC4: Poor communication with the customer led to mistakes in the specifications.
5 Discussion
6 Conclusion
References
1. Mello, M.H.: Coordinating an engineer-to-order supply chain: a study of shipbuilding
projects. Norwegian University of Science and Technology, Faculty of Science and
Technology, Department of Production and Quality Engineering, Trondheim (2015)
2. Adrodegari, F., et al.: Engineer-to-order (ETO) production planning and control: an
empirical framework for machinery-building companies. J. Prod. Plann. Control 26, 910–
932 (2015)
3. Semini, M., et al.: Strategies for customized shipbuilding with different customer order
decoupling points. J. Eng. Marit. Environ. 228(4), 362–372 (2014)
4. Stavrulaki, E., Davis, M.: Aligning products with supply chain processes and strategy. Int.
J. Logistics Manage. 21(1), 127–151 (2010)
5. Terwiesch, C., Loch, C.H.: Managing the process of engineering change orders: the case of
the climate control system in automobile development. J. Prod. Innov. Manage 16(2), 160–
172 (1999)
6. Jarratt, T., Clarkson, J., Eckert, C.: Engineering change. In: Clarkson, J., Eckert, C. (eds.)
Design process Improvement, pp. 262–285. Springer, London (2005)
7. Wänström, C., Jonsson, P.: The impact of engineering changes on materials planning.
J. Manuf. Technol. Manage. 17(5), 561–584 (2006)
8. Lin, Y., Zhou, L.: The impacts of product design changes on supply chain risk: a case study.
Int. J. Phys. Distrib. Logistics Manage. 41(2), 162–186 (2011)
9. Hamraz, B., Caldwell, N.H.M., Clarkson, P.J.: A holistic categorization framework for
literature on engineering change management. J. Syst. Eng. 16(4), 473–505 (2013)
670 L. F. Hinojos A. et al.
10. Wasmer, A., Staub, G., Vroom, R.W.: An industry approach to shared, cross-organisational
engineering change handling-the road towards standards for product data processing.
J. Comput. Aided Des. 43(5), 533–545 (2011)
11. Zennaro, I., et al.: Big size highly customised product manufacturing systems: a literature
review and future research agenda. Int. J. Prod. Res. 57, 1–24 (2019)
12. Iakymenko, N., et al.: Managing engineering changes in the engineer-to-order environment:
challenges and research needs. IFAC-PapersOnLine 51(11), 144–151 (2018)
13. Eckert, C., Clarkson, P., Zanker, W.: Change and customisation in complex engineering
domains. J. Res. Eng. Des. 15(1), 1–21 (2004)
14. Reidelbach, M.A.: Engineering change management for long-lead-time production. Prod.
Inventory Manage. J. 32(2), 84 (1991)
15. Tavčar, J., Duhovnik, J.: Engineering change management in individual and mass
production. J. Robot. Comput. Integr. Manuf. 21(3), 205–215 (2005)
16. Jarratt, T., et al.: Engineering change: an overview and perspective on the literature. J. Res.
Eng. Des. 22(2), 103–124 (2011)
17. Hicks, C., McGovern, T., Earl, C.F.: Supply chain management: a strategic issue in engineer
to order manufacturing. Int. J. Logistics 65(2), 179–190 (2000)
18. Fricke, E., et al.: Coping with changes: causes, findings, and strategies. J. Syst. Eng. 3(4),
169–179 (2000)
Impact of Carbon Price on Renewable Energy
Using Power Market System
1 Introduction
The ambition of the Paris Agreement on climate change is to keep the global temperature rise this century below 2 °C compared to the pre-industrial level, which changes the path of energy sector development. In 2016, 42% of global CO2 emissions came from electricity and heat generation [1]. This implies that the power sector has great potential for limiting global warming to no more than 2 °C. Many approaches, such as increasing the share of renewable energy and introducing carbon prices into the power sector, have been developed to reduce emissions from power generation. Introducing a carbon price into the energy sector is one of the important approaches to alleviating carbon emissions and hence achieving the decarbonization goal [2, 3]. To achieve low-emission scenarios, an implicit carbon shadow price is usually assumed, and this price can also be used as a policy instrument [3]. Carbon prices vary with different scenarios for climate change mitigation and adaptation, especially in the long run [3–6].
The Shared Socioeconomic Pathways (SSPs) were established by the scientific community and are part of a new scenario framework [7]. These pathways describe five different future development trends by considering different scenarios for climate change projections, challenges for mitigation and adaptation to climate change, socioeconomic conditions and policies [7–10]. They are used to provide a harmonized framework for integrated, interdisciplinary analysis of climate impacts, and their aim is to investigate future changes in different sectors or countries [7, 11].
In this paper, we investigate the impact of carbon prices on the environment and sustainable energy development based on the Northern European power system. The power system model used in this paper is a net transfer capacity-based (NTC) model. The carbon prices are based on different pathways in the SSP framework. Only scenarios compatible with the 2 °C target are considered, and therefore four scenarios, i.e., SSP1, SSP2, SSP4, and SSP5, with different carbon prices are used. The rest of the paper is organized as follows. Section 2 introduces the data and methodology, followed by results and discussion in Sect. 3. Section 4 concludes the paper.
previous year’s hydro reservoir level is used as the initial condition for the next year’s hydro reservoir level. Environmental costs for thermal units are equal to the environmental tax multiplied by the total amount of emissions within the planning periods. The environmental variables, i.e., emission factors, energy efficiencies, and energy conversion factors, originate from the International Energy Agency (IEA) [14]. The main outputs of the model are the spot power prices, the production mix and the amount of carbon emissions. A detailed description of the numerical model can be found in [2], and the optimization is conducted using GAMS [15].
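The mechanism by which a carbon price enters such a model can be illustrated with a toy merit-order dispatch, in which the carbon price is added to each thermal unit’s marginal cost through its emission factor. This is only a sketch: the capacities, fuel costs and emission factors below are illustrative assumptions, and the paper’s actual NTC model solved in GAMS is far more detailed.

```python
# Toy merit-order dispatch: the carbon price enters each unit's marginal cost as
# emission_factor (tCO2/MWh) * carbon_price (EUR/tCO2). Numbers are illustrative.
UNITS = [
    # name,      capacity_MW, fuel_cost_EUR_per_MWh, emission_factor_tCO2_per_MWh
    ("wind",       300,  1.0, 0.00),
    ("hydro",      400,  5.0, 0.00),
    ("gas",        500, 45.0, 0.37),
    ("hard_coal",  500, 25.0, 0.90),
    ("lignite",    400, 15.0, 1.10),
]

def dispatch(demand_MW, carbon_price):
    """Dispatch units in order of (fuel cost + carbon cost) until demand is met."""
    order = sorted(UNITS, key=lambda u: u[2] + u[3] * carbon_price)
    remaining, schedule, emissions = demand_MW, {}, 0.0
    for name, cap, fuel, ef in order:
        q = min(cap, remaining)
        schedule[name] = q
        emissions += q * ef
        remaining -= q
        if remaining <= 0:
            break
    return schedule, emissions

for price in (0, 30, 100):
    sched, em = dispatch(1500, price)
    print(price, sched, round(em, 1))
```

With these illustrative numbers, a sufficiently high carbon price moves gas ahead of hard coal and lignite in the merit order, which qualitatively reproduces the fuel switching and emission pattern discussed in the results.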
Fig. 1. Carbon prices (US$2005/t CO2) under SSP1, SSP2, SSP4, and SSP5 for achieving the 2 °C target, 2015–2055.
In this section, we analyze the four scenarios from economic and environmental perspectives. Figure 2 shows the annual power prices, obtained by averaging all countries’ prices for each year, together with the carbon prices for each scenario. It can be observed that power prices increase with the rise in carbon prices. This reflects that one of the key roles of carbon prices in the economic performance of the power system is to regulate the power price.
Fig. 2. Annual power prices and carbon prices for each scenario: the annual power prices (Euro/MWh) are given as solid lines (left y-axis), and the carbon prices are shown as dashed lines (right y-axis).
Figure 3 illustrates the energy mix for each scenario. We can observe that the variations are primarily in gas and coal. Gas production increases with the rise in carbon prices, while coal production, including hard coal and lignite, decreases. The reason is that, with carbon prices increasing over time, gas power, which carries lower carbon costs, increases to replace power production with higher carbon costs, such as hard coal and lignite. Wind power continues to increase as it becomes relatively cheaper. Hydropower production is stable because the same reservoir level is assumed for each year. The reservoir level assumptions for future years could be an interesting topic for future study.
Fig. 3. Energy mix for each scenario: production (TWh) by scenario (SSP1, SSP2, SSP4, SSP5) and year (2020–2050).
The environmental impact, i.e., the total amount of carbon emissions, is shown in Fig. 4. From this figure, we can see that before 2030 the scenario with the highest carbon prices, SSP5, has the lowest total amount of carbon emissions, and the opposite holds for the scenario with the lowest carbon prices, SSP1. This implies that carbon prices before 2030 have a clear emission-reducing effect. However, the total amount of carbon emissions converges to approximately the same level,
around 220 Mton, for all scenarios (Fig. 4 shows the total carbon emissions, in Mton, for each scenario from 2020 to 2050). This indicates that carbon prices have limited impacts on the total amount of carbon emissions in the long term. This result illustrates that further
increasing carbon prices might not have any impacts or may only have a few influences
on carbon emission from a long-term perspective.
In this paper, the environmental impact of carbon prices based on the SSP scenarios is investigated for the Northern European power system. Four scenarios are analyzed. The results illustrate that carbon prices play an important role in power prices: the higher the carbon price, the higher the power price. Within the framework of increasing carbon prices, coal production, including hard coal and lignite, decreases, while gas production increases. This can be explained by the lower carbon cost of gas compared with coal. In addition, renewable production, for instance wind power, continues to increase as the carbon price rises. Furthermore, our simulation results also illustrate that further increasing carbon prices might have little influence on carbon emissions in the long term. A potential limitation of our simulation is that the reservoir level is kept similar for each year, which leads to stable hydropower production. This could be an interesting topic for further study.
References
1. IEA, Birol, F. (ed.): CO2 Emissions from Fuel Combustion Highlights. International Energy
Agency, France (2016)
2. Cheng, X., Korpås, M., Farahmand, H.: The impact of electrification on power system in
Northern Europe. In: 2017 14th International Conference on the European Energy Market
(EEM). IEEE (2017)
3. Guivarch, C., Rogelj, J.: Carbon price variations in 2 °C scenarios explored (2017)
4. Creti, A., Jouvet, P.-A., Mignon, V.: Carbon price drivers: Phase I versus Phase II
equilibrium? Energy Econ. 34(1), 327–334 (2012)
5. Feng, Z.-H., Zou, L.-L., Wei, Y.-M.: Carbon price volatility: evidence from EU ETS. Appl.
Energy 88(3), 590–598 (2011)
6. Chevallier, J.: A model of carbon price interactions with macroeconomic and energy
dynamics. Energy Econ. 33(6), 1295–1312 (2011)
7. Riahi, K., et al.: The Shared Socioeconomic Pathways and their energy, land use, and
greenhouse gas emissions implications: an overview. Glob. Environ. Change Hum. Policy
Dimensions 42, 153–168 (2017)
8. O’Neill, B.C., et al.: A new scenario framework for climate change research: the concept of
shared socioeconomic pathways. Clim. Change 122(3), 387–400 (2014)
9. Ebi, K.L., et al.: A new scenario framework for climate change research: background,
process, and future directions. Clim. Change 122(3), 363–372 (2014)
10. Van Vuuren, D.P., et al.: A new scenario framework for climate change research: scenario
matrix architecture. Clim. Change 122(3), 373–386 (2014)
11. Hu, X.P., Iordan, C.M., Cherubini, F.: Estimating future wood outtakes in the Norwegian
forestry sector under the shared socioeconomic pathways. Glob. Environ. Change Hum.
Policy Dimensions 50, 15–24 (2018)
12. Farahmand, H.: Integrated power system balancing in Northern Europe: models and case studies, p. 150 (2012)
13. Farahmand, H., et al.: Possibilities of Nordic hydro power generation flexibility and
transmission capacity expansion to support the integration of Northern European wind power
production: 2020 and 2030 case studies. SINTEF Energy Research (2013)
14. IEA: World Energy Outlook 2011 (2011). www.worldenergyoutlook.org/weo2011. Accessed 12 Feb 2019
15. GAMS: General Algebraic Modeling System (GAMS), Washington, DC, USA (2017)
16. Bauer, N., et al.: Shared socio-economic pathways of the energy sector - quantifying the
narratives. Glob. Environ. Change Hum. Policy Dimensions 42, 316–330 (2017)
17. O’Neill, B.C., et al.: The roads ahead: narratives for shared socioeconomic pathways
describing world futures in the 21st century. Glob. Environ. Change Hum. Policy
Dimensions 42, 169–180 (2017)
18. Kriegler, E., et al.: Fossil-fueled development (SSP5): an energy and resource intensive
scenario for the 21st century. Glob. Environ. Change Hum. Policy Dimensions 42, 297–315
(2017)
19. Fujimori, S., et al.: SSP3: AIM implementation of Shared Socioeconomic Pathways. Glob.
Environ. Change Hum. Policy Dimensions 42, 268–283 (2017)
20. Fricko, O., et al.: The marker quantification of the Shared Socioeconomic Pathway 2: a
middle-of-the-road scenario for the 21st century. Glob. Environ. Change Hum. Policy
Dimensions 42, 251–267 (2017)
21. Calvin, K., et al.: The SSP4: a world of deepening inequality. Glob. Environ. Change Hum.
Policy Dimensions 42, 284–296 (2017)
Author Index
A
Aleksandrova, Olga, 600
Alfnes, Erlend, 654, 662
Alonso-Ramos, Victor, 402
Antequera-Garcia, Gema, 402
Aschehoug, Silje, 358
Aukrust, Trond, 98
Azarian, Mohammad, 258
B
Ban, Shuhao, 517
Barnard, Taylor, 396
Batu, Temesgen, 106
Berg, Olav Åsebø, 98
Bernhardsen, Thor Inge, 317
C
Cao, Jiejie, 126
Chelishchev, Petr, 434
Chen, Bo, 283, 379, 410
Chen, Shifeng, 283, 379
Cheng, Bin, 480
Cheng, Xiaomei, 671
Chirici, Lapo, 639
D
Dacal-Nieto, Angel, 402
Deng, Jiaming, 517
Deng, Xuechao, 234
Dong, Jinzhong, 151
Dong, Mengyao, 267
Dou, Yan, 44, 309, 523
Drobintsev, Pavel, 600
Du, Zhipeng, 219
E
Eleftheriadis, Ragnhild J., 373, 608
F
Feng, Bowen, 342
Feng, Xiangcai, 480
Feng, Xiong, 451
Fernandez-Gonzalez, Carmen, 402
Fordal, Jon Martin, 317
G
Gamme, Inger, 358
Gao, Lingyan, 242, 275
Gao, Xiue, 283, 379
Gao, Zenggui, 267
Ge, Yang, 37, 59, 176, 309
Ghosh, Tamal, 283, 379
Guan, Xin, 292
Gui-qin, Li, 203
Guo, Lanzhong, 44, 52, 142, 169, 176, 309
H
Hinojos A., Luis F., 662
Hong, Zhenyu, 185
Hovig, Even Wilberg, 98, 466
Hu, Chaobin, 151, 160
Hu, Xiangping, 3, 11, 134, 418, 427, 671
Huang, Junjie, 67
Huang, Qi, 242, 342, 457
I
Iakymenko, Natalia, 662
T
Tan, Chao, 480
Tang, Hehui, 442
Tang, Xiuying, 480
Tian, Ran, 151, 160
Tian, Yonglin, 410
V
Voinov, Nikita, 600
W
Wan, Xiang, 242, 275, 333, 342, 457, 473, 585
Wang, Chao, 37
Wang, Chen, 646
Wang, Hanlin, 227
Wang, Jianhua, 535
Wang, Kesheng, 283, 379, 457, 473, 639, 646
Wang, Sen, 333, 511
Wang, Weicong, 29
Wang, Yi, 283, 333, 379, 396, 633, 639, 646
Wu, Fang, 275, 473
Wu, Jian, 59, 126, 169, 176
Wu, Pengfei, 585
X
Xi, Zhang, 325
Xia, Hong, 234
Xie, Wenxue, 379
Xin, Wang, 325
Xin, Zhenbo, 72
Xing, Zhiwei, 185
Xu, Jingjing, 488
Xu, Weigang, 517
Y
Yan, Xiupeng, 504
Yang, Ge, 52
Ye, Gu, 203
Ye, Zhiwen, 67
Yin, Ranguang, 349
Yu, Hao, 250, 258, 567, 577
Yu, Haoshui, 418
Yu, Tao, 227
Yuan, Jin, 72, 81, 349, 379, 495, 593
Z
Zhang, Baodi, 134
Zhang, Baosheng, 418
Zhang, Guichang, 185
Zhang, Guowei, 366
Zhang, Haishu, 89
Zhang, Haohan, 366, 511
Zhang, Li, 511
Zhang, Ping, 593
Zhang, Ronghua, 388
Zhang, Shijin, 20
Zhang, Tianshu, 283
Zhang, Xiangyu, 457
Zhang, Xuedong, 3
Zhao, Weixing, 552
Zhao, Zhiping, 37
Zhou, Faqi, 517
Zhou, Maoheng, 219, 234
Zhu, Qiuyu, 219, 234
Zou, Liangliang, 81, 495
Zou, Wei, 585