Fethi Calisir
Hatice Camgoz Akdag
Editors

Industrial Engineering in the Industry 4.0 Era

Selected Papers from the Global Joint Conference on Industrial Engineering and Its Application Areas, GJCIE 2017, July 20–21, Vienna, Austria
Lecture Notes in Management and Industrial Engineering

Series editor: Adolfo López-Paredes, Valladolid, Spain
This book series provides a means for the dissemination of current theoretical and
applied research in the areas of Industrial Engineering & Engineering Management.
The latest methodological and computational advances that both researchers and
practitioners can widely apply to solve new and classical problems in industries and
organizations constitute a growing source of publications written for and by our
readership.
The aim of this book series is to facilitate the dissemination of current research in these areas.
Editors

Fethi Calisir
Industrial Engineering Department
Istanbul Technical University
Macka/Istanbul, Turkey

Hatice Camgoz Akdag
Management Engineering Department
Istanbul Technical University
Macka/Istanbul, Turkey
This book compiles extended versions of a selection of the best papers presented at
the Global Joint Conference on Industrial Engineering and Its Application Areas
(GJCIE) 2017 held in Vienna, Austria. They represent a good sample of the current
state of the art in the field of industrial engineering and its application areas.
The papers presented in this book address methods, techniques, studies, and
applications of industrial engineering with the theme of “Industrial Engineering in
the Industry 4.0 Era”. Industry 4.0 is a collective term for the technologies and
concepts of value chain organization, which bring together Cyber-Physical
Systems, the Internet of Things, the Industrial Internet of Things, and the
Internet of Services. These systems allow intelligent products, processes, and ser-
vices to communicate with each other and with people in real time over a global
network. This challenges the way that we educate engineers and the way that we
manage companies. This book will shed new light on the role of industrial engi-
neering in this endeavor. Contributions have been arranged in three parts:
• Industrial Engineering
• Engineering and Technology Management
• Healthcare Systems Engineering and Management
We would like to express our gratitude to all the contributors, reviewers, and international scientific committee members who have aided in the publication of this book. We also thank Springer for their full support
during the publishing process. Last but not least, we gratefully acknowledge the
sponsors (ITU Ari Teknokent, Aselsan, and Entertech) of GJCIE 2017.
A Note on Fuzzy Multiattribute Grey Related Analysis Using DEA

M. S. Pakkar

Keywords: Grey related analysis · Data envelopment analysis · Fuzzy numbers · Multiple attribute decision making
Introduction
M. S. Pakkar (✉)
Faculty of Management, Laurentian University, Sudbury, ON P3E 2C6, Canada
e-mail: [email protected]
The purpose of this short paper is to present an extended version of the Wu and Olson (2010) model that incorporates additional features into a fuzzy multiattribute grey related analysis methodology in order to avoid the deficiencies noted above.
where y_ij = [y_ij⁻, y_ij⁺], with y_ij⁻ ≤ y_ij⁺, is an interval number representing the value of attribute C_j (j = 1, 2, …, n) for alternative A_i (i = 1, 2, …, m). Then alternative A_i is characterized by a vector Y_i = ([y_i1⁻, y_i1⁺], [y_i2⁻, y_i2⁺], …, [y_in⁻, y_in⁺]) of attribute values. The term Y_i can be translated into the comparability sequence R_i = ([r_i1⁻, r_i1⁺], [r_i2⁻, r_i2⁺], …, [r_in⁻, r_in⁺]) by using the following equations:

$$\left[r_{ij}^{-},\, r_{ij}^{+}\right] = \left[\frac{y_{ij}^{-}}{y_{j(\max)}^{+}},\ \frac{y_{ij}^{+}}{y_{j(\max)}^{+}}\right] \quad \forall j, \qquad y_{j(\max)}^{+} = \max\left\{y_{1j}^{+}, y_{2j}^{+}, \ldots, y_{mj}^{+}\right\} \tag{2}$$
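As a concrete illustration, the following Python sketch applies the normalization of Eq. (2) to a small set of hypothetical interval data; the values, and the assumption that both attributes are larger-the-better, are ours and not from the source.

```python
import numpy as np

# Hypothetical interval data [y_ij^-, y_ij^+] for m = 3 alternatives, n = 2 attributes
y_lo = np.array([[2.0, 30.0], [3.0, 25.0], [2.5, 40.0]])
y_hi = np.array([[3.0, 35.0], [4.0, 30.0], [3.5, 45.0]])

# Eq. (2): divide both bounds by y_{j(max)}^+, the largest upper bound per attribute
y_max = y_hi.max(axis=0)
r_lo, r_hi = y_lo / y_max, y_hi / y_max

print(r_lo)  # lower bounds of the comparability sequence, all in [0, 1]
print(r_hi)  # upper bounds; 1 for the best alternative on each attribute
```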
To measure the degree of similarity between r_ij = [r_ij⁻, r_ij⁺] and u_0j = [u_0j⁻, u_0j⁺] for each attribute, the grey related coefficient, ξ_ij, can be calculated as follows:

$$\xi_{ij} = \frac{\min_i \min_j \left\| [u_{0j}^{-}, u_{0j}^{+}] - [r_{ij}^{-}, r_{ij}^{+}] \right\| + \rho \max_i \max_j \left\| [u_{0j}^{-}, u_{0j}^{+}] - [r_{ij}^{-}, r_{ij}^{+}] \right\|}{\left\| [u_{0j}^{-}, u_{0j}^{+}] - [r_{ij}^{-}, r_{ij}^{+}] \right\| + \rho \max_i \max_j \left\| [u_{0j}^{-}, u_{0j}^{+}] - [r_{ij}^{-}, r_{ij}^{+}] \right\|} \tag{6}$$

while the distance between u_0j = [u_0j⁻, u_0j⁺] and r_ij = [r_ij⁻, r_ij⁺] is measured by

$$\left\| u_{0j} - r_{ij} \right\| = \max\left( \left| u_{0j}^{-} - r_{ij}^{-} \right|,\ \left| u_{0j}^{+} - r_{ij}^{+} \right| \right).$$

Here ρ ∈ [0, 1] is the distinguishing coefficient, generally ρ = 0.5. It should be noted that the final results of GRA for multiple attribute decision making problems are very robust to changes in the values of ρ; selecting different values of ρ would only slightly change the rank order of alternatives (see Kuo et al. 2008). Now, let k be the index
for the alternative under assessment (known as a decision making unit in the DEA
terminology) where k ranges over 1, 2,…, m. To find an aggregated measure of
similarity between alternative Ak , characterized by the comparability sequence Rk ,
and the ideal alternative A0 , characterized by the reference sequence U0 , over all the
attributes, the grey related grade can be computed as follows:
$$\Gamma_k = \max \sum_{j=1}^{n} w_j\, \xi_{kj}$$
$$\text{s.t.} \quad \sum_{j=1}^{n} w_j\, \xi_{ij} \le 1 \quad \forall i, \qquad w_j \ge 0 \quad \forall j, \tag{7}$$
where Γ_k is the grey related grade for the alternative under assessment A_k and w_j is the weight of attribute C_j (j = 1, 2, …, n). The first set of constraints assures that if the computed weights are applied to a group of m alternatives (i = 1, 2, …, m), they do not attain a grade larger than 1. The process of solving the model is repeated
to obtain the optimal grey related grade and the optimal weights required to attain
such a grade for each alternative. It should be noted that the grey related coefficients
are normalized data. Consequently, the weights attached to them are also
normalized.
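Model (7) is a linear program solved once per alternative. A minimal sketch in Python, using scipy's linprog on hypothetical grey related coefficients (not values from this paper), is:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical m x n matrix of grey related coefficients xi_ij from Eq. (6)
xi = np.array([[0.9, 0.6],
               [0.4, 1.0],
               [0.8, 0.8]])
m, n = xi.shape

grades = []
for k in range(m):
    # Model (7): max sum_j w_j*xi_kj  s.t.  sum_j w_j*xi_ij <= 1 for all i, w_j >= 0.
    # linprog minimizes, so the objective coefficients are negated.
    res = linprog(c=-xi[k], A_ub=xi, b_ub=np.ones(m), bounds=[(0, None)] * n)
    grades.append(round(-res.fun, 4))

print(grades)  # optimal grey related grade of each alternative under its own best weights
```

The same loop, with the objective sign, constraint directions, and weight bounds adjusted, also covers the weight-bounded variants introduced below.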
Model (7) assesses the performance of each alternative based on the ratio of its
distance from the best-practice frontier that is constructed by the best alternatives.
Therefore, the weights obtained for each alternative are the most favorable weights
in comparison to the other alternatives. However, as mentioned before, the disad-
vantage of model (7) is that some alternatives may obtain a grey related grade of 1
because of assigning zero values to the weights of some attributes and neglecting
the importance of these attributes in a decision-making process. To avoid this issue,
we extend a similar model as follows:
$$\Gamma'_k = \max \sum_{j=1}^{n} w'_j\, \xi_{kj}$$
$$\text{s.t.} \quad \sum_{j=1}^{n} w'_j\, \xi_{ij} \le 1 \quad \forall i, \qquad w'_j \ge \varepsilon'_{\min} \quad \forall j, \tag{8}$$
where ε′_min is the minimum weight bound on the attribute weights, which can be estimated by solving the following model (Wu et al. 2012):
$$\max\ \varepsilon'$$
$$\text{s.t.} \quad \sum_{j=1}^{n} w'_j\, \xi_{ij} \le 1 \quad \forall i, \qquad w'_j \ge \varepsilon' \quad \forall j. \tag{9}$$
The main idea of model (9) is to maximize the weighted grey related coefficients. Obviously, ε′ is a positive variable. Solving model (9) for each alternative in turn, we obtain m values of ε′. Then we select the minimum one, i.e., ε′_min = min{ε′_1, …, ε′_m}, as the lower bound for the attribute weights.
On the other hand, a similar model can be developed in order to assess the
performance of each alternative under the least favorable weights as follows:
$$\Gamma''_k = \min \sum_{j=1}^{n} w''_j\, \xi_{kj}$$
$$\text{s.t.} \quad \sum_{j=1}^{n} w''_j\, \xi_{ij} \ge 1 \quad \forall i, \qquad w''_j \ge \varepsilon'_{\min} \quad \forall j. \tag{10}$$
Here, we seek the worst weights in the sense that the objective function in model
(10) is minimized. Each alternative is compared with the worst alternatives and is
assessed based on the ratio of distance from the worst-practice frontier. The first set
of constraints assures that the computed weights do not attain a grade of smaller
than 1. The second set of constraints imposes the minimum weight bound on the
attribute weights as estimated by model (9). It is worth pointing out that the
worst-practice frontier approach is not a new approach in the DEA literature.
Conceptually, it is parallel to the worst possible efficiency concept as discussed in
Zhou et al. (2007) and Takamura and Tone (2003).
In order to combine the grey related grades obtained from models (8) and (10), i.e., under the best and worst sets of weights, a linear combination of the corresponding normalized grades is recommended:
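The combination formula itself did not survive in this copy. A form consistent with Table 4, in which the worst-performing site (Anaheim) scores 0 and the best (Rock Sprgs) scores 1, is the min-max normalized convex combination

$$\Delta_k(\lambda) = \lambda\,\frac{\Gamma'_k - \min_i \Gamma'_i}{\max_i \Gamma'_i - \min_i \Gamma'_i} + (1 - \lambda)\,\frac{\Gamma''_k - \min_i \Gamma''_i}{\max_i \Gamma''_i - \min_i \Gamma''_i}, \qquad 0 \le \lambda \le 1,$$

where λ weights the grades obtained under the most favorable weights against those obtained under the least favorable weights. Reproducing Newark's entries from Table 4 (Γ′ = 1.0000, Γ″ = 1.1048) with this expression gives Δ = 0.5710 at λ = 0.5, matching the reported value.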
In this section, we present the application of the proposed approach to nuclear waste dump site selection. The multi-attribute data, adopted from Wu and Olson (2010), are presented in Table 1. There are twelve alternative sites and four performance attributes. Cost, Lives lost, and Risk are undesirable attributes and Civic
improvement is a desirable attribute. Cost is in billions of dollars. Lives lost reflects
expected lives lost from all exposures. Risk shows the risk of catastrophe (earth-
quake, flood, etc.) and Civic improvement is the improvement of the local com-
munity due to the construction and operation of each site. Cost and Lives lost are
crisp values as outlined in Table 1, but Risk and Civic improvement have fuzzy data
for each nuclear dump site. We use the processed data as reported in Wu and Olson
(2010). First, trapezoidal fuzzy numbers are used to express the linguistic data in Table 1. Using the α-cut technique, the raw data are expressed in fuzzy intervals as
shown in Table 2. These data are turned into the comparability sequence by using
Eqs. (2) and (3). Each attribute is now on a common 0–1 scale where 0 represents
the worst imaginable attainment on an attribute, and 1.00 the best possible
attainment.
Using Eq. (6), all grey related coefficients are computed to provide the required (output) data for models (7), (8), (9) and (10), as shown in Table 3. Note that the grey related coefficients depend on the distinguishing coefficient ρ, which here is 0.80. The minimum weight bound for the attribute weights, obtained from model (9), is ε′_min = 0.1418, which belongs to Anaheim. Table 4 presents the results obtained from models (7), (8) and (10) as well as the corresponding compromise grades at λ = 0.5. If decision makers have no strong preference, λ = 0.5 is a fairly neutral and reasonable choice. It can be seen from Table 4 that Rock Sprgs, with a compromise
grade of 1 stands in the first place while seven and four alternatives are ranked in
the first position by models (7) and (8), respectively. This indicates that the pro-
posed approach can significantly improve the degree of discrimination among
alternatives. It is worth noting that, although Rock Sprgs has the highest compro-
mise grade (=1), it does not have the highest grey related coefficient with respect to
each attribute (Table 3). This is likely because Rock Sprgs not only has relatively high grey related coefficients but also achieves a better balance across the different attributes.
Table 3 Results of grey related coefficients for nuclear waste dump site selection
Site Cost Lives lost Risk Civic improvement
Nome 0.9383 0.6281 0.4578 0.4872
Newark 0.4444 0.4444 1 1
Rock Sprgs 0.8352 0.8352 0.7917 0.7917
Duquesne 0.6847 0.8352 0.6032 0.6032
Gary 0.6281 0.5033 0.7917 1
Yakima 0.6847 0.5033 0.4872 0.6032
Turkey 0.8837 0.539 0.4872 0.7917
Wells 0.9383 1 0.6032 0.6032
Anaheim 0.472 0.4578 0.4578 0.4578
Epcot 0.5033 0.472 1 0.4578
Duckwater 0.5802 0.539 0.6032 0.4872
Santa Cruz 0.5033 0.5033 0.4578 0.4578
Table 4 Results of grey related grades obtained from models (7), (8) and (10), and the
corresponding compromise grades
Site Γk Γ′k Γ″k Δk (λ = 0.5)
Nome 1.0000 (1) 0.9102 (7) 1.0944 (9) 0.4592 (8)
Newark 1.0000 (1) 1.0000 (1) 1.1048 (7) 0.5710 (5)
Rock Sprgs 1.0000 (1) 1.0000 (1) 1.7382 (1) 1.0000 (1)
Duquesne 0.8921 (8) 0.8755 (8) 1.3594 (3) 0.5982 (4)
Gary 1.0000 (1) 1.0000 (1) 1.2262 (4) 0.6532 (3)
Yakima 0.7855 (9) 0.7565 (9) 1.1088 (6) 0.2895 (9)
Turkey 1.0000 (1) 0.9448 (5) 1.1688 (5) 0.5499 (6)
Wells 1.0000 (1) 1.0000 (1) 1.4187 (2) 0.7836 (2)
Anaheim 0.5735 (12) 0.5715 (12) 1.0000 (12) 0.0000 (12)
Epcot 1.0000 (1) 0.9441 (6) 1.0833 (10) 0.4912 (7)
Duckwater 0.7351 (10) 0.7154 (10) 1.0991 (8) 0.2350 (10)
Santa Cruz 0.5943 (11) 0.5937 (11) 1.0109 (11) 0.0333 (11)
*The site ranks are given in parentheses
Conclusion
In this short paper, we present an extended version of the DEA model introduced in Wu and Olson (2010) to obtain the weights of attributes in a fuzzy GRA methodology. The proposed model can lead to grey related grades with higher discrimination power, since it uses two sets of weights that are the most favorable and the least favorable for each alternative. An illustrative example of nuclear waste dump site selection is presented to compare our model with Wu and Olson's model. Interested readers who seek further studies on methodologies combining DEA and GRA may refer to Pakkar (2016a, b, 2017, 2018).
References
Kuo Y, Yang T, Huang GW (2008) The use of grey relational analysis in solving multiple attribute
decision-making problems. Comput Ind Eng 55(1):80–93
Pakkar MS (2016a) An integrated approach to grey relational analysis, analytic hierarchy process
and data envelopment analysis. J Centrum Cathedra 9(1):71–86
Pakkar MS (2016b) Multiple attribute grey relational analysis using DEA and AHP. Complex
Intelligent Syst 2(4):243–250
Pakkar MS (2017) Hierarchy grey relational analysis using DEA and AHP. PSU Res Rev 1 (2) (in
press)
Pakkar MS (2018) A fuzzy multi-attribute grey relational analysis using DEA and AHP. In: Xu J, Gen M, Hajiyev A, Cooke FL (eds) Proceedings of the eleventh international conference on management science and engineering management. Lecture notes on multidisciplinary industrial engineering. Springer (in press)
Takamura Y, Tone K (2003) A comparative site evaluation study for relocating Japanese
government agencies out of Tokyo. Socio-Econ Plan Sci 37(2):85–102
Wu DD, Olson DL (2010) Fuzzy multiattribute grey related analysis using DEA. Comput Math Appl 60(1):166–174
Wu J, Sun J, Liang L (2012) Cross efficiency evaluation method based on weight-balanced data
envelopment analysis model. Comput Ind Eng 63(2):513–519
Zhou P, Ang BW, Poh KL (2007) A mathematical programming approach to constructing
composite indicators. Ecol Econ 62(2):291–297
Storage and Retrieval Machine
with Elevated Track Opens New
Applications in Warehouse Technology
and Material Supply
Reinhard Koether
Abstract An innovative Storage and Retrieval (S/R) machine can drive around obstacles such as conveyor lines or escape routes in the upper and lower parts of a warehouse's shelf. This innovative S/R machine can also drive around a second S/R machine in its warehouse aisle, so that one or two of these machines can operate independently in one aisle. These improvements open new applications in warehouse planning, management and production:
• The dimensions of the warehouse (height, length, width) can be adjusted more easily to the existing building development and building regulations.
• The handling capacity can be upgraded aisle by aisle, which allows the handling capacity to be scaled to increasing demand.
• The warehouse can be integrated into the manufacturing or assembly area, integrating in-house transport into storage.
These improvements enhance storage capacity, save space and expand handling
capacity, thus saving investment and cost compared to conventional warehouse
technology.
The best warehouse is no warehouse. This rule of lean management and lean process design is almost impossible to realize in real-life material supply.
Warehouses are still needed

• to level out batch supply and continuous consumption,
• to secure supply,
• to guarantee supply,
• to enable economies of scale in centralized manufacturing and decentralized consumption by worldwide customers.

R. Koether (✉)
Munich University of Applied Sciences, 82131 Munich, Germany
e-mail: [email protected]
But warehouses need investment in warehouse technology, space and inventory, and they generate operating costs for handling and warehouse administration. In standard warehouses, such as a store for palletized goods, the pallets are handled, stored and retrieved by

• high-bay fork trucks with drivers, or by
• automated Storage and Retrieval Machines (S/R machines) that are guided by tracks on the floor and on top of the shelf.
The S/R machines are faster and automated, but they are less flexible in use than fork trucks. In almost every case an S/R machine is configured for a specific warehouse location.

The number of S/R machines determines the handling capacity of the warehouse system, measured in double cycles per hour (= number of storage and retrieval operations per hour). As only one S/R machine can operate in an aisle of the warehouse, the desired handling capacity determines the minimum number of aisles and, in consequence, the width of the warehouse layout.
In any case the warehouse is like an impermeable block that does not allow
crossway traffic. Therefore, the warehouse is located in a separate building or at the
border area of a production building. This is to avoid fixed points and obstacles in
future changes of the layout.
An innovative S/R machine changes these rules and additionally allows a better use of space in a warehouse facility: the innovative S/R machine is guided by an upper and a lower track, which are fixed on the storage rack (Fig. 1). A telescopic mast is used to access the bottom and top areas of the shelf. A conventional load handling device on the telescopic mast stores and retrieves the load units from the shelves on both sides of the aisle.
The telescopic mast of the innovative S/R machine allows driving around obstacles in the aisle or in the shelves (Fig. 2). Such obstacles can be conveyor lines on the floor or at a higher level, traverses of the building, or pipelines for ventilation or power supply. In a conventional warehouse such high obstacles would limit the accessible height for the S/R machine over the full length of the aisle, whereas for the innovative S/R machine they do so only at the point of installation. This function improves the space utilization of the warehouse building. Compared to conventional warehouses, the alternative warehouse saves investment in floor space and building volume.
Fig. 2 The innovative S/R machine (center) can drive around obstacles in a warehouse's shelf such as
an aisle for escape route and material transport (bottom left), an overhead conveyor (top center) or
the cross conveyor to connect the marshalling area with the storage area (bottom right)
The obstacle to drive around can also be a dynamic obstacle in the aisle of a warehouse: another S/R machine can be passed if one S/R machine has lowered its load handling device while the other S/R machine in the same aisle lifts its load handling device (see Fig. 1). Two S/R machines in one aisle run on tracks fixed on the racks on both sides of the aisle, one on the left shelf, the other on the right shelf. As the two S/R machines can operate independently in one aisle, except when they pass, the handling capacity in one aisle can be almost doubled.
In warehouse planning, the room (length × width × height) together with the width of the aisle determines the storage capacity, measured in number of bin locations. The aisle must be wider for fork trucks and can be narrower for S/R machines. The number of fork trucks or S/R machines determines the handling capacity, measured in double cycles per hour (= number of storage and retrieval operations per hour). As in conventional warehouses at most one S/R machine or fork truck can operate in one aisle, the number of aisles determines the maximum handling capacity. So in conventional warehouses handling capacity can only be enlarged by adding aisles, which can conflict with the footprint of the space available.
The innovative S/R machine enlarges flexibility in warehouse planning. As two S/R machines can operate independently in one aisle, the handling capacity is no longer tied to the number of aisles. Consequently, it is much easier to adjust the warehouse's dimensions to the footprint of the available space. The number of aisles can be designed to the space available, no longer to the handling capacity needed. In the fictitious example shown in Fig. 3, the same storage capacity and handling capacity could be attained with six aisles, or with three aisles if two S/R machines can work in one aisle independently, as the sketch below illustrates.
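A back-of-the-envelope calculation makes the trade-off concrete; the cycle rates below are assumed for illustration and do not come from the source.

```python
import math

# Assumed: each S/R machine delivers 30 double cycles per hour,
# and the site needs 180 double cycles per hour in total.
required_dc_per_h = 180
dc_per_machine = 30

machines = math.ceil(required_dc_per_h / dc_per_machine)    # 6 machines

aisles_conventional = machines               # one machine per aisle -> 6 aisles
aisles_innovative = math.ceil(machines / 2)  # two machines per aisle -> 3 aisles
print(aisles_conventional, aisles_innovative)
```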
If an upgrade of handling capacity is needed after years of operation, the handling capacity can even be enlarged after the start of operation. The additional S/R machines can be added aisle by aisle, so the handling capacity can be scaled. In contrast, adding handling capacity to an existing conventional warehouse means greater construction measures on the warehouse building.

Fig. 3 Two innovative S/R machines can operate in one warehouse aisle independently, thus enhancing flexibility in warehouse planning
The function to drive around obstacles opens a further opportunity for high handling capacity, which is needed for intensive order picking (see Koether 2014) or any other high turnover rate. In a conventional warehouse the cross conveyor that connects the aisles of the warehouse to the marshalling area is installed at the front of the warehouse's racks. The innovative S/R machine can pass over conveyors such as the cross conveyor, so this conveyor can be placed in the middle of the racks (Fig. 4), with two effects:

1. The average distance from the cross conveyor to the shelf space is shorter, allowing shorter driving times and speeding up the S/R machine's handling operations.
2. The warehouse can be divided into two virtual blocks on the right and left side of the cross conveyor. In every aisle up to four S/R machines can now operate independently, two on each side.
Again, this feature can scale the number of S/R machines operating in one aisle from one to four, and the number of S/R machines in a specific aisle can be chosen independently from other aisles.

Fig. 4 With the central cross conveyor, in every aisle two S/R machines can operate left of the conveyor and two S/R machines right of the conveyor independently
The highest handling capacity is achieved by four S/R machines in one aisle, which are connected to the marshalling area by a central cross conveyor. To avoid bottlenecks in the supply of goods to store (input pallets) and the collection of stored items (output pallets), a synchronized handling process for the cross conveyor was developed (see Fig. 5).

In the marshalling area the input pallets are placed on the cross conveyor (1, Begin) and then driven halfway into the shelf area (2). Input pallet I1 is pushed onto the empty transfer place to the left, while simultaneously output pallet O5 is loaded onto the cross conveyor from the right (3). In the next step the pallets on the cross conveyor move forward and at the same time pallets I1 and O6 move one position to the left (4). In the fifth step input pallets I2, I4, I5 and I7 are pushed to the transfer places on the left and right side of the cross conveyor and the output pallets O1, O4, O6 and O8 are loaded onto the cross conveyor, so that in step 5 eight pallets are moved simultaneously. The row of input and output pallets on the cross conveyor is moved by one position. At the same time output pallets O2, O3 and O7 approach the cross conveyor and input pallets I2, I5 and I7 leave the transfer places next to the cross conveyor (6). Then, as in step 5, six pallets are loaded and unloaded simultaneously: I3, I6 and I8 are pushed from the conveyor to the respective transfer places while O2, O3 and O7 are loaded onto the cross conveyor. Only output pallets now sit on the cross conveyor (7). They are moved in one chain to the marshalling area (8, End).
The feature of multiple S/R machines operating in one aisle allows saving floor space for warehouses with a high turnover rate, because the number of aisles can be determined by the storage spaces needed, not by the handling capacity. In addition, the handling capacity of existing warehouses can be enlarged without building measures.
The floor space needed per aisle to achieve a given handling capacity weighs even more heavily in manually operated warehouses with fork trucks. As fork trucks offer less handling capacity per device than S/R machines, even more aisles would be needed for a certain handling capacity. In addition, the aisle width for fork truck operation must be larger, which consumes even more floor space and building volume. The investment for floor space and for the building can be saved; these savings allow paying off the larger investment for the innovative S/R machine compared to conventional ones or to fork trucks.
Fig. 6 Layout of assembly with adjacent warehouse (exemplary sketch). Legend: 1 Material
supply zone, 2 assembly line, 3 working area, 4 drive way for internal transport and for escape
route, 5 loading docks, 6 floor transfer of assembly line, 7 overhead transfer of assembly line, 8
preassembly line, 9 loading docks for just in time (JIT) supply, 10 conventional warehouse for
assembly material
Fig. 7 Layout of assembly with integrated warehouse (exemplary sketch). Legend: 1 Material
supply zone, 2 assembly line, 3 working area, 4 drive way for internal transport and for escape
route, 5 loading docks, 6 floor transfer of assembly line, 7 overhead transfer of assembly line, 8
preassembly line, 9 loading docks for just in time (JIT) supply, 12 integrated warehouse for
assembly material, 13 material supply zone, material placed by innovative S/R machine, 14 floor
transfer of assembly line over passed by innovative S/R machine, 15 escape route crossing shelves
container from shelves of either side of the aisle and relocate it to the material zone
of the assembly line.
Every second driveway still exists to connect the escape routes (15) with the driveways (4) and to enable conventional transportation for all material that is not stored in the warehouse sections with the innovative S/R machine. Such parts can be packed in oversized or special containers; they can be sequenced or preassembled items; or they can be parts that are delivered just in time (JIT) from outside or inside suppliers. So every workplace and every section of the assembly line has access to the driveway.
As two innovative S/R machines can operate independently in one aisle, one machine can handle pallets and the other small-part containers. Thus it is possible to supply palletized material and small parts directly from the warehouse without intermediate handling.
The warehouse sections in the integrated layout (Fig. 7) are part of the assembly layout structure. This structure can typically be reused for modernized, upgraded and new products. To match the layout and the number of assembly stations with the planned volume, the layouts in Figs. 6 and 7 are configured from modular area elements. These modules can be copied and inserted (see Fig. 8). To enlarge the length of the assembly lines, the vertical area modules in columns B and C can be copied, multiplied and pasted as columns BCa, BCb, BCc and so on. To insert more assembly lines, the horizontal area modules between the pillars in rows 1 and 2 can be copied. The sequence of the assembly hall naves from top to bottom would then be: 1, 2, 1, 2, 1, 2, …, 3, 4.
A short glance at Figs. 6 and 7 already shows that the assembly line with an integrated warehouse demands less floor space than the conventional layout with separated assembly and warehouse zones. Less floor space means less investment for land and for the building. Furthermore, the simplified and automated process (Table 1) saves time and handling cost.
Conclusion
A patent application has been filed for the innovative S/R machine described here. Together with an industrial partner, a detailed design and control concept will be developed in the near future.
References
Koether R, Zwei Regalbediengeräte in einer Regalgasse [Two storage and retrieval machines in one rack aisle]. Animation available on YouTube: https://youtu.be/vvmWM-ZUiOI
Keywords: Nurse scheduling · Integer program · Shift preferences · Multi-objective optimization · Goal programming
Introduction
Healthcare systems have experienced drastic changes in the last few decades as a
result of increasing population and technological developments that both increased
life expectancy and incurred additional costs. There have also been budget cuts that
force hospitals to use their resources more efficiently. Healthcare organizations have
to work twenty-four hours a day, every day of the year, and shift work is used to provide continuous service for patients.
Literature Review
first determines the day-of-week schedules and then the time-of-day schedules.
Arthur and Ravindran (2015) proposed a goal programming model solved similarly
in two phases, first by assigning day-on/day-off patterns using goal programming
and then by assigning shifts using a heuristic. Berrada et al. (1996) treated the
problem as a multi-objective model with hard and soft constraints, where soft
constraints are used to define goals, and they solved the problem by goal pro-
gramming and tabu search. Azaiez and Al Sharif (2005) developed a 0-1 goal
program with five goals and solved the monthly scheduling problem by sub-
grouping the nurses and workloads into manageable sizes. Wright and Mahar
(2013) compare centralized and decentralized nurse scheduling across two
multi-unit hospitals and show that the centralized model performs better in terms of
scheduling cost and schedule desirability.
The literature on nurse scheduling in surgical suites can be considered scarce
compared to nurse scheduling in other hospital units. Belien and Demeulemeester
(2008) solve an integrated nurse and surgery scheduling problem using column
generation. The daily surgery assignment of nurses is modeled as a multi-objective
integer programming model and solved using a solution pool method and a variant
of goal programming in Mobasher et al. (2011). Similarly, a nurse scheduling
model with the objectives of minimizing labor costs, patient dissatisfaction, nurse
idle time, and maximizing job satisfaction is presented in Lim et al. (2012).
Although there is a considerable amount of literature on nurse scheduling, there
are only a few studies that are based on nurse preferences to improve job satis-
faction and that also have multiple goals for equitable shift assignment among
nurses. In this study, we give nurse preferences the highest priority after the
demand-related hard constraints and consider multiple goals to ensure fair workload
distribution. Next, we introduce our multi-objective integer programming model.
Model Development
4. Regular nurses are classified into three groups based on their specialty:
(1) cardiovascular surgery, (2) general surgery, and (3) neurosurgery.
5. Minimum staff level requirements must be satisfied.
6. Each nurse has to work at least 24 h and at most 72 h per week.
7. Nurses cannot work for more than 6 consecutive working days.
8. A nurse should not work more than 3 consecutive night shifts.
9. Each nurse works at most one shift a day. This constraint is especially necessary for a surgical suite because of the high service level expectations and the more arduous workload compared with other hospital units.
Assumptions similar to assumptions 1, 2, 5, 6, and 9 can be observed in the literature. However, the assumptions above regarding the classes of nurses (3, 4), shift durations (2), and hospital-specific work regulations (6, 7, 8) are based on an interview with the head nurse of a surgical unit at a private hospital. The notation used in the model formulation is explained below.
Sets
I set of nurses
S set of specialties
I_s set of nurses in specialty s ∈ S
J set of shifts in a day, where j = 1 for day shift, j = 2 for evening shift, and j = 3 for night shift
T set of days in the scheduling period
W set of weeks in the scheduling period
Parameters

$$p_{ijt} = \begin{cases} 1, & \text{if nurse } i \in I \text{ prefers to work shift } j \in J \text{ on day } t \in T \\ 0, & \text{otherwise} \end{cases}$$

Decision Variables

$$x_{ijt} = \begin{cases} 1, & \text{if nurse } i \in I \text{ is assigned to shift } j \in J \text{ on day } t \in T \\ 0, & \text{otherwise} \end{cases}$$
Model Formulation
$$\min \sum_{i=1}^{R} \sum_{j \in J} \sum_{t \in T} c_i^r\, x_{ijt} + \sum_{i=R+1}^{|I|} \sum_{j \in J} \sum_{t \in T} c_i^i\, x_{ijt} \tag{1}$$

Subject to

$$\sum_{i \in I_s} x_{ijt} \ge NR_{sjt}, \quad \forall s \in S,\ j \in J,\ t \in T \tag{2}$$

$$\sum_{i=R+1}^{|I|} x_{ijt} \ge NI_{jt}, \quad \forall j \in J,\ t \in T \tag{3}$$

$$\sum_{t=7(w-1)+1}^{7w} \sum_{j \in J} x_{ijt} \ge h_L / 8, \quad \forall i \in I,\ w \in W \tag{4}$$

$$\sum_{t=7(w-1)+1}^{7w} \sum_{j \in J} x_{ijt} \le h_U / 8, \quad \forall i \in I,\ w \in W \tag{5}$$

$$\sum_{j \in J} x_{ijt} \le 1, \quad \forall i \in I,\ t \in T \tag{6}$$

$$\sum_{j \in J} \sum_{t=k}^{k+6} x_{ijt} \le 6, \quad \forall i \in I,\ k = 1, 2, \ldots, |T| - 6 \tag{8}$$

$$\sum_{t=k}^{k+3} x_{i3t} \le 3, \quad \forall i \in I,\ k = 1, 2, \ldots, |T| - 3 \tag{9}$$

$$\sum_{t \in T} x_{i3t} \ge 3, \quad \forall i \in I \tag{10}$$

$$\left(1 - \sum_{j \in J} x_{ijt}\right) + \sum_{j \in J} x_{ij(t+1)} + \left(1 - \sum_{j \in J} x_{ij(t+2)}\right) \le 2, \quad \forall i \in I,\ t = 1, 2, \ldots, |T| - 2 \tag{13}$$

$$\sum_{j \in J} x_{ijt} + \left(1 - \sum_{j \in J} x_{ij(t+1)}\right) + \sum_{j \in J} x_{ij(t+2)} \le 2, \quad \forall i \in I,\ t = 1, 2, \ldots, |T| - 2 \tag{14}$$
The initial objective function (1) minimizes the total cost of nurses assigned to
shifts. Constraints (2) and (3) ensure that the required numbers of regular nurses and
intern nurses are met, respectively, for each shift on each day. Constraints (4) and
(5) bound the total weekly hours assigned to a nurse using the minimum and
maximum allowed working hours, respectively. Constraint (6) avoids the assign-
ment of more than one shift per day to a nurse. Assigning a day shift followed by a
night shift or a night shift right before a day shift is prevented by constraint (7).
According to constraints (8) and (9), a nurse can work for at most 6 consecutive
days and can be assigned at most 3 consecutive night shifts, respectively. Constraint
(10) ensures that each nurse is assigned at least 3 night shifts in a month. Constraint
(11) avoids shift assignments on days that are not preferred by a nurse, in other
words, over-assignment. Constraint (12) requires the total night shifts assigned to
be at most as many as the total day and evening shifts assigned to a nurse.
Constraints (13) and (14) avoid the “0-1-0” or “1-0-1” types of assignments where a
day on would be between two days off or a day off would be between two days on,
which are both undesired cases from the nurse’s perspective. Constraint (15) defines
the binary decision variables.
In this model, constraints (2)–(10) are hard constraints that cannot be violated
and constraints (11)–(14) are soft constraints that can be violated at a cost in order
to obtain a feasible schedule. In the following section, the goal programming model
is formulated by incorporating the penalty of violating these soft constraints in the
objective function.
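To make the structure of the integer program concrete, the following sketch encodes the objective (1) and a few of the hard constraints in Python with the PuLP modeling library. The intern staffing level used for constraint (3) is a placeholder, and the source formulates the full model in GAMS, not in PuLP.

```python
import pulp

# Placeholder dimensions: 25 regular nurses + 5 interns, 3 shifts, 28 days
R, NI = 25, 5
I, J, T = range(R + NI), range(3), range(28)   # j: 0 = day, 1 = evening, 2 = night
cost = {i: 3 if i < R else 2 for i in I}       # c_i^r = 3, c_i^i = 2 as in the case study

prob = pulp.LpProblem("nurse_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (I, J, T), cat="Binary")

# Objective (1): total cost of assigned shifts
prob += pulp.lpSum(cost[i] * x[i][j][t] for i in I for j in J for t in T)

for i in I:
    for t in T:   # constraint (6): at most one shift per nurse per day
        prob += pulp.lpSum(x[i][j][t] for j in J) <= 1
    for k in range(len(T) - 6):   # constraint (8): at most 6 consecutive working days
        prob += pulp.lpSum(x[i][j][t] for j in J for t in range(k, k + 7)) <= 6

# Constraint (3), simplified: at least one intern on every shift of every day
for j in J:
    for t in T:
        prob += pulp.lpSum(x[i][j][t] for i in range(R, R + NI)) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```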
Solution Methodology
In the proposed model, soft constraints are (11)–(14). In order to penalize the
violation of these soft constraints, appropriate decision variables should be added to
the model as explained below.
where α is the penalty cost for assignment of a shift that a nurse does not prefer and β is the penalty cost for not assigning a shift that a nurse prefers, such that α ≥ β because over-assignment is even more undesirable than under-assignment.
Goal 2: Soft constraint (12) is modified as follows to allow for deviations:

$$\sum_{t \in T} x_{i3t} - \left(\sum_{t \in T} x_{i1t} + \sum_{t \in T} x_{i2t}\right) - d_i^{+} + d_i^{-} = 0, \quad \forall i \in I \tag{18}$$

where d_i^+ and d_i^-, ∀i ∈ I, are the positive and negative deviation variables for the goal of assigning fewer night shifts than day and evening shifts in total. The positive deviation from this goal is penalized by adding the following objective function:

$$\text{Minimize} \sum_{i \in I} d_i^{+} \tag{19}$$
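Continuing the sketch above, the deviation variables of Eq. (18) and the goal term (19) can be added as follows; the unit weight on the goal term is an assumption, since the actual weights in objective (24) are chosen by the planner.

```python
# Goal 2, Eq. (18): night shifts minus day/evening shifts, split into deviations
d_plus = pulp.LpVariable.dicts("d_plus", I, lowBound=0)
d_minus = pulp.LpVariable.dicts("d_minus", I, lowBound=0)
for i in I:
    prob += (pulp.lpSum(x[i][2][t] for t in T)
             - pulp.lpSum(x[i][0][t] + x[i][1][t] for t in T)
             - d_plus[i] + d_minus[i] == 0)

# Objective (19) folded into the weighted sum with an assumed weight of 1
prob.setObjective(prob.objective + pulp.lpSum(d_plus[i] for i in I))
```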
The objective function (24) is the weighted sum of five different objectives:
(i) minimizing the total cost of nurses assigned to shifts as in (1), (ii–v) minimizing
the total violation of soft constraints as shown in (17), (19), (21), and (23),
respectively.
Next, we describe our case study and present the computational results of our
proposed model in terms of various performance measures.
Computational Results
In this section, we provide a numeric example for illustrating the nurse scheduling
model presented above. This example is based on an actual surgical unit of a private
hospital in terms of the number of available nurses and the minimum required total
number of nurses; however, due to the lack of data regarding the number of
surgeries of each type performed, the minimum requirement for each surgery type is
estimated based on an interview with the surgical unit nurses. Consider a surgical
suite in which 30 nurses are employed, 5 of whom are intern nurses who are still in
training. The other 25 nurses are regular nurses with more experience and they have
higher priority in terms of shift preferences and taking time off work. Regular
nurses are split into three groups based on their specialties: 10 nurses are in cardiovascular surgery (s = 1), 8 nurses are in general surgery (s = 2), and 7 nurses are in neurological surgery (s = 3). The planning horizon is four weeks (28 days,
including the weekends). The minimum required number of regular nurses of each
specialty and intern nurses in each shift are provided in Table 1. These numbers are
assumed for each day since there are as many elective surgeries on the weekends as
on the weekdays. The cost of a regular nurse per shift is c_i^r = 3, i = 1, 2, …, 25, and the cost of an intern per shift is c_i^i = 2, i = 26, 27, …, 30.
In order to measure the performance of the model solutions, we define the terms (25)–(27) below. Let ω be the number of nurses who are over-assigned, i.e., who are assigned a shift they did not prefer to work:

$$\omega = \sum_{i \in I\,:\,\exists\, j,t \text{ with } o_{ijt} = 1} 1 \tag{25}$$

Let ρ_o be the ratio of the total number of over-assigned shifts to the total number of assigned shifts, i.e., the ratio of shifts assigned that nurses do not prefer to work on:

$$\rho_o = \frac{\sum_{i,j,t} o_{ijt}}{\sum_{i,j,t} x_{ijt}} \tag{26}$$

Let ρ_u be the ratio of the total number of under-assigned shifts to the maximum possible number of under-assigned shifts, i.e., the total number of shifts that nurses prefer to work on:

$$\rho_u = \frac{\sum_{i,j,t} u_{ijt}}{\sum_{i,j,t} p_{ijt}} \tag{27}$$
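Given binary preference and assignment arrays, the three measures are a few lines of Python; the random data below are placeholders used only to show the computation.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.integers(0, 2, size=(30, 3, 28))   # placeholder preferences p_ijt
x = rng.integers(0, 2, size=(30, 3, 28))   # placeholder assignments x_ijt

o = np.maximum(x - p, 0)   # over-assigned: worked but not preferred (o_ijt)
u = np.maximum(p - x, 0)   # under-assigned: preferred but not worked (u_ijt)

omega = int(np.sum(o.sum(axis=(1, 2)) > 0))  # Eq. (25): nurses with any over-assignment
rho_o = o.sum() / x.sum()                    # Eq. (26)
rho_u = u.sum() / p.sum()                    # Eq. (27)
print(omega, rho_o, rho_u)
```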
The optimization model is formulated using GAMS 24.6 and solved using
CPLEX 12.6 software on a 2.20 GHz Windows laptop computer with 6 GB RAM.
Base Case Scenario 0
In our Base Case Scenario 0, all shift preferences are set to zero, i.e., p_ijt = 0, ∀i ∈ I, j ∈ J, t ∈ T. Assuming that α = β = 1, the optimal value of the objective function (24) in this scenario is z* = 1940, where the total number of shifts assigned is Σ_{i,j,t} x_ijt = Σ_{i,j,t} o_ijt = 506. In this case, the proportion of over-assignment (out of 2520 possible shift assignments) is ρ_o = 100%.
Base Case Scenario 1
The other extreme is having all nurses available on all shifts over the planning horizon, i.e., p_ijt = 1, ∀i ∈ I, j ∈ J, t ∈ T. Assuming that α = β = 1, the optimal value of the objective function (24) is z* = 3451, where, again, Σ_{i,j,t} x_ijt = 506 and Σ_{i,j,t} u_ijt = 2014. In this case, since there cannot be any over-assignment, ρ_o = 0%.

These base case scenarios show us the limits on the objective function value and on the quantity Σ_{i,j,t} o_ijt, as well as on ρ_o, when α = β = 1 at the two extreme nurse preference settings.
Four different Preference Scenarios (PS) are developed to test the performance of the scheduling model. These scenarios, PSn (n = 1, 2, 3, 4), are designed such that n × 20% of all shifts in a month are preferred by nurses; therefore, PS1 is the most restrictive of these scenarios. The problem is solved for these scenarios with different values of the penalty cost α for not meeting the nurse preferences, and all problem instances are solved to optimality within at most 1000 s. We present the optimal solutions of the proposed model for the four scenarios at various α values in Tables 2, 3 and 4 below.
In all of the problem instances, it is observed that the goal associated with constraint (13), the "0-1-0" type of assignment, is never violated, i.e., Σ_{i,t} d_{i,t}^{a+} = 0 in all cases. When PS1 is used, the same optimal schedule is obtained at any α level. In this solution, 78.5% of assignments (397 of 506) are over-assignments, affecting all 30 nurses, as can be seen from Table 2. When PS4 is assumed, the ratio of over-assigned shifts is reduced to 13.2% (67 of 506) and the number of nurses affected by over-assignment is reduced to 18. In PS4, there is also no "1-0-1" type of assignment in the optimal solution.
It is observed that the ratio of over-assigned shifts is significantly reduced to 37.6% for α = 1 when PS2 is assumed, and the minimum possible value for this ratio is 34.2% in this scenario, as shown in Table 3. The number of nurses affected by over-assignment, ω, is reduced to 27 as α is increased. The ratio ρ_o is further reduced to 22.4% when PS3 is used, and its minimum possible value becomes 16.3% in this scenario, as shown in Table 4. The number of nurses affected by over-assignment is reduced to 25 as α is increased.
Additionally, in PS2, at α = 4 and α = 5, three nurses are assigned 8 more night shifts than their total day and evening shifts in the optimal schedule, i.e., Σ_{i∈I} d_i^+ = 24. The sum of these positive deviation variables increases in PS3 to 43 and 46 for the same α values, respectively. Also, when PS3 is used, the number of "1-0-1" type assignments increases as α increases, whereas in PS2 this number is fixed at 2 for all α values.
In Fig. 1, it is shown that the impact of increasing the penalty cost for over-assignments, α, diminishes as the number of preferred shifts increases across the preference scenarios.

Fig. 1 The optimal objective function values for the four preference scenarios at different α levels
Fig. 2 The ratio of over-assignment for the four preference scenarios at different α levels
Conclusions
In this study, we focus on the nurse scheduling problem in a surgical suite where
cardiovascular, general, and neurological surgeries are performed. Shift preferences
of nurses are given the highest priority while meeting the demand-related con-
straints. We develop a multi-objective integer programming model with hard and
soft constraints and solve the model using goal programming. The model is for-
mulated to produce the best possible schedule in terms of meeting nurse preferences
and fair distribution of workload. We illustrate the performance of the proposed
model with an example under various shift preference scenarios. Computational
results show that the multi-objective nature of the model leads to higher job sat-
isfaction for nurses in terms of the performance measures evaluated, by avoiding
the assignment of shifts that are not preferred, by avoiding isolated days on or off,
and by avoiding disproportionate night shift assignments. The proposed model can
easily be used in practice to produce the best possible nurse schedule for a given shift preference scenario by adjusting the penalty cost α.
This model can be extended such that stochastic demand is used as input rather
than deterministic demand so as to include emergency surgeries. Our model already considers a wide set of constraints; however, there are other scheduling constraints in the literature that could easily be added, such as constraints dealing with annual vacations or a minimum number of weekend days off.
Surgical units are especially important for hospital management due to the high
level of medical care provided and the high revenues generated through the
operations. The efficiency and quality of surgical suite operations can be improved by assuring the job satisfaction of nurses, who are among the essential resources.
Fair distribution of workload among nurses in terms of shift schedule and minimum
possible deviation from the shift preferences of each nurse, as demonstrated in this
study, can increase job satisfaction of surgical unit nurses. We believe that hospitals
would benefit from adopting such staff schedules in terms of not only job satis-
faction of the surgical unit nurses, but also the quality of medical care provided
which would result in higher credibility in return.
References
Arthur JL, Ravindran A (2015) A multiple objective nurse scheduling model. AIIE Trans 13
(1):55–60
Azaiez MN, Al Sharif SS (2005) A 0-1 goal programming model for nurse scheduling. Comput
Oper Res 32(3):491–507
Baker KR (1976) Workforce allocation in cyclical scheduling problems: a survey. Oper Res Q
(1970–1977) 27(1):155–167
Belien J, Demeulemeester E (2008) A branch-and-price approach for integrating nurse and surgery
scheduling. Eur J Oper Res 189(3):652–668
Bechtold S, Brusco M, Showalter M (1991) A comparative evaluation of labor tour scheduling
methods. Decis Sci 22(4):683–699
Berrada I, Ferland JA, Michelon P (1996) A multi-objective approach to nurse scheduling with
both hard and soft constraints. Socio-Econ Plann Sci 30(3):183–193
Bradley D, Martin J (1991) Continuous personnel scheduling algorithms: a literature review. J Soc
Health Syst 2(2):8–23
Burke EK, De Causmaecker P, Berghe GV, Van Landeghem H (2004) The state of the art of nurse
rostering. J Sched 7(6):441–499
Cheang B, Li H, Lim A, Rodrigues B (2003) Nurse rostering problems—a bibliographic survey.
Eur J Oper Res 151(3):447–460
Ernst AT, Jiang H, Krishnamoorthy M, Sier D (2004) Staff scheduling and rostering: a review of
applications, methods and models. Eur J Oper Res 153(1):3–27
Lim GJ, Mobasher A, Côté MJ (2012) Multi-objective nurse scheduling models with patient
workload and nurse preferences. Management 2(5):149–160
Marques I, Captivo ME, Vaz Pato M (2015) A bicriteria heuristic for an elective surgery
scheduling problem. Health Care Manage Sci 18(3):251–266
Mobasher A, Lim G, Bard JF, Jordan V (2011) Daily scheduling of nurses in operating suites. IIE
Trans Healthc Sys Eng 1(4):232–246
Oulton JE (2016) The global nursing shortage: an overview of issues and actions. Policy Polit Nurs
Pract 7(3):34–39
Ozkarahan I, Bailey JE (1988) Goal programming model subsystem of a flexible nurse scheduling
support system. IIE Trans 20(3):306–316
Sitompul D, Randhawa S (1990) Nurse scheduling: a state-of-the-art review. J Soc Health Syst 2(1):62–72
Tien JM, Kamiyama A (1982) On manpower scheduling algorithms. SIAM Rev 24(3):275–287
Van den Bergh J, Beliën J, De Bruecker P, Demeulemeester E (2013) Personnel scheduling: a
literature review. Eur J Oper Res 226(3):367–385
Wright DP, Mahar S (2013) Centralized nurse scheduling to simultaneously improve schedule cost
and nurse satisfaction. Omega 41(6):1042–1052
Implementing EWMA Yield Index
for Product Acceptance Determination
in Autocorrelation Between Linear Profiles
Yeneneh Tamirat
Keywords: Yield index · Acceptance sampling plans · Exponentially weighted moving average · Autocorrelation between linear profiles
Introduction
Y. Tamirat (✉)
Department of Business Administration, Asia University, Taichung 41354, Taiwan
e-mail: [email protected]
plan consists of the sample size to be used and the associated acceptance or
rejection criteria.
A profile occurs when a critical-to-quality characteristic is functionally depen-
dent on one or more independent variables. Thus, instead of observing a single
measurement on each unit or product we observe a set of values over a range that,
when plotted, takes the shape of a curve (Montgomery 2013). The curve explains
the possible effect on the dependent variable that might be caused by different levels
of the independent variable. A review of research topics on the monitoring of linear
profiles is provided by Woodall (2007). Noorossana et al. (2011a, b) provided an
comprehensive review of profile monitoring. Under the assumption that the process data are uncorrelated, many studies have addressed the monitoring of
simple linear/nonlinear profiles (see Li and Wang 2010; Noorossana et al. 2010;
Noorossana et al. 2011a, b; Chuang et al. 2013; Ghahyazi et al. 2014). For simple
nonlinear profiles and linear profiles, a process-yield index SpkA with a lower
confidence bound is proposed by Wang and Guo (2014) and Wang (2014),
respectively. However, process data in continuous manufacturing processes are
often autocorrelated. In the presence of autocorrelation between profiles, Wang and
Tamirat (2014) proposed a process-yield index S_pkA,AR(1) and its approximate lower confidence bound (LCB).
In a highly competitive environment, acceptance sampling plans must be appropriately applied. For example, when the required fraction defective is very low, the sample size must be very large in order to adequately reflect the actual lot quality. To tackle this problem, variable sampling plans based on capability indices have been developed by various authors, including Pearn and Wu (2006, 2007), Wu and Pearn (2008), and Wu and Liu (2014). However, the sample size required by such process-capability-based plans would still be very large. For example, for autocorrelated profiles with ρ = 0.5 and n = 4, such a plan requires 1046 profiles at a producer's risk of 0.05 and a consumer's risk of 0.10.
To improve the inspection efficiency, the accumulated quality history from previous lots should be included. The exponentially weighted moving average (EWMA) statistic has been widely used in quality control charts, as it considers both present and past information. The weights decline geometrically with the age of the observations, and the EWMA statistic is known to be efficient at detecting small process shifts (Hunter 1986; Lucas and Saccucci 1990; Čisar and Čisar 2011; Montgomery 2013). The EWMA statistic based on a yield index was first introduced in acceptance sampling by Aslam et al. (2013). Yen et al. (2014) developed a variable sampling plan based on the EWMA yield index S_pk, and Aslam et al. (2015) applied the EWMA statistic to the quality characteristic itself, based on the mean and standard deviation, to develop an acceptance sampling plan. However, these methods consider only a single quality characteristic and cannot be applied to profile data. Furthermore, process autocorrelation may affect the performance of the process yield index. To the best of our knowledge, there is no work on sampling plans based on the yield index for autocorrelation between linear
profiles. The main purpose of this paper is to develop a variable sampling plan based on the yield index S_pkA,AR(1) to deal with lot sentencing of autocorrelated profiles. In this study we propose a new method for the economic appraisal of materials. In the presence of autocorrelation between linear profiles, we present a variable acceptance sampling plan using the EWMA statistic with the yield index. Taking into account the acceptable quality level at the producer's risk and the lot tolerance percent defective at the consumer's risk, a non-linear optimization method is proposed to determine the number of profiles required for inspection and the corresponding acceptance or rejection criteria. The rest of this paper is organized as follows. In the next section, the yield index S_pkA,AR(1) is summarized. Section "Proposed Sampling Plan" describes the proposed sampling plan based on the EWMA statistic. Finally, we offer a conclusion and suggestions for future studies.
In this section, we review the yield index for autocorrelation between linear profiles.
The first-order autocorrelation between linear profiles is modeled by

$$y_{ij} = a + b x_i + \epsilon_{ij}, \quad i = 1, 2, \ldots, n,\ \ j = 1, 2, \ldots, k, \qquad \epsilon_{ij} = \rho\, \epsilon_{i(j-1)} + a_{ij}, \tag{1}$$

where y_ij is the response value at the ith level of the independent variable from the jth profile, x_i is the ith level of the independent variable, n is the number of levels of the independent variable, k is the number of profiles, ε_ij denotes the correlated random error, a is the intercept of the linear profiles, b is the slope of the linear profiles, ρ denotes the autocorrelation coefficient, and a_ij ~ N(0, σ²).
The process yield at the ith level of the independent variable can be derived from the process yield index proposed by Boyles (1994). This index is useful for describing the relationship between manufacturing specifications and actual process performance, and is defined as follows:

$$S_{pki} = \frac{1}{3}\,\Phi^{-1}\!\left[\frac{1}{2}\,\Phi\!\left(\frac{USL_i - \mu_i}{\sigma_i}\right) + \frac{1}{2}\,\Phi\!\left(\frac{\mu_i - LSL_i}{\sigma_i}\right)\right] = \frac{1}{3}\,\Phi^{-1}\!\left[\frac{1}{2}\,\Phi\!\left(\frac{1 - C_{dri}}{C_{dpi}}\right) + \frac{1}{2}\,\Phi\!\left(\frac{1 + C_{dri}}{C_{dpi}}\right)\right] \tag{2}$$

where USL_i and LSL_i are the upper and lower specification limits of the response variable at the ith level of the independent variable, μ_i and σ_i are the process mean and standard deviation at the ith level, C_dri = (μ_i − m_i)/d_i, C_dpi = σ_i/d_i, m_i = (USL_i + LSL_i)/2, d_i = (USL_i − LSL_i)/2, and Φ is the cumulative distribution function of the standard normal distribution.
The index can be estimated by replacing μ_i and σ_i with the sample mean ȳ_i and sample standard deviation S_i:

$$\hat{S}_{pki} = \frac{1}{3}\,\Phi^{-1}\!\left[\frac{1}{2}\,\Phi\!\left(\frac{USL_i - \bar{y}_i}{S_i}\right) + \frac{1}{2}\,\Phi\!\left(\frac{\bar{y}_i - LSL_i}{S_i}\right)\right] = \frac{1}{3}\,\Phi^{-1}\!\left[\frac{1}{2}\,\Phi\!\left(\frac{1 - \hat{C}_{dri}}{\hat{C}_{dpi}}\right) + \frac{1}{2}\,\Phi\!\left(\frac{1 + \hat{C}_{dri}}{\hat{C}_{dpi}}\right)\right]$$
where

$$a_i = \frac{d_i}{\sqrt{2}\,\sigma_i}\left[(1 - C_{dri})\,\phi\!\left(\frac{1 - C_{dri}}{C_{dpi}}\right) + (1 + C_{dri})\,\phi\!\left(\frac{1 + C_{dri}}{C_{dpi}}\right)\right],$$

$$b_i = \phi\!\left(\frac{1 - C_{dri}}{C_{dpi}}\right) - \phi\!\left(\frac{1 + C_{dri}}{C_{dpi}}\right),$$

$$f = 1 - \frac{2}{k(k-1)} \sum_{i=1}^{k-1} (k - i)\,\rho^i, \qquad g = 1 + \frac{2}{k} \sum_{i=1}^{k-1} (k - i)\,\rho^i,$$

$$F = k + 2\sum_{i=1}^{k-1} (k - i)\,\rho^{2i} + \frac{1}{k}\left[k + 2\sum_{i=1}^{k-1} (k - i)\,\rho^{i}\right]^2 - \frac{2}{k}\sum_{i=0}^{k-1} \sum_{j=0}^{k-i} (k - i - j)\,\rho^{i}\rho^{j},$$

and φ is the probability density function of the standard normal distribution.
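A small Python sketch of the index of Eq. (2) and the correction factors f and g, exactly as reconstructed above, may help; the specification limits and parameter values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def spk(usl, lsl, mu, sigma):
    """Process-yield index of Eq. (2) (Boyles 1994)."""
    yield_prob = 0.5 * norm.cdf((usl - mu) / sigma) + 0.5 * norm.cdf((mu - lsl) / sigma)
    return norm.ppf(yield_prob) / 3.0

def f_factor(k, rho):
    return 1 - 2.0 / (k * (k - 1)) * sum((k - i) * rho**i for i in range(1, k))

def g_factor(k, rho):
    return 1 + 2.0 / k * sum((k - i) * rho**i for i in range(1, k))

print(spk(usl=10.0, lsl=4.0, mu=7.1, sigma=0.8))   # roughly 1.24
print(f_factor(25, 0.5), g_factor(25, 0.5))
```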
An acceptance sampling plan must consider two levels of quality: the acceptable quality level (AQL) and the lot tolerance proportion defective (LTPD). The AQL is also called the quality level desired by the consumer. The producer's risk (α) is the risk that the sampling plan will fail to verify an acceptable lot's quality. The LTPD is also called the worst level of quality that the consumer can tolerate, and the probability of accepting a lot of LTPD quality is the consumer's risk (β). An operating characteristic (OC) curve depicts the discriminatory power of an acceptance sampling plan; the designed plan parameters are therefore determined by requiring the OC curve to pass through the two designated points (AQL, 1 − α) and (LTPD, β).
In some situations, the accumulated quality history from previous lots is available. We propose a variable sampling plan using the EWMA statistic. The sampling procedure is described as follows:

Step 1: Choose the producer's risk (α) and the consumer's risk (β). Select the process capability requirements (C_AQL, C_LTPD) at the two risks, respectively.

Step 2: Select a random number of profiles k at the current time t and collect the preceding accepted lots with their yield index values. Then compute the following EWMA sequence, say Z_t, for t = 1, 2, 3, …, T:

$$Z_t = \lambda\, \hat{S}_{pkA,AR(1),t} + (1 - \lambda)\, Z_{t-1},$$

where λ is a smoothing constant that ranges between 0 and 1. The choice of its optimal value is based on minimizing the sum of squared errors, $SSE = \sum_{t=2}^{T} \left(Z_t - \hat{S}_{pkA,AR(1),t}\right)^2$, where $Z_2 = \hat{S}_{pkA,AR(1),1}$ (Hunter 1986). To find the optimal λ value, a simple R program using the DEoptim algorithm is developed (Ardia et al. 2011).

Step 3: Accept the lot from the supplier if Z_t ≥ c, where c is the critical value; otherwise reject it.
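Under the stated initialization (Z_2 equal to the first index value, i.e., a one-step-ahead forecast form), the λ search can be reproduced in a few lines; the index history below is fabricated, and scipy's bounded scalar minimizer stands in for the DEoptim routine used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

s_hat = np.array([1.31, 1.35, 1.28, 1.40, 1.33, 1.37])  # fabricated yield-index history

def forecasts(lam, s):
    """One-step-ahead EWMA forecasts: Z_2 = s_1, Z_{t+1} = lam*s_t + (1-lam)*Z_t."""
    z = np.empty(len(s))
    z[1] = s[0]
    for t in range(1, len(s) - 1):
        z[t + 1] = lam * s[t] + (1.0 - lam) * z[t]
    return z

def sse(lam, s):
    z = forecasts(lam, s)
    return float(np.sum((z[1:] - s[1:]) ** 2))   # SSE = sum_{t>=2} (Z_t - S_hat_t)^2

res = minimize_scalar(sse, bounds=(0.01, 1.0), method="bounded", args=(s_hat,))
lam_opt = res.x
print(lam_opt, sse(lam_opt, s_hat))
```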
The OC function of our proposed plan is derived as follows:

$$P(Z_t \ge c) = P\!\left(\frac{Z_t - E(Z_t)}{\sqrt{\operatorname{Var}(Z_t)}} \ \ge\ \frac{c - S_{pkA,AR(1)} + \sum_{i=1}^{n}\dfrac{a_i(1-f)}{12\,n\,\phi(3S_{pkA,AR(1)})}}{\sqrt{\dfrac{\lambda}{2-\lambda}\cdot\dfrac{\sum_{i=1}^{n}\left[\dfrac{k\,a_i^2 F}{(k-1)^2} + b_i^2\, g\right]}{36\,n^2\,k\,\phi(3S_{pkA,AR(1)})^2}}}\right) \tag{6}$$

where

$$E(Z_t) = S_{pkA,AR(1)} + \sum_{i=1}^{n} \frac{a_i(1-f)}{12\,n\,\phi(3S_{pkA,AR(1)})}$$

and

$$\operatorname{Var}(Z_t) = \frac{\lambda}{2-\lambda}\cdot\frac{\sum_{i=1}^{n}\left[\dfrac{k\,a_i^2 F}{(k-1)^2} + b_i^2\, g\right]}{36\,n^2\,k\,\phi(3S_{pkA,AR(1)})^2}.$$
The parameters of our proposed plan can be determined through the non-linear optimization problem given in Eq. (9), where the number of profiles (k) and the critical value (c) are the decision variables. For a particular sampling plan, the producer is interested in the probability that a type I error is committed. Using Eq. (9), the producer can find a sampling plan which guarantees that the lot acceptance probability is larger than the desired confidence level, 1 − α, at the acceptable quality level (C_AQL). Concurrently, the consumer desires that, based on the sample information, the probability of accepting a bad-quality population is smaller than the risk β at the lot tolerance proportion defective (C_LTPD). That is, S_pkA,AR(1) = C_AQL for the producer and S_pkA,AR(1) = C_LTPD for the consumer.
$$\text{Minimize}\quad k\qquad(9a)$$

Subject to

$$1-\Phi\!\left(\frac{c-C_{AQL}-\sum_{i=1}^{n}\frac{a_i(1-f)}{12n\,\phi(3C_{AQL})}}{\sqrt{\dfrac{\lambda\sum_{i=1}^{n}\left[\frac{k a_i^{2}F}{(k-1)^{2}}+b_i^{2}g\right]}{(2-\lambda)\,36n^{2}k\,\phi(3C_{AQL})^{2}}}}\right)\ \ge\ 1-\alpha\qquad(9b)$$

$$1-\Phi\!\left(\frac{c-C_{LTPD}-\sum_{i=1}^{n}\frac{a_i(1-f)}{12n\,\phi(3C_{LTPD})}}{\sqrt{\dfrac{\lambda\sum_{i=1}^{n}\left[\frac{k a_i^{2}F}{(k-1)^{2}}+b_i^{2}g\right]}{(2-\lambda)\,36n^{2}k\,\phi(3C_{LTPD})^{2}}}}\right)\ \le\ \beta\qquad(9c)$$
Given ρ, λ, C_AQL, C_LTPD, α, and β as inputs, we evaluate the constraints (9b) and (9c), where the objective is to minimize the number of profiles. A search procedure is used to determine the plan parameters. First, 10,000 combinations of k and c are randomly generated, where k ranges from 2 to 3000 and c follows a uniform distribution between C_LTPD and C_AQL. This procedure is repeated 1,000 times to determine the optimal parameters.
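A minimal sketch of this random search is given below. The function oc_prob, which must return the lot acceptance probability of Eq. (6) at a given quality level, is assumed to be supplied by the user (it requires the quantities $a_i$, $b_i$, $f$, $g$, and $F$ defined earlier); the skeleton itself only encodes constraints (9b) and (9c) and the minimization of k.

```python
import numpy as np

def find_plan(oc_prob, C_AQL, C_LTPD, alpha=0.05, beta=0.10,
              n_combos=10_000, n_repeats=1_000, k_max=3000, seed=1):
    """Random search over (k, c) pairs for the smallest feasible number of
    profiles k, following constraints (9b) and (9c).  `oc_prob(k, c, C)` must
    return the lot-acceptance probability of Eq. (6) at quality level C."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_repeats):
        ks = rng.integers(2, k_max + 1, size=n_combos)
        cs = rng.uniform(C_LTPD, C_AQL, size=n_combos)  # critical values between the two levels
        for k, c in zip(ks, cs):
            if best is not None and k >= best[0]:
                continue  # cannot improve on the incumbent plan
            # (9b): accept good lots with probability at least 1 - alpha
            # (9c): accept bad lots with probability at most beta
            if oc_prob(k, c, C_AQL) >= 1 - alpha and oc_prob(k, c, C_LTPD) <= beta:
                best = (int(k), float(c))
    return best  # (k*, c*), or None if no feasible pair was found
```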
To investigate the performance of the proposed method, a computer program written in the R language is used. In Tables 1 and 2, we tabulate the sampling plan parameters for various combinations of the two quality levels (C_AQL, C_LTPD) at α = 0.05 and β = 0.10. The sampling parameters are found for given λ = 0.10, 0.20, 0.50, and 1.0, considering two different autocorrelation coefficients ρ = 0.5 and 0.75 and n = 4 levels of the independent variable.
Table 1 Plan parameters using the single sampling plan on the EWMA yield index under various (λ, C_AQL, C_LTPD) at α = 0.05, β = 0.10, ρ = 0.5, and n = 4

C_AQL = 1.33      λ = 0.1        λ = 0.2        λ = 0.5         λ = 1.0
C_LTPD            k     c        k     c        k      c        k      c
1.15              36    1.2534   149   1.2348   1313   1.2295   2560   1.2347
1.10              15    1.2550   57    1.2159   499    1.2025   2105   1.2061
1.05              8     1.2691   28    1.2027   228    1.1771   1720   1.1749
1.00              5     1.2771   16    1.1948   118    1.1521   1046   1.1452

C_AQL = 1.5       λ = 0.1        λ = 0.2        λ = 0.5         λ = 1.0
C_LTPD            k     c        k     c        k      c        k      c
1.25              11    1.4317   42    1.3803   353    1.3620   2120   1.3581
1.20              7     1.4314   22    1.3708   172    1.3366   1518   1.3320
1.15              4     1.4607   13    1.3687   94     1.3128   821    1.3044
1.10              3     1.4643   8     1.3703   58     1.2901   487    1.2771
1.05              3     1.4451   6     1.3708   36     1.2710   306    1.2507

C_AQL = 2.0       λ = 0.1        λ = 0.2        λ = 0.5         λ = 1.0
C_LTPD            k     c        k     c        k      c        k      c
1.60              4     1.9094   9     1.8580   58     1.7907   493    1.7768
1.55              3     1.9262   6     1.8646   36     1.7712   304    1.7503
1.50              2     1.9317   5     1.8610   25     1.7523   201    1.7231
1.45              2     1.9488   4     1.8494   18     1.7345   140    1.6964
1.40              2     1.8900   3     1.8368   13     1.7249   98     1.6717
Table 2 Plan parameters using the single sampling plan on the EWMA yield index under various (λ, C_AQL, C_LTPD) at α = 0.05, β = 0.10, ρ = 0.75, and n = 4

C_AQL = 1.33      λ = 0.1        λ = 0.2        λ = 0.5         λ = 1.0
C_LTPD            k     c        k     c        k      c        k      c
1.15              162   1.2447   696   1.2325   2680   1.2279   2761   1.2268
1.10              64    1.2404   268   1.2109   2176   1.2013   2314   1.2102
1.05              31    1.2498   124   1.1937   1074   1.1752   1920   1.1565
1.00              18    1.2674   67    1.1823   558    1.1491   1243   1.1379

C_AQL = 1.5       λ = 0.1        λ = 0.2        λ = 0.5         λ = 1.0
C_LTPD            k     c        k     c        k      c        k      c
1.25              48    1.4113   196   1.3726   1679   1.3611   2612   1.3578
1.20              25    1.4248   97    1.3575   816    1.3350   2481   1.3290
1.15              14    1.4499   54    1.3500   444    1.3096   2123   1.2856
1.10              9     1.4710   33    1.3483   263    1.2851   1753   1.2642
1.05              6     1.4851   22    1.3514   165    1.2629   1483   1.2493

C_AQL = 2.0       λ = 0.1        λ = 0.2        λ = 0.5         λ = 1.0
C_LTPD            k     c        k     c        k      c        k      c
1.60              9     1.9716   34    1.8482   263    1.7856   2210   1.7876
1.55              6     1.9895   22    1.8514   166    1.7632   1430   1.7489
1.50              4     1.9991   15    1.8593   110    1.7425   951    1.7214
1.45              3     1.9793   11    1.8680   77     1.7245   655    1.6959
1.40              2     1.9974   8     1.8658   56     1.7070   459    1.6682
The number of profiles required for lot sentencing with a smoothing parameter λ < 1 is smaller than that of the traditional single sampling plan (λ = 1), so the proposed plan is more economical. The smaller the value of λ, the lower the number of profiles required. In practice, relatively small values of λ generally work best when the EWMA is the most appropriate model.
For instance, when C_AQL = 1.5, C_LTPD = 1.2, and n = 4, at given values of α = 0.05, β = 0.10, and ρ = 0.5, the plan parameters (k and c) obtained with λ = 0.10, 0.20, and 0.50 are (7 and 1.4314), (22 and 1.3708), and (172 and 1.3366), respectively. In addition, with a given ρ = 0.75, the plan parameters (k and c) obtained are (25 and 1.4248), (97 and 1.3575), and (816 and 1.3350), respectively. Increasing the autocorrelation coefficient significantly increases the number of profiles required to achieve the desired levels of protection for both producers and consumers.
Conclusion

In this paper, we developed an acceptance sampling plan based on the process yield index S_pkA,AR(1) to deal with lot sentencing when there is autocorrelation between profiles. Our proposed method considers the quality history of previous lots together with the current lot; as a result, the required sample size is smaller than that of the traditional single sampling plan. With λ = 1, the sampling plan based on the EWMA statistic reduces to the traditional single sampling plan. In addition, we tabulated the required number of profiles k and the critical acceptance value c for various combinations of the two quality levels (C_AQL, C_LTPD) at α = 0.05 and β = 0.10, with λ = 0.10, 0.20, 0.50, and 1.0 and ρ = 0.5 and 0.75 under n = 4. The proposed sampling plan thus provides a practical alternative for implementing acceptance sampling.
References
Ardia D, Boudt K, Carl P, Mullen KM, Peterson BG (2011) Differential evolution with DEoptim:
an application to non-convex portfolio optimization. R J 3(1):27–34
Aslam M, Wu CW, Azam M, Jun CH (2013) Variable sampling inspection for resubmitted lots
based on process capability index Cpk for normally distributed items. Appl Math Model 37
(3):667–675
Aslam M, Azam M, Jun CH (2015) A new lot inspection procedure based on exponentially
weighted moving average. Int J Syst Sci 46(8):1392–1400
Boyles RA (1994) Process capability with asymmetric tolerances. Commun Stat-Simul Comput 23
(3):615–635
Chuang SC, Hung YC, Tsai WC, Yang SF (2013) A framework for nonparametric profile
monitoring. Comput Ind Eng 64(1):482–491
Čisar P, Čisar SM (2011) Optimization methods of EWMA statistics. Acta Polytechnica Hungarica
8(5):73–87
Ghahyazi ME, Niaki STA, Soleimani P (2014) On the monitoring of linear profiles in multistage
processes. Qual Reliab Eng Int 30(7):1035–1047
Hunter JS (1986) The exponentially weighted moving average. J Qual Technol 18(4):203–210
Li Z, Wang Z (2010) An exponentially weighted moving average scheme with variable sampling
intervals for monitoring linear profiles. Comput Ind Eng 59(4):630–637
Lucas JM, Saccucci MS (1990) Exponentially weighted moving average control schemes:
properties and enhancements. Technometrics 32(1):1–12
Montgomery DC (2013) Statistical quality control: a modern introduction. Wiley, Singapore
Noorossana R, Eyvazian M, Vaghefi A (2010) Phase II monitoring of multivariate simple linear
profiles. Comput Ind Eng 58(4):563–570
Noorossana R, Saghaei A, Amiri A (2011a) Statistical analysis of profile monitoring. Wiley, New
York
Noorossana R, Vaghefi A, Dorri M (2011b) Effect of non-normality on the monitoring of simple
linear profiles. Qual Reliab Eng Int 27(4):425–436
Pearn WL, Wu CW (2006) Critical acceptance values and sample sizes of a variables sampling
plan for very low fraction of defectives. Omega 34(1):90–101
Pearn WL, Wu CW (2007) An effective decision making method for product acceptance. Omega
35(1):12–21
Schilling EG, Neubauer DV (2009) Acceptance sampling in quality control. Chapman & Hall,
Boca Raton, FL
Wang FK (2014) Measuring the process yield for simple linear profiles with one-sided
specification. Qual Reliab Eng Int 30(8):1145–1151
Wang FK, Tamirat Y (2014) Process yield analysis for autocorrelation between linear profiles.
Comput Ind Eng 71(1):50–56
Wang FK, Guo YC (2014) Measuring process yield for nonlinear profiles. Qual Reliab Eng Int 30
(8):1333–1339
Woodall WH (2007) Current research on profile monitoring. Produção 17(3):420–425
Wu CW, Pearn WL (2008) A variables sampling plan based on Cpmk for product acceptance
determination. Eur J Oper Res 184(2):549–560
Wu CW, Liu SW (2014) Developing a sampling plan by variables inspection for controlling lot
fraction of defectives. Appl Math Model 38(9):2303–2310
Yen CH, Aslam M, Jun CH (2014) A lot inspection sampling plan based on EWMA yield index.
Int J Adv Manuf Technol 75(5–8):861–868
Evaluating Airline Network Robustness
Using Relative Total Cost Indices
Abstract To the best of our knowledge, this research is the first to quantify airline network robustness in the presence of reversible leg capacities and alternative flights. In this study, we try to recognize the critical legs by changing the functional capacity of flights. We also attempt to gauge the behavior of the flight network under shifting leg capacities by proposing a new leg cost function. In addition, we show how to capture the robustness of an airline network in the case of variable flight capacities. Relative Total Cost Indices are used to assess airline network robustness under behavior associated with both User-Optimization and System-Optimization. From a different point of view, the variability of passengers' route preferences is the main subject of this article. This paper may shed light on the robustness of real-life networks, not only for the particular case of airlines but also for systems sharing similar topological properties. The paper presents a numerical case study with real data from an airline in Turkey for illustration purposes.
Introduction
Networks are complex, typically large-scale systems, and their formal study has attracted much interest from a plethora of scientific disciplines (Bazargan 2010). A broad variety of practices in the real world can be explained as complex or heterogeneous networks, such as postal networks, energy distribution networks, and
P. A. Sarvari (&)
Luxembourg Institute of Science and Technology, Esch-Sur-Alzette, Luxembourg
e-mail: [email protected]
F. Calisir
Industrial Engineering Department, Management Faculty, Istanbul Technical University,
Istanbul, Turkey
e-mail: [email protected]
routing systems. The deterioration of air carriers and airports over time, as well as political decisions, leads to time-consuming and costly connection flights; a lack of flights and poor service quality affect passengers' decision-making (Bazargan 2010). That is why we introduce a new procedure for evaluating the robustness of an airline network based on the RTCI for the transportation system, in which leg variation is captured through a uniform link capacity ratio. An airline network consists of a hub and spokes, and every deviation in leg or flight capacity changes the robustness of the whole network.
The paper is organized as follows. In Section “Relative Total Cost Index”, the relative total cost index is proposed. In Section “Principal and Components of RTCI”, we explain the RTCI that can be used to evaluate transportation network robustness and which allows either U-O or S-O travel behavior. In Section “Assessment of Airline Network Robustness”, for the first time, we assess airline network robustness by reducing and increasing flight capacity using a network robustness measure. In Section “Case Study”, a case study and related discussion on a partial network of an airline are presented, and finally, Section “Conclusion” gives a brief closure.
$$\begin{aligned}
\text{Min}\quad & \sum_{a\in A}\int_{0}^{f_a} t_a(y)\,dy\\
\text{s.t.}\quad & \sum_{p\in P_w} x_p = d_w,\quad\forall w,\\
& f_a=\sum_{p\in P} x_p\,\delta_{ap},\\
& x_p\ge 0,
\end{aligned}\qquad(1)$$
Considering fc_a as the lowest ticket price for the a-th flight operated by the a-th plane, the total cost for a flight selected by the passenger is calculated via Eq. (3), where a, b, and k are congestion rates (positive coefficients that are unique for every field and company):

$$t_a=fc_a\left(\frac{k\,f_a}{lfc_a}\right)^{b}\qquad(3)$$
On the other hand, there are two types of flights: transshipment flights and connection flights. If the a-th plane flies a direct flight, the cost function from the system's viewpoint can be calculated by Eq. (4); otherwise, Eq. (5) covers the connection flight as well, where f′c_a is the least flight cost for the a-th plane, arc_cost (arc1_cost, arc2_cost) denotes the cost of the traversed arc(s), and tsc is the transshipment cost (note: connections are allowed only for flights passing through the hub airport):

$$\hat{t}_a=f'c_a\left(\frac{k\,f_a}{lfc_a}\right)^{b}+arc\_cost\qquad(4)$$

$$\hat{t}_a=f'c_a\left(\frac{k\,f_a}{lfc_a}\right)^{b}+arc1\_cost+arc2\_cost+tsc\qquad(5)$$
The network performance index is then defined as

$$\varepsilon=\frac{1}{n_w}\sum_{w}\frac{d_w}{\lambda_w},\qquad(6)$$

where
n_w is the number of O-D pairs in the network,
λ_w is the cost of the most reasonable (shortest) path for O-D pair w, and
d_w is the total flight demand between O-D pair w.
The total cost of the network, TC, is specified by Eq. (7):

$$TC=\sum_{g\in A}\hat{t}_g=\sum_{g\in A} t_g(f_g)\,f_g\qquad(7)$$

Suppose g ∈ A is an arc of the network and ψ({g}) is the relative total cost increase of G; on the condition of eliminating {g} from the network, the relative total cost increase equals Eq. (8):

$$\psi(\{g\})=\frac{TC(G-\{g\})-TC(G)}{TC(G)}\qquad(8)$$

where TC(G) is the total cost of the network G and TC(G − {g}) is the total cost of the network G − {g}. Because the total cost can be derived under either U-O or S-O behavior, Eqs. (9) and (10) follow, where Eq. (9) is the relative total cost derived with U-O and Eq. (10) is the relative total cost derived with S-O:

$$\psi_{UO}(\{g\})=\frac{TC_{UO}(G-\{g\})-TC_{UO}(G)}{TC_{UO}(G)}\qquad(9)$$

$$\psi_{SO}(\{g\})=\frac{TC_{SO}(G-\{g\})-TC_{SO}(G)}{TC_{SO}(G)}\qquad(10)$$
Note: the functions mentioned above distinguish the critical nodes. On the condition that g has been affected by capacity changes, the relative total cost indices become:

$$\psi_{\gamma}(g)=\frac{TC(\gamma_g)-TC}{TC}\qquad(11)$$

$$\psi_{\alpha}(g)=\frac{TC(\alpha_g)-TC}{TC}\qquad(12)$$

where γ_g and α_g denote the reduced and inflated capacities of arc g, respectively.
To evaluate network robustness, let us decrease the legs' (flights') carrying capacity at a fixed rate and capture the network efficiency measures under this reduction. If the original capacity of a leg is c_g and γ (γ ∈ (0, 1]) is the capacity reduction rate, then γc_g is the leg's decreased capacity and c_g − γc_g is the amount of the reduction. The robustness measure of network G is R_γ (Nagurney 2010), with:

d    demand vector of network G
t    flight cost function
c    flight capacity vector
γ    flight capacity reduction rate
ε    network performance index when the capacity is c
ε_γ  network performance index when the capacity is decreased to γc

Provided that the performance index of the network with capacity γc approximately equals that with capacity c, the network is robust (Nagurney and Qiang 2008a, b). On the condition that there is just one flight between an O-D pair and the assumed congestion rate is b, the robustness upper bound is γ^b × 100%, and the robustness measure of network G is R_γ:
$$R_{\gamma}=\frac{\gamma^{b}\left(c_g^{b}+k\,d_w^{b}\right)}{\gamma^{b}c_g^{b}+k\,d_w^{b}}\times 100\%\qquad(14)$$

For the whole network, with c = c_a + c_b + ⋯ + c_n,

$$R_{\gamma}=\frac{\gamma c+k\,\gamma\,d_w}{\gamma c+k\,d_w}\times 100\%\qquad(15)$$
To evaluate network robustness under capacity expansion, let us increase the legs' (planes') carrying capacity at a fixed rate and capture the network efficiency measures under this inflation. If the original capacity of a leg is c_g and α (α ≥ 1) is the capacity inflation rate, then αc_g is the leg's increased capacity and αc_g − c_g is the amount of the inflation for the leg (flight). The robustness measure of network G can be calculated by Eq. (16):

$$R_{\alpha}=R_{\alpha}(G,d,t,c,\alpha)=\frac{\varepsilon_{\alpha}}{\varepsilon}\times 100\%\qquad(16)$$
The relative total cost indices for network G under U-O and S-O are given below (Boyce et al. 2004; Konnov et al. 2007):

$$\psi_{\gamma}^{UO}=\psi_{\gamma}^{UO}(G,d,t,c,\gamma)=\frac{TC_{UO}^{\gamma}-TC_{UO}}{TC_{UO}}\times 100\%\qquad(18)$$

$$\psi_{\gamma}^{SO}=\psi_{\gamma}^{SO}(G,d,t,c,\gamma)=\frac{TC_{SO}^{\gamma}-TC_{SO}}{TC_{SO}}\times 100\%\qquad(19)$$
When the leg capacity is increased at a fixed rate, the resulting change in the total network cost is evaluated. Here the leg capacity is c_g, α (α ≥ 1) is the capacity inflation rate, and αc_g − c_g is the inflated leg (flight) capacity amount. Equation (20) presents the relative total cost index for the network G using U-O, where (1 − α)/α × 100% is the desired lower bound:

$$\psi_{\alpha}^{UO}=\left(1-\frac{\alpha c+k\,d_w}{\alpha c+k\,\alpha\,d_w}\right)\times 100\%\qquad(20)$$
Case Study
The capacity change of some links in the network does not affect the total cost considerably, but the capacity change of other links can affect the sustainability of the whole network: even a very small change may cause large increases in the total cost of travel. Such sensitive links are called critical links.
The data for a partial network of an airline in Turkey are illustrated in Table 1; there are five airports in five different cities, and Ankara is the hub node. We are interested in assessing the robustness of this network with respect to its critical legs for the supply and demand sets between Istanbul-Antalya and Istanbul-Trabzon. The firm's average load factor policy is 90%. The transshipment cost is $7 per passenger per hour. The average ticket price is $17 per path (excluding taxes). The other information about the O-Ds is given in Table 1.
In order to solve the problem, the following steps are used, based on the formulations above.
• Step 1. Apply the leg cost function.
• Step 2. Derive f_g via the Variational Inequality and trip assignment (the code can be requested from the authors by e-mail).
• Step 3. Identify the critical paths from the results.
Table 1 A partial flight data of the airline network

O/D       Flight count   Plane type         Demand
IST-TRZ   5              1, 3, 5            1700
IST-ANK   38             1, 2, 3, 4, 8, 9   12000
IST-KNY   3              2, 3               1400
IST-ANT   10             1, 2, 5, 4         4200
ANK-ANT   2              8                  1150
ANK-TRZ   2              4                  780
TRZ-ANT   2              6                  750
KNY-ANT   1              5                  330
In order to recognize critical links, it is necessary to use the relative total cost index (RTCI), but first the total cost must be calculated. Using the flow quantities, the total cost of each link was calculated for five different capacity reduction and capacity inflation rates; the results are given in Tables 2 and 5, respectively. The relative total costs obtained with the U-O and S-O models, computed with Eqs. (18) and (19), are given in Tables 3 and 6, respectively. In this study, the network sustainability under U-O is demonstrated in all dimensions by reducing or increasing the connection capacities. Table 4 illustrates the robustness variation of the network considering reductions and inflations over the whole network (Tables 5 and 6).
We examined the sustainability of the entire network using the Relative Total Cost Indices obtained with U-O and S-O in Table 7. In the user-optimality approach, if the capacities of all connections of the network are increased at the rate α = 1.2, the total cost of the network decreases the most, and therefore the network can be made more sustainable. In the system-optimality approach, the corresponding rate is α = 1.4. Taking the results of the above analysis to capture the robustness conditions of the network based on the proposed assessment approach leads us to the following recommendations:
Table 2 Total cost with U-O and the rates of inflation and reduction of leg capacities

TC_γ      γ = 0   γ = 0.2   γ = 0.4   γ = 0.6   γ = 0.8
IST-TRZ   3567    3590      3605      3700      3945
IST-ANK   4044    4030      4060      3840      3765
IST-KNY   4900    4900      4900      4900      4900
IST-ANT   5442    5545      5625      5664      5619
ANK-ANT   5209    5209      5199      5199      5091
ANK-TRZ   3416    3416      3416      3416      3416
TRZ-ANT   6309    6309      6309      6309      6309
KNY-ANT   2670    2670      2670      2670      2670

TC_α      α = 1   α = 1.2   α = 1.4   α = 1.6   α = 1.8
IST-TRZ   3567    3472      3357      3158      3158
IST-ANK   4044    4190      4230      4304      4370
IST-KNY   4900    4900      4900      4900      4900
IST-ANT   5442    5012      5230      5307      5411
ANK-ANT   5209    5400      5469      5498      5502
ANK-TRZ   3416    3128      3139      3141      3260
TRZ-ANT   6309    6309      6302      6309      6309
KNY-ANT   2670    2670      2670      2670      2670
Table 3 Relative total cost with U-O and the rates of inflation and reduction of leg capacities

ψ_γ       γ = 0   γ = 0.2   γ = 0.4   γ = 0.6   γ = 0.8
IST-TRZ   0       0.0065    0.011     0.037     0.1
IST-ANK   0       −0.004    0.0039    −0.05     −0.069
IST-KNY   0       0         0         0         0
IST-ANT   0       0.0005    0.033     0.04      0.032
ANK-ANT   0       0         −0.002    −0.002    −0.02
ANK-TRZ   0       0         0         0         0
TRZ-ANT   0       0         0         0         0
KNY-ANT   0       0         0         0         0

ψ_α       α = 1   α = 1.2   α = 1.4   α = 1.6   α = 1.8
IST-TRZ   0       −0.026    −0.058    −0.11     −0.11
IST-ANK   0       0.036     0.046     0.064     0.08
IST-KNY   0       0         0         0         0
IST-ANT   0       −0.08     −0.039    −0.02     −0.005
ANK-ANT   0       0.036     0.05      0.055     0.056
ANK-TRZ   0       −0.08     −0.081    −0.08     −0.045
TRZ-ANT   0       0         0         0         0
KNY-ANT   0       0         0         0         0
Table 4 Network robustness with whole-network capacity changes via U-O, using b = 4, a = 1, k = 0.15

          R_γ              R_α
γ = 0     0       α = 1    0
γ = 0.2   0.988   α = 1.2  1.1432
γ = 0.4   1.054   α = 1.4  1.1328
γ = 0.6   0.941   α = 1.6  1.1556
γ = 0.8   1.014   α = 1.8  1.1437
Table 5 Total cost with S-O and the rates of inflation and reduction of leg capacities

TC_γ      γ = 0     γ = 0.2   γ = 0.4   γ = 0.6   γ = 0.8
IST-TRZ   3847      3870      3885      3980      4225
IST-ANK   4324      4310      4340      4120      4045
IST-KNY   5180      5180      5180      5180      5180
IST-ANT   5722      5825      5905      5944      5899
ANK-ANT   5489      5489      5479      5479      5371
ANK-TRZ   3696      3696      3696      3696      3696
TRZ-ANT   6589      6589      6589      6589      6589
KNY-ANT   2950      2950      2950      2950      2950

TC_α      α = 1     α = 1.2   α = 1.4   α = 1.6   α = 1.8
IST-TRZ   335,298   32,984    307,924   293,694   287,378
IST-ANK   380,136   39,805    38,948    400,272   39,767
IST-KNY   4606      4655      4508      4557      4459
IST-ANT   511,548   47,614    482,904   493,551   492,401
ANK-ANT   489,646   5130      5037      511,314   500,682
ANK-TRZ   321,104   29,716    288,876   292,113   29,666
TRZ-ANT   593,046   599,355   579,728   586,737   574,119
KNY-ANT   25,098    25,365    24,564    24,831    24,297
Table 6 Relative total cost with S-O and the rates of inflation and reduction of leg capacities

ψ_γ       γ = 0   γ = 0.2   γ = 0.4    γ = 0.6    γ = 0.8
IST-TRZ   0       0.00597   0.009877   0.034572   0.098258
IST-ANK   0       −0.0032   0.0037     −0.0471    −0.0645
IST-KNY   0       0         0          0          0
IST-ANT   0       0.0180    0.031981   0.038797   0.030933
ANK-ANT   0       0         −0.00182   −0.00182   −0.02149
ANK-TRZ   0       0         0          0          0
TRZ-ANT   0       0         0          0          0
KNY-ANT   0       0         0          0          0

ψ_α       α = 1   α = 1.2   α = 1.4    α = 1.6    α = 1.8
IST-TRZ   0       −0.0162   −0.0816    −0.1240    −0.1429
IST-ANK   0       0.04712   0.0245     0.0529     0.0461
IST-KNY   0       0.01063   −0.0212    0.0106     −0.0319
IST-ANT   0       −0.0692   −0.0559    −0.0351    0.0374
ANK-ANT   0       0.04769   0.0287     0.0442     0.0225
ANK-TRZ   0       −0.0745   −0.1003    −0.0902    −0.07612
TRZ-ANT   0       0.01063   −0.0224    −0.0829    −0.03191
KNY-ANT   0       0.01063   −0.0212    −0.0106    −0.0319
Conclusion
Networks form the infrastructure upon which the operation of economies and societies depends. The networks that form the solid backbone of the modern age include transportation networks, which support the flows of vehicles from origins to destinations. This paper provides an approach to the assessment of network robustness through proper tools that quantify network performance and identify the importance of network segments, such as nodes and links. We illustrated how rigorously formed and well-defined system measures can capture not only the network topology underlying a particular system, but also the primary behavior of decision-makers, the resulting issues, and the affected expenses in the presence of demands for resources. In this paper, for the first time, we analyzed the leg and flight capacity variations of an airline, proposing a modified leg cost function from a different perspective. We tried to identify critical legs by changing the functional capacities of the air network components. In addition, we demonstrated how to capture the robustness of an airline network in the cases of decreasing and increasing capacities. Last but not least, we used Relative Total Cost Indices (RTCI) to assess air network robustness for behavior associated with both User-Optimization and System-Optimization, while passengers' route preference behavior was the main subject. Future work will use traffic counts to update the O/D matrix to obtain better results.
References
Abdelghany KF, Abdelghany AF, Ekollu G (2008) An integrated decision support tool for airlines
schedule recovery during irregular operations. Eur J Oper Res 185:825–848
Bazaraa MS, Sherali HD, Shetty CM (2006) Nonlinear programming: theory and algorithms 853
Bazargan M (2010) Airline operations and scheduling, 2nd edn. Ashgate Publishing Company
Wei D, Deng X, Zhang X, Deng Y, Mahadevan S (2013) Identifying influential nodes in weighted
networks based on evidence theory. Phys Stat Mech Appl 392(10):2564–2575
Wu CL (2006) Improving airline network robustness and operational reliability by sequential
optimisation algorithms. Netw Spat Econ 6(3–4):235–251
Wu B, Yan XP, Wang Y, Wei XY (2016) Quantitative method to human reliability assessment for
maritime accident 16(4):24–30
Yan G, Zhou T, Hu B, Fu Z-Q, Wang B-H (2006) Efficient routing on complex networks. Phys
Rev E Stat Nonlin Soft Matter Phys 73:46108
Zhang J, Bin Cao X, Du WB, Cai KQ (2010) Evolution of Chinese airport network. Phys A Stat
Mech Appl 389(18):3922–3931
A Two-Phase Optimization Approach
for Reducing the Size of the Cutting
Problem in the Box-Production Industry:
A Case Study
Abstract In this study, the cutting problem, one of the main problems in the box-production industry, is discussed. The cutting problem refers to the problem of dividing a piece of rectangular raw material, which is usually large, into smaller pieces to produce various products. Cutting problems are NP-hard. Numerous studies offering good solutions to these problems have been conducted over the past few years. In the present study, considering the complexity of the problem, a model reflecting the nature of the problem is proposed and a new two-phase solution approach is suggested. Utilizing the proposed method significantly reduces the size of the problem and simplifies the applicability of the solution approach in real life. Furthermore, to evaluate the efficiency and utilization of the proposed method, its application in a specific company is tested. Finally, the performance of the method is calculated and its use is compared with the company's traditional method.
In several industrial applications such as the wood, paper, and glass industries, it is necessary to cut rectangular raw materials into smaller rectangular pieces with specific measures such that the amount of waste is minimized (Russo et al. 2014). To date, numerous studies have been conducted to investigate the best method
referred to as bin packing problems, two-dimensional cutting problems (2DCP), or
two-dimensional strip packing problems in the literature. Most of the investigations
of these problems are devoted to cases where the items to be packed have a fixed
orientation and are not rotatable. In other words, a set of rectangular items (prod-
ucts) defined by their width and height is given. Having an unlimited number of
identical rectangular raw materials (objects) of certain width and height, the
objective is to allocate the items to a minimum number of the objects or, identically,
to divide the objects into smaller pieces such that the maximum number of items is
delivered with minimum wastage. With no loss of generality, it is assumed that all
input data are positive integers and that the dimensions of the items never exceed those of the objects. This problem is NP-hard (Lodi et al. 2002).
Gilmore and Gomory were the first contributors to model two-dimensional
packing problems. They proposed a column generation approach based on the
enumeration of all subsets of items (patterns) such that they can be packed into a
single object (Gilmore and Gomory 1965). Continuing in this line, Beasley associ-
ated the concept of profit for each item to be packed in two-dimensional cutting
problems with the aim of packing the subset of items with the maximum profit into a
single object (Beasley 1985b). Hadjiconstantinou and Christofides (1995) proposed
a similar model for this problem. Both models provide upper bounds that benefit from the Lagrangian relaxation and sub-gradient optimization method.
Later, Scheithauer and Terno (1996) introduced raster points constituting a subset of
the discretization points. These raster points are capable of being used in an exact
dynamic programming algorithm without losing the optimality (Beasley 1985a).
Working on Beasley’s idea, Cintra et al. (2008) proposed an exact dynamic pro-
gramming procedure that simplifies the computation of the knapsack function and
provides an efficient procedure for the computation of the discretization points.
Additionally, their approach reduces the number of discretization points, introducing an idea which partially recalls the raster points. Kang and Yoon (2011) sug-
gested a branch and bound algorithm for Unconstrained Two Dimensional Cutting
Problems (U2DCP), which is amongst the best algorithms proposed for this category
of problem. Moreover, they performed a pre-processing procedure before running
the algorithm, with the aim of reducing the number of valid pieces for entering the
process which is independent from the main solving approach. Recently, a two-phase
heuristic for the non-guillotine case of U2DCP was proposed by Birgin et al. (2012);
it solves the guillotine variant of the problem in the first phase in two steps: a fast
heuristic step based on the earlier two-stage algorithm proposed by Gilmore and
Gomory (1965) and an exact dynamic programming step proposed by Russo et al.
(2013). The latter method introduces a solution-correcting procedure and improves
one of the two dynamic programming procedures of Gilmore and Gomory (1966).
Furthermore, in their algorithm, they employed the reduction of the discretization
points method proposed by Cintra et al. (2008) and pre-processing method proposed
by Birgin et al. (2012). This algorithm is one of the most effective exact dynamic
programming algorithms proposed for solving the U2DCPs. The objective of this
research is to maximize the profit of an enterprise dealing with the cutting problem by
minimizing the amount of wastage and surpluses generated during the production.
A two-phase algorithm is proposed to serve the mentioned objective, which deter-
mines the proper dimension of the raw material required for the production such that
all products of the company can be produced with minimum wastage. Moreover,
through determining the best combination and quantity of raw materials, the number
of surpluses and procurement cost are reduced. In the next section, the characteristics
of the problem are introduced.
The aim of this study is to offer a solution to the cutting problem of the box
production industry. To deal with this problem, a two-phase approach is proposed.
In these industries, the products are carton boxes of various sizes according to the
customer’s demands. These carton boxes must meet accurate specifications
regarding their material types and dimensions in accordance with the customer’s
requested specifications. The carton boxes are produced from raw sheets of carton
provided by the company’s suppliers in various predefined sizes. The suppliers can
supply the raw sheets in specific standardized sizes. More details about the problem
are given as follows:
• In each planning horizon, the customer orders a specific number of boxes;
• Several sizes of the raw materials are available at each supplier known to the
company;
• The number of deliverable products is easily determined by the company if and
only if a specific raw material is assigned to produce a specific product;
• There exists more than one suitable candidate raw material for producing one or
more products;
• The raw material procured by the companies is distinguished and separated based on its dimensions and the combination of the materials used to build it;
• Each specific size of the raw material used in production generates a certain
amount of waste. This wastage is dependent on the production strategy
employed for assigning the products to the raw material;
• Each company may have its own individual policies for selecting the measures
of the purchased raw materials.
Like any other industry, the profitability of the business is its most important
concern. Therefore, nearly all companies in this industry are interested in achieving
the following objectives:
• Reducing the wastage cost through minimizing the production-related wastage
of materials;
• Reducing the size of the cutting problem through minimizing the variety of the
selected raw material such that all products are producible.
A high variety of raw materials complicates production planning. In this industry, due to the need to minimize waste, accurate determination of the dimensions of the raw materials used for producing the products is crucial. On the other hand, all companies usually have a huge variety of products. While utilizing a dedicated raw material with the correct dimensions for producing a product will theoretically lead to the minimum possible waste, in practice this one-to-one approach is almost impossible for the following reasons: firstly, the supply of raw material is restricted to limited specified dimensions, and secondly, dedicating a raw material to each product corresponds to a massive variety of raw materials of different quantities, which is not possible due to inventory-related restrictions. Hence, to have a standard manufacturing system with the minimum amount of incompatibility, the company needs to reduce the size of its problem by limiting the variety of its in-hand raw material in such a way that its production capabilities are not reduced. Additionally, limiting the variety of raw material is useful when suppliers offer quantity discounts, where a larger purchasing discount is available if a larger quantity of a single type is purchased.
• Minimizing the in-hand inventory and production surplus
Essentially, two types of inventories are available at the companies: the finished
products and raw materials. Since the ordering style of the customers is highly
changeable, the extra inventory of the finished products (surplus) is quite likely to
remain unused for a long period of time. Apart from that, due to the vulnerability of
the inventory to shrinkage, fire, and similar hazards, companies are always at risk of
inventory loss. On the other hand, taking the required measures to encounter these
risks is extremely costly. Therefore, companies prefer to reduce their risks by
keeping their inventories at the lowest possible level.
Determining the appropriate dimensions for the raw materials, purchasing the correct
quantity of raw materials, and assigning them properly for the production of products
are the most important elements for fulfilling the main objectives of the companies.
Indeed, the mentioned requirements are the decision variables of a subcategory of
2DCPs addresses as bin packing problem or strip packing problem in the literature.
The proposed algorithm of this study is designed to deal with this problem. The
method is extendable to any other box production company as well as similar
industries with minor tailoring. In this research, to evaluate the efficiency of the
proposed method, it is implemented in a specific box production company as a case
study. In the next section, the specification of the case study is discussed.
The case discussed in this research produces over 200 different types of products
including carton boxes and divider planes. The main differences among the prod-
ucts are associated with their dimensions and combinations of materials. The
technical details are described below.
Sheet Types
The main raw sheet types utilized in the company are three- and five-layer sheets.
These sheets are produced by suppliers by combining several layers of carton
papers and one or more (depending on the number of plane layers) corrugated
media between the papers, which is called the Flute Layer (FL). There are two
major types of carton papers: Craft, denoted by (C), which is paper freshly pro-
duced from wood (virgin paper), and Liner (Li), which is recycled paper. While the
papers in the outer layer of a carton sheet can be made of any material, the material
type of the corrugated medium and the paper in the middle layers of a carton sheet
are usually liner paper. The different combinations of paper types and medium
layers provide a total number of six different carton sheets for use in the company.
• Five layers and double Craft (C2-5)
• Five layers and single Craft (C1-5)
• Five layers and liner (Li-5)
• Three layers and double Craft (C2-3)
• Three layers and single Craft (C1-3)
• Three layers and liner (Li-3)
In the next figure, the combination pattern of the carton sheets is illustrated. The
outer layers of the carton sheet could be both liner, both craft, or one liner and one
craft (Fig. 1).
The strength of a carton box is dependent on two factors: the combination of the
papers and the direction of the FL. A carton box acquires the minimum necessary
strength if and only if the direction of the FL is vertical with respect to the weight
that the carton must carry. Consequently, rotation of the carton sheets is not allowed
during the production process. On the other hand, according to a general rule, a
carton sheet with more crat layers in its structure has higher strength. However, the
use of more craft layers is associated with a higher production cost and therefore
more expensive product.
It is the customer who decides on the material combination of the carton sheets of
the products; however, the company normally provides an advisory service for the
customers to facilitate their decision-making process.
As previously mentioned, the products of the company are boxes and divider
planes. The measures of a box are normally represented by its length, width, and
height (a * b * c). Since the planes are two-dimensional, their measure is simply
represented by length * width (a * b).
$$W=b+c\qquad(2)$$
Item

Each product to be produced (a carton box or a divider plane) is called an item.

Object
The raw materials for the company are produced by its suppliers from different
material combinations with different measures. Each variant of these raw materials
is called an object, which is considered as a separate raw material.
Pattern
The first step in producing the items is to divide the objects into smaller pieces
according to the items’ spread dimensions. There are various strategies for dividing
an object into smaller parts, each of which is called a pattern.
As previously discussed, the aim of this study is to determine the proper dimensions of the raw materials. One of the constraints associated with this problem is the suppliers' restrictions in delivering the requested measures. Due to technical issues, suppliers are unable to cut the raw sheets into arbitrary measures; the available lengths of a sheet from a supplier vary between 45 and 200 cm in 5 cm increments (i.e., 45, 50, 55, …, 200). Moreover, the stocks can only be cut into the following predefined widths: 90, 100, 110, 120, 140, 150, 160, and 200. The next table represents the possible lengths and widths as the dimensions of an object (Table 1).
Clusters of Products
Before proceeding to the solution approach, the production data of the problem
must be marshalled and categorized. In this regard, initially the data related to the
item types are collected and the products with an identical material combination are
placed in the same category. Based on this classification, six different clusters of
products are defined: C1-5, C2-5, C1-3, C2-3, Li-5, and Li-3. It is notable that to
produce the items in each cluster, the material combination of the objects must be
identical to the material combination of the cluster. However, several sizes of
objects can be used. Selecting the proper object(s) for each cluster is one of the
objectives of this problem.
$$\text{Objective function 1:}\quad \min\ \sum_{j=1}^{n} z_j\qquad(3)$$

$$\text{Objective function 2:}\quad \min\ \sum_{j=1}^{n}\sum_{p=1}^{P} c_{jp}\,x_{pj}\qquad(4)$$

subject to:

$$\sum_{j=1}^{n}\sum_{p=1}^{P} g_{ipj}\,x_{pj}\ \ge\ d_i\quad\forall i\qquad(5)$$
The description of the model is as follows: objective function (3) minimizes the variety of objects (i.e., the variety of raw materials) used in the production procedure. Objective function (4) minimizes the procurement cost of the objects by optimizing the usage frequency of each object–pattern combination. At the same time, objective function (4) minimizes the surplus amount by keeping the purchased material at the required level. Constraint (5) guarantees that the production quantity of each item satisfies its demand. Constraint (6) denotes that there is no limitation on providing the required number of objects. Finally, constraints (7) and (8) define the nature of the variables.
Step 1. The matrix of remaining lengths is formed based on all available lengths of the objects. This matrix represents the remaining length of an object (regardless of its width) when it is used to deliver an integer multiple of the length of a certain item. To perform this calculation, the spread lengths of the products and the available object lengths are determined. A feasible object length must satisfy the following two conditions:
• It must be larger than the spread length of the product;
• The material remaining after extracting an integer multiple of the product length must be less than 5 cm.
The matrix of the remaining length in the first step is represented in Table 4.
Step 2. The matrix of the remaining lengths is handled to create the “assignability
matrix”. The assignability matrix is a 0–1 matrix indicating whether an item is
assignable to an object. If the length of an object is suitable for extracting the length of
an item, the item is considered assignable to that object and therefore the digit in the
relevant intersection of the rows and columns is “1”; otherwise, it is zero (Table 5).
Step 3. The length of an item might be assignable to several objects. In this step, the
total number of objects that can produce an item is calculated and represented in
Table 6. Additionally, in the last row of Table 6, denoted as the object’s produc-
tivity, the total number of items producible by the relevant object is represented.
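Steps 1–3 can be expressed compactly in matrix form. The following Python sketch builds the remaining-length matrix, the 0–1 assignability matrix, and the objects' productivities, assuming the two feasibility conditions stated in Step 1; the item lengths are illustrative.

```python
import numpy as np

# available object lengths (cm): 45, 50, ..., 200, per the supplier restrictions
object_lengths = np.arange(45, 205, 5)
item_lengths = np.array([38, 52, 61, 74])  # illustrative spread lengths of items

# Step 1: remaining length after cutting the largest integer multiple of the
# item length out of the object length (infinite when the object is too short)
remaining = np.where(
    object_lengths[None, :] >= item_lengths[:, None],
    object_lengths[None, :] % item_lengths[:, None],
    np.inf,
)

# Step 2: 0-1 assignability matrix -- an item is assignable to an object length
# when the leftover material is under 5 cm
assignable = (remaining < 5).astype(int)

# Step 3: productivity of each object length = number of item types it can yield
productivity = assignable.sum(axis=0)
print(assignable)
print("object productivity:", productivity)
```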
Step 4. The rows and columns of the assignability matrix are sorted based on
decreasing order of assignable lengths and productivity of each object (the object
length with more applications is shown in the first column; the object with more
assignability is shown in the first row); see Table 7.
Step 5. A set-covering model is solved to select the minimum number of object lengths such that every item is assignable to at least one selected length:

$$\min\ \sum_{j} z_j\qquad(9)$$

s.t.

$$\sum_{j} a_{ij}\,z_j\ \ge\ 1\quad\forall i\qquad(10)$$

$$z_j\in\{0,1\}\quad\forall j\qquad(11)$$
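This selection is a classical set-covering problem. A minimal sketch using the open-source PuLP library (the paper does not name a specific solver, so this choice is an assumption) is given below; the toy assignability matrix is illustrative.

```python
import pulp

def select_lengths(assignable):
    """Set-covering model of Step 5: choose the fewest object lengths (z_j = 1)
    so that every item is assignable to at least one chosen length."""
    n_items, n_objects = len(assignable), len(assignable[0])
    prob = pulp.LpProblem("object_length_selection", pulp.LpMinimize)
    z = [pulp.LpVariable(f"z_{j}", cat="Binary") for j in range(n_objects)]
    prob += pulp.lpSum(z)  # objective (9): minimize the variety of objects
    for i in range(n_items):  # constraint (10): cover every item
        prob += pulp.lpSum(assignable[i][j] * z[j] for j in range(n_objects)) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(n_objects) if z[j].value() == 1]

# toy assignability matrix a_ij (rows: items, columns: candidate lengths)
a = [[1, 0, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 1]]
print("selected object lengths:", select_lengths(a))
```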
Step 6. This step is the last step of phase 1. The items of the cluster which are
assigned to a certain object are classified according to the next table (see
Table 10).
As discussed above, to specify an object for ordering, both its length and its width must be determined. The appropriate lengths of the objects in each cluster are determined in the first phase of the proposed procedure. In the second phase, various combinations of the determined lengths and the available widths are examined, and the most profitable combination is employed for producing a specific product in a cluster. For the second phase, the main specialized cutting software currently in widespread use in this industry is employed. Using the software, the proper objects and the quantities required to satisfy the demand for the items in each group of products are determined. For this purpose, all combinations of the assigned lengths and the available widths are elaborated. Considering the demand for the items, the optimal object selection strategy and the order quantities are determined. This information indicates which combination of length and width should be chosen for an object and what quantity of each object must be purchased. The results are shown in Table 11. The last column of the table represents the utilization of the proposed method based on the waste of the raw material.
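The core idea of the second phase can be sketched as a brute-force check: for each item, every combination of a selected length with an available width is evaluated and the object with the lowest waste ratio is kept. The simple grid-layout assumption and all numbers below are illustrative simplifications of what the specialized cutting software does.

```python
import itertools

AVAILABLE_WIDTHS = [90, 100, 110, 120, 140, 150, 160, 200]  # supplier widths (cm)

def best_object(selected_lengths, item_len, item_wid):
    """Pick the (length, width) object that wastes the least material,
    assuming identical items are laid out on a simple grid on the sheet."""
    best, best_waste = None, float("inf")
    for L, W in itertools.product(selected_lengths, AVAILABLE_WIDTHS):
        pieces = (L // item_len) * (W // item_wid)  # items per sheet
        if pieces == 0:
            continue
        waste = (L * W - pieces * item_len * item_wid) / (L * W)  # waste ratio
        if waste < best_waste:
            best, best_waste = (L, W), waste
    return best, best_waste

obj, waste = best_object([120, 160, 200], item_len=38, item_wid=45)
print(f"best object {obj}, waste ratio {waste:.1%}")
```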
Conclusion

In this study, a category of cutting problem that frequently arises in box-production companies and that is NP-hard was investigated and formulated, and a two-phase method for approaching it was introduced. The main principles of the approach were minimizing the production waste and production costs and maximizing the efficiency of material selection for production. The method is easy to implement, returns very good solutions, and is applicable to a wide range of similar problems in this industry with minor adaptation. Considering the environment of the problem, in this investigation the suppliers' competition was limited to their ability to provide the raw material, the uncertainty of demand was neglected, and a specific restriction was applied in determining the proper material selection. The focus of the method was on selecting the material that would generate an acceptable amount of waste. This component may vary under different circumstances. In future developments of this study, price-competitive suppliers, demand uncertainty, and different methods of material selection (such as maximizing the usable leftovers) seem to be very interesting assumptions to take into consideration.
References
Beasley JE (1985a) Algorithms for unconstrained two-dimensional guillotine cutting. J Oper Res
Soc 36(4):297–306. https://doi.org/10.2307/2582416
Beasley JE (1985b) An exact two-dimensional non-guillotine cutting tree search procedure. Oper
Res 33(1):49–64. https://doi.org/10.1287/opre.33.1.49
Birgin EG, Lobato RD, Morabito R (2012) Generating unconstrained two-dimensional
non-guillotine cutting patterns by a recursive partitioning algorithm. J Oper Res Soc 63
(2):183–200. https://doi.org/10.1057/jors.2011.6
Christofides N, Hadjiconstantinou E (1995) An exact algorithm for orthogonal 2-D cutting
problems using guillotine cuts. Eur J Oper Res 83(1):21–38. https://doi.org/10.1016/0377-2217
(93)E0277-5
Cintra GF, Miyazawa FK, Wakabayashi Y, Xavier EC (2008) Algorithms for two-dimensional
cutting stock and strip packing problems using dynamic programming and column generation.
Eur J Oper Res 191(1):61–85. https://doi.org/10.1016/j.ejor.2007.08.007
Gilmore PC, Gomory RE (1965) Multistage cutting stock problems of two and more dimensions.
Oper Res 13(1):94–120. https://doi.org/10.1287/opre.13.1.94
Gilmore PC, Gomory RE (1966) The theory and computation of knapsack functions. Oper Res 14
(6):1045–1074. https://doi.org/10.2307/168433
Kang M, Yoon K (2011) An improved best-first branch-and-bound algorithm for unconstrained
two-dimensional cutting problems. Int J Prod Res 49(15):4437–4455. https://doi.org/10.1080/
00207543.2010.493535
Lodi A, Martello S, Monaci M (2002) Two-dimensional packing problems: a survey. Eur J Oper
Res 141(2):241–252. https://doi.org/10.1016/S0377-2217(02)00123-6
Russo M, Sforza A, Sterle C (2013) An improvement of the knapsack function based algorithm of
Gilmore and Gomory for the unconstrained two-dimensional guillotine cutting problem. Int J
Prod Econ 145(2):451–462. https://doi.org/10.1016/j.ijpe.2013.04.031
Russo M, Sforza A, Sterle C (2014) An exact dynamic programming algorithm for large-scale
unconstrained two-dimensional guillotine cutting problems. Comput Oper Res 50:97–114.
https://doi.org/10.1016/j.cor.2014.04.001
Scheithauer G, Terno J (1996) The G4-heuristic for the pallet loading problem. J Oper Res Soc 47(4):511–522. https://doi.org/10.1057/jors.1996.57
Physical Discomfort Experienced
in Traditional Education
and Tablet-Assisted Education:
A Comparative Literature Analysis
Keywords Students · Physical challenges · Traditional education · Tablet-assisted education
Introduction
Tablet personal computers (PCs) have become one of the most frequently used portable computing devices. In line with the increasing usage of tablet PCs, as reported by Melanson (2011), the utilization of tablet PCs in education is also increasing, and many countries have launched initiatives for the use of tablets by children and adolescents in education (Tamim et al. 2015). Beyond widespread Internet access and characteristics such as being lightweight compared with other IT alternatives like notebooks, laptops, or desktops, the increase in educational usage is also driven by the considerable variety of available educational applications, which makes tablet-assisted systems attractive to teachers (Henderson and Yeow 2012). Furthermore, Hashemi et al. (2011) underlined the effectiveness of mobile learning for pedagogy and supported the idea of using mobile devices such as tablets as educational tools.
On the other hand, Roth-Isigkeit et al. (2005) stated that 83% of a sample of 749 school-aged children and adolescents had experienced pain in the preceding three months, and 64% of these students reported musculoskeletal pain. Although there is no definite statistical evidence regarding the factors checked in the study, carrying school bags as a load on the back, engagement in sedentary activities like IT usage, and postural habits can be listed among the possible factors in musculoskeletal pain. Clinch and Eccleston (2009) underline the fact that children experiencing musculoskeletal pain today will become adults who could experience more serious problems in the future, which may become a burden on their countries' health systems.
This study compares musculoskeletal, postural, and ocular problems experienced
by children and adolescents in traditional education settings or traditional education
activities and tablet-assisted or tablet-integrated educational settings and/or activi-
ties. The study will follow a comparative approach of a systematic literature review,
with the main aim being to observe the differences between traditional and
tablet-assisted education in terms of the physical problems experienced.
Literature Review
It may be beneficial to mention at the beginning of this section that Greig et al.
(2005), Sommerich et al. (2007), and Straker et al. (2008) claimed that there is an
association between tablet usage and musculoskeletal discomfort.
In their study comparing tablet-, desktop-, and paper-based IT through a set of
coloring tasks, Straker et al. (2008) argued that children’s use of mobile technology—
such as tablet computers—has been associated with experiences of musculoskeletal
discomfort. Although the study did not take the educational environment into con-
sideration, it was performed with an educational activity, namely a coloring task.
Utilizing an infrared motion analysis system for posture assessment and surface
electromyography for assessment of muscle activity in the neck region, the study
provides us with the information that tablet computer use by children is associated
with increased muscle activity in the upper trapezius and cervical erector spinae when
compared to traditional paper-based IT and traditional desktop computers.
Furthermore, the muscle activities during paper-based IT and tablet computer use were reported to be not significantly different. In particular, the spinal postures of participants
were not statistically different when paper-based IT and tablet computer activities
were compared. Spinal posture flexion during paper-based IT and tablet computer
activities was higher when compared to the desktop computers (the higher display
option in the study). This result also supports the results of Briggs et al. (2004)
regarding display height and neck flexion.
Assuming that tablet computers are placed on a flat desk, similarly to the tra-
ditional paper–book–pen combination, a less neutral posture and increased neck
activity can be expected. Therefore, it can be clearly stated that further studies on
tablet-assisted education are expected to fill in this gap scientifically.
Although the subjects of elementary school age who participated in a study by
Zovkic et al. (2011) were not examined for the effects of tablet usage and were not
exposed to any tablet usage for educational activities at school, the researchers
underlined the suggestions of Straker et al. (2008) based on the findings that tablet
use was very similar to paper-based IT use. The study recommends that both tablet
computers and paper-based IT should be used for moderate periods of time.
The use of computer screen devices may result in a syndrome named computer vision syndrome, which includes problems like headache, eyestrain, and neck/back pain (Yan et al. 2008). This becomes a more critical issue for schoolchildren, who are at particular risk of experiencing this syndrome because their musculoskeletal and vision development is not yet complete. Therefore, the results of Sommerich et al. (2007) deserve greater emphasis.
deserve greater emphasis. They studied tablet usage by high school students and their
data collection involved filling in a questionnaire, in addition to the use of monitoring
software to record the duration of tablet usage by subjects and their preferences.
Although the study took place in a high school, it was not clearly stated whether the
tracked tablet computer usage involved an educational purpose or not. The research
pointed out that the most frequently observed types of discomfort experienced by
subjects were eye and neck discomfort. Questionnaire results also reported that eyes,
neck, head, right hand/wrist, and upper and lower back are the body parts in which
the subjects experienced discomfort associated with using tablet computers.
The results of Kim et al. (2014) imply that prolonged use of touchscreen key-
boards potentially increases the risk of experiencing musculoskeletal discomfort.
The researchers suggest that the reason behind this potential risk is that touchscreen
keyboards are easily activated, users cannot rest their fingers and wrists on the
keyboard, and therefore some muscle groups are forced to stay motionless and
experience an increased static load. The muscle groups that are most affected (as a
result of this static loading) are in the wrist and shoulder regions.
Methodology
This study systematically reviewed the physical problems experienced by the young
population to identify the similarities and differences in the problems experienced in
traditional and tablet-assisted educational settings and activities with scientific
evidence.
This comparative review has clearly put forward that, if tablets are not used with a stand with an appropriate angle adjustment, the musculoskeletal and posture-related problems experienced in traditional educational settings are likely to be experienced in tablet-assisted education as well. On the other hand, problems in the wrist/hand and shoulder regions and symptoms of computer vision syndrome can be attributed specifically to tablet usage. Whether associated with traditional or tablet-assisted education, any physical problem of children and adolescents should be taken into account because their physical development is not yet complete.
Syazwan et al. (2011) used the Standardized Nordic Questionnaire and RULA
for data collection and posture assessment of schoolchildren. Data collection was
followed by an intervention that showed that the body posture of school children in
their classroom settings can be improved and thus musculoskeletal discomfort and
pain experienced may be reduced via some exercises and awareness of future risks
regarding bad body posture.
The results of Straker et al. (2009) show that, in the period of adaptation to new IT, children need to be encouraged to avoid monotonous postures and activities, while the results of Fanucchi et al. (2009) indicate that an eight-week exercise program reduced the intensity of lower back pain in 12–13-year-old schoolchildren. It is obvious that proper, simple physical exercises designed by specialists (such as physical therapy and rehabilitation specialists or physiotherapists) should be added to school programs to reduce or eliminate the physical discomfort or pain experienced by children and adolescents.
References
Anshel J (1997) Computer vision syndrome: causes and cures. Managing Office Technol 42
(7):17–19
Briggs A, Straker LM, Greig A (2004) Upper quadrant postural changes of school children in
response to interaction with different information technologies. Ergonomics 47(7):790–819
Clinch J, Eccleston C (2009) Chronic musculoskeletal pain in children: assessment and management. Rheumatology 48(5):466–474
Fanucchi GL, Stewart A, Jordaan R, Becker P (2009) Exercise reduces the intensity and
prevalence of low back pain in 12–13 year old children: a randomised trial. Aust J Physiother
55(2):97–104
Greig AM, Straker LM, Briggs AM (2005) Cervical erector spinae and upper trapezius muscle
activity in children using different information technologies. Physiotherapy 91(2):119–126
Hashemi M, Azizinezhad M, Najafi V, Nesari AJ (2011) What is mobile learning? Challenges
and capabilities. Procedia Soc Behav Sci 30:2477–2481
Hedge A (2005) Kids and computers. In: Anshel J (ed) Visual ergonomics handbook. Taylor and Francis, pp 137–155, e-book ISBN-13: 978-1-56670-682-7
Henderson S, Yeow J (2012) iPad in education: a case study of iPad adoption and use in a primary
school. In: 2012 45th Hawaii international conference on system sciences (hicss), pp 78–87.
IEEE
Ismail SA, Tamrin SB, Hashim Z (2009) The association between ergonomic risk factors, RULA
score, and musculoskeletal pain among school children: a preliminary result. Glob J Health Sci
1(2):73–84
Kim JH, Aulck L, Thamsuwan O, Bartha MC, Johnson PW (2014) The effect of key size of touch
screen virtual keyboards on productivity, usability, and typing biomechanics. Hum Factors 56
(7):1235–1248
Limon S, Valinsky LJ, Ben-Shalom Y (2004) Children at risk: risk factors for low back pain in the
elementary school environment. Spine 29(6):697–702
Melanson D (2011, Mar 10) IDC: 18 million tablets, 12 million e-readers shipped in 2010. Retrieved from: http://www.engadget.com/2011/03/10/idc-18-million-tablets-12-million-e-readers-shipped-in-2010/
Mohd Azuan K, Zailina H, Shamsul BMT, Nurul Asyiqin MA, Mohd Azhar MN, Syazwan Aizat I
(2010) Neck, upper back and lower back pain and associated risk factors among primary school
children. J Appl Sci 10:431–435
Roth-Isigkeit A, Thyen U, Stöven H, Schwarzenberger J, Schmucker P (2005) Pain among
children and adolescents: restrictions in daily living and triggering factors. Pediatrics 115(2):
e152–e162
Sommerich CM, Ward R, Sikdar K, Payne J, Herman L (2007) A survey of high school students
with ubiquitous access to tablet PCs. Ergonomics 50(5):706–727
Straker L, Coleman J, Skoss R, Maslen BA, Burgess-Limerick R, Pollock CM (2008) A
comparison of posture and muscle activity during tablet computer, desktop computer and paper
use by young children. Ergonomics 51(4):540–555
Straker LM, Maslen B, Burgess-Limerick R, Pollock C (2009) Children have less variable postures
and muscle activities when using new electronic information technology. J Electromyogr
Kinesiol 19(2):132–143
Syazwan A, Azhar MM, Anita A, Azizan H, Shaharuddin M, Hanafiah JM, Muhaimin AA,
Nizar AM, Mohd Rafee B, Mohd Ibthisham A, Kasani A (2011) Poor siting posture and heavy
schoolbag as contributors to musculoskeletal pain in children: an ergonomic school education
intervention program. J Pain Res 4(4):287–296
Tamim RM, Borokhovski E, Pickup D, Bernard RM (2015) Large-scale, government-supported
educational tablet initiatives. In: Commonwealth of learning. ISBN 978-1-894975-69-8
Yan Z, Hu L, Chen H, Lu F (2008) Computer vision syndrome: a widely spreading but largely
unknown epidemic among computer users. Comput Hum Behav 24(5):2026–2042
Zovkic M, Vrbanec T, Dobša J (2011) Computer ergonomic of elementary school students. In:
Proceedings of 22nd Central European conference on information and intelligent systems,
Varazdin, Croatia, pp 37–45
Zunjic A, Papic G, Bojovic B, Matija L, Slavkovic G, Lukic P (2015) The role of ergonomics in
the improvement of quality of education. FME Trans 43(1):82–87
Future Research and Suggestions Based
on Maritime Inventory Routing Problem
Abstract The problem of the distribution of containers by ships and keeping the
inventory levels within specified bounds at ports in a marine system is called the
maritime inventory routing problem. The problem has been studied by both aca-
demics and practitioners. However, there are insufficient studies concerning envi-
ronmental issues and ship compartment capacities. In this study, an extensive
literature survey focusing on these issues is summarized and analyzed and the gaps
in the field are identified. Besides, future trends, opportunities, and suggestions are
presented.
Introduction
The maritime inventory routing problem (MIRP) plays an important role in general
trade. Routing of ships and maintenance of inventory levels at ports should be
managed carefully to meet demand. Inventory-routing problems consider both
inventory and vehicle-routing decisions simultaneously in a single model. These problems have many extensions and variants, including variable production and consumption rates, multiple products, and the use of spot charters (Christiansen et al. 2011).
MIRP has been widely studied in the literature. However, in previous studies of MIRP, sustainability and compartment capacities were not considered in detail, even though maritime transportation causes significant fuel consumption (De et al. 2017).
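To make the structure of such a model concrete, the following is a minimal sketch in Python using the open-source PuLP modeling library. It covers a single product, one ship, and two ports; all rates, capacities, travel times, and costs are invented for illustration, and a full MIRP would add multiple ports, heterogeneous fleets, and compartment constraints.

# Toy single-product maritime inventory model (illustrative sketch only).
# One ship shuttles between a producing port P and a consuming port C;
# voyages are spaced by the round-trip time.  Requires: pip install pulp
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

T = 14                        # planning horizon in days (assumed)
PROD, CONS = 80.0, 70.0       # daily production / consumption rates (tons)
CAP_P, CAP_C = 600.0, 500.0   # inventory bounds at the two ports
I0_P, I0_C = 200.0, 300.0     # starting inventories
SHIP_CAP = 300.0              # ship capacity (compartment bounds would refine this)
ONE_WAY, ROUND = 1, 2         # travel times in days
VOYAGE_COST = 1000.0          # fixed cost per voyage

m = LpProblem("toy_MIRP", LpMinimize)
y = LpVariable.dicts("sail", range(T), cat=LpBinary)   # start a voyage on day t?
q = LpVariable.dicts("load", range(T), lowBound=0)     # tons shipped on day t
iP = LpVariable.dicts("invP", range(T), lowBound=0, upBound=CAP_P)
iC = LpVariable.dicts("invC", range(T), lowBound=0, upBound=CAP_C)

m += lpSum(VOYAGE_COST * y[t] for t in range(T))       # minimize voyage costs
for t in range(T):
    m += q[t] <= SHIP_CAP * y[t]                       # load only when sailing
    m += lpSum(y[s] for s in range(t, min(t + ROUND, T))) <= 1  # single ship
    arrive = q[t - ONE_WAY] if t >= ONE_WAY else 0.0   # cargo arriving today
    m += iP[t] == (iP[t - 1] if t else I0_P) + PROD - q[t]
    m += iC[t] == (iC[t - 1] if t else I0_C) - CONS + arrive

m.solve(PULP_CBC_CMD(msg=0))
print("voyages (day, tons):", [(t, q[t].value()) for t in range(T) if y[t].value() > 0.5])

Keeping both the inventory balances and the routing (here reduced to voyage timing) in one model is what distinguishes inventory routing from solving the two decisions separately.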
Related Works
MIRP with multiple compartments differs from the classical MIRP in that hetero-
geneous goods are delivered in multiple compartments in the same ship. Fagerholt
and Christiansen (2000) discuss a problem with multiple compartments. The scope
of the problem is a pick-up and delivery problem. A set partitioning approach is
proposed for this problem. Multiple products are distributed by a fleet of ships with flexible cargo holds. Al-Khayyal and Hwang (2007) consider the routing
of heterogeneous ships. They consider a pick-up delivery for the product-dedicated
compartments in a ship. Li et al. (2010) address a problem including multi-parcel
ships. The ships include dedicated compartments for multiple chemicals. They
propose a novel mixed integer linear programming approach. Siswanto et al. (2011)
discuss an inventory routing problem with undedicated compartments to minimize
the total cost. Firstly, they develop a mixed integer linear model. Then they develop
a greedy heuristic. Agra et al. (2014) discuss an inventory routing problem. They
consider the distribution of fuel oil products using heterogeneous ships with ded-
icated tanks. They determine the distribution of the products dedicated to the tank
compartments using an arc load flow formulation and use different hybrid methods
to obtain good results.
Research on sustainable inventory routing problems deals with the optimization of energy consumption in transportation. Meng and Wang (2011) propose
a mixed-integer non-linear model with sustainability constraints. They evaluate the
service frequency and vessel speed in their model. Their study shows that a slow-steaming strategy is worth considering in high fuel-price scenarios or when operating large ships. Norstad et al. (2011) consider slow speed, time windows, and ship capacity
in a vessel-routing problem. Their study shows that slow speed minimizes the fuel
consumption costs. Ronen (2011) deals with slow steaming, total operating cost, and the number of ships, and shows that this strategy increases chartering costs. De
et al. (2017) study an MIRP with sustainability aspects. They integrate the slow
steaming policy with ship routing. This policy is adopted to estimate the amount of
fuel consumed. They present a problem including scheduling and routing constraints and develop a meta-heuristic approach based on particle swarm optimization. Some other studies dealing with sustainability and compartment
capacities are given in Table 1.
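As a rough illustration of the slow-steaming effect discussed above, daily fuel burn is commonly approximated as proportional to the cube of sailing speed. The short sketch below compares total voyage cost at different speeds; the fuel price, charter rate, distance, and cubic coefficient are invented, not values from the cited studies.

# Illustrative slow-steaming trade-off: fuel per day ~ k * speed^3.
DIST_NM = 8000.0        # voyage distance in nautical miles (assumed)
FUEL_PRICE = 450.0      # USD per ton of fuel (assumed)
CHARTER = 20000.0       # USD per day of ship time (assumed)
K = 0.007               # tons of fuel per day per knot^3 (assumed calibration)

def voyage_cost(speed_knots):
    days = DIST_NM / (speed_knots * 24.0)              # voyage duration
    fuel_per_day = K * speed_knots ** 3                # cubic consumption law
    return days * (fuel_per_day * FUEL_PRICE + CHARTER)

for v in range(14, 25, 2):
    print(f"{v:2d} knots: {voyage_cost(v):11,.0f} USD")

Lower speeds cut the fuel bill sharply but lengthen each voyage, which is why, as Ronen (2011) notes, slow steaming raises chartering costs: more ship-days (or more ships) are needed for the same service frequency.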
MIRP has been widely studied. However, we believe that the problem structure can
be updated by adding new trends and opportunities.
Results
This paper has presented a review of MIRP studies that combine sustainability and compartment capacities. The inventory routing problem has been studied many times in previous maritime studies; however, the number of studies that take into account sustainability and compartment capacities is limited. The literature reviewed in this paper is classified by problem type and method in Table 1. The existing papers are presented and suggestions are discussed in Table 2. Future trends are suggested, considering both the maritime sector and the literature. Although many papers on maritime inventory-routing problems have appeared in the literature, MIRP-integrated approaches covering both sustainability and compartment capacities have not been sufficiently considered. Therefore, this paper highlights this problem. The importance of integrating inventory and routing in this sector is evident, and an environmental focus and richer models with new constraints will become increasingly important.
References
Agra A, Christiansen M, Delgado A, Simonetti L (2014) Hybrid heuristics for a short sea
inventory routing problem. Eur J Oper Res 236:924–935
Al-Khayyal F, Hwang S-J (2007) Inventory constrained maritime routing and scheduling for
multi-commodity liquid bulk, part I: applications and model. Eur J Oper Res 176(1):106–130
Bektaş T, Laporte G (2011) The pollution-routing problem. Transp Res Part B 45:1232–1250
Christiansen M, Fagerholt K, Flatberg T, Haugen Q, Kloster O, Lund EH (2011) Maritime
inventory routing with multiple products: a case study from the cement industry. Eur J Oper
Res 208:86–94
De A, Kumar SK, Gunasekaran A, Tiwari MK (2017) Sustainable maritime inventory routing
problem with time window constraints. Eng Appl Artif Intell 61:77–95
Fagerholt K, Christiansen M (2000) A combined ship scheduling and allocation problem. J Oper
Res Soc 51:834–842
Kramer R, Subramanian A, Vidal T, Cabral LDAF (2015) A matheuristic approach for the
pollution-routing problem. Eur J Oper Res 243:523–539
Lam JSL (2015) Designing a sustainable maritime supply chain: a hybrid QFD-ANP approach.
Transp Res Part E 78:70–81
Li J, Karimi IA, Srinivasan R (2010) Efficient bulk maritime logistics for the supply and delivery
of multiple chemicals. Comput Chem Eng 34:2118–2128
Meng Q, Wang S (2011) Optimal operating strategy for a long-haul liner service route. Eur J Oper
Res 215:105–114
Norstad I, Fagerholt K, Laporte G (2011) Tramp ship routing and scheduling with speed
optimization. Transp Res Part C 19:853–865
Perakis AN, Papadakis NA (1989) Minimal time vessel routing in a time-dependent environment.
Transp Sci 23(4):266–276
Richetta O, Larson RC (1997) Modeling the increased complexity of New York City’s refuse
marine transport system. Transp Sci 31(3):272–293
Ronen D (2011) The effect of oil price on containership speed and fleet size. J Oper Res Soc
62:211–216
Siswanto N, Essam D, Sarker R (2011) Solving the ship inventory routing and scheduling problem
with undedicated compartments. Comput Ind Eng 61(2):289–299
Wang S, Wang T, Meng Q (2011) A note on liner ship fleet deployment. Flex Serv Manuf J
23:422–430
Lean Transformation Integrated
with Industry 4.0 Implementation
Methodology
Introduction
Literature Review
Some research has been performed to emphasize the interaction between lean
manufacturing and Industry 4.0. For example, Sanders et al. (2016) analyzed the
link between Industry 4.0 and lean manufacturing and investigated whether
Industry 4.0 is capable of implementing lean. A methodology has been proposed to
integrate lean manufacturing and Industry 4.0 with respect to the supplier, cus-
tomer, and process as well as human and control factors. The authors also stated
that research and publications in the field of Industry 4.0 hold answers to help
overcome the barriers to implementation of lean manufacturing. Similarly,
Rüttimann and Stöckli (2016) discussed how lean manufacturing has to be regarded
in the context of the Industry 4.0 initiative. Sibatrova and Vishnevskiy (2016)
suggested the integration of lean management and foresight while considering the
conditions of trends in Industry 4.0 and human and time resources. Doh et al.
(2016) not only reviewed the relevant literature from the industrial revolution to the
new Industry 4.0 but also considered the need for the use of automation in lean
production systems and supply chain characterization with the aim of developing a
framework for the integration of information systems and technologies. Blöchl and
Schneider (2016) devised a new simulation game with a learning focus on lean
logistics with Industry 4.0 components to teach the adequate application of Industry
4.0 technology in production logistics. Veza et al. (2016) carried out an analysis of
global and local enterprises based on a literature review and questionnaires in order
to develop a Croatian model of the Innovative Smart Enterprise (HR-ISE model). In
that study, a selection of six basic lean tools is made and the foundations of a
generic configuration of the HR-ISE model are defined. Rauch et al. (2016) pre-
sented an axiomatic design-oriented methodology that can be regarded as a set of
guidelines for the design of lean product development processes. Linked with
Industry 4.0, these guidelines show how a lean and smart product development
process can be achieved by the use of advanced and modern technologies and
instruments. Similarly, Synnes and Welo (2016) discussed organizational capabil-
ities and tools required to enable transformation into Industry 4.0 through integrated
product and process design. Biedermann et al. (2016) stated that maintenance needs
to change to meet the requirements of Industry 4.0 and emphasized the necessity of
knowledge and data management for improving predictive maintenance perfor-
mance. Diez et al. (2015) proposed a novel lean shop floor management system,
namely the Hoshin Kanri Tree (HKT). The authors also noted that the standard-
ization of communication patterns by HKT technology should bring significant
benefits in value stream performance, speed of standardization, and learning rates to
the Industry 4.0 generation of organizations.
First, some brief information about the basic concepts of lean philosophy and lean
production systems will be presented. Lean philosophy primarily aims at the
elimination of all activities that consume time and resources but do not add value to
the physical completion of the products (Womack and Jones 2010). These activities
are called waste, or muda in Japanese, and are termed as non-value adding activ-
ities. Here, the value is defined from the end customers’ point of view and is
product specific. Hence, a value-adding activity is one that contributes to the
physical completion of the product and that the customer may want to pay for
(Womack and Jones 2010). According to lean philosophy, the intention is to
eliminate wastes. However, sometimes some of the wastes seem to be inevitable
with the current technologies or manufacturing assets (Womack and Jones 2010).
For instance, while switching from one product to another, a setup time can be
unavoidable. Besides, there are other wastes that can be immediately eliminated by
implementing lean tools and techniques.
According to lean philosophy, there are seven traditional wastes or non-value
adding activities that are common within manufacturing systems. These are
over-production, transportation, motion, waiting, inventory, unnecessary process-
ing, and defective parts/products (Ohno 1988). Later, Womack and Jones proposed
that products or services that do not meet the customer expectations should be
regarded as a kind of waste (Womack and Jones 2010). Overproduction waste
includes producing items for which there is no order or requirement (Liker 2004).
This is the worst kind of waste, since it causes other wastes to occur. Due to
overproduction, a large amount of inventory accumulates, an excess amount of staff
is employed, excess storage space is occupied, and so on. Inventory waste is linked
to overproduction and also includes excess raw material, work-in process, and
finished goods inventory holding. Besides, excess inventory hides problems within
the production system such as frequent machine breakdowns, long setup times, and
defective parts.
There are several lean tools and techniques that can be utilized for waste
elimination. The lean tools and techniques and the wastes that they help eliminate
are shown in Table 2.
On the other hand, there exist various advanced Industry 4.0 technologies and
cyber-physical systems that can be employed for waste elimination in advanced
manufacturing systems. The most fundamental technologies and the associated
waste types that these technologies help reduce are depicted in Table 3.
The methodology for implementing these technologies integrated with lean tools
is discussed. Figure 1 illustrates this projected relationship between the lean tools
and techniques and the advanced technologies. The figure is like a ladder, implying
that the lean tools and techniques should be implemented in a sequential manner.
First, the layout of the manufacturing system should be converted into a cellular
manufacturing system that aims to produce product families through the use of
autonomous and dedicated cells that are equipped with all required resources
(Durmusoglu and Satoglu 2011).
[Table 2 The seven wastes versus lean tools/techniques — rows (wastes): overproduction, transportation, motion, waiting, inventory, unnecessary processing, defectives; columns (tools/techniques): cellular manufacturing, setup reduction, quality control, TPM, production smoothing, kanban, WIP reduction, supplier development, jidoka, CIM; checkmarks in the original mark which tools help eliminate each waste]
Besides, adaptive robotics can be employed for
enhanced material handling and parts loading–unloading. For setup reduction
purposes, sensors that detect the components of the machines such as dies, blades,
and so on can speed up the internal setup operations and protect the operators from
accidents. Besides, adaptive robotics can also be implemented for setup reduction.
Quality control and foolproof mechanisms (poka yoke) are other important
aspects of lean production systems. To prevent the production of defective parts and
products, pattern recognition augmented reality technologies and sensor applica-
tions can be utilized.
Total Productive Maintenance (TPM) aims to improve the overall equipment
effectiveness of the machines, which includes reduction of time, speed, and quality
losses (Ahuja and Khamba 2008). Augmented reality can be utilized to guide the
operators in the performance of maintenance activities. Besides, sensors that keep
track of vibration, noise, and heat help operators to detect abnormal conditions
before failure.
Production smoothing is a production scheduling activity that aims to produce
the same quantities of a part or product on a daily or hourly basis, as far as possible.
Data analytics is a suitable tool for analyzing the demand frequency coming from
customers.
Kanban is a lean production tool where pull-production control is performed.
With advanced auto-ID technologies, instead of scanning the barcodes of many kanbans, RFID tags can be detected by readers and quick communication between the stages can be achieved.
M2M communication, IoT, sensors, and data analytics should be used to
reduce Work-in Process (WIP) among machines. Besides, by means of data ana-
lytics, the cycle times and failure characteristics of the machines can be analyzed
and the buffer area capacities among the machines can be adjusted.
For supplier development purposes, better data analytics should be employed for
better analysis of demand data. To achieve better coordination and communication
among the supplier and customer parties, IoT technologies should be employed.
Jidoka means automation with a human touch (Liker and Morgan 2006). In other
words, the manufacturing system employs automation technologies under the
supervision of the workers. So, while implementing jidoka, sensors and IoT can be
employed.
While converting the system into a computer-integrated manufacturing system, M2M
communication, sensors, IoT, 3-D printing, adaptive robotics, and data analytics can
be employed to obtain more benefit from the advanced manufacturing technologies.
Industry 4.0 technologies and automation can be applied to several methods of lean
production. The following section describes examples of possible combinations.
E-Kanban Systems. The digitalization of the kanban system has already been
known for several years. Conventional, physical cards for an order-oriented pro-
duction control are replaced by virtual kanban (Lage and Filho 2010). Depending
on the implementation of this so-called e-kanban system, missing or empty bins are
recognized automatically via sensors.
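A minimal sketch of such a sensor-driven trigger is given below; the bin fields and threshold are hypothetical, and the point is only that the replenishment signal comes from a fill-level reading rather than from scanning a physical card.

# E-kanban trigger sketch: a (hypothetical) fill-level sensor per bin
# replaces card scanning; an empty bin automatically raises an order.
from dataclasses import dataclass

@dataclass
class KanbanBin:
    part_no: str
    reorder_qty: int
    fill_level: float  # 0.0 (empty) .. 1.0 (full), e.g. from a weight sensor

def poll_bins(bins, threshold=0.1):
    """Return replenishment orders for bins at or below the threshold."""
    return [{"part_no": b.part_no, "qty": b.reorder_qty}
            for b in bins if b.fill_level <= threshold]

bins = [KanbanBin("M8-bolt", 500, 0.05), KanbanBin("bracket", 120, 0.62)]
for order in poll_bins(bins):
    print("order ->", order)   # would be forwarded to the ERP or supplier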
Automation in Error-Proofing. Magna T.E.A.M. Systems makes widespread
use of bar-coding technology to eliminate human error and production mistakes.
Operators scan bar codes wrapped around their wrists to ensure that they are
assembling the correct product on high-mix production lines. All operators use bar
codes to log into their workstations so that there is a record of who is building what
part. Electronic work instructions displayed at every workstation address error
proofing and serve as a visual aid to operators (Weber 2016).
Chaku Chaku Lines. In 2012, the University of Southern Denmark, together with
the toy manufacturer Lego A/S, developed approaches for integrating automation
technology in U-shaped assembly stations, also known as chaku chaku lines. In
particular, human–machine interaction was the focus of this project. As a result,
they developed a local order-management system that shifts typical tasks of ERP
systems to employees on chaku chaku lines (Bilberg and Hadar 2012). Moreover,
the ongoing research project “Lean Intelligent Assembly Automation” also
addresses chaku chaku lines (Kolberg and Zühlke 2015).
iBin System. In 2013, Würth Industrie Services GmbH & Co. KG presented the
optical order system iBin as an extension for kanban bins (Fig. 1). A camera in the
module detects the charging level of the bin and iBin wirelessly reports the status to
an inventory control system. Besides, iBin is also able to send orders automatically
to suppliers. As a result, buffer stock can be reduced and spare parts can be scheduled
in an order-oriented way (Würth Industrie Service GmbH & Co. KG 2013).
QR Code Integrated Milk-Run System. Wittenstein AG and BIBA—Bremer
Institut für Produktion und Logistik GmbH, among others, are working on a flexible
material supply system for production lines through the state-funded project
“CyProS”. Instead of using fixed intervals, an IT system calculates round-trip
intervals for the transport system based on real-time demands. In the first prototype,
collection of data during this so-called milk run is done by scanning QR codes.
Interaction with employees of the transport system is realized by conventional tablet
PCs (Kolberg and Zühlke 2015).
Pick-by-Vision. In the DHL application, warehouse workers see the physical
reality of the aisles and racks in front of them just as they could if they were not
wearing head-mounted displays, but this is augmented by a superimposed AR code
in the form of a graphical work instruction, which appears after they scan the
barcode at the storage location with their smart glasses. This code tells the workers
where to go, how many items to pick, and even where to place them in their
trolleys. When the pilot project is complete, DHL will evaluate the operational suitability and economic feasibility of adopting augmented-reality vision picking.
Meanwhile, its trends-research team has already identified other logistics activities
that could be enhanced by a judicious dose of AR technology (Url-1).
Augmented Reality-Based Work Standardization. The project “MOON”
(asseMbly Oriented authOring augmeNted reality) is being developed by Airbus
Military. MOON uses 3D information from the industrial digital mock-up to gen-
erate assembly instructions and their deployment by applying augmented reality
technology. A prototype was developed for the electrical harness routing in Frame
36 of the Airbus A400M (Servan et al. 2012).
Plug’n’Produce Workstations. Industry 4.0 could furthermore support lean
production’s requirement for a flexible, modular production. For several years,
SmartFactoryKL has demonstrated modular workstations based on standardized
physical and IT interfaces, which can be flexibly reconfigured to new production
lines via Plug’n’Produce. According to the Single-Minute-Exchange-of-Die
(SMED) principle, the setup time should be reduced to less than 10 min
(Kolberg et al. 2016).
Automatic Mold-Change System: At K 2016 in Dusseldorf, Staubli of
Germany (U.S. office in Duncan, S.C.) demonstrated complete hands-off mold
changing in less than 2 min, and company spokespersons said the system could
reduce that to 1 min. A mold table on rails carried a preheated mold into position
beside the press. A sensor in the cart read the mold setup parameters from a chip in
the mold. For the mold already in the press, all power and data connections were
disconnected automatically within 3 s (Url-2).
Digitized Heijunka. Besides the flexible material supply system,
Wittenstein AG digitized the Heijunka-Board. Heijunka, also known as levelling,
describes a method for converting customer orders into smaller, recurring batches
(Verein Deutscher Ingenieure e.V. 2013; Kolberg et al. 2016).
Predictive Maintenance. Condition monitoring, data analytics, and early pre-
diction of failures increase the uptime and overall equipment effectiveness (Bal and
Satoglu 2014). For this purpose, predictive maintenance practices in manufacturing
facilities have increased. In the oil and gas industry, where equipment is in remote
locations, oil fields have been digitized by means of sensors. The name of the
software platform is MAPR Distribution Including Hadoop® (MAPR 2015).
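The simplest form of such condition monitoring can be sketched as follows: recent sensor readings are compared against a historical baseline and an alert is raised on a large deviation. The readings and the threshold below are invented; the systems cited above use far richer streaming analytics.

# Condition-monitoring sketch: flag a machine when recent vibration
# readings drift far from the historical baseline (z-score rule).
from statistics import mean, stdev

def anomaly_alert(history, recent, z_limit=3.0):
    """True if the mean of recent readings exceeds baseline by z_limit sigmas."""
    mu, sigma = mean(history), stdev(history)
    z = (mean(recent) - mu) / sigma
    return z > z_limit, z

baseline = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.2, 2.1]   # mm/s RMS, assumed values
latest = [2.9, 3.1, 3.0]
alert, z = anomaly_alert(baseline, latest)
print(f"z = {z:.1f}, maintenance alert: {alert}")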
Conclusion
The approach used in this paper answers a significant part of the question of whether lean manufacturing and Industry 4.0 can coexist, and illustrates that they are not mutually exclusive but can be seamlessly integrated with each other for successful production management. This paper analyzes the research and publications in the field of Industry 4.0 and identifies how they act as supporting factors for the implementation of lean manufacturing.
Industry 4.0 will not solve the problems of mismanaged and weakly organized manufacturing systems. Its tools should be applied to lean activities that have already been performed successfully before automation. In addition, effective information flow should be established before introducing ICT. In this context, keeping data correct and current is a critical success factor in both Industry 4.0 and lean production.
References
Ahuja IPS, Khamba JS (2008) Total productive maintenance: literature review and directions. Int J
Qual Reliab Manag 25(7):709–756
Azuma RT (1997) A survey of augmented reality. Presence Teleoperators Virtual Environ 6
(4):355–385
Bal A, Satoglu SI (2014) Maintenance management of production systems with sensors and RFID:
a case study. In: Global conference on engineering and technology management (GCETM),
pp 82–89
Biedermann H, Kinz A, Bernerstätter R, Zellner T (2016) Lean smart maintenance—implemen-
tation in the process industry. Produ Manag 21(2):41–43
Blöchl SJ, Schneider M (2016) Simulation game for intelligent production logistics—the PuLL®
learning factory. Procedia CIRP 54:130–135
Diez JV, Ordieres-Mere J, Nuber G (2015) The Hoshin Kanri Tree cross plant lean shop floor
management. Procedia CIRP 32:150–155
Doh SW, Deschamps F, Pinhero De Lima E (2016) Systems integration in the lean manufacturing
systems value chain to meet industry 4.0 requirements. In: Borsato M et al (eds)
Transdisciplinary engineering: crossing boundaries, pp 642–650
Durmusoglu MB, Satoglu SI (2011) Axiomatic design of hybrid manufacturing systems in erratic
demand conditions. Int J Prod Res 49(17):5231–5261
Kolberg D, Knobloch J, Zühlke D (2016) Towards a lean automation interface for workstations.
Int J Prod Res. https://doi.org/10.1080/00207543.2016.1223384
Kolberg D, Zühlke D (2015) Lean automation enabled by industry 4.0 technologies. IFAC-Papers
Online 48(3):1870–1875
Lage Junior M, Filho GM (2010) Variations of the kanban system: literature review and
classification. Int J Prod Econ 125:13–21
Liker JK (2004) The Toyota way. McGraw-Hill, New York
Liker JK, Morgan JM (2006) The Toyota way in services: the case of lean product development.
Acad Manag Perspect 20(2):5–20
MAPR: Predictive Maintenance using Hadoop for the Oil and Gas Industry. https://www.mapr.
com/sites/default/files/mapr_whitepaper_predictive_maintenance_oil_gas_051515.pdf.
Accessed 24 Feb 2017 (2015)
Ohno T (1988) Toyota production system: beyond large-scale production. CRC Press
Rauch E, Dallasega P, Matt DT (2016) The way from lean product development (LPD) to smart
product development (SPD). Procedia CIRP 50:26–31
Rother M, Harris R (2001) Creating continuous flow. Lean Enterprise Institute
Rüttimann BG, Stöckli MT (2016) Lean and industry 4.0-twins, partners, or contenders? A due
clarification regarding the supposed clash of two production systems. J Serv Sci Manag 9:
485–500
Sanders A, Elangeswaran C, Wulfsberg J (2016) Industry 4.0 implies lean manufacturing: research
activities in industry 4.0 function as enablers for lean manufacturing. J Ind Eng Manag 9(3):
811–833
Serván J, Mas F, Menéndez JL, Ríos J (2012) Assembly work instruction deployment using
augmented reality. Key Eng Mater 502:25–30
Sibatrova SV, Vishnevskiy KO (2016) Present and future of the production: integrating lean
management into corporate foresight, Working Paper, National Research University Higher
School of Economics, WP BRP 66/STI/2016
Synnes EL, Welo T (2016) Enhancing integrative capabilities through lean product and process
development. Procedia CIRP 54:221–226
Takeda H (2006) The synchronized production system: going beyond just-in-time through kaizen.
KoganPage, London
Url-1. https://logisticsviewpoints.com/2015/04/16/picking-with-vision/
Url-2. http://www.ptonline.com/articles/fully-automatic-mold-change-in-under-2-min
Url-3. http://www.rfidjournal.com/articles/view?7123
Veza I, Mladineo M, Gjeldum N (2016) Selection of the basic lean tools for development of
croatian model of innovative smart enterprise. Tehnički vjesnik 23(5):1317–1324
Womack JP, Jones DT (2010) Lean thinking: banish waste and create wealth in your corporation.
Simon and Schuster
Würth Industrie Service GmbH & Co. KG (2013) iBin® stocks in focus—the first intelligent bin
Zuehlke D (2010) SmartFactory—towards a factory-of-things. Ann Rev Control 34:129–138
Selecting the Best Strategy for Industry 4.0
Applications with a Case Study
Abstract In this paper, we try to find the best strategy for Industry 4.0 imple-
mentation. For this aim, we determine the aggregated strategies for applying this
concept and the criteria that are used to select the best strategy. In this context, the criteria are set out and the candidate strategies are specified, for example improving human resources, work organization and design, information systems, and the effective use of resources, as well as developing new business models and standardization, with basic strategies to be applied as a priority. Since this selection is a process in which many dif-
ferent measures need to be considered, multi-criteria decision-making (MCDM)
methods based on AHP-VIKOR methodologies have been applied to find the best
strategy. Fuzzy set theory was beneficial for coping with uncertainties in the
selection process.
Introduction
The world has been changing faster than ever since the first industrial revolution. This revolution has been followed by second and third generations, called Industry 2.0 and Industry 3.0, in order to be able to meet the increases in demand that have accompanied human population growth. Since then, investments in industry and industrial products, and the returns on them, have increased enormously. Today, we are taking steps to transition to a new concept called Industry
4.0 in order to bring this development further to meet the demands of the growing
human population. This concept aims to introduce technical advances such as
wireless network systems, cyber-physical systems, the Internet of Things, and cloud
computing in industry. Not only scientists but also politicians have been evaluating
this transition process since the 2000s. As a result of this evaluation process, many
strategies have been suggested, which must be selected among in a systematic way. Since this process
considers many criteria, both qualitative and quantitative, which are used for
comparison of strategy alternatives, it is very difficult for experts to make decisions.
In order to deal with this multi-expert and multi-criteria environment, we will
decide how many criteria exist in it, build a set of possible strategies, collect the
appropriate information about strategies with respect to criteria, and evaluate them
to reach the goal by using multi-criteria decision making (MCDM) (Tzeng and
Huang 2011). This kind of evaluation requires the utilization of expert systems so that data can be expressed in a more explanatory way to handle uncertainties, and thereby more informed decisions can be taken. There are many models dealing with
the uncertainty of strategy problems in the literature. Among these models,
stochastic selection models (Klein et al. 2009), heuristic optimization models
(Beloglazov et al. 2012), simulation models (Goh et al. 2007), and fuzzy MCDM
(Kaya and Kahraman 2011; Opricovic and Tzeng 2004) are the most frequently
applied techniques. In this paper, an integrated fuzzy MCDM methodology is
suggested for the Industry 4.0 strategy selection problem. There are several inte-
grated fuzzy MCDM methodologies in the literature, such as fuzzy Analytic
Network Process (ANP) and the fuzzy Preference Ranking Organization METHod
for Enrichment of Evaluations (PROMETHEE) (Vinodh et al. 2014); fuzzy
Analytic Hierarchy Process (AHP) and fuzzy Technique for Order Performance by
Similarity to Ideal Solution (TOPSIS) (Chen and Chen 2010); fuzzy
Decision-Making Trial and Evaluation Laboratory (DEMATEL) and Fuzzy ANP
and Fuzzy TOPSIS (Gorecky et al. 2017). In this paper, a fuzzy MCDM
methodology consisting of AHP and VIKOR methods is used to determine the best
Industry 4.0 strategy. For this aim, the criteria weights have been calculated by
using fuzzy AHP and fuzzy VIKOR has been used to determine the best strategy.
The rest of this paper is organized as follows: Section “Literature Review” presents
the literature review concerning Industry 4.0. Section “The Proposed
Methodology” presents the proposed model. Section “Real Case Study” describes a
real case study for the selection of the most appropriate Industry 4.0 strategy.
Finally, the obtained results and future research suggestions are discussed in
Section “Conclusion and Suggestions for Future Work”.
Literature Review
Industry 4.0 has drawn much attention from academics and researchers in recent
years and the number of studies has increased dramatically. Some of the studies of
Industry 4.0 can be summed up as follows. Gorecky et al. (2017) presented the
design, implementation, and presentation of a virtual training system, VISTRA, for future factories. They selected the automotive industry
because it is one of the leading industries adopting future factory concepts and
technologies such as cyber-physical systems and the Internet of Things. Grundstein
et al. (2017) performed a study of the autonomous production control
(APC) method in job shop manufacturing. This control
method integrates all control tasks (order release, sequencing, and capacity control)
to meet due dates. They compared the APC method with other method combina-
tions and found that the APC method has the potential to meet the due dates better.
Barbosa et al. (2017) studied two key concepts of Industry 4.0 vision, namely
Cyber Physical Systems (CPSs) and Intelligent Product (IP). They suggested that
the integration of these two approaches is beneficial for future smart industries.
They presented the integration of these approaches via two real world cases.
Fleischmann et al. (2017) mentioned new methodologies for monitoring systems
based on CPSs and presented a condition monitoring system for a handling unit in a
test cell. Kolberg et al. (2016) presented an ongoing work concerning the digiti-
zation of lean production methods using CPS. Lean production is inadequate for
meeting the market demand for customized products. Industry 4.0 technologies are
combined with lean production, which is called lean automation. They gave the
example of a kanban method to explain their work. Sepulcre et al. (2016) men-
tioned that the Industry 4.0 concept targets the interconnection and computerization
of traditional industries to improve their adaptability and utilize their resources
efficiently. Oesterreich and Teuteberg (2016) reviewed applications of technologies
related to Industry 4.0 in the construction industry. They evaluated the literature
from different perspectives like political, economic, social, technological, envi-
ronmental, and legal ones and gave recommendations for future research. Chang
and Wu (2016) mentioned that Industry 4.0 offers smart productivity based on the
industrial Internet of Things, big data, and CPSs in manufacturing industries.
Rennung et al. (2016) analyzed the service industry from the perspective of
Industry 4.0. They interviewed experts and evaluated the applicability of scientific
approaches to service networks for the project “Industry 4.0”. Veza et al. (2015)
studied a partner-selection problem. They used the PROMETHEE method to
evaluate virtual enterprises. The problem was applied to a production network of
smart factories in Industry 4.0. Forstner and Dümmler (2014) claimed that the smart
factory is the central element of Industry 4.0 and established a foundation value to
enable the integration of value chains across companies.
The Proposed Methodology
We apply a fuzzy MCDM approach to detect the best strategy for applying the
Industry 4.0 concept. The following subsections explain the adopted methodology
in the fuzzy environment.
Fuzzy set theory was introduced by Zadeh (1965) as a class of objects with a
continuum of grades of membership. Such a set is characterized by a membership
function that assigns to each element a grade of membership varying in a closed
interval ranging from zero to one.
The AHP was proposed by Saaty (1980) to solve complex multi-criteria decision
problems (Rezaie et al. 2014; Kaya and Kahraman 2014) and is based on the
concept of simplifying complex decision problems into elements (Zare et al. 2016).
In this paper, Buckley’s fuzzy AHP (1985) is used to determine the weights of
criteria in order to select the best strategy in Industry 4.0 (Hsieh et al. 2004;
Kahraman et al. 2014).
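A compact sketch of Buckley's row geometric-mean method is given below for three criteria; the pairwise judgments are invented triangular fuzzy numbers (l, m, u), and in practice the fuzzy weights would still be defuzzified (for example with the graded mean integration formula of Eq. (6) below) before reporting.

# Sketch of Buckley's fuzzy AHP for a 3x3 matrix of triangular fuzzy
# numbers (l, m, u).  Judgment values are invented for illustration.
def gmean(tfns):
    n = len(tfns)
    l = m = u = 1.0
    for (a, b, c) in tfns:
        l *= a; m *= b; u *= c
    return (l ** (1 / n), m ** (1 / n), u ** (1 / n))

A = [  # reciprocal fuzzy judgments: "how much more important is i than j"
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
]

r = [gmean(row) for row in A]                      # fuzzy geometric means
total = tuple(sum(x[k] for x in r) for k in range(3))
# w_i = r_i (x) (r_1 (+) ... (+) r_n)^(-1); inverting a TFN flips its bounds
w = [(ri[0] / total[2], ri[1] / total[1], ri[2] / total[0]) for ri in r]
for i, wi in enumerate(w, 1):
    print(f"w{i} = ({wi[0]:.3f}, {wi[1]:.3f}, {wi[2]:.3f})")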
Fuzzy VIKOR
VIKOR was developed by Opricovic and Tzeng to find a compromise solution for
MCDM issues. This method has been applied to many areas such as risk assessment
(Gupta et al. 2016), machine selection (Wu et al. 2016), plant location selection
(Gul et al. 2016), supplier selection (Kaya and Kahraman 2010), and so on. VIKOR
is an MCDM method that ranks alternatives and determines the compromise
solution that is the closest to the “ideal” (Opricovic and Tzeng 2004). The steps of
the fuzzy VIKOR methodology are as follows (Tuzkaya et al. 2010; Kaya and
Kahraman 2010):
n represents the number of feasible alternatives, $A_i = \{A_1, A_2, \ldots, A_n\}$, and $\tilde{x}_{ij}$ is the rating of alternative $A_i$ with respect to criterion $j$.
Step 1: Construct the fuzzy multi-criteria decision-making problem in matrix format:

$$\tilde{D} = \begin{bmatrix} \tilde{x}_{11} & \tilde{x}_{12} & \cdots & \tilde{x}_{1n} \\ \tilde{x}_{21} & \tilde{x}_{22} & \cdots & \tilde{x}_{2n} \\ \vdots & \vdots & & \vdots \\ \tilde{x}_{m1} & \tilde{x}_{m2} & \cdots & \tilde{x}_{mn} \end{bmatrix} \qquad (1)$$
Step 2: Determine the best $\tilde{f}_j^{*} = (l_j^{*}, m_j^{*}, u_j^{*})$ and worst $\tilde{f}_j^{-} = (l_j^{-}, m_j^{-}, u_j^{-})$ values of all criterion functions, j = 1, 2, …, m:
$\tilde{f}_j^{*} = \max_i \tilde{x}_{ij}$, $\tilde{f}_j^{-} = \min_i \tilde{x}_{ij}$, if the jth criterion belongs to the benefit criteria;
$\tilde{f}_j^{*} = \min_i \tilde{x}_{ij}$, $\tilde{f}_j^{-} = \max_i \tilde{x}_{ij}$, if the jth criterion belongs to the cost criteria.
Step 3: Compute the normalized fuzzy differences $\tilde{d}_{ij}$, j = 1, …, m and i = 1, …, n.
Step 4: Compute the values $\tilde{S}_i$ and $\tilde{R}_i$:

$$\tilde{S}_i = \sum_{j=1}^{m} \tilde{w}_j \tilde{d}_{ij} \qquad (4)$$

$$\tilde{R}_i = \max_j \left( \tilde{w}_j \tilde{d}_{ij} \right) \qquad (5)$$
where $\tilde{S}_i$ refers to the measure of separation of $A_i$ from the fuzzy best value and $\tilde{R}_i$ to the measure of separation of $A_i$ from the fuzzy worst value.
Step 5: Defuzzify the values of $\tilde{S}_i$ and $\tilde{R}_i$ by using the graded mean integration approach; for triangular fuzzy numbers, the fuzzy number $\tilde{C} = (c_1, c_2, c_3)$ can be transformed into a crisp number by employing the equation below:

$$P(\tilde{C}) = C = \frac{c_1 + 4c_2 + c_3}{6} \qquad (6)$$
Step 8: Propose a compromise solution, called alternative A(1), which is the best ranked solution according to the measure Q (minimum), if the following two conditions are satisfied:
Condition 1 The acceptable advantage $Q(A^{(2)}) - Q(A^{(1)}) \geq DQ$, where A(2) is the alternative with second position in the ranking list according to Q and DQ = 1/(n − 1).
Condition 2 For acceptable stability in decision making, alternative A(1) must also be the best ranked according to S and/or R.
If one of the conditions is not satisfied, then a set of compromise solutions is proposed, which consists of:
– Alternatives A(1) and A(2) if only Condition 2 is not satisfied, or
– Alternatives A(1), A(2), …, A(n) if Condition 1 is not satisfied; A(n) is determined by the relation $Q(A^{(n)}) - Q(A^{(1)}) < DQ$ for the maximum n (the positions of these alternatives are "in closeness").
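The sketch below walks through these steps on a small invented example with triangular fuzzy numbers. Because the text does not reproduce the normalized-difference equations of Step 3 or the Q computation of Steps 6–7, the standard forms from Opricovic and Tzeng (2004) are assumed for those parts.

# Fuzzy VIKOR sketch for 3 alternatives and 2 benefit criteria; ratings
# and weights are invented triangular fuzzy numbers (l, m, u).
def defuzz(c):                       # graded mean integration, Eq. (6)
    return (c[0] + 4 * c[1] + c[2]) / 6.0

def fsub(a, b):                      # TFN subtraction: (l1-u2, m1-m2, u1-l2)
    return (a[0] - b[2], a[1] - b[1], a[2] - b[0])

X = [[(3, 5, 7), (5, 7, 9)],         # aggregated expert ratings
     [(5, 7, 9), (3, 5, 7)],
     [(7, 9, 9), (5, 7, 9)]]
W = [(0.3, 0.5, 0.7), (0.2, 0.4, 0.6)]   # fuzzy criterion weights (assumed)

n_alt, n_crit = len(X), len(X[0])
S, R = [], []
for i in range(n_alt):
    s, r_best = (0.0, 0.0, 0.0), None
    for j in range(n_crit):
        col = [X[k][j] for k in range(n_alt)]
        f_star, f_minus = max(col, key=defuzz), min(col, key=defuzz)
        span = f_star[2] - f_minus[0]                  # u_j* - l_j^-
        d = tuple(x / span for x in fsub(f_star, X[i][j]))   # normalized diff
        wd = tuple(W[j][k] * d[k] for k in range(3))
        s = tuple(s[k] + wd[k] for k in range(3))      # Eq. (4)
        if r_best is None or defuzz(wd) > defuzz(r_best):
            r_best = wd                                # Eq. (5)
    S.append(defuzz(s)); R.append(defuzz(r_best))      # Step 5

v = 0.5                                                # consensus weight (assumed)
Q = [v * (S[i] - min(S)) / (max(S) - min(S))
     + (1 - v) * (R[i] - min(R)) / (max(R) - min(R)) for i in range(n_alt)]
rank = sorted(range(n_alt), key=lambda i: Q[i])
print("Q:", [round(x, 3) for x in Q], "-> best:", f"A{rank[0] + 1}")
print("acceptable advantage:", Q[rank[1]] - Q[rank[0]] >= 1.0 / (n_alt - 1))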
A flowchart of our suggested methodology can be seen in Fig. 1.
Real Case Study
This paper aims to find the best strategy for the implementation of the Industry 4.0 initiative of companies. In the selection process, the fuzzy MCDM methodology is applied to obtain results that are closer to reality. First of all, the criteria that are used
to evaluate the strategies for Industry 4.0 are defined. Figure 2 shows the hierarchy
of criteria and alternatives that are considered in the scope of this paper. Ten criteria
and five alternatives are determined for this study. Then, the weights of the criteria
are calculated to find their importance levels in the decision-making process. In this
phase, fuzzy AHP methodology with the evaluations obtained from three experts is
used. These experts are the people who study Industry 4.0 in their academic fields.
They were asked to evaluate the criteria according to a scale presented on a questionnaire.
[Fig. 2 Hierarchy for selecting the best Industry 4.0 strategy — criteria: C1 Leadership, C2 Customer, C3 Product, C4 Operation, C5 Culture, C6 People, C7 Governance, C8 Technology, C9 Quality, C10 Organization]
After that, we checked the consistency of evaluations for each expert.
If there was any inconsistent evaluation, the questionnaires were sent back to the
experts for reevaluation. This process was repeated until all the evaluations were
consistent, which meant that the consistency ratio was lower than 0.1.
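A sketch of this consistency check is shown below. It applies Saaty's consistency ratio to the crisp middle values of the fuzzy judgments, which is one common convention; the paper does not state which variant it used, and the matrix here is invented.

# Saaty consistency check: CR = CI / RI with CI = (lambda_max - n)/(n - 1);
# the evaluation is accepted when CR < 0.1.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    n = len(A)
    # approximate the priority vector: normalize columns, average the rows
    col = [sum(A[i][j] for i in range(n)) for j in range(n)]
    w = [sum(A[i][j] / col[j] for j in range(n)) / n for i in range(n)]
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n      # estimate of lambda_max
    return (lam - n) / (n - 1) / RI[n]

A = [[1, 3, 5],          # crisp middle judgments (invented example)
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
cr = consistency_ratio(A)
print(f"CR = {cr:.3f} -> {'consistent' if cr < 0.1 else 'send back for reevaluation'}")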
The fuzzy AHP process is conducted to calculate the criteria weights. Table 1
shows the weights in triangular fuzzy numbers. According to the results, criterion 1,
“Leadership”, was determined as the most important criterion. The least important
one was “C6: People”.
After obtaining the criteria weights, fuzzy VIKOR steps were initiated. Firstly,
experts were consulted again to score the alternatives according to the criteria.
Linguistic expressions were converted to triangular fuzzy numbers according to the
scale presented in the proposed methodology section. Then three decision makers’
evaluations were aggregated and the best and worst values for each criterion were
revealed. Then the S, R, and Q values for each alternative were calculated. Table 2
shows the S, R, and Q values.
When we analyze the results, we can see that the alternative with the minimum Q value is Alternative 3. This alternative also has the minimum S and R values, which means that Condition 2 (acceptable stability) is satisfied. When we look at the acceptable advantage, $Q(A^{(2)}) - Q(A^{(1)}) \geq DQ$, where A(2) is the alternative in second position in the ranking list according to Q, Condition 1 is satisfied as well, so Alternative 3 is proposed as the compromise solution.
Conclusion and Suggestions for Future Work
In this paper, we aimed to find the best strategy for the transition to Industry 4.0 by using a fuzzy MCDM approach with the integration of the fuzzy AHP and VIKOR methodologies. To this end, criteria and alternatives were determined from experts' ideas
and a literature review. The criteria used to evaluate the strategies were weighted by
using fuzzy AHP methodology and the impacts of alternatives on criteria were
provided by experts for application to fuzzy VIKOR. The most important criterion
in the decision-making process was determined to be leadership. As a result of the
work, it emerged that the best alternative was the strategies designed to improve
information systems. It is not surprising that the alternative of developing infor-
mation systems, which is also referred to as the Internet of Things, takes first place
in the adoption of Industry 4.0. The last alternative was found to be developing new
business models. The development of new business models is also very important
in the implementation of this concept, but it does not appear to be a priority
strategy.
As suggestions for future research, different MCDM methods can be used, extensions of fuzzy sets can be considered, or the criteria and alternatives can be elaborated in more detail.
References
Barbosa J, Leitão P, Trentesaux D, Colombo AW, Karnouskosk S (2017) Cross benefits from
cyber-physical systems and intelligent products for future smart industries. In: IEEE
international conference on industrial informatics (INDIN) 7819214, pp 504–509
Beloglazov A, Abawajy J, Buyya R (2012) Energy-aware resource allocation heuristics for
efficient management of data centers for cloud computing. Future Gener Comput Syst 28
(5):755–768
Buckley JJ (1985) Fuzzy hierarchical analysis. Fuzzy Sets Syst 17(3):233–247
Chang WY, Wu SJ (2016) Investigated information data of CNC machine tool for established
productivity of industry 4.0. In: 2016 5th IIAI international congress on advanced applied
informatics (IIAI-AAI). IEEE, pp 1088–1092
Chen JK, Chen IS (2010) Using a novel conjunctive MCDM approach based on DEMATEL,
fuzzy ANP, and TOPSIS as an innovation support system for Taiwanese higher education.
Expert Syst Appl 37(3):1981–1990
Fleischmann H, Kohl J, Franke J (2017) Improving maintenance processes with distributed
monitoring systems. In: IEEE international conference on industrial informatics (INDIN)
7819189, pp 377–382
Forstner L, Dümmler M (2014) Integrierte Wertschöpfungsnetzwerke-Chancen und Potenziale
durch Industrie 4.0. e and i. Elektrotechnik und Informationstechnik 131(7):199–201
Goh KI, Cusick ME, Valle D, Childs B, Vidal M, Barabási AL (2007) The human disease
network. Proc Natl Acad Sci 104(21):8685–8690
Gorecky D, Khamis M, Mura K (2017) Introduction and establishment of virtual training in the
factory of the future. Int J Comput Integr Manuf 30(1):182–190
Grundstein S, Freitag M, Scholz-Reiter B (2017) A new method for autonomous control of
complex job shops—integrating order release, sequencing and capacity control to meet due
dates. J Manuf Syst 42:11–28
Gul M, Celik E, Aydin N, Gumus AT, Guneri AF (2016) A state of the art literature review of
VIKOR and its fuzzy extensions on applications. Appl Soft Comput 46:60–89
Gupta P, Mehlawat MK, Grover N (2016) Intuitionistic fuzzy multi-attribute group
decision-making with an application to plant location selection based on a new extended
VIKOR method. Inf Sci 370:184–203
Hsieh TY, Lu ST, Tzeng GH (2004) Fuzzy MCDM approach for planning and design tenders
selection in public office buildings. Int J Project Manag 22(7):573–584
Kahraman C, Süder A, Kaya İ (2014) Fuzzy multicriteria evaluation of health research
investments. Technol Econ Dev Econ 20(2):210–226
Kaya T, Kahraman C (2010) Multicriteria renewable energy planning using an integrated fuzzy
VIKOR and AHP methodology: the case of Istanbul. Energy 35(6):2517–2527
Kaya T, Kahraman C (2011) Multicriteria decision making in energy planning using a modified
fuzzy TOPSIS methodology. Expert Syst Appl 38(6):6577–6585
Kaya I, Kahraman C (2014) A comparison of fuzzy multicriteria decision making methods for
intelligent building assessment. J Civ Eng Manag 20(1):59–69
Klein S, Pluim JP, Staring M, Viergever MA (2009) Adaptive stochastic gradient descent
optimisation for image registration. Int J Comput Vis 81(3):227
Kolberg D, Knobloch J, Zühlke D (2016) Towards a lean automation interface for workstations.
Int J Prod Res. https://doi.org/10.1080/00207543.2016.1223384
Oesterreich T-D, Teuteberg F (2016) Understanding the implications of digitisation and
automation in the context of Industry 4.0: a triangulation approach and elements of a research
agenda for the construction industry. Comput Ind 83:121–139
Opricovic S, Tzeng GH (2004) Compromise solution by MCDM methods: a comparative analysis
of VIKOR and TOPSIS. Eur J Oper Res 156(2):445–455
Rennung F, Luminosu CT, Draghici A (2016) Service provision in the framework of Industry 4.0.
Procedia Soc Behav Sci 221:372–377
Rezaie K, Ramiyani SS, Shirkouhi SN, Badizadeh A (2014) Evaluating performance of Iranian
cement firms using an integrated fuzzy AHP-VIKOR method. Appl Math Model 38:5033–
5046
Saaty TL (1980) The analytic hierarchy process. McGraw-Hill, New York
Sepulcre M, Gozalvez J, Coll-Perales B (2016) Multipath QoS-driven routing protocol for
industrial wireless networks. J Netw Comput Appl 74:121–132
Tuzkaya G, Gülsün B, Kahraman C, Özgen D (2010) An integrated fuzzy multi-criteria decision
making methodology for material handling equipment selection problem and an application.
Expert Syst Appl 37(4):2853–2863
Tzeng GH, Huang JJ (2011) Multiple attribute decision making: methods and applications. CRC
press
Veza I, Mladineo M, Gjeldum N (2015) Managing innovative production network of smart
factories. IFAC-Papers OnLine 48(3):555–560
Vinodh S, Prasanna M, Prakash NH (2014) Integrated fuzzy AHP–TOPSIS for selecting the best
plastic recycling method: a case study. Appl Math Model 38(19):4662–4672
Wu Y, Chen K, Zeng B, Xu H, Yang Y (2016) Supplier selection in nuclear power industry with
extended VIKOR method under linguistic information. Appl Soft Comput 48:444–457
Zadeh L (1965) Fuzzy sets. Inf Control 8(1965):338–353
Zare M, Pahl C, Rahnama H, Nilashi M, Mardani A, Ibrahim O, Ahmadi H (2016) Multi-criteria
decision making approach in E-learning: a systematic review and classification. Appl Soft
Comput 45:108–128
Musculoskeletal Discomfort Experienced
by Children and Adolescents During
the Use of ICT: A Statistical Analysis
of Exposure Periods and Purposes
Keywords Musculoskeletal strain · Long time use · Information communication technology
Introduction
Literature Review
For many years, researchers have been investigating the physical effects of computer use on the musculoskeletal systems of children. The aim of the majority of studies was to investigate muscle activities related to computer use by children and to identify the risk factors resulting in musculoskeletal discomfort. Another aim of
these studies was to investigate differences and similarities between muscle activ-
ities when using old technology systems and computer-based technology (Oates
et al. 1998; Leaser et al. 1998; Ciccarelli et al. 2006; Breen et al. 2007; Straker et al.
2008a, b; Maslen and Straker 2009; Straker et al. 2009; Brink et al. 2009; Harris
2010; Zovkic et al. 2011).
Recent studies on ergonomics and physiotherapy have found that the use of ICT
by children and adolescents is associated with the musculoskeletal discomfort they
experience (Harris and Straker 2000; Greig et al. 2005; Sommerich et al. 2007;
Straker et al. 2008b). Moreover, these studies showed that discomfort occurring in
the neck region is one of the signs of musculoskeletal disorder.
Harris and Straker (2000) mentioned that children’s use of portable computers
with prolonged poor posture leads to musculoskeletal discomfort. They found an
association between musculoskeletal discomfort and duration of exposure to
information technology. Zovkic et al. (2011) indicated that health problems are
exacerbated by prolonged computer use. Among the health problems recorded were
wrist pain, drowsiness, dry throat, eye irritation, nose irritation, visual problems,
headaches, and neck and back pain. Straker et al. (2015) conducted a study to
analyze the muscular activities of children using tablet computers and other
activities (playing with toys and watching TV). The results indicated that the use of
desktop computers by children increased the risk of musculoskeletal discomfort.
Tablet computers introduced even less movement and muscle activity, together with worse spinal posture, compared with other children's activities. Woo et al. (2016)
showed that children and adolescents experienced a risk factor for musculoskeletal
discomfort similar to that of adults when they used computer devices.
However, very few studies have investigated the relationship between duration
of daily exposure of children and adolescents to ICT and musculoskeletal dis-
comfort. The aim of this study is to fill this gap and learn about the experiences of
musculoskeletal discomfort among children and adolescents.
The study also looks at the correlation between subjects’ musculoskeletal dis-
comfort and ICT use for different daily activities such as communication, gaming,
watching films, studying at school, studying outside school, surfing the Internet,
reading, and writing.
Method
Results
Among the 406 participants, 50.7% were male and 49.3% were female. The par-
ticipants were aged between 11 and 20 years. It was stated by 43.8% of participants
that they performed at least one of the activities for more than 3 h per day using one
of the ICT devices. In addition to daily desktop, laptop, or tablet computer use, 43%
of participants mentioned that they used their smartphones for many hours in their
daily lives. An interesting result is that 70.8% of the participants used at least two
different types of devices in their daily lives. Table 1 presents the percentages and
numbers regarding participants’ preferences for devices.
The participants also indicated the type of device, duration of daily use, and the
purpose of use in the questionnaire. Table 2 presents a summary of participants’
responses regarding the purpose of using desktop, laptop, or tablet computers.
Another result related to total years of ICT device use was that most of the
participants indicated that they had been using ICT devices for at least one year. It
was stated by 55% of participants that they had been using desktop computers for at
least a year, while 67.5% reported that they had been using laptop computers for at
least a year, and 58% stated that they had been using tablets for at least a year.
Musculoskeletal discomfort experienced by participants was determined through
the second part of the questionnaire. The participants stated that they mostly
experienced musculoskeletal discomfort in the neck (42.36%), upper back
(41.12%), lower back (38.67%), and right shoulder (22.41%) regions.
Five different correlation analyses were conducted to test the hypothesis. It was
found that there was a weak but significant relationship between musculoskeletal
discomfort experienced in their body regions and total hours of daily exposure of
children and adolescents to ICT devices. The results indicated that the participants
who used devices for less than 1 h per day experienced discomfort in their upper
arms (both left and right). In addition, there was no significant relationship for those
participants who used devices for 1–2 h daily. The participants who used ICT
devices for 2–3 h per day felt more discomfort in their thigh (left) and shoulders
(both left and right).
Table 1 Statistics drawn from the question “Do you use a tablet, laptop, or desktop PC or all of
them in your daily life?”
Computer usage preferences of the respondents in daily life Percentage (%) Number
Tablet only 8.9 36
Laptop only 15.0 61
Desktop only 5.4 22
Desktop and laptop 9.9 40
Laptop and tablet 20.2 82
Desktop and tablet 11.1 45
All 29.6 120
Table 2 Average number of hours of daily laptop, desktop, or tablet computer usage
Values are percentage (number of participants).
Purpose | Device | None | Less than 1 h | 1–2 h | 2–3 h | More than 3 h
Communication | Laptop | 42.86% (174) | 28.82% (117) | 17.00% (69) | 5.42% (22) | 5.91% (24)
Communication | Desktop | 66.50% (270) | 17.73% (72) | 10.59% (43) | 2.96% (12) | 2.22% (9)
Communication | Tablet | 43.10% (175) | 21.43% (87) | 13.30% (54) | 10.10% (41) | 12.07% (49)
Gaming | Laptop | 52.96% (215) | 23.15% (94) | 14.04% (57) | 5.17% (21) | 4.68% (19)
Gaming | Desktop | 65.27% (265) | 23.15% (53) | 9.85% (40) | 4.43% (18) | 7.39% (30)
Gaming | Tablet | 46.31% (188) | 21.92% (89) | 15.76% (64) | 7.88% (32) | 8.13% (33)
Watching films | Laptop | 44.09% (179) | 18.72% (76) | 17.73% (72) | 11.82% (48) | 7.64% (31)
Watching films | Desktop | 68.47% (278) | 11.08% (45) | 10.34% (42) | 6.16% (25) | 3.94% (16)
Watching films | Tablet | 63.55% (258) | 13.79% (56) | 13.55% (55) | 4.43% (18) | 4.68% (19)
Studying outside school | Laptop | 52.46% (213) | 25.62% (104) | 16.26% (66) | 3.45% (14) | 2.22% (9)
Studying outside school | Desktop | 71.67% (291) | 13.79% (56) | 8.37% (34) | 3.45% (14) | 2.71% (11)
Studying outside school | Tablet | 65.52% (266) | 19.95% (81) | 7.64% (31) | 3.94% (16) | 2.96% (12)
Using laptop, desktop, or tablet at school for lectures | Laptop | 72.41% (294) | 16.01% (65) | 7.64% (31) | 1.97% (8) | 1.97% (8)
Using laptop, desktop, or tablet at school for lectures | Desktop | 62.56% (254) | 24.88% (101) | 8.87% (36) | 2.22% (9) | 1.48% (6)
Using laptop, desktop, or tablet at school for lectures | Tablet | 79.31% (322) | 13.05% (53) | 4.68% (19) | 0.99% (4) | 1.97% (8)
Using laptop, desktop, or tablet for surfing the Internet | Laptop | 46.80% (190) | 21.92% (89) | 16.50% (67) | 6.90% (28) | 7.88% (32)
Using laptop, desktop, or tablet for surfing the Internet | Desktop | 68.97% (280) | 11.58% (47) | 11.08% (45) | 3.20% (13) | 5.17% (21)
Using laptop, desktop, or tablet for surfing the Internet | Tablet | 45.07% (183) | 17.49% (71) | 18.23% (74) | 8.87% (36) | 10.34% (42)
Reading | Laptop | 75.37% (306) | 14.78% (60) | 6.16% (25) | 2.71% (11) | 0.99% (4)
Reading | Desktop | 82.02% (333) | 12.32% (50) | 3.69% (15) | 1.23% (5) | 0.74% (3)
Reading | Tablet | 62.32% (253) | 19.95% (81) | 12.32% (50) | 2.22% (9) | 3.20% (13)
Writing | Laptop | 66.50% (270) | 20.94% (85) | 8.37% (34) | 1.97% (8) | 2.22% (9)
Writing | Desktop | 78.57% (319) | 11.82% (48) | 6.90% (28) | 1.23% (5) | 1.48% (6)
Writing | Tablet | 74.63% (303) | 13.05% (53) | 7.88% (32) | 1.72% (7) | 2.71% (11)
The correlation analyses also showed that there was a weak but significant
correlation between cumulative years of exposure to tablet computers and dis-
comfort experienced in the (left) shoulder region. Furthermore, cumulative years of
exposure to desktop computers and discomfort experienced in the upper back
region were also weakly correlated. There was no significant relationship between
cumulative years of exposure to laptop computers and musculoskeletal discomfort
experienced in the body regions of participants.
The third correlation analysis tested the relationship between average number of
hours of daily exposure of children and adolescents to desktop, laptop, or tablet
computers and musculoskeletal discomfort experienced in their body regions. The
results indicated that there was a weak correlation between average number of hours
of daily exposure of children and adolescents to desktop and laptop computers and
some body regions. However, there was no correlation between the average number
of hours of daily exposure of children and adolescents to tablet computers and
musculoskeletal discomfort experienced in their body regions. The correlation and
significant values are shown in Table 3.
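As a minimal sketch of one such analysis (Pearson's test via SciPy is assumed; the exposure bands and discomfort ratings below are synthetic stand-ins, not the survey data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
daily_hours = rng.integers(0, 5, size=406).astype(float)    # exposure bands (hypothetical)
neck = 0.2 * daily_hours + rng.normal(0.0, 1.0, size=406)   # synthetic discomfort ratings

r, p = pearsonr(daily_hours, neck)
print(f"neck: r = {r:.2f}, p = {p:.3g}")  # a weak correlation can still be significant
```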
Table 4 illustrates the results of the correlation analysis to test whether there
were significant correlations between desktop, laptop, or tablet computer usage for
communication, gaming, watching films, studying outside school, studying at
school, surfing the Internet, reading, and writing and musculoskeletal discomfort
experienced in body regions. The results obtained from the analysis showed that
there were weak but significant correlations between the variables.
The results also showed that there was no correlation between musculoskeletal discomfort experienced in body regions and prolonged daily smartphone use.
Discussion
The current study analyzed the relationship between duration of daily exposure of
children and adolescents to ICT and musculoskeletal discomfort experienced. The
participants indicated that they frequently felt discomfort, mostly in the
neck, upper back, lower back, and shoulder regions. The results of Sommerich et al.
(2007) and Straker et al. (2008a, b), who investigated children’s and adolescents’
use of ICT, also support these findings.
This study showed that there is a relationship between the average number of
hours of daily exposure of children and adolescents to desktop and laptop com-
puters and discomfort experienced in the lower back, upper arm, thigh, lower leg,
hip, and knee regions. Korpinen et al. (2015) pointed out that pain, numbness, and
aches were mostly experienced in the hip and lower back regions. In addition,
Zovkic et al. (2011) showed that prolonged computer usage increases health
problems such as wrist pain, drowsiness, dry throat, eye irritation, nose irritation,
visual problems, headaches, and neck and back pain. The results of this study also
provided similar findings.
An important finding of the current research was that there is a significant
relationship between daily use of desktop, laptop, or tablet computers for different
purposes (activities) and experiencing musculoskeletal discomfort. This result was
also verified by Lin et al. (2015), Sobhy et al. (2015), and Kingston et al. (2016).
Lin et al. (2015) showed that prolonged touch-typing affects the upper extremities
and neck. Kingston et al. (2016) pointed out that reading tasks performed using
tablet computers affected the wrists, elbows, and shoulders. Sobhy et al. (2015)
investigated wrist and neck discomfort experienced during tablet usage for the
purpose of gaming. The results showed that the prolonged use of tablets for gaming
increased muscle activity and that there was a relationship between gaming activ-
ities and discomfort experienced in the neck and wrist.
Table 4 Summary of correlations between ICT usage and musculoskeletal discomfort experienced by participants
Communication Studying outside school Surfing the Internet Reading Writing
Tablet Shoulder (right) 0.126* 0.174**
Shoulder (left) 0.128* 0.141*
Upper back 0.182*
Upper arm (right) 0.133* 0.118*
Upper arm (left) 0.154**
Communication Gaming Studying outside school Surfing the Internet Reading
Laptop Shoulder (right) 0.138*
Shoulder (left) 0.118*
Forearm (right) 0.175**
Hand/fingers (right) 0.127*
Hand/fingers (left) 0.160**
Hips/buttocks 0.166** 0.216**
Knee (right) 0.149** 0.164**
Desktop Communication Gaming Watching films Studying outside school Surfing the Internet
Hand/fingers (right) 0.142*
Hand/fingers (left) 0.174*
Hips/buttocks 0.141* 0.172**
Thigh (right) 0.191**
Thigh (left) 0.235**
Lower leg (left) 0.140* 0.144* 0.139*
*p < 0.05; **p < 0.001
Conclusion
In this study, the relationship between duration of daily exposure of children and
adolescents to ICT and musculoskeletal discomfort was investigated. The survey
findings showed that the participants felt discomfort mostly in the neck, upper back,
lower back, and shoulders. The correlation analysis indicated that there is a rela-
tionship between prolonged duration of ICT device use and musculoskeletal dis-
comfort. Musculoskeletal discomfort mostly occurred due to the use of ICT devices
for communication, surfing the Internet, reading, and writing. The results showed
that the shoulder, upper back, upper arm, forearm, and hand muscles are affected by
use of these ICT devices.
This research illustrated that there is a significant relationship between prolonged
usage of ICT devices and musculoskeletal discomfort. However, there is a need for
further research to understand the musculoskeletal discomfort experienced. In
addition, not only the musculoskeletal discomfort experienced but also the effects of
using this technology for many years on the motor skills of children and adolescents
should be examined.
References
Aly Sobhy M, Eid Mohamed A, Khaled Osama A, Ali Mostafa S (2015) Effect of using tablet
computer on myoelectric activity of wrist and neck muscles in children. Int J Curr Res 7(11):
23194–23201
Breen R, Pyper S, Rusk Y, Dockrell S (2007) An investigation of children’s posture and
discomfort during computer use. Ergonomics 50(10):1582–1592
Brink Y, Crous LC, Louw QA, Grimmer-Somers K, Schreve K (2009) The association between
postural alignment and psychosocial factors to upper quadrant pain in high school students: a
prospective study. Manual Ther 14:647–653
Cornell University Ergonomics Web (CUergo) (1999) Cornell musculoskeletal discomfort
questionnaires. Retrieved from http://ergo.human.cornell.edu/ahmsquest.html
Ciccarelli M, Straker L, Mathiassen SE, Pollock C (2006) ITKids: variation in muscle activity
among schoolchildren when using different information and communication technologies. In:
42nd annual conference of the Human Factors and Ergonomics Society of Australia
Erdinç O, Ekşioğlu M (2009a) Student specific cornell musculoskeletal discomfort questionnaires
(SS-CMDQ) (English Version). Retrieved from http://ergo.human.cornell.edu/ahSSCMDQ
quest.html
Erdinç O, Ekşioğlu M (2009b) Student specific cornell musculoskeletal discomfort questionnaires
(SS-CMDQ) (Turkish Version). Retrieved from http://ergo.human.cornell.edu/ahSSCMDQ
questTurkish.html
Greig AM, Straker LM, Briggs AM (2005) Cervical erector spinae and upper trapezius muscle
activity in children using different information technologies. Physiotherapy 91:119–126
Harris C (2010) Musculoskeletal outcomes in children using computers: a model representing the
relationships between users correlates, computer exposure and musculoskeletal outcomes.
Ph.D. Curtin University, School of Physiotherapy
Harris C, Straker L (2000) Survey of physical ergonomics issues associated with school children’s
use of laptop computers. Int J Ind Ergon 26:337–346
Hildebrandt VH, Bongers PM, Van Dijk FJH, Kemper HCG, Dul J (2001) Dutch musculoskeletal
questionnaire: description and basic qualities. Ergonomics 44(12):1038–1055
Kingston DC, Riddell MF, McKinnon CD, Gallagher KM, Callaghan JP (2016) Influence of input
hardware and work surface angle on upper limb posture in a hybrid computer workstation.
Hum Factors: J Hum Factors Ergon Soc 58(1):107–119
Korpinen L, Pääkkönen R, Gobba F (2015) Self-reported ache, pain, or numbness in hip and
lower back and use of computers and cell phones amongst Finns aged 18–65. Int J Ind Ergon
48:70–76
Leaser KL, Maxwell LE, Hedge A (1998) The effect of computer workstation design on student
posture. J Res Comput Edu 31(2):173–188
Lin MIB, Hong RH, Chang JH, Ke XM, Federici S (2015) Usage position and virtual keyboard
design affect upper-body kinematics, discomfort, and usability during prolonged tablet typing.
PLOS ONE 10(12):e0143585. doi:10.1371/journal.pone.0143585
Maslen B, Straker L (2009) A comparison of posture and muscle activity means and variation
amongst young children, older children and young adults whilst working with computers.
Work 32:311–320
Ministry of Education (TRNC) (2014) Department of Common Services for Education.
Educational Statistical Yearbook 2013–2014
Oates S, Evans GW, Hedge A (1998) An anthropometric and postural risk assessment of children’s
school computer work environments. Comput Schools: Interdisc J Pract, Theor Appl Res
14(3–4):55–63
Sommerich CM, Ward R, Sikdar K, Payne J, Herman L (2007) A survey of high school students
with ubiquitous access to tablet PCs. Ergonomics 50(5):706–727
Straker L, Burgess-Limerick R, Pollock C, Coleman J, Skoss R, Maslen B (2008a) Children’s
posture and muscle activity at different computer display heights and during paper information
technology use. Hum Factors 50(1):49–61
Straker L, Coleman J, Skoss R, Maslen BA, Burgess-Limerick R, Pollock CM (2008b) A
comparison of posture and muscle activity during tablet computer, desktop computer and paper
use by young children. Ergonomics 51(4):540–555
Straker L, Pollock C, Maslen B (2009) Principles for the wise use of computers by children.
Ergonomics 52(11):1386–1402
Straker LM, Campbell A, Coenen P, Ranelli S, Howie E (2015) Movement, posture and muscle
activity in young children using tablet computers. In: Proceedings 19th triennial congress of the
IEA, Melbourne 9–14 Aug 2015
Woo EH, White P, Lai CW (2016) Impact of information and communication technology on child
health. J Paediatr Child Health 52(6):590–594
Zovkic M, Vrbanec T, Dobsa J (2011) Computer ergonomic of elementary school students. In: Proceedings of the 22nd Central European Conference on Information and Intelligent Systems, Varazdin, Croatia, pp 37–45
A Closed-Loop Reverse Supply Chain
Network Design for Waste Electrical
and Electronic Equipment
Abstract Nowadays, firms are choosing strategies that increase their economic
performance as well as their competitiveness in the field of social responsibility.
Interest in the effective reuse of resources and/or manufactured products continues
to increase in all companies as a result of global climatic changes, population
growth, rapid urbanization, and the reduction of natural resources. This study
proposes a sustainable multi-period reverse logistics network design to minimize
waste electrical and electronic equipment, which is one of the most crucial
sectors in terms of waste management. This study contributes to filling the gap in
the literature on mathematical closed-loop reverse logistics network design by
including multi-product, multi-objective, and multi-period parameters in the model
for all three dimensions of sustainability for decision making. The proposed model
is optimized with mixed integer linear programming. It is applied to a sample data
set and sensitivity analysis is done with crucial decision variables to reveal the
model limitations. The study ends by presenting future directions and giving some helpful recommendations for other researchers on this topic.
Introduction
Sustainable development is one of the most important issues of the last decade. The concept first emerged, in the narrow sense of economic and environmental compatibility, in the Brundtland Commission Report. Supply chain
Literature Review
Sustainable business is defined by its aim to balance the triple bottom line (TBL, i.e., profit, planet, and people). In sustainable business design, consideration of the interaction between the core business and the external environment, which does not seem to affect the profit of the core business but is required to clarify its deliberate
environmental and social value statement, is decisive (Kondoh et al. 2014).
Sustainability, the consideration of environmental factors and social aspects, in
supply chain management (SCM) has become a significant topic for researchers and
practitioners. The application of operations research methods and related models,
that is, formal modeling for closed-loop SCM and reverse logistics, has been
thoroughly examined in previously published research (Brandenburg et al. 2014).
optimal flow of parts and products in the CLSC network and the optimum number
of trucks hired by facilities in the forward chain of the network.
Chaabane et al. (2011) propose a comprehensive methodology to address sus-
tainable supply chain design problems where carbon emissions and total logistics
costs, including the selection of suppliers and sub-contractors, technology acqui-
sition, and the choice of transportation modes, are considered in the design phase.
The proposed methodology provides decision makers with a multi-objective mixed
integer linear programming model to determine the trade-off between economic and
environmental considerations.
He et al. (2006) review the implementation of strategies of WEEE treatment and
the recovery technologies of WEEE. They present the current status of WEEE and
corresponding responses adopted so far in China. The concept and implementation
of scientific development are critical to the electronics sector, one of the important industrial sectors in China's economy. To achieve this objective, it is
significant to recycle WEEE sufficiently to comply with the regulations regarding
WEEE management and to implement green design and cleaner production con-
cepts within the electronics industry in accordance with the upcoming EU and
Chinese legislation in a proactive manner.
Yang et al. (2008) also study WEEE flow and mitigating measures in China.
They identify the sources and generation of WEEE in China and calculate WEEE
volumes. The results show that recycling capacity must increase if the rising
quantity of domestic WEEE is to be handled properly. Simultaneously, suitable
WEEE treatment will generate large volumes of secondary resources. They describe
the existing WEEE flow at the national level and future challenges and strategies for
WEEE management in China.
Walther and Spengler (2005) analyze the impact of the WEEE directive on reverse logistics in Germany. They expect essential changes in the treatment of electronic products in Germany due to the new legal requirements. On the other hand, the consequences in terms of changes of
organization and material flows of the German treatment system are currently
unknown. Their contribution is to predict relevant changes in this context. This sets the framework for deriving recommendations for political decision makers and actors in the treatment system.
Wu and Barnes (2016) present a new model for partner selection for reverse
logistic centers in green supply chains. The applicability of the model is demon-
strated by means of an empirical application based on data from a Chinese elec-
tronic equipment and instrument manufacturing company.
Zandieh and Chensebli (2016) investigate the reverse logistics network design
problem, including collection and inspection, recovery, and disposal centers, con-
sidering a mixed integer linear programming model. The NP-hardness of this
problem has been proved in many papers, so a novel meta-heuristic solution method is proposed, aimed at minimizing total costs (the fixed opening costs of collection and inspection, recovery, and disposal centers, plus the transportation costs of products between opened centers) using a priority-based encoding representation. Comparison of
outputs from their algorithm and a modified genetic algorithm shows the superiority of this new solution method.
John (2017) develops a mathematical model for the network design of a reverse
supply chain in a multi-product, multi-period environment. The studied algorithm
achieves a reduction of the total cost of emissions.
Mathematical Model
Sets
P Set of product types
Q Set of raw material types
T Set of time periods (years)
M Set of manufacturing facilities
D Set of existing distribution facilities
C Set of existing and potential collection centers
R Set of existing and potential recovery facilities
B Set of customer locations (buyers)
K Set of transportation modes
L Set of all locations
U Set of all nodes
Parameters
D_jpt   demand for product p ∈ P by customer j ∈ B in time t ∈ T
A_qp    amount of product p ∈ P required to produce one unit of product q ∈ Q
G_jpt   end-of-life products p ∈ P generated at customer point j ∈ B in time t ∈ T
F_qp    amount of product p ∈ P generated from one unit of product q ∈ Q
V_p     volume of product p ∈ P
Cap_j   capacity of the facility at node j ∈ U
CCap_j  campaign capacity of node j ∈ D
Cap_k   capacity of transportation mode k ∈ K
CO2_k   amount of CO2 generated per kilometer during transportation using mode k ∈ K
β       required percentage recovery from collected parts at potential and existing recovery centers
α       conservation of mass ratio
dis_ij  distance between nodes i ∈ U and j ∈ U, i ≠ j
S_j     increase in social utility when a decision is taken to open node j ∈ C ∪ R
Costs:
FC_j   fixed cost of opening a new collection center or new recovery center j ∈ C ∪ R
E_pj   unit recovery cost of product p ∈ P in an existing or potential recovery center j ∈ C ∪ R
TC_pk  unit transportation cost per kilometer of product p ∈ P when using transportation mode k ∈ K
PC_qm  unit cost of purchasing raw material q ∈ Q for manufacturing facility m ∈ M
Decision variables:
y_jt = 1 if a decision is made to open the collection center or recovery center j ∈ C ∪ R in time t ∈ T, and 0 otherwise.
w_ijkt = 1 if a decision is made that transportation mode k ∈ K will be used between nodes i, j ∈ U in time t ∈ T, and 0 otherwise.
x_ijpkt ≥ 0 is the amount of product p ∈ P shipped from node i to node j ∈ U by transportation mode k ∈ K in time t ∈ T (the flow variable used in the constraints below).
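A plausible reconstruction of the first objective function, Eq. (1), assembled from the cost parameters defined above (the exact summation structure, and the use of H_mqt for the quantity of raw material q purchased by manufacturer m in period t, are assumptions rather than the authors' exact formulation), is:

$\min \; \sum_{t\in T}\left[\sum_{j\in C\cup R} FC_j\, y_{jt} + \sum_{m\in M}\sum_{q\in Q} PC_{qm}\, H_{mqt} + \sum_{i\in U}\sum_{j\in U}\sum_{p\in P}\sum_{k\in K} TC_{pk}\, dis_{ij}\, x_{ijpkt} + \sum_{j\in C\cup R}\sum_{i\in U}\sum_{p\in P}\sum_{k\in K} E_{pj}\, x_{ijpkt}\right]$   (1)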
$\min \; \sum_{t\in T}\sum_{i\in U}\sum_{j\in U}\sum_{k\in K} dis_{ij}\, w_{ijkt}\, \mathrm{CO2}_k$   (2)
A Closed-Loop Reverse Supply Chain Network Design for Waste … 139
Fig. 1 Forward and reverse flow of the products and raw materials
$\max \; \sum_{j\in R\cup C} S_j\, y_{jt}$   (3)
$0.0192 \sum_{p\in P} x_{ijpkt}\, V_p \le Cap_k\, w_{ijkt} \quad \forall i, j \in U,\ k \in K,\ t \in T$   (4)

$\sum_{k\in K}\sum_{p\in P}\sum_{i\in U\setminus B} x_{ijpkt}\, V_p \le Cap_j\, y_{jt} \quad \forall j \in C \cup R,\ t \in T$   (5)

$\sum_{i\in D} x_{ijpt} \ge D_{jpt} \quad \forall j \in B,\ p \in P,\ t \in T$   (6)

$y_{jt} = 1$   (7)

$\sum_{p\in P}\sum_{k\in K}\sum_{i\in B} x_{ijpkt}\, V_p \le CCap_j \quad \forall j \in D,\ t \in T$   (8)

$\alpha \sum_{i\in U} x_{ijp} = \sum_{m\in U} x_{jmp} \quad \forall j \in U,\ i \ne j,\ j \ne m$   (9)

$\sum_{i\in C}\sum_{p\in P}\sum_{k\in K}\sum_{j\in R} x_{ijpkt} \ge \beta \sum_{j\in C\cup D}\sum_{i\in B}\sum_{p\in P}\sum_{k\in K} x_{ijpkt} \quad \forall t \in T$   (10)

$\alpha \sum_{i\in U}\sum_{k\in K} x_{impkt} = \sum_{j\in U}\sum_{k\in K} x_{mjpkt} \quad \forall p \in P,\ t \in T$   (11)
$\sum_{j\in D\cup C}\sum_{k\in K} x_{ijpkt} = G_{ipt} \quad \forall i \in B,\ p \in P,\ t \in T$   (12)

$\left(\sum_{k\in K}\sum_{i\in R} x_{imqkt} + H_{mqt}\right)\Big/ F_{qp} \ge \sum_{k\in K}\sum_{j\in D} x_{mjpk(t+1)} \quad \forall t \in T,\ p \in P,\ q \in Q,\ m \in M$   (13)

$\sum_{m\in M}\sum_{k\in K} x_{jmqkt}\Big/ A_{pq} \le \sum_{i\in M}\sum_{k\in K} x_{ijpkt} \quad \forall j \in R,\ q \in Q,\ p \in P,\ t \in T$   (14)
The first objective function minimizes the fixed cost of opening a new recovery
and collection center, the cost of raw material purchased from suppliers, the total
transportation cost among nodes, and the cost of recovery. The second objective
function minimizes the total amount of CO2 generated by transportation, while the
third objective function maximizes the increase in social utility when a decision is
made to open a new node. Constraints (4), (5), and (8) are capacity constraints for
transportation modes, recovery and collection centers, and the campaign capacity
for retailers. In constraint (4), 0.0192 is equal to 1/52. Constraints (9) and (11) take
into account the conservation of mass before and after the recovery center and
collection center. Constraint (10) takes into account the collection targets of
recovery centers defined by the government in the regulation. Constraints (13) and
(14) define the relation between the amount of raw materials and the amount of
products. The last two constraints provide non-negativity for the decision variables.
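As a toy illustration of the kind of MILP described above (fixed opening costs plus transportation costs under capacity and demand constraints), here is a single-period sketch using the PuLP modeling library instead of the paper's ZIMPL/SCIP toolchain; all data are illustrative assumptions, not the paper's data set:

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

centers, buyers = ["c1", "c2"], ["b1", "b2", "b3"]
fixed = {"c1": 500, "c2": 650}              # hypothetical FC_j
cap = {"c1": 120, "c2": 150}                # hypothetical Cap_j
demand = {"b1": 40, "b2": 70, "b3": 30}     # hypothetical D_j
tc = {(c, b): 2 + i + j for i, c in enumerate(centers)
      for j, b in enumerate(buyers)}        # hypothetical unit transport costs

m = LpProblem("clsc_sketch", LpMinimize)
y = {c: LpVariable(f"y_{c}", cat=LpBinary) for c in centers}
x = {(c, b): LpVariable(f"x_{c}_{b}", lowBound=0) for c in centers for b in buyers}

# Objective: fixed opening costs plus transportation costs
m += lpSum(fixed[c] * y[c] for c in centers) + \
     lpSum(tc[c, b] * x[c, b] for c in centers for b in buyers)
for b in buyers:
    m += lpSum(x[c, b] for c in centers) >= demand[b]    # demand satisfaction
for c in centers:
    m += lpSum(x[c, b] for b in buyers) <= cap[c] * y[c]  # capacity if opened

m.solve()
print({c: y[c].value() for c in centers})
```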
In this section, the mathematical model is tested with a sample data set on an Intel®
Core™ i7-5500U processor computer with ZIMPL and SCIP solver software.
Table 1 shows all manufacturing facilities, distribution centers, buyer points,
existing and potential collection centers and recovery facilities, and the capacities
and fixed costs per year of potential collection centers and recovery facilities.
Table 2 shows the product and raw material types and their unit volumes, and
Table 3 lists the unit cost of raw materials from the supplier.
Table 4 shows the first year’s demand and the number of end-of-life products
that the customers have on hand. The demand and number of end-of-life products
are assumed to increase by 20% every year to show the applicability of the
multi-period aspect of the mathematical model.
Sensitivity Analysis
After the application of the model, sensitivity analysis is carried out between
increased demand and recovery centers and increased demand and collection
centers. The aggregate demand (the demand of all buyers) is 48,100 in the first year and increases by 20% every year over eight years. Figures 2 and 3 show the required
number of recovery centers and collection centers.
[Fig. 2: demand vs. recovery centers; aggregate demand and the required number of recovery centers (RC) over years 1–8]
[Fig. 3: aggregate demand and the required number of collection centers (CC) over years 1–8]
References
Keywords ANOVA · Big data · Data analytics · Neural networks · Recycling · Reverse logistics · WEEE
Introduction
Product recovery has gained considerable attention within the context of sustain-
ability. Also governmental regulations and customer perspectives on environmental
issues have motivated the organization of product recovery systems in companies.
The first legislation on environmentally conscious manufacturing (ECM) drew the
attention of both researchers and practitioners at the beginning of the 1990s. Recent
governmental regulations in Turkey have also set out collection targets for electrical
and electronic equipment (EEE) manufacturers as well as defining the formation of
product recycling and remanufacturing procedures. Table 1 gives collection targets
for EEE manufacturers in proportion to the total product produced in five cate-
gories. In this context, manufacturers are working toward establishing reverse
logistics networks, while some of them have already done so.
Governmental regulations also oblige manufacturers to report the data of all
operations to the Ministry of Environment and Urbanization. Therefore, collection
of the data has become a very critical issue for reporting and also a very good
resource for gaining remarkable inferences for manufacturing and logistics opera-
tions as well as managerial and marketing perspectives. The vast availability of
data, on the other hand, has stimulated researchers to find more effective seg-
mentation tools in order to discover more useful information about their markets
and customers due to the inefficient performance of traditional statistical techniques
(or statistics-oriented segmentation tools) when handling such voluminous data
(Sarvari et al. 2016). For this reason, data mining has been seen as a solution to this
problem. In fact, big data has attracted a great deal of attention because it provides
the ability to derive patterns, increase profit margins, find potential markets, and
carry out various predictions for the service and manufacturing sectors (LaValle
et al. 2011). In supply chain management and logistics, Wang et al. (2016)
reviewed big data analytics by investigating research and applications. Logistics
data are generated from different sources in distribution networks such as
Table 1 WEEE collection targets according to 2012 regulations (Ministry of Environment and
Urbanization, Turkey 2012, Regulation No. 28300)
Collection category | 2013 (%) | 2014 (%) | 2015 (%) | 2016 (%) | 2017 (%) | 2018 (%)
1. Refrigerators/coolers/air-conditioners | 1.25 | 2.25 | 4.25 | 8.50 | 8.50 | 17.00
2. Large house appliances | 2.50 | 3.75 | 8.00 | 16.00 | 16.00 | 32.50
3. Televisions and monitors | 1.50 | 2.50 | 5.50 | 11.00 | 11.00 | 21.50
4. IT and telecommunications and consumer equipment | 1.25 | 2.00 | 4.00 | 8.00 | 8.00 | 16.00
5. Small household appliances, toys, and electrical and electronic tools | 0.75 | 1.50 | 2.75 | 5.50 | 5.50 | 11.00
[Figure: flow of new products and WEEE among the vendor, recycling facility, 3PL, service points, and customers]
Research Framework
Big Data
The term “big data” has become popular recently; it refers to massive datasets with
a large structure that are hard to handle using conventional database management
systems and traditional data-processing tools (Akoka et al. 2017). “Big Data rep-
resents the Information assets characterized by such a High Volume, Velocity and
Variety to require specific Technology and Analytical Methods for its transfor-
mation into Value” (De Mauro et al. 2015). In the context of waste white appli-
ances, this involves a number of applications that can be expected to benefit from
large-scale capture and analysis of data from these WEEEs.
The CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology is an
industry-proven way to guide data-mining efforts that provides a structured
approach to planning a data-mining project. This methodology consists of six
phases that cover the full data-mining process.
Business understanding. In this phase, business objectives are determined, the
situation is assessed, data-mining goals are determined, and a project plan is
produced.
Data understanding. The second stage addresses the acquisition of data resources
and understanding the characteristics of those resources. It comprises the initial data
collection, data description, data exploration, and data quality verification.
Data preparation. This includes selecting, cleaning, constructing, integrating, and
formatting data.
Modeling. In this part, sophisticated analysis methods are used to obtain infor-
mation from data. Modeling includes selecting modeling techniques, generating test
designs, and building and assessing models.
Evaluation. After the model has been chosen, data-mining results can be evaluated
to achieve the business objectives. This phase includes evaluating the results,
reviewing the data-mining process, and determining the next steps.
Deployment. In this stage, the evaluation results are taken and new knowledge is
integrated into the everyday business process to solve the original business prob-
lem. Elements of this phase include plan deployment, monitoring and maintenance,
producing a final report, and reviewing the project.
Neural Networks
Neural networks take biological systems as a model and aim to simulate their
behavior. Neural networks have been used for prediction purposes for both clas-
sification and regression of continuous target attributes (Tobergte and Curtis 2013).
A neural network consists of nodes and arcs. Nodes represent neurons in the
biological analogy and arcs correspond to dendrites and synapses. Each arc is
related to a weight, whilst each node is defined by an activation function. The
weights of the arcs adjust the values received as inputs by the nodes along the
incoming arcs. The neural network learns through being trained. It makes an
adjustment whenever it makes an incorrect prediction. The learning process occurs
by examining individual records, generating a prediction for each record, and
making adjustments to the weights (Fig. 2).
Application
The initial dataset used in this study is obtained from a database of the recycling
system of a white appliance manufacturer and consists of approximately half a
million cases of collected WEEE. The data include WEEE from collection categories 1 and 2 (see
Table 1). To be more precise, five groups of white appliances, namely refrigerators,
washing machines, drying machines, dishwashers, and ovens, are included in the
data. Data analysis was performed using the statistics software IBM SPSS Modeler
and IBM SPSS 23 (Table 2).
In the model, we wanted to focus on the predictor fields that matter most and
least. The dependent variable was the product group and the independent variables
were the same product, campaign, lifespan, region, and transaction hour.
A maximum training time criterion was considered as the stopping rule. In addition,
the dataset was divided into training and test groups. The training data comprised
80% of the whole dataset and the remaining 20% were used as the test data.
According to the results, being the same product was the most important predictor
for our model (0.41). After that, campaign (0.25), lifespan (0.21), region (0.08), and
transaction hour (0.04) were the other predictors, respectively (Fig. 3).
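A minimal sketch of a comparable workflow (an 80/20 train/test split, a neural network classifier, and predictor-importance scores), assuming scikit-learn in place of IBM SPSS Modeler and synthetic stand-in data rather than the study's records:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
predictors = ["same product", "campaign", "lifespan", "region", "transaction hour"]
X = rng.random((1000, len(predictors)))   # stand-ins for the five predictors
y = rng.integers(0, 5, size=1000)         # five product groups

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500).fit(X_tr, y_tr)

# Permutation importance as a stand-in for SPSS Modeler's predictor importance
imp = permutation_importance(net, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(predictors, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```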
Research Question
The campaign is found to be the second most important predictor for the product
groups. Therefore, we wanted to see whether a relationship between the number of
waste white appliances collected and the campaign period exists. The transaction dates are considered to measure the effect of the campaign period, since all cases are recorded with their transaction times (Table 3).
H0: ρ = 0
HA: ρ ≠ 0
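A minimal sketch of this test (SciPy's pearsonr is assumed here in place of SPSS, and the data are synthetic stand-ins for the study's records):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
days = np.arange(365)                                  # transaction-date index
campaign = ((days % 90) > 60).astype(float)            # hypothetical campaign windows
collected = 5 + 10 * campaign + rng.normal(0, 2, days.size)  # synthetic daily collections

r, p = pearsonr(campaign, collected)
print(f"r = {r:.3f}, p = {p:.3g}")  # reject H0 (rho = 0) when p < 0.05
```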
We can conclude that the result of the Pearson correlation test indicates a quite
strong positive linear relationship between the transaction date and the campaign
period for waste white appliances. This means that customers tend to deliver their waste white appliances especially during the campaign period. On
the other hand, when we look at the number of waste white appliances collected
during a three-year period, we can clearly see from Fig. 4 that the increases at the
end of each year indicate a seasonal effect (Fig. 5).
Meanwhile it is also important to see whether there are differences in customer
behavior by region. Therefore, firstly we questioned whether brand preferences
differ between regions. Afterwards we also questioned whether the lifespans of
Fig. 5 Number of waste white appliances collected monthly (January 2014 to November 2016)
white appliances differ between regions. The following variance analyses were used
to test our hypotheses regarding these questions.
H0: On average, brands of new white appliances do not differ between regions.
HA: On average, brands of new white appliances differ between regions.
According to the results of the F-test carried out with a 95% confidence interval,
the significance value for the brand of new product was found to be
p = 0.000 < 0.05. With regard to the brand of new product, hypothesis HA is
accepted (Table 4). In other words, brand preference differs significantly by region.
Also, the brand preference table (Table 5) shows that preference for upper-mass brands increases especially in the west of Turkey, whereas lower-mass brands are preferred more by customers in the East Anatolia region.
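A minimal sketch of the one-way ANOVA used for these regional comparisons (SciPy is assumed in place of SPSS; the three groups are synthetic stand-ins for per-region samples):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# Hypothetical lifespan samples (years) for three regions
regions = [rng.normal(mean, 1.0, size=100) for mean in (9.0, 9.5, 10.5)]

F, p = f_oneway(*regions)
print(f"F = {F:.2f}, p = {p:.3g}")  # H0 (no regional difference) rejected if p < 0.05
```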
H0: On average, the lifespans of waste white appliances do not differ between
regions.
HA: On average, the lifespans of waste white appliances differ between regions.
According to the results of the F-test for the lifespan of waste white appliances
with a 95% confidence interval, the significance value was found to be
p = 0.000 < 0.05. With regard to the lifespan of waste white appliances, hypothesis
HA is accepted (Table 6). In other words, the lifespans of waste white appliances
differ significantly between regions. Also, Table 7 shows the average lifespan of
white appliances. It can be seen that white appliances have the longest lifespans in
the Marmara and Central Anatolia regions.
This study considers waste white appliances, a type of WEEE that is a cause of
great concern all over the world. In order to understand the big data of reverse
logistics operations, three-year product records were analyzed using different tools
in SPSS. The main purpose of the work was to understand customer behavior
according to different variables. Also, the results can lead the way for companies in
a strategic manner. Initially, we sought to identify the most important predictors for the product groups. Predictor importance results showed that being the same product
was the most important predictor. This indicates that customers mainly prefer to
buy the same brand again when they want to buy a new white appliance. This also
indicates customer loyalty, given that the manufacturer is one of the most dominant brands in the Turkish market. The second most important predictor for the product groups was
campaigns. The Pearson correlation test results showed a strong positive linear
relationship between campaign period and collected waste white appliances.
Lifespan and region were the third and fourth most important predictors for the
product groups, respectively. Therefore, we questioned whether there were differ-
ences in lifespan and choice of brand between regions. The results of both analyses
indicated that significant differences exist. Also, the tables of lifespan and brand
preferences show differences between regions. These results give a good indication
of which regions prefer lower-mass or upper-mass brands. Besides, the lifespan of
products can be a good indicator for forecasting sales.
With regard to future study, we would like to extend our research by rule mining
since association rules can help companies to make strategic decisions on reverse
logistics or marketing. Besides, demand for waste products has increased by
approximately 10%, and seasonal effects exist. Thus forecasting future demand can
be beneficial as well.
References
Akoka J, Comyn-Wattiau I, Laoufi N (2017) Research on big data—a systematic mapping study.
Comput Stand Interfaces
De Mauro A, Greco M, Grimaldi M (2015) What is big data? A consensual definition and a review
of key research topics. In: Giannakopoulos G, Sakas DP, Kyriaki-Manessi D (eds) AIP
conference proceedings, vol 1644, no 1. AIP, pp 97–104
Jain ADS, Mehta I, Mitra J, Agrawal S (2017) Application of big data in supply chain
management. Mater Today Proc 4(2):1106–1115
LaValle S, Lesser E, Shockley R, Hopkins MS, Kruschwitz N (2011) Big data, analytics and the
path from insights to value. MIT Sloan Manag Rev 52(2):21
Ministry of Environment and Urbanization (2012) Regulation No: 28300 Regulatory control of
waste electric and electronic equipment. Off J Turkish Repub
Muhtaroglu FCP, Demir S, Obali M, Girgin C (2013) Business model canvas perspective on big
data applications. In: 2013 IEEE International Conference on Big Data, IEEE, pp 32–37
Najafi M, Eshghi K, Dullaert W (2013) A multi-objective robust optimization model for logistics
planning in the earthquake response phase. Transp Res Part E: Logist Transp Rev 49(1):
217–249
Sarvari PA, Ustundag A, Takci H, Takci H (2016) Performance evaluation of different customer
segmentation approaches based on RFM and demographics analysis. Kybernetes 45(7):
1129–1157
Tobergte DR, Curtis S (2013) Business intelligence. J Chem Inf Model 53. http://doi.org/10.1017/
CBO9781107415324.004
Wamba SF, Akter S, Edwards A, Chopin G, Gnanzou D (2015) How ‘big data’ can make big
impact: findings from a systematic review and a longitudinal case study. Int J Prod Econ
165:234–246
Wang G, Gunasekaran A, Ngai EW, Papadopoulos T (2016) Big data analytics in logistics and
supply chain management: certain investigations for research and applications. Int J Prod Econ
176:98–110
Application of Q-R Policy for Non-smooth
Demand in the Aviation Industry
Introduction
Irregular demand patterns make demand forecasting challenging and forecast errors
can lead to substantial costs because of unfulfilled demand or obsolescent stock.
A common problem in the case of irregular demand patterns is the need to forecast
demand with the highest possible degree of accuracy and to set the inventory policy
parameters based on that information. The accuracy of forecasting methods is
closely related to the characteristic of demand data (Boylan et al. 2008). The need to
produce more accurate time series forecasts remains an issue in both conventional
and soft computing techniques; therefore, innovative methods have been developed
in the literature for intermittent demand. Exponential smoothing methods and
variations are often used for smooth demand patterns as well as to forecast spare
parts requirements (Snyder et al. 2002). However, the variability and uncertain occurrence of demand for these parts raise challenges when traditional forecasting methods are used. Exponential smoothing places some weight on the most recent data regardless of whether there is zero or nonzero demand. As such, it
underestimates the size of the demand when it occurs and overestimates the
long-term average demand. Consequently, biased forecasting methods cause
unreasonably high stocks.
Typical high-performance companies such as Turkish Airlines tend to develop robust demand forecasting techniques and processes, leading to smaller inventories
and better customer satisfaction. There is scope to increase the performance of
inventory planning systems, and modifications are required for the interaction
between forecasting and stock control in terms of their effects on system
performance.
In the literature, non-smooth demand data are categorized into three types:
erratic, lumpy, and intermittent. When demand data contain a large percentage of
zero values with random nonzero demand data with small variation, the demand is
referred to as intermittent. If the variability of demand size is high but there are only
a few zero values, it is called erratic demand. If both the variability of demand size
and the time periods between two successive nonzero demands are high, it is called
lumpy demand. The demand data type is smooth when both variability and time
periods between two successive nonzero demands are low.
The categorization scheme is based on the characteristics of demand data that are
derived from two parameters: the average inter-demand interval (ADI) and the
squared coefficient of variation (CV2). ADI is defined as the average number of
time periods between two successive demands, which indicates the intermittence of
demand,
$\mathrm{ADI} = \frac{\sum_{i=1}^{N-1} t_i}{N-1}$   (1)
where N indicates the number of periods with nonzero demand and ti is the interval
between two consecutive demands. CV2 is defined as the ratio of the variance of the
demand data divided by the square of average demand, which standardizes the
variability of demand.
$CV^2 = \frac{\sum_{i=1}^{n} \left(D_i - \bar{D}\right)^2}{(n-1)\,\bar{D}^2}$   (2)
where n is the number of periods, and $D_i$ and $\bar{D}$ are the actual demand in period i
and average demand, respectively. Cut-off values for Syntetos and Boylan’s cate-
gorization scheme are given in Fig. 1 (Syntetos and Boylan 2005a). The cut-off
values are ADI = 1.32 and CV2 = 0.49.
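As a minimal sketch, Eqs. (1) and (2) and the cut-off values translate directly into a classifier (the demand series below is an illustrative stand-in):

```python
import numpy as np

def categorize(demand, adi_cut=1.32, cv2_cut=0.49):
    """Classify a demand series using ADI (Eq. 1) and CV^2 (Eq. 2)."""
    demand = np.asarray(demand, dtype=float)
    nonzero = np.flatnonzero(demand)
    # ADI: mean interval between consecutive nonzero demands
    adi = np.diff(nonzero).mean() if nonzero.size > 1 else float("inf")
    # CV^2 over all periods, as Eq. (2) is written
    cv2 = demand.var(ddof=1) / demand.mean() ** 2
    if adi <= adi_cut:
        return "smooth" if cv2 <= cv2_cut else "erratic"
    return "intermittent" if cv2 <= cv2_cut else "lumpy"

print(categorize([0, 3, 0, 0, 4, 0, 0, 0, 5, 0, 3, 0]))  # -> lumpy
```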
Due to excessive stocks and low customer service levels, lumpy demand pre-
sents the biggest challenge with regard to spare parts in forecasting and inventory
management (Altay and Litteral 2011).
As far as intermittent demand forecasting is concerned, the consequences of employing specific estimators are rarely assessed in terms of stock control. Only a limited number of researchers have differentiated forecast accuracy from the stock control performance of the estimators used (Eaves and Kingsman 2004; Strijbosch et al. 2000).
Recent empirical research on the performance of various intermittent demand
forecasting approaches was conducted by Willemain et al. (2004) and Syntetos and
Boylan (2005b). Thus, stock-holding cost and service level measures are of utmost
importance in evaluating the performance of an inventory management system.
Since Croston's inspirational work in the area of forecasting for intermittent demand (Croston 1972), several studies have been conducted on the implications of forecasting for inventory management, as these items comprise a substantial portion of the inventory population of spare parts (Porras and Dekker 2008).
Forecasts are used to determine inventory control parameters and to compare the
average inventory or service levels (Syntetos and Boylan 2008). This type of
Croston’s Method
Croston’s method finds the forecast of the demand for the period t as follows:
$F(t) = \frac{Y(t)}{P(t)}$   (3)

where Y(t) is the smoothed estimate of the nonzero demand size and P(t) is the smoothed estimate of the inter-demand interval.
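A minimal sketch of Croston's procedure; the update rules below are the standard ones from Croston (1972) and are an assumption here, since only the final ratio of Eq. (3) is shown:

```python
def croston(demand, alpha=0.2):
    """Croston forecast: smoothed nonzero demand size Y over smoothed
    inter-demand interval P, both updated only when demand occurs."""
    Y = P = None
    q = 1  # periods elapsed since the last nonzero demand
    for d in demand:
        if d > 0:
            if Y is None:              # initialize with the first observation
                Y, P = float(d), float(q)
            else:
                Y = alpha * d + (1 - alpha) * Y
                P = alpha * q + (1 - alpha) * P
            q = 1
        else:
            q += 1
    return Y / P if Y is not None else 0.0

print(croston([0, 3, 0, 0, 4, 0, 0, 0, 5, 0, 3, 0]))
```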
Syntetos’ Method
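Assuming the standard Syntetos–Boylan approximation (Syntetos and Boylan 2005b), the estimator deflates Croston's forecast to correct its positive bias:

$F(t) = \left(1 - \frac{\alpha}{2}\right)\frac{Y(t)}{P(t)}$

where α is the smoothing parameter.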
[Figure: on-hand inventory over time under the (Q, R) policy, marking where an order is placed, the lead time, and where the order arrives]
Let “D” be the observed demand over lead time. The expected number of
shortages occurring in one cycle is:
$n(R) = E\left(\max(D - R,\, 0)\right) = \int_{R}^{\infty} (x - R)\, f(x)\, dx$   (7)
The expected number of stock-outs incurred per unit of time (per year) is:
$\frac{n(R)}{T} = \lambda\, n(R)/Q$   (8)
$\text{Stock-out cost} = p\,\lambda\, n(R)/Q$   (9)
G(Q, R) is the total expected cost of holding, ordering, and stock-outs:

$G(Q, R) = h\left(\frac{Q}{2} + R - \lambda s\right) + \frac{K\lambda}{Q} + p\,\lambda\, n(R)/Q$   (10)

where λ is the demand rate, s is the lead time, h is the unit holding cost, K is the ordering cost, and p is the unit penalty cost.
In order to minimize G(Q, R) for each item in stock, the partial derivatives of the cost function with respect to Q and R are set equal to zero to satisfy the necessary conditions for a minimum:

$\frac{\partial G(Q, R)}{\partial Q} = 0, \qquad \frac{\partial G(Q, R)}{\partial R} = 0$   (11)
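In the classical treatment of the (Q, R) model, these first-order conditions yield the interchange equations, presumably the Eqs. (12) and (13) referred to below:

$Q = \sqrt{\frac{2\lambda\left(K + p\, n(R)\right)}{h}}$   (12)

$1 - F(R) = \frac{Qh}{p\lambda}$   (13)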
$Q_0 = \mathrm{EOQ} = \sqrt{\frac{2\lambda K}{h}}$   (14)
If the demand during the lead time is assumed to be normally distributed with
mean µ and standard deviation r, then the value of R0 can be found from Eq. (13)
by using a Z table (cumulative standard normal probability table).
$R = \mu + z\sigma$   (15)
Once R has been obtained, n(R) can be calculated by using the standardized loss
function L(z) as follows:
$n(R) = \sigma\, L\!\left(\frac{R - \mu}{\sigma}\right) = \sigma\, L(z)$   (16)
where L(z) is
$L(z) = \int_{z}^{\infty} (t - z)\,\phi(t)\, dt = \int_{z}^{\infty} t\,\phi(t)\, dt - z\left(1 - \Phi(z)\right) = \phi(z) - z\left(1 - \Phi(z)\right)$   (17)
φ(z) is the standard normal probability density function and Φ(z) is the standard normal cumulative distribution function.
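A minimal sketch of the iterative computation of Q and R described by Eqs. (14)–(17); it relies on the classical interchange equations assumed above as Eqs. (12) and (13), and all numeric inputs are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def loss(z, N=NormalDist()):
    """Standardized normal loss: L(z) = phi(z) - z*(1 - Phi(z)), Eq. (17)."""
    return N.pdf(z) - z * (1.0 - N.cdf(z))

def qr_policy(lam, K, h, p, mu, sigma, tol=1e-6, max_iter=100):
    """Iterate between Q and R, starting from Q0 = EOQ, Eq. (14)."""
    N = NormalDist()
    Q = sqrt(2.0 * lam * K / h)
    R = mu
    for _ in range(max_iter):
        ratio = min(Q * h / (p * lam), 0.999999)     # keep 1 - F(R) inside (0, 1)
        z = N.inv_cdf(1.0 - ratio)                   # R from the assumed Eq. (13)
        R = mu + z * sigma                           # Eq. (15)
        nR = sigma * loss(z)                         # Eq. (16)
        Q_new = sqrt(2.0 * lam * (K + p * nR) / h)   # assumed Eq. (12)
        if abs(Q_new - Q) < tol:
            break
        Q = Q_new
    return Q, R

print(qr_policy(lam=12.0, K=50.0, h=4.0, p=100.0, mu=1.0, sigma=0.8))
```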
Since there are multiple items in stock, a performance measure that collectively evaluates the performance of the forecasting methods is needed. Regardless of the variability of the items in stock, the geometric mean (across items) of the expected costs (per unit time) may be employed as follows:
$\mathrm{GMEIC} = \left(\prod_{j=1}^{N}\left[h_j\left(\frac{Q_j}{2} + R_j - \lambda_j s_j\right) + \frac{K_j \lambda_j}{Q_j} + p_j \lambda_j\, n(R_j)/Q_j\right]\right)^{1/N}$   (18)
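A minimal sketch of Eq. (18), assuming the per-item expected costs G_j(Q_j, R_j) have already been computed (the values below are illustrative):

```python
from math import exp, log

def gmeic(expected_costs):
    """Geometric mean of the per-item expected inventory costs G_j(Q_j, R_j)."""
    return exp(sum(log(c) for c in expected_costs) / len(expected_costs))

print(gmeic([120.0, 85.5, 240.2]))
```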
Methodology
The following steps are taken when determining the best strategy in order to lower
the inventory costs:
Case Study
above 0.49. The data cover 106 monthly periods from 2008 to 2013. Descriptive
statistics of the non-smooth demand dataset are given in Table 1.
Examples of non-smooth demand data from each demand category are given in
Fig. 4. The data, taken from the MRO inventory of service parts of Turkish Airlines, are given in monthly form for the last two years.
In our case, 632 demand data from the Turkish Airlines Technic MRO inventory
are selected for the purpose of classification (Table 2).
Fig. 4 Examples of demand data from each category (intermittent, lumpy, erratic, and smooth) over 24 monthly periods
In this section, the inventory cost performances of the forecasting methods are
investigated. Cost-based inventory performance results are given with the appli-
cation of (Q, R) policy based on the exponential smoothing, Croston, Syntetos, and
naive forecasting methods. The following assumptions are made:
1. The lead time is fixed as 1 month.
2. Demand is random and the system is continuously reviewed.
3. The penalty cost is taken as five times the part price.
4. The holding cost is taken as 0.2* part price.
5. The ordering cost is taken from the Turkish Airlines Technic MRO inventory
cost system.
6. The demand during the lead time is a continuous random variable D with a probability density function. Let $\mu = E(D)$ and $\sigma = \sqrt{\mathrm{var}(D)}$ be the mean and standard deviation of the demand during the lead time.
The iterative methodology given in Section “Methodology” is applied to obtain
results. The inventory costs and GMEIC are the performance measures used to
compare the intermittent demand forecasting methods.
Cost Results
Comparative results are given in Table 3 for the demand data for 632 spare parts.
Of the 631 data, 278 had intermittent characteristics. Of these 278 intermittent data,
119 (43%) are best forecast by Syntetos' method. Even with the smooth demand data, the Syntetos method performs better than the other methods 44% of the time. The smoothing parameter (α) is selected as 0.2 for the exponential
smoothing, Croston, and Syntetos methods.
Looking at the best performance is insufficient to help us compare the perfor-
mance of non-winning methods. In this case, the GMEIC can be employed.
The GMEIC results obtained from 632 different items are given in Table 4.
Conclusion
References
Altay N, Litteral LA (2011) Service parts management: demand forecasting and inventory control, 1st edn. Springer, London
Boylan JE, Syntetos AA, Karakostas GC (2008) Classification for forecasting and stock control: a
case study. J Oper Res Soc 59:473–481
Croston JF (1972) Forecasting and stock control for intermittent demands. Oper Res Q 23:289–
304
Application of Q-R Policy for Non-Smooth Demand in the Aviation … 171
Eaves AHC (2002) Forecasting for the ordering and stock-holding of consumable spare parts. Ph.
D. thesis, University of Lancaster, UK
Eaves AHC, Kingsman BG (2004) Forecasting for the ordering and stock-holding of spare parts.
J Oper Res Soc 50:431–437
Ghobbar AA, Friend CH (2003) Evaluation of forecasting methods for intermittent parts demand
in the field of aviation: a predictive model. Comput Oper Res 30:2097–2114
Porras E, Dekker R (2008) An inventory control system for spare parts at a refinery: An empirical
comparison of different re-order point methods. Eur J Oper Res 184:101–132
Regattieri A, Gamberi M, Gamberini R, Manzini R (2005) Managing lumpy demand for aircraft
spare parts. J Air Transp Manag 11(6):426
Snyder R, Koehler A, Ord J (2002) Forecasting for inventory control with exponential smoothing.
J Forecast 18:5–18
Strijbosch LWG, Heuts RMJ, Schoot EHM (2000) A combined forecast-inventory control
procedure for spare parts. J Oper Res Soc 51:1184–1192
Syntetos AA, Boylan JE (2001) On the bias of intermittent demand estimates. Int J Prod Econ
71:457–466
Syntetos AA, Boylan JE (2005a) On the categorization of demand patterns. J Oper Res Soc
56:495–503
Syntetos AA, Boylan JE (2005b) The accuracy of intermittent demand estimates. Int J Forecast
21:303–314
Syntetos AA, Boylan JE (2008) Forecasting for inventory management of service parts (Chap. 20). In: Kobbacy KAH, Murthy DNP (eds) Complex system maintenance handbook. Springer, London
Willemain TR, Smart CN, Shockor JH, DeSautels PA (1994) Forecasting intermittent demand in
manufacturing: a comparative evaluation of Croston’s method. Int J Forecast 10:529–538
Willemain TR, Smart CN, Schwarz HF (2004) A new approach to forecasting intermittent demand
for service parts inventories. Int J Forecast 20:375–387
A Closed-Loop Sustainable Supply Chain
Network Design with System Dynamics
for Waste Electrical and Electronic
Equipment
Introduction
The aim of this study is to put forward a dynamic and sustainable supply chain
network design to minimize waste electrical and electronic equipment (WEEE), which
is the one of the most crucial sectors in terms of waste management. The study is
based on the multi-period, multi-product reverse logistics concept. There are many
studies in the literature on closed-loop supply chain network design, but only a few
take into account the dynamic perspective with regard to the important variables of
the supply chain. Therefore, this study contributes to filling the gap in the literature
on the mathematical closed-loop reverse supply chain network design model from a
system dynamics (SD) perspective, rather than on the deterministic and static
models proposed in the literature and to create a general dynamic framework for
that chain.
This study includes six parts: the introduction, literature review, analysis of WEEE
management in Turkey and other countries, the SD model and its application with
illustrative data, the results, and the conclusion together with recommendations for
future study. In the literature review, supply chain management (SCM), the concept
of sustainability, dimensions of sustainability, and sustainable supply chain man-
agement (SSCM) will be defined first, and then quantitative studies related to the
sustainability concept will be discussed. In addition, sustainability will be examined
and compared across different sectors. The definitions of closed-loop supply chain
and reverse logistics will be included. Then, studies that provide examples of
closed-loop supply chain and reverse logistics network design models with
multi-product, multi-period, and/or multi-echelon concepts will be explained.
In the analyses of WEEE management, information from Turkey’s waste
management directive will be presented to set specific parameter values that are
used in the model of WEEE management. Moreover, the targets for collection and
recovery by year will be investigated in this section.
In the section on the dynamic model, the parameters, costs, and decision vari-
ables will be defined first. Then, the general framework of the model will be
explained. The aim of study will be to minimize and visualize the total investment
with the first costs of new collection centers and new recovery centers, costs of
collection and recovery of products, and the cost of acquisition of raw material from
suppliers and third parties by year. After that, the data will be explained for the
application of the model.
The results of the illustrative data and its comment will be analyzed in the results
and discussion section.
Finally, the conclusion and recommendations section will contain some critical
points to offer helpful information for further studies.
Literature Review
the product remanufacturability design will incur a higher production cost for the
new product but will reap the benefit by having a lower production cost for the
remanufactured product in the second period. Consumers are assumed to be con-
scious of the price and quality of the product and therefore discount their will-
ingness to pay for the remanufactured product. In spite of being proactive in
product remanufacturability design, the market share of the new product decreases
for competitors who are reactive in choosing the design, and the former product is
more profitable due to the capture of additional market share by the refurbished
product. Also, they find that if all customers have a higher willingness to pay for the
refurbished product, being proactive is less promising.
Garg et al. (2015) investigate a multi-criteria optimization approach to manage
environmental issues in CLSC-ND. They formulate a bi-objective non-linear pro-
gramming problem, and in order to solve it they propose an interactive
multi-objective programming approach algorithm. Their model determines the
optimal flow of parts and products in the CLSC network and the optimum number
of trucks hired by facilities in the forward chain of the network. They carry out
numerical experimentation with the proposed model to validate its applicability
with the help of data from a real-life case study. The case presented in the paper is
based on a geyser manufacturer, and the application of the model to this case
provides them with the underlying tradeoffs between the two objectives. The model
also results in the very interesting fact that with the implication of the extended
supply chain, a firm can create a green image for its product, which eventually
results in an increase in demand while significantly reducing the use of trans-
portation in both directions.
He et al. (2006) review the implementation of strategies for WEEE treatment and
the technologies for recovery of WEEE. They present the current status of WEEE and
corresponding responses adopted so far in China. The concept and implementation
of scientific development are critical to the electronics sector as one of the important
industrial sectors in China’s economy. To achieve this objective, it is significant to
recycle WEEE sufficiently to comply with the regulations regarding WEEE man-
agement and to implement green design and cleaner production concepts within the
electronics industry in line with the upcoming EU and Chinese legislation in a
proactive manner.
Yang et al. (2008) also study WEEE flow and mitigating measures in China.
They identify the sources and generation of WEEE in China and calculate WEEE
volumes. The results show that recycling capacity must increase if the rising
quantity of domestic WEEE is to be handled properly. Simultaneously, suitable
WEEE treatment will generate large volumes of secondary resources. They describe
the existing WEEE flow at the national level and future challenges and strategies for
WEEE management in China.
Walther and Spengler (2005) analyze the impact of the WEEE directive on
reverse logistics in Germany. They expect essential changes in the treatment of electronic products in Germany due to the new legal requirements. On the other hand, the consequences in terms of changes of
organization and material flows of the German treatment system are currently
unknown. Their contribution is to predict relevant changes in this context. This sets the framework for deriving recommendations for political decision makers
and actors in the treatment system.
Forrester (1961) introduced SD in the 1960s as a modeling and simulation
methodology for dynamic management problems. Since then, SD has been applied
to various business policies, strategies, and environmental problems (Sterman
2000). However, according to Dekker et al. (2004), only a few strategic manage-
ment and environmental problems in CLSC have been analyzed and reported in the
literature. Specifically, Spengler and Schröter (2003) present a CLSC using SD.
Georgiadis et al. (2004) present the loops of product reuse that have a major influence.
Van Schaik and Reuter (2004) present an SD model focused on cars showing that
the realization of the legislation targets imposed by the EU depends on the product
design. SD is a powerful methodology for obtaining insights into the problems of
dynamic complexity. Sterman mentioned that “whenever the problem to be solved
is one of choosing the best from among a well-defined set of alternatives, opti-
mization should be considered. If the meaning of best is also well-defined and if the
system to be optimized is relatively static and free of feedback, optimization may
well be the best technique to use”. Bloemhof-Ruwaard et al. (1995) state that the
latter conditions are rarely satisfied for environmental management and supply
chain systems (Georgiadis et al. 2005). The system under study in this paper is dynamic and full of feedback loops, making SD an appropriate modeling and analysis tool.
The first study of WEEE in Turkey was carried out by the Ministry of Environment and Urban Planning with a regulation on the restriction of certain hazardous substances in 2009. The regulation was published in the Official Gazette on May 30, 2009 and entered into force the same year. The purpose of this regulation was to establish guidelines for restricting the use of certain hazardous substances found in electrical and electronic goods, determining the applications to be exempted from this restriction, and recovering or disposing of WEEE in order to protect the environment and human health (Ministry of Environment and Urban Planning 2008).
The Regulation on the Control of Waste Electrical and Electronic Equipment was enacted by the Ministry of Environment and Urban Planning with its publication in the Official Gazette on May 22, 2012 (Ministry of Environment and Urban Planning 2012).
purpose of the regulations was the same as that of the regulation published in 2009.
Companies that had letters of conformity collected 4000 tons of WEEE in 2009,
while they collected only 1818 tons of WEEE in 2006. The household WEEE
collection targets are shown in Table 1.
Table 2 shows the recycling targets and Table 3 shows the recovery targets
according to the categories of types of equipment.
Model Structure
In this study, a sustainable supply chain model for refrigerators is designed with
AnyLogic (Fig. 1). SD is a way to understand the behavior of all the actors in a system over the long term.
At any given moment, the discard rate equals the total stock of goods divided by their lifespan; in other words, products turn into e-waste after a few years. Refrigerators become e-waste after an average lifespan of five years. It is assumed that the
number of households will increase every year and that, as a result, each new household will buy a refrigerator. When production increases due to demand, the total amount of scrap will also increase year by year. For the production facility, compliance with the regulation is mandatory: a specific amount (in kilograms) of e-waste has to be collected. The producer can collect end-of-life products via its retailers or from third-party firms. If the producer wants to collect WEEE from its retailers, it has to carry out a campaign to make customers bring in their end-of-life products when buying a new one,
maybe at a lower price. For this reason, retailer attractiveness is defined, which comes with the campaign rate. As the retailer collects WEEE, it will eventually reach its capacity, so another parameter, namely retailer availability, is used. To calculate the total amount of WEEE collected from the retailer, we multiply the total scrap, the attractiveness, and the availability. In addition, 50% of the remainder, i.e., the difference between the total scrap and the amount collected by the retailer, is collected by third parties.
When we consider the capacity of the collection center, the amount of waste taken in must be the minimum of the collected amount and the capacity. With this formula, if more capacity is needed, the results will show that a capacity increase is essential and that a new collection center may need to be opened. The capacity of the collection center has a wear ratio of 0.0001, so it decreases year by year. There is a unit cost for the collection center, which is fixed at 80 TL/ton. The model is initialized with 1000 units of collected WEEE.
The unit cost of the collection center is defined by dividing the sum of the collection center cost and the product of the retailer cost and the amount collected by the retailer by the aggregate waste amount. The collection center decision (CCDecision) is calculated by multiplying the waste amount by the third-party cost and dividing by the amount recycled. The amount recycled must be at least 50% for the recycling center to be efficient. It is also important to define a "delay" after the collection center decision is made, because of the time needed to build the center.
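A minimal Python sketch of these stock-and-flow updates is given below. It condenses the formulas described above (discard rate, retailer and third-party collection, capacity-limited intake, wear ratio, and unit cost); all parameter values and variable names are illustrative assumptions, not the authors' AnyLogic model.

```python
# Sketch of the collection-side stock-and-flow logic described above.
# Parameter values are illustrative assumptions, not the authors' model.

LIFESPAN_YEARS = 5      # average refrigerator lifespan
WEAR_RATIO = 0.0001     # yearly capacity wear of the collection center
UNIT_COST = 80.0        # collection center unit cost, TL/ton

def simulate(years, households, growth, attractiveness, availability,
             capacity, collected=1000.0):
    """Advance the simplified model one year at a time."""
    for _ in range(years):
        households *= 1 + growth                   # households grow every year
        total_scrap = households / LIFESPAN_YEARS  # discard rate = stock / lifespan
        by_retailer = total_scrap * attractiveness * availability
        by_third_party = 0.5 * (total_scrap - by_retailer)
        # The collection center can take in at most its capacity.
        intake = min(by_retailer + by_third_party, capacity)
        capacity *= 1 - WEAR_RATIO                 # capacity wears year by year
        collected += intake
        yield collected, intake * UNIT_COST

for total, cost in simulate(3, households=1e6, growth=0.02,
                            attractiveness=0.3, availability=0.8,
                            capacity=150000.0):
    print(f"cumulative collected: {total:,.0f}, yearly cost: {cost:,.0f} TL")
```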
If the third party cost changes over time, the producer can change the amount of
waste it collects from the retailer. The most important thing is for the company to
provide the required amount, which is already certain due to the WEEE regulation.
The same loop is designed with the same parameters for the recycling center. At
this time, we have to consider both recycling and collection costs together.
There is a table defining the supplier cost that gives data about quantity dis-
counts. With this cost table, the producer will decide to purchase raw material from
suppliers or to recycle its scrap for use as a raw material. It will compare costs to
enable it to make a decision about this purchasing action.
To represent a pull system, if there is demand, production will be carried out.
Production requires raw material, so the material needed will be bought or recycled.
Simulation Results
The simulation was run without any errors in the AnyLogic software. Sample data were used to test the model, and the parameter values are given in the section on the model structure. Output graphs are used to comment on the capacity of the recycling center and the collection center (Fig. 2).
In this model, the decision to open a new collection or recycling center is made based on capacity. The model yields crucial conclusions regarding two big dilemmas. First, it selects its own collection system rather than working with a third-party
Nowadays, firms are choosing strategies that increase their economic performance as well as their competitiveness in the field of social responsibility. Interest in the effective reuse of resources and/or manufactured products continues to increase in all companies as a result of global climatic changes, population increase, rapid urbanization, and the reduction in the availability of natural resources. Due to customer demand, governmental regulations, and economic returns, manufacturers consider recovery options.
SD is a method that allows a whole system to be monitored over time to see whether or not it is sustainable. This model represents a WEEE collection system for a white goods manufacturer. This study gives an idea for the collection and recycling of e-waste, making it possible to obtain benefits such as reaching
Fig. 3 Time plots for the unit cost of collection and recycling centers
References
Sterman JD (2000) Business dynamics: systems thinking and modeling for a complex world. McGraw-Hill, Boston
Url-1 http://www.un-documents.net/our-common-future.pdf, date retrieved 07.01.2017
Van Schaik A, Reuter MA (2004) The time-varying factors influencing the recycling rate of
products. Resour Conserv Recycl 40(4):301–328
Walther G, Spengler T (2005) Impact of WEEE-directive on reverse logistics in Germany. Int J
Phys Distrib Logist Manag 35(5):337–361
Xu L, Mathiyazhagan K, Govindan K, Haq AN, Ramachandran NV, Ashokkumar A (2013)
Multiple comparative studies of green supply chain management: pressures analysis. Resour
Conserv Recycl 78:26–35
Yang J, Lu B, Xu C (2008) WEEE flow and mitigating measures in China. Waste Manage 28(9):1589–1597
Part II
Engineering and Technology Management
The Relationships Among the Prominent
Indices: HDI-GII-GCI
Abstract Several global indices have been used to classify and analyze the status of countries. Comparisons can be made not only across countries but also across years for each country. In this study, three prominent indices, the Global Competitiveness Index (GCI), the Global Innovation Index (GII), and the Human Development Index (HDI), were investigated to examine the relationships between them by employing the PLS-SEM method. According to the results, HDI has an influence on GII, while GCI is affected by GII. The results also demonstrated that GII has a full mediating effect on the relationship between HDI and GCI. Moreover, the findings indicated that countries should improve their innovativeness by taking human capital into consideration in order to gain competitive advantages.
Introduction
Literature Review
In this part of the paper, the three indices mentioned above will be discussed briefly.
Furthermore, past research regarding the interactions between these indices is
reviewed.
The Human Development Index (HDI) has been presented by the United Nations Development Program (UNDP) since 1990 to assess the level of human development among countries on the basis of composite measurements. The index started with 144 countries, and the most recent index, produced in 2015, included 188 countries from all over the world. To measure human development, three dimensions are taken into consideration: a long and healthy life, knowledge, and a decent standard of living. A long and healthy life is estimated by life expectancy at birth; in other words, the number of years a newborn infant could expect to live if the conditions of age-specific mortality rates remain the same throughout the
Since 1979 the annual Global Competitiveness Report has been presented by The
World Economic Forum to shed light on the factors that countries encounter to
between these indices with mediating relationships being considered. The model is
shown in Fig. 1.
Methodology
In order to explore the interactions in the model, the data were collected from the websites of the institutions that publish the indices. We gathered the latest available data from each index; 2015 was the most recent year available for all of them. Each index covers a different number of countries, depending on the responses to the distributed questionnaires. Economies were included in the data of this study if they were listed in more than two of the indices. After all the eliminations due to these restrictions, 99 economies remained. A different scale is used in each index, which may cause interpretive problems. In order to evaluate the data on the same scale, normalization was conducted for each of the indices. Finally, all the data ranged between 0 and 1.
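Although the paper does not spell out the normalization formula, a standard min-max rescaling that maps each index to the [0, 1] range would look like the following sketch; the scores and column names are hypothetical.

```python
import pandas as pd

# Hypothetical index scores for three economies; values are illustrative only.
df = pd.DataFrame(
    {"HDI": [0.94, 0.76, 0.55], "GII": [63.3, 38.5, 28.1], "GCI": [5.7, 4.3, 3.9]},
    index=["A", "B", "C"],
)

# Min-max normalization: every index is rescaled to lie between 0 and 1.
normalized = (df - df.min()) / (df.max() - df.min())
print(normalized)
```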
The aim of this research is to examine the causal relationships among the HDI, GII, and GCI. To examine the causal relationships among these constructs, simultaneous analysis is required. Partial Least Squares SEM (PLS-SEM), which is a second-generation technique, was chosen for the analyses for the following reasons. Comprising both a structural and a measurement model, PLS-SEM is a nonparametric method (Hair et al. 2011). Moreover, PLS-SEM does not require any distributional assumptions. Hair et al. (2014) suggest using PLS-SEM when the literature is not sufficiently developed. Furthermore, they mention that PLS-SEM is preferable when the main aim is to examine the explanatory power of a structural model. However, PLS-SEM does not have a goodness-of-fit measure to test theory; therefore, confirmation is limited (Hair et al. 2011).
Many researchers from various backgrounds have been using PLS-SEM in their research. Some examples of recent studies using PLS-SEM include Calabrò et al. (2017), Moreira et al. (2017), Pai et al. (2014), Vanalle et al. (2017), and Wong (2013).
Analysis
In this study, the SmartPLS 2.0 software was employed. Structural and measurement models are the elements of PLS-SEM. Additionally, measurement models are divided into two groups: reflective measurement models and formative measurement models. In this study, the measurement model is a reflective measurement model, which is used when the indicators are caused by the constructs. Reflective measurement models have their own requirements for validity and reliability, which include a composite reliability higher than 0.70, indicator loadings higher than 0.70, an average variance extracted (AVE) above 0.50, and discriminant validity as measured by the Fornell-Larcker criterion (Hair et al. 2011). We employed PLS-SEM with a maximum of 300 iterations and mean replacement of missing values. Since each of our constructs was measured by creating a latent variable with one indicator, the required reliability and validity criteria were all fulfilled: the composite reliability, indicator loadings, and discriminant validity of all constructs were equal to 1. Based on the AVE values, the discriminant validity requirement was also satisfied.
In order to test the robustness of the structural model, the R2 values and the significance of the path coefficients were examined. The R2 values were found to be 0.675 for GCI and 0.693 for GII; Hair et al. (2011) suggest that R2 values above 0.50 are moderate. A bootstrapping procedure was conducted with 5000 subsamples and mean replacement to estimate the significance of the relationships, and the findings can be seen in Table 1. The significance of a relationship is supported if the t-statistic is above 1.96 for a two-tailed test (Hair et al. 2014, p. 186); in other words, p values below 0.05 indicate significance.
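The bootstrapping logic can be illustrated with a minimal sketch: resample the data with replacement, re-estimate a path coefficient each time, and compare the resulting t-statistic with 1.96. This is a generic illustration with simulated data, not the SmartPLS implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical normalized scores for two constructs (e.g., HDI -> GII).
x = rng.uniform(0, 1, 99)
y = 0.8 * x + rng.normal(0, 0.1, 99)

def path_coefficient(x, y):
    """Slope of a simple regression of y on x."""
    return np.corrcoef(x, y)[0, 1] * y.std() / x.std()

estimate = path_coefficient(x, y)

# Bootstrap: re-estimate the coefficient on 5000 resamples with replacement.
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, len(x), len(x))
    boot[i] = path_coefficient(x[idx], y[idx])

t_stat = estimate / boot.std(ddof=1)
print(f"path = {estimate:.3f}, t = {t_stat:.2f}, "
      f"significant: {abs(t_stat) > 1.96}")
```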
It was found that the HDI-GII and GII-GCI relationships are supported, while the HDI-GCI relationship is not. To examine the mediating effect of GII, the direct effects between HDI-GCI and HDI-GII-GCI were investigated. Figures 2 and 3 show the models of the direct effects.
The direct effects are significant, as seen in Tables 2 and 3; we can therefore conclude that GII has a full mediating effect on the relationship between HDI and GCI.
References
Husam Arman
Introduction
Innovation has been an important driver for competitive success in many industries,
and it has shown high impact on society if managed strategically (Schilling 2012).
Rapid technological change and the resulting innovations have compelled organizations to improve their agility so as to adapt to and take advantage of new opportunities and to minimize threats. Therefore, success belongs to those organizations that have the capacity not only to adapt to change but also to thrive on it (Morris et al. 2014). For R&D organizations, it is important to rethink the way they plan and manage their research activities considering the speed of
H. Arman (&)
Techno-Economics Division, Kuwait Institute for Scientific Research (KISR),
Kuwait City, Kuwait
e-mail: [email protected]
sectors housed at nine locations. KISR conducts scientific research and performs
technological consultations for governmental and industrial clients in Kuwait.
At KISR, strategic planning has been one of the key functions that has been
practiced since 1978 via formulation of a series of five-year strategic plans. Each of
these five-year plans included a diversified set of goals that were oriented toward
achieving KISR’s goals in solving Kuwait’s current and anticipated challenges.
KISR completed a strategic transformation project in 2010 with the help of Arthur D. Little. The aim of this project was to transform KISR into an R&D Center of Excellence focusing on innovation in support of the State of Kuwait.
resulted in a new vision, mission, and a long-term strategy with a 2030 road
map. The project also resulted in a new organizational structure and improved
internal processes. Therefore, the first five-year strategy of KISR’s 2030 vision was
the 7th strategic plan. In this strategic plan, the research agenda included a large
number of proposed research activities due to the highly positive atmosphere after
the transformation project and the high expectation of hiring new researchers in
addition to the anticipated improved efficiency of optimized support processes.
KISR’s long-term strategy consisted of five strategic thrusts, as shown in Fig. 1. These thrusts were designed to fulfill the new vision by focusing on clients’ needs, collaborating with leading research institutions, building research centers in application-oriented areas, commercializing technologies, and building a culture of achievement and excellence.
The 7th strategic plan made reasonable progress along the five strategic thrusts by expanding stakeholder engagement, in particular with clients, creating a key account management process, signing various MoUs with international research institutes, investing in new research facilities, establishing a division for commercialization, and revising several high-impact management processes such as publication and promotion policies. However, the overall quantified achievement of the 7th strategy was not as expected, based on the self-assessment and the strategy evaluation conducted by the strategic planning team at KISR, which has also utilized
Methodology
The significance of this work is that it highlights the challenge that strategic planning faces in R&D organizations and shows how, if managed with flexibility, it can be useful. This paper proposes practical solutions that can be helpful to practitioners in the field of strategic planning for R&D organizations. The methodology used in this research was empirical and explorative, since there is a need to describe and document the current situation and explain the factors which together cause a certain phenomenon (Yin 2003). The aim was to understand how the strategic planning approach can influence the research agenda of an R&D organization. Qualitative data were mainly used, gathered through observation, interviews, group discussions, and workshops carried out during the case study at the Institute, in addition to the feedback workshops held after completing the strategic planning activity.
planning approach itself and decided to revisit it with the objective of addressing
proactively the aforementioned issues.
It was agreed with top management that the 8th strategic plan would include an honest and complete assessment of how KISR was doing and would accordingly lay out a strategy for closing any gaps, including any modifications to KISR’s portfolio of research programs. The strategic planning approach was designed to steer the research centers to commit their resources, with a high level of confidence, to meeting the priority elements of their strategies, which meant making conscious decisions to stop supporting less important activities while selecting a portfolio of activities across their research programs. The aim was to further secure innovative solutions for key clients, in addition to developing innovations that will have a positive impact and positioning KISR for long-term success.
Moreover, specific steps for executing the proposed strategy were required to
ensure more attention to the factors that may enable or disable the strategy, par-
ticularly with respect to process improvement within the sectors and capability
development in every organizational unit. This perspective of considering the
strategic planning process as a problem-solving strategy was adopted as a philos-
ophy to resolve the current issues/challenges. As Rumelt (2011) stated, good strategy involves investing time in making hard choices to gain focus, identifying obstacles, and working out how to deal with them.
The terminology introduced during the strategic planning process was important in addressing the challenges faced, such as the term solution areas, which each program is required to deliver. The term ‘solution’ conveyed the message that research should result in a tangible output and application for the client, although a solution may be addressed by more than one research project or technical service (an ‘area’). This was a deliberate approach at this specific stage for KISR to focus on meeting key clients’ needs. However, the key function within the strategic planning model is the portfolio evaluation matrix (PEM). The PEM was introduced to influence the research agenda to become more client-focused, to address the critical few, and, most importantly, to produce a balanced portfolio of research activities using a tool that can visually communicate the impact of the various solution areas within each program and at the center level. As a result, the strategic areas at the center level can be identified and, hence, the contribution of each research program.
Aligning the resources spent on R&D activities with the strategic objectives of an organization has been one of the most challenging issues, in particular for technology-based firms. The strategic planning process ideally ensures that the proposed R&D projects serve the market and product strategies. The alignment, if it happens, is usually enforced by embedding it in the evaluation criteria. This alignment criterion is useful in the evaluation process, but it will not necessarily result in a balanced portfolio that meets the strategic objectives, which may pull in different directions. For instance, some objectives concern growth in market share and profits, focusing on cash-cow projects, while others look at blue-sky areas.
Decision-making tools, and in particular the R&D portfolio analyses available in the literature, are not widely used due to the perception held by R&D managers that the models are unnecessarily difficult to understand and use, and that they do not engage practitioners in a collective, creative manner, especially when dealing with models like linear programming. Cooper et al. (1997) provided various practical and simple-to-use bubble chart tools, including the risk-reward matrix, which has proven to be useful and practical. Part of this set is the Impact-Effort Matrix, which has been used in many contexts, including lean and six sigma (Bunce et al. 2008). The concept behind this tool is powerful and can be used as part of the strategic planning process, since it can be utilized to reflect the conceptual meaning of a strategy that addresses two key questions: “Where do you want to go?” (i.e., the ‘Impact’ you want to achieve) and “How do you get there?” (i.e., the ‘Effort’ needed to ensure the ability to execute the strategy and deliver the required results, which was the main challenge at KISR). However, these need to be translated for each organization as per its context, definitions, mission, and strategic objectives.
The generic matrix that we developed is based on the aforementioned concept,
but we used the terms impact and ability. This generic framework can be used to
translate any strategy into a visual and practical decision-aiding tool. The impact reflects the expected contribution of the R&D programs and projects to the predefined strategic objectives of the organization, and these can be grouped based on the desirable portfolio shape of the organization to produce a balanced portfolio of activities.
One of the important lessons learned from the execution of the 7th strategic plan was that the number and magnitude of the projects proposed by the programs far exceeded the organization’s ability to support them in terms of manpower, facilities, equipment, and administration. To correct this problem, the 8th strategic plan needed to focus KISR’s limited resources on those initiatives that would have the largest impact on meeting national challenges, clients’ mission-critical problems, and KISR’s reputation and financial commitments. Each center followed a sequential process to evaluate its existing research programs, determine which programs would be continued, added, or modified within the 8th strategic plan, and identify the specific solution areas that would form the heart of the center’s research activities over the next five years. This process is briefly described in Fig. 2, which shows its main features. The PEM is a critical function in the process, serving three important objectives, as follows:
Fig. 2 Process overview for the development of the center research agenda
Fig. 3 Portfolio evaluation matrix (real example from one of the research centers)
• High impact—Low ability: The solution areas here need special attention from management and a rigorous assessment; all the enabling factors need to be addressed with urgency, including recruitment, procurement, partnerships, consultants, etc. (e.g., P2A4 in Fig. 3).
• Low impact—High ability: The solution areas that fall here should be the ones that the center depends on for generating revenues and expanding its market; if the expected revenues are not high (i.e., the size of the bubble is small), a possible strategy is to divert resources to other solution areas where applicable, or even to use retraining strategies to enter new research areas (e.g., P6A2 and P6A1 in Fig. 3).
• Low impact—Low ability: The solution areas here are not desirable, and they should not be pursued. Therefore, it is important to revisit and reassess these solution areas and possibly abandon them at the planning stage, reallocating the planned resources (e.g., P5A3 in Fig. 3); a simple sketch of this quadrant classification follows the list.
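A minimal sketch of the quadrant logic behind the PEM is given below. The 0.5 thresholds, the scores, and the "pursue" label for the high impact-high ability quadrant (not described in the excerpt above) are assumptions for illustration only.

```python
# Sketch of a portfolio evaluation matrix (PEM) quadrant classifier.
# The 0.5 thresholds and the high-high recommendation are assumptions.

def classify(impact: float, ability: float, threshold: float = 0.5) -> str:
    """Map a solution area's impact/ability scores (0-1) to a PEM quadrant."""
    if impact >= threshold and ability < threshold:
        return "high impact-low ability: address enabling factors urgently"
    if impact < threshold and ability >= threshold:
        return "low impact-high ability: revenue generator, or divert resources"
    if impact < threshold and ability < threshold:
        return "low impact-low ability: reassess and possibly abandon"
    return "high impact-high ability: pursue (assumed label)"

# Hypothetical solution areas, loosely echoing the labels in Fig. 3.
areas = {"P2A4": (0.9, 0.3), "P6A2": (0.3, 0.8), "P5A3": (0.2, 0.2)}
for name, (impact, ability) in areas.items():
    print(f"{name}: {classify(impact, ability)}")
```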
The overall ranking of a research center’s solution areas is not the end of the process; further review, analysis, and iterations are required to ensure that a balanced portfolio of research activities is maintained. This process has eventually helped the institute to focus on the critical few areas and has therefore produced a much shorter list of key projects within each research program structure that satisfies the multiple goals of the institute. This has been a very useful approach from a management perspective.
However, the feedback from the staff and research program managers was not as enthusiastic, since they are usually excited and motivated to pursue a large number of research ideas based primarily on their own interests rather than on alignment with the organization’s strategy. Therefore, for some time they found it difficult to accept the portfolio result, especially if it redirected or undervalued their perspective on the program’s contribution when compared with other research activities. This challenge was overcome in the limited cases where the management of the research centers involved, at early stages, the research program managers and, on some occasions, the senior staff. Moreover, the process was designed to be as transparent as possible and to provide enough room for feedback and discussion.
Conclusion
This paper has demonstrated the motivation for using the strategic planning approach to influence the research agenda of an R&D organization. The design of the strategic planning approach, including the necessity of introducing new tools such as the PEM, has significantly helped the research centers to focus their effort and energy on a manageable and balanced portfolio of research activities that meets the strategic objectives of the organization according to its long-term road map. The final research strategy addressed the identified gaps and the issue of the centers’ resources being spread too thin and too broadly to be effective in all areas. Moreover, the
References
Bunce M, Wang L, Bidanda B (2008) Leveraging six sigma with industrial engineering tools in
crateless retort production. Int J Prod Res 46(23):6701–6719
Cooper RG, Edgett SJ, Kleinschmidt EJ (1997) Portfolio management in new product
development: lessons from the leaders—I. Res-Technol Manag 40(5):16–28
Jain R, Harry C, Weick C (2010) Managing research, development and innovation: managing the
unmanageable. Wiley, New Jersey
Morris L, Ma M, Wu PC (2014) Agile innovation: the revolutionary approach to accelerate success, inspire engagement, and ignite creativity. Wiley, New Jersey
Nagji B, Tuff G (2012) Managing your innovation portfolio. Harvard Bus Rev 90(5):66–74
Rumelt R (2011) Good strategy/bad strategy: the difference and why it matters. Profile books,
London
Schilling M (2012) Strategic management of technological innovation. McGraw-Hill Education,
New York
Tidd J, Bessant J, Pavitt K (2013) Managing innovation: integrating technological, market and organizational change. Wiley, Chichester
Yin R (2003) Case study research: design and methods. Sage, Newbury Park, CA
Effect of Organizational Justice on Job
Satisfaction
Introduction
The concept of organizational justice refers to the rules and social norms governing the management of the rewards and penalties given to employees. The perception of organizational justice relates to employees’ perceptions of the practices in their own organizations and of how fairly they are treated. It is a phenomenon that attempts to explain how these perceptions affect organizational outcomes such as job satisfaction and organizational commitment (Greenberg 1993). Job satisfaction means that the employee’s job-related expectations have been met or that favorable work outcomes have been obtained; in other words, it is how an employee feels about the job (Demirel and Özçinar 2009: 132). A positive relationship between the sense of justice and job satisfaction in the workplace has been reported in previous studies (Dailey and Kirk 1992; Folger and Konovsky 1989; Martin and Bennett 1996); that is, job satisfaction increases when justice increases, and job satisfaction decreases when the perception of justice decreases (Cobb and Frey 1996; Fryxell and Gordon 1989; Lawson et al. 2009). Greenberg (1990), who conducted a number of studies on organizational justice, stated that employee satisfaction is a fundamental need for the running of an organization.
In a globalizing world, employees’ job satisfaction, grounded in organizational justice, should be ensured in order to keep the organization running. In the current study, the positive or negative effects of organizational justice on job satisfaction are investigated within the given framework. In addition, the relations between organizational justice and the dimensions of job satisfaction are investigated and the findings are presented. The results are discussed and suggestions are made in the conclusion section.
Organizational Justice
Distributive Justice
Procedural Justice
Procedural justice refers to the perceived fairness of the formal procedures used in making decisions (Taşkıran 2011, p. 106, as cited in Yadong Luo). The concept of procedural justice depends on whether the procedures or methods used are seen as right from the standpoint of the individual when management makes decisions concerning that individual or other employees. Procedural justice means following the same procedures for everyone in the organization, giving employees the opportunity to participate in decisions, and maintaining an informative system. It also means that the procedures applied are appropriate for the culture formed within the organization and are free from prejudice and biased behaviour. In addition, an employee’s perception of justice during decision making, whether the decisions concern the employee himself or herself or others, is also explained by procedural justice. Here the employee is concerned with whether the organization applies consistent standards: it will not escape employees’ notice if the organization treats some members differently in decisions concerning them (Yavuz 2010: 306).
Interactional Justice
employees and employees’ perception of justice regarding the quality of this attitude (Bies and Moag 1986, pp. 43–55). According to Güçel (2013), interactional justice is the perceived fairness of the differences in interpersonal treatment within an organization. As the sense of personal confidence increases, the power of the organization also increases. In this context, interactional justice is also affected by how and in what way decisions are or will be communicated to the employees (Kılıçlar 2011: 25). As a result, interactional justice demands treating employees respectfully and politely, communicating appropriately, and explaining events and situations together with their reasons. Employees’ contributions to work are directly proportional to their productivity and job motivation in an organization. Dealing with justice in all its dimensions and applying it in the most accurate way are necessary so that employees can perform in a positive way (Abbasoğlu 2015).
Job Satisfaction
People spend a large portion of their daily lives at work from a certain age onwards. In this context, a person who gets what he or she expects from the job, which affects not only the economic but also the psychological state, can be happier. Therefore, job satisfaction has an important role in human life in terms of both economic and psychological aspects (Bakan and Büyükbeşe 2004: 6). In the literature, job satisfaction is investigated in nine dimensions.
Pay
Studies in the literature have revealed that employees’ job satisfaction is closely related to pay. Research has shown that the job satisfaction of employees who are pleased with their pay is high (Bölükbaşı and Yıldırtan 2009). Regarding the pay provided by an organization, the employee’s attitude towards the job is determined by its sufficiency, its balance, and the degree to which it meets expectations in return for the contribution the employee believes he or she has provided. Again regarding pay, equality among employees at the same level is more important than the degree to which expectations are fulfilled (Gözen 2007, p. 39). Low pay is one of the main sources of dissatisfaction for employees in businesses. However, high pay alone is not adequate to ensure employees’ job satisfaction.
Promotion
Communication
Fringe Benefits
Supervision
Managers’ behaviour styles and the ways they exercise authority affect employees’ job satisfaction. The type of manager whom we may call reputable, democratic, or collective provides higher job satisfaction than the authoritarian and directive type of manager (Sarıkamış 2006, p. 62). Management styles that are well known to society, that are regarded as important, that value teamwork, that work with companies with broad service environments, and that attach importance to employees’ creativity provide their employees with more satisfaction (Başaran 1982, pp. 204–205). In a 2012 survey conducted by the Society for Human Resource Management, 71% of the employees stated that the effective communication they established with their managers played a critical role in their job satisfaction.
Operating Procedures
Policies and procedures related to work within the organization, in other words, the way work is done, can affect individuals’ job satisfaction.
Co-workers
Nature of Work
Job characteristics and the job itself are the most important factors affecting employee satisfaction. In the Job Characteristics Theory (Hackman and Oldham 1980), the quality of the job being done is seen as the basic factor affecting employee satisfaction. In order to increase their motivation and, accordingly, their job satisfaction, employees should be convinced of the importance and meaningfulness of their job (Meriçöz 2015). Castillo and Cano (2004) identified the quality of work as the most important motivating factor in increasing work performance based on job satisfaction.
Contingent Rewards
Performance-based rewarding of employees through a fair system affects job satisfaction in a positive way. High job satisfaction among employees in an organization can be achieved by developing proper reward systems (Erkmen and Şencan 1994, p. 145). Rewards also reinforce the needs for trust and respect, as they maintain employees’ consciousness of being valued and the sense that employee benefits are considered equal to organizational benefits (Kaynak 1990, p. 141).
The distributive justice concept refers to whether resources are distributed fairly by the organization (Andrews and Kaçmar 2001: 349; Melkonian et al. 2011: 812) and is based on evaluations regarding the fairness of the rewards or outcomes which individuals receive. In this context, wages, promotions, premiums, and rewards can be given as examples of the gains achieved by employees (cited in Taner et al. 2015: 182).
A person at a workplace will be satisfied with his or her job if he or she perceives fair pay and fair opportunities for promotion (Robbins 1998: 152). Accordingly, it can be stated that the distributive justice perception of people working in an organization affects job satisfaction positively.
The ways in which the gains that staff will obtain are determined, the elements of performance evaluation, and the procedures followed in solving problems among employees are determinants of the perception of procedural justice (Çetinsöz and Turhan 2016). Given that satisfaction related to administrative practices relies on the fairness of these practices, it can be said that the perception of procedural justice leads to job satisfaction (Çetinsöz and Turhan 2016).
Interactional justice is a concept pointing to the qualities of interpersonal relations and is defined as a third, separate type of justice, independent of the procedural and distributive justice concepts (Folger and Cropanzano 1998).
Because the polite and respectful manners of managers towards their subordinates are the basic elements underlying interactional justice, it can be stated that interactional justice plays an important role in ensuring job satisfaction. In this respect, displaying behaviour that is respectful of employees’ opinions and feelings consolidates the feeling of trust towards the managers and, at the same time, increases job satisfaction by reducing role ambiguity and work stress (Çetinsöz and Turhan 2016).
The research model of the study is shown in Fig. 1. The studies mentioned above motivated the formulation of the following hypotheses:
H1: Organizational justice has a positive effect on job satisfaction.
H2: Interactional justice has a positive effect on organizational justice.
Method
The current study was conducted with the participation of 165 people working in 20 different sectors, including textiles, machine manufacturing, consultancy services, finance, and food, in Turkey. Among the participants of the survey, 68.3% are male and 31.7% are female; 18% are younger than 25, 18% are between 25 and 34, 12.2% are between 35 and 44, and 51.8% are over 45. Also, 40.3% are trying to become managers, 11.5% are first-level or lower-level managers, 30.2% are mid-level managers, and 18% are high-level managers; 2.2% are high school (or lower) graduates, 4.3% hold an associate degree, 59.7% hold an undergraduate degree, and 33.8% hold a graduate degree.
The Job Satisfaction Survey (JSS) developed by Spector in 1985 was used in this study to measure the dimensions of job satisfaction. Questions used in the study of Çavuş and Cumalieva (2013) to measure general job satisfaction were also employed.
Findings
When the data set used in the research is examined, it is seen that the skewness and kurtosis values for all items are within the acceptable range, except for the kurtosis value of one item. Therefore, this item, which did not show a normal distribution, was excluded from the analyses. For all items included in the analyses, skewness values between 1.242 and 0.733 and kurtosis values between 1.212 and 1.789 were observed.
As a result of a reliability test on organizational justice and job satisfaction, the two constructs specified in the research model, the Cronbach’s alpha values for these dimensions were found to be larger than 0.5. These values were accepted as reliable, since Jenkinson et al. (1994) stated that Cronbach’s alpha values larger than 0.5 are acceptable. In order to pass the reliability test, one item from the co-workers dimension and one from the operating procedures dimension were excluded from the analyses.
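For reference, Cronbach's alpha can be computed directly from the item scores. The sketch below uses hypothetical Likert-scale data, not the study's survey responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
scores = np.array([[4, 5, 4, 4], [3, 3, 2, 3], [5, 5, 4, 5],
                   [2, 2, 3, 2], [4, 4, 5, 4], [3, 2, 3, 3]])
print(f"alpha = {cronbach_alpha(scores):.3f}")  # values > 0.5 deemed acceptable here
```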
Since Hair and colleagues (1995) defined the minimum sample size required for performing a factor analysis as 100, the sample used in the current research, consisting of 165 responses, is suitable for factor analysis.
Fifteen factors were obtained from the factor analysis. All items could be grouped under one factor except those of interactional justice, which were grouped under a second factor (Table 1).
When the table of correlations obtained as a result of the analysis is examined, positive correlations are found between organizational justice and job satisfaction, distributive justice, and procedural justice.
As a result of calculating the effect of organizational justice on job satisfaction, organizational justice was found to have a 23.8% effect on job satisfaction. During the analyses, the Durbin–Watson value was also calculated as 1.964, which is within the ideal range. When the effects of the justice dimensions on organizational justice are examined, it is seen that procedural justice has a 43.3% effect, while interactional justice has a 19.2% effect, on organizational justice. According to the results of the regression analysis, the sample did not provide enough evidence for an effect of distributive justice on organizational justice. In contrast, Bayarçelik and Fındıklı (2016) argued in their study that interactional justice did not have an effect on job satisfaction, while distributive and procedural justice did.
As a result of the regression analysis, the sample did not provide enough evidence for the H4 hypothesis, while H1, H2, and H3 were supported.
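A minimal sketch of this kind of regression check, using statsmodels with simulated data rather than the study's survey responses, is shown below; it reports the R-squared and the Durbin-Watson statistic of the residuals.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)

# Simulated scores standing in for organizational justice (X) and
# job satisfaction (y); the 165 cases mirror the study's sample size.
X = rng.normal(3.5, 0.7, 165)
y = 0.5 * X + rng.normal(0, 0.8, 165)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(f"R-squared: {model.rsquared:.3f}")                  # variance explained
print(f"Durbin-Watson: {durbin_watson(model.resid):.3f}")  # ~2 means no autocorrelation
```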
Results
References
Andrews MC, Kaçmar KM (2001) Discriminating among organizational politics, justice and
support. J Organ Behav Sayı 22:347–366
Bakan İ, Büyükbeşe T (2004) Örgütsel İletişim ile İş Tatmini Unsurları Arasındaki İlişkiler:
Akademik Örgütler için Bir Alan Araştırması. Akdeniz İ.İ.B.F. Dergisi 7:1–30
Başaran İE (1982) Örgütlerde Davranış. Ankara Üniversitesi Eğitim Bilimleri Fakültesi Yayınları,
No: 108. Ankara. ss.35–37: 204–205
Bayarçelik E, Fındıklı M (2016) Procedia Soc Behav Sci 235(2016):403–411
Bies RJ, Moag JS (1986) Interactional justice: communication Criteria for fairness. In: Sheppard B
(ed) Research on negotiation in organizations, vol 1. JAI Press, Greenwich, CT, pp 43–55
Bölükbaşı AG, Yıldırtan DÇ (2009) Yerel Yönetimlerde İş Tatminini Etkileyen Faktörlerin
Belirlenmesine Yönelik Alan Araştırması. Marmara Üniversitesi İ.İ.B.F Dergisi. İstanbul. ss.
351–362
Bozkurt Ö, Bozkurt İ (2008) İş Tatminini Etkileyen İşletme İçi Faktörlerin Eğitim Sektörü
Açısından Değerlendirilmesine Yönelik Bir Alan Araştırması. Doğuş Üniversitesi Dergisi 9(1):
1–18
Castillo J, Cano J (2004) Factors explaining job satisfaction among faculty. J Agric Educ 45(3):
65–74
Çavuş Ş, Cumalieva D (2013) İş Doyumu Ve Yaşam Doyumu İlişkisi: Özel Güvenlikte Çalışanlar
Üzerine Bir Araştırma. Akademik Bakış Dergisi 37:1–17
Çaylı B (2013) Kontrol Odağı-İş Tatmini İlişkisi ve Örgütsel Adalet Algısının Aracı Etkisi,
Yayınlanmamış Yüksek Lisans Tezi. Balıkesir Üniversitesi Sosyal Bilimler Enstitüsü,
Balıkesir
Çetinsöz B, Turhan M (2016) İşgörenlerin Örgütsel Adalet Algılarının İş Tatmini Üzerine Etkisi
Ve Bir Uygulama Örneği, Mehmet Akif Ersoy Üniversitesi Sosyal Bilimler Enstitüsü Dergisi
8:s:329–343
Cobb AT, Frey FM (1996) The effects of leader fairness and pay outcomes on superior/subordinate
relations. J Appl Soc Psychol 26:1401–1426
Cropanzano R, Bowen DE, Gilliland SW (2007) The management of organizational justice. Acad
Manag Perspect 21(4):34–48
Dailey RC, Kirk DJ (1992) Distributive and procedural justice as antecedents of job dissatisfaction
and intent to turnover. Hum Relat 45:305–17
Demirel Y, Özçınar MF (2009) Örgütsel Vatandaşlık Davranışının İş Tatmini Üzerinde Etkisi:
Farklı Sektörlere Yönelik Bir Araştırma. Aksaray Üniversitesi İktisadi ve İdari Bilimler Dergisi
23(1):129–145
Erkmen T, Şencan H (1994) Örgüt Kültürünün İş Doyumu Üzerindeki Etkisinin Otomotiv
Sanayide Faaliyet Gösteren Farklı Büyüklükteki İki İşletmede Araştırılması. Dokuz Eylül
Üniversitesi İşletme Fakültesi Yayınları, İzmir
Folger R, Cropanzano R (1998) Organizational justice and human resource management. Sage Publications, London
Folger R, Konovsky MA (1989) Effects of procedural and distributive justice on reactions to pay
raise decisions. Acad Manag J 32(1):111–130
Fryxell GE, Gordon ME (1989) Workplace justice and job satisfaction as predictors of satisfaction with union and management. Acad Manag J 32:851–866
Gözen DE (2007) İş Tatmini ve Örgütsel Bağlılık Sigorta Şirketlerine Üzerine Bir Uygulama.
Yüksek. ss. 23–41
Greenberg J (1990) Organizational justice: yesterday, today and tomorrow. J Manag 16(2):
399–432
Greenberg J (1993) The social side of fairness: interpersonal and informational classes of organizational justice. In: Cropanzano R (ed) Justice in the workplace: approaching fairness in human resource management. Lawrence Erlbaum Associates, Hillsdale, New Jersey, pp 79–103
Güçel C (2013) Örgütsel Bağlılığın Örgütsel Vatandaşlık Davranışına Etkisi Örgütsel Adaletin
Aracılık Rolü: İmalat İşletmelerine Yönelik Bir Araştırma. İşletme Araştırmaları Dergisi
5/2(2013):173–190
Hackman JR, Oldham GR (1980) Motivation through the design of work. Addison-Wesley,
Readings, MA
Hair J, Anderson R et al (1995) Multivariate data analysis. Prentice-Hall Inc., New Jersey
İşcan ÖF, Sayın U (2010) Örgütsel Adalet, İş Tatmini ve Örgütsel Güven Arasındaki İlişki,
İktisadi ve İdari Bilimler Dergisi. J Econ Admin Sci 24(4):ss. 150–153
Jenkinson C, Wright L, Coulter A (1994) Criterion validity and reliability of the SF-36 in a
population sample. Qual Life Res 3:7–12
Kaynak T (1990) Organizasyonel Davranış ve Yönlendirilmesi. Alfa Basım Yayın Dağı- tım,
İstanbul
Kılıçlar A (2011) Yöneticiye Duyulan Güven İle Örgütsel Adalet İlişkisinin Öğretmenler
Açısından İncelenmesi. İşletme Araştırmaları Dergisi 3/3(2011):23–36
Lawson KJ, Noblet AJ, Rodwell JJ (2009) Promoting employee well being: the relevance of the
work characteristics and organizational justice. Health Promot Int 24(3):ss. 223–233
Martin CL, Bennett N (1996) The role of justice judgments in explaining the relationship between
job satisfaction and organizational commitment. Group Organ Manag 21(1):84–104
Melkonian T, Monin P, Noorderhaven NG (2011) Distributive justice, procedural justice,
exemplarity, and employees’ willingness to cooperate in M&A integration processes: an
analysis of the Air France-KLM merger. Hum Resour Manag Cilt 50, Sayı 6:809–837
Meriçöz S (2015) Çalışanların Örgütsel Adalet Algılarının İş Tatminine Ve İş Performansına Olan
Etkisi: Ampirik Bir Çalışma. Yayınlanmamış Yüksek Lisans Tezi, Bahçeşehir Üniversitesi
Sosyal Bilimler Enstitüsü, İstanbul
Niehoff BP, Moorman RH (1993) Justice as a mediator of the relationship between methods of
monitoring and organizational citizenship behaviour. Acad Manag J 36(3):527–556
Robbins SP (1998) Organizational behavior: contexts, controversies, applications. Prentice Hall,
USA
Samadov S (2006) İş Doyumu ve Örgütsel Bağlılık: Özel Sektörde Bir Uygulama, ss. 15–33
Sarıkamış Ç (2006) Örgüt Kültürü ve Örgütsel İletişim Arasındaki İlişkinin Örgüte Bağlılık ve İş
Tatminine Etkisi ve Başarı Teknik Servis A.Ş.’de Bir Uygulama, Yüksek Lisans Tezi, Anadolu
Üniversitesi, Sosyal bilimler Enstitüsü. Eskişehir, ss. 62–64
Seyyed Javadin SR, Farahi M, Taheri Atar Gh (2008) Understanding the impact of organizational
justice dimensions on different aspects of job and organizational satisfaction. J Manag 1(1):55–70
Simons T, Roberson Q (2003) Why managers should care about fairness: the effects of aggregate
justice perception on organizational outcomes. J Appl Psychol 88(3):432–443
Solmuş T (2004) İş Yaşamında Duygular ve Kişilerarası İlişkiler, Psikoloji Penceresinden İnsan
Kaynakları Yönetimi. Beta Yayınları, İstanbul
Spector PE (1985) Measurement of human service staff satisfaction: development of the job
satisfaction survey. Am J Commun Psychol 13(6):693
Taner B, Turhan M, Helvacı İ, Köprülü O (2015) The effect of the leadership perception and
organizational justice on organizational commitment: a research in a state university. Int Rev
Manag Mark Cilt 5(Sayı 3):180–194
Taşkıran E (2011) Liderlik ve Örgütsel Sessizlik Arasındaki Etkileşim, Örgütsel Adaletin Rolü,
Beta Basım Yayım, İstanbul, ss. 93–107
Tekleab AG, Takeuchi R, Taylor MS (2005) Extending the chain of relationship among
organizational justice, social exchange and employee reactions: the role of contract violations.
Acad Manag J 48(1):146–157
Yavuz E (2010) Kamu ve Özel Sektör Çalışanlarının Örgütsel Adalet Algılamaları ÜzerineBir
Karşılaştırma Çalışması”, Doğuş Üniversitesi Dergisi 11(2):302–312
Yıldırım A (2010) Etik Liderlik Ve Örgütsel Adalet İlişkisi Üzerine Bir Uygulama. Yüksek Lisans
Tezi, Karamanoğlu Mehmetbey Üniversitesi Sosyal Bilimler Enstitüsü, Karaman
Yürür S (2008) Örgütsel Adalet İle İş Tatmini ve Çalışanların Bireysel Özellikleri Arasındaki
İlişkilerin Analizine Yönelik Bir Araştırma, Süleyman Demirel Üniversitesi İktisadi ve İdari
Bilimler Fakültesi Dergisi, Cilt: 13, Sayı: 2, Isparta, ss. 295–312
Importance of Developing a Decision
Support System for Diagnosis
of Glaucoma
Murat Durucu
Introduction
This study focuses on the need to develop an objective decision support system for
evaluating the level and occurrence of glaucoma disease. Glaucoma is diagnosed by
considering the patient’s family history and by using clinical techniques, such as
tonometry, ophthalmoscopy, perimetry, gonioscopy, and pachymetry. Glaucoma
causes irreversible blindness in patients in its late stages. Diagnosis at an early stage
allows for therapies that slow the progression of the disease. Also, diagnosis at an
M. Durucu (&)
Industrial Engineering Department, Management Faculty,
Istanbul Technical University, Istanbul, Turkey
e-mail: [email protected]
early stage can decrease the socio-economic costs for patients and the countries where they live (Mazhar 2013). However, problems have been experienced in diagnosing the disease by clinical examination, so it is necessary to develop new techniques so that a diagnosis can be made at an early stage. This disease is frequently encountered, especially between the ages of 40 and 80 years, despite its dependence on genetic factors. Globally, 3.54% of the population suffers from glaucoma; according to data from 2013, 64.3 million people between the ages of 40 and 80 years suffer from glaucoma worldwide, and this number is expected to reach 76 million in 2020 (Quigley and Broman 2006) and 111.8 million in 2040 (Tham et al. 2014). Some studies have found that glaucoma affects 44.7 million people across the world, 2.8 million of them living in the United States, and it is reported that the resulting direct and indirect costs amount to $2.5 billion. The development of glaucoma is illustrated in Fig. 1. Increased fluid pressure inside the eye damages the optic nerve.
The major symptoms of glaucoma include (1) blurred vision, (2) severe pain in the eye, (3) rainbow halos with a light headache, (4) brow pain with nausea, and (5) vomiting with a red eye. Intraocular pressure is also identified as one of the risk factors that cause glaucomatous damage to develop, and lowering the pressure slows progressive retinal degenerative change (Murthi and Madheswaran 2014).
Different levels of glaucoma are recognized in the literature. A comparison between a normal eye and a glaucomatous eye can be seen in Fig. 2. A description of the levels of glaucoma considered by ophthalmologists can be found in Table 1.
In recent years, imaging technologies such as Heidelberg retinal tomography (HRT), stereoscopic disc photography (SDP), and optical coherence tomography (OCT) have been used for the diagnosis of glaucoma (Mwanza and Budenz 2016). Owing to its better accuracy and faster imaging, OCT has become the most common method used by experts. With OCT, analyses of the retinal nerve fiber layer (RNFL) and the optic nerve head (ONH) are applied to detect glaucomatous damage (Bai et al. 2016). Due to all these factors, early diagnosis of glaucoma is very important both for the patient’s quality of life and for the economic costs. Clinically, the diagnosis of glaucoma can be made through measurement of the cup-to-disc ratio (CDR), defined as the ratio of the vertical height of the optic cup to the vertical height of the optic disc. An increase in the cupping of the ONH corresponds to increased ganglion cell death, and hence the CDR can be used to measure the probability of developing the disease. A CDR value greater than 0.65 indicates a high glaucoma risk (Li and Chutatape 2003).
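The CDR rule described above is straightforward to express in code. The following sketch, with the 0.65 threshold from Li and Chutatape (2003), assumes the cup and disc heights have already been measured (e.g., by a segmentation step not shown here); the measurements are hypothetical.

```python
def cup_to_disc_ratio(cup_height_mm: float, disc_height_mm: float) -> float:
    """CDR = vertical optic cup height / vertical optic disc height."""
    if disc_height_mm <= 0:
        raise ValueError("disc height must be positive")
    return cup_height_mm / disc_height_mm

def high_glaucoma_risk(cdr: float, threshold: float = 0.65) -> bool:
    """CDR > 0.65 is taken to indicate high risk (Li and Chutatape 2003)."""
    return cdr > threshold

# Hypothetical measurements from a fundus image.
cdr = cup_to_disc_ratio(1.3, 1.8)
print(f"CDR = {cdr:.2f}, high risk: {high_glaucoma_risk(cdr)}")
```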
Figure 3 illustrates medical imaging of normal and affected eyes (Murthi and Madheswaran 2014). When there are defects in the optic nerve, irreversible blindness begins.
Methodology
To date, procedures that have been employed for detection of glaucomatous visual
field progression may be broadly grouped into four categories: subjective clinical
judgment, defect classification systems, trend analyses, and event analyses.
Clinical Judgment
Visual field defect classification systems use predetermined criteria to grade single
test results, providing a discrete score for each visual field test result. The advantages
of this approach are that test results are immediately stratified into broadly similar
defect magnitudes, interpretation is relatively simple, and progression can be easily
defined as worsening of the score over time. There are, however, a number of
drawbacks to the use of classification systems. They do not provide information on
the spatial configuration of defects and may not be scaled linearly; for example, a
change from 0 to 3 may not be equal to a change from 10 to 13.
Trend Analyses
Event Analyses
Event analyses are valuable because they attempt to identify single events of sig-
nificant change relative to a reference examination (Hitchings 1994). Event anal-
yses can be relatively simple and can look for statistically significant differences
between one examination and another, for example by using the DELTA program
of the Octopus perimeter. This particular method employs a paired t-test to deter-
mine whether significant differences are present between one test result and another.
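As an illustration of this event-analysis idea, the sketch below applies a paired t-test (via scipy) to hypothetical sensitivity values from the same visual-field locations at two examinations; it is a generic illustration of the statistical test, not the DELTA program itself.

```python
import numpy as np
from scipy import stats

# Hypothetical mean sensitivities (dB) at the same test locations
# in a reference examination and a follow-up examination.
reference = np.array([28.1, 27.5, 30.2, 26.8, 29.4, 25.9, 27.7, 28.8])
follow_up = np.array([26.9, 26.1, 29.8, 24.5, 28.7, 24.0, 26.5, 27.9])

# Paired t-test: is the per-location change significantly different from zero?
t_stat, p_value = stats.ttest_rel(reference, follow_up)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant change between the two examinations")
```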
Conclusion
Although HRT and OCT images offer precision and speed, difficulties and mistakes are experienced in the diagnosis of glaucoma, especially in its early stages. When the diagnosis relies on the discretion of the doctor and the placement process, it is difficult to obtain objective results. It is therefore very important to develop an objective decision-support system for diagnosing glaucoma and for determining the level of the disease in patients.
In recent years, computer-aided diagnosis (CAD) has played a major role in screening for glaucoma. A CAD system is simple, repeatable, fast, and not prone to inter- or intra-observer variability. Also, CAD can be used to screen many patients in a short time. There is a scarcity of ophthalmologists in many developing countries, where CAD can be very useful. The proposed decision-support system for glaucoma can differentiate normal and glaucoma classes accurately.
References
Bai XL, Niwas SI, Lin WS, Ju BF, Kwoh CK, Wang LP, Sng CC, Aquino MC, Chew PTK (2016)
Learning ECOC code matrix for multiclass classification with application to glaucoma
diagnosis. J Med Syst 40(4):10
Fitzke FW, Hitchings RA, Poinoosawmy D, McNaught AI, Crabb DP (1996) Analysis of visual
field progression in glaucoma. Br J Ophthalmol 80(1):40–48
Hitchings RA (1994) Perimetry–back to the future? Br J Ophthalmol 78(11):805–806
Holmin C, Krakau CE (1980) Visual field decay in normal subjects and in cases of chronic
glaucoma. Albrecht Von Graefes Arch Klin Exp Ophthalmol 213(4):291–298
Holmin C, Krakau CE (1982) Regression analysis of the central visual field in chronic glaucoma cases. A follow-up study using automatic perimetry. Acta Ophthalmol (Copenh) 60(2):267–274
Li H, Chutatape O (2003) A model-based approach for automated feature extraction in fundus
images. In: Proceedings of the 9th IEEE international conference on computer vision
Mazhar S (2013) Nuggets in clinical approach to diagnosis of glaucoma. J Clin Ophthalmol Res 1(3):175–181
Murthi A, Madheswaran M (2014) Medical decision support system to identify glaucoma using
cup to disc ratio. J Theor Appl Inf Technol 68(2):406–413
Mwanza JC, Budenz DL (2016) Optical coherence tomography platforms and parameters for
glaucoma diagnosis and progression. Curr Opin Ophthalmol 27(2):102–110
Quigley HA, Broman AT (2006) The number of people with glaucoma worldwide in 2010 and
2020. Br J Ophthalmol 90(3):262–267
Tham Y-C, Li X, Wong TY, Quigley HA, Aung T, Cheng C-Y (2014) Global prevalence of
glaucoma and projections of glaucoma burden through 2040: a systematic review and
meta-analysis. Ophthalmology 121(11):2081–2090
Werner EB, Bishop KI, Koelle J et al (1988) A comparison of experienced clinical observers and
statistical tests in detection of progressive visual field loss in glaucoma using automated
perimetry. Arch Ophthalmol 106(5):619–623
Determinants of Mobile Banking Use:
An Extended TAM with Perceived Risk,
Mobility Access, Compatibility, Perceived
Self-efficacy and Subjective Norms
Introduction
Mobile banking is “the use of mobile terminals such as cell phones and personal
digital assistants to access banking networks via the wireless application protocol
(WAP)” (Zhou et al. 2010). With the increase in the number of smartphone users, customers can carry out their banking activities on a device that is always at hand. The use of mobile banking offers users access to the mobile application
Literature Review
The literature on mobile banking has tried to understand the factors affecting actual use, intention to use (Gu et al. 2009; Lin 2011; Luarn and Lin 2005; Koenig-Lewis et al. 2010; Akturan and Tezcan 2012; Teo et al. 2012; Hanafizadeh et al. 2014), and continued intention to use mobile banking (Gumussoy 2016; Lin 2011). Gu et al. (2009) explain the intention to use mobile banking with the TAM and trust. They find that perceived usefulness is the strongest predictor, followed by trust. Lin (2011) examines the effect of innovation attributes (perceived relative advantage, ease of use, and compatibility) and knowledge-based trust (perceived competence, benevolence, and integrity) on behavioral intention to use and adoption (continued use) of mobile banking, using innovation diffusion theory and knowledge-based trust. The results of the study show that perceived relative
In any type of behavior, people first intend to use a particular system, then actually
use that system in the future. Therefore, understanding the determinants of
behavioral intention to use is important in predicting actual behavior. Behavioral
intention to use shows the possibility of using that system in the future (Ajzen and
Fishbein 1980). TAM is one of the most widely used models, explaining intention and actual use in a simple but effective way. In the basic model of TAM, behavioral intention to use is predicted by two important constructs: perceived usefulness and perceived ease of use. Perceived usefulness is defined as the “degree to which an individual believes that using a particular system would enhance his or her performance” (Davis 1989, p. 320). Another important factor is perceived ease of use, which is “the degree to which a person believes that using a particular system would be free of effort” (Davis 1989, p. 320). As a system becomes more useful and easier to use, people will intend to use it more. Otherwise, they will choose another system that satisfies their needs.
Furthermore, when people do not have to spend much time learning how to use a system, it is perceived to be more useful. Thereby, perceived ease of use positively influences perceived usefulness.
The relationships defined in TAM have been verified by several studies in the mobile banking literature (Gu et al. 2009; Luarn and Lin 2005; Koenig-Lewis et al. 2010). Therefore, we hypothesize the following:
H1: Perceived usefulness has a positive effect on intention to use mobile banking
H2: Perceived ease of use has a positive effect on intention to use mobile banking
H3: Perceived ease of use has a positive effect on perceived usefulness
Perceived Risk
Perceived risk has two dimensions: technology-driven risk resulting from infras-
tructure and relational risk resulting from behaviors of service providers (Pavlou
2003). Service providers may not behave reliably, and they may act opportunistically by taking advantage of transactions that users cannot control (Pavlou 2003). Moreover, there is always an inherent possibility of mobile applications being hacked, due to security vulnerabilities in mobile application technology. These kinds of technological and relational risks reduce users' trust in mobile banking, which in turn reduces the intention to use mobile banking. Furthermore, users will not find mobile banking useful when the perceived risk is high; they will prefer branch banking or other traditional channels. Several studies indicate a significant relationship between perceived risk and intention to use (Hanafizadeh et al. 2014; Chitungo and Munongo 2013; Akturan and Tezcan 2012). Thus, the following hypotheses are proposed.
H4: Perceived risk has a negative effect on intention to use mobile banking
H5: Perceived risk has a negative effect on perceived usefulness
Subjective Norms
Subjective norms are defined as the “individual perception that most people who are
important to him think he should or should not perform the behavior in question”
(Fishbein and Ajzen 1975, p. 302). Individuals are influenced by the opinions and behaviors of other people within their social group, such as friends, family members, or colleagues, in intentional or unintentional ways. Widespread use of mobile banking within the social group will increase the individual's intention to use it. Several studies indicate a significant relationship between subjective norms and intention to use (Teo et al. 2012; Yu 2012; Aboelmaged and Gebba 2013). Thus, the following hypothesis is proposed.
H6: Subjective norms have a positive effect on intention to use mobile banking
Mobility Access
Customers can handle their transactions faster using mobile banking than by visiting a bank branch or using phone banking. Also, using mobile banking is less time consuming than other banking options. Anytime and anywhere, users with mobile phones and internet access can easily carry out their banking transactions. Therefore, mobility access increases the perceived usefulness of mobile banking. Thus, the following hypotheses are proposed.
H7: Mobility access has a positive effect on perceived usefulness
H8: Mobility access has a positive effect on perceived ease of use
Compatibility
Perceived Self-efficacy
Perceived self-efficacy refers to the belief of the respondents about their ability, skill
or knowledge about conducting an activity (Luarn and Lin 2005). The relationship
between perceived self-efficacy and perceived ease of use has been confirmed by
several studies (Luarn and Lin 2005). Luarn and Lin (2005) find that perceived self-efficacy explains a high percentage of the variance in perceived ease of use. Therefore, the
following hypotheses are proposed:
H11: Self-efficacy has a positive effect on perceived usefulness
H12: Self-efficacy has a positive effect on perceived ease of use
The use of mobile banking has become widespread in recent years. In this study, a research model is proposed in order to reveal the factors affecting intention to use mobile banking, using TAM. Eleven of the twelve hypotheses are supported. The results indicate that the key factors affecting behavioral intention to use are perceived usefulness (β = 0.66), perceived ease of use (β = 0.31), and perceived risk (β = 0.22). Perceived usefulness and perceived
ease of use are the most commonly used constructs in mobile banking studies
(Shaikh and Karjaluoto 2015). When users find mobile banking useful, easy, and performance-enhancing, they prefer to use it more. Perceived usefulness is the strongest construct and has a direct effect on behavioral intention to use; it is generally more important than perceived ease of use and perceived risk, which is compatible with several studies in the literature (Gu et al. 2009; Teo et al. 2012; Hanafizadeh et al. 2014). Furthermore, for mobile banking transactions, trust, security, and risk are very important aspects affecting the intention-to-use decision. When users perceive mobile banking as risky and insecure, their intention to use mobile banking decreases. On the other hand, as trust increases, the perceived risk for the user decreases (Pavlou 2003). This significant relationship between perceived risk and intention to use is also supported by several studies in the literature (Hanafizadeh et al. 2014; Chitungo and Munongo 2013; Akturan and Tezcan 2012).
Furthermore, the findings of the study show that intention to use is not affected by subjective norms (β = 0.07), that is, by the opinions of other mobile banking users. Users do not pay attention to the opinions of other users as long as they find mobile banking reliable, easy to use, and useful. This result is also supported by previous research showing that social influence has no effect on perceived usefulness and behavioral intention (Venkatesh et al. 2003; Gu et al. 2009). Subjective norms or social influence are important in the early stages of experience, when the user has less experience of and knowledge about the technology (Venkatesh et al. 2003). As the user's experience increases over time, the effect of social influence decreases (Venkatesh et al. 2003). In the current study, since most of the respondents have at least a bachelor's degree, they have knowledge about mobile banking, which may decrease the importance of others' opinions.
Perceived usefulness is directly affected by mobility access (β = 0.43), perceived ease of use (β = 0.41), compatibility (β = 0.34), perceived self-efficacy (β = 0.20), and perceived risk (β = 0.14). Mobility access is the most important construct for perceived usefulness. Instead of visiting the bank or using the call center, customers can use mobile banking anywhere at any time for their banking activities. This easier accessibility of mobile banking increases its perceived usefulness for the customer. Furthermore, customers find mobile banking useful when it is easy to use, easy to learn, secure, and compatible with their lifestyle and past experiences. Compatibility is an important aspect of the adoption of mobile banking. The significant relationship between compatibility and behavioral intention to use mobile banking has also been shown by several studies (Koenig-Lewis et al. 2010; Lin 2011; Al-Jabri and Sohail 2012).
Perceived ease of use is directly affected by compatibility (β = 0.25), mobility access (β = 0.22), and perceived self-efficacy (β = 0.16). When using mobile banking is compatible with the user's lifestyle and past experiences, users find mobile banking easy to use. Accessing mobile banking from anywhere at any time increases its perceived ease of use; users find mobile banking easier than visiting a branch or calling the call center. Several studies confirm the relationship between perceived self-efficacy and perceived ease of use (Luarn and Lin 2005; Amin et al. 2007).
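For readers who wish to reproduce this kind of path analysis, the sketch below approximates the three structural equations of the extended TAM with separate OLS regressions on standardized construct scores. The data file and column names are hypothetical, and a full structural equation model, as typically used in such studies, would estimate all paths simultaneously:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one column of averaged item scores per construct
df = pd.read_csv("mobile_banking_survey.csv")   # hypothetical file
z = (df - df.mean()) / df.std()                 # standardize -> beta-like coefficients

def fit(y, xs):
    """Regress construct y on constructs xs and print the path coefficients."""
    model = sm.OLS(z[y], sm.add_constant(z[xs])).fit()
    print(y, "<-", dict(model.params.drop("const").round(2)))

# The three structural equations of the extended TAM
fit("intention", ["usefulness", "ease_of_use", "risk", "subjective_norms"])
fit("usefulness", ["mobility", "ease_of_use", "compatibility", "self_efficacy", "risk"])
fit("ease_of_use", ["compatibility", "mobility", "self_efficacy"])
```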
Table 2 (continued)
Construct / Reference / Items
Mobility access (continued): Mobile banking is less time consuming than other banking options
Perceived self-efficacy (Luarn and Lin 2005): I could conduct my banking transactions using the mobile banking systems…
– if I had just the built-in help facility for assistance
– if I had seen someone else using it before trying it myself
– if someone showed me how to do it first
Managerial Implications
Compatibility is an important factor for perceived usefulness and perceived ease of use. Therefore, banks should develop their mobile banking systems taking into account their customers' lifestyles and personal preferences. Furthermore, advertising campaigns should be run to show how easy mobile banking is to use, in order to encourage people to use mobile banking systems.
understanding the behavioral intention to use mobile banking. In future studies, investigating the effects of demographic constructs such as age, gender, education, and income on behavioral intention to use mobile banking could provide more insight into the adoption of mobile banking.
References
Aboelmaged M, Gebba TR (2013) Mobile banking adoption: an examination of technology acceptance model and theory of planned behavior. Int J Bus Res Dev 2(1)
Ajzen I, Fishbein M (1980) Understanding attitudes and predicting social behavior. Prentice-Hall,
Englewood Cliffs, NJ
Akturan U, Tezcan N (2012) Mobile banking adoption of the youth market: perceptions and
intentions. Marketing Intelligence & Planning 30(4):444–459
Al-Jabri IM, Sohail MS (2012) Mobile banking adoption: Application of diffusion of innovation
theory. Journal of Electronic Commerce Research 13(4):379–391
Amin H, Baba R, Muhammad MZ (2007) An analysis of mobile banking acceptance by Malaysian
customers. Sunway Academic Journal 4:1–12
Bulamacı K (2016) Mobil abone sayısı 73.2 milyon, akıllı telefon sayısı 41.5 milyon. http://btdunyasi.net/mobil-abone-sayisi-73-2-milyon-akilli-telefon-sayisi-41-5-milyon/
Chitungo SK, Munongo S (2013) Extending the technology acceptance model to mobile banking
adoption in rural Zimbabwe. Journal of Business Administration and Education 3(1):51
Davis FD (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 13(3):319–340
Fishbein M, Ajzen I (1975) Belief, attitude, intention and behavior: an introduction to theory and
research. Addison-Wesley, Reading, MA
Gu J, Lee S, Suh Y (2009) Determinants of behavioral intention to mobile banking. Expert Syst
Appl 36:11605–11616
Gumussoy CA (2016) Usability guideline for banking software design. Comput Hum Behav
62:277–285
Hanafizadeh P, Behboudi M, Koshksaray AA, Tabar MJS (2014) Mobile-banking adoption by
Iranian bank clients. Telematics Inform 31(1):62–78
Hung SY, Ku CY, Chang CM (2003) Critical factors of WAP services adoption: An empirical
study. Electron Commer Res Appl 2:42–60
Koenig-Lewis N, Palmer A, Moll A (2010) Predicting young consumers’ take up of mobile
banking services. International journal of bank marketing 28(5):410–432
Lin HF (2011) An empirical investigation of mobile banking adoption: The effect of innovation
attributes and knowledge-based trust. Int J Inf Manage 31(3):252–260
Luarn P, Lin HH (2005) Toward an understanding of the behavioral intention to use mobile
banking. Comput Hum Behav 21(6):873–891
Mendez F (2016) Mobile banking in Turkey: the future of digital banking. https://www.bbva.com/en/news/economy/computerstudies-sciences-and-development/digital-processing/mobile-banking-turkey-future-digital-banking/
Pavlou PA (2003) Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with
the Technology Acceptance Model. International Journal of Electronic Commerce 7(3):101–134
Sekaran U (1992) Research Methods for Business – A skill building approach, 2nd edn. John
Wiley & Sons Inc., United States of America
Shaikh AA, Karjaluoto H (2015) Mobile banking adoption: A literature review. Telematics Inform
32(1):129–142
Teo AC, Tan GWH, Cheah CM, Ooi KB, Yew KT (2012) Can the demographic and subjective
norms influence the adoption of mobile banking? Int J Mobile Commun 10(6):578–597
Venkatesh V, Morris MG, Davis GB, Davis FD (2003) User acceptance of information technology: toward a unified view. MIS Q 27(3):425–478
Yu S (2009) Factors influencing the use of mobile banking: the case of SMS-based mobile banking. Master's thesis, School of Computing and Mathematical Sciences, Auckland University, New Zealand
Yu CS (2012) Factors affecting individuals to adopt mobile banking: Empirical evidence from the
UTAUT model. Journal of Electronic Commerce Research 13(2):104
Zhou T, Lu Y, Wang B (2010) Integrating TTF and UTAUT to explain mobile banking user adoption. Comput Hum Behav 26:760–767
url 1 (2016) The Banks Association of Turkey. https://www.tbb.org.tr/tr/bankacilik/banka-ve-sektor-bilgileri/4
Radiologists’ Perspective
on the Importance of Factors for MRI
System Selection
Keywords Medical decision making · Multi-criteria decision making · Magnetic resonance imaging · Analytical hierarchy process · System selection
Introduction
of AHP method in healthcare, to the best of our knowledge, this is the first study
where a multi-criteria decision-making tool is used to examine the determinants
affecting the selection of an MRI system from the perspective of radiologists.
Methodology
We identified and grouped evaluation criteria for MRI device selection into five
main categories: performance, technical issues, patient comfort, usability and brand.
Here, the main and sub-criteria in Table 3 are obtained by taking into account the
pertinent scientific literature and experts’ experience.
The AHP is an MCDM method that is considered for decisions that necessitate the
incorporation of quantitative data with less tangible, qualitative considerations such
as values and preferences (Saaty 1977, 1980; Saaty and Ergu 2015; Vaidya and
Kumar 2006; Zahedi 1986; Vargas 1990; Liberatore and Nydick 2008; Dolan and
Frisina 2002). The technique is an eigenvalue approach to the pair-wise
Table 3 Criteria taken into account to select the best MRI system
Main criteria Sub-criteria
C1: Performance factors C11: Magnetic field strength
C12: Gradient specifications
C13: Coils
C14: Software applications
C15: Age of device
C2: Technical issues C21: Cost of device
C22: Accessibility of technical support
C23: Installation
C24: Maintenance cost
C25: Training of technical staff
C26: Data storage capacity
C3: Patient comfort C31: MRI accessories
C32: Bore diameter
C33: Patient monitoring
C4: Usability C41: Software support
C42: User-friendly independent workstation
C5: Brand C51: Significant design features
C52: Reputation
C53: Country of manufacture
comparisons, and has been applied to many areas including healthcare and medical
decision making. The AHP method involves the following basic steps:
• state the problem,
• identify the goal of the problem,
• identify the criteria, sub-criteria and alternatives under consideration,
• construct the problem in a hierarchy of different levels: goal, criteria, sub-criteria
and alternatives,
• conduct a series of pairwise comparisons among the elements at the corresponding level, and calibrate them on the numerical scale,
• calculate the maximum eigenvalue, consistency ratio (CR), and normalized values for each criterion/alternative,
• determine the relative ranking or the best alternative.
The selection hierarchy for the best MRI system is illustrated in Fig. 1.
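The eigenvalue and consistency-ratio step can be made concrete with a short numerical sketch. The following uses the principal-eigenvector method together with Saaty's random consistency index; the pairwise comparison matrix for the five main criteria is hypothetical and does not reproduce the study's data:

```python
import numpy as np

# Saaty's random consistency index (RI) for matrix sizes 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(A):
    """Priority weights and consistency ratio for a pairwise comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                           # normalized priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)           # consistency index
    cr = ci / RI[n]                        # consistency ratio (accept if <= 0.1)
    return w, cr

# Hypothetical 5x5 comparison of the main criteria
# (performance, technical issues, patient comfort, usability, brand)
A = [[1, 1/2, 1/3, 1/2, 1/5],
     [2, 1,   1/2, 1,   1/4],
     [3, 2,   1,   2,   1/2],
     [2, 1,   1/2, 1,   1/3],
     [5, 4,   2,   3,   1  ]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```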
Questionnaire
Our study is a descriptive cross-sectional study for the purpose of assessing and
identifying the importance of the aforementioned criteria affecting MRI system
selection from radiologists’ perspective. A questionnaire, containing demographic
questions, enables each expert to compare the relative priority of criteria with all
other criteria within the same category. Before conducting the survey, a pilot test
was conducted with a few radiologists in the radiology department of the university
hospital. Based on the input received, the questionnaire was modified. The resulting
questionnaire was e-mailed to the respondents. We conducted a survey involving 39 radiologists, whose demographic characteristics are provided in Table 4. The average age of the radiologists is 36.8; 66.7% are male and 33.3% are female. The average working experience as a medical doctor and as a radiologist is 18.4 and 14 years, respectively. 61.6% work in a university hospital, and a total of 56.5% are involved in some way in the MRI system selection and procurement process.
In order to detect the relevant criteria, we apply Saaty’s pairwise comparison
(Saaty 1980). For each pair of criteria, the experts were asked the following
question: “in the selection of an MRI system, considering merely ‘performance’,
how important is each element on the left compared with each element on the
right?” The respondents were asked to rate each factor using the nine-point scale
shown in Table 5.
Results
Factors that affect the MRI system selection from a radiologist point of view include
five main criteria and 19 sub-criteria. In the questionnaire completed by 39 radi-
ologists, the responses concerning the prioritization of the criteria were calculated
using the Super Decisions software, and the consistency ratios of the paired com-
parisons were analyzed. The priority weights of the main criteria influencing the
selection of the MRI system are provided in Table 6. Among the five main criteria, “brand” is the most important criterion, with the highest weight, and “performance” is the least important, with the lowest weight value. All respondents met the consistency threshold (CR ≤ 0.1).
According to the analysis, “brand” and “patient comfort” are the two most
important main criteria affecting magnetic resonance imaging (MRI) system
This study was conducted for the purpose of examining and prioritizing the factors
affecting the MRI system selection from radiologists’ point-of-view. We present the
results of a study on the application of an AHP methodology. The three-level
hierarchy composed of five main criteria and 19 sub-criteria is given in Fig. 1.
Thirty-nine radiologists evaluated the considered criteria to determine the relative
weights. Each criterion of the hierarchy is evaluated by the experts under the
defined criteria. Each expert provides a decision about her/his judgment as a precise
numerical value, range of numerical values, or a linguistic term.
Table 6 provides the weights of the criteria for MRI system selection from
radiologists' perspective. The results of this study imply that, among the main criteria effective in the selection of an MRI system (performance, technical issues, patient comfort, usability, and brand), brand has the highest priority and performance the lowest priority from the radiologists' perspective. Moreover, patient comfort has the next highest importance after brand, showing that health service organizations pay attention to the quality of MRI systems. Discussion of the results with the experts confirms that their views are the same: first brand, then patient comfort, usability, technical issues, and performance. In order to provide
high-quality care for patients, healthcare providers aim to provide well-equipped and
reliable medical systems, which are the tool for diagnosis and treatment of diseases.
Medical systems play a leading role in diagnosis and treatment and are the
crucial reason for increasing healthcare costs. The selection of medical systems is
becoming a more complex problem due to a number of factors and variable con-
ditions. Here, we have concentrated on radiologists’ perspectives on magnetic
resonance imaging system selection. The proposed multi-criteria decision-making
methodology, AHP, enables experts to be flexible and to draw on a large evaluation pool containing precise numerical values, ranges of numerical values, and linguistic terms. Therefore, the proposed methodology is capable of accommodating all kinds of evaluations from experts; in our case, radiologists. Our results provide a
guideline for decision makers when selecting an MRI system based on several
criteria. For further research, other multi-criteria decision-making approaches such
as TOPSIS, PROMETHEE II and VIKOR can be used and compared to the results
of this study.
References
Abdolahian B, Mehrani H (2009) Identify factors influencing the behavior of buyers of ultrasound
devices in Tehran. J Manag 6:1–10
Armstrong G, Kotler P, Merino M, Pintado T, Juan J (2011) Introducción al marketing. Pearson
Bahadori M, Sadeghifar J, Ravangard R, Salimi M, Mehrabian F (2012) Priority of determinants
influencing the behavior of purchasing the capital medical equipments using AHP model.
World J Med Sci 7:131–136
Bian X, Moutinho L (2011) The role of brand image, product involvement and knowledge in
explaining consumer purchase behavior of counterfeits direct and indirect effects. Eur J Mark
45:191–216
Brasser BA, Hyland F, Bennett A, Liston J (2008) Facility-based equipment and capital
expenditures. 2008 a roundtable. Decision-makers discuss their purchasing strategies. Rehab
Manag 21:26–28
Cappellaro G, Ghislandi S, Anessi-Pessina E (2011) Diffusion of medical technology: the role of
financing. Health Policy 100:51–59
Dolan JG, Frisina S (2002) Randomized controlled trial of a patient decision aid for colorectal
cancer screening. Med Decis Making 22:125–139
Eldemir F, Onden I (2016) Geographical information systems and multicriteria decisions
integration approach for hospital location selection. Int J Inf Technol Decis Mak 1–23
Gray JE, Morin RL (1989) Purchasing medical imaging equipment. Radiology 171:9–16
Ho W (2008) Integrated analytic hierarchy process and its applications–a literature review. Eur J
Oper Res 186:211–228
Hobbs F, Meier P (2012) Energy decisions and the environment: a guide to the use of multi-criteria
methods. Springer Science & Business Media, New York
Ivlev I, Vacek J, Kneppo P (2015) Multi-criteria decision analysis for supporting the selection of medical devices under uncertainty. Eur J Oper Res 247:216–228
Ivlev I, Jablonsky J, Kneppo P (2016) Multiple-criteria comparative analysis of magnetic resonance imaging systems. Int J Med Eng Inf 8:124–141
Julong D (2007) Brand effect behavior. J Grey Syst 19:197–202
Khorramshahgol R (2012) An integrated strategic approach to supplier evaluation and selection.
Int J Inf Technol Decis Mak 11:55–76
Kotler P, Levy SJ (1969) Broadening the concept of marketing. J Mark 33:10–15
Li H-L, Ma L-C (2008) Ranking decision alternatives by integrated DEA, AHP and gower plot techniques. Int J Inf Technol Decis Mak 7:241–258
Liberatore MJ, Nydick RL (2008) The analytic hierarchy process in medical and health care
decision making: a literature review. Eur J Oper Res 189:194–207
Martin JL, Murphy E, Crowe JA, Norris BJ (2006) Capturing user requirements in medical device
development: the role of ergonomics. Physiol Meas 27:49–62
Mayo Clinic (2016) http://www.mayoclinic.org/tests-procedures/mri/basics/definition/prc-
20012903
Money AG, Barnett J, Kuljis J, Craven MP, Martin JL, Young T (2011) The role of the user within
the medical device design and development process: medical device manufacturers’
perspectives. BMC Med Inform Decis Mak 11:1–12
Ovretveit J (2003) The quality of health purchasing. Int J Health Care Qual Assur Inc Leadersh
Health Serv. 16:116–127
Paisley S (1998) Intelligent purchasing in trent: information for decision-making in the region’s
health authorities. Health Libr Rev 15:87–95
Pecchia L, Martin JL, Ragozzino A, Vanzanella C, Scognamiglio A, Mirarchi L, Morgan SP
(2013) User needs elicitation via analytic hierarchy process (AHP). A case study on a
computed tomography (CT) scanner. BMC Med Inform Decis Mak 13:1–11
Price D, Delakis I, Renaud C, Dickinson R (2008) MRI scanners: a buyer’s guide. The buyer’s
guide to respiratory care products
Ranjbarian B, Jamshidian M, Dehghan Z (2008) Factors affecting customers’ attitudes regard to
brand. Sch Behav 14:109–118
Saaty TL (1977) A scaling method for priorities in hierarchical structures. J Math Psychol 15:
234–281
Saaty TL (1980) The analytic hierarchy process. McGraw-Hill, New York
Saaty TL, Ergu D (2015) When is a decision-making method trustworthy? Criteria for evaluating
multi-criteria decision making methods. Int J Inf Technol Decis Mak 14:1171–1187
Schmidt K, Aumann I, Hollander I, Damm K, von der Schulenburg JMG (2015) Applying the
analytic Hierarchy process in healthcare research: a systematic literature review and evaluation
of reporting. BMC Med Inform Decis Mak 15:1–27
Shah SGS, Robinson I (2007) Benefits of and barriers to involving users in medical device
technology development and evaluation. Int J Technol Assess Health Care 23:131–137
Stetz L (1964) Why should purchasing have the final decision on selection of products? Hosp
Manage 98:129–131
Steuer RE, Na P (2003) Multiple criteria decision making combined with finance: a categorized
bibliographic study. Eur J Oper Res 150:496–515
Uçkun N, Girginer N, Çelik AE (2008) Usage of analytic hierarchy process in medical equipment
purchasement decisions: a university hospital case. Elektronik Sosyal Bilimler Dergisi 7:138–153
US Department of Health & Human Services National Institutes of Health (2016) https://www.
nibib.nih.gov/science-education/science-topics/magneticresonance-imaging-mri#946
Vaidya OS, Kumar S (2006) Analytic hierarchy process: an overview of applications. Eur J Oper
Res 169:1–29
Vargas LG (1990) An overview of the analytic hierarchy process and its applications. Eur J Oper
Res 48:2–8
Zahedi F (1986) The analytic hierarchy process-a survey of the method and its applications.
Interfaces 16:96–108
Zucker RM, Chua M (2010) Evaluation and purchase of confocal microscopes: numerous factors
to consider. Curr Protoc Cytom 16:2–16
Part III
Healthcare Systems Engineering and
Management
Relation of Grip Style to the Onset
of Elbow Pain in Tennis Players
Abstract The gradual onset of pain over the region of the lateral epicondyle can
result in the Tennis Elbow condition that affects a significant number of people
whose activities involve repetitive wrist movements. This study is the first effort to propose experimental-design-based research investigating the effects of tennis players' different backhand grips on the onset of elbow pain. A sample population of tennis players is selected, and various tests are implemented using the same test measurements and evaluation conditions. Non-parametric analysis-of-variance techniques are used to test the hypothesis and derive the inferential statistics of the research. The clear and significant differences among the evaluated grips indicate that using a two-handed style for backhand strokes is safer than other common grip styles.
Introduction
P. A. Sarvari (&)
Luxembourg Institute of Science and Technology, Esch-Sur-Alzette, Luxembourg
e-mail: [email protected]
F. Calisir S. Zaim
Industrial Engineering Department, Management Faculty, Istanbul Technical University,
Istanbul, Turkey
e-mail: [email protected]
S. Zaim
e-mail: [email protected]
(KCAV) to the generation of racket head speed during a forehand swing. That feasibility research led for the first time to quantifying the KCAV while producing a topspin forehand, with changes in grip size and grip pressure, in an advanced male tennis player. Despite speculation about the roles of different grip styles (Fig. 1 depicts the three most common backhand stroke styles) in sustaining lateral epicondylitis injuries, there have not yet been sufficient studies to assess this theory. This research, therefore, proposes for the first time a research design to examine the effects of players' different grip styles on the onset of pain in the elbow, which is the initial stage of Tennis Elbow. The proposed method may indeed serve as a pilot investigation for further, comprehensive studies considering all possible factors in the onset of elbow pain in people (not only tennis players) who make repetitive wrist movements.
This paper is organised as follows. In Section “Methods”, we propose the research
method and its components, such as the definition of the research question, research
strategy to be followed, and data analysis approach. In Section “Results”, we apply
statistical techniques to infer results from the specified data set. In
Section “Discussion”, we develop a discussion on the results obtained. The conclu-
sion of the study and some future works are summarized in Section “Conclusion”.
Methods
Considering the literature mentioned above and the specified deficiency of studies
on the significant role of various styles of gripping a tennis racket in the onset of
elbow pain in players, a comprehensive research design and data collection strategy
were applied. For this, an individual structural process starts with a research
question that is a testable prediction designating the relationship between two or
more variables. Next, details in relation to defining the variables are described.
Research Question
“The majority of tennis elbow results from chronic overloading and under-recovery
due to poor biomechanics caused by grip style and size, and related movement
pattern dysfunction”, according to Mark Verstegen, president and founder of
Athletes' Performance (Brown 2009). To avoid injury, one should cease hitting balls as soon as one feels inflammation of the tendon at the outer part of the elbow, which is a sign of the onset of Tennis Elbow. Despite much research analyzing players suffering from lateral Tennis Elbow based on the backhand stroke (Alizadehkhaiyat and Frostick 2015; Bauer and Murray 1999; John and Blackwell 1994; Kentel et al. 2008; Chung and Lark 2017; King et al. 2012; King et al. 2010; Pitzer et al. 2014; Buttaravoli 2012; Riek et al. 1999; Wang et al. 2010), most tennis players and professionals believe that elbow injuries are directly related to the grip style of the backhand stroke. Based on these speculations, our proposed explanation for this phenomenon (hypothesis) can be expressed as “grip style is effective in the onset of elbow pain in tennis players”.
Research Strategy
throughout the tests. The backhand strokes are counted until the exact moment the player reports the onset of pain in his elbow. At this moment, the ball machine is stopped, and the number of backhand strokes is recorded (Table 1).
Analysis Approach
Two principal statistical techniques are used in the data interpretation: descriptive statistics, which summarize data from a sample using measures such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Descriptive statistics typically involve two sets of distribution features (sample or population): the central tendency (or location) portrays the central or typical value of the distribution, while scattering (or variability) characterizes the
range within which parts of the distribution diverge from its centre and from each other. Statistical inferences are made within the framework of probability theory, which deals with the analysis of random phenomena.
A conventional statistical approach involves testing the relationship between two data sets, or between a data set and synthetic data drawn from an idealized distribution. A hypothesis is stated for the statistical relationship between the two data sets, and this is compared, as an alternative, with an idealized null hypothesis of no relationship between the data sets. The null hypothesis is rejected or disproved by applying statistical tests that quantify the sense in which the null can be shown to be false, given the data handled in the test. To analyze the captured data, we need a method for testing whether samples originate from the same distribution. For this, analysis of variance is applied to gauge the differences among group means and their associated procedures, as well as to test the developed hypotheses.
Regarding the sample size, with the number of backhand strokes as the dependent variable and the backhand style as the independent variable, a non-parametric method is used for testing and comparing two or more independent samples of equal or different sizes. For testing the hypothesis, the Kruskal–Wallis test by ranks (one-way ANOVA on ranks), which extends the Mann–Whitney U test to more than two groups, has been employed. A significant Kruskal–Wallis test indicates that at least one sample stochastically dominates another sample. The test does not identify where this stochastic dominance occurs or for how many pairs of groups it holds. As a non-parametric method, the Kruskal–Wallis test does not assume a normal distribution of the residuals, unlike the analogous one-way analysis of variance. If the researcher can make the weaker assumption of an identically shaped and scaled distribution for all groups, except for any difference in medians, then the null hypothesis is that the medians of all groups are equal, and the alternative hypothesis is that at least one group's population median differs from the population median of at least one other group.
Results
The analyses have been performed using IBM SPSS v.23. Table 2 illustrates the
descriptive statistics for the backhand strokes factor to summarise the given data
set, which is a representation of the entire population. The descriptive statistics are
broken down into measures of central tendency (mean), measures of variability (standard deviation, minimum and maximum values), and percentiles. Thus,
the mean value of backhand shots by the 25 players is 177.360 with a standard
deviation of 31.8223, where the maximum number of received shots is 245.
In this section, to test whether there are statistically significant differences among
backhand styles, a Kruskal–Wallis rank test was applied at the significance level of
0.05. Table 3 depicts the number of subjects for each backhand style. Regarding the
test statistics for grouping of the backhand style variable, Table 4 shows the results.
Considering the mean ranks, it appears, at least superficially, that there is likely to be a difference between backhand style 2 and the other two styles. The Kruskal–Wallis result is an omnibus statistic that looks for at least one difference somewhere: the Chi-square is 11.933 with a p-value of 0.003. As the asymptotic significance is 0.003 (< 0.05), we can reject the null hypothesis of no differences between the mean ranks for the backhand factor, which means that there is a significant difference among backhand styles with respect to the dependent variable, with a minimum expected cell frequency of 2.0.
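The same omnibus test can be reproduced with standard statistical software. The sketch below applies the Kruskal–Wallis test to hypothetical stroke counts for the three grip styles; the values are invented for illustration and do not reproduce the reported Chi-square of 11.933:

```python
from scipy.stats import kruskal

# Hypothetical backhand-stroke counts before pain onset, one list per grip style
style1 = [152, 168, 175, 181, 160, 172, 158, 165]
style2 = [205, 219, 228, 245, 210, 224, 232, 215]
style3 = [150, 166, 178, 159, 171, 163, 155, 169, 161]

h, p = kruskal(style1, style2, style3)
print(f"H = {h:.3f}, p = {p:.4f}")  # p < 0.05 -> reject the equal-distribution null
```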
After determining the existence of differences between grip styles regarding the onset of pain in tennis players' elbows, we need to follow up the Kruskal–Wallis test results with a post hoc test, or exact comparison testing, because we do not know which backhand grip is statistically different from the others; this will identify the safest grip style for the backhand stroke. Consequently, the Mann–Whitney U test has been used for pairwise analysis to test the hypotheses. Paired comparisons between backhand styles have been conducted, and the results of the analyses are illustrated in Tables 5 and 6.
Regarding Table 6, there is no significant difference between backhand styles 1
and 3. Besides, due to the derived asymptotic significance results between
Table 6 Test statisticsa for backhand styles using Mann–Whitney U test analysis
Backhand styles 1 and 2 Mann–Whitney U 3
Wilcoxon W 24
Z −2.828
Asymp. Sig. (2-tailed) 0.005
Exact Sig. [2 * (1-tailed Sig.)] 0.003b
Backhand styles 1 and 3 Mann-Whitney U 30
Wilcoxon W 85
Z 0
Asymp. Sig. (2-tailed) 1
Exact Sig. [2 * (1-tailed Sig.)] 1.000b
Backhand styles 2 and 3 Mann-Whitney U 8
Wilcoxon W 63
Z −3.021
Asymp. Sig. (2-tailed) 0.003
Exact Sig. [2 * (1-tailed Sig.)] 0.001b
a Grouping variable: backhand style
b Not corrected for ties
backhand styles 1 and 2 (p-value 0.005 < α = 0.05) as well as between backhand styles 2 and 3 (p-value 0.003 < α = 0.05), it is clearly observable that backhand style 2 is very different from the other styles.
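The pairwise follow-up can be scripted in the same way. The following sketch runs a two-sided Mann–Whitney U test for each pair of grip styles, using the same invented data as in the Kruskal–Wallis sketch above:

```python
from itertools import combinations
from scipy.stats import mannwhitneyu

# Hypothetical stroke counts per grip style (same invented data as above)
groups = {
    "style 1": [152, 168, 175, 181, 160, 172, 158, 165],
    "style 2": [205, 219, 228, 245, 210, 224, 232, 215],
    "style 3": [150, 166, 178, 159, 171, 163, 155, 169, 161],
}

for (a, xs), (b, ys) in combinations(groups.items(), 2):
    u, p = mannwhitneyu(xs, ys, alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.1f}, p = {p:.4f}")
```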
Discussion
Tennis Elbow is a very complicated issue with a vast range of contributing factors, such as physical status, racket weight, grip-head balance, and wet balls, which cause the elbow to become sore and tender at the extensor carpi radialis brevis muscle. Besides, lateral epicondylitis is not exclusive to the sport of tennis; it can also occur in golf players and casual workers who make repetitive, monotonous wrist movements. However, this paper has advanced research on a pilot sample that may
suggest an avenue for further investigation. Thus, the study of the role of grip style
in the onset of elbow pain in tennis players was the primary challenge. On this
point, an experimental design was adopted to analyze three different grip styles
during backhand strokes. Because of the sample size, two non-parametric testing
methods were used to interpret the data gathered on the tennis players. Kruskal–
Wallis analysis showed the existence of statistically significant differences among
backhand styles. The Mann–Whitney U test was used to conduct paired comparisons among backhand techniques, and it was concluded that the second grip style (the two-handed backhand style) was significantly different. Players using the two-handed backhand grip style while swinging the racket to deliver backhand strokes reported the highest number of strokes before the onset of pain in their elbows. In other words, using the two-handed backhand style is safer than the other common grip styles.
Finally, but importantly, it has been shown statistically through this work that the double-handed grip style is the proper technique for limiting elbow injuries, by minimizing wrist flexion and ulnar deviation while delivering a backhand stroke. A potential limitation of this study is the small number of independent variables considered as factors influencing the onset of pain in tennis players. Resolving this shortcoming will require further design and testing of hypotheses based on the speculations and the related literature.
Conclusion
The developed hypothesis has been tested via statistical analysis techniques in the
designed experimental research. The inferences drawn from the results of the
analyses are consistent with the research question. What is more, the results confirm
the present concerns and speculations about the role of grip style in the onset of
pain in tennis players. The proposed experimental design based on the onset of pain
and preparing a background for further and comprehensive studies are considered
References
Abaraogu UO, Ezema CI, Ofodile UN, Igwe SE (2017) Association of grip strength with
anthropometric measures: height, forearm diameter, and middle finger length in young adults.
Pol Ann Med 5–9. http://doi.org/10.1016/j.poamed.2016.11.008
Alizadehkhaiyat O, Frostick SP (2015) Electromyographic assessment of forearm muscle function in
tennis players with and without Lateral Epicondylitis. J Electromyogr Kinesiol 25(6):876–886.
http://doi.org/10.1016/j.jelekin.2015.10.013
Bauer JA, Murray RD (1999) Electromyographic patterns of individuals suffering from lateral
tennis elbow. J Electromyogr Kinesiol 9(4):245–252. http://doi.org/10.1016/S1050-6411(98)
00051-0
Bertrand AM, Fournier K, Wick Brasey M-G, Kaiser M-L, Frischknecht R, Diserens K (2015)
Reliability of maximal grip strength measurements and grip strength recovery following a
stroke. J Hand Ther 28(4):356–363. https://doi.org/10.1016/j.jht.2015.04.004
Blackwell JR, Cole KJ (1995) Wrist kinematics differ in expert and novice tennis players
performing the backhand stroke: implications for tennis elbow. J Biomech 27:509–516
Branson R, Naidu K, du Toit C, Rotstein AH, Kiss R, McMillan D, Vicenzino B (2016).
Comparison of corticosteroid, autologous blood or sclerosant injections for chronic tennis
elbow. J Sci Med Sport. http://dx.doi.org/10.1016/j.jsams.2016.10.010
Brown J (2009) How to avoid and treat tennis elbow
Christensen J, Rasmussen J, Halkon B, Koike S (2016) The development of a methodology to
determine the relationship in grip size and pressure to racket head speed in a tennis forehand
stroke. Proced Eng 147:787–792. https://doi.org/10.1016/j.proeng.2016.06.317
Chung KC, Lark ME (2017) Upper extremity injuries in tennis players: diagnosis, treatment, and
management. Hand Clin 33(1):175–186. http://doi.org/10.1016/j.hcl.2016.08.009
Daphnee DK, John S, Vaidya A, Khakhar A, Bhuvaneshwari S, Ramamurthy A (2017) Hand grip
strength: a reliable, reproducible, cost-effective tool to assess the nutritional status and
outcomes of cirrhotics awaiting liver transplant. Clin Nutr ESPEN 1–5. http://doi.org/10.1016/
j.clnesp.2017.01.011
Ekşioğlu M (2016) Normative static grip strength of population of Turkey, effects of various
factors and a comparison with international norms. Appl Ergon 52:8–17. https://doi.org/10.
1016/j.apergo.2015.06.023
García-Esquinas E, Rodríguez-Artalejo F (2017) Association between serum uric acid concen-
trations and grip strength: is there effect modification by age? Clin Nutr 6–12. http://doi.org/10.
1016/j.clnu.2017.01.008
Gubelmann C, Vollenweider P, Marques-Vidal P (2017) No association between grip strength and
cardiovascular risk: the CoLaus population-based study. Int J Cardiol 236:478–482. https://doi.
org/10.1016/j.ijcard.2017.01.110
Ingelman JP (1991) Biomechanical comparison of backhand techniques used by novice and
advanced tennis players: implications for lateral epicondylitis. Simon Fraser University
John R, Blackwell KJC (1994) Wrist kinematics differ in expert and novice tennis players
performing the backhand stroke: Implications for tennis elbow. J Biomech 27(5):509–516.
http://doi.org/10.1016/0021-9290(94)90062-0
Abstract This study aims to apply lean thinking to the Emergency Department of
Kocaeli University Education and Research Hospital. From analyzing to designing
the improved system, lean techniques are adopted in order to reduce waste com-
pared to the current system and to make the process continuous. Although lean
defines wastes in the process, it does not indicate which wastes should be elimi-
nated first. At this point, the Arena simulation software program is used. To compare the current and simulated systems, the length of stay of patients in the Emergency Department is selected as the main quality measurement. According to the Arena outcomes, the main bottlenecks are identified, and suggested improvements are then implemented in the newly designed system with lean approaches. To determine the effects of the improvements, Arena is run again to see whether or not there is a significant difference. Taking these considerations into account, the Arena simulation program supported the application of lean tools in redesigning the Emergency Department process.
Introduction
In today’s healthcare sector, patients consider not only obtaining a service from
hospitals but also the quality level of the medical care throughout their journey.
Providing high-quality healthcare has become a widespread issue around the world
as well as in Turkey. In order to increase the quality level of hospitals, total quality
management applications started to be used in hospitals in the 1990s. After 2010,
lean initiatives, a tool for total quality management, started to be adopted in the
healthcare sector in Turkey (Çavuş and Gemici 2013). Long waiting times, delayed
test results, increased medical errors, and departures from hospitals without seeing a
doctor in the Emergency Department (ED) result in patient dissatisfaction, while
lean defines them as activities that patients do not want to pay for. According to the
main purpose of lean, any kind of non-value added activity should be eliminated
from the patient’s path by focusing on process flow. Therefore, lean techniques
make it possible to find a better way to provide a good-quality service to patients by
decreasing the level of dissatisfaction due to the defined problems. More specifi-
cally, the need to improve the service quality and satisfaction of patients and
medical staff in emergencies can be met through the benefits of lean. The aim of this
paper is to decrease the total length of stay of patients as well as the waiting times,
which are the main problems in the ED of Kocaeli University Education and Research Hospital, in the light of lean thinking. In addition, in order to eliminate
waste, this paper focuses on the process throughout the ED by using the lean
approach with a streamlined and smooth flow.
Literature Review
Lean can be seen as a rescuer for organizations, delivering products to customers at the desired level in a short time and without any further cost to them. Fillingham (2007) corrected a misunderstanding of the lean perspective by stating that, instead of producing more products by forcing workers to work harder, lean is about eliminating the seven types of waste (motion, waiting, over-processing, transport, defects, over-production, and inventory), which are the main obstructions to efficient production by workers. The results of the application of lean
affect and stimulate further results as a chain effect. Carrying out production by
responding to the customer’s requirements leads to decreased stock levels, which in
turn leads to the elimination of waste and affects the financial situation of the
organization positively.
The principles of lean are not only used in manufacturing areas but have also
started to be used widely in the service sector. According to Dibia et al. (2014), lean
is applied in the ceramics, aerospace, finance, building, and electronics industries as
well as in the healthcare industry.
Lean in Healthcare
The critical role of healthcare in people's lives, the enhanced quality of sanitation, and the increased competition between healthcare organizations have led to significant improvements in the healthcare sector.
The evolution of healthcare quality in Turkey started with the application of “Total Quality Management (TQM)” in 1990 and continued with ISO 9001 certification in 2003 and the setting of “Health Quality Standards” by the Ministry of Health. In 2010, the first lean applications and adoptions in hospitals were seen (Çavuş and Gemici 2013). It can be said
that there was a major step forward in 2003 with the “Healthcare Conversion
Program” regulation, whose main objective is to provide qualified, sustainable,
effective, and efficient healthcare services for everyone (Lamba et al. 2014).
Despite these initiatives, various problems such as medical errors and waiting
times can result in failure in the healthcare process. Lean philosophy has started to
be applied in hospitals to address these wastes. Lean can be applied well in
healthcare systems because it is not hard for doctors, nurses, and other medical
personnel to learn and adopt lean techniques (Curatolo et al. 2014).
Virginia Mason Medical Center achieved the following improvements by
applying lean for three years: a 53% decrease in stocks, a 36% increase in effi-
ciency, a 41% reduction in space requirements, a 65% decrease in lead times, a 44%
reduction in walking distances, a 72% decrease in the movement of materials, and
an 82% reduction in setup times (Womack et al. 2005). It can be stated that the
application of lean can result in reduction of seven types of wastes in healthcare.
The seven types of waste, defined as non-value-added activities, can also be identified in EDs in healthcare. Dickson et al. (2009) highlighted this issue by giving long waiting times for tests and doctors, and long walking distances, as examples.
Patients do not want to encounter these types of delays during their ED journeys.
Problems including the waiting time at each stage in the ED affect the process in
a negative way. Mazzocato et al. (2012) stated that the problems of congestion and
waiting are mainly derived from the breakdown of the process. It can be inferred
that to reduce these problems, the process flow should be considered. Lean focuses on the flow and process to help ameliorate these problems. Mazzocato et al. (2014) indicated that the value defined by the customers of the ED is receiving the service on time. Lean succeeds in this respect by presenting a continuous process that links the value-added activities for patients by eliminating wastes. This continuous process can be obtained by identifying the wastes along the patients' path through the emergency service and regulating the process with small improvements that eliminate unnecessary tasks. Chan et al. (2014) observed that with lean work in
the ED, the consultation waiting time decreased from 13.68 to 11.65 min and the
final waiting time also decreased from 16.86 to 14.28 min.
Simulation is one of the lean tools that is helpful for seeing the bottlenecks of the
current situation and the effects of the implementation on the system statistically.
Wang et al. (2015) mentioned that the Rockwell Arena 13.51 program provides a preview of the waiting time and service level when conditions are changed within the program alone. Discrete event simulation is beneficial for simulating the hospital
environment since it reflects instantaneous changes in the program. In addition, it is
very difficult to see the bottlenecks and the effects of the improvements in such a
complex system without using a simulation. Arena simulation software is helpful at
that point to make it possible to see the waiting time, queues, and length of the
service and also to allow different improvement scenarios to be compared with the
current process. Robinson et al. (2012) highlighted that simulation and lean should
be used together in the healthcare sector because they have the same motivations.
By using discrete event simulation with lean, the process will be better designed
than if lean is used only as a tool.
Methodology
The objective of this paper is to increase the satisfaction level of the adult patients
and medical staff by decreasing waiting times and the time spent in the ED of Kocaeli
University Education and Research Hospital. In order to achieve this aim, the process journey of the patients was mapped with a flowchart by applying lean methodologies and tools. Lean was preferred in this study because one of the main principles of lean thinking is the achievement of a smooth flow. For that reason, the non-value-added steps and wastes in the process were established by observation and with the approval of nurses and doctors in a focus group. The data were collected from the Information Technology department and the HUY system, and all were confirmed by observations and a focus group. The current flow with the collected data was transferred to the
bottlenecks. Multiple small improvements were offered to enhance the process by
looking at the designated wastes and bottlenecks by using the lean approach. The ED
process at Kocaeli University Education and Research Hospital was redesigned with
a flowchart to offer a better quality service to patients by conducting a focus group
and the simulation was rerun to track the improvements.
After a patient arrives at the ED, triage or a paramedic classifies the patient as
green, yellow, or red according to severity. These different kinds of patients are
directed to the green area or the yellow/red area. These areas have their own bed
capacities and their own medical staff in terms of doctors, nurses, and interns.
Patients classified as red are critical and must be examined and treated immediately. Patients classified as yellow also have serious problems, but these are not life threatening, so they can wait. Patients classified as green have minor, low-risk health problems and can wait longer than yellow patients. For each incoming patient, the triage officer has to stand up and walk to the closed bed areas to check bed availability. After a suitable bed has been
found, the patient's journey continues with examination by a doctor, tests (blood, X-ray, tomography, urine), examination by a consulted specialist, treatment, and finally discharge from the hospital. The HUY system is used in the hospital to track these stages.
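The study builds this flow in Arena, a commercial package. Purely as an illustration of the same idea, a minimal open-source analogue of the triage-to-discharge flow can be sketched in Python with SimPy; every interarrival mean, service mean, capacity, and triage weight below is a hypothetical placeholder, not the fitted hospital data.

import random
import simpy

# Hypothetical mean stage durations in minutes (exam + tests + treatment);
# the real model used distributions fitted from HUY data and observation.
SERVICE_MEAN = {"green": 60, "yellow": 120, "red": 180}

def patient(env, severity, area_beds, los_log):
    arrive = env.now
    beds = area_beds["green"] if severity == "green" else area_beds["yellow_red"]
    with beds.request() as req:
        yield req  # wait in the queue until a bed in the right area is free
        yield env.timeout(random.expovariate(1 / SERVICE_MEAN[severity]))
    los_log.append(env.now - arrive)  # LOS = discharge time - arrival time

def arrivals(env, area_beds, los_log):
    while True:
        yield env.timeout(random.expovariate(1 / 8.0))  # hypothetical interarrival mean
        severity = random.choices(["green", "yellow", "red"],
                                  weights=[70, 25, 5])[0]  # hypothetical triage mix
        env.process(patient(env, severity, area_beds, los_log))

env = simpy.Environment()
area_beds = {"green": simpy.Resource(env, capacity=10),  # hypothetical capacities
             "yellow_red": simpy.Resource(env, capacity=6)}
los_log = []
env.process(arrivals(env, area_beds, los_log))
env.run(until=30 * 24 * 60)  # one replication of 30 days, in minutes
print(sum(los_log) / len(los_log), "min mean LOS in this replication")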
While each step of the ED patients was being observed and the patient flowchart was being drawn, several problems became apparent. These problems parallel those reported for the EDs examined in the literature review section. While identifying the problems in the ED of Kocaeli University Education and Research Hospital, patients were also interviewed and a focus group was conducted with medical personnel working in the emergency service (Table 1).
Data on the duration of each stage of the flowchart were collected from the HUY system and by observation, and were entered into the Arena Input Analyzer to fit their distributions. To simulate the current process of the Kocaeli University Education and Research Hospital ED and to reveal the bottlenecks, the Rockwell Arena simulation program was used in the context of discrete event simulation. The program was used specifically to expose the obstacles in the process, the delays and waiting times to be minimized, as lean suggests.
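The Input Analyzer is Arena's own fitting tool, so the exact procedure is internal to that software. As a rough open-source analogue, the same step can be sketched with scipy.stats: fit a candidate distribution to a stage's observed durations and check the fit with a Kolmogorov-Smirnov test. The sample values below are invented for illustration only.

import numpy as np
from scipy import stats

# Invented sample of observed stage durations in minutes; the study used
# durations exported from the HUY system and confirmed by observation.
durations = np.array([12.5, 18.0, 33.2, 9.8, 27.1, 41.0, 15.6, 22.3])

# Fit an exponential with the location fixed at zero, then test the fit.
loc, scale = stats.expon.fit(durations, floc=0)
ks_stat, p_value = stats.kstest(durations, "expon", args=(loc, scale))
print(f"fitted mean = {scale:.1f} min, KS p-value = {p_value:.3f}")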
The main indicator for the hospital is the length of stay (LOS) of the patient in
the ED. The LOS of the patient was measured in the simulation model by using the
record module. The difference between the arrival and discharge times gives the
length of stay of a patient in the model. After the current process flow had been modeled in Arena with the distributions from the Input Analyzer, the model was run with a replication length of 30 days and 100 replications to obtain accurate results. The comparison of the simulated and observed LOS data was used to check the validity of the created model. According to these statistics, the model can be considered valid: the observed length of stay of patients at the hospital is 198.110 min, while the simulation model gives a mean patient LOS of 197.412 min.
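Assuming the SimPy sketch above is wrapped in a function, here called run_ed_model (a hypothetical name, not part of the study), the replication-and-validation step might look as follows; the 198.110 min figure is the observed value reported above.

import random
import statistics

N_REPLICATIONS = 100
REPLICATION_LENGTH = 30 * 24 * 60  # 30 days in minutes, as in the study

rep_means = []
for rep in range(N_REPLICATIONS):
    random.seed(rep)  # a separate random-number stream per replication
    los_log = run_ed_model(until=REPLICATION_LENGTH)  # hypothetical wrapper
    rep_means.append(statistics.mean(los_log))

# Face validation: the grand mean of the simulated LOS should sit close
# to the observed mean LOS of 198.110 min.
print(statistics.mean(rep_means), "min simulated vs 198.110 min observed")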
Scenario 1: Increasing the Bed Capacity

Given the long waits before patients are directed to the bed area and the long length of stay, it was first thought that the bed capacity was insufficient. To minimize this waiting waste, the number of beds was increased in the resource spreadsheet in Arena to see the effect. After the bed capacity had been changed, the simulation was run for 100 replications to decrease the variance; the length of stay decreased as the bed capacity increased. The mean of this scenario was compared with the mean of the current situation using a two-sample t-test, which is appropriate because at each replication the Arena simulation program draws random entities from the entered distributions. For this test, the hypotheses were as follows:
H0: The means of the bed scenario and the current situation are equal.
H1: The means of the bed scenario and the current situation are not equal.
The result showed no significant difference from the current situation, which may stem from the medical staff remaining insufficient when the bed capacity is increased. For this reason, this scenario is not suggested as an improvement.
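The study ran this comparison in the Arena Output Analyzer. The same two-sample t-test can be reproduced outside Arena with scipy.stats on the per-replication mean LOS values; the two lists below are invented stand-ins for the 100 replication means of each configuration.

import random
from scipy import stats

random.seed(0)
# Invented per-replication mean LOS values in minutes for the current model
# and for the increased-bed-capacity scenario (100 replications each).
rep_means_current = [random.gauss(197.4, 6.0) for _ in range(100)]
rep_means_more_beds = [random.gauss(196.1, 6.0) for _ in range(100)]

t_stat, p_value = stats.ttest_ind(rep_means_current, rep_means_more_beds)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("reject H0: the scenario changed the mean LOS significantly")
else:
    print("fail to reject H0: no significant difference, as the study found")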
Scenario 2: A Bed-Status Screen for Triage

After patients arrive at the ED, the triage officer does not know the bed availability and must walk to the bed area to check it. Since triage has other regular duties and may be busy, this check cannot be made every time, which leads to a waiting delay. It is suggested that this waste be minimized by installing a screen in the ED that shows bed availability automatically; in other words, triage will no longer need to walk to the bed area to check the status of the beds.
To see the results of this improvement, the Hold module, which represents the patients' waiting area, was rearranged in Arena. After the new arrangement had been made, the simulation was run for 100 replications to see the effects. The result showed that the length of stay decreased from 197.412 to 174.059 min, the waiting time of the green patients decreased from 10.073 to 0 min, and that of the yellow patients decreased from 29.238 to 24.462 min. This suggested improvement was also transferred to the Arena Output Analyzer in order to compare the mean of the scenario with the mean of the current situation.
H0: The means of the suggested triage bed-checking improvement and the current
situation are equal.
H1: The means of the suggested triage bed-checking improvement and the current
situation are not equal.
The results of the two-sample t-test indicated that the length of stay and the waiting times of the green and yellow patients differ significantly from the current situation. This suggested improvement can therefore reduce the motion waste of triage checking bed availability, the waiting waste of the patients, and the motion waste of patients due to waiting.
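In the Arena model, this change was made by rearranging the Hold module. In the SimPy sketch introduced earlier, an analogous change would collapse a hypothetical triage walking delay to zero before the bed request is made; the names and the 4 min walk time below are assumptions for illustration.

import random

BED_STATUS_SCREEN = True  # scenario 2 switch
TRIAGE_WALK_MEAN = 0.0 if BED_STATUS_SCREEN else 4.0  # minutes, hypothetical

def triage_and_bed(env, beds):
    # With the screen, triage sees bed availability instantly, so the walk
    # to the bed area and the associated patient wait disappear.
    if TRIAGE_WALK_MEAN > 0:
        yield env.timeout(random.expovariate(1 / TRIAGE_WALK_MEAN))
    with beds.request() as req:
        yield req  # the patient now queues directly for a bed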
Scenario 3: Direction Signs for Non-ED Visitors

In this scenario, it is suggested that the process of asking triage and the secretary for directions to other departments of the hospital be minimized, since this is a motion waste according to lean. To see the effect of this improvement, this step was removed from the Arena model and simulated for 100 replications.
The length of stay of patients decreased from 197.412 to 194.736 min. However, when these two statistics were compared in the Arena Output Analyzer with the two-sample t-test, no significant difference was found between the two means. Even though this improvement did not affect the length of stay or waiting times significantly, this motion waste should still be eliminated for quality assurance.
At this point, it is important to highlight that, according to lean, decreased waiting time is not the only indicator of increased service quality. This study targets the minimization of wastes in the process, so the suggested improvement will be implemented by placing signs around the ED showing the locations of other departments to help non-ED patients find their way. These signs also correspond to the visualization tool of lean.
Scenario 4: Showing Consultant Availability in the HUY System

This scenario deals with reducing the waiting time for consultation, a waiting type of waste. When the emergency doctor cannot diagnose the patient's disease and needs a specialist's opinion, a consultation order is placed by a nurse via HUY. The problem in this process is that the nurse does not know whether the consultant is available at that moment, so the patient may wait for a long time. The suggested improvement is the addition of a feature in the HUY system that shows only the available doctors, those who have recently logged into the system with their passwords, so that consultation requests are not sent to busy doctors.
According to the hospital medical staff, the average waiting time for consultation can be decreased from 112 to 40 min by implementing this new feature in the HUY system. To see the effect of this scenario, the duration of the Consultation module was changed from 112 to 40 min, taking its distribution in Arena into account, and the improved model was run for 100 replications. As a result, the average length of stay of patients decreased from 197.412 to 190.765 min.
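In the SimPy analogue, this scenario amounts to keeping the service-time distribution for consultation but shifting its mean from 112 to 40 min, mirroring the edit to the Arena Consultation module; the exponential family here is an assumption, since the study used the distribution fitted by the Input Analyzer.

import random

CONSULT_PRESENCE_FEATURE = True  # scenario 4 switch
CONSULT_MEAN = 40.0 if CONSULT_PRESENCE_FEATURE else 112.0  # minutes

def consultation(env):
    # Same distributional form as the current-state model, smaller mean.
    yield env.timeout(random.expovariate(1 / CONSULT_MEAN))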
A two-sample t-test was conducted with the Arena Output Analyzer, with the null hypothesis that the means are equal. At the 95% confidence level, the null hypothesis was rejected, meaning that there is a significant difference between the current and improved models with regard to consultation. In this scenario, the suggested improvement helped to decrease waiting waste, a non-value-adding activity, in the healthcare process of the Kocaeli ED.
Scenario 5: Small Lean Improvements and the Combined Model

In addition to the scenarios that affect the length of stay directly, it was decided to suggest small lean improvements for the previously defined and observed problems associated with the seven types of waste, in order to raise the quality level. These improvements do not affect the duration of the stages significantly, but they are required to increase patient satisfaction. Currently, patient IDs are written on tubes and beakers by hand, and nurses sometimes write them incorrectly; to minimize this defect waste, it is suggested that barcodes be stuck on the tubes. Wheelchairs are left in the ED corridors, and members of the medical staff transport them unnecessarily when a new patient arrives; it is suggested that the lean tool 5S be applied, in other words, that a specific place be assigned for wheelchairs at the entrance. In addition, students use the ED corridors as a shortcut to the canteen; to prevent crowding and motion waste, it is suggested that a password feature be installed on the door so that it can be used in urgent situations only. The aim of this scenario is to see the effects of all the improvements together, so in addition to these small improvements, scenarios 2, 3, and 4 were simulated jointly. The improved simulation model was rerun for 100 replications to decrease the variance.
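In the terms of the sketches above, the combined run simply switches on all three model changes at once before re-running the 100 replications; the parameter names and the fallback values are the hypothetical ones introduced earlier.

SCENARIO = {
    "bed_status_screen": True,  # scenario 2: triage bed-check walk removed
    "direction_signs": True,    # scenario 3: direction-asking delay removed
    "consult_presence": True,   # scenario 4: consultation mean 112 -> 40 min
}

TRIAGE_WALK_MEAN = 0.0 if SCENARIO["bed_status_screen"] else 4.0
DIRECTION_ASK_MEAN = 0.0 if SCENARIO["direction_signs"] else 1.5  # minutes, hypothetical
CONSULT_MEAN = 40.0 if SCENARIO["consult_presence"] else 112.0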
The new arrangement decreased the length of stay from 197.412 to 153.603 min, decreased the yellow patients' bed-waiting time from 29.238 to 13.047 min, and eliminated the green patients' bed-waiting queue entirely. These reductions indicate that implementing all the suggested improvements in the model succeeded in minimizing waste in the process. The Arena outputs show that applying all the suggested improvements to the current Kocaeli University Education and Research Hospital ED would result in a more continuous flow in the process, in line with the intention of lean.
The Arena Output Analyzer was used to conduct a two-sample t-test with a 95%
confidence level.
H0: The means of the improved model and current model are equal.
H1: The means of the improved model and current model are not equal.
According to the results of the Arena Output Analyzer, H0 is rejected for all indicators. This means that applying all the improvements together in the simulation model decreased the patients' waiting times and length of stay significantly.
In order to increase patient satisfaction and the service quality level, and to provide continuous flow, the suggested improvements explained above were reflected in the flowchart. Some of these suggestions do not affect the flow of the process but do affect the layout. To make the current patient journey in the ED easier to compare with the new improved process flow, the changed processes are shown in yellow. Appendix 1 presents the improved flowchart for the analyzed ED and Appendix 2 shows the improved Arena simulation.
Conclusion
The main objective of this study was to focus on the wastes that add no value for customers throughout the process. The service sector has started to adopt lean in order to increase quality. Lean, as a quality tool, defines the non-value-adding activities and classifies them into seven wastes that should be minimized to increase quality. Lean eliminates these wastes, which disturb the patients' process in the ED, in order to achieve a smooth flow.
After observations and a focus group with the medical staff of the ED, the process flow was drawn up and the problems discussed were associated with the seven wastes, with the aim of minimizing them. To track the current situation and the suggested improvements, the Arena simulation program was used as a lean tool. After the data had been collected from the hospital's Information Technology department and confirmed by a focus group and observations, their distributions were obtained with the Arena Input Analyzer. The data were then entered into the simulation program, which was run for 100 replications, and different scenarios were tested in Arena to see the effect of each improvement. As the main quality indicators are the length of stay and the waiting times of the patients, the effects of the scenarios on these indicators were considered.
As the lean approach advocates, small improvements to the current situation were suggested. With regard to the triage officer walking to check bed availabilities, it is suggested that a monitor be purchased to eliminate the waiting waste of the patients and the motion waste of triage. With regard to the delayed consultation stage in the process, it is advised that a new feature be added to the HUY system to prevent the medical staff from selecting a doctor who is unavailable or not present at the hospital. With regard to the defect waste stemming from writing patient information on tubes and beakers by hand, it is suggested that barcodes containing this information be stuck on these items to increase the quality. To address the transportation waste of wheelchairs left in corridors, it is suggested that a wheelchair area be designated at the entrance within the scope of the lean tool 5S. To minimize the motion waste of the medical faculty's students, it is advised that a password or card-pass feature be installed on the ED door that opens to the faculty. Finally, to deal with the unnecessary motion waste of patients who cannot find other departments and occupy the registration and triage staff, it is recommended that colored signs indicating the way to the most frequently requested departments be placed on the floor to guide the patients.
These improvements were added to the newly designed flowchart, with the changed processes shown in yellow. When the improvements were implemented in the Arena simulation model, the total length of stay decreased, on average, from 197.412 to 153.603 min, the waiting time of the green patients decreased from 10.073 to 0 min, and the waiting time of the yellow patients decreased, on average, from 29.238 to 13.047 min. The two-sample t-tests showed that the simulation results differ significantly from those of the current situation. With these improvements, the non-value-adding wastes can be minimized, the quality of the service that the patients receive will increase, and patients will be more satisfied thanks to the continuous process.
With regard to further studies, these improvements should be maintained at the ED of Kocaeli University Education and Research Hospital, as lean culture focuses on continuous improvement. The adoption of lean by the ED should not be limited to the recommendations of this study but should continue.
References
Chan HY, Lo SM, Lee LLY, Lo WYL, Yu WC, Wu YF, Ho ST, Yeung RSD, Chan JTS (2014)
Lean techniques for the improvement of patients’ flow in emergency department. World J
Emerg Med 5(1):24–28
Curatolo N, Lamouri S, Huet JC, Rieutord A (2014) A critical analysis of lean approach
structuring in hospitals. Bus Process Manag J 20(3):433–454
Çavuş MF, Gemici E (2013) Total quality management in health sector. J Acad Soc Sci 1(1):238–257
Dibia IK, Dhakal HN, Onuh S (2014) Lean “leadership people process outcome” (LPPO) implementation model. J Manuf Technol Manag 25(5):694–711
Dickson EW, Singh S, Cheung DS, Wyatt CC, Nugent AS (2009) Application of lean
manufacturing techniques in the emergency department. J Emerg Med 37(2):177–182
Fillingham D (2007) Can lean save lives? Leadersh Health Serv 20(4):231–241
Lamba M, Altan Y, Aktel M, Kerman U (2014) Reconstruction in the ministry of health: an
evaluation in terms of the new public management. Amme İdaresi Dergisi 47(1):53–78
Mazzocato P, Holden RJ, Brommels M, Aronsson H, Backman U, Elg M, Thor J (2012) How does
lean work in emergency care? A case study of a lean-inspired intervention at the Astrid
Lindgren Children’s hospital, Stockholm, Sweden. BMC Health Serv Res 12(1):28
Mazzocato P, Thor J, Backman U, Brommels M, Carlsson J, Jonsson F, Hagmar M, Savage C
(2014) Complexity complicates lean: lessons from seven emergency services. J Health Organ
Manag 28(2):266–288
Robinson S, Radnor ZJ, Burgess N, Worthington C (2012) SimLean: Utilising simulation in the
implementation of lean in healthcare. Eur J Oper Res 219(1):188–197
Wang TK, Yang T, Yang CY, Chan FT (2015) Lean principles and simulation optimization for
emergency department layout design. Ind Manag Data Syst 115(4):678–699
Womack JP, Byrne AP, Fiume OJ, Kaplan GS, Toussaint J (2005) Going Lean in Health Care. Institute for Healthcare Improvement, Cambridge, MA