Statistical and Computational Techniques in Manufacturing (2012)
J. Paulo Davim (Ed.)
Editor
J. Paulo Davim
University of Aveiro
Campus Santiago
Department of Mechanical Engineering
Aveiro
Portugal
In recent years, there has been increased interest in developing statistical and computational
techniques for application in manufacturing engineering. Today, owing to the great
complexity of manufacturing engineering and the high number of parameters involved,
conventional approaches are no longer sufficient. Statistical and computational
techniques have therefore found several applications in manufacturing, namely modelling
and simulation of manufacturing processes, optimisation of manufacturing parameters,
monitoring and control, computer-aided process planning, etc.
Chapter 1 of the book presents design of experiment methods in manufacturing
(basics and practical applications). Chapter 2 is dedicated to stream-of-variation
based quality assurance for multi-station machining processes (modelling
and planning). Chapter 3 describes finite element modelling of chip formation in
orthogonal machining. Chapter 4 contains information on GA-fuzzy approaches
(application to modelling of manufacturing processes), and Chapter 5 is dedicated to
single and multi-objective optimization methodologies in CNC machining. Chapter
6 describes numerical simulation and prediction of wrinkling defects in sheet metal
forming. Finally, Chapter 7 is dedicated to manufacturing seamless reservoirs by
tube forming (finite element modelling and experimentation).
The present book can be used as a research text for a final undergraduate
engineering course or as a topic on manufacturing at the postgraduate level. It can also
serve as a useful reference for academics, manufacturing and computational
sciences researchers, manufacturing, industrial and mechanical engineers, and
professionals in manufacturing and related industries. Scientific interest in
this book is evident for many important research centers, laboratories and
universities, as well as industry. It is therefore hoped that this book will inspire and
enthuse others to undertake research in this field of statistical and computational
techniques in manufacturing.
The Editor acknowledges Springer for this opportunity and for their enthusiastic
and professional support. Finally, I would like to thank all the chapter authors
for their availability for this work.
7.6 Applications  271
7.6.1 Performance and Feasibility of the Process  272
7.6.2 Requirements for Aerospace Applications  276
7.7 Conclusions  279
References  279
Subject Index  283
Contributors
Viktor P. Astakhov
General Motors Business Unit of PSMi, 1255 Beach Ct., Saline MI 48176, USA
[email protected]
1.1 Introduction
This could lead to a better understanding of when to use given tools and methods,
as well as contribute to the invention of new discovery tools and the refinement
of existing ones. This chapter concentrates on one of the most powerful
statistical methods, known as design of experiments (hereafter, DOE) or experimental
design.
DOE is a formal statistical methodology that allows an experimentalist to establish
a statistical correlation between a set of input variables and a chosen output
of the system/process under study, under certain uncertainties called uncontrolled
inputs. This definition is visualized in Figure 1.1, where (x1,
x2, ..., xn) are the n input variables selected for the analysis; (y1, y2, ..., ym) are the m possible
system/process outputs, from which one should be selected for the analysis; and
(z1, z2, ..., zp) are the p uncontrollable inputs (over which the experimentalist has no influence),
often referred to as noise.
The system/process is designated in Figure 1.1 as a black box¹, i.e. a device,
system or object which can be viewed solely in terms of its input, output, and
transfer (correlation) characteristics, without any knowledge of its internal workings;
that is, its implementation is "opaque" (black). As a result, any model established
using DOE is not a mathematical model (although it is expressed using
mathematical symbols) but a formal statistical or correlation model, which does not
have the physical sense of a mathematical model derived from the equations of
mathematical physics with the corresponding boundary conditions. Therefore, no attempt
should normally be made to draw physical conclusions from the obtained model,
as a statistical correlation can be established between an input and an output
which are not physically related.
For example, one can establish a 100% correlation between the rate of grass
growth in one's front yard and the level of water in a pond located in the neighborhood
by carrying out a perfectly correct statistical analysis. However, these two are
not physically related. This physically misleading conclusion is obtained only because
the amount of rainfall, which affects both the rate of grass growth and the level
of water, was not considered in the statistical analysis. Therefore, the experimentalist
should pay prime attention to the physical meaning of what he is doing at all
stages of DOE in order to avoid physically meaningless but statistically correct results.
In the author's opinion, this is the first thing that a potential user of DOE
should learn about this method.
¹ The modern term "black box" seems to have entered the English language around 1945.
The process of network synthesis from the transfer functions of black boxes can be traced
to Wilhelm Cauer, who published his ideas in their most developed form in 1941. Although
Cauer did not himself use the term, others who followed him certainly did describe
the method as black-box analysis.
Design of Experiment Methods in Manufacturing: Basics and Practical Applications 3
A great number of papers, manuals, and books have been written on the subject in
general, and as related to manufacturing in particular. Moreover, a number of special
(for example, Statistica), specialized (for example, Minitab) and common (for
example, MS Excel) computer programs are available to assist one in carrying out
DOE, with endless examples available on the Web. Everything seems to be known;
the terminology, procedures, and analyses are well developed. Therefore, a logical
question, why this chapter is needed, should be answered.
The simple answer is that this chapter is written from the experimentalist's side
of the fence rather than the statistical side used in the vast majority of publications
on DOE. As the saying goes, "The grass is always greener on the other side of
the fence," i.e. "statisticians" often do not see many of the real-world problems in
preparing proper tests and collecting relevant data to be analyzed using DOE.
Moreover, as mentioned above, the dangers of using complex, easy-to-use statistical
software with colorful user interfaces and nice-looking graphical
representations of the results grow while user comprehension, and thus control,
are diminished.
This chapter does not cover basic statistics, so a general knowledge of statistics,
including the probability concept, regression and correlation analysis, statistical
distributions, and statistical data analysis including survey sampling, should be
refreshed prior to working with this chapter. Rather, it presents an overview of
various DOE methods to be used in manufacturing, commenting on their suitability for
particular cases.
conditions. A special design called a Split Plot can be used if some of the
factors are hard to vary.
• Treatments: A treatment is the condition or a factor associated with a spe-
cific level in a specific experiment.
• Experimental units: Experimental units are the objects or entities that are
used for application of treatments and measurements of resulting effects.
• Factorial experimentation is a method in which the effects due to each
factor and to combinations of factors are estimated. Factorial designs are
geometrically constructed and vary all the factors simultaneously and
orthogonally. Factorial designs collect data at the vertices of a cube in
k dimensions (k is the number of factors being studied). If data are collected
from all of the vertices, the design is a full factorial, requiring 2^k runs,
provided that each factor is run at two levels.
• Fractional factorial experimentation includes a group of experimental designs
consisting of a carefully chosen subset (fraction) of the experimental
runs of a full factorial design. The subset is chosen so as to exploit the
sparsity-of-effects principle to expose information about the most important
features of the problem studied, while using a fraction of the effort of a full
factorial design in terms of experimental runs and resources. As the number
of factors increases, the fractions become smaller and smaller (1/2, 1/4,
1/8, 1/16, ...). Fractional factorial designs collect data from a specific subset
of all possible vertices and require 2^(k−q) runs, with 2^(−q) being the fractional
size of the design. If there are only three factors in the experiment, the
geometry of the experimental design for a full factorial experiment requires
eight runs, and a one-half fractional factorial experiment (an inscribed
tetrahedron) requires four runs.
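The geometry above can be sketched in a few lines of code. The following is a minimal illustration (not from the book) that enumerates the 2^k cube vertices of a full factorial in coded −1/+1 units and the half fraction obtained by keeping the runs where the product of all factor columns is +1 (the highest-order interaction used as the defining relation):

```python
from functools import reduce
from itertools import product

def full_factorial(k):
    """All 2^k vertices of the k-dimensional cube, coded as -1/+1."""
    return list(product((-1, +1), repeat=k))

def half_fraction(k):
    """One-half fraction: keep runs where the product of all factor levels
    is +1, i.e. the highest-order interaction is the defining relation."""
    return [run for run in full_factorial(k)
            if reduce(lambda a, b: a * b, run) == +1]

full = full_factorial(3)   # 2^3 = 8 runs (cube vertices)
half = half_fraction(3)    # 2^(3-1) = 4 runs (inscribed tetrahedron)
print(len(full), len(half))  # 8 4
```

For three factors this reproduces the eight-run full factorial and the four-run inscribed tetrahedron mentioned above.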
To visualize DOE terminology, process, and its principal stages, consider the fol-
lowing diagram of a cake-baking process shown in Figure 1.2. There are five as-
pects of the process that are analyzed by a designed experiment: factors, levels of
the factors, experimental plan, experiments, and response.
1.3 Response
The response must satisfy certain requirements. First, the response should be the
effective output in terms of reaching the final aim of the study. Second, the
response should be easily measurable, preferably quantitatively. Third, the
response should be a single-valued function of the chosen parameters. Unfortunately,
these three important requirements are rarely mentioned in the literature
on DOE.
8 V.P. Astakhov
Fig. 1.2. Visualization of DOE terminology, process and its principal stages
Selection of factor levels is one of the most important, yet least formalized and
thus least discussed, stages. Each factor selected for the DOE study has a certain global
range of variation. Within this range, a local sub-range to be used in DOE is to be
defined. Practically, this means that the upper and lower limits of each included factor
should be set. To do this, one should use all available information, such as
experience, results of previous studies, expert opinions, etc.
In manufacturing, the selection of the upper limit of a factor is subject to
equipment, physical, and statistical limitations. The equipment limitation means
that the upper limit of the factor cannot be set higher than that allowed by the
equipment used in the tests. In the example considered in Figure 1.2, the oven
temperature cannot be selected higher than the maximum temperature available
in the oven. In machining, if the test machine has a maximum spindle speed of
5000 rpm and drills of 3 mm diameter are to be tested, then the upper limit of the cutting speed
cannot be more than 47.1 m/min. The physical limitation means that the upper
limit of the factor cannot be set higher than that at which an undesirable physical
transformation may occur. In the example considered in Figure 1.2, the oven temperature
cannot be selected higher than that at which the edges of the cake will be
burned. In machining, the feed should not exceed the so-called breaking feed (the
uncut chip thickness or chip load) [19], above which extensive chipping or
breakage of the cutting insert occurs. The statistical limitation is rarely discussed in
the literature on DOE in manufacturing. This limitation sets the upper limit of a
factor to the maximum for which the automodelity of its variance is still valid. This
assures much less potential problems with the row variances in the further statis-
tical analysis of the test results.
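The equipment limit in the drilling example can be checked with the standard cutting-speed relation v = πDn/1000 (v in m/min, D in mm, n in rpm). A small sketch (the function name is the author's own, not from the chapter):

```python
import math

def max_cutting_speed(drill_diameter_mm: float, max_rpm: float) -> float:
    """Upper equipment limit on cutting speed: v = pi * D * n / 1000, m/min."""
    return math.pi * drill_diameter_mm * max_rpm / 1000.0

# 3 mm drill on a machine limited to 5000 rpm, as in the text
v_max = max_cutting_speed(3.0, 5000)
print(f"{v_max:.1f} m/min")  # 47.1 m/min
```

Any candidate upper level of the cutting-speed factor must lie below this value.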
There is another limitation on the selection of the upper and lower limits of the
factors chosen for DOE. This limitation requires that the factor combinations be
compatible, i.e. all the required combinations of the factors should be
physically realizable on the setup used in the study. For example, if a combination
of cutting speed and feed results in drill breakage, then this combination cannot be
included in the test. Often, chatter occurs at high cutting regimes, which limits the
combinations of the regime parameters. In the example considered in Figure 1.2, it
could happen that, for a certain number of eggs and the low limit of the oven temperature,
the desired consistency of the baked cake cannot be achieved no matter what
the baking time is. Although it sounds simple, factor compatibility is not always
obvious at the stage of selecting the limits. If some factor incompatibility
is found in the tests, the time and resources spent on the test are wasted and the
whole DOE must be re-planned from scratch.
Mathematically, the defined combination of the selected factors can be thought
of as a point in the multi-dimensional factorial space. The coordinates of this point
are called the basic (zero) levels of the factors, and the point itself is termed as the
zero point [1, 20].
The interval of factor variation is the number which, when added to the zero
level, gives the upper limit and, when subtracted from the zero level, gives the
lower limit. The numerical value of this interval is chosen as the unit of a new
scale for each factor. To simplify the notation of the experimental conditions and
the procedure of data analysis, this new scale is selected so that the upper limit
corresponds to +1, the lower limit to −1 and the basic level to 0. For the factors having
continuous domains, a simple transformation formula is used

x̃_i = (x_i − x_{i0}) / Δx_i    (1.1)

where x̃_i is the new (coded) value of factor i, x_i is its true (or real) value
(the upper or lower limit), x_{i0} is the true (real) value of the zero level of the factor,
Δx_i is the interval of factor variation (in true (real) units) and i is the number
of the factor.
In the example considered in Figure 1.2, assume that the maximum oven
temperature in the test is selected to be x_{A,max} = 200 °C and the minimum
temperature is x_{A,min} = 160 °C. Obviously, x_{A,0} = 180 °C, so that Δx_A = 20 °C.
Then the maximum of factor A in the scale set by Eq. (1.1) is calculated
as x̃_{A,max} = (200 − 180)/20 = +1, while its minimum is calculated as
x̃_{A,min} = (160 − 180)/20 = −1. The other factors in the test shown in Figure 1.2 are
set as follows: x_{B,min} = 1 cup, x_{B,max} = 2 cups; x_{C,min} = 2 cups, x_{C,max} = 3 cups;
x_{D,min} = 1 egg, x_{D,max} = 3 eggs.
Equation (1.1) is used in practically any type of DOE for the direct and reverse
transformations of the factors. The latter is needed to convert the correlation model
obtained as a result of using DOE into the real scale of the factors. Unfortunately, this
is a routinely forgotten step in the representation of DOE results.
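The direct and reverse transformations of Eq. (1.1) can be sketched as a pair of helper functions (names are illustrative, not from the chapter), checked against the oven-temperature example above:

```python
def to_coded(x, x_zero, dx):
    """Eq. (1.1): real factor value -> coded value on the -1..+1 scale."""
    return (x - x_zero) / dx

def to_real(x_coded, x_zero, dx):
    """Reverse transformation: coded value -> real factor value."""
    return x_zero + x_coded * dx

# Oven-temperature factor A: limits 160..200 C, zero level 180 C, interval 20 C
print(to_coded(200, 180, 20))  # 1.0  (upper limit -> +1)
print(to_coded(160, 180, 20))  # -1.0 (lower limit -> -1)
print(to_real(0, 180, 20))     # 180  (basic level)
```

The reverse transformation is the step needed to express a fitted correlation model back in real units.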
Table 1.1. A 2^4 factorial design of 16 runs, with the response labeled according to conventional
notation for the factor levels. Real variables are added for clarity.
The run called '1', for example, has all four factors set to their low level, whereas
the run called '2' has factors A, B, and C set to their low level and D set to
its high level. Note that the estimated effect in going from the low level of A,
say, to the high level of A is based on comparing the averages of 8 observations
taken at the low level with 8 observations taken at the high level. Each of these
averages has a variance equal to 1/8 of the variance of a single observation; in
other words, to get the same information from an OFAT design one would need
8 runs with A at −1 and 8 runs with A at +1, all other factors held constant. Repeating
this for each factor would require 64 runs, instead of 16. The balance
of the 2^4 design ensures that one can estimate the effects for each of the four
factors in turn from the average of 8 observations at the high level compared
to 8 observations at the low level: for example, the main effect of D is estimated
by the difference between the average of the 8 responses with D at its high level
and the average of the 8 responses with D at its low level.
For example, the interaction of factors A and B is estimated by the contrast given
in the fourth column of Table 1.2, which takes the difference between the difference
of responses with A at the high level and A at the low level and the difference
of responses with B at the high level and B at the low level. The column of signs
in Table 1.2 for the interaction effect AB was obtained simply by multiplying the
A column by the B column, and all the other columns are similarly constructed.
This illustrates two advantages of designed experiments: the analysis is very
simple, based on linear contrasts of observations, and, as well as efficiently estimating
the average effects of each factor, it is possible to estimate interaction effects
with the same precision. Interaction effects can never be measured with OFAT designs,
because two or more factors are never changed simultaneously.
The analysis, by focusing on averages, implicitly assumes that the responses are
best compared by their mean and variance, which is typical of observations that
follow a Gaussian distribution. However, the models can be extended to more general
settings [1].
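The contrast-based analysis described above can be sketched for a 2^3 design. The response values below are hypothetical, for illustration only; the main point is that main effects and the AB interaction are all estimated the same way, as a difference of signed averages, with the interaction column built by elementwise multiplication:

```python
from itertools import product

# Coded 2^3 design in standard order: columns A, B, C at -1/+1
runs = list(product((-1, +1), repeat=3))

# Hypothetical responses, one observation per run (illustration only)
y = [45, 71, 48, 65, 68, 60, 80, 65]

def effect(contrast):
    """Average response at +1 minus average response at -1 for a contrast column."""
    hi = [yi for c, yi in zip(contrast, y) if c == +1]
    lo = [yi for c, yi in zip(contrast, y) if c == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

A = [r[0] for r in runs]
B = [r[1] for r in runs]
AB = [a * b for a, b in zip(A, B)]  # interaction column: product of A and B signs

print(effect(A), effect(B), effect(AB))  # 11.0 3.5 5.0
```

Each estimate uses all eight observations, which is the efficiency advantage over OFAT noted in the text.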
The discussed factorial DOE allows accurate estimation of all the factors involved
and their interactions. However, the cost and time needed for such a test increase
with the number of factors considered. Normally, any manufacturing test includes
a great number of independent variables. In the testing of drills, for example, there
are a number of tool geometry variables (the number of cutting edges, rake angles,
flank angles, cutting edge angles, inclination angles, side cutting edge back taper
angle, etc.) and design variables (web diameter; cutting fluid hole shape, cross-sectional
area and location; profile angle of the chip removal flute; length of the
cutting tip; the shank length and diameter, etc.) that affect drill performance.
Table 1.3 shows the number of runs needed for the full factorial DOE
considered above. In laboratory conditions, at least three repetitions of the same
run are needed, while if the test is run under shop-floor conditions then at least 5
repetitions should be carried out in a randomized manner. The poorer the test conditions
and the greater the number of uncontrollable variables, the greater the number of
repetitions at the same point of the design matrix needed to pass the statistical
analysis of the obtained experimental data.
Therefore, there is always a dilemma. On the one hand, to keep the costs and time
spent on the test at a reasonable level, it is desirable to include in consideration
only a limited number of essential factors carefully selected by experts. On the
other hand, if even one essential factor is missed, the final statistical model may
not be adequate to the process under study. Unfortunately, there is no way to justify
the decisions made at the pre-process stage about the number of essential variables
prior to the tests. If a mistake is made at this stage, it may show up only at the
final stage of DOE, when the corresponding statistical criteria are examined. Obviously,
it is too late then to correct the test results by adding the missed factor. The
theory of DOE offers a few ways to deal with such a problem [20, 21].
The first relies on the collective experience of the experimentalist(s) and the
research team in the determination of significant factors. The problem with such an
approach is that one or more factors could be significant or not, depending on the
particular test objectives and conditions. For example, the backtaper angle in drills
is not a significant factor in drilling cast irons, but it becomes highly significant in
machining titanium alloys.
A second way is to use a screening DOE. This method appears to be more promising
in terms of its objectivity. A screening DOE is used when a great number of
factors are to be investigated using a relatively small number of tests. These kinds
of tests are conducted to identify the significant factors for further full factorial
analysis. The most common type of screening DOE is the so-called fractional
factorial DOE.
Table 1.3. Two-level designs: minimum number of runs as a function of the number of factors
for full factorial DOE

Factors   Runs (1 repetition)   3 repetitions (laboratory)   5 repetitions (shop floor)
1         2                     6                            10
2         4 = 2^2               12                           20
3         8 = 2^3               24                           40
4         16 = 2^4              48                           80
5         32 = 2^5              96                           160
6         64 = 2^6              192                          320
7         128 = 2^7             384                          640
8         256 = 2^8             768                          1280
9         512 = 2^9             1536                         2560
10        1024 = 2^10           3072                         5120
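The growth shown in Table 1.3 is just 2^k scaled by the number of repetitions; a one-line sketch (the function name is illustrative) reproduces the rows:

```python
def runs_needed(n_factors: int, repetitions: int = 1) -> int:
    """Minimum runs for a two-level full factorial: 2^k times the repetitions."""
    return 2 ** n_factors * repetitions

# A few rows of Table 1.3: single run, 3 reps (laboratory), 5 reps (shop floor)
for k in (1, 5, 10):
    print(k, runs_needed(k), runs_needed(k, 3), runs_needed(k, 5))
# 1 2 6 10
# 5 32 96 160
# 10 1024 3072 5120
```

The exponential term is what makes screening designs attractive once the factor count grows.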
The first goal of RSM is to find the optimum response. When there is more
than one response, it is important to find the compromise optimum that does
not optimize only one response. When there are constraints on the design data,
the experimental design has to meet the requirements of the constraints. The
second goal is to understand how the response changes in a given direction by adjusting
the design variables. In general, the response surface can be visualized
graphically. The graph is helpful for seeing the shape of the response surface: hills, valleys,
and ridge lines.
A plot showing how the response changes with respect to changes in the factor
levels is a response surface, a typical example of which is shown in Figure 1.3a as
a three-dimensional perspective plot. In this graph, each pair of values of x1 and x2
generates a y-value. This three-dimensional graph shows the response surface from
the side, and it is called a response surface plot.
Sometimes it is less complicated to view the response surface in two-dimensional
graphs. Contour plots show the contour lines of the x1 and x2 pairs
that have the same response value y. An example of a contour plot is shown in
Figure 1.3b.
Graphs are helpful tools for understanding the surface of a response. But
when there are more than two independent variables, graphs are difficult or almost
impossible to use to illustrate the response surface, since it is beyond three dimensions.
RSM methods are utilized in the optimization phase. Due to the high volume of
experiments, this phase focuses on a few highly influential variables, usually 3 to
5. The typical tools used for RSM are the central composite design (CCD) and the
Box-Behnken design (BBD) [1]. With the aid of software, the results of these
complex designs are exhibited pictorially in 3D as 'mountains' or 'valleys' to illustrate
performance peaks.
Fig. 1.3. Graphical representation of the outcome of RSM: (a) response surface plot, (b)
contour plot
I = ABCDE
A = BCDE
B = ACDE
C = ABDE
D = ABCE
E = ABCD
AB = CDE
AC = BDE
AD = BCE
AE = BCD
BC = ADE
BD = ACE
BE = ACD
CD = ABE
CE = ABD
DE = ABC
It follows from Table 1.5 that each main effect and 2-factor interaction is
aliased with a 3- or 4-way interaction, which was assumed to be negligible. Thus,
ignoring effects expected to be unimportant, one can still obtain estimates of all
main effects and 2-way interactions. As a result, fractional factorial DOE is often
used in the first stages of a study as a screening test to identify significant
factors, and thus to reduce the number of factors used in the full factorial tests.
The process of fractionalization can be continued; one might, for example, assign
a new factor F, say, to the ABC interaction (which is aliased with DE), giving
a 2^(6−2) design, sometimes called a 1/4 fraction of a 2^6. This allows one to assess
the main effects of 6 factors in just 16 runs, instead of 64, although now some
2-factor interactions will be aliased with each other.
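The alias pattern of the 2^(5−1) design follows mechanically from the defining relation I = ABCDE: multiplying an effect by the generator and cancelling squared letters (a symmetric difference of the letter sets) gives its alias. A minimal sketch:

```python
def alias(effect: str, defining_relation: str = "ABCDE") -> str:
    """Alias of an effect under the defining relation I = ABCDE:
    multiply the effect's letters by the generator; squared letters cancel,
    which is the symmetric difference of the two letter sets."""
    letters = set(effect) ^ set(defining_relation)
    return "".join(sorted(letters))

print(alias("A"))   # BCDE
print(alias("AB"))  # CDE
print(alias("AD"))  # BCE
```

Running this over all main effects and two-factor interactions reproduces the alias table above.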
There are very many variations on this idea; one is the notion of a screening design,
in which only main effects can be estimated, and everything else is aliased.
The goal is to quickly assess which of the factors are likely to be important, as a
step towards further experimentation involving these factors and their interactions.
Table 1.6 shows an 8-run screening design for 7 factors. The basic design is the 2^3
factorial in factors A, B, and C shown in the first 3 columns; then 4 new factors
have been assigned to the columns that would normally correspond to the interactions
BC, AC, AB, and ABC.
Table 1.6. A screening design for 7 factors in 8 runs, built from a 2^3 factorial design
A B C D E F G
1 −1 −1 −1 +1 +1 +1 −1
2 −1 −1 +1 −1 −1 +1 +1
3 −1 +1 −1 −1 +1 −1 +1
4 −1 +1 +1 +1 −1 −1 −1
5 +1 −1 −1 +1 −1 −1 +1
6 +1 −1 +1 −1 +1 −1 −1
7 +1 +1 −1 −1 −1 +1 −1
8 +1 +1 +1 +1 +1 +1 +1
Any modern DOE software (for example, Statistica, Design-Ease® and Design-Expert®,
Minitab, etc.) can generate the design matrix for a fractional factorial DOE
and provide a table of interactions similar to that shown in Table 1.5.
As mentioned above, the objective of DOE is to find the correlation between
the response and the factors included. All the factors included in the experiment are
varied simultaneously. The influence of unknown or non-included factors is
minimized by properly randomizing the experiment. Mathematical methods are
used not only at the final stage of the study, when the evaluation and analysis of
the experimental data are conducted, but also throughout all stages of DOE, i.e. from
the formalization of a priori information to the decision-making stage. This allows
important questions to be answered: "What is the minimum number of tests that
should be conducted? Which parameters should be taken into consideration?
Which method(s) is (are) better to use for the experimental data evaluation and
analysis?" [20, 21]. Therefore, one of the important stages in DOE is the selection
of the mathematical (correlation) model. An often-quoted insight of George Box
is, "All models are wrong. Some are useful" [22]. The trick is to have the simplest
model that captures the main features of the process.
Mathematically, the problem of DOE can be formulated as follows: define the
estimation E of the response surface, which can be represented by a function

E{y} = φ(x1, x2, ..., xk)    (1.4)

where y is the process response (for example, cutting temperature, tool life, surface
finish, cutting force, etc.) and xi, i = 1, 2, ..., k are the factors varied in the test (for
example, the tool cutting edge angle, cutting speed, feed, etc.).
The mathematical model represented by Eq. (1.4) is used to determine the gradient,
i.e., the direction in which the response changes faster than in any other.
This model represents the response surface, which is assumed to be continuous,
twice differentiable, and having only one extremum within the chosen limits of
the factors.
In general, the particular form of the mathematical model is initially unknown due
to insufficient knowledge of the phenomenon under consideration. Thus, a certain
approximation for this model is needed. Experience shows [21] that a power series or
polynomial (a Taylor series approximation to the unknown true functional form of the
response variable) can be selected as an approximation

y = β0 + Σi βi xi + ΣΣ(i≠j) βij xi xj + ΣΣΣ(i≠j≠k) βijk xi xj xk + ...    (1.5)

where β0 is the overall mean response, βi is the main effect of the ith factor (i = 1, 2, ...
, p), βij is the two-way interaction effect between the ith and jth factors, and βijk is
the three-way interaction effect between the ith, jth, and kth factors.
A general recommendation for setting the factor ranges is to set the levels far
enough apart that one would expect to see a difference in the response. The use
of only two levels seems to imply that the effects must be linear, but the assumption
of monotonicity (or near-monotonicity) of the response variable is sufficient. At least
three levels of the factors would be required to detect curvature.
Interaction is present when the effect of a factor on the response variable
depends on the setting level of another factor. Two factors are said to interact
when the effect of one factor varies under the different levels of the other factor.
Because this concept is very important in DOE, consider an example to clarify
it. Suppose that factor A is the silicon content of a heat-treatable aluminum alloy
and factor B is the heat-treating temperature. In this case the response
values may represent the hardness. Table 1.7 shows the response depending
upon the factors' settings. A plot of the data (Figure 1.4a) shows how the response
is a function of factors A and B. Since the lines are almost parallel, it can be
assumed that little interaction between the variables occurs.
Consider another set of data, as shown in Table 1.8. In this instance, the response
is the relative wear rate, factor A is the shaft speed, and factor B is the type
of lubricant. Again, a plot of the data (Figure 1.4b) shows how the response depends
on the factors. It can be seen that B2 is a better lubricant at low speed but not at
high speed. The crossing of the lines indicates an interactive effect between the
factors. In this case, factors A and B are interrelated, i.e. not independent.
Table 1.7.
Factor  B1  B2
A1      20  30
A2      40  52
Fig. 1.4. Concept of interaction: (a) no interaction of factors A and B, (b) interaction of
factors A and B
Table 1.8.
Factor  B1   B2
A1      1.5  2.0
A2      3.0  0.8
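The parallel-versus-crossing distinction can be quantified by the interaction contrast: the change of the B effect between the two levels of A. A sketch over the two 2×2 tables (the function name is illustrative):

```python
def interaction_contrast(table):
    """Interaction contrast for a 2x2 table keyed by (A-level, B-level):
    the B2-B1 effect at A2 minus the B2-B1 effect at A1.
    Near zero means parallel lines, i.e. little interaction."""
    return ((table[("A2", "B2")] - table[("A2", "B1")])
            - (table[("A1", "B2")] - table[("A1", "B1")]))

hardness = {("A1", "B1"): 20, ("A1", "B2"): 30,
            ("A2", "B1"): 40, ("A2", "B2"): 52}   # data of Table 1.7
wear = {("A1", "B1"): 1.5, ("A1", "B2"): 2.0,
        ("A2", "B1"): 3.0, ("A2", "B2"): 0.8}     # data of Table 1.8

print(interaction_contrast(hardness))  # 2: nearly parallel lines
print(interaction_contrast(wear))      # about -2.7: crossing lines
```

The small value for the hardness data matches the nearly parallel lines of Figure 1.4a, while the large negative value for the wear data reflects the crossing lines of Figure 1.4b.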
The βij terms in Eq. (1.5) account for the two-way interactions. Two-way interactions
can be thought of as corrections to a model of simple additivity of the
factor effects, the model with only the βi terms in Eq. (1.5). The use of the simple
additive model assumes that the factors act separately and independently on the
response variable, which is not always a very reasonable assumption.
The accuracy of such an approximation depends upon the order (power)
of the series. To reduce the number of tests at the first stage of an experimental
study, a polynomial of the first order, i.e. a linear model, is usually sufficient.
Such a model is successfully used to calculate the gradient of the response and thus
to reach the stationary region. When the stationary region is reached, a
polynomial containing terms of the second, and sometimes the third, order may
be employed.
Experience shows [23, 24] that a model containing linear terms and interactions
of the first order can be used successfully in metal cutting. Such a model can be
represented as

y = β0 + Σi βi xi + ΣΣij βij xi xj    (1.6)
The coefficients of Eq. (1.6) are to be determined from the tests. Using the experimental
results, one can determine the regression coefficients b0, bi, and bij,
which are estimates of the theoretical regression coefficients β0, βi, and βij.
Thus, the regression equation constructed using the test results has the following
form

E{y} = ŷ = b0 + Σi bi xi + ΣΣij bij xi xj    (1.7)
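Estimating the b coefficients of Eq. (1.7) is an ordinary least-squares problem. A minimal sketch for a coded 2^2 design with an interaction column; the response values are hypothetical, for illustration only (numpy is assumed to be available):

```python
import numpy as np

# Coded 2^2 design; model of Eq. (1.7): y = b0 + b1*x1 + b2*x2 + b12*x1*x2
x1 = np.array([-1, -1, +1, +1])
x2 = np.array([-1, +1, -1, +1])
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])  # model matrix

# Hypothetical responses for the four runs
y = np.array([52.0, 60.0, 70.0, 86.0])

# Least-squares estimates b0, b1, b2, b12 of beta0, beta_i, beta_ij
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # b0 = 67, b1 = 11, b2 = 6, b12 = 2
```

Because the coded design matrix is orthogonal, each estimate is simply the corresponding signed average of the responses, consistent with the contrast analysis earlier in the chapter.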
III Main effects are linearly combined with two-way interactions (βi + βjk).
IV Main effects are linearly combined with three-way interactions (βi + βjkl) and
two-way interactions with each other (βij + βkl).
V Main effects and two-way interactions are not linearly combined except with
higher-order interactions (βi + βjklm) and (βij + βklm).
each column, and any pair of symbols appears an equal number of times in any
pair of columns. An orthogonal array of size n × (n − 1) with two symbols in each
column specifies an n-run screening design for n − 1 factors. The designs with
symbols ±1 are called Plackett-Burman designs and Hadamard matrices defining
them have been shown to exist for all multiples of four up to 424.
More generally, an n × k array with mi symbols in the ith column is an orthogonal array of strength r if all possible combinations of symbols appear equally
often in any r columns. The symbols correspond to levels of a factor. Table 1.10
gives an orthogonal array of 18 runs, for 6 factors with three levels each.
run A B C D E F
1 −1 −1 −1 −1 −1 −1
2 −1 0 0 0 0 0
3 −1 +1 +1 +1 +1 +1
4 0 −1 −1 0 0 +1
5 0 0 0 +1 +1 −1
6 0 +1 +1 −1 −1 0
7 +1 −1 0 −1 +1 0
8 +1 +1 −1 +1 0 −1
9 +1 +1 −1 +1 0 +1
10 −1 −1 +1 +1 0 0
11 −1 0 −1 −1 +1 +1
12 −1 +1 0 0 −1 −1
13 0 −1 0 +1 −1 +1
14 0 0 +1 −1 0 −1
15 0 +1 −1 0 +1 0
16 +1 −1 +1 0 +1 −1
17 +1 0 −1 +1 −1 0
18 +1 +1 0 −1 0 +1
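The defining property of a strength-2 orthogonal array, namely that every combination of symbols appears equally often in every pair of columns, is easy to verify programmatically. A minimal sketch (generic code checked on a small two-level array, not on the Table 1.10 data):

```python
from itertools import combinations
from collections import Counter

def is_orthogonal_strength2(array):
    """Check that, in every pair of columns, every combination of
    symbols occurs, and occurs equally often (strength-2 property)."""
    n_cols = len(array[0])
    for c1, c2 in combinations(range(n_cols), 2):
        counts = Counter((row[c1], row[c2]) for row in array)
        symbols1 = {row[c1] for row in array}
        symbols2 = {row[c2] for row in array}
        if len(counts) != len(symbols1) * len(symbols2):
            return False  # some combination never appears
        if len(set(counts.values())) != 1:
            return False  # combinations appear with unequal frequency
    return True

# A 4-run, two-level orthogonal array for 3 factors (a 2^(3-1) half-replica)
oa = [(-1, -1, +1),
      (-1, +1, -1),
      (+1, -1, -1),
      (+1, +1, +1)]
print(is_orthogonal_strength2(oa))  # True
```

The same function applies unchanged to mixed-level arrays such as the 18-run, three-level design of Table 1.10.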
While there is not much difference between the two types of DOE for simpler experiment designs, for mixed-level factor designs and for building robustness into products and processes, the Taguchi approach offers some revolutionary concepts that were previously unknown even to expert experimenters. These include a standard method for array modifications, experiment designs that include noise factors in an outer array, signal-to-noise ratios for the analysis of results, a loss function to quantify design improvements in terms of dollars, the treatment of systems with dynamic characteristics, etc.
Although the Taguchi method was developed as a powerful statistical method for shop-floor quality improvement, far too many researchers have been using it as a research and even optimization method in manufacturing, and thus in metal cutting studies (for example, [25-29]), which, in the author's opinion, is unacceptable because the Taguchi methods suffer from the same problems as any fractional factorial DOE.
Unfortunately, it became popular to use only a fraction of the number of test combinations needed for a full factorial design. That interest spread because many practitioners do not take the time to find out, or for other reasons never realized, the “price” paid when one uses fractional factorial DOEs, including the Taguchi method: (1) certain interaction effects lose their contrast, so knowledge of their existence is gone; (2) significant main effects and important interactions have aliases – other ‘confounding’ interaction names. Thus wrong answers can, and often do, come from the time, money, and effort of the experiment.
Books on DOE written by ‘statistical’ specialists add confusion to the matter by claiming that interactions (three-factor or higher order) would be too difficult to explain, or could not be important. Yet the ideal gas law (formulated in 1834 by Émile Clapeyron), known from high-school physics as

PV = nRT (1.8)

(where P is the pressure of the confined gas, V is the volume of the confined gas, n is the number of moles of gas, R is the gas constant, and T is the temperature), plots as a simple graph. It depicts a three-factor interaction affecting the response y, whether taken as pressure or as volume. The authors of these statistical books/papers may have forgotten their course in physics.
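The interaction is easy to see numerically: in Eq.(1.8) the effect on P of changing T depends on the levels of V and n, which is exactly what an interaction means. A small sketch (illustrative values):

```python
R = 8.314  # J/(mol*K), gas constant

def pressure(n, T, V):
    """Ideal gas law, Eq. (1.8): P = nRT/V (SI units)."""
    return n * R * T / V

n = 1.0  # mol
# Effect on P of raising T from 300 K to 400 K, at two different volumes:
dP_small_V = pressure(n, 400, 0.01) - pressure(n, 300, 0.01)
dP_large_V = pressure(n, 400, 0.10) - pressure(n, 300, 0.10)
print(dP_small_V, dP_large_V)
# The temperature effect is ten times larger at the smaller volume, so T
# and V interact; an additive (main-effects-only) model cannot capture this.
```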
The problem is that the ability of the Taguchi method is greatly overstated by its promoters, who described Taguchi orthogonal tables as Japan's ‘secret super weapon’ and the real reason for its international reputation for quality. The claim was that a large number of variables could now be handled with practical efficiency in a single DOE. As further details became available, many professionals realized that these arrays were fractional factorials, and that Taguchi went to greater extremes than other statisticians in the degree of fractionating. According to the Taguchi method, the design is often filled with as many single factors for which it
Design of Experiment Methods in Manufacturing: Basics and Practical Applications 25
has room. The design becomes “saturated” so that no degrees of freedom are left for its proper statistical analysis. The growing interest in the Taguchi method in research and optimization studies in manufacturing attests to the fact that manufacturing researchers either are not aware of the above-mentioned “price” paid for apparent simplicity or know of no other way to handle more and more variables at one time.
Plackett and Burman [30] developed a special class of fractional factorial experiments that includes interactions. When this kind of DOE (referred to as the Plackett–Burman DOE) is conducted properly using a completely randomized sequence, its distinctive feature is high resolution. Despite a number of disadvantages (for example, mixed estimation of regression coefficients), this method provides high-contrast diagrams for the factors included in the test as well as for their interactions of any order. This advantage of the Plackett–Burman DOE is very useful in screening tests.
This section presents a simple methodology of screening DOE to be used in manufacturing tests [31]. The method, referred to as the sieve DOE, has its foundation in the Plackett–Burman design ideas, an oversaturated design matrix and the method of random balance. The proposed sieve DOE allows the experimentalist to include as many factors as needed at the first phase of the experimental study and then to sieve out the non-essential factors and interactions by conducting a relatively small number of tests. It is understood that no statistical model can be produced at this stage. Instead, this method allows the experimentalist to determine the most essential factors and their interactions to be used at the second stage of DOE (full factorial or RSM DOE).
The proposed sieve DOE includes the method of random balance. This method
utilizes oversaturated design plans where the number of tests is fewer than the
number of factors and thus has a negative number of degrees of freedom [32]. It is
postulated that if the effects (factors and their interactions) taken into consideration are arranged as a decaying sequence (in the order of their impact on the variance of the response), this will approximate a ranged exponential-decay series.
Using a limited number of tests, the experimentalist determines the coefficients of
this series and then, using the regression analysis, estimates the significant effects
and any of their interactions that have a high contrast in the noise field formed by
the insignificant effects.
The initial linear mathematical model, which includes k factors (effects), has the following form

y = a0 + ∑i ai xi + ∑ij aij xi xj + δ (1.9)

where a0 is the absolute (free) term, ai (i = 1,...,k) are the coefficients of the linear terms, aij (i = 1,...,k−1; j = i+1,...,k; i ≠ j) are the coefficients of the interaction terms, and δ is the residual error of the model.
The complete model represented by Eq. (1.9) can be rearranged as a split of a linear form, considering that some of the xi designate the interaction terms, as

y = a0 + a1x1 + ... + ak−l xk−l + b1z1 + b2z2 + ... + bl zl + δ = a0 + a1x1 + ... + ak−l xk−l + Δ (1.10)

where

Δ = b1z1 + b2z2 + ... + bl zl + δ (1.11)
and
removal flute face 5. To assure free drill penetration, i.e. to prevent interference of the drill's flanks with the bottom of the hole being drilled, the auxiliary flank 6, known as the shoulder dub-off, is ground at a certain angle φ4. The location of the shoulder dub-off is defined by the distance b. A detailed description of the gundrill geometry and of the importance of the parameters selected for the sieve test was presented by the author earlier [33].
Fig. 1.5. Grinding parameters of the terminal end of a gundrill selected for sieve DOE
Eight factors have been selected for this sieve DOE and their intervals of variation are shown in Table 1.11. The design matrix was constructed as follows. All the selected factors were separated into two groups. The first one contained the factors x1, x2, x3, x4, which form a half-replica 2^(4−1) with the defining relation I = x1x2x3x4. In this half-replica, the factors' effects and the effects of their interactions are not mixed. The second half-replica was constructed using the same criteria. A design matrix was constructed using the first half-replica of the complete matrix and adding to each row of this replica a randomly selected row from the second half-replica. Three more rows were added to this matrix to assure proper mixing; these rows were randomly selected from the first and second half-replicas. Table 1.12 shows the constructed design matrix.
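The half-replica construction can be sketched generically (code not tied to the chapter's factor values): the column for x4 is generated as the product x1x2x3, so the defining relation I = x1x2x3x4 holds in every run.

```python
from itertools import product

# Build the 2^(4-1) half-replica: a full 2^3 design in x1, x2, x3,
# with x4 generated as x4 = x1*x2*x3 (defining relation I = x1x2x3x4).
half_replica = []
for x1, x2, x3 in product((-1, +1), repeat=3):
    x4 = x1 * x2 * x3
    half_replica.append((x1, x2, x3, x4))

# Verify the defining relation and the level balance of every column.
assert all(x1 * x2 * x3 * x4 == 1 for x1, x2, x3, x4 in half_replica)
for col in range(4):
    assert sum(row[col] for row in half_replica) == 0  # 4 highs, 4 lows
print(len(half_replica))  # 8 runs
```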
As soon as the design matrix is completed, its suitability should be examined
using two simple rules. First, a design matrix is suitable if it does not contain two
identical columns having the same or alternate signs. Second, a design matrix
should not contain columns whose scalar products with any other column result in
28 V.P. Astakhov
a column of the same (“+” or “−”) signs. The design matrix shown in Table 1.12 was found suitable as it meets the requirements set by these rules. In this table, the responses yi (i = 1...11) are the average tool life calculated over three independent test replicas obtained under the indicated test conditions.
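The two suitability rules can be checked mechanically. The sketch below is a generic illustration with a made-up matrix, not the chapter's Table 1.12; it tests both rules for a design matrix given as a list of ±1 columns.

```python
from itertools import combinations

def matrix_suitable(columns):
    """Check the two suitability rules for a +/-1 design matrix given
    as a list of columns:
      1) no two columns identical or identical with reversed signs;
      2) no element-wise product of two columns with a constant sign.
    (For two-level columns the two rules coincide, but both are
    checked as stated in the text.)"""
    for a, b in combinations(columns, 2):
        if list(a) == list(b) or [-v for v in a] == list(b):
            return False  # rule 1 violated
        prod = [u * v for u, v in zip(a, b)]
        if len(set(prod)) == 1:
            return False  # rule 2 violated
    return True

good = [[+1, +1, -1, -1],
        [+1, -1, +1, -1],
        [+1, -1, -1, +1]]
bad = good + [[-1, -1, +1, +1]]  # negation of the first column
print(matrix_suitable(good), matrix_suitable(bad))  # True False
```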
Analysis of the results of the sieve DOE begins with the construction of a correlation (scatter) diagram, shown in Figure 1.6. Its structure is self-evident. Each factor is represented by a vertical bar having on its left side the values (as dots) of the response obtained when this factor was positive (at the upper level), while the values of the response corresponding to the lower level of the considered factor (i.e. when this factor is negative) are represented by dots on the right side of the bar. As such, the scale makes sense only along the vertical axis.
Table 1.11. The levels of the factors selected for the sieve DOE

Code   Factor                              Lower level (−)   Upper level (+)
x1     Approach angle φ1 (°)               25                45
x2     Approach angle φ2 (°)               10                25
x3     Flank angle αn1 (°)                 8                 25
x4     Drill point offset md (mm)          1.5               3.0
x5     Flank angle αn2 (°)                 7                 12
x6     Shoulder dub-off angle φ4 (°)       20                45
x7     Rake angle γ (°)                    0                 5
x8     Shoulder dub-off location b (mm)    1                 4
Table 1.12. Design matrix (y is the average tool life, min; yc1 and yc2 are the corrected responses)

Run   x1   x2   x3   x4   x5   x6   x7   x8     y     yc1     yc2
1     +1   +1   −1   −1   +1   −1   +1   −1     7     18.75   11.11
2     +1   +1   +1   +1   −1   +1   +1   −1    16     11.50   11.50
3     −1   +1   −1   −1   +1   +1   −1   +1    11     11.00   11.00
4     −1   −1   +1   +1   +1   −1   −1   +1    38     21.75   16.61
5     +1   −1   −1   +1   −1   −1   +1   −1    18     13.50   13.50
6     +1   −1   +1   −1   +1   +1   −1   −1    10     21.75   14.61
7     −1   −1   −1   −1   −1   +1   −1   +1    14     14.00   14.00
8     −1   +1   −1   +1   −1   −1   +1   +1    42     25.75   16.61
9     +1   −1   −1   −1   −1   +1   −1   +1     9     20.75   20.75
10    −1   +1   +1   +1   −1   +1   −1   −1    32     15.75   15.75
11    +1   −1   +1   −1   −1   −1   +1   −1     6     17.75   10.61
Fig. 1.6. Correlation diagram (first sieve)
Xi = (y1 + y3 + ... + yn)/m − (y2 + y4 + ... + yn−1)/m (1.13)

where m is the number of y values in Table 1.13 for the considered factor assigned to the same sign (“+” or “−”). It follows from Table 1.13 that m = 2.
The effects of the selected factors were estimated using data in Table 1.13 and
Eq.(1.13) as
Cell 1 (+x1, +x4): ∑y1−1 = 34, n1 = 2, ȳ1−1 = 17.0
Cell 2 (−x1, +x4): ∑y1−2 = 112, n2 = 3, ȳ1−2 = 37.3
Cell 3 (+x1, −x4): ∑y1−3 = 37, n3 = 4, ȳ1−3 = 9.3
Cell 4 (−x1, −x4): ∑y1−4 = 25, n4 = 2, ȳ1−4 = 12.5

X1 = (17.0 + 9.3)/2 − (37.3 + 12.5)/2 = −11.75 (1.14)

X4 = (17.0 + 37.3)/2 − (9.3 + 12.5)/2 = 16.25 (1.15)
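Assuming the four cells correspond to the sign combinations of x1 and x4, Eq.(1.13) can be reproduced from the cell sums of the correlation table in a few lines (a sketch; the chapter's rounded cell means give X1 = −11.75 and X4 = 16.25, and the unrounded sums below give nearly identical values):

```python
# Cell sums and sizes from the correlation table (first sieve),
# keyed by the signs of (x1, x4).
cells = {
    (+1, +1): (34.0, 2),    # (sum of y in the cell, number of runs)
    (-1, +1): (112.0, 3),
    (+1, -1): (37.0, 4),
    (-1, -1): (25.0, 2),
}
means = {k: s / n for k, (s, n) in cells.items()}

def effect(factor_index):
    """Eq. (1.13): difference of the averaged cell means between the
    '+' and '-' levels of the factor; here m = 2 cells on each side."""
    plus = [v for k, v in means.items() if k[factor_index] == +1]
    minus = [v for k, v in means.items() if k[factor_index] == -1]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

X1, X4 = effect(0), effect(1)
print(round(X1, 2), round(X4, 2))  # -11.79 16.29
```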
where si is the standard deviation of the i-th cell of the correlation table, defined as

si² = ∑yi²/(ni − 1) − (∑yi)²/(ni(ni − 1)) (1.17)
A factor is considered to be significant if tXi > tcr, where the critical value tcr for the Student's criterion is found in a statistical table for the following number of degrees of freedom

fr = ∑i ni − k = 11 − 4 = 7 (1.20)
Cell #   ∑y1−i   (∑y1−i)²   ∑y1−i²   ni   si²     si²/ni
1        34      1156       580      2    2.00    1.00
2        112     12544      4232     3    25.33   8.44
3        37      1369       351      4    2.92    0.73
4        25      625        317      2    4.50    2.25
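Eq.(1.17) is the usual unbiased sample variance expressed through ∑y and ∑y²; the tabulated values can be reproduced from cell responses consistent with the listed sums (a sketch):

```python
def cell_variance(ys):
    """Eq. (1.17): s^2 = sum(y^2)/(n-1) - (sum(y))^2 / (n(n-1))."""
    n = len(ys)
    s1 = sum(ys)
    s2 = sum(y * y for y in ys)
    return s2 / (n - 1) - s1 * s1 / (n * (n - 1))

# Cell responses consistent with the sums and sums of squares in the table
cells = {1: [16, 18], 2: [38, 42, 32], 3: [7, 10, 9, 11], 4: [11, 14]}
for k, ys in cells.items():
    print(k, round(cell_variance(ys), 2))  # matches the si^2 column
```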
The discussed procedure is the first stage in the proposed sieve DOE and thus is referred to as the first sieve. This first sieve allows the detection of the strongest factors, i.e. those factors that have the strongest influence on the response. After these strong linear effects are detected, the size of “the screen” to be used in the consecutive sieves is reduced to distinguish less strong effects and their interactions. This is accomplished by correction of the experimental results presented in column y of Table 1.12. Such a correction is carried out by adding the effects (with the reverse signs) of the selected factors (Eqs. (1.14) and (1.15)) to column y of Table 1.12, namely, by adding 11.75 to all results at level “+x1” and −16.25 to all results at level “+x4”. The corrected results are shown in column yc1 of Table 1.12. Using the data of this table, one can construct a new correlation diagram shown in Figure 1.7, where, for simplicity, only a few interactions are shown although all possible interactions have been analyzed. Using the approach described above, a correlation table (second sieve) was constructed (Table 1.15) and the interaction x4x8 was found to be significant. Its effect is X48 = 7.14.
After the second sieve, column yc1 was corrected by adding the effect of X48
with the opposite sign, i.e. –7.14 to all results at level +x48. The results are shown
in column yc2 of Table 1.12. Normally, the sieving of the experimental results continues until all the remaining effects and their interactions become insignificant, at, say, a 5% level of significance if the responses were measured with high accuracy and a 10% level if not. In the considered case, the sieve was ended after the
third stage because the analysis of these results showed that there are no more
significant factors or interactions left. Figure 1.8 shows the scatter diagram of the
discussed test. As can be seen in this figure, the scatter of the analyzed data reduces significantly after each sieve so normally three sieves are sufficient to complete
the analysis.
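As a cross-check of the second sieve, the x4x8 contrast can be estimated from the corrected column yc1 of Table 1.12 with a plain difference of means (a simplified sketch; the chapter's cell-based Eq.(1.13) estimate is X48 = 7.14, and this simpler contrast gives a comparable value):

```python
# x4, x8 levels and corrected responses yc1 from Table 1.12 (runs 1-11).
x4  = [-1, +1, -1, +1, +1, -1, -1, +1, -1, +1, -1]
x8  = [-1, -1, +1, +1, -1, -1, +1, +1, +1, -1, -1]
yc1 = [18.75, 11.50, 11.00, 21.75, 13.50, 21.75,
       14.00, 25.75, 20.75, 15.75, 17.75]

# Interaction column is the element-wise product of x4 and x8.
signs = [a * b for a, b in zip(x4, x8)]
plus  = [y for s, y in zip(signs, yc1) if s > 0]
minus = [y for s, y in zip(signs, yc1) if s < 0]
X48 = sum(plus) / len(plus) - sum(minus) / len(minus)
print(round(X48, 2))  # 6.73, comparable with the chapter's 7.14
```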
Fig. 1.7. Correlation diagram (second sieve), showing the factors x1 ... x8 and the interactions x3x7, x1x6, x4x8, and x2x3x5
Table 1.15. Correlation table (second sieve): values of yc1 grouped by the signs of the interactions x3x7 and x4x8

+x3x7: 11.5, 11, 14, 20.75, 17.75
−x3x7: 18.75, 21.75, 13.5, 21.75, 25.75, 15.75
+x4x8: 18.75, 21.75, 21.75, 25.75, 17.75
−x4x8: 11.5, 11, 13.5, 14, 20.75, 15.75

with cell means ȳ2−2 = 20.08 and ȳ2−4 = 12.75, and cell sums ∑y2−5 = 52.5, ∑y2−6 = 57, ∑y2−7 = 61.25, ∑y2−8 = 48.25
Fig. 1.8. Scatter diagram
The results of the proposed test are summarized in Table 1.16. Figure 1.9 shows the significance of the distinguished effects in terms of their influence on tool life. As seen, two linear effects and one interaction having the strongest effects were distinguished. The negative sign of x1 shows that tool life decreases when this parameter increases. The drill point offset md has the strongest influence on tool life.
While the distinguished linear effects are known to have strong influence on
tool life, the distinguished interaction x4 x8 has never before been considered in
any known studies on gun drilling. Using this factor and results of the complete
DOE, a new pioneering geometry of gun drills has been developed (for example
US Patent 7147411).
The proposed sieve DOE allows experimentalists to include into consideration
as many factors as needed. Conducting a relatively simple sieve test, the signifi-
cant factors and their interactions can be distinguished objectively and then be
used in the subsequent full DOE. Such an approach allows one to reduce the total
number of tests dramatically without losing any significant factor or factor interaction. Moreover, interactions of any order can be easily analyzed. The proposed correlation diagrams make such an analysis simple and self-evident.
Fig. 1.9. Significance of the effects distinguished by the sieve DOE (Pareto analysis); the effects in decreasing order of influence are x4, x1, x4x8, x5, x6, x2, etc.
Split-plot designs were originally developed by Fisher [36] for use in agricultural experiments. As a simple illustration, consider a study of the effects of two irrigation methods (factor A) and two fertilizers (factor B) on the yield of a crop, using four available fields as experimental units. In this investigation, it is not possible to apply different irrigation methods (factor A) in areas smaller than a field, although different fertilizer types (factor B) could be applied in relatively small areas. For example, if we subdivide each whole plot (field) into two split plots, each of the two fertilizer types can be applied once within each whole plot, as shown in Figure 1.10. In this split-plot design, a first randomization assigns the two irrigation types to the four fields (whole plots); then within each field, a separate randomization is conducted to assign the two fertilizer types to the two split plots within each field.
Fig. 1.10. Split plot agricultural layout. (Factor A is the whole-plot factor and factor B is
the split-plot factor)
In industrial experiments, factors are often differentiated with respect to the ease
with which they can be changed from experimental run to experimental run. This
may be due to the fact that a particular treatment is expensive or time-consuming to
change, or it may be due to the fact that the experiment is to be run in large batches
and the batches can be subdivided later for additional treatments. Box et al. [35] described a prototypical split-plot experiment with one easy-to-change factor and one hard-to-change factor. The experiment was designed to study the corrosion resistance of steel bars treated with four coatings, C1, C2, C3, and C4, at three furnace temperatures, 360°C, 370°C, and 380°C. Furnace temperature is the hard-to-change factor because of the time it takes to reset the furnace and reach a new equilibrium temperature. Once the equilibrium temperature is reached, four steel bars with randomly assigned coatings C1, C2, C3, and C4 are randomly positioned in the furnace
and heated. The layout of the experiment as performed is given in Table 1.17. Notice that each whole-plot treatment (temperature) is replicated twice and that there
is just one complete replicate of the split-plot treatments (coatings) within each
whole plot. Thus, the DOE matrix has six whole plots and four subplots within each
whole plot.
Table 1.17. Split-plot design and data for studying the corrosion resistance of steel bars

Whole plot   Temperature (°C)   Coatings (randomized order) and responses
1            360                C2: 73    C3: 83    C1: 67    C4: 89
2            370                C1: 65    C3: 87    C4: 86    C2: 91
3            380                C3: 147   C1: 155   C2: 127   C4: 212
4            380                C4: 153   C3: 90    C2: 100   C1: 108
5            370                C4: 150   C1: 140   C3: 121   C2: 142
6            360                C1: 33    C4: 54    C2: 8     C3: 46
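For orientation, the Table 1.17 data can be summarized by whole-plot factor level and by coating (a descriptive sketch only; it does not replace the mixed-model analysis discussed below, since it ignores the whole-plot error structure):

```python
# Corrosion resistance data from Table 1.17: (temperature, coating, value)
data = [
    (360, "C2", 73), (360, "C3", 83), (360, "C1", 67), (360, "C4", 89),
    (370, "C1", 65), (370, "C3", 87), (370, "C4", 86), (370, "C2", 91),
    (380, "C3", 147), (380, "C1", 155), (380, "C2", 127), (380, "C4", 212),
    (380, "C4", 153), (380, "C3", 90), (380, "C2", 100), (380, "C1", 108),
    (370, "C4", 150), (370, "C1", 140), (370, "C3", 121), (370, "C2", 142),
    (360, "C1", 33), (360, "C4", 54), (360, "C2", 8), (360, "C3", 46),
]

def mean_by(key):
    """Group the responses by a key function and return group means."""
    groups = {}
    for temp, coat, y in data:
        groups.setdefault(key(temp, coat), []).append(y)
    return {k: sum(v) / len(v) for k, v in groups.items()}

temp_means = mean_by(lambda t, c: t)   # whole-plot factor
coat_means = mean_by(lambda t, c: c)   # split-plot factor
print(temp_means)
print(coat_means)
```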
The analysis of a split-plot experiment is more complex than that for a completely randomized experiment due to the presence of both split-plot and whole-plot random errors. In the Box et al. corrosion resistance example [35], a whole-plot effect is introduced with each setting or re-setting of the furnace. This may be
due, e.g., to operator error in setting the temperature, to calibration error in the
temperature controls, or to changes in ambient conditions. Split-plot errors might
arise due to lack of repeatability of the measurement system, to variation in the
distribution of heat within the furnace, to variation in the thickness of the coatings
from steel bar to steel bar, and so on.
In the author's experience, the ADX Interface for Design of Experiments (http://support.sas.com/documentation/cdl/en/adxgs/60376/HTML/default/overview_sect1.htm) by SAS is most suitable for the split-plot DOE. Following its manual [43], consider a split-plot DOE for tablet production.
The tablet production process has several stages, which include batch mixing,
where ingredients are combined and mixed with water, and pellet production,
where the batch is processed into pellets that are compressed to form tablets. It is
more convenient to mix batches and randomize the treatments within each batch
than to mix a new batch for each run. Thus, this experiment calls for a standard
two-stage split-plot design.
The moisture content and mixing speed for the batch constitute whole-plot factors, while the factors that control the variety of ways that pellets can be produced from a single batch are subplot factors. The responses of interest are measured on
the final tablets. Table 1.18 shows all the variables involved and the stage with
which they are associated. The goal of the experiment is to determine which effects are significant. The researcher is interested in both whole-plot factors and
subplot factors.
Table 1.18. Factors and their responses in the tablet formulation experiment

Variable     Name       Low    High   Description
Whole plot   FLUID      90     115    Moisture content (%)
             MIX        15     45     Mixing time (min)
Split plot   EXTRUDER   36     132    Extruder speed (rpm)
             SCREEN     0.6    1.0    Screen size (mm)
             RESID      2      5      Residence time (min)
             DISK       450    900    Disk speed (rpm)
Response     MPS                      Mean particle size (micron)
I. Task List
1. Click Define Variables. The Define Variables window will appear, and
the Whole-Plot Factor tab will already be selected (Figure 1.12).
2. Define the whole-plot factors.
2.1. Create two new factors by clicking Add and selecting 2.
2.2. Enter the factor names given in the whole-plot section of Table 1.18.
3. Define the subplot factors.
3.1. Click the Sub-plot Factor tab (Figure 1.13).
3.2. Create four new factors by clicking Add and selecting 4.
3.3. Enter the factor names given in the subplot section of Table 1.18.
4. Click the Block tab (Figure 1.14). ADX will assign a unique block level
to each whole plot when it generates the design, so you do not need to
specify the number of block levels. Change the block name to BATCH,
since each whole plot is a batch of material.
5. Enter the response information.
5.1. Click the Response tab (Figure 1.15).
5.2. Change the name of the default response to MPS and its label to
Mean Particle Size (microns).
5.3. Click OK to accept the variable definitions.
Before fitting a mixed model, click Explore in the main design window. Both box
plots and scatter plots are available. Box plots help you visualize the differences in
response distribution across levels of different factors, both random and fixed.
Scatter plots show the individual responses broken down by factor level or run and
can also be used to investigate time dependence.
The box plot is the first graph that is displayed (Figure 1.16). One can explore
the distribution of the response broken down by each of the factors or batches that
make up the whole plot.
ADX does not generate a default master model for a split-plot design, so you must
do so before fitting. Click Fit in the main design window. The Fit Details for MPS
window will open (Figure 1.17). It has sections to define fixed and random effects
and classification variables. The fixed effects in a split-plot design are, as usual,
the main effects and interactions of interest between the factors. The split-plot
structure of the experiment determines the choice of random effects, which in turn
determines the proper error term for each fixed effect.
The modeling objectives of a split-plot design are in principle the same as those
of a standard screening design. You want to estimate as many effects as possible
involving the factors and determine which ones are significant. When this design
was analyzed as a standard full factorial, the 64 runs provided enough degrees of
freedom to estimate effects and interactions of all orders. However, FLUID and
MIX are whole-plot effects and apply only to batches. Therefore, with respect
to these two factors, the experiment has only four runs (the four batches). The
interaction between the whole-plot effects will estimate the whole-plot error, so
this interaction is not included as a fixed effect.
1. Click Fit in the main design window. Select Model Change master
model to open the Specify Master Model window. On the Fixed Effects tab
(Figure 1.17), specify the following as fixed effects:
• All main effects. Click and drag or hold down the CTRL key and click to
select the FLUID, MIX, EXTRUDER, SCREEN, RESID, and DISK variables. Then click Add.
• All two-factor interactions except FLUID*MIX. Select all six factors and
click Interaction. Double-click FLUID*MIX in the list on the right to
remove it from the master model. (You might have to scroll to find it.).
2. Next, click the Random Effects tab (Figure 1.18). Here you specify the
whole-plot error, which is BATCH. Select BATCH and click Add.
ADX uses the MIXED procedure to estimate the fixed and random effects and variance components of the model. After specifying the model, you will see the Fit
Details for MPS window. There are three tabs:
• The Fixed Effects tab (Figure 1.20) is similar to the Effects Selector for
other designs. The table lists Type 3 tests of the fixed effects. Click an
effect to toggle the selection of significant effects.
• The Random Effects tab lists the random effects broken down by level
(for interactions, these are broken down by each combination of levels).
These are empirical best linear unbiased predictors (EBLUPs) of the observed random effects. You can use them to screen for unusual whole
plots and to assess the normality assumption of the model.
Choosing a “best” split-plot design for a given design scenario can be a daunting
task, even for a professional statistician [42]. Facilities for construction of split-
plot designs are not as yet generally available in software packages (with
objective functions and which have the property of "external complement," pass
through a minimum. Achievement of a global minimum indicates the existence of
a model of optimum complexity (Figure 1.23).
The notion that there exists a unique model of optimum complexity, determinable by the self-organization principle, forms the basis of the inductive approach. The optimum complexity of the mathematical model of a complex object is found at the minimum of a chosen objective function which possesses the property of external supplementation (in the terminology of Gödel's incompleteness theorem from mathematical logic). The theory of self-organization modeling [47] is based on the methods of complete, incomplete, and mathematical induction [48].
This has widened the capabilities of system identification, forecasting, pattern
recognition, and multicriterial control problems.
Fig. 1.23. Variation in least square error ε(A + B) and error measure of an "external com-
plement" Δ(B) for a regression equation of increasing complexity S; Sopt is the model of op-
timal complexity
different levels of complexity and selection of the best solution by the minimum of an external criterion characteristic. Not only polynomials but also nonlinear, probabilistic functions or clusterizations are used as basic models.
In the author’s opinion, the GMDH approach is the most suitable DOE method
for experimental studies in manufacturing because:
1. The optimal complexity of the model structure is found, adequate to the level of noise in the data sample. For real problems with noisy or short data, its simplified forecasting models are more accurate than those obtained with any other known method of DOE.
2. The number of layers and neurons in hidden layers, the model structure, and other optimal neural network (NN) parameters are determined automatically.
3. It guarantees that the most accurate or unbiased models will be found: the method does not miss the best solution during the sorting of all variants (in the given class of functions).
4. Any non-linear functions or features that can influence the output variable can be used as input variables.
5. It automatically finds interpretable relationships in the data and selects effective input variables.
6. GMDH sorting algorithms are rather simple to program.
7. The method uses information directly from the data samples and minimizes the influence of a priori researcher assumptions about the results of modeling.
8. GMDH neural networks are used to increase the accuracy of other modeling algorithms.
9. The method allows finding an unbiased physical model of the object (law or clusterization), one and the same for all future samples.
There are many published articles and books devoted to GMDH theory and its applications. The GMDH can be considered as a further propagation or extension of inductive self-organizing methods to the solution of more complex practical problems. It solves the problem of how to handle the data samples of observations. The
goal is to obtain a mathematical model of the object under study (the problem of
identification and pattern recognition) or to describe the processes, which will take
place at the object in the future (the problem of process forecasting). GMDH
solves, by means of a sorting-out procedure, the multidimensional problem of
model optimization
g = arg min(g⊂G) CR(g),   CR(g) = f(P, S, z², T1, V) (1.21)
For a definite reference function, each set of variables corresponds to a definite model structure P = S, and the problem transforms into the much simpler one-dimensional problem

CR(g) = f(S) (1.22)
y = b0 + ∑i bi xi + ∑i∑j bij xi xj + ∑i∑j∑k bijk xi xj xk,   i, j, k = 1, ..., M (1.23)
where X(x1, x2, ..., xM) is the input variables vector, M is the number of input variables, and A(b1, b2, ..., bM) is the vector of coefficients.
Components of the input vector X can be independent variables, functional forms, or finite-difference terms. Other non-linear reference functions, such as difference, probabilistic, harmonic, or logistic functions, can also be used. The method allows the simultaneous finding of the structure of the model and the dependence of the modeled system output on the values of the most significant inputs of the system.
GMDH, based on the self-organizing principle, requires minimum information about the object under study. As such, all the available information about this object should be used. The algorithm allows the finding of the additionally needed information through the sequential analysis of different models using the so-called
external criteria. Therefore, GMDH is a combined method: it uses the test data together with sequential analysis and estimation of the candidate models. The estimates are found using a relatively small part of the test results; the other part of these results is used to estimate the model coefficients and to find the optimal model structure.
Although both GMDH and regression analysis use a table of test data, regression analysis requires the prior formulation of the regression model and its complexity. This is because the row variances used in the calculations are internal criteria. A criterion is called an internal criterion if its determination is based on the
same data that is used to develop the model. The use of any internal criterion leads
to a false rule: the more complex model is more accurate. This is because the
complexity of the model is determined by the number and highest power of its
terms. As such, the greater the number of terms, the smaller the variance. GMDH
uses external criteria. A criterion is called external if its determination is based on
new information obtained using “fresh” points of the experimental table not used
in the model development. This allows the selection of the model of optimum
complexity corresponding to the minimum of the selected external criterion.
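The internal-versus-external distinction is easy to demonstrate numerically: when nested models of increasing complexity are fitted by least squares, the error on the fitting data (an internal criterion) can only decrease, while the error on held-out points (an external criterion) passes through a minimum, as in Figure 1.23. A self-contained sketch with synthetic data (all values made up for illustration):

```python
import random

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations, solved by
    Gaussian elimination with partial pivoting; returns coefficients."""
    m = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, m))) / A[r][r]
    return coef

def sse(coef, xs, ys):
    """Sum of squared residuals of the polynomial on the given points."""
    return sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
               for x, y in zip(xs, ys))

random.seed(1)
xs = [i / 10 for i in range(20)]
ys = [1.0 + 2.0 * x - 0.5 * x * x + random.gauss(0, 0.3) for x in xs]
train_x, train_y = xs[::2], ys[::2]   # fitting ("internal") points
test_x, test_y = xs[1::2], ys[1::2]   # held-out ("external") points

internal, external = [], []
for deg in range(0, 5):
    coef = fit_poly(train_x, train_y, deg)
    internal.append(sse(coef, train_x, train_y))
    external.append(sse(coef, test_x, test_y))
# The internal criterion never increases with complexity; only the
# external criterion can identify the model of optimum complexity.
assert all(internal[i + 1] <= internal[i] + 1e-6 for i in range(4))
print([round(e, 3) for e in external])
```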
Another significant difference between regression analysis and GMDH is that the former allows the model to be constructed only in the domain where the number of model coefficients is less than the number of points of the design matrix
because the examination of the model adequacy is possible only when f_ad > 0, i.e. when the number of estimated coefficients of the model, n, is less than the number of points in the design matrix, m. GMDH allows a much wider domain
where, for example, the number of the model coefficients can be millions and all
these are estimated using the design matrix containing only 20 rows. In this new
domain, accurate and unbiased models are obtained. GMDH algorithms utilize minimum input experimental information: a table having 10–20 points and a criterion of model selection. The algorithms determine the unique model of optimal complexity by sorting out different models using the selected criterion.
The essence of the self-organizing principle in GMDH is that the external criteria pass through a minimum as the complexity of the model is gradually increased. When a particular criterion is selected, the computer executing GMDH finds this minimum and the corresponding model of optimal complexity. As such, the value of the selected criterion, referred to as the depth of minimum, can be considered as an estimate of the accuracy and reliability of this model. If a sufficiently deep minimum is not reached, then the model is not found. This might take place when the input data (the experimental data from the design matrix): (1) are noisy; (2) do not contain essential variables; or (3) when the basic function (for example, a polynomial) is not suitable for the process under consideration.
The following should be clearly understood if one tries to use GMDH:
1. GMDH is not for casual use as, for example, the Taguchi method is; i.e., it cannot be used readily by someone with no statistical and programming background. Therefore, it should be deployed in complex manufacturing research programs where the cost is high and the results obtained are of high importance.
2. GMDH is not a part of modern statistical software packages although its al-
gorithms are available, and thus can be programmed.
3. Once a research team has gained some experience with GMDH and corresponding algorithms have been developed for application in a certain field of manufacturing studies, GMDH becomes a very powerful method of DOE that can be used at different stages of DOE and, moreover, can be combined with other important DOE methods, for example with split-plot DOE, to adjust GMDH to particular needs.
Each DOE phase offers different results. For instance, in the discovery phase a researcher’s primary goal would include ‘screening’ for vital input variables. However, the investigator must be aware of the inability to generate a prediction equation using low-resolution screening arrays if any variable interactions are at work in the system. The discovery phase typically focuses on two-level fractional-factorial arrays to identify the “vital few” variables. Fractional-factorial arrays range from resolution III to VI. The lower-resolution arrays (III and IV), for instance $2^{3-1}$ and $2^{4-1}$, are limited in application. These arrays are used primarily to screen input variables because of their limited capability to quantify two-factor interactions.
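A minimal illustration of such a screening array, assuming the common generator D = ABC for a $2^{4-1}$ design (resolution IV), shows why two-factor interactions cannot all be separated: some interaction columns coincide.

```python
import itertools
import numpy as np

# Full 2^3 factorial in A, B, C (coded levels -1/+1), then generate the
# fourth factor as D = A*B*C -> a 2^(4-1) resolution-IV half fraction.
runs = np.array(list(itertools.product([-1, 1], repeat=3)))
A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]
D = A * B * C
design = np.column_stack([A, B, C, D])    # 8 runs instead of 16

# Defining relation I = ABCD aliases two-factor interactions in pairs:
# the AB column is identical to the CD column, so their effects merge.
print(design)
print("AB aliased with CD:", bool(np.array_equal(A * B, C * D)))
```

This is why such designs are reserved for screening main effects rather than for quantifying two-factor interactions.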
The more times you replicate a given set of conditions, the more pre-
cisely you can estimate the response. Replication improves the chance of
detecting a statistically significant effect (the signal) in the midst of natural
process variation (the noise). The noise of unstable processes can drown
out the process signal. Before doing a DOE, it helps to assess the signal-
to-noise ratio. The signal-to-noise ratio defines the power of the experi-
ment, allowing the researcher to determine how many replicates will be
required for the DOE. Designs reflecting low power require more repli-
cates.
8. Determine if the data can be measured (collected) with required accura-
cy. Do not start statistical analysis without first challenging the validity
of the data: What can you expect out of garbage input?
9. Design the experiments:
9.1. Do a sequential series of experiments. Designed experiments
should be executed in an iterative manner so that information
learned in one experiment can be applied to the next. For example,
rather than running a very large experiment with many factors and
using up the majority of your resources, consider starting with a
smaller experiment and then building upon the results. A typical se-
ries of experiments consists of a screening design (fractional fac-
torial) to identify the significant factors, a full factorial or response
surface design to fully characterize or model the effects, followed
up with confirmation runs to verify your results. If you make a mis-
take in the selection of your factor ranges or responses in a very
large experiment, it can be very costly. Plan for a series of sequen-
tial experiments so you can remain flexible. A good guideline is not
to invest more than 25 percent of your budget in the first DOE.
9.2. Remember to randomize the runs.
9.3. Make the model as simple as possible—but no simpler.
9.4. Determine the total number of runs in the experiment, ideally using
estimates of variability, precision required, size of effects expected,
etc., but more likely based on available time and resources. Reserve
some resources for unforeseen contingencies and follow-up runs.
10. Perform the experiment strictly according to the experimental design, in-
cluding the initial setup for each run in a physical experiment. Do not
swap the run order to make the job easier.
11. Analyze the data from the experiment using the analysis of variance method. Do not throw away outliers without solid reasoning: Every piece of
data stores a hidden story waiting to be opened. Let the problem drive the
modeling (i.e., tool selection, data preparation). Stipulate assumptions.
12. Obtain and statistically verify the model. Refine the model iteratively.
13. Define instability in the model (critical areas where change in output is
drastically different for small changes in inputs).
14. Define uncertainty in the model (critical areas and ranges in the data set
where the model produces low confidence predictions/insights). Do not
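The earlier note on replication and signal-to-noise can be illustrated with a small Monte Carlo sketch (an idealized two-sample z-style test with known noise standard deviation; a real study would use a proper power analysis):

```python
import numpy as np

def power(signal, noise, replicates, alpha_z=1.96, sims=4000, seed=0):
    """Monte Carlo estimate of the chance that a mean shift of size `signal`
    is declared significant against noise sd `noise`, with `replicates`
    runs per condition (illustrative known-sigma test only)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, noise, (sims, replicates))
    b = rng.normal(signal, noise, (sims, replicates))
    se = noise * np.sqrt(2.0 / replicates)      # standard error of the difference
    z = (b.mean(axis=1) - a.mean(axis=1)) / se
    return float(np.mean(np.abs(z) > alpha_z))

# Signal equal to the noise sd: more replicates -> more power.
p2 = power(1.0, 1.0, 2)
p8 = power(1.0, 1.0, 8)
print(f"power with 2 replicates: {p2:.2f}, with 8 replicates: {p8:.2f}")
```

Low-power situations (small signal-to-noise ratio) demand more replicates, exactly as stated above.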
References
1. Montgomery, D.C., Kowalski, S.M.: Design and Analysis of Experiments, 7th edn.
John Wiley & Sons, New York (2009)
2. Antony, J.: Design of Experiments for Engineers and Scientists. Butterworth-
Heinemann, Oxford (2009)
3. Fisher, R.A.: The Design of Experiments, 9th edn. Macmillan, New York (1971)
4. Fisher, R.A.: The Design of Experiments. Oliver and Boyd, Edinburgh (1935)
5. Hald, A.: A History of Mathematical Statistics. Wiley, New York (1998)
6. Student: Tables for estimating the probability that the mean of a unique sample of observations lies between any given distance of the mean of the population from which the sample is drawn. Biometrika 11, pp. 414–417 (1917)
7. Fisher, R.A.: Statistical Methods for Research Workers, 14th edn.(1st edn. 1925).
Hafner Press, New York (1973)
8. Telford, J.K.: A brief introduction to design of experiments. Johns Hopkins APL Technical Digest 27(3), 224–232 (2007)
9. Roy, R.K.: Design of Experiments Using the Taguchi Approach. Wiley-IEEE, New
York (2001)
10. Shina, S.G.: Six Sigma for Electronics Design and Manufacturing. McGraw-Hill, New
York (2002)
11. Ross, P.J.: Taguchi Techniques for Quality Engineering. McGraw-Hill, New York
(1996)
12. Phadke, M.S.: Quality Engineering Using Robust Design. Pearson Education, Upper
Saddle River (2008)
Design of Experiment Methods in Manufacturing: Basics and Practical Applications 53
13. Mukerjee, R., Wu, C.-F.: A Modern Theory of Factorial Designs. Springer, London
(2006)
14. Kotsireas, I.S., Mujahid, S.N., Pardalos, P.N.: D-Optimal Matrices. Springer, London
(2011)
15. Paquete, L.: Experimental Methods for the Analysis of Optimization Algorithms.
Springer, London (2010)
16. Farlow, S.J.: Self-organising Methods in Modeling. Marcel Dekker, New York (1984)
17. Madala, H.R.I., Ivakhnenko, A.G.: Inductive Learning Algorithms for Complex Sys-
tems Modeling. CRC Press, Boca Raton (1994)
18. Wang, H.S., Chang, C.-P.: Design of experiments. In: Salvendy, G. (ed.) Handbook of Industrial Engineering: Technology and Operations Management, pp. 2225–2240. John Wiley & Sons, New York (2001)
19. Astakhov, V.P.: Tribology of Metal Cutting. Elsevier, London (2006)
20. Mason, R.L., Gunst, R.F., Hess, J.L.: Statistical Design and Analysis of Experiments
with Application to Engineering and Science. John Wiley and Sons, New York (1989)
21. Montgomery, D.C.: Design and Analysis of Experiments, 5th edn. John Wiley & Sons,
New York (2000)
22. Box, G.E.P., Draper, N.R.: Empirical Model Building and Response Surfaces. Wiley,
Hoboken (1987)
23. Astakhov, V.P., Osman, M.O.M., Al-Ata, M.: Statistical design of experiments in met-
al cutting - Part 1: Methodology. Journal of Testing and Evaluation 25(3), 322–327
(1997)
24. Astakhov, V.P., Al-Ata, M., Osman, M.O.M.: Statistical design of experiments in met-
al cutting. Part 2: Application. Journal of Testing and Evaluation, JTEVA 25(3), 328–
336 (1997)
25. Gopalsamy, B.M., Mondal, B., Ghosh, S.: Taguchi method and ANOVA: An approach
for process parameters optimization of hard machining while machining hardened
steel. Journal of Scientific & Industrial Research 68, 659–686 (2009)
26. Nalbant, M., Gökkaya, H., Sur, G.: Application of Taguchi method in the optimization
of cutting parameters for surface roughness in turning. Materials & Design 28(4),
1379–1385 (2007)
27. Lin, T.R.: The use of reliability in the Taguchi method for the optimization of the po-
lishing ceramic gauge block. The International Journal of Advanced Manufacturing
Technology 22(3-4), 237–242 (2003)
28. Yang, W.H., Tarng, Y.S.: Design optimization of cutting parameters for turning opera-
tions based on the Taguchi method. Journal of Material Processing Technology 84,
122–129 (1998)
29. Ghani, J.A., Choudhury, I.A., Hassan, H.H.: Application of Taguchi method in the op-
timization of end milling operations. Journal of Material Processing Technology 145,
84–92 (2004)
30. Plackett, R.L., Burman, J.P.: The design of optimum multifactorial experiments. Biometrika 33, 305–328 (1946)
31. Astakhov, V.P.: An application of the random balance method in conjunction with the Plackett-Burman screening design in metal cutting tests. Journal of Testing and Evaluation 32(1), 32–39 (2004)
32. Bashkov, V.M., Katsev, P.G.: Statistical Fundamentals of Cutting Tool Tests. Mashinostroenie, Moscow (1985) (in Russian)
33. Astakhov, V.P.: Geometry of Single-Point Turning Tools and Drills: Fundamentals
and Practical Applications. Springer, London (2010)
34. Holman, J.P.: Experimental Methods for Engineers, 6th edn. McGraw-Hill (1994)
35. Box, G., Hunter, W., Hunter, S.: Statistics for Experimenters: Design, Innovation, and
Discovery, 2nd edn. Wiley, New York (2005)
36. Fisher, R.A.: Statistical Methods for Research Workers. Oliver and Boyd, Edinburgh
(1925)
37. Yates, F.: Complex experiments, with discussion. Journal of the Royal Statistical So-
ciety, Series B2, 181–223 (1935)
38. Anbari, F.T., Lucas, J.M.: Designing and running super-efficient experiments: Opti-
mum blocking with one hard-to-change factor. Journal of Quality Technology 40, 31–
45 (2008)
39. Ganju, J., Lucas, J.M.: Randomized and random run order experiments. Journal of Sta-
tistical Planning and Inference 133, 199–210 (2005)
40. Ju, H.L., Lucas, J.M.: Lk factorial experiments with hard-to-change and easy-to-
change factors. Journal of Quality Technology 34, 411–421 (2002)
41. Webb, D.F., Lucas, J.M., Borkowski, J.J.: Factorial experiments when factor levels are
not necessarily reset. Journal of Quality Technology 36, 1–11 (2004)
42. Jones, B., Nachtsheim, C.J.: Split-plot designs: What, why, and how. Journal of Quali-
ty Technology 41(4) (2009)
43. Getting Started with the SAS® 9.2 ADX Interface for Design of Experiments. SAS In-
stitute Inc., Cary, NC (2008)
44. Heisenberg, W.: The Physical Principles of Quantum Theory. University of Chicago
Press, Chicago (1930)
45. von Neumann, J.: Theory of Self Reproducing Automata. University of Illinois Press,
Urbana (1966)
46. Beer, S.: Cybernetics and Management, 2nd edn. Wiley, New York (1959)
47. Madala, H.R., Ivakhnenko, A.G.: Inductive Learning Algorithms for Complex System
Modeling. CRC Press, Boca Raton (1994)
48. Arbib, M.A.: Brains, Machines, and Mathematics, 2nd edn. Springer, New York
(1987)
49. Ivakhnenko, A.G.: Polynomial theory of complex systems. IEEE Transactions on Systems, Man, and Cybernetics SMC-1(4), 284–378 (1971)
50. Ivakhnenko, A.G., Ivakhnenko, G.A.: Problems of further development of the group
method of data handling algorithms. Pattern Recognition and Image Analysis 110(2),
187–194 (2000)
51. Ivakhnenko, A.G., Ivakhnenko, G.A.: The review of problems solvable by algorithms of the group method of data handling. Pattern Recognition and Image Analysis 5(4), 527–535 (1995)
2
2.1 Introduction
Conventionally, product design has been separated from manufacturing process design in the product development cycle. This product-oriented approach is often referred to as over-the-wall design due to the sequential nature of the design activities. It prevents the integration of design and manufacturing activities, increases production ramp-up time and product change cost, and degrades product quality, which is inversely related to the geometrical and dimensional variations of the key product characteristics (KPCs). In order to overcome the limitations of this approach, manufacturers have begun to investigate means of simultaneously evaluating product designs and manufacturing processes in an attempt to proactively address potential quality problems in the manufacturing phase, reduce ramp-up times and ensure geometrical product
56 J.V. Abellan-Nebot, J. Liu, and F. Romero Subiron
quality. For this purpose, research efforts have been conducted to develop reliable three-dimensional (3D) variation propagation models that integrate both product and manufacturing process information. Such models can be applied in a variety of fields, such as diagnosis of manufacturing variation sources, process planning and process-oriented tolerancing.
To illustrate the importance of 3D manufacturing variation propagation models,
consider the part design shown in Fig. 2.1 and the given manufacturing process plan (with its corresponding fixture layouts, locator specifications, cutting-tools used, thermal conditions of machine-tools, etc.) shown in Fig. 2.2. We consider
two KPCs, i.e., the distance between surfaces S0 and S3, named KPC1, and the
distance between surfaces S6 and S8, named KPC2. In order to ensure the quality
of these two KPCs, the engineers should formulate the following questions:
[Figure omitted: part with surfaces S0–S8 and their local CSs; KPC1 = 180 ± 0.125 mm, KPC2 = 50 ± 0.125 mm.]
Fig. 2.1. Part geometry and KPCs to be manufactured in a 4-station machining process.
Dimensions in mm
2.2.1 Fundamentals
Manufacturing variability in MMPs and its impacts on part quality can be modeled
by capturing the mathematical relationships between the KCCs and the KPCs.
These relationships can be modeled with non-linear functions. For instance, a
function f1 can be defined such that y1=f1(u), where y1 is the value of a KPC and
$\mathbf{u} = [u_1, u_2, \ldots, u_n]^T$ are the KCCs in a MMP. Under the assumption of small variations, the non-linear function can be linearized through a Taylor series expansion and the value of a KPC can be defined as
$y_1 = f_1(\bar{\mathbf{u}}) + \left.\frac{\partial f_1(\mathbf{u})}{\partial u_1}\right|_{\mathbf{u}=\bar{\mathbf{u}}} (u_1 - \bar{u}_1) + \cdots + \left.\frac{\partial f_1(\mathbf{u})}{\partial u_n}\right|_{\mathbf{u}=\bar{\mathbf{u}}} (u_n - \bar{u}_n) + \varepsilon_1,$   (2.1)
where $\varepsilon_1$ contains the high-order non-linear residuals of the linearization, and the linearization point is defined by $\bar{\mathbf{u}} = [\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_n]^T$. This linear approximation can be considered good enough for many MMPs [1]. From Eq. (2.1), the dimensional variation of a KPC from its nominal value is defined as

$\Delta y_1 = \left.\frac{\partial f_1(\mathbf{u})}{\partial u_1}\right|_{\mathbf{u}=\bar{\mathbf{u}}} \Delta u_1 + \cdots + \left.\frac{\partial f_1(\mathbf{u})}{\partial u_n}\right|_{\mathbf{u}=\bar{\mathbf{u}}} \Delta u_n + \varepsilon_1,$   (2.2)
where $\Delta y_1 = y_1 - f_1(\bar{\mathbf{u}})$ defines the variation of the KPC, whereas $\Delta u_j = u_j - \bar{u}_j$, for $j = 1, \ldots, n$, defines the small variations of the KCCs in a MMP. Considering that there are M KPCs in the part whose variations are stacked in the vector $\mathbf{Y} = [\Delta y_1, \Delta y_2, \ldots, \Delta y_M]^T$, Eq. (2.2) can be rewritten in matrix form as
Y = Γ ⋅ U + ε, (2.3)
where $\mathbf{U} = [\Delta u_1, \Delta u_2, \ldots, \Delta u_n]^T$; $\boldsymbol{\varepsilon} = [\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_M]^T$ is the stacked vector of the high-order non-linear residuals; and $\boldsymbol{\Gamma}$ is the matrix

$\boldsymbol{\Gamma} = \left[\left[\left.\frac{\partial f_1(\mathbf{u})}{\partial u_1}\right|_{\mathbf{u}=\bar{\mathbf{u}}}, \ldots, \left.\frac{\partial f_1(\mathbf{u})}{\partial u_n}\right|_{\mathbf{u}=\bar{\mathbf{u}}}\right]^T; \ldots; \left[\left.\frac{\partial f_M(\mathbf{u})}{\partial u_1}\right|_{\mathbf{u}=\bar{\mathbf{u}}}, \ldots, \left.\frac{\partial f_M(\mathbf{u})}{\partial u_n}\right|_{\mathbf{u}=\bar{\mathbf{u}}}\right]^T\right]^T.$
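As a numeric sketch of Eqs. (2.1)–(2.3) (with invented toy functions, not a real MMP model), the sensitivity matrix Γ can be approximated by finite differences at the linearization point and checked against the nonlinear response for small KCC variations:

```python
import numpy as np

# Toy MMP with n = 3 KCCs and M = 2 KPCs (illustrative functions only).
def f(u):
    return np.array([u[0] + 0.5 * u[1] ** 2,
                     np.sin(u[2]) + u[0] * u[1]])

u_bar = np.array([0.2, 0.1, 0.0])      # linearization point
h = 1e-6
# Gamma[q, j] = d f_q / d u_j at u_bar, by central finite differences.
Gamma = np.empty((2, 3))
for j in range(3):
    e = np.zeros(3)
    e[j] = h
    Gamma[:, j] = (f(u_bar + e) - f(u_bar - e)) / (2 * h)

U = np.array([1e-3, -2e-3, 5e-4])      # small KCC variations
Y_lin = Gamma @ U                       # linearized KPC variations, Eq. (2.3)
Y_true = f(u_bar + U) - f(u_bar)
print("max linearization error:", np.max(np.abs(Y_true - Y_lin)))
```

The residual is second order in the variations, which is exactly the ε term neglected in the linear model.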
For MMPs, the derivation of Eq. (2.3) is a challenging task. At the end of the nineties, researchers from the University of Michigan proposed the adoption of the well-known state space model from control theory [2, Chapter 11] to mathematically represent the relationship between the variation sources and the variations of the machined surfaces at each station, including how the variations of the surfaces generated at upstream stations influence the surfaces generated at downstream stations when the upstream surfaces are used as locating datums. In this representation, dimensional variations of the machined surfaces from nominal values at station k are defined by a series of 6-by-1 vectors named differential motion vectors (DMVs), in the form $\mathbf{x}_{k,i} = [(\mathbf{d}_i^R)^T, (\boldsymbol{\theta}_i^R)^T]^T$, where $\mathbf{d}_i^R = [d_{ix}^R, d_{iy}^R, d_{iz}^R]^T$ is the vector of translational variations and $\boldsymbol{\theta}_i^R$ the vector of orientation variations of surface i w.r.t. CS R.
SoV Based Quality Assurance for MMPs – Modeling and Planning 59
First, the variations of the datum surfaces used for locating the workpiece deviate the workpiece location on the machine-tool table. This term can be estimated as $\mathbf{x}_{k+1}^d = \mathbf{A}_k \cdot \mathbf{x}_k$, where $\mathbf{x}_k$ is the vector of part surface variations from upstream machining stations and $\mathbf{A}_k$ linearly relates the datum variations with the machined surface variations due to the locating deviation of the workpiece. Secondly, the fixture-induced variations deviate the workpiece location on the machine-tool table and thus a machined surface variation is produced after machining. This term can be estimated as $\mathbf{x}_{k+1}^f = \mathbf{B}_k^f \cdot \mathbf{u}_k^f$, where $\mathbf{u}_k^f$ is the vector that defines the KCCs related to fixture-induced variations and $\mathbf{B}_k^f$ is a matrix that linearly relates locator variations with variations of the machined surface.
Thirdly, the machining-induced variations, such as those due to geometrical and kinematic errors, tool-wear errors, etc., deviate the cutting-tool tip and thus the machined surface is deviated from its nominal values. This term is modeled as $\mathbf{x}_{k+1}^m = \mathbf{B}_k^m \cdot \mathbf{u}_k^m$, where $\mathbf{u}_k^m$ is the vector that defines the KCCs related to machining-induced variations and $\mathbf{B}_k^m$ is a matrix that linearly relates these KCCs with the machined surface variations.
$\mathbf{x}_{k+1}^d = \mathbf{A}_k \cdot \mathbf{x}_k, \qquad \mathbf{x}_{k+1}^f = \mathbf{B}_k^f \cdot \mathbf{u}_k^f, \qquad \mathbf{x}_{k+1}^m = \mathbf{B}_k^m \cdot \mathbf{u}_k^m$

$\mathbf{x}_{k+1} = \mathbf{A}_k \cdot \mathbf{x}_k + [\mathbf{B}_k^f \; \mathbf{B}_k^m] \cdot [(\mathbf{u}_k^f)^T \; (\mathbf{u}_k^m)^T]^T = \mathbf{A}_k \cdot \mathbf{x}_k + \mathbf{B}_k \cdot \mathbf{u}_k$
Fig. 2.5. Sources of variation and state space model formulation for station k
Therefore, for an N-station machining process the state space model can be defined in a generic form as

$\mathbf{x}_{k+1} = \mathbf{A}_k \cdot \mathbf{x}_k + \mathbf{B}_k \cdot \mathbf{u}_k + \mathbf{w}_k, \quad k = 1, \ldots, N,$   (2.4)

where $\mathbf{B}_k \cdot \mathbf{u}_k$ represents the variations introduced within station k due to the KCCs and is defined as $[\mathbf{B}_k^f \; \mathbf{B}_k^m] \cdot [(\mathbf{u}_k^f)^T \; (\mathbf{u}_k^m)^T]^T$; and $\mathbf{w}_k$ is the unmodeled system noise and linearization errors.
Eq. (2.4) shows the relationship between KCCs and KPCs along a MMP.
Considering that an inspection station is placed after station k − 1 in order to
verify if the workpiece/part is within specifications, then, following the state space
model formulation from control theory, the KPC measurements can be expressed as
y k = Ck ⋅ x k + v k , (2.5)
where y k represents the variations of the inspected KPCs; C k ⋅ x k are the linear
combinations of the variations of workpiece features after the kth station that
define the KPCs; and v k is the measurement noise of the inspection process.
Similar to x k , the vector y k is defined as [ y k ,1 ,…, y k ,q ,…, y k , M ]T , where y k , q is
the inspected variation of the qth KPC and M is the number of KPCs inspected.
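A minimal simulation of Eqs. (2.4) and (2.5) is sketched below; the system matrices are random placeholders (in a real SoV model they are derived from the fixture layout, datum scheme and machining-error models):

```python
import numpy as np

rng = np.random.default_rng(7)
N, n_x, n_u, n_y = 3, 6, 6, 2          # stations, state/KCC/KPC sizes (toy)

# Illustrative system matrices; placeholders only.
A = [0.8 * np.eye(n_x) for _ in range(N)]
B = [rng.normal(0.0, 0.1, (n_x, n_u)) for _ in range(N)]
C = rng.normal(0.0, 1.0, (n_y, n_x))

x = np.zeros(n_x)                       # no variation before station 1
for k in range(N):
    u_k = rng.normal(0.0, 0.01, n_u)    # KCC variations at station k
    w_k = rng.normal(0.0, 1e-4, n_x)    # unmodeled noise
    x = A[k] @ x + B[k] @ u_k + w_k     # Eq. (2.4)

v = rng.normal(0.0, 1e-4, n_y)          # inspection noise
y = C @ x + v                           # Eq. (2.5)
print("KPC variations after station N:", y)
```

The loop makes the propagation explicit: datum errors introduced at one station re-enter the model as part of the state at the next.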
For a MMP, Eqs. (2.4) and (2.5) form the generic math-based state space
representation, named in the literature as the SoV model, which allows for integration
of KPCs and KCCs through product and process information such as fixture layout,
part geometry, sequence of machining and inspection operations, etc. Based on this
model, the use of advanced control theory, multivariate statistics and Monte Carlo simulations enables a large number of applications along the product life cycle (Fig. 2.6).
In the literature, interesting research works can be found about manufacturing fault
identification [3-6], part quality estimation [1, 7], active control for quality variation
reduction [8-11] and process planning and process-oriented tolerancing [12-16].
The SoV model can be presented in its conventional version [1, 7, 17] and in its
extended version [18]. The conventional SoV model includes datum-, fixture- and
machining-induced variations but defines the machining-induced variations as a
generic cutting-tool path variation defined by three translational and three
orientation deviations. The extended SoV model expands the conventional model by including specific machining-induced variations due to geometric and kinematic errors, thermal distortions, cutting-tool wear, cutting-tool deflections, etc. In
the next subsections, the derivation of the extended SoV model will be introduced.
xk+1 = Ak ⋅ xk + Bk ⋅ uk + wk
y k = Ck ⋅ x k + v k
Fig. 2.6. Diagram of the SoV model derivation and its applications
Design Coordinate System (DCS). The nominal DCS, denoted as $^{\circ}D$, defines the reference for the workpiece features during design. The definition of $^{\circ}D$ usually depends on the nominal geometry of the part and it is usually placed at an accessible corner. As this CS is only used in design, it cannot deviate.
Reference Coordinate System (RCS). The nominal and true RCS, denoted as $^{\circ}R_k$ and $R_k$, respectively, define the reference for the workpiece features (Fig. 2.7d) at station k. To facilitate the model derivation, $^{\circ}R_k$ is defined as the local coordinate system of the primary datum feature at station k. In a 3-2-1 fixture layout, the primary datum is the main workpiece surface used to locate the part on the machine-tool table [19, Chapter 3]. The $R_k$ is defined similarly according to the actual part geometry.
Fixture Coordinate System (FCS). The nominal and true FCS at station k, denoted as $^{\circ}F_k$ and $F_k$, respectively, define the physical position and orientation of the fixture device according to the fixture layout. Fig. 2.7d shows the FCS for a fixture layout based on the 3-2-1 principle.
Local Coordinate System (LCS_j). The nominal and true LCS_j at station k, denoted as $^{\circ}L_k^j$ and $L_k^j$, respectively, define the physical position and orientation of the jth nominal and actual machined surface of the part (Fig. 2.7c). For planar surfaces the Z-axis of $^{\circ}L_k^j$ is commonly defined normal to the surface.
Machine-Tool Coordinate System (MCS). The MCS at station k, denoted as $M_k$, is defined with its Z-axis pointing upward, its X-axis parallel to the long axis of the table and pointing in its positive direction, and its Y-axis defined according to the right-hand rule, as shown in Fig. 2.7a. In this chapter, it is assumed that $M_k$ serves as the reference at station k and thus does not deviate.
Axis Coordinate System (ACS_i). The nominal and true ACS_i of the i-axis used at station k, denoted as $^{\circ}A_k^i$ and $A_k^i$, respectively, define the physical position and orientation of the i-axis of the machine-tool. The origin of $^{\circ}A_k^i$ is located at the geometrical center of the joint of the i-axis. For prismatic joints, the axes of $^{\circ}A_k^i$ have the same orientation as those of $M_k$. An example of the $A_k^i$'s of a 3-axis vertical machine-tool is shown in Fig. 2.7a. The $A_k^i$ is similarly defined for an actual axis.
Spindle Coordinate System (SCS). The nominal and true SCS at station k, denoted as $^{\circ}S_k$ and $S_k$, respectively, define the physical position and orientation of the spindle during machining. The origin of $^{\circ}S_k$ is located at the geometrical center of the spindle and the orientation of its axes is identical to that of the Z-axis of the machine-tool, as shown in Fig. 2.7b. The $S_k$ is defined similarly for the actual spindle.
Cutting-Tool Coordinate System (CCS). The nominal and true CCS at station k, denoted as $^{\circ}C_k$ and $C_k$, respectively, define the physical position and orientation of the cutter tip center during machining. The origin of $^{\circ}C_k$ is located at the cutter tip center and the orientation of its axes is identical to that of $^{\circ}S_k$, as shown in Fig. 2.7c. The $C_k$ is defined similarly for the actual cutting-tool.
Cutting-Tool Tip Coordinate System (TPCS). The nominal and true TPCS at station k, denoted as $^{\circ}P_k$ and $P_k$, respectively, define the physical position and orientation of the cutting-tool tip. The origin of $^{\circ}P_k$ is located at the center of the cutting edge that is used to generate feature j, and the orientation of its axes is identical to that of $^{\circ}L_k^j$. Please note that, when machining feature j at station k, the cutting-tool tip removes material, generating the machined feature defined by $L_k^j$. Thus, the position and orientation of $P_k$ defines the position and orientation of $L_k^j$, as shown in Fig. 2.7c.
[Figure omitted: (a) machine-tool structure (bed, saddle, table, column, spindle head) and the axis CSs $A_k^x$, $A_k^y$, $A_k^z$; (b) spindle CS $S_k$ and an $A_k^i$ at a prismatic joint; (c) cutting-tool CSs $S_k$, $C_k$, $P_k$ and $L_k^j$ for frontal and peripheral machining operations; (d) workpiece, fixture CS $F_k$, reference CS $R_k$ and design CS $^{\circ}D$.]

Fig. 2.7. Example of the CSs involved in a 3-axis vertical machine-tool
Fig. 2.8. Relationships between the different CSs in a 3-axis vertical machine-tool
The first chain, defined from $M_k$ to $P_k$, represents how machining-induced variations deviate the cutting-tool tip w.r.t. the machine-tool CS. The most common machining-induced variations are due to geometric and kinematic errors of machine-tool axes, thermal distortions, cutting-tool deflections and cutting-tool wear, which induce deviations of the CSs $A_k^i$, $S_k$, $C_k$ and $P_k$ from their respective nominal values. The second chain, defined from $M_k$ to $R_k$, represents how fixture- and datum-induced variations deviate the workpiece location w.r.t. the MCS. Note that in order to represent one CS w.r.t. another CS, a homogeneous transformation matrix (HTM) is used. How to derive the HTMs is explained in Appendix 2.1.
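As a small illustration of how HTMs chain CSs together (a generic robotics-style construction, not the chapter's Appendix 2.1 derivation), two transformations are composed and used to express a point of one CS in another:

```python
import numpy as np

def htm(R, t):
    """Build a 4x4 homogeneous transformation matrix from a 3x3 rotation R
    and a translation vector t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Chain two CSs: H_C_A = H_B_A @ H_C_B maps C-coordinates to A-coordinates.
H_B_A = htm(rot_z(np.pi / 2), np.array([1.0, 0.0, 0.0]))   # B posed in A
H_C_B = htm(np.eye(3), np.array([0.0, 2.0, 0.0]))          # C posed in B
H_C_A = H_B_A @ H_C_B

p_C = np.array([0.0, 0.0, 0.0, 1.0])    # origin of C, homogeneous coords
p_A = H_C_A @ p_C                        # origin of C seen from A
print(p_A)
```

Chaining HTMs in exactly this way is what links the MCS to the cutting-tool tip on one side and to the workpiece datums on the other.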
In order to derive the machined surface variation as a function of the variations
of all CSs involved in these chains, we consider the following Corollary from [1].
Corollary 1: Consider the CSs R, 1 and 2 as shown in Fig. 2.9. Consider now that CS 1 and CS 2 are deviated from their nominal values. Noting the variations of CS 1 and CS 2 w.r.t. R as $\mathbf{x}_1^R$ and $\mathbf{x}_2^R$, respectively, the variation of CS 2 w.r.t. CS 1 in vector form can be formulated as
$\mathbf{x}_2^1 = \begin{pmatrix} -(\mathbf{R}_2^1)^T & (\mathbf{R}_2^1)^T \hat{\mathbf{t}}_2^1 & \mathbf{I}_{3\times3} & \mathbf{0} \\ \mathbf{0} & -(\mathbf{R}_2^1)^T & \mathbf{0} & \mathbf{I}_{3\times3} \end{pmatrix} \cdot \begin{pmatrix} \mathbf{x}_1^R \\ \mathbf{x}_2^R \end{pmatrix} = \mathbf{D}_2^1 \cdot \mathbf{x}_1^R + \mathbf{x}_2^R,$   (2.6)
where $\mathbf{R}_2^1$ is the rotation matrix of 2 w.r.t. 1, $\mathbf{I}_{3\times3}$ is a $3\times3$ identity matrix and $\hat{\mathbf{t}}_2^1$ is the skew matrix of the vector $\mathbf{t}_2^1$ (see Appendix 2.2). The proof of this Corollary can be found in [1].
Applying Corollary 1, the machined surface variation at station k, defined as the variation of the CS $L_k^j$ (denoted as $L_k$ hereafter) w.r.t. $R_k$ and denoted as $\mathbf{x}_{L_k}^{R_k}$, can be obtained as

$\mathbf{x}_{L_k}^{R_k} = \mathbf{D}_{L_k}^{R_k} \cdot \mathbf{x}_{R_k}^{M_k} + \mathbf{x}_{L_k}^{M_k},$   (2.7)

where $\mathbf{x}_{R_k}^{M_k}$ represents the variation of the position and orientation of the workpiece w.r.t. the MCS due to fixture- and datum-induced variations, and $\mathbf{x}_{L_k}^{M_k}$ represents the overall cutting-tool path variation due to machining-induced variations when manufacturing feature j. Note that, according to the previous CS definitions, $\mathbf{x}_{L_k}^{M_k} \equiv \mathbf{x}_{P_k}^{M_k}$.
As shown in Fig. 2.8, $\mathbf{x}_{R_k}^{M_k}$ depends on the variations of the CSs that define the chain from $M_k$ to $R_k$. Similarly, $\mathbf{x}_{P_k}^{M_k}$ depends on the variations of the CSs in the chain from $M_k$ to $P_k$.
Fig. 2.9. Differential motion vector from CS 2 to CS 1 if both CSs deviate from nominal
values
Corollary 2: Consider the CSs R, 1 and 2 as shown in Fig. 2.9, with CSs 1 and 2 deviating from their nominal positions and orientations. Noting the variation of CS 1 w.r.t. R as $\mathbf{x}_1^R$ and the variation of CS 2 w.r.t. CS 1 as $\mathbf{x}_2^1$, the variation of CS 2 w.r.t. R can be formulated as

$\mathbf{x}_2^R = \begin{pmatrix} (\mathbf{R}_2^1)^T & -(\mathbf{R}_2^1)^T \hat{\mathbf{t}}_2^1 & \mathbf{I}_{3\times3} & \mathbf{0} \\ \mathbf{0} & (\mathbf{R}_2^1)^T & \mathbf{0} & \mathbf{I}_{3\times3} \end{pmatrix} \cdot \begin{pmatrix} \mathbf{x}_1^R \\ \mathbf{x}_2^1 \end{pmatrix} = \mathbf{T}_2^1 \cdot \mathbf{x}_1^R + \mathbf{x}_2^1.$   (2.8)
The proof of this Corollary can be found in [1]. Applying Corollary 2 repeatedly, the DMVs $\mathbf{x}_{R_k}^{M_k}$ and $\mathbf{x}_{P_k}^{M_k}$ can be rewritten as

$\mathbf{x}_{R_k}^{M_k} = \mathbf{T}_{R_k}^{F_k} \cdot \mathbf{x}_{F_k}^{M_k} + \mathbf{x}_{R_k}^{F_k},$   (2.9)

$\mathbf{x}_{P_k}^{M_k} = \mathbf{T}_{P_k}^{C_k} \cdot \left(\mathbf{T}_{C_k}^{S_k} \cdot \left(\mathbf{T}_{S_k}^{A_k^z} \cdot \mathbf{x}_{A_k^z}^{M_k} + \mathbf{x}_{S_k}^{A_k^z}\right) + \mathbf{x}_{C_k}^{S_k}\right) + \mathbf{x}_{P_k}^{C_k}.$   (2.10)

Substituting Eqs. (2.10) and (2.9) into Eq. (2.7), the variation of the feature $L_k$ w.r.t. $R_k$ can be expressed as a function of the DMVs of all the CSs involved between $L_k$ and the $R_k$ CS, such as $\mathbf{x}_{A_k^z}^{M_k}$, $\mathbf{x}_{S_k}^{A_k^z}$, $\mathbf{x}_{C_k}^{S_k}$ and $\mathbf{x}_{P_k}^{C_k}$.
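Under the block-matrix forms of Eqs. (2.6) and (2.8), Corollary 2 algebraically inverts Corollary 1 ($\mathbf{T}_2^1 = -\mathbf{D}_2^1$), which can be checked numerically; the rotation, translation and DMVs below are arbitrary illustrative values:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix of a 3-vector (the t-hat operator)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

rng = np.random.default_rng(3)
a = 0.7                                  # nominal rotation of CS 2 w.r.t. CS 1
R21 = np.array([[np.cos(a), -np.sin(a), 0.0],
                [np.sin(a),  np.cos(a), 0.0],
                [0.0, 0.0, 1.0]])
t21 = rng.normal(size=3)                 # nominal translation t_2^1
I, Z = np.eye(3), np.zeros((3, 3))

D = np.block([[-R21.T,  R21.T @ skew(t21)],
              [Z,       -R21.T]])        # block matrix of Eq. (2.6)
T = np.block([[R21.T,  -R21.T @ skew(t21)],
              [Z,       R21.T]])         # block matrix of Eq. (2.8)

x1R = rng.normal(size=6)                 # DMV of CS 1 w.r.t. R
x2R = rng.normal(size=6)                 # DMV of CS 2 w.r.t. R
x21 = D @ x1R + x2R                      # Corollary 1
x2R_rec = T @ x1R + x21                  # Corollary 2 recovers x_2^R
print(np.allclose(x2R_rec, x2R), np.allclose(T, -D))
```

This self-consistency is what allows the two corollaries to be applied back and forth along the CS chains of Fig. 2.8.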
The DMV $\mathbf{x}_{F_k}^{M_k}$ refers to the variations of the FCS due to fixture inaccuracies, and it can also be described as the DMV $\mathbf{x}_{F_k}^{F_k}$ since $M_k$ is not deviated. In order to derive $\mathbf{x}_{F_k}^{F_k}$ as a function of the fixture layout and fixture variations, the following assumptions are considered: i) locating surfaces are assumed to be perfect in form (without form errors) and there are no deformations of locators; ii) locators are assumed to be point contacts distributed according to the 3-2-1 workholding principle [19, Chapter 3]; iii) only variations based on small displacements from nominal values are considered.
Let us consider a 3-2-1 fixture device where $\mathbf{r}_i$ defines the contact point between the ith locator and the workpiece, and $\mathbf{n}_i$ defines the normal vector of the workpiece surface at the ith contact point, both expressed w.r.t. $F_k$. Following the research work in [20], a small perturbation in the location of $F_k$ ($\mathbf{x}_{F_k}^{F_k}$) due to a small variation of a fixture locator in its direction of movement constraint (the direction normal to the locating surface, defined by $\mathbf{n}_i$), denoted as $\Delta l_i$, can be mathematically expressed as

$\Delta l_i = \mathbf{w}_i^T \cdot \mathbf{x}_{F_k}^{F_k},$   (2.11)

where $\mathbf{w}_i$ is defined by the locator position $\mathbf{r}_i$ and the normal $\mathbf{n}_i$, and $\Delta \mathbf{r}_i$ is the position variation of the locator i. The deterministic localization
condition (a unique solution of $\mathbf{x}_{F_k}^{F_k}$) requires that Eq. (2.11) be satisfied for all locators. Thus, considering all locators, Eq. (2.11) becomes

$\Delta\mathbf{l} = \mathbf{G}^T \cdot \mathbf{x}_{F_k}^{F_k},$   (2.14)

$\mathbf{x}_{F_k}^{F_k} = (\mathbf{G}^T)^{-1} \cdot \Delta\mathbf{l}.$   (2.15)
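A hedged numeric sketch of Eqs. (2.11)–(2.15): assuming the standard locator-constraint form $\mathbf{w}_i = [\mathbf{n}_i^T, (\mathbf{r}_i \times \mathbf{n}_i)^T]^T$ (the chapter's exact definition of $\mathbf{w}_i$ is not reproduced here) and an invented 3-2-1 layout, the fixture DMV is recovered from locator variations:

```python
import numpy as np

# Invented 3-2-1 layout: three locators on z=0 (normals +z), two on y=0 (+y),
# one on x=0 (+x); all coordinates expressed w.r.t. the FCS.
r = np.array([[0, 0, 0], [4, 0, 0], [0, 3, 0],    # primary datum
              [1, 0, 1], [3, 0, 1],               # secondary datum
              [0, 1, 1]], dtype=float)            # tertiary datum
n = np.array([[0, 0, 1]] * 3 + [[0, 1, 0]] * 2 + [[1, 0, 0]], dtype=float)

# Assumed constraint vectors w_i = [n_i ; r_i x n_i], stacked as columns of G.
G = np.column_stack([np.concatenate([n[i], np.cross(r[i], n[i])])
                     for i in range(6)])

# Deterministic location: G nonsingular -> unique DMV for given locator errors.
dl = np.array([0.02, 0.0, -0.01, 0.0, 0.01, 0.0])  # locator variations
x_F = np.linalg.solve(G.T, dl)                     # Eq. (2.15)
print("fixture DMV [d; theta]:", x_F)
```

If the six locators did not constrain all six degrees of freedom, $\mathbf{G}^T$ would be singular and the localization would not be deterministic.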
For the generic 3-2-1 locating scheme shown in Fig. 2.10a, the matrix $\mathbf{B}_k^{f_1}$ can be straightforwardly derived following the methodology explained above, resulting in the matrix

$\mathbf{B}_k^{f_1} = \begin{pmatrix}
\frac{(l_{2y}-l_{3y})\,l_{5z}}{C} & \frac{-(l_{1y}-l_{3y})\,l_{5z}}{C} & \frac{(-l_{2y}+l_{1y})\,l_{5z}}{C} & \frac{-l_{5y}}{-l_{5y}+l_{4y}} & \frac{l_{4y}}{-l_{5y}+l_{4y}} & 0 \\
\frac{-(l_{2x}-l_{3x})\,l_{6z}}{C} & \frac{(l_{1x}-l_{3x})\,l_{6z}}{C} & \frac{-(-l_{2x}+l_{1x})\,l_{6z}}{C} & \frac{l_{6x}}{-l_{5y}+l_{4y}} & \frac{-l_{6x}}{-l_{5y}+l_{4y}} & 1 \\
\frac{l_{3y}l_{2x}-l_{2y}l_{3x}}{C} & \frac{-(-l_{3x}l_{1y}+l_{3y}l_{1x})}{C} & \frac{-l_{1y}l_{2x}+l_{2y}l_{1x}}{C} & 0 & 0 & 0 \\
\frac{-(l_{2x}-l_{3x})}{C} & \frac{l_{1x}-l_{3x}}{C} & \frac{-(-l_{2x}+l_{1x})}{C} & 0 & 0 & 0 \\
\frac{-(l_{2y}-l_{3y})}{C} & \frac{l_{1y}-l_{3y}}{C} & \frac{-(-l_{2y}+l_{1y})}{C} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{-1}{-l_{5y}+l_{4y}} & \frac{1}{-l_{5y}+l_{4y}} & 0
\end{pmatrix}$   (2.17)
Fig. 2.10. a) 3-2-1 fixture layout based on locators; b) Workpiece datums (primary, second-
ary and tertiary). CSs centered on each face.
For 3-2-1 fixture layouts based on locators (Fig. 2.10a), the deterministic location of the part is ensured when the workpiece touches the six locators. Due to datum inaccuracies, the workpiece location can deviate from its nominal values. Assuming prismatic surfaces, the influence of datum variations on workpiece location can be obtained as follows [1].
• Locators $l_4$ and $l_5$ touch workpiece surface $B$, and thus the third components of the coordinates of $l_4$ and $l_5$ w.r.t. the CS $B$ are zero. Mathematically, this is expressed as $[\tilde{\mathbf{p}}^B_{l_4}]_{(3)} = 0$ and $[\tilde{\mathbf{p}}^B_{l_5}]_{(3)} = 0$, where $\tilde{\mathbf{p}}^B_{l_i} = [l^B_{ix}, l^B_{iy}, l^B_{iz}, 1]^T$ and $[\cdot]_{(3)}$ denotes the third component of the vector.
• Similarly, locator $l_6$ touches workpiece surface $C$, so $[\tilde{\mathbf{p}}^C_{l_6}]_{(3)} = 0$.
$$\tilde{\mathbf{p}}^B_{l_5} = \mathbf{H}^B_A \cdot \mathbf{H}^A_F \cdot \tilde{\mathbf{p}}^F_{l_5}, \qquad (2.19)$$

$$\tilde{\mathbf{p}}^C_{l_6} = \mathbf{H}^C_A \cdot \mathbf{H}^A_F \cdot \tilde{\mathbf{p}}^F_{l_6}. \qquad (2.20)$$
$$\mathbf{H}^A_B = (\delta\mathbf{H}^A_B)^{-1} \cdot \bar{\mathbf{H}}^A_B = (\mathbf{I}_{4\times 4} - \boldsymbol{\Delta}^A_B) \cdot \bar{\mathbf{H}}^A_B, \qquad (2.21)$$
where $\delta\mathbf{H}^A_B$ is the HTM that defines the small translational and orientational deviations of CS $B$ w.r.t. $A$ due to the variation of $B$ and $A$ from nominal values; $\bar{\mathbf{H}}^A_B$ is the corresponding nominal HTM; and $\boldsymbol{\Delta}^A_B$ is the differential transformation matrix (DTM) of $B$ w.r.t. $A$ (see Appendix 2.2).
$\mathbf{H}^A_F$ is defined as

$$\mathbf{H}^A_F = \bar{\mathbf{H}}^A_F \cdot \delta\mathbf{H}^A_F = \bar{\mathbf{H}}^A_F \cdot (\mathbf{I}_{4\times 4} + \boldsymbol{\Delta}^A_F). \qquad (2.22)$$
By neglecting second-order small values and considering the contact between surfaces ($[\tilde{\mathbf{p}}^B_{l_4}]_{(3)} = 0$), the following equation applies

$$[\tilde{\mathbf{p}}^B_{l_4}]_{(3)} = \Big[(-\boldsymbol{\Delta}^A_B \cdot \mathbf{H}^B_F + \mathbf{H}^B_F \cdot \boldsymbol{\Delta}^A_F + \mathbf{H}^B_F) \cdot \tilde{\mathbf{p}}^F_{l_4}\Big]_{(3)} = 0. \qquad (2.24)$$
As the X coordinate of the location of locator 4 w.r.t. the CS $F$ is zero, through the HTM $\mathbf{H}^B_F$ the term $[\mathbf{H}^B_F \cdot \tilde{\mathbf{p}}^F_{l_4}]_{(3)}$ becomes zero, and thus Eq. (2.24) is rewritten as
$$\Big[(-\boldsymbol{\Delta}^A_B \cdot \mathbf{H}^B_F + \mathbf{H}^B_F \cdot \boldsymbol{\Delta}^A_F) \cdot \tilde{\mathbf{p}}^F_{l_4}\Big]_{(3)} = 0, \qquad (2.25)$$

and thus,

$$\Big[(\mathbf{H}^B_F \cdot \boldsymbol{\Delta}^A_F) \cdot \tilde{\mathbf{p}}^F_{l_4}\Big]_{(3)} = \Big[\boldsymbol{\Delta}^A_B \cdot \mathbf{H}^B_F \cdot \tilde{\mathbf{p}}^F_{l_4}\Big]_{(3)}. \qquad (2.26)$$
Following the same procedure, Eqs. (2.27) and (2.28) can be derived for locators $l_5$ and $l_6$, respectively:

$$\Big[(\mathbf{H}^B_F \cdot \boldsymbol{\Delta}^A_F) \cdot \tilde{\mathbf{p}}^F_{l_5}\Big]_{(3)} = \Big[\boldsymbol{\Delta}^A_B \cdot \mathbf{H}^B_F \cdot \tilde{\mathbf{p}}^F_{l_5}\Big]_{(3)}, \qquad (2.27)$$

$$\Big[(\mathbf{H}^C_F \cdot \boldsymbol{\Delta}^A_F) \cdot \tilde{\mathbf{p}}^F_{l_6}\Big]_{(3)} = \Big[\boldsymbol{\Delta}^A_C \cdot \mathbf{H}^C_F \cdot \tilde{\mathbf{p}}^F_{l_6}\Big]_{(3)}. \qquad (2.28)$$
Note that in Eqs. (2.26)-(2.28) we are interested in evaluating the DTM $\boldsymbol{\Delta}^A_F$, which shows the effect of datum variations on workpiece location. Furthermore, note that of the six parameters of $\boldsymbol{\Delta}^A_F$ (three translational and three orientational deviations), only three are unknown, since datum variations of surfaces $B$ and $C$ only influence the X and Y positioning of the workpiece and the rotation about the Z-axis, all expressed from CS $A$. Thus, the three unknown parameters that depend on datum variations of surfaces $B$ and $C$ can be obtained by solving Eqs. (2.26)-(2.28). After solving, the variation of the FCS w.r.t. CS $A$ can be expressed in terms of the effects of each datum feature in vector form as

$$\mathbf{x}^A_F = \mathbf{A}_1 \cdot \mathbf{x}^A_B + \mathbf{A}_2 \cdot \mathbf{x}^A_C. \qquad (2.29)$$
For the 3-2-1 locating scheme based on locators shown in Fig. 2.10a, Eqs. (2.26)-(2.28) were solved, yielding the following matrices $\mathbf{A}^1_k$ and $\mathbf{A}^2_k$. These results can also be found in [1].
$$\mathbf{A}^1_k = \begin{pmatrix}
0 & 0 & -1 & L_E/2 & l_{5z} + L_F/2 + \dfrac{l_{5y}\,(l_{5z}-l_{4z})}{l_{4y}-l_{5y}} & 0 \\
0 & 0 & 0 & \dfrac{l_{6x}\,(l_{4z}-l_{5z})}{l_{4y}-l_{5y}} - l_{6x} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -\dfrac{l_{4z}-l_{5z}}{l_{4y}-l_{5y}} & 1 & 0
\end{pmatrix}, \qquad (2.31)$$
$$\mathbf{A}^2_k = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & -l_{6z} - L_F/2 & l_{6x} - L_D/2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}. \qquad (2.32)$$
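A quick numerical sketch of Eq. (2.29) with the matrices of Eqs. (2.31)-(2.32) follows. All locator coordinates and part dimensions ($L_E$, $L_F$, $L_D$) are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical locator coordinates (mm) and part dimensions, chosen only
# to illustrate Eqs. (2.29), (2.31) and (2.32).
l4y, l4z = 50.0, -100.0
l5y, l5z = 250.0, -100.0
l6x, l6z = 125.0, -100.0
LE, LF, LD = 300.0, 200.0, 250.0

A1 = np.zeros((6, 6))               # Eq. (2.31)
A1[0, 2] = -1.0
A1[0, 3] = LE / 2
A1[0, 4] = l5z + LF / 2 + l5y * (l5z - l4z) / (l4y - l5y)
A1[1, 3] = l6x * (l4z - l5z) / (l4y - l5y) - l6x
A1[5, 3] = -(l4z - l5z) / (l4y - l5y)
A1[5, 4] = 1.0

A2 = np.zeros((6, 6))               # Eq. (2.32)
A2[1, 2] = -1.0
A2[1, 3] = -l6z - LF / 2
A2[1, 4] = l6x - LD / 2

def workpiece_location_deviation(x_B, x_C):
    """Eq. (2.29): x_F^A = A1 * x_B^A + A2 * x_C^A."""
    return A1 @ x_B + A2 @ x_C

# A 50 um deviation of datum B along its normal, with a perfect datum C,
# shifts the FCS along X only (for this particular layout).
x_B = np.array([0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
x_F = workpiece_location_deviation(x_B, np.zeros(6))
print(x_F)
```

Only the translational X/Y components and the rotation about Z can become non-zero, which is consistent with the three-unknown argument made above.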
$$\mathbf{H}^{M_k}_{A^z_k} = \bar{\mathbf{H}}^{M_k}_{A^z_k} \cdot \delta\mathbf{H}^{M_k}_{A^z_k} = \bar{\mathbf{H}}^{M_k}_{A^x_k} \cdot \delta\mathbf{H}^{M_k}_{A^x_k} \cdot \bar{\mathbf{H}}^{A^x_k}_{A^y_k} \cdot \delta\mathbf{H}^{A^x_k}_{A^y_k} \cdot \bar{\mathbf{H}}^{A^y_k}_{A^z_k} \cdot \delta\mathbf{H}^{A^y_k}_{A^z_k}. \qquad (2.33)$$
SoV Based Quality Assurance for MMPs – Modeling and Planning 73
$$\delta\mathbf{H}^{A_j}_{A^i_k} = \begin{pmatrix} 1 & -\varepsilon_{zi} & \varepsilon_{yi} & \delta_{xi} \\ \varepsilon_{zi} & 1 & -\varepsilon_{xi} & \delta_{yi} \\ -\varepsilon_{yi} & \varepsilon_{xi} & 1 & \delta_{zi} \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & -\varepsilon_z(i) & \varepsilon_y(i) & \delta_x(i) \\ \varepsilon_z(i) & 1 & -\varepsilon_x(i) & \delta_y(i) \\ -\varepsilon_y(i) & \varepsilon_x(i) & 1 & \delta_z(i) \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & -\varepsilon^t_z(\cdot) & \varepsilon^t_y(\cdot) & \delta^t_x(\cdot) \\ \varepsilon^t_z(\cdot) & 1 & -\varepsilon^t_x(\cdot) & \delta^t_y(\cdot) \\ -\varepsilon^t_y(\cdot) & \varepsilon^t_x(\cdot) & 1 & \delta^t_z(\cdot) \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad (2.34)$$

where $(\cdot)$ abbreviates $(t, T_1, \ldots, T_m, i)$.
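The structure of Eq. (2.34) — three small-motion HTMs multiplied together — can be illustrated with a short sketch. The error magnitudes below are hypothetical; the point is that, for small errors, the product is well approximated by identity plus the sum of the individual deviations, which is what justifies the linearization discussed further on.

```python
import numpy as np

def small_motion_htm(eps, delta):
    """HTM for small rotations eps = (ex, ey, ez) and offsets delta,
    using the first-order form cos(e) ~ 1, sin(e) ~ e of Eq. (2.34)."""
    ex, ey, ez = eps
    return np.array([
        [1.0, -ez,  ey, delta[0]],
        [ ez, 1.0, -ex, delta[1]],
        [-ey,  ex, 1.0, delta[2]],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Mounting, motional and thermal error terms (hypothetical magnitudes:
# rotations in rad, offsets in mm).
H_mount   = small_motion_htm((1e-4, -2e-4, 5e-5), (0.010, -0.005, 0.002))
H_motion  = small_motion_htm((2e-5, 1e-5, -3e-5), (0.001, 0.002, -0.001))
H_thermal = small_motion_htm((0.0, 0.0, 1e-5), (0.000, 0.000, 0.015))

# Total axis error as in Eq. (2.34): product of the three HTMs.
dH = H_mount @ H_motion @ H_thermal

# To first order the product is identity plus the sum of the individual
# deviations; the discarded cross terms are of second order (< 1e-5 here).
approx = np.eye(4) + sum(H - np.eye(4) for H in (H_mount, H_motion, H_thermal))
print(np.allclose(dH, approx, atol=1e-5))
```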
The first HTM describes the mounting errors of the i-axis w.r.t. the previous j-
axis. The mounting errors are position and orientation errors due to assembly
errors and they are not dependent on the carriage position. Mounting errors can be
represented by three possible angular variations, $\varepsilon_{xi}$ (rotation around the X-axis), $\varepsilon_{yi}$ (rotation around the Y-axis) and $\varepsilon_{zi}$ (rotation around the Z-axis), and three offsets ($\delta_{xi}$, $\delta_{yi}$, $\delta_{zi}$), as shown in Fig. 2.11 for the X-axis carriage. The
second HTM represents the motional variations, which include the terms δ p (q)
and ε p (q ) . δ p (q) refers to the positional variation in the p-axis direction when
the prismatic joint moves along the q-axis and is a function of the position of the
q-axis. ε p (q ) refers to the angular variation around the p-axis when the q-axis
moves and it is also a function of the position of the q-axis. The third HTM
describes the geometrical variations due to thermal effects, whose components are
defined as δ tp (t , T1 ,..., Tm , q) and ε tp (t , T1 ,..., Tm , q) for position and angular
variations around the p-axis when the q-axis moves, respectively, and include
scalar thermal components and position-dependent thermal components [23].
Mathematically, $\delta^t_p(t, T_1, \ldots, T_m, q)$ and $\varepsilon^t_p(t, T_1, \ldots, T_m, q)$ are generally defined as

$$\delta^t_p(t, T_1, \ldots, T_m, q) = f^{pq}_0(T_1, \ldots, T_m, t) + f^{pq}_1(T_1, \ldots, T_m, t) \cdot q + f^{pq}_2(T_1, \ldots, T_m, t) \cdot q^2 + \ldots,$$
$$\varepsilon^t_p(t, T_1, \ldots, T_m, q) = g^{pq}_0(T_1, \ldots, T_m, t) + g^{pq}_1(T_1, \ldots, T_m, t) \cdot q + g^{pq}_2(T_1, \ldots, T_m, t) \cdot q^2 + \ldots \qquad (2.35)$$

The terms $f^{pq}_0(T_1, \ldots, T_m, t)$ and $g^{pq}_0(T_1, \ldots, T_m, t)$ are scalar thermal components that model, respectively, the position and angular variation on the $p$-axis when the $q$-axis moves, and are functions of the operation time, $t$, and the temperatures $T_1, \ldots, T_m$ at different locations on the machine-tool structure. The position-dependent thermal components are defined by the terms $f^{pq}_1(T_1, \ldots, T_m, t) \cdot q + f^{pq}_2(T_1, \ldots, T_m, t) \cdot q^2 + \ldots$ and $g^{pq}_1(T_1, \ldots, T_m, t) \cdot q + g^{pq}_2(T_1, \ldots, T_m, t) \cdot q^2 + \ldots$
From Eq. (2.34), it can be seen that geometrical variations due to kinematic and thermal effects may present non-linear relationships. In order to include these sources of variation in the SoV model, a linearization should be conducted based on three important assumptions. Firstly, it is assumed that the geometric-thermal variations are modeled after the machine-tool has adequately warmed up, so the effect of time on the thermal variations can be neglected. Secondly, it is assumed that the workpiece is repeatedly placed in the same region inside the allowable work space of the machine-tool table, so only small variations in the placement of the workpiece are expected. Thirdly, it is assumed that geometric, kinematic and thermal variations do not change drastically along the travel of any $i$-axis within the region where the workpiece is repeatedly placed on the machine-tool table (the experimentation in [23] supports this assumption), so the geometric-thermal variations in the machine-tool axes can be linearized without
significant loss of precision. Under these assumptions, the motional variations $\delta_p(q)$ and $\varepsilon_p(q)$ are linearized as

$$\delta_p(q_0) + \frac{\partial \delta_p(q)}{\partial q}\bigg|_{q=q_0} \cdot \Delta q \quad \text{and} \quad \varepsilon_p(q_0) + \frac{\partial \varepsilon_p(q)}{\partial q}\bigg|_{q=q_0} \cdot \Delta q,$$

respectively, where $q_0$ is the nominal placement of the workpiece on the $q$-axis, and $\Delta q$ is the admissible variation range of the workpiece placement along the $q$-axis. The thermal-induced variations $\delta^t_p(t, T_1, \ldots, T_m, q)$ and $\varepsilon^t_p(t, T_1, \ldots, T_m, q)$ from Eq. (2.35) can be linearized as

$$\delta^t_p(\Delta T_1, \ldots, \Delta T_m, \Delta q) = C^{pq}_0 + C^{pq}_1 \cdot \Delta T_1 + \ldots + C^{pq}_m \cdot \Delta T_m + C^{pq}_{m+1} \cdot \Delta q,$$
$$\varepsilon^t_p(\Delta T_1, \ldots, \Delta T_m, \Delta q) = D^{pq}_0 + D^{pq}_1 \cdot \Delta T_1 + \ldots + D^{pq}_m \cdot \Delta T_m + D^{pq}_{m+1} \cdot \Delta q, \qquad (2.36)$$
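The coefficients $C^{pq}_{(\cdot)}$ and $D^{pq}_{(\cdot)}$ of Eq. (2.36) are typically identified from measurements. A minimal sketch of such an identification by ordinary least squares, on synthetic data with made-up coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measured" positional thermal error, linear in two structure
# temperature rises and in the axis position q. The true coefficients
# C0..C3 are made up for the sketch.
C_true = np.array([0.004, 0.0011, 0.0007, 2.0e-5])

dT1 = rng.uniform(0.0, 10.0, 200)     # temperature rises (K)
dT2 = rng.uniform(0.0, 10.0, 200)
dq  = rng.uniform(-20.0, 20.0, 200)   # placement variation (mm)
X = np.column_stack([np.ones_like(dq), dT1, dT2, dq])
delta_t = X @ C_true + rng.normal(0.0, 1e-4, 200)  # + measurement noise

# Identify the coefficients of Eq. (2.36) by ordinary least squares.
C_hat, *_ = np.linalg.lstsq(X, delta_t, rcond=None)
print(np.round(C_hat, 4))
```

With 200 samples and small noise, the fitted coefficients match the generating ones closely; the same regression applies to the angular coefficients $D^{pq}_{(\cdot)}$.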
where $C^{pq}_{(\cdot)}$ and $D^{pq}_{(\cdot)}$ are constants, and $\Delta T_c$ is the variation of the $c$th temperature at the corresponding location on the machine-tool structure.
$$\delta\mathbf{H}^{M_k}_{A^z_k} = \begin{pmatrix} 1 & -\theta^{M_k}_{A^z_k z} & \theta^{M_k}_{A^z_k y} & d^{M_k}_{A^z_k x} \\ \theta^{M_k}_{A^z_k z} & 1 & -\theta^{M_k}_{A^z_k x} & d^{M_k}_{A^z_k y} \\ -\theta^{M_k}_{A^z_k y} & \theta^{M_k}_{A^z_k x} & 1 & d^{M_k}_{A^z_k z} \\ 0 & 0 & 0 & 1 \end{pmatrix}. \qquad (2.37)$$
Fig. 2.11. Position and orientation deviations of a machine-tool carriage system due to
mounting errors
The spindle thermal variations are an important contributor to the total thermal-
induced variations during machining due to the large amounts of heat generated at
high-speed revolutions [27]. Spindle thermal expansion produces three
translational and two rotational drifts to the spindle CS [23]. This variation is
represented as the deviation of S k w.r.t. Akz and is proportional to the increase
of the spindle temperature, denoted as ΔTs , from nominal conditions. At station k,
this variation is defined by the DMV
$$\mathbf{x}^{A^z_k}_{S_k} = \big[f^k_1(\Delta T_{sk})\;\; f^k_2(\Delta T_{sk})\;\; f^k_3(\Delta T_{sk})\;\; f^k_4(\Delta T_{sk})\;\; f^k_5(\Delta T_{sk})\;\; f^k_6(\Delta T_{sk})\big]^T \approx \big[Cf^k_x\;\; Cf^k_y\;\; Cf^k_z\;\; Cf^k_\alpha\;\; Cf^k_\beta\;\; 0\big]^T \cdot \Delta T_{sk} = \mathbf{B}^m_{k2} \cdot \Delta T_{sk}. \qquad (2.39)$$
$$\delta_r = \frac{F \cdot L^3}{3 \cdot E \cdot I} = \frac{64 \cdot F \cdot L^3}{3 \cdot \pi \cdot E \cdot D^4}, \qquad (2.40)$$
where $E$ is the Young's modulus of the tool material; $L^3/D^4$ is the tool slenderness parameter, where $D$ is the equivalent tool diameter [29] and $L$ is the overhang length; and $F$ is the cutting force perpendicular to the tool axis. Furthermore, the rotation of the tool tip around the $\theta$-axis perpendicular to the cutting-tool axis is defined as [31, Chapter 7]
$$\delta_\theta = \frac{F \cdot L^2}{2 \cdot E \cdot I} = \frac{64 \cdot F \cdot L^2}{2 \cdot \pi \cdot E \cdot D^4}, \qquad (2.41)$$
where $F$ is the force applied at the tool tip perpendicular to the plane defined by the $\theta$-axis and the cutting-tool axis. As a result, $C_k$ deviates due to the cutting force-induced deflection. The variation of $C_k$ w.r.t. $S_k$ can be expressed by the DMV

$$\mathbf{x}^{S_k}_{C_k} = \mathbf{B}^m_{k3} \cdot [\Delta F_{xk}\;\; \Delta F_{yk}]^T, \qquad (2.42)$$

where $\mathbf{B}^m_{k3} = [C_1\ 0;\ 0\ C_1;\ 0\ 0;\ 0\ C_2;\ C_2\ 0;\ 0\ 0]$, and $C_1$ and $C_2$ are defined as $C_1 = \frac{64 L^3}{3 \pi E D^4}$ and $C_2 = \frac{3 C_1}{2L}$. $\Delta F_{xk}$ and $\Delta F_{yk}$ are the variations of the cutting force in the X and Y directions from nominal conditions, respectively.
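Eqs. (2.40)-(2.41) can be evaluated directly. The sketch below uses a hypothetical tool (the Young's modulus $E = 600 \cdot 10^3$ N/mm², a carbide-like value, and the overhang and diameter are assumed for illustration) and checks the relation $C_2 = 3C_1/(2L)$ stated after Eq. (2.42):

```python
import math

def tool_deflections(F, L, D, E=600e3):
    """Cantilever estimates of Eqs. (2.40)-(2.41). F in N, L and D in
    mm, E in N/mm^2 (600e3 is an assumed carbide-like value).
    Returns (tip deflection in mm, tip rotation in rad)."""
    delta_r = 64.0 * F * L**3 / (3.0 * math.pi * E * D**4)      # Eq. (2.40)
    delta_theta = 64.0 * F * L**2 / (2.0 * math.pi * E * D**4)  # Eq. (2.41)
    return delta_r, delta_theta

# Per-unit-force deflections give the constants C1 and C2 of Eq. (2.42).
L, D = 60.0, 12.0   # hypothetical overhang and equivalent diameter (mm)
C1, C2 = tool_deflections(1.0, L, D)
print(C1, C2)

# Consistency check: C2 = 3*C1/(2*L), as stated after Eq. (2.42).
assert math.isclose(C2, 3.0 * C1 / (2.0 * L))
```

The strong $L^3/D^4$ dependence is why slender tools dominate the force-induced variation budget.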
$$\delta_z = \frac{\tan(\alpha)}{1 - \tan(\gamma) \cdot \tan(\alpha)} \cdot VB, \qquad (2.43)$$
where α is the clearance angle and γ is the rake angle of the cutting inserts.
According to Eq. (2.43), dimensional variations are proportional to the flank wear
magnitude and thus, the dimensional quality variation can be described by a
proportional coefficient that relates the influence of tool flank wear with the
dimensional variation of a manufacturing feature for a specific cutting operation
and cutting-tool geometry.
Assuming that tool flank wear remains constant during the same cutting operation on one workpiece, the cutting-tool tip presents a constant variation, which is modeled as the DMV of $P_k$ w.r.t. $C_k$ by the expression

$$\mathbf{x}^{C_k}_{P_k} = \mathbf{B}^m_{k4} \cdot VB_{k_{ij}}, \qquad (2.44)$$

where $\mathbf{B}^m_{k4} = [0\;\; 0\;\; Cf^k_{V_{B_{ij}}}\;\; 0\;\; 0\;\; 0]^T$, $VB_{k_{ij}}$ refers to the flank wear of the $i$th cutting edge of the $j$th cutting-tool at the $k$th machining station, and $Cf^k_{V_{B_{ij}}}$ is the proportional coefficient.
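A small sketch of Eqs. (2.43)-(2.44) follows, with hypothetical insert angles (for these angles the resulting coefficient is of the same order as the fitted values of 0.125-0.135 used later in the case study):

```python
import math

def wear_offset(VB, alpha_deg=7.0, gamma_deg=5.0):
    """Eq. (2.43): dimensional offset caused by flank wear VB (mm).
    The clearance (alpha) and rake (gamma) angles are hypothetical
    insert values."""
    a = math.radians(alpha_deg)
    g = math.radians(gamma_deg)
    return math.tan(a) / (1.0 - math.tan(g) * math.tan(a)) * VB

# The proportional coefficient of Eq. (2.44) is the slope dz/dVB,
# i.e. the offset per unit of flank wear.
coeff = wear_offset(1.0)
print(round(coeff, 3))

# The offset scales linearly with the wear magnitude.
assert math.isclose(wear_offset(0.2), coeff * 0.2)
```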
[Figure: cutting-tool geometry and wear, showing the rake angle ($\gamma$) and rake face, the clearance angle ($\alpha$) and flank face, flank wear ($VB$), crater wear and built-up edge adhesion on a worn tool.]
The resulting DMV depends on two components: the feature variation from previous stations, and the variation added at station $k$ itself, which can be due to variations in datum surfaces, fixture locators and machining operations.
The first component is named the relocation term. If there is no variation added at station $k$, the resulting DMV after station $k$, denoted as $\mathbf{x}^{R_k}_k$, is the same as the resulting DMV from station $k-1$, denoted as $\mathbf{x}^{R_{k-1}}_k$, but expressed w.r.t. $R_k$. For a single feature $S_i$, the relationship between the feature variation w.r.t. $R_{k-1}$ and the same variation w.r.t. $R_k$ is defined by applying Corollary 1 as

$$\mathbf{x}^{R_k}_{S_i} = \mathbf{D}^{R_k}_{S_i} \cdot \mathbf{x}^{R_{k-1}}_{R_k} + \mathbf{x}^{R_{k-1}}_{S_i}. \qquad (2.45)$$
$$\mathbf{x}^{R_k}_k = \begin{pmatrix} \mathbf{I}_{6\times 6} & \cdots & \mathbf{0}_{6\times 6} & \cdots & \mathbf{D}^{R_k}_{S_1} & \cdots & \mathbf{0}_{6\times 6} \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ \mathbf{0}_{6\times 6} & \cdots & \mathbf{I}_{6\times 6} & \cdots & \mathbf{D}^{R_k}_{S_i} & \cdots & \mathbf{0}_{6\times 6} \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ \mathbf{0}_{6\times 6} & \cdots & \mathbf{0}_{6\times 6} & \cdots & \mathbf{D}^{R_k}_{S_M} & \cdots & \mathbf{I}_{6\times 6} \end{pmatrix} \cdot \big[(\mathbf{x}^{R_{k-1}}_{S_1})^T, \ldots, (\mathbf{x}^{R_{k-1}}_{S_M})^T\big]^T = \mathbf{A}^3_k \cdot \mathbf{x}^{R_{k-1}}_k. \qquad (2.47)$$
On the other hand, the second term for deriving the resulting DMV for dimensional variation of part features is related to the variation added at station $k$ due to the machining operation itself, and thus it only affects the features machined at station $k$. As described above, the variation of a machined feature is defined as $\mathbf{x}^{R_k}_{L_k}$, which depends on the DMVs $\mathbf{x}^{M_k}_{P_k}$ and $\mathbf{x}^{M_k}_{R_k}$, as shown in Fig. 2.8. Applying Corollary 1, the total variation added to the workpiece at station $k$ due to datum-, fixture- and machining-induced variations is defined as

$$\mathbf{x}^{R_k}_{L_k} = \mathbf{D}^{R_k}_{L_k} \cdot \mathbf{x}^{M_k}_{R_k} + \mathbf{x}^{M_k}_{P_k}. \qquad (2.48)$$
The DMV $\mathbf{x}^{M_k}_{R_k}$ defined in Eq. (2.9) can be rewritten as

$$\mathbf{x}^{M_k}_{R_k} = \mathbf{T}^{F_k}_{R_k} \cdot \mathbf{B}^{f1}_k \cdot [\Delta l_1, \Delta l_2, \ldots, \Delta l_6]^T - \mathbf{A}^1_k \cdot \mathbf{x}^{R_k}_{B_k} - \mathbf{A}^2_k \cdot \mathbf{x}^{R_k}_{C_k} = \mathbf{B}^{f2}_k \cdot \mathbf{u}^f_k - \mathbf{A}^4_k \cdot \mathbf{x}^{R_k}_k, \qquad (2.49)$$
where $\mathbf{A}^4_k = [\mathbf{0}_{1\times 6}, \ldots, \mathbf{A}^1_k, \mathbf{0}_{1\times 6}, \ldots, \mathbf{A}^2_k, \mathbf{0}_{1\times 6}, \ldots]$; $\mathbf{x}^{R_k}_k = [\mathbf{0}_{1\times 6}, \ldots, (\mathbf{x}^{R_k}_{B_k})^T, \mathbf{0}_{1\times 6}, \ldots, (\mathbf{x}^{R_k}_{C_k})^T, \mathbf{0}_{1\times 6}, \ldots]^T$; and $\mathbf{u}^f_k = [\Delta l_1, \Delta l_2, \ldots, \Delta l_6]^T$. The DMV $\mathbf{x}^{R_k}_k$ defines the variations of the workpiece w.r.t. the reference CS $R_k$ at station $k$. Since features may be deviated after machining at station $k-1$, the input of feature variations at station $k$ is the resulting feature variations after station $k-1$ but expressed in the $R_k$ CS. As the relationship between $\mathbf{x}^{R_{k-1}}_k$ and the current $\mathbf{x}^{R_k}_k$ is defined in Eq. (2.47) by a relocation matrix, Eq. (2.49) can be rewritten adding the feature variations from station $k-1$ as
$$\mathbf{x}^{M_k}_{R_k} = \mathbf{B}^{f2}_k \cdot \mathbf{u}^f_k - \mathbf{A}^4_k \cdot \mathbf{A}^3_k \cdot \mathbf{x}^{R_{k-1}}_k. \qquad (2.50)$$
On the other hand, the DMV $\mathbf{x}^{M_k}_{P_k}$ defined in Eq. (2.10) can be rewritten to include the machining sources of variation as

$$\mathbf{x}^{M_k}_{P_k} = \big[\mathbf{T}^{C_k}_{P_k} \cdot \mathbf{T}^{S_k}_{C_k} \cdot \mathbf{T}^{A^z_k}_{S_k} \cdot \mathbf{B}^m_{k1} \;\;\; \mathbf{T}^{C_k}_{P_k} \cdot \mathbf{T}^{S_k}_{C_k} \cdot \mathbf{B}^m_{k2} \;\;\; \mathbf{T}^{C_k}_{P_k} \cdot \mathbf{B}^m_{k3} \;\;\; \mathbf{B}^m_{k4}\big] \cdot \mathbf{u}^m_k = \mathbf{B}^m_{k5} \cdot \mathbf{u}^m_k. \qquad (2.51)$$
Substituting Eqs. (2.50) and (2.51) into Eq. (2.48), $\mathbf{x}^{R_k}_{L_k}$ is rewritten as

$$\mathbf{x}^{R_k}_{L_k} = \mathbf{D}^{R_k}_{L_k} \cdot \big[\mathbf{B}^{f2}_k \cdot \mathbf{u}^f_k - \mathbf{A}^4_k \cdot \mathbf{A}^3_k \cdot \mathbf{x}^{R_{k-1}}_k\big] + \mathbf{B}^m_{k5} \cdot \mathbf{u}^m_k = \mathbf{A}^5_k \cdot \mathbf{x}^{R_{k-1}}_k + \mathbf{B}^{f3}_k \cdot \mathbf{u}^f_k + \mathbf{B}^m_{k5} \cdot \mathbf{u}^m_k, \qquad (2.52)$$

where $\mathbf{A}^5_k = [-\mathbf{D}^{R_k}_{L_k} \cdot \mathbf{A}^4_k \cdot \mathbf{A}^3_k]$ and $\mathbf{B}^{f3}_k = [\mathbf{D}^{R_k}_{L_k} \cdot \mathbf{B}^{f2}_k]$. As $\mathbf{x}^{R_k}_{L_k}$ refers only to the feature machined, Eq. (2.52) can be rewritten in a more general form to include all the features of the part as

$$\mathbf{x}^{R_k}_k = \mathbf{A}^6_k \cdot \big[\mathbf{A}^5_k \cdot \mathbf{x}^{R_{k-1}}_k + \mathbf{B}^{f3}_k \cdot \mathbf{u}^f_k + \mathbf{B}^m_{k5} \cdot \mathbf{u}^m_k\big], \qquad (2.53)$$

where $\mathbf{A}^6_k = [\mathbf{0}_{6\times 6}, \ldots, \mathbf{I}_{6\times 6}, \ldots, \mathbf{0}_{6\times 6}]^T$ is a selector matrix that indicates the feature machined at station $k$.
Finally, the resulting DMV after station $k$ that defines the dimensional variation of part features is obtained by summing up both components: the feature variation from previous stations, and the variation added at station $k$ itself. Thus, the resulting DMV after station $k$ is defined as

$$\mathbf{x}^{R_k}_{k+1} = \mathbf{A}^3_k \cdot \mathbf{x}^{R_{k-1}}_k + \mathbf{A}^6_k \cdot \big[\mathbf{A}^5_k \cdot \mathbf{x}^{R_{k-1}}_k + \mathbf{B}^{f3}_k \cdot \mathbf{u}^f_k + \mathbf{B}^m_{k5} \cdot \mathbf{u}^m_k\big], \qquad (2.54)$$

and reorganizing terms,

$$\mathbf{x}^{R_k}_{k+1} = \big[\mathbf{A}^3_k + \mathbf{A}^6_k \cdot \mathbf{A}^5_k\big] \cdot \mathbf{x}^{R_{k-1}}_k + \mathbf{A}^6_k \cdot \mathbf{B}^{f3}_k \cdot \mathbf{u}^f_k + \mathbf{A}^6_k \cdot \mathbf{B}^m_{k5} \cdot \mathbf{u}^m_k, \qquad (2.55)$$

which becomes the so-called SoV model in its generic version,

$$\mathbf{x}_{k+1} = \mathbf{A}_k \cdot \mathbf{x}_k + \mathbf{B}^f_k \cdot \mathbf{u}^f_k + \mathbf{B}^m_k \cdot \mathbf{u}^m_k. \qquad (2.56)$$
Besides Eq. (2.56), the SoV model is also composed of the observation equation, which represents the inspection of KPCs. This equation is formulated as

$$\mathbf{y}_k = [\mathbf{C}^1_k \cdot \mathbf{A}^3_k] \cdot \mathbf{x}_k + \mathbf{v}_k = \mathbf{C}_k \cdot \mathbf{x}_k + \mathbf{v}_k, \qquad (2.57)$$

where the matrix $\mathbf{C}_k$ defines the features that are inspected after station $k$, which define the KPCs, and $\mathbf{v}_k$ is a vector that represents the measurement errors during inspection. The inspection station can be viewed as a special machining station where only a relocation is conducted, since there is no machining operation. Therefore, $\mathbf{C}^1_k$ is a selector matrix, similar to $\mathbf{A}^6_k$, that indicates the features inspected, and $\mathbf{A}^3_k$ is the relocation matrix that relates the primary datum surface at station $k$ with the primary datum surface at the inspection station, placed after station $k$.
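The state-space recursion of Eqs. (2.56)-(2.57) can be exercised with a toy model. All system matrices below are random placeholders standing in for the matrices derived in Eqs. (2.47)-(2.55); the sketch only shows how feature variations propagate station to station and are observed at inspection:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy SoV model for Eqs. (2.56)-(2.57): N = 3 stations and a 12-dim
# state (two features x 6 DMV components). All system matrices are
# random placeholders for the matrices derived in Eqs. (2.47)-(2.55).
n, N = 12, 3
A  = [np.eye(n) + 0.01 * rng.standard_normal((n, n)) for _ in range(N)]
Bf = [0.10 * rng.standard_normal((n, 6)) for _ in range(N)]
Bm = [0.05 * rng.standard_normal((n, 4)) for _ in range(N)]
C  = rng.standard_normal((2, n))          # two inspected KPCs

x = np.zeros(n)                           # raw workpiece assumed nominal
for k in range(N):
    uf = rng.uniform(-0.02, 0.02, 6)      # fixture locator variations (mm)
    um = rng.uniform(-0.05, 0.05, 4)      # machining-induced variations
    x = A[k] @ x + Bf[k] @ uf + Bm[k] @ um   # Eq. (2.56)

v = rng.normal(0.0, 1e-4, 2)              # measurement error
y = C @ x + v                             # Eq. (2.57)
print(y.shape)
```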
where

$$\boldsymbol{\Phi}_{N,i} = \begin{cases} \mathbf{A}_{N-1} \cdot \mathbf{A}_{N-2} \cdots \mathbf{A}_i, & i < N \\ \mathbf{I}, & i = N \end{cases} \qquad (2.59)$$
and $\mathbf{I}$ is the identity matrix. The vector $\mathbf{x}_0$ represents the original variations of workpiece features in the raw state. These original variations are generated by previous manufacturing processes (i.e. bulk forming processes). Without loss of generality, it is assumed that the impact of initial workpiece variations on part quality is negligible in comparison with fixture- and machining-induced variations and other unmodeled errors. Thus, Eq. (2.58) can be rewritten as

$$\mathbf{Y}_N = \boldsymbol{\Gamma}^f_N \cdot \mathbf{U}^f_N + \boldsymbol{\Gamma}^m_N \cdot \mathbf{U}^m_N + \boldsymbol{\Gamma}^w_N \cdot \mathbf{W}_N + \mathbf{v}_N, \qquad (2.60)$$
where
By applying Eq. (2.64), the process planner can estimate the expected part quality variation given a manufacturing process plan. In practice, the evaluation and selection among different process plans is conducted by analyzing a capability ratio that indicates the capability of the manufacturing process to produce parts within specifications. Since products possess multiple KPCs rather than a single one, process capability analysis is based on a multivariate capability index. A variety of multivariate capability indices, such as those presented in [33-37], have been proposed for assessing capability.
In this chapter, the multivariate process capability ratio proposed by Chen [34]
is adopted for evaluating the manufacturing capability of a MMP. This capability
ratio was successfully implemented by previous authors [17, Chapter 13] in
similar MMPs with the use of the SoV methodology. Focusing only on the
variations in the KPC measurements, YN , the multivariate process capability
index for the process plan χ can be expressed as
$$MC^{\chi}_p = \frac{1}{r_0}, \qquad (2.65)$$
$$S^H_{\varphi} = \frac{\delta H}{\delta \varphi} \cdot \frac{\varphi^{tol}}{H^{tol}}. \qquad (2.67)$$
We consider a MMP composed of $N$ stations, each of which is equipped with a 3-2-1 fixture device. For this MMP, the locator variations at station $k$ are defined as $\mathbf{u}^f_k = [u_{f_1}, \ldots, u_{f_6}]^T$, and the machining variations are defined as $\mathbf{u}^m_k = [u_{m_1}, \ldots, u_{m_{J_k}}]^T$, where $J_k$ is the number of machining sources of variation at station $k$.
Locator sensitivity index: This type of index evaluates the impact of the variability of each fixture locator at station $k$. Two indices can be distinguished: that w.r.t. an individual KPC, as defined in Eq. (2.68), and that w.r.t. the product, as defined in Eq. (2.69):

$$S^{y_i}_{u_{f_j,k}} = \mathrm{abs}\!\left(\frac{\delta y_i}{\delta u_{f_j,k}} \cdot \frac{u^{tol}_{f_j,k}}{y^{tol}_i}\right) = \mathrm{abs}\!\left(\Gamma^f_{i,j} \cdot \frac{u^{tol}_{f_j,k}}{y^{tol}_i}\right), \qquad (2.68)$$

$$S^{Y}_{u_{f_j,k}} = \frac{1}{M} \sum_{i=1}^{M} S^{y_i}_{u_{f_j,k}}, \qquad (2.69)$$
where $\mathrm{abs}(A)$ denotes the absolute value of $A$; $\Gamma^f_{i,j}$ denotes the element at the $i$th row and the $j$th column of the matrix $\boldsymbol{\Gamma}^f_N$; $y^{tol}_i$ is the product dimensional tolerance of the KPC $y_i$; and $u^{tol}_{f_j,k}$ is the tolerance of the $j$th fixture locator at station $k$.
Fixture sensitivity index: This type of index evaluates the average impact of locator variations on part quality at station $k$. The fixture sensitivity index w.r.t. a KPC ($S^{y_i}_{f,k}$) and that w.r.t. the product ($S^{Y}_{f,k}$) are defined respectively as

$$S^{y_i}_{f,k} = \frac{1}{6} \sum_{j=1}^{6} S^{y_i}_{u_{f_j,k}}, \qquad (2.70)$$

$$S^{Y}_{f,k} = \frac{1}{M} \sum_{i=1}^{M} S^{y_i}_{f,k}. \qquad (2.71)$$
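Given the sensitivity matrix $\boldsymbol{\Gamma}^f_N$, the locator and fixture indices of Eqs. (2.68)-(2.71) reduce to element-wise scaling and averaging. A vectorized sketch with hypothetical tolerances:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sensitivity matrix Gamma_f for a 2-station MMP with
# M = 3 KPCs and 6 locators per station (12 columns in total).
M, N_st = 3, 2
Gamma_f = rng.standard_normal((M, 6 * N_st))
y_tol = np.array([0.10, 0.10, 0.05])      # KPC tolerances (mm)
u_tol = np.full(6 * N_st, 0.02)           # locator tolerances (mm)

# Eq. (2.68): sensitivity of KPC i to locator j (element-wise scaling).
S_loc = np.abs(Gamma_f * u_tol[None, :] / y_tol[:, None])

# Eq. (2.69): locator index w.r.t. the product (average over KPCs).
S_loc_Y = S_loc.mean(axis=0)

# Eqs. (2.70)-(2.71): fixture indices, averaging first over the 6
# locators of each station and then over the M KPCs.
S_fix = S_loc.reshape(M, N_st, 6).mean(axis=2)   # per KPC and station
S_fix_Y = S_fix.mean(axis=0)                     # per station
print(S_fix_Y)
```

The machining-side indices of Eqs. (2.72)-(2.75) follow the same pattern with $\boldsymbol{\Gamma}^m_N$ in place of $\boldsymbol{\Gamma}^f_N$.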
Machining source of variation sensitivity index: This type of index evaluates the impact of the variability of a specific machining source of variation when conducting a specific machining operation at station $k$. Two indices are defined w.r.t. a KPC ($S^{y_i}_{u_{m_j,k}}$) and w.r.t. the product ($S^{Y}_{u_{m_j,k}}$), respectively, as

$$S^{y_i}_{u_{m_j,k}} = \mathrm{abs}\!\left(\Gamma^m_{i,j} \cdot \frac{u^{tol}_{m_j,k}}{y^{tol}_i}\right), \qquad (2.72)$$

$$S^{Y}_{u_{m_j,k}} = \frac{1}{M} \sum_{i=1}^{M} S^{y_i}_{u_{m_j,k}}, \qquad (2.73)$$

where $\Gamma^m_{i,j}$ denotes the element in the $i$th row and $j$th column of the matrix $\boldsymbol{\Gamma}^m_N$, and $u^{tol}_{m_j,k}$ defines the variability of the $j$th machining source of variation at station $k$.
Operation sensitivity index: This type of index evaluates the average impact of machining sources of variation at station $k$. Two indices are defined w.r.t. a KPC ($S^{y_i}_{m,k}$) and w.r.t. the product ($S^{Y}_{m,k}$), respectively, as

$$S^{y_i}_{m,k} = \frac{1}{J_k} \sum_{j=1}^{J_k} S^{y_i}_{u_{m_j,k}}, \qquad (2.74)$$

$$S^{Y}_{m,k} = \frac{1}{M} \sum_{i=1}^{M} S^{y_i}_{m,k}. \qquad (2.75)$$
Station sensitivity index: This type of index evaluates the average impact of the variability of both fixture- and machining-induced variations at station $k$. Two indices are defined w.r.t. a KPC ($S^{y_i}_{u_k}$) and w.r.t. the product ($S^{Y}_{u_k}$), respectively, as

$$S^{y_i}_{u_k} = \frac{S^{y_i}_{f,k} + S^{y_i}_{m,k}}{2}, \qquad (2.76)$$

$$S^{Y}_{u_k} = \frac{S^{Y}_{f,k} + S^{Y}_{m,k}}{2}. \qquad (2.77)$$
$$S^{y_i}_{f,process} = \frac{1}{N} \sum_{k=1}^{N} S^{y_i}_{f,k}, \qquad (2.78)$$

$$S^{Y}_{f,process} = \frac{1}{M} \sum_{i=1}^{M} S^{y_i}_{f,process}. \qquad (2.79)$$
indices are defined w.r.t. a KPC ($S^{y_i}_{m,process}$) and w.r.t. the product ($S^{Y}_{m,process}$), respectively, as:

$$S^{y_i}_{m,process} = \frac{1}{N} \sum_{k=1}^{N} S^{y_i}_{m,k}, \qquad (2.80)$$

$$S^{Y}_{m,process} = \frac{1}{M} \sum_{i=1}^{M} S^{y_i}_{m,process}. \qquad (2.81)$$
Note that in the flowchart for process planning improvement shown in Fig. 2.14, the indices $S^{y_i}_{u_k}$ and $S^{Y}_{u_k}$ are not evaluated, since the purpose is to analyze separately the variation propagation in the MMP due to fixture- and machining-induced variations in order to propose process plan adjustments.
Fixture index ($S^{y_i}_{u_{f_j,k}}$ / $S^{Y}_{u_{f_j,k}}$): identifies the most influential fixtures on KPC/product variability.
Machining source of variation index ($S^{y_i}_{u_{m_j,k}}$ / $S^{Y}_{u_{m_j,k}}$): identifies the most influential machining source of variation on KPC/product variability.
Machining operation index ($S^{y_i}_{m,k}$ / $S^{Y}_{m,k}$): identifies the most influential machining operation on KPC/product variability.
Station index ($S^{y_i}_{u_k}$ / $S^{Y}_{u_k}$): identifies the most influential station on KPC/product variability.
Process index due to fixture-induced variations ($S^{y_i}_{f,process}$ / $S^{Y}_{f,process}$): evaluates the influence on KPC/product variability due to all fixture-induced variations.
Process index due to machining-induced variations ($S^{y_i}_{m,process}$ / $S^{Y}_{m,process}$): evaluates the influence on KPC/product variability due to all machining-induced variations.
this case study, the final part geometry is shown in Table 2.2. The positions of the
locators that compose the fixture at different stations are shown in Table 2.3, and
the characteristics of the MMP in terms of locator tolerances, expected thermal
variations of the machine-tool spindle at each station, and admissible cutting-tool
wear at each cutting-tool are shown in Table 2.4.
For the sake of simplicity and without loss of generality, the extended SoV
model applied in this case study only deals with cutting-tool wear-induced
variations, thermal-induced variations from the machine-tool spindle, fixture-
induced variations and datum-induced variations. The empirical coefficients that
relate the spindle thermal variations with the dimensional spindle expansion and
the cutting-tool wear effects with the cutting-tool tip loss of cut are obtained from
[18]. According to this research work and the results of a set of experiments, the cutting-tool wear coefficients were adjusted to $Cf^k_{V_B} = 0.125$ and $Cf^k_{V_B} = 0.135$ for frontal and peripheral machining operations, respectively, and the thermal coefficients were adjusted to $Cf^k_x = Cf^k_y = Cf^k_\alpha = Cf^k_\beta = Cf^k_\gamma \approx 0$ and $Cf^k_z = -0.0052$ mm/°C.
Feature   $\omega^D_{S_i}$ (rad)        $\mathbf{t}^D_{S_i}$ (mm)
S2        [0, 0, 0]                     [150, 75, 200]
S3        [0, 0, 0]                     [150, 225, 180]
S4        [π/2, −π/2, −π/2]             [150, 0, 100]
S5        [0, −π/2, 0]                  [0, 75, 100]
S6        [π/2, π/2, −π/2]              [150, 250, 75]
Table 2.3. Nominal position ($\mathbf{t}^D_{F_k}$) and orientation ($\omega^D_{F_k}$) of FCS w.r.t. DCS at each station, and fixture layout

St.  $\omega^D_{F_k}$ (rad)  $\mathbf{t}^D_{F_k}$ (mm)  Locator positions w.r.t. FCS (mm)
1    [−π/2, π, 0]     [0, 0, 0]      L1x=125, L1y=50, L2x=50, L2y=250, L3x=200, L3y=250; p1y=50, p1z=−100, p2y=250, p2z=−100, p3x=125, p3z=−100
2    [π/2, −π/2, π]   [0, 0, 200]    L1x=100, L1y=50, L2x=50, L2y=150, L3x=150, L3y=150; p1y=50, p1z=−125, p2y=250, p2z=−125, p3x=100, p3z=−125
3    [−π/2, 0, 0]     [0, 250, 200]  L1x=125, L1y=50, L2x=50, L2y=250, L3x=200, L3y=250; p1y=50, p1z=−100, p2y=250, p2z=−100, p3x=125, p3z=−100
4    [−π/2, π, 0]     [0, 0, 20]     L1x=125, L1y=50, L2x=50, L2y=250, L3x=200, L3y=250; p1y=50, p1z=−100, p2y=250, p2z=−100, p3x=125, p3z=−100
$\Delta l^{tol}_j$ for $j = 4, 5, 6$ refer to the tolerances at locators $p_1$, $p_2$ and $p_3$, respectively.
other components and uniformly distributed within the expected range of thermal variations; each component related to cutting-tool wear in $\mathbf{U}^m_k$ was assumed independent of the other components and uniformly distributed within the expected range of the admissible cutting-tool wear; and the measurement error and the term related to non-linear errors were assumed to be negligible in comparison with fixture- and machining-induced variations. Eq. (2.60) was evaluated in 1,000 simulations. The resulting distribution of the KPC variations was used to evaluate the multivariate process capability ratio defined in Eq. (2.65).
After the Monte Carlo simulations, and considering the expected proportion of conforming products as $1 - \alpha = 0.99$, the capability index obtained was $MC_p = 0.82$. Indeed, the Monte Carlo simulations showed that 4.4% of the simulated parts have a nonconforming $KPC_1$, whereas 0.5% of them have a nonconforming $KPC_2$.
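The Monte Carlo procedure described above can be sketched as follows. The $\boldsymbol{\Gamma}$ matrices, tolerances and input ranges below are placeholders, not the case-study values, so the resulting nonconforming fraction is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo evaluation of Eq. (2.60) for a toy process: KPC deviations
# are a linear map of uniformly distributed fixture and machining
# inputs. Gamma matrices, tolerances and input ranges are placeholders,
# not the case-study values.
Gamma_f = 0.3 * rng.standard_normal((2, 12))
Gamma_m = 0.3 * rng.standard_normal((2, 8))
tol = np.array([0.10, 0.10])   # symmetric KPC tolerances (mm)

n_sim = 1000
Uf = rng.uniform(-0.02, 0.02, (n_sim, 12))
Um = rng.uniform(-0.05, 0.05, (n_sim, 8))
Y = Uf @ Gamma_f.T + Um @ Gamma_m.T      # Eq. (2.60), noise neglected

# Fraction of simulated parts with any KPC outside its tolerance.
nonconforming = np.mean(np.any(np.abs(Y) > tol, axis=1))
print(nonconforming)
```

A multivariate capability index such as Eq. (2.65) would then be computed from the empirical distribution of `Y`.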
[Figures: sensitivity-index charts for the case study, identifying the most influential fixtures, machining sources of variation (e.g. spindle thermal state and flank wear $VB$), operations and stations on $KPC_1$ and $KPC_2$ variability, together with the process-level indices $S^{Y}_{f,process}$ and $S^{Y}_{m,process}$.]
Fig. 2.17. P.d.f. obtained from Monte Carlo simulation results for a) $KPC_1$ and b) $KPC_2$ variations according to the initial MMP, the MMP with improvement I, and the MMP with improvements I and II
2.5 Conclusions
In this chapter, the derivation of the 3D stream of variation (SoV) model for
MMPs is presented in its extended version. With this model, engineers can
consider machining-induced variations such as those related to machine-tool
inaccuracy, cutting-tool deflections, thermal distortions or cutting-tool wear, as
well as fixture- and datum-induced variations in order to estimate dimensional and
geometrical part variations and their propagation in MMPs. The potential
Appendix 2.1
A homogeneous transformation matrix (HTM) between two CSs, 1 and 2, can be decomposed as

$$\mathbf{H}^2_1 = \mathbf{T}^2_1 \cdot \mathbf{R}^2_1, \qquad (2.82)$$

where $\mathbf{T}^2_1$ and $\mathbf{R}^2_1$ are the translational and rotational matrices, respectively. For an HTM, the superscript represents the CS we want the result to be represented in, and the subscript represents the CS we are transferring from. In Eq. (2.82) the translational matrix is defined as
$$\mathbf{T}^2_1 = \begin{pmatrix} 1 & 0 & 0 & t^2_{1x} \\ 0 & 1 & 0 & t^2_{1y} \\ 0 & 0 & 1 & t^2_{1z} \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} \mathbf{I}_{3\times 3} & \mathbf{t}^2_1 \\ \mathbf{0}_{1\times 3} & 1 \end{pmatrix}, \qquad (2.83)$$
around the new Y-axis by angle $\theta$, and finally, the resulting CS is rotated around the new Z-axis by $\psi$. According to this representation, the rotational matrix is formulated as [40, Chapter 2]

$$\mathbf{R}^2_1 = \begin{pmatrix} \bar{\mathbf{R}}^2_1 & \mathbf{0}_{3\times 1} \\ \mathbf{0}_{1\times 3} & 1 \end{pmatrix}, \qquad (2.83)$$
where

$$\bar{\mathbf{R}}^2_1 = \mathbf{R}_{z,\phi} \cdot \mathbf{R}_{y,\theta} \cdot \mathbf{R}_{z,\psi}, \qquad (2.84)$$

and

$$\mathbf{R}_{z,\phi} = \begin{pmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (2.85)$$

$$\mathbf{R}_{y,\theta} = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}, \qquad (2.86)$$

$$\mathbf{R}_{z,\psi} = \begin{pmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (2.87)$$
For this rotation representation, the rotational matrix is defined as

$$\bar{\mathbf{R}}^2_1 = \begin{pmatrix} c_\phi c_\theta c_\psi - s_\phi s_\psi & -c_\phi c_\theta s_\psi - s_\phi c_\psi & c_\phi s_\theta \\ s_\phi c_\theta c_\psi + c_\phi s_\psi & -s_\phi c_\theta s_\psi + c_\phi c_\psi & s_\phi s_\theta \\ -s_\theta c_\psi & s_\theta s_\psi & c_\theta \end{pmatrix}, \qquad (2.88)$$
where $c$ and $s$ refer to $\cos$ and $\sin$, respectively. As a result, Eq. (2.82) can be rewritten as

$$\mathbf{H}^2_1 = \begin{pmatrix} \bar{\mathbf{R}}^2_1 & \mathbf{t}^2_1 \\ \mathbf{0}_{1\times 3} & 1 \end{pmatrix}. \qquad (2.89)$$
Using the HTM, the $i$th point in the CS 1, defined as $\mathbf{p}^1_i = [p^1_{ix}, p^1_{iy}, p^1_{iz}]$, is related to the same point expressed in the CS 2, defined as $\mathbf{p}^2_i$, by the following equation

$$\tilde{\mathbf{p}}^2_i = \mathbf{H}^2_1 \cdot \tilde{\mathbf{p}}^1_i, \qquad (2.90)$$

where $\tilde{\mathbf{p}}$ is equal to $[\mathbf{p}, 1]^T$.
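Appendix 2.1 translates directly into code. A minimal sketch building $\mathbf{H}^2_1$ from Z-Y-Z Euler angles (Eqs. (2.84)-(2.89)) and applying Eq. (2.90) to a point:

```python
import numpy as np

def htm(phi, theta, psi, t):
    """Build H_1^2 from Z-Y-Z Euler angles (Eqs. 2.84-2.88) and a
    translation vector t (Eq. 2.89)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    R = np.array([                      # Eq. (2.88)
        [cf*ct*cp - sf*sp, -cf*ct*sp - sf*cp, cf*st],
        [sf*ct*cp + cf*sp, -sf*ct*sp + cf*cp, sf*st],
        [-st*cp,            st*sp,            ct   ],
    ])
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# Transform a point between CSs via Eq. (2.90): p~^2 = H_1^2 * p~^1.
H = htm(np.pi / 2, 0.0, 0.0, np.array([10.0, 0.0, 0.0]))
p1 = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous point in CS 1
p2 = H @ p1                           # rotates (1,0,0) to (0,1,0),
print(p2)                             # then translates by (10,0,0)
```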
Appendix 2.2
A Differential Transformation Matrix (DTM) in the 3D space is a 4 × 4 matrix
that is used to represent the small position and orientation deviation of one CS
w.r.t. another CS. For illustrative purposes, we consider two CSs, 1 and 2. If both
CSs are deviated from nominal values, the HTM between the actual CS 1 and CS
2 can be defined as
$$\mathbf{H}^1_2 = \bar{\mathbf{H}}^1_2 \cdot \delta\mathbf{H}^1_2, \qquad (2.91)$$

where $\bar{\mathbf{H}}^1_2$ is the HTM between the nominal CSs 1 and 2, and $\delta\mathbf{H}^1_2$ is the HTM that defines the small position and orientation deviations of CS 2 w.r.t. 1 due to the deviation from their nominal values, and it is defined as

$$\delta\mathbf{H}^1_2 = \begin{pmatrix} 1 & -\theta^1_{2z} & \theta^1_{2y} & d^1_{2x} \\ \theta^1_{2z} & 1 & -\theta^1_{2x} & d^1_{2y} \\ -\theta^1_{2y} & \theta^1_{2x} & 1 & d^1_{2z} \\ 0 & 0 & 0 & 1 \end{pmatrix}. \qquad (2.92)$$
Note that the rotational matrix in $\delta\mathbf{H}^1_2$ is defined as in Eq. (2.88) using the approximations $\cos(\theta) \approx 1$ and $\sin(\theta) \approx \theta$ and neglecting second-order small values, since only small position and orientation variations are considered. From Eq. (2.92), the small position and orientation deviations of CS 2 w.r.t. 1 are defined as $\mathbf{d}^1_2 = [d^1_{2x}, d^1_{2y}, d^1_{2z}]^T$ and $\boldsymbol{\theta}^1_2 = [\theta^1_{2x}, \theta^1_{2y}, \theta^1_{2z}]^T$, respectively.
The small position and orientation deviations defined by $\delta\mathbf{H}^1_2$ can be expressed as

$$\boldsymbol{\Delta}^1_2 = \begin{pmatrix} \hat{\boldsymbol{\theta}}^1_2 & \mathbf{d}^1_2 \\ \mathbf{0}_{1\times 3} & 0 \end{pmatrix}, \qquad (2.94)$$

where

$$\hat{\boldsymbol{\theta}}^1_2 = \begin{pmatrix} 0 & -\theta^1_{2z} & \theta^1_{2y} \\ \theta^1_{2z} & 0 & -\theta^1_{2x} \\ -\theta^1_{2y} & \theta^1_{2x} & 0 \end{pmatrix}. \qquad (2.95)$$
$$\mathbf{x}^1_2 = \begin{pmatrix} \mathbf{d}^1_2 \\ \boldsymbol{\theta}^1_2 \end{pmatrix}. \qquad (2.96)$$
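The DTM construction of Eqs. (2.94)-(2.96) is equally direct. A sketch that builds $\boldsymbol{\Delta}^1_2$ from a DMV $\mathbf{x}^1_2 = [\mathbf{d}; \boldsymbol{\theta}]$ and checks that $\mathbf{I} + \boldsymbol{\Delta}$ reproduces the small-angle form of Eq. (2.92):

```python
import numpy as np

def dtm(x):
    """Build the DTM of Eq. (2.94) from the DMV x = [d; theta] of
    Eq. (2.96)."""
    d, th = x[:3], x[3:]
    theta_hat = np.array([             # Eq. (2.95), skew-symmetric
        [0.0,   -th[2],  th[1]],
        [th[2],  0.0,   -th[0]],
        [-th[1], th[0],  0.0  ],
    ])
    D = np.zeros((4, 4))
    D[:3, :3] = theta_hat
    D[:3, 3] = d
    return D

# For small deviations, delta_H of Eq. (2.92) equals I + Delta.
x = np.array([0.01, -0.02, 0.005, 1e-4, 2e-4, -5e-5])
dH = np.eye(4) + dtm(x)
assert np.isclose(dH[0, 1], -x[5]) and np.isclose(dH[2, 3], x[2])
print(dH[0])
```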
References
[1] Zhou, S., Huang, Q., Shi, J.: State Space Modeling of Dimensional Variation Propa-
gation in Multistage Machining Process Using Differential Motion Vectors. IEEE
Transactions on Robotics and Automation 19, 296–309 (2003)
[2] Ogata, K.: Modern Control Engineering. Prentice Hall, New Jersey (2001)
[3] Ding, Y., Shi, J., Ceglarek, D.: Diagnosability analysis of multi-station manufacturing
processes. Journal of Dynamic Systems Measurement and Control-Transactions of the ASME 124, 1–13 (2002)
[4] Zhou, S., Chen, Y., Ding, Y., Shi, J.: Diagnosability Study of Multistage Manufactur-
ing Processes Based on Linear Mixed-Effects Models. Technometrics 45, 312–325
(2003)
[5] Zhou, S., Chen, Y., Shi, J.: Statistical estimation and testing for variation root-cause
identification of multistage manufacturing processes. IEEE Transactions on Automa-
tion Science and Engineering 1, 73–83 (2004)
[6] Li, Z.G., Zhou, S., Ding, Y.: Pattern matching for variation-source identification in
manufacturing processes in the presence of unstructured noise. IIE Transactions 39,
251–263 (2007)
[7] Djurdjanovic, D., Ni, J.: Dimensional errors of fixtures, locating and measurement datum features in the stream of variation modeling in machining. Journal of Manufacturing Science and Engineering-Transactions of the ASME 125, 716–730 (2003)
[8] Djurdjanovic, D., Zhu, J.: Stream of Variation Based Error Compensation Strategy in
Multi-Stage Manufacturing Processes. In: ASME Conference Proceedings,
vol. 42231, pp. 1223–1230 (2005)
[9] Djurdjanovic, D., Ni, J.: Online stochastic control of dimensional quality in multista-
tion manufacturing systems. Proceedings of the Institution of Mechanical Engineers
Part B-Journal of Engineering Manufacture 221, 865–880 (2007)
[10] Jiao, Y., Djurdjanovic, D.: Joint allocation of measurement points and controllable
tooling machines in multistage manufacturing processes. IIE Transactions 42, 703–
720 (2010)
[11] Zhong, J., Liu, J., Shi, J.: Predictive Control Considering Model Uncertainty for Var-
iation Reduction in Multistage Assembly Processes. IEEE Transactions on Automa-
tion Science and Engineering 7, 724–735 (2010)
[12] Liu, Q., Ding, Y., Chen, Y.: Optimal coordinate sensor placements for estimating
mean and variance components of variation sources. IIE Transactions 37, 877–889
(2005)
[13] Liu, J., Shi, J., Hu, J.S.: Quality-assured setup planning based on the stream-of-
variation model for multi-stage machining processes. IIE Transactions 41, 323–334
(2009)
98 J.V. Abellan-Nebot, J. Liu, and F. Romero Subiron
[14] Chen, Y., Ding, Y., Jin, J., Ceglarek, D.: Integration of Process-Oriented Tolerancing
and Maintenance Planning in Design of Multistation Manufacturing Processes. IEEE
Transactions on Automation Science and Engineering 3, 440–453 (2006)
[15] Ding, Y., Jin, J., Ceglarek, D., Shi, J.: Process-oriented tolerancing for multi-station
assembly systems. IIE Transactions 37, 493–508 (2005)
[16] Abellan-Nebot, J.V., Liu, J., Romero, F.: Design of multi-station manufacturing
processes by integrating the stream-of-variation model and shop-floor data. Journal of
Manufacturing Systems 30, 70–82 (2011)
[17] Shi, J.: Stream of Variation Modeling and Analysis for Multistage. CRC Press Taylor
and Francis Group (2007)
[18] Abellan-Nebot, J.V., Liu, J., Romero, F.: State Space Modeling of Variation Propaga-
tion in Multi-station Machining Processes Considering Machining-Induced Varia-
tions. Journal of Manufacturing Science and Engineering-Transactions of the ASME
(in press, 2012)
[19] Joshi, P.H.: Jigs and Fixtures Design Manual. McGraw-Hill, New York (2003)
[20] Wang, M.Y.: Characterizations of positioning accuracy of deterministic localization
of fixtures. In: IEEE International Conference on Robotics and Automation, pp.
2894–2899 (2002)
[21] Choi, J.P., Lee, S.J., Kwon, H.D.: Roundness Error Prediction with a Volumetric Er-
ror Model Including Spindle Error Motions of a Machine Tool. International Journal
of Advanced Manufacturing Technology 21, 923–928 (2003)
[22] Lei, W.T., Hsu, Y.Y.: Accuracy test of five-axis CNC machine tool with 3D probe-
ball. Part I: design and modeling. International Journal of Machine Tools and Manu-
facture 42, 1153–1162 (2002)
[23] Chen, J.S., Yuan, J., Ni, J.: Thermal Error Modelling for Real-Time Error Compensa-
tion. International Journal of Advanced Manufacturing Technology 12, 266–275
(1996)
[24] Chen, G., Yuan, J., Ni, J.: A displacement measurement approach for machine geo-
metric error assessment. International Journal of Machine Tools and Manufacture 41,
149–161 (2001)
[25] Yang, S.H., Kim, K.H., Park, Y.K., Lee, S.G.: Error analysis and compensation for
the volumetric errors of a vertical machining centre using a hemispherical helix ball
bar test. International Journal of Advanced Manufacturing Technology 23, 495–500
(2004)
[26] Tseng, P.C.: A real-time thermal inaccuracy compensation method on a machining
centre. International Journal of Advanced Manufacturing Technology 13, 182–190
(1997)
[27] Haitao, Z., Jianguo, Y., Jinhua, S.: Simulation of thermal behavior of a CNC machine
tool spindle. International Journal of Machine Tools and Manufacture 47, 1003–1010
(2007)
[28] Kim, G.M., Kim, B.H., Chu, C.N.: Estimation of cutter deflection and form error in
ball-end milling processes. International Journal of Machine Tools and Manufac-
ture 43, 917–924 (2003)
[29] López de Lacalle, L.N., Lamikiz, A., Sanchez, J.A., Salgado, M.A.: Effects of tool
deflection in the high-speed milling of inclined surfaces. International Journal of Ad-
vanced Manufacturing Technology 24, 621–631 (2004)
[30] Dow, T.A., Miller, E.L., Garrard, K.: Tool force and deflection compensation for
small milling tools. Precision Engineering 28, 31–45 (2004)
SoV Based Quality Assurance for MMPs – Modeling and Planning 99
[31] Gere, J.M., Goodno, B.J.: Mechanics of Materials, 7th edn. Nelson Engineering
(2008)
[32] ISO 8688-1:1989, Tool-life testing in milling – Part 1: face milling
[33] Taam, W., Subbaiah, P., Liddy, J.W.: A note on multivariate capability indices. Jour-
nal of Applied Statistics 20, 339–351 (1993)
[34] Chen, H.F.: A Multivariate Process Capability Index Over A Rectangular Solid To-
lerance Zone. Statistica Sinica 4, 749–758 (1994)
[35] Wang, F.K., Chen, J.C.: Capability indices using principal components analysis.
Quality Engineering 11, 21–27 (1998)
[36] Wang, F.K., Du, T.C.T.: Using principal component analysis in process performance
for multivariate data. Omega 28, 185–194 (2000)
[37] Pearn, W.L., Kotz, S.: Encyclopedia And Handbook of Process Capability Indices: A
Comprehensive Exposition of Quality Control Measures. World Scientific Publishing
Company (2006)
[38] Zhang, M., Djurdjanovic, D., Ni, J.: Diagnosibility and sensitivity analysis for multi-
station machining processes. International Journal of Machine Tools and Manufac-
ture 47, 646–657 (2007)
[39] Okafor, A.C., Ertekin, Y.M.: Derivation of machine tool error models and error com-
pensation procedure for three axes vertical machining center using rigid body kine-
matics. International Journal of Machine Tools and Manufacture 40, 1199–1213
(2000)
[40] Spong, M.W., Hutchinson, S., Vidyasagar, M.: Robot modeling and control. John Wi-
ley & Sons, Inc., New York (2006)
3
Finite Element Modeling of Chip Formation in Orthogonal Machining
The finite element method has gained immense popularity in the area of metal cutting
for providing detailed insight into the chip formation process. This chapter presents an
overview of the application of the finite element method to the study of the metal
cutting process. The basics of both metal cutting and the finite element method, which
are foremost in understanding the applicability of the finite element method to metal
cutting, are discussed in brief. A few of the critical issues related to finite element
modeling of orthogonal machining are illustrated through various case studies. This
should prove helpful to readers, not simply because it provides the basic steps for
formulating an FE model of machining, but also because it highlights the issues that
must be addressed in order to obtain accurate and reliable FE simulations.
3.1 Introduction
Metal cutting, or machining, is considered one of the most important and versatile
processes for imparting final shape to preformed blocks and to manufactured products
obtained from casting or forging. A major portion of the components manufactured
worldwide requires machining to be converted into finished products. It is the only
process in which the final shape of the product is achieved through removal of excess
material in the form of chips from the given work material with the help of a cutting
tool. Basic chip formation processes include turning, shaping, milling, drilling, etc.,
the phenomenon of chip formation being similar in all cases at the point where the
cutting edge meets the work material. During cutting, the chip is formed by deforming
the work material on the surface of the job using a cutting tool. The technique by
which the metal is cut or removed is complex, not simply because it involves high
straining and heating, but because the conditions of operation are most varied in the
102 A. Priyadarshini, S.K. Pal, and A.K. Samantaray
that solutions to such problems seldom exist. One prospect, as in the analytical
approach, is to make simplifying assumptions in order to sidestep the difficulties and
reduce the problem to one that can be solved. This, however, modifies the actual
problem to a great extent and leads to serious inaccuracies. Now that ever more
powerful computers have emerged and are widely used, a more viable alternative is
to obtain approximate numerical solutions rather than exact closed-form solutions.
This enables one to retain the complexity of the problem on the one hand and the
desired accuracy on the other. The most popular numerical technique that has evolved
in recent decades for the analysis of metal cutting is the finite element method (FEM).
FEM allows the coupled simulation of plastic and thermal processes and is capable of
considering numerous dependencies of the physical constants on one another.
The advantages of FEM over empirical and analytical methods in the analysis of
machining processes can be summed up as follows [9]:
• some difficult-to-measure variables, namely stress, strain, strain rate,
temperature, etc., can be obtained quantitatively, in addition to cutting
forces and chip geometry;
• non-linear geometric boundaries, such as the free surface of the chip, can
be represented and used;
• material properties can be handled as functions of strain, strain rate and
temperature;
• the interaction between the chip and the tool can be modeled with sticking
and sliding conditions.
Thus, FEM-based analysis provides the detailed qualitative and quantitative insight
into the chip formation process that is required for a profound understanding of the
influence of machining parameters. While experimental tests and analytical models
serve as the foundation of metal cutting research, FEM leads to the advancement and
further refinement of knowledge in the area of metal cutting.
the workpiece and may create a chatter condition. Again, owing to increased power
consumption, there can be increased heat generation, thus accelerating the wear
rate. The enormous variety of input variables leads to infinite combinations, and
understanding the interrelationships between these input variables and the output
variables again becomes an arduous task.
[Figure: geometry of orthogonal cutting, showing the workpiece, the chip of thickness tc and width bc, the uncut chip of thickness to and width bo, the rake angle γo, the shear plane angle β, the contact length and the cutting velocity Vc.]
β = shear plane angle, defined as the angle between the shear plane (the plane of
separation of the chip from the undeformed work material) and the cutting velocity
vector
λ = inclination angle
d = depth of cut (mm)
s = feed (mm/rev)
Vc = cutting velocity (m/min)
to, bo, lo are the thickness, width and length (mm) of the uncut chip, respectively,
such that

t0 = s sin φp / cos λ,   b0 = d / sin φp (3.1)

tc, bc, lc are the thickness, width and length (mm) of the deformed chip,
respectively.
ζ, the chip reduction coefficient, is defined as an index of the degree of deformation
involved in chip formation, such that

ζ = tc / t0 (3.2)

The degree of chip thickening can also be expressed in terms of the cutting ratio
r (= 1/ζ).
Since tc > t0, ζ is generally greater than one. A larger value of ζ signifies that
higher cutting forces are required to carry out the machining process. The chip
reduction coefficient is affected by the tool rake angle and the chip-tool interfacial
friction coefficient (μ). This is clear from the relation obtained from a velocity
analysis based on d'Alembert's principle applied to chip motion, given as [15]:

ζ = exp[μ(π/2 − γ0)] (3.3)

ζ can be reduced considerably by using cutting tools with larger positive rake
angles and by using cutting fluids.
The shear angle can be expressed in terms of ζ:

tan β = cos γ0 / (ζ − sin γ0) (3.4)
It is evident from Eq. (3.4) that a decrease in ζ and an increase in rake angle tend
to increase the shear angle. This suggests that a larger shear angle requires smaller
cutting forces, resulting in more favorable machining.
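The relations in Eqs. (3.1), (3.2) and (3.4) are easily evaluated numerically. The short Python sketch below illustrates them with assumed cutting data (feed, angles and measured chip thickness chosen for illustration only, not taken from the chapter):

```python
import math

def uncut_chip_thickness(s, phi_p, lam=0.0):
    """Uncut chip thickness t0 (mm) per Eq. (3.1): t0 = s*sin(phi_p)/cos(lam),
    with feed s in mm/rev and angles in radians."""
    return s * math.sin(phi_p) / math.cos(lam)

def chip_reduction_coefficient(tc, t0):
    """Eq. (3.2): zeta = tc / t0; generally > 1 since the chip thickens."""
    return tc / t0

def shear_angle(zeta, gamma0):
    """Eq. (3.4): tan(beta) = cos(gamma0) / (zeta - sin(gamma0)); returns radians."""
    return math.atan(math.cos(gamma0) / (zeta - math.sin(gamma0)))

# Assumed data: s = 0.2 mm/rev, phi_p = 90 deg, lambda = 0,
# measured chip thickness tc = 0.5 mm, rake angle gamma0 = -6 deg.
t0 = uncut_chip_thickness(0.2, math.radians(90))       # 0.2 mm
zeta = chip_reduction_coefficient(0.5, t0)             # 2.5
beta_deg = math.degrees(shear_angle(zeta, math.radians(-6)))
```

With these numbers the shear angle comes out at roughly 21º; a smaller ζ (thinner chip) would push β upward, consistent with the trend noted above.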
In a turning operation with a single point cutting tool, a single resultant cutting
force is known to act, which can be resolved into three components along three
orthogonal directions, i.e. X, Y and Z, as shown in Fig. 3.2.
[Fig. 3.2: resolution of the cutting force in turning, showing the workpiece, the cutting tool, the feed direction, the force components FX and FY, the resultant FT and the principal cutting edge angle Φp.]
FT = FX + FY (3.6)
Fc – the main or tangential cutting force, acting in the direction of the cutting
velocity. Multiplied by the cutting velocity, it gives the cutting power
consumption.
[Fig. 3.3: force components in orthogonal cutting, showing the workpiece, chip, tool and machined surface, the shear plane forces Fs and Ns, the rake face forces F and N, the resultant R, the cutting force Fc, the thrust force FT, the shear angle β, the rake angle γo and the friction angle η.]
Fs and Ns are the shear force and normal force, respectively, that act on the chip
from the workpiece side, i.e. in the shear plane. F and N are the friction force at
the chip-tool interface and the force normal to the rake face, respectively, that act
on the chip from the tool side, i.e. at the chip-tool interface. These forces can be
determined as follows [10]:

Fs = Fc cos β − FT sin β
Ns = FT cos β + Fc sin β (3.7)

F = FT cos γ0 + Fc sin γ0
N = Fc cos γ0 − FT sin γ0 (3.8)

The average coefficient of friction between chip and tool (μ) can be deduced
either in terms of the friction angle (η) or from F and N, as given:

μ = tan η = F / N (3.9)
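Equations (3.7)-(3.9) map the two measurable force components Fc and FT onto the shear plane and the rake face. A minimal Python sketch, with hypothetical force values chosen for illustration:

```python
import math

def shear_plane_forces(Fc, FT, beta):
    """Eq. (3.7): shear force Fs and normal force Ns on the shear plane (beta in rad)."""
    Fs = Fc * math.cos(beta) - FT * math.sin(beta)
    Ns = FT * math.cos(beta) + Fc * math.sin(beta)
    return Fs, Ns

def rake_face_forces(Fc, FT, gamma0):
    """Eq. (3.8): friction force F and normal force N on the rake face (gamma0 in rad)."""
    F = FT * math.cos(gamma0) + Fc * math.sin(gamma0)
    N = Fc * math.cos(gamma0) - FT * math.sin(gamma0)
    return F, N

# Assumed measurements: Fc = 1000 N, FT = 500 N, zero rake angle.
F, N = rake_face_forces(1000.0, 500.0, 0.0)
mu = F / N        # Eq. (3.9): average friction coefficient
Fs, Ns = shear_plane_forces(1000.0, 500.0, 0.0)
```

For a zero rake angle the friction force equals the thrust force, so here μ = 500/1000 = 0.5.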
Finite Element Modeling of Chip Formation in Orthogonal Machining 109
[Fig. 3.4: deformation zones in chip formation, showing the primary (PSDZ), secondary (SSDZ) and tertiary (TSDZ) shear deformation zones between the workpiece, the chip and the tool.]
Elevated temperature in the cutting zone adversely affects the strength, hardness
and wear resistance of the cutting tool, thereby inducing rapid tool wear and
reduced tool life. A temperature rise in the workpiece material may cause
dimensional inaccuracy of the machined surface and can also damage the surface
properties of the machined component through oxidation, rapid corrosion, burning,
etc. Thus, estimation of the cutting temperature is a crucial aspect of the study of
metal cutting.
Cutting temperatures are more difficult to measure accurately than cutting
forces. No simple analog to the cutting force dynamometer exists for measuring
cutting temperatures; rather, numerous methods have been reported in the
literature for measuring machining temperature experimentally [20]. These
methods include thermocouples, radiation methods, metallographic techniques
and the application of thermal paints, fine powders and Physical Vapor Deposition
(PVD) coatings [21]. A particular method generally gives only limited information
on the complete temperature distribution.
Creating the geometry of the problem domain is the first and foremost step in any
analysis. Actual geometries are usually complex. The aim should not simply be to
model the exact geometry of the actual part; instead, the focus should be on how
and where to reduce the complexity of the geometry to a manageable level, so that
the problem can be solved and analyzed efficiently without much effect on the
nature of the problem or the accuracy of the results. Hence, a proper understanding
of the mechanics of the problem is required to analyze the problem and examine
the geometry of the problem domain. It is generally preferable to use 2D elements
rather than 3D elements, since this can drastically reduce the number of degrees of
freedom (DOFs).
Computation time, which is the time the CPU takes to solve the finite element
equations, is affected markedly by the total number of DOFs in the FE equation, as
shown [29]:

tCPU ∝ (nDOF)^β (3.10)

Here, β is a constant which generally lies in the range of 2-3, depending on the
type of solver used and the structure or bandwidth of the stiffness matrix. A
smaller bandwidth yields a smaller value of β, resulting in faster computation.
Eq. (3.10) suggests that a finer mesh, with a larger number of DOFs, increases the
computation time steeply, as a power of the DOF count. Thus, it is always
preferable to create an FE model with elements of lower dimensionality so that
the number of DOFs is reduced as far as possible. In addition, meshing should be
done in such a way that critical areas have a finer mesh while others have a coarse
mesh. This is one way of reducing the computation time without hampering the
accuracy of the results.
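The practical impact of the exponent β on solve time can be illustrated with a one-line estimate (a hypothetical back-of-the-envelope sketch, assuming CPU time scales as the DOF count raised to β):

```python
def relative_cpu_time(ndof_new, ndof_old, beta=2.5):
    """Relative CPU time when the DOF count changes from ndof_old to
    ndof_new, assuming t ~ ndof**beta with beta in the typical 2-3 range."""
    return (ndof_new / ndof_old) ** beta

# Doubling the DOFs with beta = 3 multiplies the solve time by 8.
factor = relative_cpu_time(2000, 1000, beta=3)
```

This is why refining the mesh only in critical regions, rather than globally, pays off so strongly.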
Choice of element type: Based on shape, elements can be classified as one-
dimensional (line or beam), two-dimensional or plane (triangular and quadrilateral)
and three-dimensional (tetrahedral and hexahedral) elements. Each of these can be
either in its simplest, linear form or in higher-order forms such as quadratic or
cubic, depending upon the number of nodes the element possesses. As discussed
earlier, 2D elements are generally preferred over 3D elements as far as the cost of
computation is concerned. The linear triangular element was the first type of
element developed for defining 2D geometries, and its formulation is the simplest
of all. However, quadrilateral elements are mostly preferred nowadays for 2D
solids [36]. This is not simply because a quadrilateral element contains one node
more than a triangular element, but also because the gradients in a quadrilateral
element are linear functions of the coordinate directions, whereas the gradient is
constant in a triangular element. Besides, the number of elements is reduced to half
that of a mesh of triangular elements with the same number of nodes. In nearly all
instances, a mesh of quadrilateral elements is sufficient and usually more accurate
than one of triangular elements. Furthermore, with higher-order elements, more
complex representations are achieved that are increasingly accurate from an
approximation point of view. However, care must be taken to weigh the benefits of
increased accuracy against the computational cost associated with more
sophisticated elements.
Mesh density: A mesh of varying density is generally preferred. The critical areas
need to be finely meshed, while the rest of the domain may carry a coarse mesh. In
FEM packages, mesh density can be controlled by placing the nodes according to
the given geometry and the required degree of accuracy.
Approximating the problem domain using simpler elements is relatively easy but,
very often, the formulation of these elements can give inaccurate results due to
some abnormalities, namely shear locking, volumetric locking and hourglassing.
Locking, in general, is the phenomenon of an undesirably stiff response to
deformation in finite elements. This kind of problem may arise when the element
interpolation functions are unable to approximate the strain distribution in the
solid accurately, or when the interpolation functions for the displacements and
their derivatives are not consistent. Shear locking predominantly occurs in linear
elements with full integration and results in underestimated displacements due to
undesirable stiffness in the deformation behavior. This problem can be alleviated
by using a reduced integration scheme. Volumetric locking is exhibited by
incompressible materials, or materials showing nearly incompressible behavior,
and results in an overly stiff response [37].
The stress comprises two components: deviatoric (distortional) and volumetric
(dilatational). The volumetric component is a function of the bulk modulus and the
volumetric strain. The bulk modulus is given by:

K = E / (3 − 6ν) (3.11)

where E is Young's modulus and ν is Poisson's ratio. When ν = 0, there is no
volumetric locking at all. But in the incompressibility limit, as ν → 0.5, K → ∞,
resulting in an overly stiff response. In this limit, the finite element displacements
tend to zero and thus volumetric locking prevails [37]. Hybrid elements are often
used to overcome volumetric locking [38]. The idea is to include the hydrostatic
stress distribution as an additional unknown variable, which must be computed at
the same time as the displacement field. This allows the stiff terms to be removed
from the system of finite element equations. Further details on hybrid elements can
be found in the existing literature [39, 40].
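The divergence of K in Eq. (3.11) as ν approaches 0.5 is easy to verify numerically; the sketch below uses assumed, steel-like elastic constants purely for illustration:

```python
def bulk_modulus(E, nu):
    """Eq. (3.11): K = E / (3 - 6*nu); grows without bound as nu -> 0.5."""
    return E / (3.0 - 6.0 * nu)

E = 210e9                                    # Pa, assumed steel-like value
K_compressible = bulk_modulus(E, 0.30)       # ~175 GPa
K_near_incompressible = bulk_modulus(E, 0.4999)
ratio = K_near_incompressible / K_compressible
```

Moving ν from 0.30 to 0.4999 inflates K by a factor of roughly two thousand, which is precisely the stiff volumetric term that hybrid elements remove from the system.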
It has also been observed that fully integrated elements have a higher tendency to
volumetric locking. This is because the volume remains nearly constant at each of
the integration points, and when there are more integration points, as in the full
integration scheme, the kinematically admissible displacement field becomes
overconstrained. A reduced integration scheme, wherein the element stiffness is
integrated using fewer integration points than are required for full integration, is a
possible way to avoid locking [24]. Reduced integration resolves locking
effectively in elements such as the quadratic quadrilateral and brick, but not in
elements such as the 4-noded quadrilateral or the 8-noded brick. In the latter, the
stiffness matrix becomes nearly singular, which implies that the system of
equations includes a weakly constrained deformation mode. This phenomenon,
known as hourglassing, results in a wildly varying displacement field but correct
stress and strain fields. It can be cured either by employing selectively reduced
integration or by adding an artificial stiffness to the element that acts to constrain
the hourglass mode (reduced integration with hourglass control) [37].
Time integration methods are employed for time stepping to solve the transient
dynamic system of equations. There are two main types of time integration
method [29, 31]:
• Implicit
• Explicit
Both procedures have their own benefits and limitations from the solution point of
view, and an appropriate solution method should be selected carefully according to
the nature of the problem. The implicit method is generally suitable for linear
transient problems. In an implicit dynamic analysis, a global stiffness matrix is
assembled, the integration operator matrix must be inverted and a set of non-linear
equilibrium equations must be solved at each time increment. A suitable implicit
integrator, which is unconditionally stable, is employed for non-linear formulations,
and the time increment is adjusted as the solution progresses in order to obtain a
stable, yet time-efficient, solution. However, there are situations where the implicit
method has difficulty finding a converged solution. In analyses such as machining,
which involve complex contact problems, this algorithm is less efficient because
the equation system must be solved simultaneously in every increment. In such
situations, the explicit time integration method proves robust. The simulation
seldom aborts due to failure of the numerical algorithm, since the global mass and
stiffness matrices need not be formed and inverted. However, the method is
conditionally stable: if the time step exceeds the critical time step, the solution may
grow unboundedly and give erroneous results. The critical time step for a mesh is
given by [31]:

Δtcr = min(Le / ce) (3.12)
where Le is the characteristic length of the element and ce is the current wave
speed of an element.
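A rough numerical sketch of this stability limit, estimating the wave speed by the 1D elastic value √(E/ρ) and using assumed, steel-like material data (not values from the chapter):

```python
import math

def critical_time_step(element_lengths, E, rho):
    """Stable explicit step: dt_cr = min over elements of Le / ce, with the
    1D elastic wave speed ce = sqrt(E/rho) used as an estimate."""
    ce = math.sqrt(E / rho)
    return min(L / ce for L in element_lengths)

# Assumed: E = 210 GPa, rho = 7800 kg/m^3; smallest element edge 20 um
# near the tool tip, coarser elements elsewhere.
dt_cr = critical_time_step([20e-6, 50e-6, 100e-6], 210e9, 7800.0)
```

For micrometre-sized elements the stable increment comes out in the nanosecond range, which is why explicit machining simulations require very many increments: the smallest element in the mesh dictates the step for the whole model.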
Fig. 3.5. (a) Initial mesh configuration and (b) Deformed mesh configuration
Figures 3.6 (a) and 3.6 (b) show two possible mesh adaptivity techniques that
are implemented to control the mesh distortion near the tool tip.
Fig. 3.6. (a) Mesh refinement and (b) Relocation of nodes near the tool tip
In the first case (Fig. 3.6 (a)), the mesh is refined in critical areas, i.e. a finer mesh
is used in the region closer to the tool tip, and the size of the elements is adjusted
based on selected error indicators and the loading history. Here, the number of
elements and the connectivity of the nodes change. This technique is often known
as h-adaptivity [41]. In Fig. 3.6 (b), mesh distortion is controlled by relocating
nodes without altering the number of elements or the connectivity of the nodes.
This approach is often termed r-adaptivity [41]. Since the number of elements
remains the same, this approach is computationally less expensive than
h-adaptivity. Both h-refinement and r-refinement are widely implemented in
simulating the metal cutting process, the former being used in the Lagrangian
framework and the latter in the ALE framework.
machining alloy 718, using the commercial software MSC Marc. Davim et al. [59]
made a comparative study of the performance of PCD (polycrystalline diamond)
and K10 (cemented carbide) tools in machining aluminum alloys using the
AdvantEdge software. Attanasio et al. [60] presented 3D numerical predictions of
tool wear based on a modified Takeyama and Murata wear model using the SFTC
Deform 3D software.
The 2D model comprises a rectangular block representing the workpiece and only
the portion of the cutting tool which participates in cutting. The nose radius is
neglected for simplicity; moreover, the nose radius has hardly any effect once a
steady cutting state is reached. The cutting tool has the following geometrical
angles: inclination angle χ = 90º, rake angle γ = −6º and flank angle α = 5º. An
intermediate layer of elements known as the damage zone (the highlighted region
in Fig. 3.7) is included in the workpiece block, such that the width of material
above it equals the undeformed chip thickness, in other words the feed in an
orthogonal cutting process. As mechanical boundary conditions, the bottom of the
workpiece is fixed in the Y direction and the left vertical edge of the workpiece is
fixed in the X direction. The former not only constrains the movement of the
workpiece in the Y direction but also aids in calculating the feed force during
machining, while the latter not only constrains the movement of the workpiece in
the X direction but also aids in calculating the cutting force. The reaction force
components, summed over all the constrained nodes of the left vertical edge of the
workpiece in the X direction, give the cutting force, while those at the bottom edge
of the workpiece in the Y direction give the feed force. The tool is given the
cutting velocity in the negative X direction, and the top edge of the tool is
constrained in the Y direction.
Similarly, as thermal boundary conditions, the tool and the workpiece are initially
at room temperature. Heat transfer from the chip surface to the cutting tool is
allowed by defining the conductive heat transfer coefficient (h), given as:

−k ∂T/∂n = h (To − T) (3.13)

where k is the thermal conductivity and To is the ambient temperature.
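Per unit area, this interface condition reduces to a flux proportional to the temperature difference. A trivial sketch with hypothetical values (h and the temperatures below are illustrative, not the chapter's):

```python
def interface_heat_flux(h, T_hot, T_ref):
    """Heat flux (W/m^2) across the chip-tool interface in the spirit of
    Eq. (3.13): q = h * (T_hot - T_ref), with h in W/(m^2*K)."""
    return h * (T_hot - T_ref)

# Assumed: h = 1000 W/(m^2*K), chip surface at 600 C against a tool at 25 C.
q = interface_heat_flux(1000.0, 600.0, 25.0)
```

The flux here is 5.75e5 W/m^2, illustrating how strongly the hot chip surface drives heat into the tool when a large temperature difference exists at the interface.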
The geometric model taken into consideration is shown in Fig. 3.7.
[Fig. 3.7: geometric model of orthogonal cutting, showing the tool, the workpiece, the damage zone and the X-Y-Z coordinate axes.]
The damage zone is a sacrificial layer of elements that defines the path of
separation between the chip surface and the machined surface as the tool
progresses. In actual practice these two surfaces would be the same; the
assumption is made only for modeling purposes, as a chip separation criterion.
The height of the designated damage zone is chosen purely on the basis of
computational efficiency; generally a very small, computationally acceptable
value (say, 10-30 µm) is taken as its width.
Four-node plane strain bilinear quadrilateral (CPE4RT) elements with a reduced
integration scheme and hourglass control are used for the discretization of both
the workpiece and the cutting tool. The workpiece is meshed with CPE4RT-type
elements by unstructured grid generation, which utilizes the advancing front
algorithm in ABAQUS/Explicit.
ρ v̇ = f + div σ (3.15)

ρ ė = σ : D − div q + r (3.16)
the work materials as multiplicative effects of strain, strain rate and temperature,
given as follows [65]:

σ = [A + B (ε^p)^n] [1 + C ln ε̇^p*] [1 − ((T − Troom)/(Tm − Troom))^m] (3.17)

ε̇^p* = ε̇^p / ε̇0^p, with ε̇0^p = 1 s-1, and T* = (T − Troom)/(Tm − Troom) (3.18)
where Troom is the room temperature, taken as 25 ºC, Tm is the melting
temperature of the workpiece, A is the initial yield stress (MPa), B the hardening
modulus (MPa), n the work-hardening exponent, C the strain rate dependency
coefficient and m the thermal softening coefficient. A, B, C, n and m are empirical
material constants that can be found from different mechanical tests. The Johnson-
Cook model has been found to be one of the most suitable for representing the
flow stress behavior of work materials undergoing cutting. Besides, it is also
considered numerically robust, as most of the variables are readily acceptable to
the computer codes. It has been widely used in the modeling of machining
processes by various researchers [52, 66-68].
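A direct implementation of Eqs. (3.17)-(3.18) is straightforward; the material constants below are hypothetical, steel-like values chosen for illustration only, not parameters taken from the chapter:

```python
import math

def johnson_cook_stress(eps_p, eps_rate, T, A, B, C, n, m,
                        T_room=25.0, T_melt=1520.0, eps_rate0=1.0):
    """Johnson-Cook flow stress, Eqs. (3.17)-(3.18). Strain is dimensionless,
    strain rate in 1/s, temperatures in deg C, A and B in MPa."""
    T_star = (T - T_room) / (T_melt - T_room)        # Eq. (3.18)
    hardening = A + B * eps_p ** n                   # strain hardening term
    rate = 1.0 + C * math.log(eps_rate / eps_rate0)  # strain rate term
    softening = 1.0 - T_star ** m                    # thermal softening term
    return hardening * rate * softening              # MPa

# Assumed constants: A = 553, B = 601 MPa, C = 0.0134, n = 0.234, m = 1.0.
sigma = johnson_cook_stress(eps_p=0.5, eps_rate=1.0e4, T=400.0,
                            A=553.0, B=601.0, C=0.0134, n=0.234, m=1.0)
```

At the reference strain rate and room temperature the rate and softening brackets both reduce to 1, so the model collapses to the quasi-static hardening curve A + B(ε^p)^n, which makes the constants easy to calibrate term by term.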
A damage model should be incorporated in the damage zone, along with the
material model, as a chip separation criterion, in order to simulate the movement
of the cutting tool into the workpiece without mesh distortion near the tool tip.
Specification of the damage model includes the (undamaged) material response, a
damage initiation criterion, damage evolution and the choice of element deletion.
The damage initiation criterion refers to the material state at the onset of
damage. In the present case, the Johnson-Cook damage initiation criterion has
been employed. This model makes use of the damage parameter ωD, defined as
the sum of the ratios of the increments in the equivalent plastic strain Δε^p to the
fracture strain εf:

ωD = Σ (Δε^p / εf) (3.19)
The fracture strain εf is of the form [69]:

εf = [D1 + D2 exp(D3 σ*)] [1 + D4 ln ε̇^p*] [1 + D5 T*] (3.20)

where D1-D5 are material failure parameters and σ* is the stress triaxiality,

σ* = −P / σ (3.21)
σ = (1 − D) σ̄ (3.22)
where σ̄ is the effective (undamaged) stress tensor computed in the current
increment. When the overall damage variable D reaches a value of 1, the material
has completely lost its load-carrying capacity. At this point failure occurs, and the
concerned elements are removed from the computation.
The effective plastic displacement u^p after the damage initiation criterion is met
is defined by the evolution law

u̇^p = Le ε̇^p (3.23)

where Le is the characteristic length of the element, so that the damage variable
evolves as

D = Le ε^p / u^p_f = u^p / u^p_f, with u^p_f = 2Gf / σy0 (3.24)
The model ensures that the energy dissipated during the damage evolution
process equals Gf only if the effective response of the material is perfectly plastic
(constant yield stress) beyond the onset of damage. In this study, Gf is provided as
an input parameter, a function of the fracture toughness KC, Young's modulus E
and Poisson's ratio ν, as given for the plane strain condition [70]:

Gf = ((1 − ν²) / E) KC² (3.25)
The ELEMENT DELETION = YES option of the software, together with the
Johnson-Cook damage model, enables deletion of the elements that fail. This
produces chip separation and allows the cutting tool to penetrate further into the
workpiece along a predefined path (the damage zone).
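The damage bookkeeping of Eqs. (3.19) and (3.24)-(3.25) can be sketched as follows; the fracture toughness, elastic constants and yield stress are assumed, illustrative values, not the chapter's:

```python
def damage_initiation(d_eps_list, eps_f):
    """Eq. (3.19): omega_D = sum of increments d_eps_p / eps_f;
    damage initiates when omega_D reaches 1."""
    return sum(d / eps_f for d in d_eps_list)

def fracture_energy(Kc, E, nu):
    """Eq. (3.25), plane strain: Gf = ((1 - nu^2) / E) * Kc^2."""
    return (1.0 - nu ** 2) / E * Kc ** 2

def damage_variable(u_p, Gf, sigma_y0):
    """Eq. (3.24): D = u_p / u_f with u_f = 2*Gf / sigma_y0, capped at 1
    (the element is deleted when D reaches 1)."""
    u_f = 2.0 * Gf / sigma_y0
    return min(u_p / u_f, 1.0)

# Assumed: Kc = 50 MPa*sqrt(m), E = 210 GPa, nu = 0.3, sigma_y0 = 553 MPa.
Gf = fracture_energy(50e6, 210e9, 0.3)     # ~1.1e4 J/m^2
D = damage_variable(1.0e-5, Gf, 553e6)     # partially damaged, 0 < D < 1
```

Note how the element's characteristic length drops out of the energy balance: expressing evolution in terms of the plastic displacement u^p rather than the plastic strain is what makes the dissipated energy mesh-independent.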
Contact is defined between the rake surface and the nodes of the workpiece
material. Coulomb's friction law has been assumed in the present study to model
the frictional conditions as the chip flows over the rake surface.
During the machining process, heat is generated in the primary shear
deformation zone due to severe plastic deformation, and in the secondary
deformation zone due to both plastic deformation and friction at the tool-chip
interface. The steady-state two-dimensional form of the energy equation governing
the orthogonal machining process is given as:
k (∂²T/∂x² + ∂²T/∂y²) − ρ Cp (ux ∂T/∂x + vy ∂T/∂y) + q = 0 (3.26)

q = qp + qf (3.27)

qp = ηp σ ε̇^p (3.28)

qf = ηf J τ γ̇ (3.29)
where qp is the heat generation rate due to plastic deformation, ηp the fraction of
inelastic work converted to heat, qf the volumetric heat flux due to frictional work,
γ̇ the slip rate, ηf the frictional work conversion factor (taken as 1.0), J the fraction
of the thermal energy conducted into the chip, and τ the frictional shear stress.
The value of J may vary within a range of, say, 0.35 to 1 for a carbide cutting tool
[71]. In the present work, 0.5 (the default value in ABAQUS) has been taken for all
the cases. The fraction of the heat generated by plastic deformation that remains in
the chip, ηp, is taken to be 0.9 [56].
An ALE approach is employed to conduct the FEM simulation. This avoids severe
element distortion and entanglement in the cutting zone without the use of a
remeshing criterion.
ALE formulation: In the ALE approach, the grid points are not constrained to
remain fixed in space (as in the Eulerian description) or to move with the material
points (as in the Lagrangian description), and hence have their own motion
governing equations. The ALE description of the material time derivative is given
as follows:

(•) = (∼) + c · ∇( ) (3.30)

where (∼) denotes the time derivative taken at fixed grid (reference) coordinates
and c is the convective velocity, the difference between the material velocity v and
the grid velocity v̂:

c = v − v̂ (3.31)

The balance equations of momentum and energy then take the forms

ρ ṽ + ρ c · ∇v = f + div σ (3.33)

ρ ẽ + ρ c · ∇e = σ : D − div q + r (3.34)
(
vi = M -1 f ext − f int
(i ) (i)
) (3.35)
Δt (i +1) − Δt i i
v (i +1/ 2) = v ( i −1/ 2) + v (3.36)
2
Δt (i +1) − Δt i (i +1/ 2)
x (i +1) = xi + v (3.37)
2
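The central-difference update of Eqs. 3.35–3.37 can be demonstrated on a single degree of freedom; the spring–mass values below are chosen purely for illustration, with the time increment well inside the stability limit.

```python
# Explicit central-difference integration (Eqs. 3.35-3.37) applied to a
# single-DOF spring-mass system. m, k and dt are illustrative values.
m, k = 1.0, 100.0          # mass (kg) and stiffness (N/m); omega = 10 rad/s
dt = 0.01                  # constant increment, well below 2/omega
x, v_half = 1.0, 0.0       # initial position; velocity stored at t - dt/2
for _ in range(1000):
    f_int = k * x                  # internal force of the linear spring
    a = (0.0 - f_int) / m          # Eq. 3.35: a = M^-1 (f_ext - f_int)
    v_half += dt * a               # Eq. 3.36 with a constant time increment
    x += dt * v_half               # Eq. 3.37: advance the position
# The scheme approximately conserves energy (initially 0.5*k*x0^2 = 50 J).
energy = 0.5 * k * x**2 + 0.5 * m * v_half**2
print(energy)
```

Because the velocity lives at half-steps, the scheme is second-order accurate and needs no matrix factorization, which is what makes ABAQUS/Explicit efficient for the highly non-linear contact problem of machining.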
Finite Element Modeling of Chip Formation in Orthogonal Machining 127
The type of software package chosen for the FE analysis of the metal cutting process is equally important in determining the quality and scope of the analysis that can be performed. There are currently a large number of commercial software packages available for solving a wide range of engineering problems, whether static, dynamic, linear or non-linear. Some of the dominant general-purpose FE packages include ABAQUS, ANSYS, MSC/NASTRAN, SDRC-IDEAS, etc. Different packages naturally possess different capabilities, which makes it critical to select a software package with the features required for performing a given analysis successfully. The present study selects ABAQUS as the platform for exploring the capabilities of the finite element method in analyzing various aspects of the metal cutting process. ABAQUS is a powerful general-purpose FE package that can solve problems ranging from relatively simple linear analyses to highly complex non-linear simulations. The software has no separate module for machining, as Deform or AdvantEdge do; the user has to define explicitly the tool and the workpiece, the process parameters, the boundary conditions, the mesh geometry and the simulation controls. This certainly requires more skill, effort and time to set up a simulation, since no preset controls or assumptions are available. But it is also the feature that not only ensures a very high level of modeling detail but also permits a thorough analysis, by allowing precise control over boundary conditions, mesh attributes, element type, solver type and so on.
A complete ABAQUS analysis can be subdivided into three distinct modules, namely ABAQUS/CAE, ABAQUS/Standard or ABAQUS/Explicit, and ABAQUS/Viewer, as shown in Fig. 3.8. These modules are linked with each other by input and output files. ABAQUS/Standard and ABAQUS/Explicit are the two main solvers available for performing an analysis, ABAQUS/Explicit being mainly used for explicit dynamic analysis. The strength of the ABAQUS program is said to lie largely in the capabilities of these two solvers.
Fig. 3.8. Stages of an ABAQUS analysis: ABAQUS/CAE (pre-processing) → input file (*.inp) → ABAQUS/Standard or ABAQUS/Explicit (analysis) → output file (*.odb) → ABAQUS/CAE or ABAQUS/Viewer (post-processing)
The model of the physical problem is created in the pre-processing stage; its details, such as the discretized geometry, material data, boundary conditions, element type, analysis type and output requests, are contained in the input file. ABAQUS/CAE is divided into functional units called modules, with which the FE model can be created, the input file generated and the results extracted from the output file. Each module is designed to serve a specific portion of the modeling task. The following subsections briefly discuss the various modules of ABAQUS/CAE.
Part module: Individual parts are created in the Part module, either by sketching their geometry directly in ABAQUS/CAE or by importing it from other geometric modeling programs. Depending upon the analysis, parts can be 2D or 3D, and deformable, discrete rigid, analytical rigid or Eulerian. In the present study, both the cutting tool and the workpiece are modeled as 2D deformable bodies. The part tools contained in this module allow editing and manipulating the existing parts defined in the current model.
Property module: The Property module allows the user to assign sections to a part instance, or to a region of a part instance, to which the various material properties are defined. A material definition specifies all the property data relevant to a particular analysis. For a coupled temperature–displacement analysis, both the mechanical strength properties (elastic moduli, yield stress, etc.) and the heat transfer properties (conductivity, specific heat) must be given as inputs. Various plasticity and damage models are also contained in the Property module. The material constants of the selected plasticity and damage models, here the Johnson–Cook material model and the Johnson–Cook damage model respectively, are defined in tabular form as input.
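The Johnson–Cook flow stress that the Property module takes in tabular form can be written out directly; the constants below are illustrative placeholders, not the data sets used in the chapter.

```python
import math

# Johnson-Cook flow stress:
#   sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m)
# with homologous temperature T* = (T - T_room)/(T_melt - T_room).
# All constants below are illustrative placeholders, not the chapter's.
def johnson_cook_stress(eps, eps_rate, T,
                        A=800e6, B=500e6, n=0.3, C=0.02, m=1.0,
                        eps_rate0=1.0, T_room=25.0, T_melt=1600.0):
    strain_term = A + B * eps**n                       # strain hardening
    rate_term = 1.0 + C * math.log(eps_rate / eps_rate0)  # rate hardening
    T_star = (T - T_room) / (T_melt - T_room)
    thermal_term = 1.0 - T_star**m                     # thermal softening
    return strain_term * rate_term * thermal_term

# Flow stress rises with strain and strain rate, and falls with temperature.
cold = johnson_cook_stress(0.5, 1e4, 100.0)
hot = johnson_cook_stress(0.5, 1e4, 900.0)
print(cold > hot)
```

The competition between the hardening terms and the thermal softening term is exactly what later drives adiabatic shear banding in the Ti6Al4V simulations.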
Assembly module: The individual parts created in the Part module exist in their own coordinate systems. It is in the Assembly module that these parts are assembled by positioning them relative to each other in a global coordinate system.
Step module: The Step module is used to create and configure analysis steps, which in the present case is a coupled temperature–displacement explicit dynamic analysis. The associated output requests can also be created here. The sequence of steps provides a convenient way to capture changes that may occur in the model during the course of the analysis, such as changes in the loading and boundary conditions or changes in the interactions. In addition, steps allow the analysis procedure, the data output and various controls to be changed. An output request contains information regarding which variables will be output during an analysis step, from which region of the model, and at what rate. The general solution controls and solver controls can also be customized. Furthermore, adaptive mesh regions and the controls for adaptive meshing in selected regions can be specified in this module. Note that the implementation of suitable mesh adaptivity (remeshing in a Lagrangian framework, or repositioning of nodes in an ALE framework) depends upon the type of analysis considered, i.e. implicit (ABAQUS/Standard) or explicit (ABAQUS/Explicit): adaptive remeshing is available in ABAQUS/Standard, while ALE adaptive meshing is available in ABAQUS/Explicit.
Interaction module: The Interaction module allows the user to specify mechanical and thermal interactions between regions of a model, or between a region of a model and its surroundings. Surface-to-surface contact has been used to describe the contact between tool and workpiece in the present study. The interaction between contacting bodies is defined by assigning a contact property model to a contact interaction, which defines the tangential behavior (friction) and the normal behavior. The tangential behavior includes a friction model that defines the force resisting the relative tangential motion of the surfaces, while the normal behavior includes a definition of the contact pressure–overclosure relationship that governs the motion of the surfaces. In addition, a contact property can contain information about thermal conductance, thermal radiation and heat generation due to friction. ABAQUS/Explicit uses two different methods to enforce contact constraints, namely the kinematic contact algorithm and the penalty contact algorithm. The former uses a kinematic predictor/corrector contact algorithm to strictly enforce contact constraints so that no penetrations are allowed, while the latter enforces the constraints more weakly. In this study, the kinematic contact algorithm has been used to enforce the contact constraints of a master–slave contact pair, where the rake surface of the cutting tool is defined as the master surface and the chip as the slave (node-based) surface. Both the frictional conditions and the friction-generated heat are included in the kinematic contact algorithm through the TANGENTIAL BEHAVIOUR and GAP HEAT GENERATION options of the software.
Load module: The Load module is used to specify loads, boundary conditions and
predefined fields.
Mesh module: The Mesh module allows meshes to be generated on the parts and assemblies created within ABAQUS/CAE, and the correct element to be selected according to the type of analysis performed (in the present case, CPE4RT) for discretization. A variety of mesh controls are available that help in selecting the element shape (tri, quad or quad-dominated), the meshing technique (structured or unstructured) and the meshing algorithm (medial axis or advancing front). The structured meshing technique generates structured meshes using simple predefined mesh topologies and is more efficient for meshing regular shapes. Free meshing, however, allows more flexibility than structured meshing. Two commonly used free-mesh algorithms for meshing with quadrilateral elements are medial axis and advancing front. Advancing front is generally preferable because it generates elements of more uniform size (area-wise) with a more consistent aspect ratio. Since in ABAQUS/Explicit the smallest elements control the size of the time step, avoiding large differences in element size makes the numerical procedure more efficient.
Job: The Job module allows the user to create a job, to submit it to ABAQUS/Standard or ABAQUS/Explicit for analysis and to monitor its progress.
Visualization: The Visualization module provides graphical display of finite element models and results. It extracts the model and result information from the output database.
This section demonstrates the ability of finite element models to replicate the actual phenomena occurring during the cutting process under varied conditions, as well as to explain the basic mechanism of the chip formation process in terms of various numerical results. Two work materials are considered, namely AISI 1050 and Ti6Al4V, the former producing continuous chips and the latter segmented chips. In the first case study, the simulation results for machining AISI 1050 are presented and the predicted results are confirmed against experimental ones. In the second case study, Ti6Al4V is considered, and the predicted results are compared with those of AISI 1050 under similar conditions, thus showing the effect of the work material, one of the machining inputs, on various output variables. Apart from the machining inputs, various FE inputs such as mesh size, material models and friction models also have a marked impact on the output of a numerical model. Hence, the effects of two FE inputs, namely the mesh size and the Johnson–Cook material model constants, have also been studied while simulating segmented chip formation during machining of Ti6Al4V.
Table 3.1. Thermo-mechanical properties of tungsten carbide tool and AISI 1050 [72, 73]
Table 3.2. Johnson–Cook material constants (A (MPa), B (MPa), n, C, m) and damage constants (D1–D5) for AISI 1050 [72]
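The damage constants D1–D5 listed in Table 3.2 enter the Johnson–Cook damage model; a minimal sketch of the damage initiation criterion is given below, with all constant values assumed for illustration rather than taken from the table.

```python
import math

# Johnson-Cook damage model: failure strain
#   eps_f = (D1 + D2*exp(D3*sigma*)) * (1 + D4*ln(eps_rate*)) * (1 + D5*T*)
# and cumulative damage w = sum(d_eps_p / eps_f); the element fails when
# w reaches 1. Constants D1-D5 below are illustrative placeholders.
def failure_strain(sigma_star, eps_rate_star, T_star,
                   D1=0.05, D2=3.44, D3=-2.12, D4=0.002, D5=0.61):
    return ((D1 + D2 * math.exp(D3 * sigma_star))
            * (1.0 + D4 * math.log(eps_rate_star))
            * (1.0 + D5 * T_star))

def damage(strain_increments, sigma_star, eps_rate_star, T_star):
    """Accumulated damage w over a sequence of plastic strain increments."""
    eps_f = failure_strain(sigma_star, eps_rate_star, T_star)
    return sum(d / eps_f for d in strain_increments)

# Five increments of 0.1 plastic strain at moderate triaxiality: w < 1,
# so damage has initiated nowhere yet.
w = damage([0.1] * 5, sigma_star=0.3, eps_rate_star=10.0, T_star=0.2)
print(w)
```

In the chip-separation scheme described earlier, elements along the predefined damage zone are deleted once w reaches 1, letting the tool advance into the workpiece.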
Figure 3.9 shows the distributions of stress, strain and temperature while machining AISI 1050 using a tungsten carbide tool at a cutting speed of 120 m/min and a feed of 0.2 mm/rev.
As the tool touches the workpiece, compression occurs within the workpiece. With the further advancement of the tool into the workpiece, stresses start developing near the tool tip and attain high localized values in a confined region called the primary shear deformation zone, or shear plane, as shown in Fig. 3.9(a). The stresses in this region are as high as 1.3 GPa. Consequently, such high stresses cause high strains to occur in the shear zone. This allows the workpiece material to deform plastically and shear continuously in the form of a chip that flows over the tool rake face. The type of chip thus formed depends largely on the distribution of strain and temperature within the chip. As the cutting continues, the effective strains (especially around the tool tip) increase and spread over a wider area of the chip, with a maximum value not exceeding 1.7 in the shear plane. This tends to make the temperature distribution uniform in the chip region, resulting in a steady-state continuous chip of unvarying thickness. It is interesting to note that the maximum equivalent plastic strain and temperature are found along the tool–chip interface.
This can be attributed to the fact that the chip entering the secondary shear deformation zone already possesses accumulated plastic strain and heat. The instant it begins to flow over the rake surface, further plastic straining and local heating occur because of the severe contact and friction in the contact zone [74], producing higher temperatures at the tool–chip interface, specifically in the sliding region.
In order to validate the developed model, both the cutting speed and the feed (uncut chip thickness) are varied, and their effect on the predicted cutting forces is studied and compared with the experimental ones. The cutting speed is varied in the range of 72–164 m/min for feed values of 0.1 and 0.2 mm.
It is known that the cutting force and thrust force increase almost linearly with increasing feed rate [75], whereas they decrease with increasing cutting velocity. The reason can be explained from the expressions for the cutting and thrust forces, in which t is the depth of cut (mm), s is the feed rate (mm/rev), τs is the dynamic shear strength of the workpiece, γ is the rake angle and ζ is the chip reduction coefficient, i.e., the ratio of deformed to undeformed chip thickness. From these expressions, the correlation between feed rate and forces is straightforward. As far as the variation of cutting velocity is concerned, as it increases, the temperature of the shear zone increases. This softens the workpiece, i.e. the value of τs decreases, thereby reducing the cutting and thrust forces.
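The chapter's own force expressions are not reproduced above; as a stand-in sketch, the classical Merchant relations (an assumption here, not the chapter's exact equations) exhibit the same qualitative trends: forces grow linearly with feed, and fall as the dynamic shear strength drops with cutting speed.

```python
import math

# Classical Merchant orthogonal-cutting estimate (a stand-in, not the
# chapter's equations). Symbols follow the text: t = depth of cut (mm),
# s = feed (mm/rev), tau_s = dynamic shear strength (MPa), gamma = rake
# angle; beta is the friction angle (an extra assumed input).
def merchant_forces(t, s, tau_s, gamma, beta):
    """Return (Fc, Ft) in N; MPa * mm^2 = N keeps the units consistent."""
    phi = math.pi / 4 - (beta - gamma) / 2        # Merchant shear angle
    area = t * s                                  # uncut chip area, mm^2
    denom = math.sin(phi) * math.cos(phi + beta - gamma)
    Fc = tau_s * area * math.cos(beta - gamma) / denom
    Ft = tau_s * area * math.sin(beta - gamma) / denom
    return Fc, Ft

# Doubling the feed doubles the cutting force; a lower tau_s (thermal
# softening at higher speed) lowers it proportionally.
Fc1, _ = merchant_forces(2.0, 0.1, 600.0, math.radians(5), math.radians(30))
Fc2, _ = merchant_forces(2.0, 0.2, 600.0, math.radians(5), math.radians(30))
print(Fc2 / Fc1)  # 2.0
```

The linear dependence on the uncut chip area t·s and on τs is exactly the mechanism invoked in the paragraph above.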
The predicted cutting forces show a similar trend with variation of speed and feed, as shown in Fig. 3.10. Cutting forces measured during experimental tests under conditions similar to those of the simulations are also presented in Fig. 3.10 for comparison. At the lower feed, the predicted results closely match the experimental ones. At the higher feed, a flatter curve is observed for the predicted force values, implying that the cutting force does decrease with increasing speed, but not as markedly as in the experimental case. However, the maximum deviation between the predicted and measured cutting forces remains within an acceptable range of 15%.
Fig. 3.10. Predicted and measured forces while machining AISI 1050
Meshing: Selection of a suitable mesh size is a critical factor from the points of view of both accuracy and computational time [76]. As discussed earlier, a finer mesh leads to greater accuracy, but at the cost of higher computational time. It is important to mesh the model in such a way that it gives results close to the experimental ones on the one hand and consumes fairly little time on the other. Moreover, it is illogical to start with a very fine mesh; instead, a mesh refinement study should be performed, in which the mesh is gradually changed from coarse to fine and the corresponding results are compared with each other. There exists a limit beyond which further refinement increases the CPU time without any significant change in the numerical results. It is the duty of the analyst to choose an optimum mesh as a fair compromise between accuracy and computational time. This shows the need for a mesh refinement study to establish the reliability of the developed model.
Researchers have pointed out that the temperature at the integration points increases as the element size decreases. Since this study primarily deals with the mechanism of adiabatic shearing during segmented chip formation, it is logical to carry out a mesh refinement study. The mesh is varied mainly in the chip region by considering 5, 10 and 15 elements (on average) in the chip thickness direction, as shown in Fig. 3.11. The average element sizes for 5, 10 and 15 elements are 50×50 µm, 20×20 µm and 12×12 µm, respectively. Since the advancing front algorithm (a free-mesh algorithm) is employed for meshing the workpiece, slight skewness has been observed in certain mesh regions, with finer meshing at the right edge of the workpiece.
Fig. 3.11. Mesh configurations for (a) 5 (b) 10 and (c) 15 elements
Figure 3.12 shows the predicted chip morphology and the distribution of temperature within the chip for all three cases at a cutting speed of 210 m/min and an uncut chip thickness of 0.2 mm.
Fig. 3.12. Temperature distributions for (a) 5 (b) 10 and (c) 15 elements
It can be seen that the temperature distribution pattern varies with mesh refinement. As the meshing gets finer, the stresses become more concentrated along the shear plane, as a result of which the strain and temperature not only become localized in a very narrow zone but also attain very high values. Since the maximum temperature increases with finer meshing, the tendency to invoke chip segmentation through thermal softening by adiabatic shearing becomes higher, thus producing segmented chips in the second case. However, with further refinement, as in the case of 15 elements, not much variation in chip morphology is observed compared with 10 elements. The adiabatic shear band does appear relatively more distinct (as a very thin strip) in the former case, but the average temperature in the shear band (Tadia) increases by less than 1% (see Table 3.3). This implies that further mesh refinement may not be required as far as consistency of the results is concerned. Moreover, there is a limit to the reduction of element size from the software point of view: since the time step in ABAQUS/Explicit is controlled by the size of the smallest element, reducing the element size too far may not allow the simulation to run properly.
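The dependence of the stable time increment on the smallest element can be made concrete: in an explicit analysis it scales roughly as the element size divided by the dilatational wave speed. The material values below are assumed (steel-like), not taken from the study.

```python
import math

# Stable time increment estimate for explicit dynamics: dt ~ L_min / c,
# with c = sqrt(E / rho) the (1D) dilatational wave speed. E and rho are
# assumed steel-like values for illustration.
E, rho = 210e9, 7850.0                 # Pa, kg/m^3
c = math.sqrt(E / rho)                 # wave speed, roughly 5.2 km/s
for L in (50e-6, 20e-6, 12e-6):        # element sizes from the mesh study
    dt = L / c
    print(f"L = {L * 1e6:.0f} um  ->  dt ~ {dt:.2e} s")
```

Going from 50×50 µm to 12×12 µm elements cuts the stable increment by a factor of about four, which is why the 15-element mesh was so expensive to run to completion.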
From Table 3.4, it can be inferred that the model considering 10 elements along the uncut chip thickness offers the best compromise between accuracy and computation time. In the case of 15 elements, though adiabatic shearing is very prominent, it was very difficult to run the simulation beyond 0.4 ms.
Johnson–Cook material model constants: Several classical plasticity models have been widely employed that represent, with varying degrees of accuracy, the material flow stress as a function of strain, strain rate and temperature. These include the Litonski–Batra model [78, 79], the Usui model [80], the Maekawa model [81], the Johnson–Cook model [65], the Zerilli–Armstrong model [82], the Oxley model [83] and the Mechanical Threshold Stress (MTS) model [84]. It is very important to select an appropriate material model that satisfactorily predicts the desired chip morphology and other output variables. The Johnson–Cook model, being the most widely used, is employed to describe the flow stress of the workpiece material Ti6Al4V in this study (see Eq. 3.17). This model defines the flow stress as a function of strain, strain rate and temperature, so that it considers not only strain rates over a large range but also temperature changes due to thermal softening caused by large plastic deformation. The work material constants in this constitutive equation have been determined by various researchers using several methods, producing different data sets for the same material. As a result, selection of a suitable data set along with an appropriate material model becomes equally important [85]. The present study selects two sets of Johnson–Cook material constants from the available literature, namely M1 and M2, as listed in Table 3.4 [86, 87]. Lee and Lin [87] obtained the Johnson–Cook material constants from high strain rate mechanical testing using the Split Hopkinson Pressure Bar (SHPB) method at a constant strain rate of 2000 s⁻¹, within the temperature range 700–1000 °C and up to a maximum true plastic strain of 0.3, while Meyer and Kleponis [86] obtained the material constants by considering strain rate levels of 0.0001, 0.1 and 2150 s⁻¹ and a maximum plastic strain of 0.57.
Besides the material model constants, all other parameters are kept the same so that the results can be compared under identical conditions. The meshing in both cases is the same as the one considered optimum in the previous case study.
Figures 3.13 and 3.14 show the distributions of stress, strain and temperature for models M1 and M2 at a cutting speed of 210 m/min and an uncut chip thickness of 0.2 mm.
Cutting forces: Figure 3.15 shows the variation of predicted cutting force with
time while machining AISI 1050 and Ti6Al4V.
Fig. 3.15. Time signature of cutting forces for Ti6Al4V and AISI 1050
The force signatures obtained for the two materials differ remarkably from each other, not only in magnitude but also in nature. As expected, the cutting force profile closely resembles the predicted chip morphology: continuous for AISI 1050 and wavy for Ti6Al4V. While machining Ti6Al4V, chip formation begins with bulging of the workpiece material in front of the tool, as a result of which the forces increase gradually, but they drop suddenly when the shear band begins to form due to thermal softening. The magnitude of the waviness (the difference between the peak and trough values) evidently affects the surface finish quality and the thermal load that the cutting tool undergoes during cutting.
Fig. 3.16. Temperature contours on cutting tool while machining (a) AISI 1050 and
(b) Ti6Al4V
References
[1] Astakhov, V.P.: Tribology of Metal Cutting. Elsevier (2006) ISBN: 978-0-444-52881-0
[2] Shaw, M.C.: Metal Cutting principles, 2nd edn. Oxford University Press, Oxford
(2005)
[3] Hahn, R.S.: On the temperature developed at the shear plane in the metal cutting
process. In: Proceedings of First U.S. National Congress Appl. Mech. ASME 661
(1951)
[4] Chao, B.T., Trigger, K.J.: An analytical evaluation of metal cutting temperature.
Trans. ASME 73, 57–68 (1951)
[5] Leone, W.C.: Distribution of shear-zone heat in metal cutting. Trans. ASME 76, 121–
125 (1954)
[6] Loewen, E.G., Shaw, M.C.: On the analysis of cutting tool temperatures. Transactions
of the ASME 71, 217–231 (1954)
[7] Weiner, J.H.: Shear plane temperature distribution in orthogonal machining. Trans.
ASME 77, 1331–1341 (1955)
[8] Rapier, A.C.: A theoretical investigation of the temperature distribution in the metal
cutting process. Br. J. Appl. Phys. 5, 400–405 (1954)
[9] Bagci, E.: 3-D numerical analysis of orthogonal cutting process via mesh-free me-
thod. Int. J. the Physical Sciences 6(6), 1267–1282 (2011)
[10] ASM Handbook, Volume 16-Machining, ASM International Handbook Committee,
ASM International, Electronic (1989) ISBN: 978-1-61503-145-0
[11] Juneja, B.L., Sekhon, G.S., Seth, N.: Fundamentals of metal cutting and machine
tools, 2nd edn. New Age International Publishers, New Delhi (2003)
[12] Merchant, M.E.: Mechanics of the metal cutting process. J. Appl. Phys. 16, 318–324
(1945)
[13] Lee, E.H., Shaffer, B.W.: The theory of plasticity applied to a problem of machining.
Trans. ASME, J. Appl. Mech. 18, 405–413 (1951)
[14] Oxley, P.L.B.: Shear angle solutions in orthogonal machining. Int. J. Mach. Tool.
Des. Res. 2, 219–229 (1962)
[15] Bhattacharya, A.: Metal cutting theory and practice. Central book publishers, Kolkata
(1984)
[16] Lacale, L.N., Guttierrez, A., Llorente, J.I., Sanchez, J.A., Aboniga, J.: Using high
pressure coolant in the drilling and turning of low machinability alloys. Int. J. of Adv.
Tech. 16, 85–91 (2000)
[17] Tobias, S.A.: Machine tool Vibration. Blackie and Sons Ltd, Scotland (1965)
[18] Ghosh, A., Mallik, A.K.: Manufacturing science (1985) ISBN: 81-85095-85-X
[19] Jacobson, S., Wallen, P.: A new classification system for dead zones in metal cutting.
Int. J. Mach. Tool. Manufact. 28, 529–538 (1988)
[20] Trent, E.M., Wright, P.K.: Metal cutting, 4th edn. Butterworth-Heinemann (2000)
[21] Komanduri, R., Hou, Z.B.: A review of the experimental techniques for the measure-
ment of heat and temperatures generated in some manufacturing processes and tribol-
ogy. Tribol. Int. 34, 653–682 (2001)
[22] Groover, M.P.: Fundamentals of modern manufacturing: materials processes, and sys-
tems, 2nd edn. Wiley, India (2002)
[23] Klamecki, B.E.: Incipient chip formation in metal cutting- A three dimensional finite
element analysis. Ph.D. Thesis. University of Illinois, Urbana (1973)
[24] Hughes, J.R.T.: The Finite Element Method. Prentice-Hall International, Inc. (1987)
[25] Reddy, J.N.: An introduction to the finite element method, 2nd edn. McGraw-Hill
Inc. (1993)
[26] Bathe, K.J.: Finite Element Procedures. Prentice Hall, Englewood Cliffs (1996)
[27] Rao, S.S.: The Finite Element in Engineering, 3rd edn. Butterworth-Heinemann
(1999)
[28] Zienkiewicz, O.C., Taylor, R.L.: The Finite Element Method, 5th edn. Butterworth-
Heinemann (2000)
[29] Liu, G.R., Quek, S.S.: The Finite Element Method: A Practical Course. Butterworth
Hienemann (2003)
[30] Hutton, D.V.: Fundamentals of finite element analysis, 1st edn. Mc Graw Hill (2004)
[31] Belytschko, T., Liu, W.K., Moran, B.: Nonlinear Finite Elements for Continua and
Structures. John Wiley and Sons, New York (2000)
[32] Strenkowski, J.S., Moon, K.-J.: Finite element prediction of chip geometry and
tool/workpiece temperature distributions in orthogonal metal cutting. ASME J. Eng.
Ind. 112, 313–318 (1990)
[33] Raczy, A., Elmadagli, M., Altenhof, W.J., Alpas, A.T.: An Eulerian finite-element
model for determination of deformation state of a copper subjected to orthogonal cut-
ting. Metall Mater. Trans. 35A, 2393–2400 (2004)
[34] Mackerle, J.: Finite element methods and material processing technology, an addendum (1994–1996). Eng. Comp. 15, 616–690 (1998)
[35] Rakotomalala, R., Joyot, P., Touratier, M.: Arbitrary Lagrangian-Eulerian thermome-
chanical finite element model of material cutting. Comm. Numer. Meth. Eng. 9, 975–
987 (1993)
[36] Pepper, D.W., Heinrich, J.C.: The Finite Element Method: Basic Concepts and Appli-
cations. Hemisphere Publishing Corporation, United States of America (1992)
[37] Bower, A.F.: Applied mechanics of solid. CRC Press, Taylor and Francis Group,
New York (2010)
[38] Dhondt, G.: The finite element method for three-dimensional thermomechanical ap-
plications. John Wiley and Sons Inc., Germany (2004)
[39] Pian, T.H.H.: Derivation of element stiffness matrices by assumed stress distribu-
tions. AIAA J. 2, 1333–1336 (1964)
[40] Zienkiewicz, O.C., Taylor, R.L.: The Finite Element Method. Basic formulations and
linear problems, vol. 1. McGraw-Hill, London (1989)
[41] Liapis, S.: A review of error estimation and adaptivity in the boundary element me-
thod. Eng. Anal. Bound. Elem. 14, 315–323 (1994)
[42] Tay, A.O., Stevenson, M.G., de Vahl Davis, G.: Using the finite element method to
determine temperature distribution in orthogonal machining. Proc. Inst. Mech.
Eng. 188(55), 627–638 (1974)
[43] Muraka, P.D., Barrow, G., Hinduja, S.: Influence of the process variables on the tem-
perature distribution in orthogonal machining using the finite element method. Int. J.
Mech. Sci. 21(8), 445–456 (1979)
[44] Moriwaki, T., Sugimura, N., Luan, S.: Combined stress, material flow and heat analy-
sis of orthogonal micromachining of copper. CIRP Annals - Manufact. Tech. 42(1),
75–78 (1993)
[45] Kim, K.W., Sin, H.C.: Development of a thermo-viscoplastic cutting model using fi-
nite element method. Int. J. Mach. Tool Manufact. 36(3), 379–397 (1996)
[46] Liu, C.R., Guo, Y.B.: Finite element analysis of the effect of sequential cuts and tool-
chip friction on residual stresses in a machined layer. Int. J. Mech. Sci. 42(6), 1069–
1086 (2000)
[47] Ceretti, E., Falbohmer, P., Wu, W.T., Altan, T.: Application of 2D FEM to chip for-
mation in orthogonal cutting. J. Mater Process Tech. 59, 169–180 (1996)
[48] Li, K., Gao, X.-L., Sutherland, J.W.: Finite element simulation of the orthogonal met-
al cutting process for qualitative understanding of the effects of crater wear on the
chip formation. J. Mater Process Tech. 127, 309–324 (2002)
[49] Arrazola, P.J., Ugarte, D., Montoya, J., Villar, A., Marya, S.: Finite element modeling
of chip formation process with abaqus/explicit. VII Int. Conference Comp., Barcelona
(2005)
[50] Davies, M.A., Cao, Q., Cooke, A.L., Ivester, R.: On the measurement and prediction
of temperature fields in machining AISI 1045 steel. Annals of the CIRP 52, 77–80
(2003)
[51] Adibi-Sedeh, A.H., Vaziri, M., Pednekar, V., Madhavan, V., Ivester, R.: Investigation
of the effect of using different material models on finite element simulations of metal
cutting. In: 8th CIRP Int Workshop Modeling Mach Operations, Chemnitz, Germany
(2005)
[52] Shi, J., Liu, C.R.: The influence of material models on finite element simulation of
machining. J. Manufact. Sci. Eng. 126, 849–857 (2004)
[53] Ozel, T.: Influence of Friction Models on Finite Element Simulations of Machining.
Int. J. Mach. Tool Manufact. 46(5), 518–530 (2006)
[54] Filice, L., Micari, F., Rizzuti, S., Umbrello, D.: A critical analysis on the friction
modeling in orthogonal machining. International Journal of Machine Tools and Man-
ufacture 47, 709–714 (2007)
[55] Haglund, A.J., Kishawy, H.A., Rogers, R.J.: An exploration of friction models for the
chip-tool interface using an Arbitrary Lagrangian-Eulerian finite element model.
Wear 265(3-4), 452–460 (2008)
[56] Mabrouki, T., Deshayes, L., Ivester, R., Rigal, J.-F., Jurrens, K.: Material modeling and experimental study of serrated chip morphology. In: Proceedings of 7th CIRP Int Workshop Model. Machin, France, April 4-5 (2004)
[57] Coelho, R.T., Ng, E.-G., Elbestawi, M.A.: Tool wear when turning AISI 4340 with
coated PCBN tools using finishing cutting conditions. J. Mach. Tool Manufact. 47,
263–272 (2006)
[58] Lorentzon, J., Jarvstrat, N.: Modelling tool wear in cemented carbide machining alloy
718. J. Mach. Tool Manufact. 48, 1072–1080 (2008)
[59] Davim, J.P., Maranhao, C., Jackson, M.J., Cabral, G., Gracio, J.: FEM analysis in
high speed machining of aluminium alloy (Al7075-0) using polycrystalline diamond
(PCD) and cemented carbide (K10) cutting tools. Int. J. Adv. Manufact. Tech. 39,
1093–1100 (2008)
[60] Attanasio, A., Cerretti, E., Rizzuti, S., Umbrello, D., Micari, F.: 3D finite element
analysis of tool wear in machining. CIRP Annals – Manufact. Tech. 57, 61–64 (2008)
[61] ABAQUS Analysis User’s manual. Version 6.7-4 Hibbitt, Karlsson & Sorensen, Inc.
(2007)
[62] ABAQUS Theory manual, Version 6.7-4 Hibbitt, Karlsson & Sorenson, Inc. (2007)
[63] ABAQUS/CAE User’s manual. Version 6.7-4 Hibbitt, Karlsson & Sorensen, Inc.
(2007)
[64] Wu, H.-C.: Continuum Mechanics and Plasticity. Chapman and Hall/CRC (2004)
[65] Johnson, G.R., Cook, W.H.: A constitutive model and data for metals subjected to
large strains, high strain rates and high temperatures. In: Proceedings of 7th Int Symp
Ballistics, the Hague, The Netherlands, pp. 541–547 (1983)
[66] Umbrello, D., M’Saoubi, R., Outeiro, J.C.: The influence of Johnson–Cook material
constants on finite element simulation of machining of AISI 316L steel. Int. J. Mach.
Tool Manufact. 47, 462–470 (2007)
[67] Davim, J.P., Maranhao, C.: A study of plastic strain and plastic strain rate in machin-
ing of steel AISI 1045 using FEM analysis. Mater Des. 30, 160–165 (2009)
[68] Vaziri, M.R., Salimi, M., Mashayekhi, M.: A new calibration method for ductile frac-
ture models as chip separation criteria in machining. Simulat Model Pract. Theor. 18,
1286–1296 (2010)
[69] Johnson, G.R., Cook, W.H.: Fracture characteristics of three metals subjected to vari-
ous strains, strains rates, temperatures and pressures. Eng. Fract. Mech. 21(1), 31–48
(1985)
144 A. Priyadarshini, S.K. Pal, and A.K. Samantaray
[70] Mabrouki, T., Girardin, F., Asad, M., Regal, J.-F.: Numerical and experimental study
of dry cutting for an aeronautic aluminium alloy. Int. J. Mach. Tool Manufact. 48,
1187–1197 (2008)
[71] Mabrouki, T., Rigal, J.: -F A contribution to a qualitative understanding of thermo-
mechanical effects during chip formation in hard turning. J. Mater. Process
Tech. 176, 214–221 (2006)
[72] Duan, C.Z., Dou, T., Cai, Y.J., Li, Y.Y.: Finite element simulation & experiment of
chip formation process during high speed machining of AISI 1045 hardened steel. Int.
J. Recent Trend Eng. 1(5), 46–50 (2009)
[73] Priyadarshini, A., Pal, S.K., Samantaray, A.K.: A Finite Element Study of Chip For-
mation Process in Orthogonal Machining. Int . J. Manufact., Mater. Mech. Eng. IGI
Global( accepted, in Press, 2011)
[74] Shi, G., Deng, X., Shet, C.: A finite element study of the effect of friction in ortho-
gonal metal cutting. Finite Elem. Anal. Des. 38, 863–883 (2002)
[75] Lima, J.G., Avila, R.F., Abrao, A.M., Faustino, M., Davim, J.P.: Hard turning: AISI
4340 high strength alloy steel and AISI D2 cold work tool steel. J. Mater. Process.
Tech. 169, 388–395 (2005)
[76] Priyadarshini, A., Pal, S.K., Samantaray, A.K.: Finite element study of serrated chip
formation and temperature distribution in orthogonal machining. J. Mechatron Intell.
Manufact. 2(1-2), 53–72 (2010)
[77] Wang, M., Yang, H., Sun, Z.-C., Guo, L.-G.: Dynamic explicit FE modeling of hot
ring rolling process. Trans. Nonferrous Met. Soc. China 16(6), 1274–1280 (2006)
[78] Litonski, J.: Plastic flow of a tube under adiabatic torsion. Bulletin of Academy of
Pol. Science, Ser. Sci. Tech. XXV, 7 (1977)
[79] Batra, R.C.: Steady state penetration of thermo-visoplastic targets. Comput Mech. 3,
1–12 (1988)
[80] Usui, E., Shirakashi, T.: Mechanics of machining–from descriptive to predictive
theory: On the art of cutting metals-75 Years Later. ASME PED 7, 13–55 (1982)
[81] Maekawa, K., Shirakashi, T., Usui, E.: Flow stress of low carbon steel at high tem-
perature and strain rate (Part 2)–Flow stress under variable temperature and variable
strain rate. Bulletin Japan Soc Precision Eng 17, 167–172 (1983)
[82] Zerilli, F.J., Armstrong, R.W.: Dislocation-mechanics-based constitutive relations for
material dynamics calculations. J. Appl. Phys. 61, 1816–1825 (1987)
[83] Oxley, P.L.B.: The mechanics of machining: An analytical approach to assessing ma-
chinability. Ellis Horwood Limited, Chichester (1989)
[84] Banerjee, B.: The mechanical threshold stress model for various tempers of AISI
4340 steel. Int. J. Solid Struct. 44, 834–859 (2007)
[85] Priyadarshini, A., Pal, S.K., Samantaray, A.K.: On the Influence of the Material and
Friction Models on Simulation of Chip Formation Process. J. Mach. Forming Tech.
Nova Science (accepted, 2011)
[86] Meyer, H.W., Kleponis, D.S.: Modeling the high strain rate behavior of titanium un-
dergoing ballistic impact and penetration. Int. J. Impact Eng. 26(1-10), 509–521
(2001)
[87] Lee, W.S., Lin, C.F.: High temperature deformation behaviour of Ti6Al4V alloy ev-
luated by high strain rate compression tests. J. Mater. Process. Tech. 75, 127–136
(1998)
[88] Baker, M.: The influence of plastic properties on chip formation. Comp. Mater.
Sci. 28, 556–562 (2003)
4 GA-Fuzzy Approaches: Application to Modeling of Manufacturing Process
A.K. Nandi
This chapter presents various techniques that combine fuzzy logic and the genetic algorithm (GA) to construct models of physical processes, including manufacturing processes. First, an overview of the fundamentals of fuzzy logic and fuzzy inference systems toward formulating a rule-based model (called a fuzzy rule-based model, FRBM) is presented. After that, the working principle of a GA is discussed, followed by how a GA can be combined with fuzzy logic to design the optimal knowledge base of the FRBM of a process. Results of a few case studies, conducted by the author, of modeling various manufacturing processes using GA-fuzzy approaches are presented.
4.1 Introduction
Optimal selection of machining parameters is imperative for obtaining better machining performance and cost effectiveness, as well as for achieving the desired accuracy of size, shape and surface roughness of the finished product.
Selection of these parameters is traditionally carried out on the basis of the expe-
rience of process planners with the help of past data available in machining hand-
books and tool catalogs. Practitioners continue to experience great difficulties due
to the lack of sufficient data on the numerous new cutting tools with different ma-
terials. Specific data on relevant machining performance measures such as tool
life, surface roughness, chip form, etc. are very difficult to find due to the lack of
reliable information or predictive models for these measures. In automated manufacturing processes, it is necessary to control the machining process by determining the optimum values of the machining parameters online during machining. Therefore, it is important to develop a technique to predict the attributes of a product before machining, in order to evaluate the robustness of the machining parameters in keeping a desired attribute and increasing product quality. Construction of a suitable machining
process model and evaluation of the optimal values of machining parameters using
this model as predictor are essential and challenging tasks.
146 A.K. Nandi
The model of a machining process represents a mapping between input and output variables under specific machining conditions. The input variables differ according to the type of machining process and the desired output. For example, in turning, the surface roughness (output variable) depends on a number of variables that can be broadly divided into four groups. The major variables, which include cutting speed, feed rate, depth of cut and tool wear, form the first group; flow of coolant, utilization of a chip breaker, work-holding devices and selection of tool type belong to the second group. The third group includes machine repeatability, machine vibration and damping, cutting temperature, chip formation and chip exit speed, thermal expansion of the machine tool, and power consumption; room temperature, humidity, dust content in the air and fluctuations in the power source make up the fourth group. Among these four groups, only the major variables can be measured and controlled during the machining process. Though the other variables are not directly controlled during machining, their effect on the obtained surface roughness cannot be neglected. Under a specific machining condition, these variables are assumed to be fixed at a particular state.
Various approaches have been proposed to model and simulate the machining
processes. Analytical methods, which are generally based on the established
science principles, are probably the first modeling approach. Experimental or em-
pirical approaches use experimental data as the basis for formulating the models.
Mechanistic and numerical methods integrate the analytical and empirical me-
thods, generally by the use of modern computer techniques.
Due to the complex and nonlinear relationships among the input-output variables, the influence of uncontrollable parameters and the involvement of random effects, prediction of machining-process outputs using mathematical/analytical approaches is not accurate. This has led to the development of empirical equations for a particular machine tool, set of machining parameters and workpiece-cutting tool material combination. Empirical models do not consider the underlying principles and mathematical relationships; they are usually obtained by performing statistical analysis or through the training of data-driven models to fit the experimental data [1].
The significant drawback of empirical models is their sensitivity to process variation, though they have the advantage of accuracy due to the use of experimental data. The accuracy of a model degenerates rapidly as the machining conditions deviate from the experimental settings. In addition, quality characteristics of machined parts exhibit stochastic variations over time due to changes in the machine tool structure and the environment. Therefore, the modeling technique should be capable of adapting to variations in the machining process. Most statistical process control models do not account for time-varying changes. The involvement of uncertainty and imprecision in machining processes is another aspect affecting the variation of machining output. In such cases, modeling techniques using fuzzy logic are most useful, because fuzzy logic is a powerful tool for dealing with imprecision and uncertainty [2]. The basic concept of fuzzy logic is to categorize the variables into fuzzy sets with a degree of certainty in the numerical interval [0, 1], so that ambiguity and
vagueness in the data structure and human knowledge can be handled without constructing complex mathematical models. Moreover, a fuzzy logic-based control system has the capability to adapt to the variations of a process by learning and adjusting itself to environmental changes through observing the current system behavior.
Fuzzy logic is an application of fuzzy set theory and was first proposed by Prof. L.A. Zadeh [3]. Fuzzy logic rules, which are derived based on fuzzy set theory, are used in a fuzzy inference system toward formulating a rule-based model (called a fuzzy rule-based model, FRBM). The performance of an FRBM mainly depends on two aspects: the structure of the fuzzy logic rules and the type/shape of the associated fuzzy subsets (membership function distributions, MFDs), which together constitute the knowledge base (KB) of the FRBM. A manually constructed KB of an FRBM may not be optimal in many cases, since it strongly demands a thorough knowledge of the process, which is difficult to acquire, particularly in a short period of time. Therefore, designing an optimal KB of a fuzzy model needs the help of other optimization/learning techniques. The genetic algorithm (GA), a population-based search and optimization technique, is used by many researchers to design the optimal KB of an FRBM for various processes. Systems combining fuzzy logic and genetic algorithms are called genetic-fuzzy systems.
The function μÃ(x) is called the membership function (MF) of the fuzzy set Ã and is defined as μÃ: X → [0, 1]. The value of μÃ(x) is called the degree of membership of x in the fuzzy set Ã.
i) Two fuzzy sets Ã and B̃ defined on X are equal if their membership functions are equal, i.e.,

   Ã = B̃ ⇔ μÃ(x) = μB̃(x) for all x ∈ X

ii) Given a fuzzy set Ã defined on X and any number α ∈ [0, 1], the α-cut, αÃ, and the strong α-cut, α+Ã, are the crisp sets:

   αÃ = {x | μÃ(x) ≥ α} and α+Ã = {x | μÃ(x) > α}

iii) The height of a fuzzy set is the largest membership grade attained by any element in that set, i.e., height(Ã) = max_{x∈X} μÃ(x)

iv) The crossover points of a membership function are the elements of the universe for which the fuzzy set Ã has membership value equal to 0.5, i.e., for which μÃ(x) = 0.5
The common types of membership function (MF) used in FRBM are triangular,
(higher order) polynomial, trapezoidal, Gaussian, etc.
Trapezoidal MF: Mathematically a trapezoidal MF can be represented as shown in
Figure 4.2(a).
                          0                  for x < a or x > d
                          (x − a)/(b − a)    for a ≤ x ≤ b
μÃ(x; a, b, c, d) =
                          1                  for b < x < c
                          (d − x)/(d − c)    for c ≤ x ≤ d
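A direct transcription of this piecewise definition (the parameter values in the usage below are illustrative):

```python
def trapezoidal_mf(x, a, b, c, d):
    """Trapezoidal membership function of Figure 4.2(a): rises on [a, b],
    equals 1 on (b, c), falls on [c, d], and is zero outside [a, d]."""
    if x < a or x > d:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    if x < c:
        return 1.0                 # plateau
    return (d - x) / (d - c)       # falling edge

# e.g. with a=0, b=2, c=4, d=6 the grade at x=1 is 0.5 and at x=3 is 1.0
```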
                       0            for x < a or x > c
                       f1(x, b1)    for a ≤ x < b
μÃ(x; a, b, c) =
                       1            for x = b
                       f2(x, b2)    for b < x ≤ c
where the functions f1 and f2 are of polynomial type. A polynomial MF reduces to a triangular-type MF when the functions f1 and f2 of the above expression are linear; f1 and f2 may also be exponential or other kinds of functions.
The controlling parameters for the configuration of a polynomial MF (as shown in Figure 4.2(b)) are b1 = b − a and b2 = c − b. Mathematically, the second-order polynomial function can be represented as μÃ(x) = c0·x + c1·x², where x is the distance measured along the base-width of the membership function distribution, μÃ(x) is the fuzzy membership value, and c0 and c1 are coefficients which can be determined based on specified conditions, such as

   μÃ = 1 at x = b1,  μÃ = 0 at x = 0,  and  ∂μÃ/∂x = 0 at x = b1.
Finally, the coefficients of the 2nd-order polynomial function become c0 = 2/b1 and c1 = −1/b1².
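These coefficients can be checked numerically; a sketch with an assumed half-width b1 = 3:

```python
def poly_mf_coeffs(b1):
    """Coefficients of the 2nd-order polynomial MF mu(x) = c0*x + c1*x**2
    that satisfy mu(0) = 0, mu(b1) = 1 and d(mu)/dx = 0 at x = b1."""
    c0 = 2.0 / b1
    c1 = -1.0 / b1 ** 2
    return c0, c1

b1 = 3.0                           # illustrative half-width
c0, c1 = poly_mf_coeffs(b1)
mu = lambda x: c0 * x + c1 * x ** 2
# mu reaches 1 at x = b1 and its slope c0 + 2*c1*x vanishes there
```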
From Figure 4.2, it can be seen that, for a given value of the support of the membership function of a fuzzy set, only one parameter, b1, is required to describe triangular and polynomial type MFDs (membership function distributions) with 3 fuzzy subsets, whereas two parameters, b1 and b2, are required to describe the (semi-)trapezoidal MFDs with two fuzzy subsets. The number of controlling parameters increases with the number of fuzzy subsets involved in the MFDs.
Intersection: μÃ∩B̃(x) = μÃ(x) ∧ μB̃(x)
Union: μÃ∪B̃(x) = μÃ(x) ∨ μB̃(x)
Complement: μĀ(x) = 1 − μÃ(x)
Fig. 4.3. (a) Intersection of fuzzy sets Ã and B̃ (b) Union of fuzzy sets Ã and B̃ (c) Complement of fuzzy set Ã
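On a discretized universe these standard operations act element-wise; a small illustrative sketch (the membership grades are assumed):

```python
# Membership grades of two fuzzy sets sampled at the same four universe points
A = [0.2, 0.5, 1.0, 0.3]
B = [0.6, 0.4, 0.7, 0.0]

intersection = [min(a, b) for a, b in zip(A, B)]   # mu_A AND mu_B (min)
union        = [max(a, b) for a, b in zip(A, B)]   # mu_A OR  mu_B (max)
complement_A = [1.0 - a for a in A]                # 1 - mu_A
```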
Disjunction (OR): P ∨ Q : x ∈ A or x ∈ B. Hence, T(P ∨ Q) = max(T(P), T(Q))
Conjunction (AND): P ∧ Q : x ∈ A and x ∈ B. Hence, T(P ∧ Q) = min(T(P), T(Q))
Negation: If T(P) = 1, then T(P̄) = 0; if T(P) = 0, then T(P̄) = 1
Implication: P → Q : x ∉ A or x ∈ B. Hence, T(P → Q) = T(P̄ ∨ Q)
Equivalence: P ↔ Q : T(P ↔ Q) = 1 for T(P) = T(Q), and T(P ↔ Q) = 0 for T(P) ≠ T(Q)
R = (A × B) ∪ (Ā × Y) ≡ IF A, THEN B
IF x ∈ A, where x ∈ X and A ⊂ X
THEN y ∈ B, where y ∈ Y and B ⊂ Y
The other connectives are applicable to two different universes of discourse as
usual. Classical logical compound propositions that are always true irrespective of
the truth values of the individual simple propositions are called tautologies.
Fuzzy propositional logic generalizes the classical propositional operations by using the truth set [0, 1] instead of only 1 or 0. The above logical connectives are also defined for fuzzy logic. As in classical logic, the implication connective in fuzzy logic can be modeled in rule-based form: P̃ → Q̃ is, IF x is Ã THEN y is B̃ (where the IF part is called the antecedent and the THEN part is called the consequent), and it is equivalent to the fuzzy relation R̃ = (Ã × B̃) ∪ (Ā × Y), where the fuzzy proposition P̃ is assigned to the fuzzy set Ã defined on universe X, and the fuzzy proposition Q̃ is described by the fuzzy set B̃ defined on universe Y. The membership function of R̃ is expressed by μR̃(x, y) = max[(μÃ(x) ∧ μB̃(y)), 1 − μÃ(x)]. The implication connective can be defined in several distinct forms. While these forms of implication are equivalent in classical logic, their extensions to fuzzy logic are not equivalent and result in distinct classes of fuzzy implications.
product: μR̃(x, y) = μÃ(x) · μB̃(y)    (4.2)
In Figure 4.4, the MF value 0.7 of μB̃(y) corresponds to the rule weight obtained after decomposition of the IF part of the rule.
Fig. 4.4. Graphical representation of fuzzy implication (i) min (ii) product
The composition of a fuzzy set with a fuzzy relation is expressed by B̃ = Ã ∘ R̃, where Ã is the input, or antecedent, defined on the universe X, B̃ is the output, or consequent, defined on universe Y, and R̃ is a fuzzy relation characterizing the relationship between specific input(s) x and specific output(s) y. Among the various methods of composition of fuzzy relations, max-min and max-product are the most commonly used; they are defined by the following membership function-theoretic expressions.
max-min: μB̃(y) = max_{x∈X} {min[μÃ(x), μR̃(x, y)]}    (4.3)

max-product: μB̃(y) = max_{x∈X} [μÃ(x) · μR̃(x, y)]    (4.4)
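A sketch of the max-min composition (4.3) on small discretized universes (all membership values are assumed for illustration):

```python
# Max-min composition B = A o R with X discretized at 3 points, Y at 2 points
mu_A = [0.2, 1.0, 0.4]            # mu_A(x) for each x in X
mu_R = [[0.3, 0.9],               # mu_R(x, y): rows indexed by x, columns by y
        [0.8, 0.1],
        [0.5, 0.6]]

# for each y, take min(mu_A(x), mu_R(x, y)) over x, then the max over x
mu_B = [max(min(a, row[j]) for a, row in zip(mu_A, mu_R))
        for j in range(len(mu_R[0]))]
```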
r_j(x) = max_{x∈X} min[μÃ(x), μÃj(x)],  j = 1, 2    (4.5)
Step 2: Calculate the conclusion B̃ by truncating each set B̃j by the value of rj(x) (i.e., the min implication method), which expresses the degree to which the antecedent Ãj is compatible with the given fact Ã, and taking the union of the truncated sets, as the rules are satisfied independently (i.e., the max aggregation method).
μB̃(y) = max_{j=1,2} min[r_j(x), μB̃j(y)] for all y ∈ Y    (4.6)
= max_{j=1,2} min[ max_{x∈X} min(μÃ(x), μÃj(x)), μB̃j(y) ]

= max_{j=1,2} max_{x∈X} min[ min(μÃ(x), μÃj(x)), μB̃j(y) ]

= max_{x∈X} max_{j=1,2} min[ μÃ(x), min(μÃj(x), μB̃j(y)) ]

= max_{x∈X} min[ μÃ(x), max_{j=1,2} min(μÃj(x), μB̃j(y)) ]

= max_{x∈X} min[ μÃ(x), μR̃(x, y) ]    (4.7)
Fig. 4.6. Illustration of the method of interpolation in fuzzy inferences with multiple inputs
In the above illustration of fuzzy inference, we considered a fuzzy value (Ã) for the variable input1. Figure 4.7 demonstrates the above (max-min) inference method for a two-input, single-output system where the values of the input variables are of crisp type (for instance, Fact: input1 is x1 AND input2 is x2); the (max-product) inference method for the same system is demonstrated in Figure 4.8.
Fig. 4.7. Graphical representation of max-min inference method with crisp type of input
values
Fig. 4.8. Graphical representation of max-product inference method with crisp type of input
values
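The two-rule max-min inference with crisp inputs can be sketched as follows (all membership functions and input values are illustrative assumptions, not those of Figures 4.7-4.8):

```python
def tri(x, a, b, c):
    """Triangular MF with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Two Mamdani rules: IF input1 is A_j AND input2 is A'_j THEN output is B_j
x1, x2 = 3.0, 7.0                                 # crisp inputs
w1 = min(tri(x1, 0, 2, 6), tri(x2, 4, 8, 12))     # rule 1 firing strength
w2 = min(tri(x1, 2, 6, 10), tri(x2, 0, 4, 8))     # rule 2 firing strength

# min implication truncates each consequent B_j at w_j;
# max aggregation combines the truncated sets over the output universe
ys = [i * 0.5 for i in range(21)]                 # output universe [0, 10]
agg = [max(min(w1, tri(y, 0, 3, 6)), min(w2, tri(y, 4, 7, 10))) for y in ys]
```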
y_COA = ∫ μB̃(y) · y dy / ∫ μB̃(y) dy    (4.8)
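In practice the integrals of Equation (4.8) are evaluated on a discretized output universe; a sketch with an assumed aggregated output set:

```python
# Discrete centre-of-area defuzzification: centroid of the aggregated set
ys = [0.0, 1.0, 2.0, 3.0, 4.0]
mu = [0.0, 0.5, 1.0, 0.5, 0.0]     # symmetric about y = 2, so y_coa = 2

y_coa = sum(m * y for m, y in zip(mu, ys)) / sum(mu)
```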
Mamdani-type [4]:
The structure of Mamdani-type fuzzy logic rule is expressed as follows:
IF x1 is A1 AND x2 is A2 AND……..AND xn is An THEN y is B
where xi (i=1, 2, ……, n) are input variables and y is the output variable. A1, A2,
…, An and B are the linguistic terms (say, Low, Medium, High, etc.) used for the
fuzzy subsets (membership function distributions) of the corresponding input and
output variables, respectively.
Sugeno-type [5]:
The Sugeno-type fuzzy rule is defined as follows:
IF x1 is A1 AND x2 is A2 AND……..AND xn is An THEN y =f(x1, x2, .., xn)
Unlike Mamdani-type, the rule consequent/output is expressed by a function of the
input variables.
Tsukamoto-type [6]:
The Tsukamoto-type fuzzy rule is defined as follows:
IF x1 is A1 AND x2 is A2 AND……..AND xn is An THEN y is z
where the consequent z is represented by a monotonic membership function.
where A1, …, An are the fuzzy subsets of the respective input variables x1, …, xn. The output function of the fuzzy rule is a linear function (say, a polynomial) of the form

   y = Σ_{j=1}^{K} c_j f_j(x1, …, xn)    (4.10)

The overall output of the TSK-type fuzzy model can be obtained for a set of inputs (x1, x2, …, xn) using the following expression.
Y = [ Σ_{r=1}^{Rlf} ( ∏_{v=1}^{n} μ_v^r(x_v) ) Σ_{j=1}^{K} c_j^r f_j^r(x_{1,…,n}) ] / [ Σ_{r=1}^{Rlf} ∏_{v=1}^{n} μ_v^r(x_{1,…,n}) ]    (4.11)
∏ is the product operator, representing a conjunctive decomposition method. Σ_{j=1}^{K} c_j^r f_j^r(x_{1,…,n}) is the output function of the r-th rule, and c_j^r are the function coefficients of the corresponding rule consequent, where K is the number of coefficients present in the consequent function of each rule.

Unlike a Mamdani-type FRBM, a TSK-type FRBM includes only the fuzzy rule base, a fuzzy inference engine, and a fuzzification module to determine the output in Equation (4.11). The performance of a TSK-type fuzzy model mainly depends on the optimal values of the rule output (consequent) functions, which in turn depend on the coefficients (cj), the exponential parameters of the input variables (not shown in Equation (4.10)) and the choice of the fuzzy subsets (membership function distributions). Thus, the steps of developing an FRBM with TSK-type FLRs are:
• construction of an optimal set of rules (Rf) with the appropriate structures of
rule output/consequent functions
• selection of shapes of fuzzy subsets/MFDs of input variables
• determination of optimal values of coefficients and power terms of rule conse-
quent functions
• tuning of MFDs of the input variables.
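The weighted-average output of Equation (4.11) can be sketched for a minimal assumed configuration (one input, two rules, first-order consequents with made-up coefficients):

```python
def tri(x, a, b, c):
    """Triangular MF with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

x = 4.0
eta = [tri(x, 0, 2, 6), tri(x, 2, 6, 10)]        # rule firing strengths
consequents = [1.0 + 0.5 * x, 2.0 + 1.0 * x]     # rule outputs f_r(x)

# Equation (4.11): firing-strength-weighted average of the rule outputs
Y = sum(e * f for e, f in zip(eta, consequents)) / sum(eta)
```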
Fig. 4.12. A schematic representation of a simple genetic algorithm outline: population representation and initialization
Fig. 4.13. A schematic representation of a chromosome with 5 bits for Sd and 5 bits for Sh
4.3.2.1.3 Initialization of GA
Initial population of a GA is normally determined at random. With a binary popu-
lation of Nind individuals whose chromosomes are Lind bits long, Nind × Lind ran-
dom numbers uniformly distributed from the set {0, 1} would be produced.
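A sketch of such an initialization, assuming the 5-bit-per-variable chromosome of Figure 4.13 (so each decoded variable lies in [0, 31]):

```python
import random

def decode(chromosome):
    """Decode a 10-bit chromosome (5 bits per variable) into two integers."""
    d = int("".join(map(str, chromosome[:5])), 2)   # first variable, [0, 31]
    h = int("".join(map(str, chromosome[5:])), 2)   # second variable, [0, 31]
    return d, h

random.seed(0)
# Nind = 6 individuals, Lind = 10 bits each, drawn uniformly from {0, 1}
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(6)]
```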
4.3.2.3 Selection
Selection guides the search toward the optimized solution by preferring individuals/members of the population with higher fitness over those with lower fitness. It is the operator which generates the mating pool. This operator determines the number of times a particular individual will be used for reproduction and, hence, the number of offspring that the individual will produce. Some of the popularly used selection methods are as follows:
p[I_{j,t}] = f(I_{j,t}) / Σ_{k=1}^{n} f(I_{k,t})    (4.13)

where p[I_{j,t}] is the probability that any j-th individual is selected at a generation t, and f(I_{j,t}) and Σ_{k=1}^{n} f(I_{k,t}) are the corresponding individual fitness and the sum of the fitness values of the population of size n, respectively.
The property represented by Equation (4.13) is satisfied by applying a random experiment that has some similarity with a generalized roulette game. In this roulette game the slots are not equally wide; that is why different outcomes occur with different probabilities. Figure 4.14 gives a graphical representation of how this roulette wheel game works.
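Proportionate (roulette-wheel) selection per Equation (4.13) can be sketched as follows; the fitness values are illustrative:

```python
import random

def roulette_select(fitness, rng):
    """Select an index j with probability f_j / sum(f) (Equation 4.13)."""
    total = sum(fitness)
    pick = rng.uniform(0.0, total)       # spin the wheel
    running = 0.0
    for j, f in enumerate(fitness):
        running += f                     # slot widths proportional to fitness
        if pick <= running:
            return j
    return len(fitness) - 1              # guard against round-off at the edge

rng = random.Random(42)
fitness = [1.0, 3.0, 6.0]                # individual 2 wins ~60% of spins
counts = [0, 0, 0]
for _ in range(10000):
    counts[roulette_select(fitness, rng)] += 1
```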
Tournament selection: In this plan, a small group of individuals is taken from the population and the individual with the best fitness in the group is chosen for reproduction. The size of the group is called the tournament size. A tournament size of two is called a binary tournament.
In addition, another scheme, called 'elitism', is applied along with the selection schemes discussed above. The idea of elitism is to prevent the observed best-fitted individual from dying out, by selecting it for the next generation without any random experiment. Elitism significantly influences the speed of convergence of a GA, but it can also lead to premature convergence.
The basic operator for producing new chromosomes in the GA is crossover. Like natural reproduction, crossover produces new individuals so that some genes of a new child come from one parent while others come from the other parent. In essence, crossover is the exchange of genes between the chromosomes of the two parents. The process may be described as cutting two strings at a randomly chosen position and swapping the two tails. This is known as single-point crossover, and the mechanism is visualized in Figure 4.15. An integer position i is selected at random with uniform probability between one and the string length l minus one (i.e., i ∈ [1, l−1]). When the genetic information is exchanged among the parent individuals (represented by the strings P1 and P2) about this point, two new offspring (represented by the strings O1 and O2) are produced. The two offspring in Figure 4.15 are produced when the crossover point i = 4 is selected.
For multi-point crossover, multiple crossover positions (m) are chosen at ran-
dom with no duplicates and sorted into ascending order. Then the bits between
two successive crossover points are exchanged between the two parents to pro-
duce two new offspring. The process of multi-point crossover is illustrated in Fig-
ure 4.16 with shaded color.
The idea behind multi-point crossover is that the parts of the chromosome representation that contribute the most to the performance of a particular individual may not necessarily be contained in adjacent substrings. Further, multi-point crossover appears to encourage exploration of the search space, thus making the search more robust.
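Single-point and multi-point crossover can be sketched as follows (the 8-bit parents are illustrative; the cut at i = 4 mirrors the example of Figure 4.15):

```python
import random

def single_point_crossover(p1, p2, cut=None, rng=random):
    """Swap tails about a cut point i in [1, l-1] (chosen at random if not given)."""
    i = cut if cut is not None else rng.randint(1, len(p1) - 1)
    return p1[:i] + p2[i:], p2[:i] + p1[i:]

def multi_point_crossover(p1, p2, cuts):
    """Exchange the segments between successive (sorted) crossover points."""
    o1, o2 = list(p1), list(p2)
    points = sorted(cuts) + [len(p1)]
    for k in range(0, len(points) - 1, 2):
        a, b = points[k], points[k + 1]
        o1[a:b], o2[a:b] = o2[a:b], o1[a:b]
    return o1, o2

p1 = [1, 1, 1, 1, 1, 1, 1, 1]
p2 = [0, 0, 0, 0, 0, 0, 0, 0]
o1, o2 = single_point_crossover(p1, p2, cut=4)   # the i = 4 case of Figure 4.15
m1, m2 = multi_point_crossover(p1, p2, [2, 5])   # segment [2, 5) is exchanged
```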
4.3.2.5 Mutation
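Bit-wise mutation flips each bit of a chromosome independently with a small mutation probability; a minimal sketch (the probability 0.02 echoes the value used in the example below):

```python
import random

def bitwise_mutation(chromosome, p_m, rng):
    """Flip each bit independently with probability p_m."""
    return [1 - b if rng.random() < p_m else b for b in chromosome]

rng = random.Random(0)
child = bitwise_mutation([0, 1, 1, 0, 1, 0, 0, 1], 0.02, rng)
```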
Subject to

   g(d, h) ≡ π d² h / 4 ≥ 300,
   d_min ≤ d ≤ d_max,
   h_min ≤ h ≤ h_max
where c is the cost of the can material per square cm, which is taken as 0.005 and
the minimum and maximum values of d and h are taken as, dmin=hmin=0 and
dmax=hmax=31.
Therefore, in this problem the number of decision variables is two (d and h). The population size of the GA is taken as six and is kept constant throughout the GA operation. The GA runs for a number of iterations until a specified termination criterion is satisfied; in Figure 4.18, the maximum number of iterations/generations (max_gen) is treated as the termination criterion.
The GA iteration starts with the creation of six random solutions, which are treated as the parent solutions. The chromosome structure of each solution is the same as presented in Figure 4.13. Then, the fitness values of all the parent solutions in the population are calculated using the following fitness function (discussed in Section 4.3.2.2).
solution in order to maintain a constant population size throughout the GA iteration. After that, the crossover operator (discussed in Section 4.3.2.4) is applied to two randomly chosen solutions from the mating pool based on a given probability (crossover probability, say 0.9). Then, bit-wise mutation (discussed in Section 4.3.2.5) is carried out on each of the six solutions obtained after employing the crossover operator, using a given mutation probability (say 0.02), producing six new (offspring) solutions. The objective function values corresponding to each solution are depicted in the single-point crossover and bit-wise mutation tables in Figure 4.18. This completes one iteration/generation of the GA. Then the GA termination criterion is checked; if it is satisfied, the GA iteration process stops, otherwise another generation/iteration starts by treating the six offspring solutions obtained in the previous iteration as the parent solutions, assigning fitness values to all the solutions in the population, and continuing the same iteration procedure as described above.
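The complete loop just described can be sketched end to end. The material-cost objective used here, f(d, h) = c(πd²/2 + πdh) (two circular ends plus the side wall), is an assumption, since only the constraint g(d, h) survives in the text above; the constraint is handled by a penalty added to the cost:

```python
import math
import random

C, POP, BITS, GENS = 0.005, 6, 5, 60
rng = random.Random(7)

def decode(ch):
    d = int("".join(map(str, ch[:BITS])), 2)    # d in [0, 31]
    h = int("".join(map(str, ch[BITS:])), 2)    # h in [0, 31]
    return d, h

def fitness(ch):
    d, h = decode(ch)
    cost = C * (math.pi * d * d / 2 + math.pi * d * h)   # assumed objective
    penalty = max(0.0, 300.0 - math.pi * d * d * h / 4)  # g(d, h) >= 300
    return 1.0 / (1.0 + cost + penalty)                  # higher is better

pop = [[rng.randint(0, 1) for _ in range(2 * BITS)] for _ in range(POP)]
f0 = max(map(fitness, pop))                              # best initial fitness

for _ in range(GENS):
    elite = max(pop, key=fitness)
    # binary tournament selection fills the mating pool
    pool = [max(rng.sample(pop, 2), key=fitness) for _ in range(POP)]
    nxt = []
    for i in range(0, POP, 2):
        a, b = pool[i][:], pool[i + 1][:]
        if rng.random() < 0.9:                           # single-point crossover
            cut = rng.randint(1, 2 * BITS - 1)
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        nxt += [a, b]
    for ch in nxt:                                       # bit-wise mutation
        for k in range(2 * BITS):
            if rng.random() < 0.02:
                ch[k] = 1 - ch[k]
    nxt[0] = elite[:]                                    # elitism
    pop = nxt

best = max(pop, key=fitness)
```

Because the elite individual is copied forward each generation, the best fitness in the population never decreases.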
A Mamdani-type fuzzy logic rule for a particular process having say, two input
variables (x1 and x2) and one output variable (y) (each having triangular-type
MFDs with 3 fuzzy subsets) may be expressed as
IF x1 is A1 AND x2 is A2 THEN y is B,
where A1, A2 and B are the fuzzy subsets (which can be expressed by suitable linguistic expressions, such as LOW, MEDIUM, HIGH, etc.) of triangular-type membership functions. In Section 4.2.2.2, it was discussed that one controlling parameter is required to describe triangular-type MFDs with 3 fuzzy subsets. A typical binary-coded GA-string for optimizing the KB will look as shown in Figure 4.20.
Fig. 4.20. A GA-string representing the rule base and the parameters related to membership functions of input-output variables of a FRBM
where b1, b2 and b3 are the (continuous) control (GA) variables related to the MFDs corresponding to the two input variables x1 and x2 and the single output variable y, respectively. The number of bits used for optimizing the RB is equal to the maximum possible number of rules present in the RB. In this case, the number of rules will be 3 × 3 = 9, since each of the two input variables comprises 3 fuzzy subsets. The information of b1, b2 and b3 is coded by the next bits in the GA-string.
There are, in fact, three different approaches to designing a genetic-fuzzy system (GFS), according to the KB components included in the learning process. These are as follows.
Fig. 4.21. A GA-string representing the parameters related to membership functions of in-
put-output variables of a FRBM
Fig. 4.23. A GA-string representing the rule base and automated rule development of a
FRBM
Fig. 4.24. A GA-string representing the rule base with automated rule development and the
parameters related to membership functions of input-output variables of a FRBM
Besides the question of how to construct/design the KB of an FRBM, the selection of the appropriate shapes of the fuzzy subsets/membership function distributions (MFDs) of both the input and output variables (in the case of a Mamdani-type FRBM), and the selection of the shapes of the fuzzy subsets of the input variables, the appropriate structure(s) of the rule output/consequent function(s), and the determination of optimal values of the coefficients and power terms of the rule consequent functions are important issues. In order to overcome these problems, a rigorous study with different choices is required to obtain a good model of a manufacturing process.
The GA is also used in genetic-fuzzy systems where TSK-type fuzzy logic rules (as defined in Section 4.2.11.2.1) are employed. The Genetic Linear Regression (GLR) approach [11] is one of the most popular approaches to designing the KB of such a system. It proceeds through the following steps:
• Step-I: Set an initial set (population) of values of the power terms of a given regression function at random
• Step-II: Evaluate the function coefficients based on the least-squares method
• Step-III: Check the fitness value (if satisfied, terminate the iteration procedure)
• Step-IV: Update the values of the power terms of the regression function using the GA operators
• Step-V: Repeat from Step-II
Fig. 4.25. Flow chart of genetic linear regression approach to construct KB of a FRBM
Fig. 4.26. A GA-string representing the rule base, parameters related to membership func-
tions of input variables and exponential parameters of rule consequent functions of a TSK-
type FRBM
The proposed GLR has the added facility of carrying out the task of tuning the MFDs of the input variables simultaneously in the same GA framework.
In order to determine the coefficients of the output functions of the TSK-type fuzzy rules, a general expression of a multiple linear regression system with the TSK-fuzzy model is derived as follows. Equation (4.11) may be rewritten, denoting ∏_{v=1}^{n} μ_v^r(x_{1,…,n}) = η_r for simplicity, in the following form:
Y = F(x1, …, xn)

  = [ η1 ( a_1^1 f_1^1(x_{1,..,n}) + … + a_K^1 f_K^1(x_{1,..,n}) ) + … + η_Rlf ( a_1^{Rlf} f_1^{Rlf}(x_{1,..,n}) + … + a_K^{Rlf} f_K^{Rlf}(x_{1,..,n}) ) ]
    / ( η1 + η2 + …… + η_Rlf )    (4.15)
Let us assume we have a set D of input-output tuples of S sample data, where the output y^(i) is assigned to the input (x_1^(i), x_2^(i), …, x_n^(i)). Now, the total quadratic error caused by the TSK-type FRBM with respect to the given data set is:

E = Σ_{l=1}^{S} ( F(x_1^(l), x_2^(l), …, x_n^(l)) − y^(l) )²    (4.16)
To determine the above parameters, we take the partial derivatives of E with respect to each parameter and require them to be zero, i.e., ∂E/∂a_j^r = 0, where j ∈ {1, 2, …, K} and r ∈ {1, 2, …, Rlf}.
Now, we obtain the partial derivative of E with respect to the parameter a_{tj}^{tr}:

∂E/∂a_{tj}^{tr} = Σ_{l=1}^{S} 2 · ( F(x_1^(l), …, x_n^(l)) − y^(l) ) · ∂F(x_1^(l), …, x_n^(l))/∂a_{tj}^{tr}

= 2 Σ_{l=1}^{S} [ ( Σ_{r=1}^{Rlf} η_r^l ( a_1^r f_1^r(x^l_{1,..,n}) + … + a_K^r f_K^r(x^l_{1,..,n}) ) / Σ_{r=1}^{Rlf} η_r^l ) − y^l ] · ( η_{tr}^l f_{tj}^{tr}(x^l_{1,..,n}) / Σ_{r=1}^{Rlf} η_r^l ) = 0    (4.17)
Thus, Equation (4.17) provides the following system of linear equations from which we can compute the coefficients $\{(a_1^1,\ldots,a_k^1), (a_1^2,\ldots,a_k^2), \ldots, (a_1^{R_f},\ldots,a_k^{R_f})\}$:

$$\sum_{l=1}^{S} \frac{\sum_{r=1}^{R_f} \left( \prod_{v=1}^{n} \mu_v^r(x_1,\ldots,x_n) \right) \left( a_1^r f_1^r + \cdots + a_k^r f_k^r \right) \prod_{v=1}^{n} \mu_v^{t_r}(x_1,\ldots,x_n)\, f_{t_j}^{t_r}(x_1,\ldots,x_n)}{\left( \sum_{r=1}^{R_f} \prod_{v=1}^{n} \mu_v^r(x_1,\ldots,x_n) \right)^2} = \sum_{l=1}^{S} \frac{y^{(l)} \prod_{v=1}^{n} \mu_v^{t_r}(x_1,\ldots,x_n)\, f_{t_j}^{t_r}(x_1,\ldots,x_n)}{\sum_{r=1}^{R_f} \prod_{v=1}^{n} \mu_v^r(x_1,\ldots,x_n)} \quad (4.18)$$
In matrix form, this yields, for each rule $r$:

$$\begin{bmatrix} \alpha_{11}^r & \alpha_{12}^r & \cdots & \alpha_{1K}^r \\ \alpha_{21}^r & \alpha_{22}^r & \cdots & \alpha_{2K}^r \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{K1}^r & \alpha_{K2}^r & \cdots & \alpha_{KK}^r \end{bmatrix} \begin{bmatrix} a_1^r \\ a_2^r \\ \vdots \\ a_K^r \end{bmatrix} = \begin{bmatrix} \beta_1^r \\ \beta_2^r \\ \vdots \\ \beta_K^r \end{bmatrix} \quad (4.19)$$

where $\alpha_{tj}^r = \sum_{l=1}^{S} f_j^r(x_1^l, x_2^l, \ldots, x_n^l)\, f_t^r(x_1^l, x_2^l, \ldots, x_n^l)$ and $\beta_t^r = \sum_{l=1}^{S} y^l f_t^r(x_1^l, x_2^l, \ldots, x_n^l)$.
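For a single rule r, the normal equations (4.19) reduce to solving a K×K linear system. The following is a minimal sketch, assuming the basis functions f_t^r have already been evaluated (with any rule weighting applied) into an S×K matrix; the function name is illustrative:

```python
import numpy as np

def solve_consequent_coeffs(F, y):
    """Solve the normal equations (4.19) for one rule r.

    F is an (S, K) matrix whose column t holds f_t^r evaluated at the
    S data points; y holds the S observed outputs.
    alpha[t, j] = sum_l f_j^r(x^l) * f_t^r(x^l),  beta[t] = sum_l y^l * f_t^r(x^l)
    """
    alpha = F.T @ F      # K x K matrix of the alpha_{tj}^r terms
    beta = F.T @ y       # K-vector of the beta_t^r terms
    return np.linalg.solve(alpha, beta)
```

Note that alpha = Fᵀ F and beta = Fᵀ y, so this is exactly the least-squares solution for the rule-consequent coefficients.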
Fig. 4.27. Comparison of performances of FRBMs (with different types of MFs) with those of experimental results: (a) surface roughness, (b) power requirement
In the above study, the GA was used to optimize the manually-defined KB of the FRBM. The manually-defined KB is designed based on the expert's knowledge of the process, which may not be complete. Sometimes, it becomes difficult to gather knowledge of the process beforehand. To overcome this difficulty, the method for automatic design of the fuzzy KB is adopted to model power requirement and surface roughness in the plunge grinding process [10]. Table 4.1 describes the comparative results of root mean square (RMS) percentage deviations exhibited by fuzzy rule-based models (FRBMs) from real experimental values. It is found that the approach of automatic design of the RB and tuning of the MFs simultaneously using a GA provides better results than the approach of tuning a manually constructed RB and MFs simultaneously using a GA. This happens because not all manually-designed fuzzy rules may be good, whereas the GA has the capability of finding good fuzzy rules through extensive search. Moreover, the main disadvantage of the latter approach (tuning a manually constructed RB and MFs) lies in the fact that the designer is required to have a thorough knowledge of the process to be controlled; thus, a considerable amount of time is spent on manual construction of the fuzzy RB. In the approach of automatic design of the RB and tuning of the MFs simultaneously using a GA, no effort is made to design the fuzzy rule base manually, and a good KB of the FRBM is designed automatically by a GA from a set of example (training) data.
Table 4.1. Comparison of RMS percentage deviations exhibited by fuzzy rule-based models from real experimental values

                          Mathematical   FRBM, tuned RB and      FRBM, automatic design of
                          model          MFs simultaneously      RB and tuning of MFs
                                         using GA (approach 2)   simultaneously using GA
                                                                 (approach 1)
Power requirement
RMS percentage error      31.51          8.13                    5.34
Surface roughness
RMS percentage error      16.44          10.22                   6.32
y = c1 · Vc^p1 + c2 · Fr^p2 + c3 · Lr^p3    (4.20)

where c1, c2 and c3 are the function coefficients and p1, p2 and p3 are the exponential parameters of the rule consequent function. Vc, Fr and Lr are the input variables: cutting speed, feed rate and rate of lubricant, respectively.
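As a simple illustration, the rule-consequent function of Eq. (4.20) can be evaluated directly. The coefficient and exponent values in the test below are hypothetical placeholders, not fitted values from the study:

```python
def consequent_output(Vc, Fr, Lr, c, p):
    """Rule-consequent function of Eq. (4.20):
    y = c1 * Vc^p1 + c2 * Fr^p2 + c3 * Lr^p3,
    where Vc is cutting speed, Fr feed rate and Lr rate of lubricant."""
    return c[0] * Vc ** p[0] + c[1] * Fr ** p[1] + c[2] * Lr ** p[2]
```

In the GLR scheme described earlier, the GA would search over (p1, p2, p3) while (c1, c2, c3) are fitted by least squares.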
A helical K10 drill (R415.5-0500-30) was manufactured according to DIN 6537 by Sandvik®. The drill has a point angle of 140°, a flute length of 28 mm, and is of 10% cobalt grade. The drills have a diameter of 5 mm and are coated with TiAlN. A Kistler® piezoelectric dynamometer 9272 with a load amplifier was used to acquire the torque and the feed force. Data acquisition was made through the piezoelectric dynamometer by interfacing the load amplifier to a PC via RS-232, using the appropriate software, Kistler® DynoWare. The surface roughness was evaluated (Ra, according to ISO 4287/1) with a Hommeltester T1000 profilometer.
Here, four different models for surface roughness and another four for the cutting power/specific cutting force requirement are developed. The four different FRBMs are constructed using two different types of fuzzy logic rules (Mamdani-type and TSK-type) and two different shapes of MFs.
The comparative results of surface roughness, cutting power and specific cutting force with different lubricant flow rates for different cutting speeds and feed rates are described in Figure 4.28, Figure 4.29 and Figure 4.30, respectively. In this study, nine different cases (of cutting speed and feed rate) are considered, based on which the effects of lubricant flow rate on machining performance are analyzed. In these figures, Model I indicates the FRBM with Mamdani-type FLR and 2nd-order polynomial MFs, Model II represents the FRBM with Mamdani-type FLR and trapezoidal MFs, Model III shows the FRBM with TSK-type FLR and 2nd-order polynomial MFs, and Model IV indicates the FRBM with TSK-type FLR and trapezoidal MFs. In the following subsections, the prediction performance of these models with respect to the effects of lubrication rate on surface roughness, cutting power and specific cutting force is discussed.
Surface roughness
In Figure 4.28, the predicted values of surface roughness by FRBMs are compared
with the experimental values for 9 cases (Figure 4.28(i) to Figure 4.28(ix)). It has
been observed that the performance of Model II is better than Model I for first 5
cases and case number 6. For other cases Model I outperforms over model II. But,
the consistencies of deviations of results from that of the experimental values are
not good for both the Model I and Model II in all the cases. In contrast, the results
of Model III shows better than Model I and Model II for some cases (Figure 4.28
(iv), (v) and (vii)), but for other cases that are deteriorated compared to Model I
and Model II. On the other hand it is noticed that the results of Model IV (FRBM
with TSK-type FLR and trapezoidal MFs) yield less error (deviation from the
Fig. 4.28. Comparative results of surface roughness with different lubricant flow rates for
different cutting speed and feed rate
experimental values) than the other models in the majority of the 9 cases. Furthermore, it is noticed that the results of Model IV are also consistent for different values of lubrication rate. The maximum percentage error exhibited by Model IV is 2.2428, which is well accepted in industrial practice.
By analyzing the experimental values as well as the results obtained by Model IV, it has been observed that the surface roughness is improved by increasing the flow rate for lower values of cutting speed (60 m/min) at constant feed rate. However, for higher values of cutting speed (75 and 90 m/min), the surface roughness deteriorates with increasing flow rate. In contrast, for a constant cutting speed, the rate of change of surface quality with flow rate is minimized as the feed rate increases.
GA-Fuzzy Approaches: Application to Modeling of Manufacturing Process 183
Cutting power
By analyzing the results of various models and experimental values as depicted in
Figure 4.29, it has been observed that Model I as well as Model II provides poor
results than other two models (Model III and Model IV). In contrast, it is found
that both the models, Model III and Model IV obtain the best performance for
predicting cutting power with the quantity of lubricant for a given cutting speed
and feed rate. However, in cases Vc=90; f=0.15 and Vc=90; f=0.25, the Model IV
shows better results than Model III.
Fig. 4.29. Comparative results of cutting power with different lubricant flow rates for dif-
ferent cutting speed and feed rate
By analyzing the experimental values as well as the results obtained by Model III and Model IV, it has been revealed that, for a fixed value of cutting speed and feed rate, the cutting power increases up to a certain value of lubrication flow rate; after that, the cutting power decreases with increasing flow rate. From Figure 4.29, it is found that for constant cutting speed, the cutting power requirement increases with feed rate. It is also observed that the value of cutting power increases with cutting speed when the feed rate is kept constant.
Specific cutting force
As like cutting power, here also Model III and Model IV show the best result in
predicting specific cutting force with the quantity of lubricant for a given cutting
speed and feed rate (Figure 4.30). This is because; both the cutting power and spe-
cific force are depended on the same parameter, torque and a linear relationship is
maintained among them. The variation of specific cutting force requirement with
lubricant flow rate and other input parameters, cutting power and feed rate exhibit
the same phenomena as found in cutting power.
Fig. 4.30. Comparative results of specific cutting force with different lubricant flow rates
for different cutting speed and feed rate
From the above discussion, it may be pointed out that FRBMs with TSK-type fuzzy logic rules provide the best results in predicting surface roughness, cutting power and specific cutting force. Specifically, for surface roughness, trapezoidal MFs are well suited, while trapezoidal as well as second-order polynomial MFs give almost similar performance in predicting the cutting power/specific cutting force requirements in drilling of Aluminium AA1050 with an emulsion of Microtrend 231L oil as lubricant. The above techniques may be adopted for developing FRBMs for other machining (drilling) performance parameters. Once a model is developed, it may be used on-line in a drilling machine to control the MQL according to the desired outputs.
References
[1] Groover, M.: Automation, Production Systems, and Computer-Integrated Manufacturing. Prentice-Hall Int'l, Upper Saddle River (2001)
[2] Kosko, B.: Neural Networks and Fuzzy Systems. Prentice-Hall, New Delhi (1994)
[3] Zadeh, L.A.: Fuzzy sets. Information and Control 8(3), 338–353 (1965)
[4] Mamdani, E.H., Assilian, S.: An experiment in linguistic synthesis with a fuzzy logic controller. International Journal of Man-Machine Studies 7(1), 1–13 (1975)
[5] Sugeno, M., Kang, G.T.: Structure identification of fuzzy model. Fuzzy Sets and Systems 28(1), 15–33 (1988)
[6] Tsukamoto, Y.: Fuzzy information theory. Daigaku Kyoiku Pub. (2004)
[7] Takagi, T., Sugeno, M.: Fuzzy identification of systems and its application to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics 15(1), 116–132 (1985)
[8] Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading (1989)
[9] Deb, K.: Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons Ltd, England (2001)
[10] Nandi, A.K., Pratihar, D.K.: Automatic Design of Fuzzy Logic Controller Using a Genetic Algorithm – to Predict Power Requirement and Surface Finish in Grinding. Journal of Materials Processing Technology 148(3), 288–300 (2004)
[11] Nandi, A.K.: TSK-Type FRBM using a combined LR and GA: surface roughness prediction in ultraprecision turning. Journal of Materials Processing Technology 178(1-3), 200–210 (2006)
[12] Chandrasekaran, M., Muralidhar, M., Krishna, C.M., Dixit, U.S.: Application of soft computing techniques in machining performance prediction and optimization: a literature review. Int. J. Advanced Manufacturing Technology 46, 445–464 (2010)
[13] Nandi, A.K., Pratihar, D.K.: Design of a Genetic-Fuzzy System to Predict Surface Finish and Power Requirement in Grinding. Fuzzy Sets and Systems 148(3), 487–504 (2004)
[14] Nandi, A.K., Davim, J.P.: A Study of drilling performances with Minimum Quantity of Lubricant using Fuzzy Logic Rules. Mechatronics 19(2), 218–232 (2009)
5 Single and Multi-objective Optimization Methodologies in CNC Machining
5.1 Introduction
A large number of techniques have been implemented in the field of manufacturing parameter optimization, and others are considered candidate optimization tools. Optimization problems include product design, process engineering, quality control, process planning and different machining operations, whether numerically or conventionally controlled. Moreover, optimization has also been investigated in other highly sophisticated and/or non-conventional manufacturing processes, such as electro-discharge machining (EDM) or abrasive water jet machining (AWJM) [1, 2].
188 N. Fountas et al.
The domain Rⁿ of f is the search space. Each parameter vector of this domain is a candidate solution in the search space, with x̂ being the optimal solution.
The value n represents the number of dimensions of the search domain and hence,
the number of parameters involved in the optimization problem. The function f is
the objective (or fitness) function that maps the search space to the function space.
If the objective function has one output, then the respective function space is one-
dimensional and thus provides a single fitness value for each set of parameters.
This single fitness value specifies the optimality of the parameter set for the
desired task. Usually, the function space can be directly mapped to the fitness
space. However, the distinction between function space and fitness space is
important in the case of multi-objective optimization problems, which include
several objective functions drawing input from the same independent variable
space [7, 8].
For a known differentiable function f, calculus may easily provide local and global optima of f. However, in machining problems this objective function f is not known. When addressing such problems, the objective function is treated as a "black box": only input and output parameter values are obtained. The result of a candidate solution evaluation is the solution's fitness. The final goal is then to point out parameter values that maximize or minimize this fitness function [7].
In order to suitably approach the optimization of a given problem, the components of candidate solutions should be subject to certain constraints. In manufacturing environments, such constraints may be the maximum available process values and limitations of the involved machine tools and equipment, product quality demands, minimum manufacturing cost, etc. This implies that constraints can be either economical or technological, depending on the nature of the optimization problem. Specifically, in machining process optimization, constraints are both economical and technological. Examples of technological constraints are the maximum available motor power of a CNC machine tool, its maximum capacity in terms of cutting force load, the heat generated during the machining process, the maximum available torque and the maximum range of feeds and speeds, etc.; see [9] for details.
F ( x ) = g ( f (x )) (4)
where
f : Objective Function
g: Transformation of the objective function values to nonnegative
F: Resulting Fitness Function.
In CNC machining applications, objective functions vary as far as process
characteristics and constraints are concerned. During different machining stages,
objective functions tend to optimize attributes referring either to productivity or
quality or even both. Posed process constraints are also chosen depending on the
studied machining stage, namely roughing, semi-finishing or finishing stage.
In most cases, objective functions of machining processes are discontinuous and non-differentiable. Stochastic optimization methodologies can be applied for quality objective optimization, since they are insensitive to discontinuities and do not need derivative information to converge.
Quality objectives, usually optimized by suitably developed objective functions, are determined for each machining stage. Considering that in most cases a machining process follows a two-stage scheme (roughing and finishing), the following quality objectives are specified:
- Roughing Operations:
During rough machining operations, the primary target is to rapidly remove material from the raw stock until the roughed part geometry is close to its final shape. Thus, quality characteristics are mainly related to productivity and time. Targets such as high material removal rate, minimum roughing time and minimum remaining volume for the finishing process to take over are the most common optimization attributes.
- Finishing Operations:
During finish machining operations, the primary goal is to achieve the final specifications of the part geometry in terms of surface quality (low surface roughness), dimensional accuracy and geometrical features within specified tolerances, with regard to the 3D target model, the blueprints and the engineering drawings. As a matter of fact, quality targets like cutting forces, surface roughness and machining time are the quality objectives to be minimized.
Single and Multi-objective Optimization Methodologies in CNC Machining 191
Fig. 5.1. Feed-forward multi-layer perceptron (MLP) with two hidden layers.
divided into three subsets: the training subset, which is used for parameter fitting (learning); the validation subset, which is used for network architecture tuning; and the test subset, which is used for assessing the generalization ability of a trained network. In the literature, the use of validation and test sets is often reversed [24, 25]. Finding an ANN that performs optimally on new cases, while not merely memorizing the already known cases with which it was trained, means that its performance is measured by an error function (e.g. mean square error, total absolute error, etc.) when unknown –independent– data is presented to the network [24]. The validation set consists of these new cases. However, ANN efficiency is measured by a third –test– set, since the validation procedure may lead to ANN over-fitting (data memorizing).
begin
Initialization
repeat
Roulette Wheel Selection
Crossover
Mutation
Evaluation
until Termination_condition = True
end.
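The pseudocode above can be fleshed out into a runnable sketch. This is a generic bit-string GA with roulette wheel selection, one-point crossover, and bit-flip mutation; all parameter values and names are illustrative assumptions, not settings from the text:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    # Initialization: a random population of bit strings
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]           # Evaluation
        total = float(sum(scores))

        def roulette():                                  # Roulette Wheel Selection
            pick, acc = rng.uniform(0, total), 0.0
            for ind, s in zip(pop, scores):
                acc += s
                if acc >= pick:
                    return ind
            return pop[-1]

        children = []
        while len(children) < pop_size:
            p1, p2 = roulette()[:], roulette()[:]
            if rng.random() < p_cross:                   # one-point Crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                       # Mutation: bit flips
                children.append([1 - g if rng.random() < p_mut else g
                                 for g in child])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)            # track the best seen
    return best
```

Run on a toy fitness such as the number of ones in the string, the loop steadily accumulates fitter individuals, exactly as the pseudocode's repeat/until structure suggests.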
5.4.1.1 Encoding
When an optimization problem is solved with GAs, the solution space should be
encoded into a string space. In other words, encoding means to map variables
from the solution space into a finite-length string space. Good encoding schemes
are thus required, so as to efficiently solve an optimization problem. Several
encoding methods have been proposed so far; see [33-35]. The most important
issue during encoding is to cover all the solution space with the mapped string
space without redundancy. Consequently, the phenotype of the string space should be equal to the problem solution space, in order to make the problem simpler. What is more, the string space should be generated as a set of feasible candidate solutions, in order to avoid unnecessary search by the algorithm.
Genetic operators (see Section 4.1.2), however, directly work on the genotype
(often called “chromosome”) of GAs. Therefore, the performance of the genetic
search highly depends on symbolic operations for the genotype. GAs perform
better when substrings have consistent benefit throughout the string space. This is
based on the concept that an encoding method is deeply involved in crossover
operators. The neighborhood of a candidate solution in the solution space should
be similar to the neighborhood of the respective string in the string space. If the
offspring generated by a crossover operator is not similar to its parents, the
population would proceed towards a different evolutionary direction and genetic
search would probably result in failure. On the contrary, if good offspring is
generated, genetic search results in success. When relationships between encoding
methods and genetic operators are taken into account, two principles of the
encoding rule are applied [31]:
5.4.1.2 Selection
and has strong elitist behavior. Several selection schemes have been proposed in
order to prevent a population from genetic drift.
Selection schemes are classified into two main categories: the proportional selection scheme and the competition selection scheme. In the first category, selection is based on the fitness value of an individual compared to the total fitness value of the overall population. In the second category, selection is based on the fitness values of some other individuals. Some of the most commonly used selection schemes are presented next [31]:
1. Roulette wheel selection scheme (or Monte Carlo method): This scheme selects an individual with probability proportional to its fitness value.
2. Elitist selection scheme: This scheme preserves the fittest individual through every generation t; that is, the fittest individual is certainly selected into the next generation prior to others.
3. Tournament selection scheme: The individual with the highest fitness value among m randomly pre-selected individuals is selected. Note that m is the number of competing members.
4. Ranking selection scheme: This scheme is based on the rank of the individual's fitness value. The number of times an individual is selected for reproduction into the next generation is based on a predefined ranking table.
5. Expected value selection scheme: This scheme is based on the expected value of the individual's fitness. The expected value of an individual is calculated according to a respective probability; then, the probability of the selected individual is decreased by 0.5. Thus, this selection scheme largely prevents an individual from being selected more than twice in the population.
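Schemes 1 and 3 above are straightforward to implement. A minimal sketch (function names are illustrative, and fitness values are assumed non-negative for the roulette wheel):

```python
import random

def tournament_select(pop, fitness, m, rng=random):
    """Tournament selection: the fittest of m randomly pre-selected
    individuals wins, where m is the number of competing members."""
    return max(rng.sample(pop, m), key=fitness)

def roulette_select(pop, fitness, rng=random):
    """Roulette wheel selection: an individual is chosen with probability
    proportional to its fitness value."""
    weights = [fitness(ind) for ind in pop]
    return rng.choices(pop, weights=weights, k=1)[0]
```

Larger m increases the selection pressure of the tournament; m equal to the population size degenerates into always picking the fittest individual.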
5.4.1.3.1 Crossover
The crossover operator generates new individuals as solution candidates in GAs [31]. GAs search the solution space mainly by using one of the crossover operators; in the absence of crossover operators, GAs would be random search algorithms. The crossover operator exchanges substrings between two individuals and replaces old individuals with others of a new genotype. The recombination between two strings is performed according to the type of crossover operator. Depending on the number of break-points among individuals, the crossover mechanism recombines the strings of the genotype.
First, the starting point on the string is chosen; here, locus 2 of parent P1 is selected, and a closed round of substring copying begins from this starting point. The characters 1 and 4 at locus 2 of P1 and P2 are copied to the same locus of the offspring, respectively:
O1: *1***
O2: *4***
Next, the character at the locus of P1 where the character 4 of locus 2 of P2 occurs is copied, as follows:
O1: *1*4*
O2: *4***
Finally, the remaining characters are filled in from the other parent. The complete offspring become:
O1: 21345
O2: 34251
5.4.1.3.2 Mutation
Mutation occurs as the replicating error in nature. In GAs, the mutation operator
replaces a randomly selected character on the string with the other one [31].
Mutation is performed regardless of individual fitness values. A classic mutation
operation is the one-point changing per individual. Several mutation types are
occurred in nature such as inversion, translocation and duplication. These are also
the mutating mechanisms applied to GAs to simulate these phenomena.
Example:
Assume a string of 5 characters. Two points, at locus 2 and locus 4, are selected:
1|234|5
Example:
Assume a string of 5 characters. The segment from locus 1 to locus 2 is chosen as a substring:
|12|345
Example:
Assume a string of 5 characters. The segment from locus 1 to locus 2 is chosen as a substring:
|12|345
The substring is written over loci 4 and 5, and the remaining part of the string is not altered:
12312
However, the individual generated above does not satisfy the constraint for permutation problems. Therefore, duplication is not suitable to address this problem.
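The point mutation, inversion, translocation and duplication operations described above can be sketched on character strings as follows. This is a hypothetical illustration; note that the locus arguments here are 0-based, unlike the 1-based loci used in the examples:

```python
import random

def point_mutation(s, alphabet, rng=random):
    """Replace a randomly selected character with a different one."""
    i = rng.randrange(len(s))
    return s[:i] + rng.choice([c for c in alphabet if c != s[i]]) + s[i + 1:]

def inversion(s, i, j):
    """Reverse the segment between two break points: 1|234|5 -> 14325."""
    return s[:i] + s[i:j][::-1] + s[j:]

def translocation(s, i, j, k):
    """Cut the substring s[i:j] and reinsert it at position k."""
    seg, rest = s[i:j], s[:i] + s[j:]
    return rest[:k] + seg + rest[k:]

def duplication(s, i, j, k):
    """Overwrite the string from position k with the substring s[i:j],
    leaving the remaining characters unaltered: |12|345 -> 12312."""
    seg = s[i:j]
    return (s[:k] + seg)[:len(s)]
```

As the text notes, duplication (and, for permutation problems, point mutation) can produce strings with repeated characters that violate permutation constraints.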
Single and Multi-objective Optimization Methodologies in CNC Machining 203
1. Delete Least Fit: Deletion of the least fit individual from the population.
2. Exponential Ranking: The worst individual has a probability p of being deleted. If it is not selected, then the next-to-last also has a probability p of being deleted, and so on.
3. Reverse Fitness: Each individual has a probability of being deleted according to its fitness value.
finding global optima rapidly, in other words augmenting both exploration and
exploitation abilities of the algorithm. In the same sense, other knowledge-based
systems can be implemented in EAs, such as Expert Systems.
5.4.2.2.1 Parallelism
Parallelism of GAs has been proposed by several researchers; see for example [38, 39]. Parallelism can be divided into two types. The first type works with a population divided into several sub-populations; the genetic operators within a sub-population prevent local minima from widely propagating to other sub-populations. The second type facilitates rapid computation by working on parallel computer systems. Both types may be realized simultaneously. A parallel genetic algorithm where the individuals of a population are placed on a planar grid was proposed in [38]. Both selection and crossover are limited to operate on individuals in neighbourhoods on that grid. During the next generation, individuals from a specific location are selected, and an individual of the old population is replaced with the selected one. Crossover is performed by mating individuals from the same neighbourhood.
The main difference from the basic GA is the selection-by-replacement operator: each individual is replaced with a selected individual of higher fitness value.
The pseudocode of this parallel GA is:
begin
Initialization
repeat
Selection by Replacement
Crossover
Mutation
Evaluation
until Termination_condition = True
end.
5.4.2.2.2 Migration
In parallel GAs, genetic operations are performed locally and independently in the divided subpopulations. Each subpopulation tries to locate good local minima. New offspring are generated by genetic operators within each subpopulation; therefore, each subpopulation evolves in a different direction, much like a hill-climbing algorithm. After some generations, the best solutions in a locus (subpopulation) are propagated to neighbouring subpopulations. This is called migration.
During migration, the best individual in a generation is sent to its neighbours as a migrating individual. The migration frequency is an essential parameter for the efficient performance of this special operation. If migration occurs with high probability, the group of sub-populations works equivalently to a single population. On the contrary, if no migration appears for a number of generations, the sub-populations only perform local hill climbing.
The division of a population into sub-populations prevents premature convergence to local optima. GAs consisting of sub-populations are often run on parallel computer systems with multiple processors, since each subpopulation can easily be assigned to a different processor.
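An island-model parallel GA with periodic migration can be sketched as follows. For brevity, the local genetic operators are replaced by a simple mutate-and-keep-if-better step, so each island behaves like the hill climber described above; all names and parameter values are illustrative assumptions:

```python
import random

def local_step(subpop, fitness, rng):
    # One generation of local search within a subpopulation: mutate each
    # individual and keep the better of parent and child (a hill-climbing
    # stand-in for the full set of genetic operators).
    return [max(ind, [g + rng.gauss(0.0, 0.1) for g in ind], key=fitness)
            for ind in subpop]

def island_ga(fitness, n_islands=4, island_size=10, dims=3,
              generations=50, migration_interval=5, seed=2):
    rng = random.Random(seed)
    islands = [[[rng.uniform(-1.0, 1.0) for _ in range(dims)]
                for _ in range(island_size)] for _ in range(n_islands)]
    for gen in range(1, generations + 1):
        islands = [local_step(isl, fitness, rng) for isl in islands]
        if gen % migration_interval == 0:
            # Migration: the best individual of each island replaces the
            # worst individual of its neighbouring island.
            bests = [max(isl, key=fitness) for isl in islands]
            for i, isl in enumerate(islands):
                worst = min(range(island_size), key=lambda j: fitness(isl[j]))
                isl[worst] = bests[(i - 1) % n_islands]
    return max((ind for isl in islands for ind in isl), key=fitness)
```

The `migration_interval` parameter plays the role of the migration frequency discussed above: a very small interval makes the islands behave like a single population, while a very large one leaves each island to hill-climb in isolation.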
solution). Further on, PSO algorithm maintains the best solution in the group of
particles until the optimization procedure terminates. This best solution is known
as the global best position or global best candidate solution [41].
There are three major steps in the PSO algorithm development. These steps are
repeated until termination/convergence criteria are met:
of the two components causes each particle in the swarm to move in a semi-random manner, greatly influenced by the directions of the particle's individual best solution and the swarm's global best solution.
The velocity clamping technique keeps the particles from moving too far beyond the search domain. This is achieved by limiting the maximum velocity of each particle [7]. If [-xmax, xmax] is a given search space, then velocity clamping limits the velocity to the range [-vmax, vmax], where vmax = k · xmax. The value k is a user-defined parameter, the velocity clamping factor, taking values in the range 0.1 ≤ k ≤ 1.0. It has been noticed that in many optimization problems the search space is not centered around 0 and, hence, the range [-xmax, xmax] is not an adequate definition of the search domain. For such problems, one may define vmax = k · (xmax - xmin)/2. After the calculation of the particle velocities, the positions are updated by applying the new velocities to the particles' previous positions. Finally,
xi (t + 1) = xi (t ) + vi (t + 1) (7)
This procedure is repeated until some stopping criteria are met. Some common
stopping conditions include a specific number of iterations of the PSO algorithm,
a number of iterations since the last update of the global best candidate solution,
or a predetermined fitness value of a quality target.
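The velocity clamping scheme and the position update of Eq. (7) can be sketched together in a minimal PSO routine. The inertia weight and acceleration coefficients below are common illustrative values, not prescriptions from the text:

```python
import random

def pso(fitness, dims=2, n_particles=20, iterations=100,
        x_min=-5.0, x_max=5.0, w=0.7, c1=1.5, c2=1.5, k=0.5, seed=3):
    rng = random.Random(seed)
    v_max = k * (x_max - x_min) / 2.0        # velocity clamping bound
    X = [[rng.uniform(x_min, x_max) for _ in range(dims)]
         for _ in range(n_particles)]
    V = [[0.0] * dims for _ in range(n_particles)]
    pbest = [x[:] for x in X]                # personal best positions
    gbest = max(pbest, key=fitness)          # global best position
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dims):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-v_max, min(v_max, V[i][d]))  # clamp velocity
                X[i][d] += V[i][d]           # Eq. (7): position update
            if fitness(X[i]) > fitness(pbest[i]):
                pbest[i] = X[i][:]
        gbest = max(pbest, key=fitness)
    return gbest
```

Here the stopping condition is simply a fixed number of iterations; the other criteria mentioned in the text (stagnation of the global best, or reaching a target fitness) would replace the `for` loop with a `while` loop over those tests.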
Davidson and Harell [47] summarize the parameters that are needed in the implementation of the SA algorithm. These parameters are listed below.
The basic simulated annealing algorithm follows the 18 steps described in [48].
5.5.5 Tribes
A tribe is a sub-swarm formed by particles which have the property that all particles inform all others belonging to the tribe (a symmetrical clique in graph-theoretical language). The concept is therefore related to "cultural vicinity" (the information neighborhood) and not to "spatial vicinity" (the parameter-space neighborhood). It should be noted that, due to this definition, the set of informers of a particle (its so-called i-group) contains the whole tribe but is not limited to it [54]. Note that the Tribes mechanism can also be auto-parameterized.
5.6 Conclusions
Machining processes, especially detailed CNC machining of high-quality parts, involve a large number of process parameters. Classical optimization schemes often fail to produce global optima owing to the enormous calculation load. Thus, stochastic optimization methodologies offer great advantages in this field. This is mainly due to the probabilistic nature of their operators, which do not need any derivative information of – in most cases – unknown functions, while they have proven very efficient in searching the solution space. Artificial Neural Networks are a relatively simple tool for predicting machining process values after proper training, or even for classifying solutions. They can be utilized either as stand-alone methods for high-speed calculations or as internal functions in other optimization algorithms.
Genetic and Evolutionary Algorithms are very powerful optimization tools that perform in only a fraction of the calculation time of classical methods. Provided that a machining problem is well defined, as far as independent and dependent variables are concerned, and appropriately constrained, they yield optimal results at low calculation cost. This chapter gives an overview of these algorithms, as well as related methods such as Simulated Annealing, Tabu Search, Particle Swarm Optimization, Ant Colony Optimization, etc. Researchers have implemented these methods in many scientific fields, including CNC machining, providing users with optimal results in varying calculation times.
Owing to the large number and the different types of process parameters, machining optimization problems are highly sophisticated, especially when 5-axis CNC sculptured surface milling is involved. This calls for more complex optimization schemes, which are called "hybrids". Hybridization of stochastic algorithms is the procedure of blending together methods of Artificial Intelligence, or lending operators from one algorithm to another. To that extent, Game Theory helps in declaring interactions among operators, gives them priorities, and defines the scope of each part of the algorithm. Overall, producing new hybrids, or improving existing ones, is an open field that can push the optimization algorithm domain to a whole new level.
References
[1] Vaxevanidis, N.M., Markopoulos, A., Petropoulos, G.: Artificial neural network
modelling of surface quality characteristics in abrasive water jet machining of
TRIP steel sheet. In: Paulo Davim, J. (ed.) Artificial Intelligence in
Manufacturing Research, ch. 5, pp. 79–99. Nova Publishers (2010)
[2] Rao, R.V.: Advanced Modeling and Optimization of Manufacturing Processes.
Springer, London (2011)
[3] Petropoulos, P.G.: Optimal selection of machining rate variable by geometric
programming. International Journal of Production Research 11, 305–314 (1973)
[4] Sönmez, A.İ., Baykasoğlu, A., Dereli, T., Filiz, İ.H.: Dynamic optimization of multi-
pass milling operations via geometric programming. International Journal of Machine
Tools and Manufacture 39, 297–320 (1999)
[5] Kiliç, S.E., Cogun, C., Şen, D.T.: Short Note: A computer-aided graphical technique
for the optimization of machining conditions. Computers in Industry 22, 319–326
(1993)
[6] Diwekar, U.: Introduction to Applied Optimization, 2nd edn. Springer (2008)
[7] Kennedy, J., Eberhart, R., Shi, Y.: Swarm Intelligence. Elsevier, Burlington (2001)
[8] Zitzler, E., Laumanns, M., Bleuler, S.: A tutorial on evolutionary multi-objective
optimization. In: Metaheuristics for Multiobjective Optimisation, pp. 3–37. Springer
(2004)
[9] Dixit, P.M., Dixit, U.S.: Modeling of Metal Forming and Machining Processes by
Finite Element and Soft Computing Methods. Springer, London (2008)
[10] Melin, P., Castillo, O.: Hybrid Intelligent Systems for Pattern Recognition Using Soft
Computing. Springer, Berlin (2005)
[11] De Jong, K.A., Spears, W.M.: A formal analysis of the role of multi-point crossover
in genetic algorithms. Annals of Mathematics and Artificial Intelligence 5(1), 1–26
(1992)
[12] Haykin, S.: Neural networks, a comprehensive foundation. Prentice-Hall, Englewood
Cliffs (1999)
[13] Bertsekas, D.P., Tsitsiklis, J.N.: Neuro-Dynamic Programming. Athena Scientific,
Belmont (1996)
[14] Fletcher, R.: Practical Methods of Optimization. Wiley, NY (1987)
[15] Gill, P.E., Murray, W., Wright, M.H.: Practical Optimization. Academic Press,
London (1981)
[16] Levenberg, K.: A method for the solution of certain problems in least squares.
Quarterly of Applied Mathematics 2, 164–168 (1944)
[17] Marquardt, D.: An algorithm for least-squares estimation of nonlinear parameters.
SIAM Journal of Applied Mathematics 11, 431–441 (1963)
[18] Masters, T.: Advanced Algorithms for Neural Networks: A C++ Sourcebook. John
Wiley and Sons, NY (1995)
[19] Jain, A.K., Mao, J., Mohiuddin, K.M.: Artificial neural networks: a tutorial. IEEE
Computer 29(3), 31–44 (1996)
216 N. Fountas et al.
[20] Valiant, L.: Functionality in Neural Nets. In: Proceedings of the American
Association for Artificial Intelligence, St. Paul, Minnesota, August 21-26, vol. 2, pp.
629–634 (1988)
[21] Siegelmann, H.T., Sontag, E.D.: Turing Computability with Neural Networks.
Applied Mathematics Letters 4, 77–80 (1999)
[22] Orponen, P.: An overview of the computational power of recurrent neural networks.
In: Proceedings of the 9th Finnish AI Conference - STeP 2000 (2000),
http://www.math.jyu.fi/~orponen/papers/rnncomp.ps
[23] Sima, J., Orponen, P.: Computing with continuous-time Liapunov systems. In:
Proceedings of the 33rd Annual ACM Symposium on Theory of Computing - STOC
2001, Heraklion, Crete, Greece, July 06 - 08, pp. 722–731 (2001)
[24] Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press,
Oxford (1995)
[25] Ripley, B.D.: Pattern Recognition and Neural Networks. Cambridge University Press,
Cambridge (1996)
[26] Montgomery, D.C.: Design and analysis of experiments, 5th edn. John Wiley and
Sons, USA (2001)
[27] Chen, S.-L., Chang, C.-C., Chang, C.-H.: Application of a neural network for
improving the quality of five-axis machining. Proceedings of the Institution of
Mechanical Engineers, Part B: Journal of Engineering Manufacture 214(1), 47–59
(2000)
[28] Karpat, Y., Özel, T.: Multi-objective optimization for turning processes using neural
network modeling and dynamic-neighborhood particle swarm optimization.
International Journal of Advanced Manufacturing Technology 35(3-4), 234–247
(2007)
[29] Davim, J.P., Gaitonde, V.N., Karnik, S.R.: Investigations into the effect of cutting
conditions on surface roughness in turning of free machining steel by ANN models.
Journal of Materials Processing Technology 205, 16–23 (2008)
[30] Pontes, F.J., Ferreira, J.R., Silva, M.B., Paiva, A.P., Balestrassi, P.P.: Artificial neural
networks for machining processes surface roughness modeling. International Journal
of Advanced Manufacturing Technology 49(9-12), 879–902 (2010)
[31] Goldberg, D.E.: Genetic Algorithms in Search. Addison-Wesley, Reading (1989)
[32] Holland, J.: Adaptation in Natural and Artificial Systems, 2nd edn. The MIT Press,
Massachusetts (1992)
[33] Fogel, D.B.: Phenotype, Genotype and Operators in Evolutionary Computation. In:
The IEEE International Conference on Evolutionary Computation, pp. 193–198 (1995)
[34] Hinterding, R.: Mapping, Order-independent Genes and the Knapsack Problem. In:
Proceedings of the 1st IEEE Conference on Evolutionary Computing, vol. 1, pp. 13–
17 (1994)
[35] Tamaki, H., Kita, H., Shimizu, N., Maekawa, K., Nishikawa, Y.: A Comparison
Study of Genetic Codings for the Travelling Salesman Problem. In: The 1st IEEE
Conference on Evolutionary Computing, Florida, vol. 1, pp. 1–6 (1994)
[36] Bui, T.N., Moon, B.: A New Genetic Approach for the Travelling Salesman Problem.
In: Proceeding of the 1st IEEE Conference on Evolutionary Computing, vol. 1, pp. 7–
12 (1994)
[37] Syswerda, G.: A Study of Reproduction in Generational and Steady-State Genetic
Algorithms. In: Rawlins, G.J.E. (ed.) Foundations of Genetic Algorithms, pp. 94–101.
Morgan Kaufmann Publishers, San Mateo (1991)
[38] Manderick, B., Spiessens, P.: Fine-grained parallel Genetic Algorithms. In: The 4th
International Conference on Genetic Algorithms, Virginia, pp. 428–433 (1991)
[39] Wang, Z.G., Rahman, M., Wong, Y.S., Sun, J.: Optimization of multi-pass milling
using parallel genetic algorithm and parallel genetic simulated annealing.
International Journal of Machine Tools and Manufacture 45(15), 1726–1734 (2005)
[40] Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the IEEE
International Conference on Neural Networks, IV, pp. 1942–1948 (1995)
[41] Poli, R., Kennedy, J., Blackwell, T.: Particle swarm optimization - An overview.
Swarm Intelligence 1(1), 33–57 (2007)
[42] Shi, Y., Eberhart, R.: A modified particle swarm optimizer. In: Proceedings of the
IEEE International Conference on Evolutionary Computation, pp. 69–73 (1998)
[43] Kirkpatrick, S., Gelatt Jr., C., Vecchi, M.: Optimization by Simulated Annealing.
Science 220, 671–680 (1983)
[44] Fleischer, M.: Simulated Annealing: Past, Present, and Future. In: Alexopoulos, C.,
Kang, K., Lilegdon, W., Goldsman, G. (eds.) Proceedings of the Winter Simulation
Conference, pp. 155–161. ACM (1995)
[45] Ryan, C.: Evolutionary Algorithms and Metaheuristics. In: Meyers, R.A. (ed.)
Encyclopedia of Physical Science and Technology, 3rd edn., pp. 673–685. Elsevier
(2001)
[46] Dréo, J., Pétrowski, A., Siarry, P., Taillard, E.: Metaheuristics for Hard Optimization:
Methods and Case Studies. Springer, Berlin (2006)
[47] Davidson, R., Harel, D.: Drawing Graphs Nicely Using Simulated Annealing. ACM
Transactions on Graphics 15(4), 301–331 (1996)
[48] Michalewicz, Z., Fogel, D.: How to Solve It: Modern Heuristics, 2nd edn. Springer,
Berlin (2004)
[49] Glover, F.: Tabu Search, Part I. ORSA Journal on Computing 1(3), 190–206 (1989)
[50] Glover, F.: Tabu Search, Part II. ORSA Journal on Computing 2(1), 4–32 (1990)
[51] Dorigo, M., Stützle, T.: The ant colony optimization metaheuristic: Algorithms,
applications and advances. In: Glover, F., Kochenberger, G. (eds.) Handbook of
Metaheuristics, Kluwer Academic Publishers (2002)
[52] Dorigo, M., Blum, C.: Ant Colony Optimization Theory: A survey. Journal of
Theoretical Computer Science 344, 243–278 (2005)
[53] Maniezzo, V.: Exact and approximate nondeterministic tree-search procedures for the
quadratic assignment problem. INFORMS Journal on Computing 11(4), 358–369
(1999)
[54] Dos Santos Coelho, L., Alotto, P.: Tribes Optimization Algorithm Applied to the
Loney’s Solenoid. IEEE Transactions on Magnetics 45(3), 1526–1529 (2009)
[55] Chen, K., Li, T., Cao, T.: Tribe-PSO: A novel global optimization algorithm and its
application in molecular docking. Chemometrics and Intelligent Laboratory
Systems 82(1-2), 248–259 (2006)
[56] Cooren, Y., Clerc, M., Siarry, P.: Tribes-A parameter-free particle swarm
optimization. In: The 7th EU/Meet, Adaptation, Self-Adaptation, Multi-Level
Metaheuristics, Paris, France (2006)
[57] Krimpenis, A., Vosniakos, G.C.: Optimisation of roughing strategy for sculptured
surface machining using genetic algorithms and neural networks. In: CD Proceedings
8th International Conference on Production Engineering, Design and Control,
Alexandria, Egypt, December 27-29 (2004), paper ID: MCH-05
[58] Mahfoud, S.W., Goldberg, D.E.: Parallel recombinative simulated annealing: A
genetic algorithm. Parallel Computing 21(1), 1–28 (1995)
Numerical Simulation and Prediction of Wrinkling Defects in Sheet Metal Forming

M.P. Henriques, T.J. Grilo, R.J. Alves de Sousa, and R.A.F. Valente
The goal of the present work is to analyse distinct numerical simulation strategies,
based on the Finite Element Method (FEM), aiming at the description of wrinkling
initiation and propagation during sheet metal forming. From the FEM standpoint,
the study focuses on two particular aspects: a) the influence of a given finite
element formulation, as well as the choice of numerical integration, on the correct
prediction of wrinkling in the walls and flange zones of cup-drawn parts; and b)
the influence of the chosen anisotropic constitutive model, and its corresponding
parameters, on the correct prediction of wrinkling deformation modes and their
propagation during forming operations. In this sense, the work assesses the
influence of accounting for distinct planar anisotropy behaviours within numerical
simulation procedures. Free- and flange-forming examples will be taken into
consideration, with isotropic and anisotropic material models. Additionally, the
influence of different numerical formulations on wrinkling onset and propagation
will be assessed for shell and three-dimensional continuum finite elements, along
with implicit numerical solution procedures. In doing so, the present work intends
to provide some insights into how numerical simulation parameters and modelling
decisions can influence FEM results regarding wrinkling defects in sheet metal
formed parts.
6.1 Introduction
Over the last few years, researchers have paid special attention to wrinkling defects
in products coming from sheet metal-forming processes, particularly from the point
of view of the numerical simulation and prediction of such problems in metallic
parts. Nevertheless, when compared to other common defects in plastically formed
220 M.P. Henriques et al.
the metal grading and the type of dies employed. Even when relying on experimental
data, it turns out to be difficult to avoid wrinkles in metal-formed parts,
particularly for some materials used in the automotive industry, and therefore the
use of a blank-holder is important.
A study on variable blank-holder force, and its influence on wrinkling onset,
was carried out, for instance, in reference [24]. Experimental results from two
different binder shapes (a flat and a cone-shaped one) were compared with results
coming from the finite element method and verified. Although further analyses
were performed in the same study, only rigid shell modelling was used for the
numerical simulation models of the deformable sheet. In this study a box specimen
was the chosen shape, which, in contrast to cylindrical and conical cups,
required different amounts of material to flow in different regions of the binder.
To simulate the binder in the variable blank-holder force forming process, three
models were built: a flat-shaped binder with elastic solid elements, a cone-shaped
binder also with elastic solid elements, and a simplified segment binder with
rigid shell elements. The cone-shaped binder was shown to provide better control
over the normal stress distribution, with better formability of the blank in
terms of thickness and major strain distribution [24].
Morovvati et al. [25] investigated the plastic wrinkling of a circular two-layer
blank to obtain the minimum blank-holder force required to avoid wrinkles, theo-
retically calculated by means of the energy method. The blank-holder force was de-
pendent on the material properties and geometry, and the study concluded that a
lower (a/b) ratio (where a is the punch plus die edge radius, while b is the blank ra-
dius) tends to increase the minimum blank-holder force required to avoid wrinkles.
As a consequence, for a certain blank diameter, an increase of the punch diameter
will decrease the blank-holder force. The study also concluded that the resultant
yield stress of the circular two-layer blank was a function of the yield stress of the
blank’s components. Additionally, the effect of anisotropy was investigated for three
different cases: a) the same material orientation for the two layers; b) a 45º differ-
ence in direction for the two materials’ orientation; and c) a 90º difference in that di-
rection. The results showed that, for the materials used, the minimum blank-holder
force required was reduced by about 20% from case a) to case c).
An investigation carried out by Stoughton and Yoon [26] tried to find an efficient
method for analysis of necking and fracture limits for sheet metals, combining a
model for the necking limit with fracture limits in the principal stress space by apply-
ing a stress-based forming limit curve and the maximum shear stress criterion. The
fracture model studied was applied on the opening process of a food can. Previous
studies in this area demonstrated that stress-based forming limit curves calculated
directly from the strain-based forming limit curve are substantially less sensitive
to changes in strain path that normally happen in forming processes in industry. A new
failure model was presented in that reference, taking into consideration the stress dis-
tribution through thickness direction and localised necking prior to fracture, being
also capable of distinguishing forming processes where fracture could occur without
necking. Also recently, a new method to test formability, and evaluating various
modes of deformation, was investigated by Oh et al. [27]. The test consisted of three
steps: drawing a blank-holder force vs. punch stroke diagram, measuring the strain
Numerical Simulation and Prediction of Wrinkling Defects in Sheet Metal Forming 223
level at the optimum condition on the stamped part and, finally, grading the test mate-
rials using a formability index. A new tool shape was designed and numerically simu-
lated to optimise the dimensional details. The blank-holder force vs. punch stroke
diagram had three failure loci to better evaluate the formability of the new tool and
also to help find the optimum process condition and formability index. The numerical
simulations were compared with experimental results and strain distribution in the
forming limit diagram.
In reference [28], a user-defined material (UMAT) – accounting for an anisot-
ropic material model based on non-associated flow rule and mixed isotropic-
nonlinear kinematic hardening – was studied and implemented into the commercial
finite element code ABAQUS. Two different forming processes were modelled, a
cylindrical drawing and a channel drawing process, in order to assess the
capability of the constitutive model in predicting earing, springback and sheet
metal anisotropy effects. The achieved results demonstrated that applying the
non-associated mixed-hardening material model, with both anisotropy and hardening
descriptions, significantly improved the prediction of earing in the cup drawing
process and the prediction of springback in the side wall of drawn channel
sections, even though a simple quadratic constitutive model and a single-backstress
kinematic hardening model were used [28]. Although focusing on important geometric
and dimensional defects, the previous references do not carry out an in-depth
study of wrinkling effects in the formed parts.
Although the onset of wrinkling takes place when the ratio of strain increments
(dε_r/dε_θ) or the ratio of strains (ε_r/ε_θ) reaches a critical value during
forming, an attempt was made to find a theory that predicts wrinkling based on
results obtained in the form of a wrinkling limit diagram [29]. An aluminium
alloy was studied, for four different annealing treatments, and it was found that
the annealed sheet with higher n-value, R-value and UTS/σ_y ratios showed
improved resistance against wrinkling, along with a clear curve separating the
safe and wrinkling regions.
In reference [30], a study of different shapes for dies and blank-holders was
performed in order to observe the distribution of the blank-holder force, the
punch load at different drawing depths, as well as the blank's thickness reduction
during forming. The main goal of this investigation was to attempt to increase
the deep drawing ratio while decreasing the blank-holder forces involved. On the
other hand, Port et al. [31] focused on surface defects of an industrial upper
corner of a front door panel, as well as on an initially planar L-shaped part
designed on purpose to reproduce surface defects after flanging at a small scale.
The simplified geometry was measured using a three-dimensional measuring machine.
The researchers achieved a good correlation between experiments and simulations
concerning the spatial position of the defects, and a buckling analysis during
springback showed that the position of the defects effectively corresponded to a
buckling mode.
In the present contribution, and within the numerical and experimental frame-
work described, a sensitivity analysis of different numerical simulations, as well
as the robustness of the respective results in characterising wrinkling defects, is
carried out. The primary variables to be taken into account are related to numerical
simulation models only and correspond to (i) distinct mesh densities; (ii) different
finite element formulations; and (iii) basic and complex anisotropic constitutive
models. It is seen that completely different quantitative and qualitative solutions
(and, therefore, distinct wrinkling predictions) can be obtained for free and flange-
forming examples with small perturbations or variations in the chosen input pa-
rameters, in a somewhat more severe way than the one that occurs in springback
simulations. Also, and most noticeably, the correct prediction of wrinkling onset
and propagation is more related to a given finite element formulation than to com-
plex or more elaborate non-quadratic anisotropic constitutive models. All the
analyses performed in this work were carried out using the finite element com-
mercial package Abaqus/Standard [32], using fully implicit solution procedures,
for both shell and solid elements. The anisotropic constitutive models adopted
are not limited to those available in this commercial software, but also include
more recent ones, implemented by the authors by means of user subroutines
(UMAT).
σ = K(ε_Y + ε_p)^n ,   (6.1)

where K is a material constant, ε_Y is the elastic strain at the yield state, n is
the strain-hardening exponent and ε_p is given as

ε_p = ε − ε_e ,   (6.2)

where ε and ε_e represent the logarithmic total and elastic strain terms,
respectively. On the other hand, the second model considered follows Voce's law
and is written in the form

where σ_Y is the uniaxial yield stress, C_r is a material constant and R_sat is
given by
In the present work, the anisotropic plastic behaviour was initially described by
means of the Hill's quadratic yield criterion in its original version of 1948 [33],
which represents a generalisation of the von Mises isotropic yield function, being
expressed by a yield function φ = φ (σ ) in the form
φ = F(σ_22 − σ_33)^2 + G(σ_33 − σ_11)^2 + H(σ_11 − σ_22)^2 + 2Lσ_23^2 + 2Mσ_31^2 + 2Nσ_12^2 .   (6.5)

F = ((σ^0)^2 / 2)(1/σ_22^2 + 1/σ_33^2 − 1/σ_11^2) ,   G = ((σ^0)^2 / 2)(1/σ_33^2 + 1/σ_11^2 − 1/σ_22^2) ,

H = ((σ^0)^2 / 2)(1/σ_11^2 + 1/σ_22^2 − 1/σ_33^2) ,   L = (3/2)(τ^0/σ_23)^2 ,   (6.6)

M = (3/2)(τ^0/σ_13)^2 ,   N = (3/2)(τ^0/σ_12)^2 .
For the determination of these parameters, σ^0 is the normal yield stress stated
before, while τ^0 is the corresponding yield stress in shear. The six anisotropic
parameters involved in the last equation can be determined by three uniaxial
tension tests, performed in the 0º, 45º and 90º directions with respect to the
rolling direction (RD). An alternative way of defining the anisotropic criterion
is by means of the so-called Lankford r-values (r_θ) for a specific direction (θ),
in the form
r_0 = H/G ,   r_45 = (2N − (F + G)) / (2(F + G)) ,   r_90 = H/F .   (6.7)
Since the R_13 and R_23 coefficients refer to the thickness direction, and along
this direction an isotropic behaviour is assumed (normal isotropy), it follows
that R_13 = R_23 = 1.
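The quadratic form (6.5) and the r-value relations (6.7) are straightforward to evaluate numerically. The sketch below checks the isotropic special case, where Hill's criterion must collapse to the von Mises function; the component ordering is an implementation choice, not prescribed by the chapter.

```python
# Hill's (1948) quadratic yield function (6.5) and Lankford r-values (6.7).
# Stress components are passed in the order (s11, s22, s33, s23, s31, s12).
def hill_phi(s, F, G, H, L, M, N):
    s11, s22, s33, s23, s31, s12 = s
    return (F * (s22 - s33) ** 2 + G * (s33 - s11) ** 2 + H * (s11 - s22) ** 2
            + 2 * L * s23 ** 2 + 2 * M * s31 ** 2 + 2 * N * s12 ** 2)

def lankford(F, G, H, N):
    r0 = H / G
    r45 = (2 * N - (F + G)) / (2 * (F + G))
    r90 = H / F
    return r0, r45, r90

# Isotropic sanity check: F = G = H = 1/2 and L = M = N = 3/2 recover
# von Mises, so uniaxial tension s11 = 1 gives phi = 1 and all r-values
# equal 1.
phi_uni = hill_phi((1.0, 0.0, 0.0, 0.0, 0.0, 0.0), 0.5, 0.5, 0.5, 1.5, 1.5, 1.5)
r_vals = lankford(0.5, 0.5, 0.5, 1.5)
```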
The anisotropic yield criterion of Hill, in its 1948 version as described before,
is known to be well suited to most steel alloys. Nevertheless, it provides poor
results when characterising the behaviour of aluminium alloy sheets [34]. To
address this limitation in some of the following benchmark problems,
non-quadratic anisotropic constitutive models were also implemented as user
subroutines in the Abaqus commercial software. For the sake of completeness, the
implemented models are summarised in the following.
The non-quadratic yield criteria described in the present work are suited for the
description of anisotropic effects in aluminium alloys, and although a large num-
ber of yield criteria for this purpose exist in the literature (see, for a comprehen-
sive survey on this topic, reference [34]), in the following only the criteria devel-
oped in the last decades by Barlat and co-workers (and particularly the 1991 [35]
and 2004 [36] versions) will be described. The reason for the specific choice of
those two yield criteria (Yld91 and Yld2004-18p, respectively) is related to their
ability to be numerically implemented in a three-dimensional framework, as
opposed to other criteria that implicitly impose plane-stress conditions in their base
formulation. Both three-dimensional criteria were afterwards implemented in
Abaqus commercial finite element software, as UMAT subroutines [37].
The Yld91 criterion is a generalisation of the isotropic criterion of Hershey [38]
to anisotropic materials, with the anisotropic behaviour being considered in the
formulation by replacing the principal values of the stress tensor by the principal
values of an alternative tensor coming from linear transformations over the origi-
nal stress fields. The anisotropy effects are subsequently described by the coeffi-
cients present in the linear transformation operator.
The fourth-order linear operator L therefore appears in the formulation as

S = L σ ,   (6.9)

with the yield function written in terms of the principal values S_1, S_2 and S_3
of the transformed tensor as

φ = |S_1 − S_2|^k + |S_2 − S_3|^k + |S_3 − S_1|^k = 2σ_Y^k ,   (6.10)
where k is a parameter that affects the yield surface shape [35], whereas the ani-
sotropic effects may be reproduced by the knowledge of the coefficients affecting
the operator L , in the form
L = (1/3) ×
[  c2+c3   −c3     −c2     0     0     0   ]
[  −c3     c1+c3   −c1     0     0     0   ]
[  −c2     −c1     c1+c2   0     0     0   ]
[  0       0       0       3c4   0     0   ]
[  0       0       0       0     3c5   0   ]
[  0       0       0       0     0     3c6 ] ,   (6.11)
as a function of the six anisotropic coefficients c_i (i = 1, …, 6). Therefore,
accounting for the exponent in the yield equation, the criterion is characterised
by 7 coefficients, and the determination of the anisotropic coefficients can be
carried out by conventional tension tests (and respective yield stress values) at
0º, 45º and 90º, and also from the yield stress value coming from a biaxial
stress state, obtained, for instance, from a “bulge test”. Despite the ease of
obtaining these anisotropic coefficients, the ability to account for 3D stress
states and the easy implementation of the criterion into FEM codes, the major
drawback of the formulation is its inability to reproduce distinct r_0 and r_90
coefficients when the uniaxial stresses in the rolling and transverse directions
are almost equal [34].
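The structure of the criterion can be sketched numerically as follows: the code builds the operator of (6.11), takes the principal values of the transformed tensor, and evaluates a Hershey-type function of their differences. The isotropic special case (all c_i = 1, k = 2) must recover von Mises, which serves as a sanity check; the Voigt component ordering is an implementation choice, not prescribed by the chapter.

```python
import numpy as np

# Sketch of the Yld91 criterion: the operator L of (6.11) transforms the
# stress (Voigt order 11, 22, 33, 23, 31, 12), and phi is evaluated from
# the principal values of the transformed tensor.
def yld91_phi(sigma_voigt, c, k):
    c1, c2, c3, c4, c5, c6 = c
    L = np.array([
        [c2 + c3, -c3,     -c2,     0.0,    0.0,    0.0],
        [-c3,     c1 + c3, -c1,     0.0,    0.0,    0.0],
        [-c2,     -c1,     c1 + c2, 0.0,    0.0,    0.0],
        [0.0,     0.0,     0.0,     3 * c4, 0.0,    0.0],
        [0.0,     0.0,     0.0,     0.0,    3 * c5, 0.0],
        [0.0,     0.0,     0.0,     0.0,    0.0,    3 * c6],
    ]) / 3.0
    s = L @ np.asarray(sigma_voigt, dtype=float)
    T = np.array([[s[0], s[5], s[4]],
                  [s[5], s[1], s[3]],
                  [s[4], s[3], s[2]]])
    S1, S2, S3 = np.linalg.eigvalsh(T)      # principal values
    return abs(S1 - S2) ** k + abs(S2 - S3) ** k + abs(S3 - S1) ** k

# Isotropic sanity check: all c_i = 1 and k = 2 recover von Mises, so
# uniaxial tension sigma_11 = sigma_Y = 1 gives phi = 2 (= 2 sigma_Y**2).
phi = yld91_phi([1.0, 0, 0, 0, 0, 0], (1, 1, 1, 1, 1, 1), 2)
```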
Seeking a more general yield criterion, and after a succession of more evolved
plane-stress models directly applicable to thin aluminium sheets, Barlat and
co-authors [36] presented a formulation with a yield function proven to be convex
and taking into account a large number of anisotropic coefficients (in this case,
18 parameters). According to this criterion, named Yld2004-18p and suited for
aluminium alloys and their anomalies (when compared to steel alloys), the yield
function can be given in the form

φ = Σ_{i=1}^{3} Σ_{j=1}^{3} |S_i^(1) − S_j^(2)|^a = 4σ_Y^a ,   (6.12)

where i, j = 1, …, 3, and the tensor fields S_i^(1), S_j^(2) are defined by
linear transformations of the type S^(k) = L^(k) σ, for operators L^(k) in the
form
L^(k) =
[  0          −L12^(k)   −L13^(k)   0         0         0       ]
[  −L21^(k)   0          −L23^(k)   0         0         0       ]
[  −L31^(k)   −L32^(k)   0          0         0         0       ]
[  0          0          0          L44^(k)   0         0       ]
[  0          0          0          0         L55^(k)   0       ]
[  0          0          0          0         0         L66^(k) ] .   (6.13)
Therefore, the two combined linear transformations allow for the characterisation
of anisotropy based on a total of 18 parameters, which proves to lead to a very
general 3D constitutive modelling. For the special case of plane-stress analysis,
the criterion degenerates into a simpler version involving 14 parameters.
Furthermore, when all of these coefficients are equal to 1.0, the criterion turns
out to be equal to the isotropic criterion of Hershey [38], and for the
particular case of L^(1) = L^(2), the Yld91 anisotropic criterion is obtained.
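The double-sum structure of (6.12) can be sketched numerically. The operator used below is the isotropic deviatoric projection, a special case chosen only so the result is known in closed form, not a calibrated set of the 18 coefficients; the Voigt ordering is likewise an implementation choice.

```python
import numpy as np

# Sketch of the Yld2004-18p yield function (6.12): two linear operators
# produce principal values S_i^(1) and S_j^(2), and phi is the double sum
# of |S_i^(1) - S_j^(2)|**a over all pairs.
def principal_values(L, sigma_voigt):
    s = L @ np.asarray(sigma_voigt, dtype=float)  # Voigt: 11,22,33,23,31,12
    T = np.array([[s[0], s[5], s[4]],
                  [s[5], s[1], s[3]],
                  [s[4], s[3], s[2]]])
    return np.linalg.eigvalsh(T)

def yld2004_phi(sigma_voigt, L1, L2, a):
    S1 = principal_values(L1, sigma_voigt)
    S2 = principal_values(L2, sigma_voigt)
    return sum(abs(x - y) ** a for x in S1 for y in S2)

# Isotropic special case: both operators equal the deviatoric projector.
dev = np.array([[ 2.0, -1.0, -1.0, 0.0, 0.0, 0.0],
                [-1.0,  2.0, -1.0, 0.0, 0.0, 0.0],
                [-1.0, -1.0,  2.0, 0.0, 0.0, 0.0],
                [ 0.0,  0.0,  0.0, 3.0, 0.0, 0.0],
                [ 0.0,  0.0,  0.0, 0.0, 3.0, 0.0],
                [ 0.0,  0.0,  0.0, 0.0, 0.0, 3.0]]) / 3.0

# Uniaxial tension sigma_11 = sigma_Y = 1 with a = 2: the criterion
# reduces to von Mises and phi equals 4 (= 4 sigma_Y**2), matching the
# right-hand side of (6.12).
phi = yld2004_phi([1.0, 0, 0, 0, 0, 0], dev, dev, 2)
```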
The 18 coefficients involved in the Yld2004-18p yield criterion come from a
series of experimental analyses aimed at determining the uniaxial yield stress in
tension (σ_Y), as well as the Lankford coefficients (r_θ), for seven distinct
directions in the plane of the sheet with respect to the rolling direction (at
0º, 15º, 30º, 45º, 60º, 75º, 90º), the yield stress under biaxial loading (σ_b),
and the anisotropy coefficient (r_c) from the compression loading experiment of a
metallic disc. The remaining factors are related to mechanical properties in the
out-of-plane directions of the metallic sheet. Since standard experimental tests
for out-of-plane properties are quite difficult to perform, a simplified version
of the criterion, involving fewer material parameters, is also available
(Yld2004-13p, see reference [36]), leading to a yield function in the form
φ = |S_1^(1) − S_2^(1)|^a + |S_2^(1) − S_3^(1)|^a + |S_3^(1) − S_1^(1)|^a
    − ( |S_1^(1)|^a + |S_2^(1)|^a + |S_3^(1)|^a )
    + |S_1^(2)|^a + |S_2^(2)|^a + |S_3^(2)|^a = 2σ_Y^a ,   (6.14)

where S^(1), S^(2) are now obtained by means of the transformation
S^(k) = L^(k) σ, for the linear operators
L^(1) =
[  0          −1         −L13^(1)   0         0         0       ]
[  −L21^(1)   0          −L23^(1)   0         0         0       ]
[  −1         −1         0          0         0         0       ]
[  0          0          0          L44^(1)   0         0       ]
[  0          0          0          0         L55^(1)   0       ]
[  0          0          0          0         0         L66^(1) ]   (6.15)
and
L^(2) =
[  0          −L12^(2)   −L13^(2)   0         0         0       ]
[  −L21^(2)   0          −L23^(2)   0         0         0       ]
[  −1         −1         0          0         0         0       ]
[  0          0          0          L44^(2)   0         0       ]
[  0          0          0          0         L55^(2)   0       ]
[  0          0          0          0         0         L66^(2) ] .   (6.16)
It can be seen that for this simplified anisotropic criterion a lower number of
anisotropic coefficients (13) is needed for a full 3D model, whereas for the
reduction to plane-stress problems this number turns out to be equal to 9.
Although the approximation to experimental results is not perfect with these
modified versions, Yld2004-13p can be a valid alternative to the original
Yld2004-18p yield criterion.
More discussion and details on this can be found in references [36] and [37].
zones along the perimeter of the circular blank, after forming, as experimentally
verified in references [39–41]. From the point of view of geometric and FEM
modelling, the tools (the punch and conical die) were considered as rigid bodies.
The tools' dimensions follow those represented in Figure 6.1, that is: 49 mm for
the punch diameter, 5 mm for the punch's radius and 260 mm of height, while the
die has a 19º angle and an opening diameter of 54 mm. A detailed view of the
tools involved can be seen in Figure 6.2.
The punch stroke, responsible for forming the final metallic part, was taken as
equal to 60 mm. The metallic circular blank is therefore the only part considered
as deformable, and is meshed with distinct configurations of finite elements, as
seen in the following sections. For symmetry reasons, only one-quarter of the
total blank will be discretized by finite elements.
Fig. 6.1. Dimensions of the tools for the free-forming example, from references [39–41].
Fig. 6.2. Detailed description of the relevant dimensions for the tools in the conical cup
drawing problem (dimensions in millimetres).
Regarding the constitutive modelling for the application of the anisotropic yield
criterion of Hill (1948) [33], and following the previous references, an
aluminium alloy was initially considered, with anisotropy coefficients (r_θ) of
r_0 = 0.17, r_45 = 0.58, r_90 = 0.46, leading, from equation (6.8), to values of
R_12 = 1.09, R_22 = 1.47, R_33 = 0.92. The elastic part of the constitutive
behaviour is characterised by a Young's modulus E = 69.0 GPa and a Poisson's
ratio υ = 0.3. The effective plastic stress-strain relationship used in the
numerical simulations is considered to be given by Swift's law, with main
parameters K = 127.83 MPa, ε_Y = 0.0003 and n = 0.03.
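Under Swift's law (6.1), these parameters fully determine the flow stress at any plastic strain level; a minimal sketch (stresses in MPa):

```python
# Swift's hardening law (6.1): flow stress as a function of plastic
# strain, evaluated with the parameter values quoted above for the
# aluminium blank (K = 127.83 MPa, eps_Y = 0.0003, n = 0.03).
def swift_stress(eps_p, K=127.83, eps_Y=0.0003, n=0.03):
    return K * (eps_Y + eps_p) ** n

sigma_0 = swift_stress(0.0)    # initial yield stress implied by the fit
sigma_10 = swift_stress(0.10)  # flow stress at 10% plastic strain
```

The very small hardening exponent (n = 0.03) means the flow stress rises only mildly with plastic strain, which is consistent with the nearly saturated hardening typical of such alloys.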
On the other hand, for the case of modelling this problem by means of the
Yld91 and Yld2004-18p constitutive criteria, parameters available in the literature
(experimentally obtained) for a 2090-T3 aluminium alloy were considered in the
numerical simulations. The strain-based hardening model is considered to follow
the parameters stated before for Swift’s law, while in both cases a friction coeffi-
cient of μ = 0.15 was assumed between all parts in contact. Furthermore, the
forming process to be simulated involved only one work stage for achieving the
final shape.
Since the tools are modelled as discrete rigid bodies, they were meshed by rigid
elements in Abaqus/Standard FEM program. The FE mesh of the punch was com-
posed of 5925 rigid finite elements of type R3D4 (4-node, 3-D bilinear quadrilat-
eral, rigid element), while the die was discretized by 1770 elements of the same
type. More information on this kind of FE formulation can be found in the pro-
gram manual [32].
Initially, the blank to be plastically formed was modelled as a solid body, since the adopted thickness values cannot be considered extremely small compared to the overall blank dimensions, and also in order to correctly describe the double-sided contact patterns involved in the process. In this sense, different three-dimensional finite elements were adopted from the library of the FEM software: C3D8R (8-node, tri-linear 3-D solid element, reduced integration, that is, 1 integration point per element), C3D8 (8-node, tri-linear 3-D solid element, full integration, that is, 8 integration points per element) and C3D8I (8-node, tri-linear 3-D solid element, full integration and incompatible deformation modes) [32].
Nevertheless, in a second phase shell elements were also considered, in order to assess the influence of distinct finite element formulations on the obtained results. In this sense, a second group of simulations was considered, now including thin shell elements of type S4R (4-node, bilinear shell element, reduced integration, one integration point in the element reference plane and multiple integration points through the thickness direction) as well as of type S4 (4-node, bilinear shell element, full integration within the element), for the sake of comparison [32].
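In Abaqus keyword format, the choice between these formulations reduces to the element TYPE and, for shell sections, the number of thickness integration points given on the *SHELL SECTION data line. The fragment below is an illustrative sketch only; the element set name, material name and shell thickness are our assumptions, not values from the chapter:

```
** Solid blank: reduced integration (one integration point per element)
*ELEMENT, TYPE=C3D8R, ELSET=BLANK
** ... element connectivity ...
*SOLID SECTION, ELSET=BLANK, MATERIAL=ALLOY
**
** Shell alternative: S4R with 5 integration points through thickness
** (data line: shell thickness, number of integration points)
*SHELL SECTION, ELSET=BLANK_SHELL, MATERIAL=ALLOY
1.0, 5
```

Switching TYPE=C3D8R to C3D8 or C3D8I, or S4R to S4, is enough to reproduce the formulation comparisons discussed in this section.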
The blank zone to be meshed with finite elements included three mesh partitions, so that a more refined mesh could be obtained in its central zone, as can be seen in Figure 6.3. The partition divides the blank into a square area in the middle
232 M.P. Henriques et al.
of the blank, where full contact with the punch will take place. A line from the corner of the square to the perimeter of the blank (at 45º) divides the rest of the partition in two parts. The adopted mesh densities for both solid and shell elements are shown in Table 6.1.
Fig. 6.3. Adopted mesh for the blank, with a refined centre zone.
Fig. 6.4. Dimensions of the tools used in flange-forming (picture adapted from [42]).
Initially, the tests start with the reference point of the die fixed, the punch 6 mm above the top surface of the blank and the blank-holder 5 mm above that surface. The simulations start with the blank-holder moving down 1 mm (so that the gap between the blank and the blank-holder becomes 4 mm), and afterwards a vertical downward movement of 45 mm is imposed on the punch's reference point.
Fig. 6.5. The behaviour of the blank with different refinements, constitutive model and one
element through thickness.
Fig. 6.6. The behaviour of the blank with different refinements, constitutive model and two
elements through thickness (the same happens when using three elements along thickness).
Concerns might be raised at this point, since the Hill 1948 anisotropic yield criterion might not be the most appropriate constitutive model to correctly describe the plastic deformation of an aluminium sheet. For this reason, further analyses were carried out with the more advanced Yld91 and Yld2004-18p anisotropic non-quadratic criteria detailed before.
To this end, and in order to have the appropriate anisotropic parameters available, it was assumed that the aluminium alloy was representative of the 2090-T3 series. From the literature, a representative hardening law for this alloy was selected [43, 44] in the form
σY = 646.0 (0.025 + ε)^0.227 (MPa). (6.17)
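A quick evaluation of this law confirms an initial yield stress of roughly 280 MPa at zero plastic strain (a minimal sketch; the function name is ours):

```python
def yld_2090t3_hardening(eps_p):
    """Hardening law of Eq. (6.17) for the 2090-T3 aluminium alloy:
    effective yield stress in MPa as a function of plastic strain."""
    return 646.0 * (0.025 + eps_p) ** 0.227

# Initial yield stress (zero plastic strain), approx. 279.6 MPa
print(round(yld_2090t3_hardening(0.0), 1))
```

The positive exponent makes the law monotonically hardening, unlike the softening response discussed later for the low melting point alloys of Chapter 7.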
Fig. 6.7. Wrinkling profiles after forming for a one-quarter area of the initial circular blank.
Results obtained with Yld2004-18p criterion, for 2739 (Mesh 1), 5368 (Mesh 2) and 8109
(Mesh 3) elements in the plane of the blank (one element, i.e., one integration point through
thickness direction).
Numerical Simulation and Prediction of Wrinkling Defects in Sheet Metal Forming 237
Taking the second refinement level (Mesh 2 in Figure 6.7), but now increasing the mesh refinement through the thickness, the results for the deformed configuration after forming, still accounting for the Yld2004-18p anisotropic criterion, are represented in Figure 6.8. In this graph, wrinkling profiles are shown for one, two and three elements along the thickness direction, which directly corresponds to the same number of integration points.
Fig. 6.8. Wrinkling profiles after forming for a one-quarter area of the initial circular blank.
Results obtained with Yld2004-18p criterion, for one, two and three elements (integration
points) through thickness direction.
From the last two graphs, it can be seen that, even for a more sophisticated yield criterion, the dominant effect in this example is the number of integration points through the thickness direction. That is, the correct wrinkling profile after forming can only be numerically attained with a proper low-order integration rule in the out-of-plane direction of the blank.
Since all the results shown before were obtained with the same finite element formulation, it is interesting to assess the influence of distinct formulations on the quality of the numerical results. To this end, keeping the Yld2004-18p anisotropic criterion and once again the mesh density with one element along the thickness direction, Figure 6.9 represents the deformed profile for the C3D8R (reduced integration) element, as well as the deformed configurations obtained in Abaqus by means of the C3D8 (full integration) and C3D8I (full integration, enhanced strain modes) formulations. Once again, the strong influence of the chosen numerical integration rule on the appearance (or not) of wrinkling is visible.
Fig. 6.9. Wrinkling profiles after forming for a one-quarter area of the initial circular blank.
Results obtained with Yld2004-18p criterion, for reduced integrated (C3D8R) as well as
fully integrated (C3D8 and C3D8I) formulations in Abaqus.
To confirm the conclusions stated so far, a final analysis was carried out for this example using the library of solid elements in Abaqus. In this case, it is interesting to assess the influence of the Yld91, Yld2004-18p and isotropic (von Mises) criteria on the final configuration after forming, for the mesh system that correctly induces the wrinkling patterns. To this end, starting from the Mesh 2 defined before (5368 elements in the plane of the blank), with one element along the thickness and adopting the C3D8R formulation (to ensure a single integration point in the out-of-plane direction), the obtained profiles after forming can be seen in Figure 6.10.
Once again, and as expected from the previous results, the numerical integration rule proved to be the dominant factor in the prediction of the correct wrinkling pattern in the final conical part, rather than the in-plane refinement or the constitutive model adopted. Increasing the order of integration along the thickness, for the different yield criteria shown in Figure 6.10, promotes the complete disappearance of the wrinkling zones in the numerical solution.
Fig. 6.10. Wrinkling profiles after forming for a one-quarter area of the initial circular
blank. Results obtained with distinct yield criteria (isotropic and anisotropic), for a reduced
integrated formulation in Abaqus.
Considering once again the Hill 1948 yield criterion, for simplicity, the analysis that follows focuses on alternative shell formulations in Abaqus. Although the relative dimensions of the blank do not directly point to a thin-shell problem, it is nevertheless useful to assess the performance of distinct shell element formulations and the corresponding results. This can be seen in Figures 6.11, 6.12 and 6.13, where, for the same in-plane mesh density, distinct numbers of integration points were adopted through the thickness. Since shell elements are used, only one element is assumed along the thickness, and the integration order along that direction can be defined automatically by increasing the number of integration points. Accordingly, Figure 6.11 shows the deformed configuration and the evolution of the punch force during forming for meshes with three integration points, for both isotropic and anisotropic (Hill 1948) criteria, while Figures 6.12 and 6.13 do the same for 5 and 7 integration points in the thickness direction. Each graph also shows the variation of results between a full in-plane integration rule (as in the S4 shell element) and a reduced in-plane integration rule (as in the S4R shell element) [32].
It can be seen from the pictures that none of the shell models was able to correctly predict the wrinkling pattern distribution, with some of the models even inducing a single non-physical wrinkling mode in the final part (see Figures 6.11 and 6.12 for the reduced integration S4R shell element).
Fig. 6.11. Reaction force evolution and sheet metal deformation for elements S4 and S4R
with three Gauss points and distinct constitutive behaviours (isotropic and anisotropic).
Fig. 6.12. Deformed shape of the blank for different elements formulation (S4 and S4R)
with five Gauss points through thickness and distinct constitutive behaviours (isotropic and
anisotropic).
Fig. 6.13. Reaction force evolution and blank deformation for elements S4 and S4R with seven Gauss points and both constitutive behaviours (isotropic and anisotropic).
For the sake of completeness, Figure 6.14 shows the deformed configuration of
the conical cup inside the die after the whole displacement of the punch, while in
Figure 6.15 the respective dimensions after forming are shown.
Fig. 6.14. Deformed configuration for a solid finite element formulation (C3D8R), one
element through thickness direction and an anisotropic constitutive behaviour.
Fig. 6.15. Wrinkling patterns (in mm) for the solid finite element C3D8R mesh, with one
element through thickness and anisotropic behaviour.
Once again, no noticeable wrinkling patterns were attained for those meshes in-
cluding fully integrated formulations. The same happened for shell formulations
employing three or more integration points along the thickness direction.
Fig. 6.16. Evolution of the punch force during forming, for both initial geometries (diameters of 110 and 130 mm) and for isotropic and anisotropic behaviours (C3D8R formulation, one element through the thickness direction).
Fig. 6.17. The deformed shapes (360º) for the two diameters of the steel blank.
In this particular case, when employing the “solid-shell” formulation available in Abaqus, some differences can be seen between the results obtained with the isotropic and anisotropic models, although in neither case is the wrinkling tendency observed. Figure 6.18, for instance, shows the results for the circular blank with an initial diameter of 110 mm, while Figure 6.19 shows the corresponding results for the larger diameter of 130 mm. In both cases, distinct numbers of integration points through the thickness direction were used, with no distinguishable differences between the results.
Fig. 6.18. Deformed shape of the 110 mm steel sheet with the element SC8R for 3, 5 and 7
Gauss points through thickness and isotropic and anisotropic constitutive behaviours.
Fig. 6.19. Deformed shape of the 130 mm steel sheet with the element SC8R for 3, 5 and 7
Gauss points through thickness and isotropic and anisotropic constitutive behaviours.
Fig. 6.20. Mesh partitions for solid elements, for the flange-forming example.
Fig. 6.21. Mesh partitions for shell elements, for the flange-forming example.
For the simulations considering solid finite element formulations, one element layer was taken into account; Fig. 6.22 and 6.23 show the evolution of the punch force throughout the analysis, until the punch stroke is completed, for the first and second mesh systems, respectively.
The curves reproduce the results obtained for both the aluminium (AA 6111-T4) and mild steel (DDQ) alloys, also accounting for isotropic and anisotropic behaviours. It can be seen in these graphs that the force needed at the punch is larger for the aluminium alloy (6111-T4) than for the mild steel (DDQ). It is also shown that, for the aluminium alloy, the punch force is larger with the isotropic behaviour than with the anisotropic constitutive model. The opposite happens with the mild steel (DDQ), where the punch force is lower for the isotropic behaviour.
Here, in opposition to the group of results in the last section, the difference between the isotropic (von Mises) and anisotropic (Hill 1948) criteria, for the anisotropic coefficients defined previously, is more noticeable. The results were obtained with the fully integrated solid element of the Abaqus library (C3D8), since the reduced formulation was shown to suffer from severe hourglass effects, mainly in the regions where double-sided contact situations were dominant.
Fig. 6.22. Punch reaction during forming, for 10 solid elements on each edge (700 elements in total).
Fig. 6.23. Punch reaction during forming, for 15 solid elements on each edge (1235 elements in total).
Fig. 6.24 and Fig. 6.25 show the deformed configurations for the same mesh, but accounting for different constitutive models; it can be seen that, despite the differences in the evolution of the punch force against its displacement, the final aspects of the predicted plastically formed parts are quite similar for both isotropic and anisotropic models.
Fig. 6.24. Deformed shape for solid finite element formulation and an isotropic constitutive
model.
Fig. 6.25. Deformed shape for solid finite element formulation and an anisotropic constitutive
model.
Focusing on the particular case of the aluminium alloy (6111-T4), and now considering numerical simulations with shell elements (S4) in Abaqus, with full in-plane integration but five integration points along the thickness direction, the evolution of the punch force during forming (for an isotropic behaviour) can be seen in Fig. 6.26, for the two mesh densities represented before in Fig. 6.21.
Fig. 6.26. Punch reaction during forming, for distinct mesh refinements and shell elements.
Distinct wrinkling patterns are formed in this case, as can be seen in Fig. 6.27 and Fig. 6.28: the first (coarse mesh) gives a non-uniform and overlapped pattern, while the second mesh system (refined mesh) gives rise to a coherent set of wrinkling patterns, qualitatively in accordance with published numerical and experimental results.
Fig. 6.27. Deformed shape for shell elements and a coarse mesh.
Fig. 6.28. Deformed shape for shell elements and a refined mesh.
It seems that, for constrained forming into a cylindrical cup, the results are quite dependent on the element type, and not only on its numerical integration type (a situation not seen in the free-forming examples), mostly due to the more severe contact conditions involved in the flange area.
6.6 Conclusions
This work aimed to provide a preliminary insight into the influence of finite element formulation, finite element discretization through the thickness direction and constitutive material modelling on the onset and propagation of wrinkling patterns in sheet metal formed parts, as reproduced by numerical simulations based on the Finite Element Method.
It was seen that a correct prediction of wrinkling defects is very sensitive to the initial decisions made in the modelling phase of the analysis. A conclusion coming from this work is that, more than the correct constitutive model to be adopted, distinct finite element formulations and discretization levels strongly influence the quality of the results obtained.
Nevertheless, concentrating on the aspects related to the mesh systems and formulations to be adopted in a given numerical simulation, it is not yet clear which specific effects are the main drivers of a correct prediction of wrinkling. Contrary to the correct prediction of springback effects in sheet metal formed products, where the dominant aspect is known to be the numerical integration procedure and the number of integration points along the thickness direction, the present work shows that for wrinkling effects a complex conjunction of (i) in-plane mesh refinement, (ii) out-of-plane mesh refinement (or, alternatively, an increase in the number of integration points through the thickness direction) and (iii) the finite element formulation itself (shell or solid elements) has a strong influence on the obtained simulation results, rather than the constitutive model adopted. Also, and most importantly, these conclusions seem to be strongly dependent on the examples chosen.
Based on this, proposals for future work concern research on alternative solid-shell finite element formulations for complex wrinkling prediction, in which the main advantages of solid and shell formulations alone are gathered in the same formulation. In doing so, the sensitivity of the results to the mesh refinement levels could also be assessed. In particular, and in order to avoid the sensitivity to distinct numerical integration schemes, it would be useful to develop a wrinkling criterion based on the use of enhanced-assumed-strain solid-shell finite elements, following previous works of the authors in this field [46-48].
References
[1] Hill, R.: A general theory of uniqueness and stability in elastic-plastic solids. Journal
of the Mechanics and Physics of Solids 6, 236–249 (1958)
[2] Hutchinson, J.W.: Plastic buckling. Advances in Applied Mechanics 14, 67–144 (1974)
[3] Hutchinson, J.W., Neale, K.W.: Wrinkling of curved thin sheet metal. Plastic Instabil-
ity, pp. 71-78. Presses des Ponts et Chaussées, Paris (1985)
[4] Petryk, H.: Plastic instability: criteria and computational approaches. Archives of
Computational Methods in Engineering 4, 111–151 (1997)
[5] Cao, J., Boyce, M.C.: Wrinkling behaviour of rectangular plates under lateral con-
straint. International Journal of Solids and Structures 34, 153–176 (1997)
[6] Cao, J.: Prediction of plastic wrinkling using the energy method. Journal of Applied
Mechanics - Transactions of ASME 66, 646–652 (1999)
[7] Magalhães Correia, J.P., Ferron, G.: Wrinkling of anisotropic metal sheets under
deep-drawing: analytical and numerical study. Journal of Materials Processing Tech-
nology, 155–156, 1604–1610 (2004)
[8] Kawka, M., Olejnik, L., Rosochowski, A., Sunaga, H., Maknouchi, A.: Simulation of
wrinkling in sheet metal forming. Journal of Materials Processing Technology 109,
283–289 (2001)
[9] Wang, X., Lee, L.H.N.: Post-bifurcation behaviour of wrinkles in square metal sheet
under Yoshida test. International Journal of Plasticity 9, 1–19 (1993)
[10] Wang, C.T., Kinzel, Z., Altan, T.: Wrinkling criterion for an anisotropic shell with
compound curvatures in sheet forming. International Journal of Mechanical Scienc-
es 36, 945–960 (1994)
[11] Nordlund, P.: Adaptivity and wrinkle indication in sheet metal forming. Computer
Methods in Applied Mechanics and Engineering 161, 114–127 (1998)
[12] Wang, X., Cao, J.: On the prediction of side-wall wrinkling in sheet metal forming
processes. International Journal of Mechanical Sciences 42, 2369–2394 (2000)
[13] Kim, J.B., Yang, D.Y., Yoon, J.W., Barlat, F.: The effect of plastic anisotropy on
compressive instability in sheet metal forming. International Journal of Plasticity 16,
649–676 (2000)
[14] Kim, J.B., Yoon, J.W., Yang, D.Y.: Investigation into the wrinkling behaviour of thin
sheets in the cylindrical cup deep drawing process using bifurcation theory. Interna-
tional Journal for Numerical Methods in Engineering 56, 1673–1705 (2003)
[15] Lu, H., Cheng, H.S., Cao, J., Liu, W.K.: Adaptive enrichment meshfree simulation
and experiment on buckling and post-buckling analysis in sheet metal forming. Com-
puter Methods in Applied Mechanics and Engineering 194, 2569–2590 (2005)
[16] Magalhães Correia, J.P., Ferron, G.: Wrinkling predictions in the deep-drawing pro-
cess of anisotropic metal sheets. Journal of Material Processing Technology 128,
199–211 (2002)
[17] Magalhães Correia, J.P., Ferron, G., Moreira, L.P.: Analytical and numerical investi-
gation of wrinkling for deep-drawing anisotropic metal sheets. International Journal
of Mechanical Sciences 45, 1167–1180 (2003)
[18] Kim, Y., Son, Y.: Study on wrinkling limit diagram of anisotropic sheet metal. Jour-
nal of Materials Processing Technology 97, 88–94 (2000)
[19] Obermeyer, E.J., Majlessi, S.A.: A review of recent advances in the application of
blank-holder force towards improving the forming limits of sheet metal parts. Journal
of Materials Processing Technology 75, 222–234 (1998)
[20] Belytschko, T., Moes, N., Usui, S., Parimi, C.: Arbitrary discontinuities in finite ele-
ments. International Journal for Numerical Methods in Engineering 50, 993–1013
(2001)
[21] Belytschko, T., Lu, Y.Y., Gu, L.: Element-free Galerkin methods. International Jour-
nal for Numerical Methods in Engineering 37, 229–256 (1994)
[22] Narayanasamy, R., Loganathan, C.: Some studies on wrinkling limit of commercially
pure aluminium sheet metals of different grades when drawn through conical and
tractrix dies. International Journal of Mechanics and Materials in Design 3, 129–144
(2006)
[23] Loganathan, C., Narayanasamy, R.: Effect of die profile on the wrinkling behaviour
of three different commercially pure aluminium grades when drawn through conical
and tractrix dies. Journal of Engineering & Materials Sciences 13, 45–54 (2006)
[24] Wu-rong, W., Guan-long, C., Zhong-qin, L.: The effect of binder layouts on the sheet
metal formability in the stamping with Variable Blank Holder Force. Journal of Mate-
rials Processing Technology 210, 1378–1385 (2010)
[25] Morovvati, M.R., Mollaei-Dariani, B., Asadian-Ardakani, M.H.: A theoretical, nu-
merical, and experimental investigation of plastic wrinkling of circular two-layer
sheet metal in the deep drawing. Journal of Materials Processing Technology 210,
1738–1747 (2010)
[26] Stoughton, T.B., Yoon, J.W.: A new approach for failure criterion for sheet metals.
International Journal of Plasticity 27, 440–459 (2011)
[27] Oh, K.S., Oh, K.H., Jang, J.H., Kim, D.J., Han, K.S.: Design and analysis of new test
method for evaluation of sheet metal formability. Journal of Materials Processing
Technology 211, 695–707 (2011)
[28] Taherizadeh, A., Green, D.E., Ghaei, A., Yoon, J.W.: A non-associated constitutive
model with mixed iso-kinematic hardening for finite element simulation of sheet met-
al forming. International Journal of Plasticity 26, 288–309 (2010)
[29] Ravindran, R., Manonmani, K., Narayanasmay, R.: An analysis of wrinkling limit di-
agrams of aluminium alloy 5005 annealed at different temperatures. International
Journal of Material Forming 3, 103–115 (2010)
[30] Savaş, V., Seçgin, Ö.: An experimental investigation of forming load and side-wall
thickness obtained by a new deep drawing die. International Journal of Material
Forming 3, 209–213 (2010)
[31] Port, A.L., Thuillier, S., Manach, P.Y.: Occurrence and numerical prediction of sur-
face defects during flanging of metallic sheets. International Journal of Material
Forming 3, 215–223 (2010)
[32] Hibbitt, Karlsson, Sorensen: ABAQUS/Standard v.6.5 User’s manual. Hibbitt, Karlsson & Sorensen, Inc., USA (1998)
[33] Hill, R.: A theory of the yielding and plastic flow of anisotropic metals. Mathematical
and Physical Sciences 193, 281–297 (1948)
[34] Habraken, A.M.: Modelling the plastic anisotropy of metals. Archives of Computa-
tional Methods in Engineering 11, 3–96 (2004)
[35] Barlat, F., Lege, D.J., Brem, J.C.: A six-component yield function for anisotropic ma-
terials. International Journal of Plasticity 7, 693–712 (1991)
[36] Barlat, F., Aretz, H., Yoon, J.W., Karabin, M.E., Brem, J.C., Dick, R.E.: Linear trans-
formation-based anisotropic yield functions. International Journal of Plasticity 21,
1009–1039 (2005)
[37] Grilo, T.J.: Study of anisotropic constitutive models for metallic sheets. MSc Disser-
tation, University of Aveiro, Portugal (2011) (in Portuguese)
[38] Hershey, A.V.: The plasticity of an isotropic aggregate of anisotropic face centered cubic crystals. Journal of Applied Mechanics 21, 241–249 (1954)
[39] Narayanasamy, R., Sowerby, R.: Wrinkling behaviour of cold-rolled sheet metals
when drawing through a tractrix die. Journal of Materials Processing Technology 49,
199–211 (1995)
[40] Loganathan, C., Narayanasamy, R.: Wrinkling of commercially pure aluminium sheet
metals of different grades when drawn through conical and tractrix dies. Materials
Science and Engineering A 419, 331–343 (2006)
[41] Narayanasamy, R., Loganathan, C.: The influence of friction on the prediction of
wrinkling of prestrained blanks when drawing through conical die. Materials and De-
sign 28, 904–912 (2007)
[42] Alves, J.L.C.M.: Numerical Simulation of the Sheet Metal Forming Process of Metal-
lic Sheets. PhD Thesis, University of Minho, Portugal (2003) (in Portuguese)
[43] Yoon, J.W., Barlat, F., Chung, K., Pourboghrat, F., Yang, D.Y.: Earing predictions
based on asymmetric nonquadratic yield function. International Journal of Plastici-
ty 16, 1075–1104 (2000)
[44] Yoon, J.W., Barlat, F., Dick, R.E., Chung, K., Kang, T.J.: Plane stress yield function
for aluminum alloy sheets - part II: FE formulation and its implementation. Interna-
tional Journal of Plasticity 20, 495–522 (2004)
[45] Yoon, J.W., Barlat, F., Dick, R.E., Karabin, M.E.: Prediction of six or eight ears in a
drawn cup based on a new anisotropic yield function. International Journal of Plas-
ticity 22, 174–193 (2006)
[46] Valente, R.A.F., Alves de Sousa, R.J., Natal Jorge, R.M.: An enhanced strain 3D el-
ement for a large deformation elastoplastic thin-shell applications. Computational
Mechanics 34(1), 38–52 (2004)
[47] Parente, M.P.L., Valente, R.A.F., Natal Jorge, R.M., Cardoso, R.P.R., Alves de Sou-
sa, R.J.: Sheet metal forming simulation using EAS solid-shell elements. Finite Ele-
ments in Analysis and Design 42, 1137–1149 (2006)
[48] Alves de Sousa, R.J., Yoon, J.W., Cardoso, R.P.R., Valente, R.A.F., Grácio, J.J.: On
the use of a reduced enhanced solid-shell (RESS) element for sheet forming simula-
tions. International Journal of Plasticity 23, 490–515 (2007)
7
Manufacturing Seamless Reservoirs by Tube Forming
Luis M. Alves (1), Pedro Santana (2), Nuno Fernandes (2), and Paulo A.F. Martins (1)
(1) IDMEC, Instituto Superior Técnico, Universidade Técnica de Lisboa,
Av. Rovisco Pais s/n, 1049-001 Lisboa, Portugal
[email protected], [email protected]
(2) OMNIDEA, Aerospace Technology and Energy Systems, Tv. António Gedeão,
9, 3510-017 Viseu, Portugal
[email protected], [email protected]
7.1 Introduction
Large-size reservoirs, like silos and tanks, are usually fabricated from curved steel
panels joined by circumferential and meridional welds. The presence of welds
adds defects, residual stresses and geometrical imperfections due to joint mismatching between panels, which may lead to a reduction in the overall performance, namely in the buckling strength [1].
Medium-size reservoirs (diameters up to 1 m) are fabricated by joining panels
or, alternatively, by multiple-stage fabrication processes. For instance, the central
cylindrical section of medium-size cylindrical reservoirs can be fabricated by
254 L.M. Alves et al.
rolling a sheet into a cylindrical surface and then joining the two ends by meri-
dional welding [2, 3], while medium-size spherical reservoirs can be fabricated in
two-half shells by deep drawing or spinning and then joined by circumferential
welding [4].
However, conventional fabrication processes relying on panel joining or two-stage manufacturing technologies are only suitable for producing single units or small numbers of reservoirs, because they involve long production lead times and are usually not appropriate for fabricating reservoirs in materials other than steel. This prevents conventional fabrication processes from meeting the challenges imposed by the increased demand for small reservoirs in a wide variety of applications, such as anaesthetic and analgesic medical systems, supplemental and emergency oxygen for patients, scuba diving tanks, high altitude ‘oxygen aid’ vessels, compressed gas reservoirs for transportation systems, high pressure gas storage systems for aeronautical and space applications and compressed air tanks for paintball and other leisure equipment, among others.
Despite recent efforts in fabricating small-size reservoirs from stainless steel and aluminium by casting, conventional or hydromechanical deep drawing [5], as well as explosive forming [6], there is still a need for new manufacturing technologies able to produce medium to large batches of small-size reservoirs in a wide range of materials. This is because casting is limited to simple shapes (e.g. cylinder liners) and its operating costs demand very high production rates, explosive forming suffers from industrialisation problems and conventional or hydromechanical deep drawing, although a flexible solution, requires subsequent joining operations to assemble the produced half-shells into a reservoir by means of welding (tungsten inert gas or, for space applications, more frequently electron beam welding).
This chapter focuses on the above-mentioned problems and presents an innova-
tive manufacturing process for producing seamless, low-cost, axisymmetric metal-
lic reservoirs by tube forming (Fig. 7.1).
Fig. 7.1. Spherical and cylindrical reservoirs made from aluminium AA7050 and AA6063
fabricated by the proposed manufacturing process.
Manufacturing Seamless Reservoirs by Tube Forming 255
Fig. 7.2. Forming a tubular preform into a seamless cylindrical reservoir with profile
shaped ends. The photograph in (b) shows the preform and the reservoir with semi-
ellipsoidal ends and the photograph in (c) shows successful and non-successful modes
of deformation that were obtained when forming the reservoir with and without internal
mandrels.
a specific outside radius of the tube r0, and its profile defines the geometry of the reservoir. The container constrains material from flowing outwards in order to avoid the occurrence of buckling and helps to minimize errors due to misalignment between the tubular preforms and the individual dies.
internal support to the tubular preform during plastic deformation in order to avoid
collapse by wrinkling and local instability at the equatorial region. Figure 7.3
shows an exploded view drawing and a picture of the tool with its major active
components.
Fig. 7.3. Tool for fabricating seamless reservoirs with profile shaped ends from tubular
preforms.
Fig. 7.4. Tubular preforms with internal mandrels made from (a) polyvinyl chloride (PVC),
(b) low melting point alloy MCP70 and an (c) aluminium alloy.
Typical commercial alloys utilized in mandrels made from low melting point alloys are MCP70 and MCP137, with melting temperatures of 70 ºC and 137 ºC, respectively. The mandrels are cast and their edges deburred with slotted angles (around 15º, Fig. 7.4b) in order to avoid premature flow of the low melting point alloy into the polar openings during forming, which would greatly increase the compression force required at the end of the process and give rise to undesirable material flow at the poles.
When forming reservoirs from stainless steel AISI 316, it is sometimes necessary to employ internal mandrels made from aluminium alloys (Fig. 7.4c).
7.2.3 Lubrication
Forming seamless metallic reservoirs by means of the proposed manufacturing
process is consistent with the three basic mechanisms governing material flow
behaviour in tube processing: (i) bending, (ii) compression along the
circumferential direction and (iii) friction. Bending takes place where the
tubular preform contacts the dies, while circumferential compression and
friction develop gradually as the preform deforms against the profile shaped dies.
Regarding friction, previous research work on tube forming has shown that
operating parameters giving rise to successful modes of deformation can easily
lead to unsuccessful modes of deformation if lubrication is absent or simply
inappropriate [7]. For the process development described in this chapter,
lubrication with zinc stearate proved efficient over a wide range of operating
conditions.
Figure 7.5 presents the stress-strain curves for the aluminium and low melting
point alloys. Two different stress responses with increasing strain are observed:
(i) the aluminium alloys exhibit strain hardening, while (ii) the low melting
point alloys show evidence of strain softening for strains above 0.2.
Fig. 7.5. True stress-strain curves obtained from conventional and stack compression tests
of aluminium alloys AA6063-T0, AA7050-T0 and low melting point alloys MCP70 and
MCP137.
Fig. 7.6. Experimental evolution of the load–displacement curve for the axial
compression of short and long thin-walled AA6063-T0 tubes between flat dies.
The experimental value of the critical instability load for the occurrence of lo-
cal buckling in thin-walled tubes subjected to axial loading can be determined by
compressing tubular specimens with different initial lengths between flat dies.
Figure 7.6 shows the critical instability load as a function of the displacement
of the upper flat die. As can be seen, the load increases sharply from zero and
local buckling occurs upon reaching a critical experimental value of 27 kN for
AA6063-T0 aluminium tubes with 60 mm diameter and 2 mm wall thickness. The
picture inset in Fig. 7.6 shows that a diamond-shaped instability prevails over
conventional axisymmetric instability when the ratio of initial tube length to
diameter is small (say, close to 1).
be performed with the finite element flow formulation and enabled the authors to
utilize the in-house computer program i-form that has been extensively validated
against experimental measurements of metal-forming processes since the end of
the 1980s [8].
The finite element flow formulation giving support to i-form is built upon the
following weak variational form, expressed entirely in terms of an arbitrary
variation in the velocity,

$$\delta \Pi = \int_{V} \bar{\sigma}\, \delta \dot{\bar{\varepsilon}}\, dV + K \int_{V} \dot{\varepsilon}_V\, \delta \dot{\varepsilon}_V\, dV - \int_{S_T} t_i\, \delta u_i\, dS = 0 \qquad (7.1)$$
where V is the control volume bounded by the surfaces $S_U$ and $S_T$ where
velocity and traction are prescribed, respectively, and K is a large positive
constant penalizing the volumetric strain rate component $\dot{\varepsilon}_V$
in order to enforce incompressibility.
The utilisation of the flow formulation based on the penalty function method
offers the advantage of preserving the number of independent variables, because
the average stress $\sigma_m$ can be computed after the solution is reached
through,

$$\sigma_m = K \dot{\varepsilon}_V \qquad (7.2)$$
The effective stress and the effective strain rate are defined, respectively, by,

$$\bar{\sigma} = \sqrt{\tfrac{3}{2}\, \sigma'_{ij} \sigma'_{ij}} \qquad (7.3)$$

$$\dot{\bar{\varepsilon}} = \sqrt{\tfrac{2}{3}\, \dot{\varepsilon}'_{ij} \dot{\varepsilon}'_{ij}} \qquad (7.4)$$

where $\sigma'_{ij}$ is the deviatoric stress tensor and $\dot{\varepsilon}'_{ij}$
is the deviatoric strain-rate tensor.
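As a numerical illustration (not from the chapter, with hypothetical tensor values), the effective stress and effective strain rate of Eqs. (7.3)–(7.4) can be evaluated from any symmetric stress and strain-rate tensors as follows:

```python
import numpy as np

def deviator(t):
    """Deviatoric part of a 3x3 symmetric tensor."""
    return t - np.trace(t) / 3.0 * np.eye(3)

def effective_stress(sigma):
    """sigma_bar = sqrt(3/2 * s'_ij s'_ij), Eq. (7.3)."""
    s = deviator(sigma)
    return np.sqrt(1.5 * np.tensordot(s, s))

def effective_strain_rate(eps_dot):
    """eps_bar_dot = sqrt(2/3 * e'_ij e'_ij), Eq. (7.4)."""
    e = deviator(eps_dot)
    return np.sqrt(2.0 / 3.0 * np.tensordot(e, e))

# Uniaxial stress state: the effective stress recovers the axial stress.
sigma = np.diag([200.0, 0.0, 0.0])   # MPa, hypothetical values
print(effective_stress(sigma))       # close to 200.0
```

The factors 3/2 and 2/3 make both measures work-conjugate, so that $\bar{\sigma}\dot{\bar{\varepsilon}} = \sigma'_{ij}\dot{\varepsilon}'_{ij}$ for an incompressible material.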
The spatial discretization of the weak variational form by means of M finite
elements with constant pressure interpolation, linked through N nodal points,
results in the following set of nonlinear equations [8, 16],
$$\sum_{m=1}^{M} \left\{ \int_{V^m} \frac{\bar{\sigma}}{\dot{\bar{\varepsilon}}}\, \delta \mathbf{v}^T \mathbf{K}\, \mathbf{v}\, dV^m + K^m \int_{V^m} \delta \mathbf{v}^T (\mathbf{C}^T\mathbf{B})^T (\mathbf{C}^T\mathbf{B})\, \mathbf{v}\, dV^m - \int_{S_T^m} \delta \mathbf{v}^T \mathbf{N}^T \mathbf{T}\, dS^m \right\} = 0 \qquad (7.5)$$
where,
$$\mathbf{P} = \int_{V^m} \frac{1}{\dot{\bar{\varepsilon}}^{\,n-1}}\, \mathbf{K}\, dV^m \qquad (7.7)$$

$$\mathbf{K} = \mathbf{B}^T \mathbf{D}\, \mathbf{B} \qquad (7.8)$$

$$\mathbf{Q} = \int_{V^m} (\mathbf{C}^T\mathbf{B})^T (\mathbf{C}^T\mathbf{B})\, dV^m \qquad (7.9)$$

$$\mathbf{F} = \int_{S_T^m} \mathbf{N}^T \mathbf{T}\, dS^m \qquad (7.10)$$
The symbol N denotes the matrix containing the shape functions of the element,
B is the velocity-strain rate matrix, C is the matrix form of the Kronecker symbol
and D is the matrix relating the deviatoric stresses to the strain rates
according to the rate form of the Levy-Mises constitutive equations.
The nonlinear set of Eq. 7.6 derived from the flow formulation based on the
penalty function approach can be efficiently solved by a numerical technique
resulting from the combination of the direct iteration and Newton–Raphson
methods.
The direct iteration method, which considers the Levy–Mises constitutive equa-
tions to be linear (and therefore constant) during each iteration, is to be preferen-
tially utilized for generating the initial guess of the velocity field required by the
Newton-Raphson method. The Newton-Raphson method is an iterative procedure
based on a Taylor linear expansion of the residual force vector R(v) of the nonlin-
ear set of Eq. 7.6,
$$R\!\left(\mathbf{v}^n\right) \cong R^n = R^{n-1} + \left[\frac{\partial R}{\partial \mathbf{v}}\right]_{n-1} \Delta \mathbf{v}^n = 0 \qquad (7.12)$$
where Δv is the first-order correction of the velocity field, the symbol n
denotes the current iteration number, and α is a parameter that controls the
magnitude of the velocity correction term Δv. This procedure is only
conditionally convergent, but converges quadratically in the vicinity of the
exact solution.
The aforementioned numerical techniques are designed to minimise the residual
force vector R(v) to within a specified tolerance; control and assessment of
convergence are performed by means of appropriate convergence criteria.
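The damped Newton-Raphson idea described above can be sketched generically; this is a minimal illustration (not the i-form implementation), with a hypothetical one-variable residual standing in for R(v):

```python
import numpy as np

def damped_newton(residual, jacobian, v0, alpha=1.0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration with a parameter alpha controlling the
    magnitude of the correction dv, mirroring the chapter's conditionally
    convergent (locally quadratic) scheme."""
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        R = residual(v)
        if np.linalg.norm(R) < tol:
            break
        dv = np.linalg.solve(jacobian(v), -R)   # linearized R + J dv = 0
        v = v + alpha * dv                      # damped velocity update
    return v

# Hypothetical residual: R(v) = v^3 - 8, root at v = 2.
R = lambda v: np.array([v[0]**3 - 8.0])
J = lambda v: np.array([[3.0 * v[0]**2]])
v = damped_newton(R, J, [3.0], alpha=0.9)
print(v[0])  # close to 2.0
```

In practice the initial guess `v0` would come from a few direct-iteration sweeps, as the text indicates, since Newton's method is only conditionally convergent far from the solution.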
The numerical evaluation of the volume integrals included in Eq. (7.6) is per-
formed by means of a standard discretization procedure. Due to the rotational
symmetry and as no anisotropy effects were taken into account, the finite element
models set up to replicate the experimental test cases were accomplished by
discretizing only the cross-section of the tubular preform and mandrel by means of
axisymmetric quadrilateral elements (Fig. 7.7).
Fig. 7.7. Finite element model of the manufacturing process. Discretization of the preform
and mandrel by means of quadrilateral elements.
$$\sum_{m=1}^{M} \left\{ \int_{V^m} \frac{\bar{\sigma}}{\dot{\bar{\varepsilon}}}\, \mathbf{K}\, \mathbf{v}\, dV^m + K^m \int_{V^m} (\mathbf{C}^T\mathbf{B})^T (\mathbf{C}^T\mathbf{B})\, \mathbf{v}\, dV^m - \int_{S_T^m} \mathbf{N}^T \mathbf{T}\, dS^m + \int_{S_{FR}^m} m k\, \frac{2}{\pi} \tan^{-1}\!\left(\frac{|\mathbf{v}_r|}{v_0}\right) \mathbf{N}^T \frac{\mathbf{v}_r}{|\mathbf{v}_r|}\, dS^m \right\} = 0 \qquad (7.14)$$
$$\tau_f = m k \left\{ \frac{2}{\pi} \arctan\!\left(\frac{|u_r|}{u_0}\right) \right\} \frac{u_r}{|u_r|} \qquad (7.15)$$
where $v_0$ is an arbitrary value within the range from $10^{-3}$ to $10^{-4}$
in order to avoid numerical difficulties.
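The regularization in Eq. (7.15) can be illustrated with a short sketch (hypothetical values of the friction factor m and shear yield stress k): the friction stress ramps smoothly through zero at vanishing sliding velocity and saturates at ±mk for large sliding velocities.

```python
import math

def friction_stress(u_r, m, k, u0=1e-3):
    """Regularized friction law of Eq. (7.15): tau_f tends to +/- m*k
    for |u_r| >> u0 and crosses zero smoothly at u_r = 0, avoiding the
    discontinuity of the classical friction law."""
    if u_r == 0.0:
        return 0.0
    return m * k * (2.0 / math.pi) * math.atan(abs(u_r) / u0) * (u_r / abs(u_r))

# With m = 0.1 and k = 100 MPa the limiting friction stress is 10 MPa.
print(friction_stress(1.0, m=0.1, k=100.0))   # slightly below +10
print(friction_stress(-1.0, m=0.1, k=100.0))  # slightly above -10
print(friction_stress(0.0, m=0.1, k=100.0))   # 0.0
```

This is why the small constant $v_0$ (or $u_0$) matters: it sets the width of the smooth transition and thereby conditions the Newton iterations near sticking points.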
The contact algorithm implemented in the finite element computer program
solves the interaction between the tubular specimens and tooling by means of an
explicit direct method. The algorithm requires the discretization of the tool
surface into contact–friction linear elements and is based on two fundamental
procedures: (i) identification of the nodal points located on the boundary of
the mesh and (ii) determination of the minimum increment of time Δtmin for a
free nodal point located on the boundary of the tubular preform to come into
contact with the surface of the tool. The minimum increment of time Δtmin can
be computed in accordance with the procedure described elsewhere [8].
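The actual procedure for Δtmin is given in [8]; a simplified, illustrative version (my own sketch, not the published algorithm) computes, for a free node moving with constant velocity, the time to reach a straight 2D tool segment. Δtmin would then be the minimum of these times over all boundary nodes.

```python
def time_to_contact(p, v, a, b):
    """Time for a node at p moving with constant velocity v to reach the
    straight tool segment a-b (2D); returns None if the node misses it."""
    ex, ey = b[0] - a[0], b[1] - a[1]      # segment direction
    nx, ny = ey, -ex                       # one of the two segment normals
    vn = v[0] * nx + v[1] * ny
    if vn == 0.0:
        return None                        # moving parallel to the segment
    # Solve (p + t*v - a) . n = 0 for the crossing time t
    t = -((p[0] - a[0]) * nx + (p[1] - a[1]) * ny) / vn
    if t < 0.0:
        return None                        # moving away from the segment
    # Keep the hit only if the contact point lies within the segment
    hx, hy = p[0] + t * v[0], p[1] + t * v[1]
    s = ((hx - a[0]) * ex + (hy - a[1]) * ey) / (ex * ex + ey * ey)
    return t if 0.0 <= s <= 1.0 else None

# Node at (0, 1) moving straight down onto the segment (-1,0)-(1,0).
print(time_to_contact((0.0, 1.0), (0.0, -1.0), (-1.0, 0.0), (1.0, 0.0)))  # 1.0
```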
The contact interface between tubular preforms and recyclable mandrels was
modelled by means of a nonlinear procedure based on a penalty approach. The
approach is built upon the normal gap velocity $g_n^k$ for a nodal point k
contacting an element side ab of the adjacent element (Fig. 7.8a),
where subscript n indicates normal direction and β and 1-β are the fractions of the
element side lab defining the velocity projection of nodal point k on the element
side ab. The penalty contact approach adds the following extra term to Eq. (7.1),
$$\delta \Pi_c = \gamma \sum_{k=1}^{N_k} g_n^k\, \delta g_n^k \qquad (7.17)$$

where $N_k$ is the total number of contacting points and $\gamma$ is a large
positive constant enforcing the normal gap velocity $g_n^k \geq 0$ in order to
avoid penetration.
Fig. 7.8. Contact between deformable tube and mandrels. (a) Modelling the contact be-
tween nodal point k of the tubular preform (or mandrel) and element side ab of the mandrel
(or tubular preform) and (b) schematic illustration of the modifications that are performed
on the global stiffness matrix of the finite element model due to the contact between nodal
point k and element side ab.
The extra term in Eq. (7.17) gives rise to additional contact stiffness terms
$K_c$ in the original stiffness matrix $\bar{\sigma} \mathbf{P} + K^m \mathbf{Q}$
resulting from the minimization of Eq. (7.1),

$$K_c^{ijmn} = \gamma\, \alpha_m \alpha_n\, n_i n_j \qquad (7.18)$$

$$(i, j) = 1, 2 \qquad (m, n) = k, a, b \qquad \alpha_k = 1,\; \alpha_a = -\beta,\; \alpha_b = -(1 - \beta)$$
The positions ijmn of the contact stiffness terms in the overall stiffness matrix
are schematically illustrated in Fig. 7.8b for typical skyline storage. It is worth
noting that skyline storage usually needs to be expanded during numerical simula-
tion in order to include new contacting pairs. The penalty contact method has the
advantage of being purely geometrically based and therefore no additional degrees
of freedom have to be considered as in case of alternative approaches based on
Lagrange multipliers.
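A minimal sketch of Eq. (7.18), with hypothetical values of β, the normal and the penalty constant γ, shows how the contact stiffness couples the 2×2 blocks of the triplet (k, a, b):

```python
import numpy as np

def contact_stiffness(beta, normal, gamma=1.0e6):
    """Contact stiffness blocks K_c^{ijmn} = gamma * alpha_m * alpha_n * n_i n_j
    of Eq. (7.18) for the node triplet (k, a, b); beta is the projection
    fraction of node k on element side ab."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)                  # unit normal of side ab
    alphas = {"k": 1.0, "a": -beta, "b": -(1.0 - beta)}
    nn = np.outer(n, n)                        # 2x2 block n_i n_j
    return {(m, q): gamma * a_m * a_q * nn
            for m, a_m in alphas.items() for q, a_q in alphas.items()}

# Node k projected at beta = 0.25 along a horizontal element side.
Kc = contact_stiffness(beta=0.25, normal=[0.0, 1.0])
print(Kc[("k", "k")])   # gamma * n n^T
print(Kc[("k", "a")])   # scaled by alpha_a = -beta
```

These nine 2×2 blocks are exactly the entries that must be accommodated in the expanded skyline storage when a new contacting pair appears.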
The numerical simulation of the manufacturing process was accomplished through
a succession of displacement increments, each one modelling approximately 0.1%
of the initial height of the test specimens. No remeshing operations were
performed and the overall CPU time for a typical analysis containing around
2500 elements was below 5 min on a standard laptop computer.
Fig. 7.9. Spherical reservoir fabricated by means of the proposed manufacturing process.
(a) Spherical reservoir and internal mandrel after being formed and (b) finite element pre-
dicted geometry at the end of the process.
Subsequent removal of the mandrel by melting, while leaving the shell intact,
installation of the upper valve and the lower end cap, polishing and painting, re-
sults in the spherical reservoir depicted in Fig. 7.1.
Fig. 7.10. (a) Experimental and (b) finite element predicted collapse by local
buckling due to compressive instability in the axial direction.
Figure 7.10 shows the specimen and the predicted finite element geometry resulting
from an attempt to shape a tubular preform into a spherical reservoir by means of the
proposed manufacturing process without using an internal mandrel. As seen, formabil-
ity is limited by local buckling due to compressive instability in the axial direction.
7.5.2 Formability
The technique utilised for obtaining the experimental strain loading paths in the
principal strain space involved electrochemical etching of a grid of circles with
1 mm initial radius on the surface of the preforms before forming and measuring
the major and minor axes of the ellipses that result from shaping the tubes into
spherical shells. The experimental values of the in-plane strains were determined
from (Fig. 7.11),
$$\varepsilon_1 = \ln\!\left(\frac{a}{2R}\right) \qquad \varepsilon_2 = \ln\!\left(\frac{b}{2R}\right) \qquad (7.19)$$
where the symbol R represents the original radius of the circle and the symbols a
and b denote the major and minor axes of the ellipse.
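Eq. (7.19) is straightforward to apply; as a worked illustration with hypothetical measured axes (not data from the chapter):

```python
import math

def grid_strains(a, b, R):
    """In-plane principal strains from a deformed grid circle, Eq. (7.19):
    a and b are the measured major and minor axes of the ellipse and R is
    the original radius of the circle (so 2R is its initial diameter)."""
    return math.log(a / (2.0 * R)), math.log(b / (2.0 * R))

# Hypothetical measurement: a circle of 1 mm initial radius deformed
# into a 2.2 mm x 1.4 mm ellipse (axes measured tip to tip).
e1, e2 = grid_strains(a=2.2, b=1.4, R=1.0)
print(round(e1, 3), round(e2, 3))  # 0.095 -0.357
```

A positive ε1 with a larger negative ε2, as here, is the signature of the near-pure-compression loading path discussed below for the circumferential direction.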
The in-plane components of strain resulting from the application of the
above-mentioned procedure at different locations taken along the meridional
direction of the reservoir, plotted in the principal strain space, allow us to
determine the strain loading path resulting from the gradual deformation of the
tubular preform against the dies (Fig. 7.12).
The principal strain space is of major importance in the analysis of forming
processes because it makes it possible to foresee whether a loading path
resulting from a manufacturing process is likely to produce admissible or
inadmissible modes of deformation. In case of the proposed manufacturing
process, measurements and finite element
Fig. 7.11. (a) Grid of circles that were utilized for obtaining the local values of strain and
(b) schematic deformation of a circle into an ellipse during the forming process.
predicted values of strain allow us to conclude that the strain loading path is
similar to that of pure compression, thus lying close to the onset of wrinkling
(refer to the photograph included in Fig. 7.12, which shows a spherical
reservoir with wrinkles at the upper end).
Under these conditions, the internal mandrel plays a key role in the proposed
manufacturing process because it is capable of avoiding collapse by local
buckling due to compressive instability in the axial direction and of
preventing the strain loading path from approaching the onset of wrinkling.
These conclusions apply to reservoir shapes other than spherical, as will be
seen in the following section of this chapter.
[Plot of meridional strain versus circumferential strain, with FEM,
experimental, compression test and tensile test data.]
Fig. 7.12. Experimental and finite element predicted strain loading path in the principal
strain space resulting from forming a tubular preform into a spherical reservoir.
[Plot of load (kN) versus displacement (mm), with FEM and experimental curves.]
Fig. 7.13. Experimental and finite element predicted evolution of the load-displacement
curve.
7.6 Applications
High pressure reservoirs are fundamental to several industries. In terrestrial
applications their role is important in markets such as transportation, where
they are employed for the storage of compressed natural gas (over 11 million
vehicles worldwide) [17] and hydrogen (hailed as the 'fuel of the future',
which according to the DoE should translate into nearly 150 million
in-circulation vehicles by 2050) [18]. Besides transportation, high pressure
reservoirs are used for scuba diving, professional paintball high-pressure
nitrogen bottles, etc.
Fig. 7.14. Forming tubular preforms into cylindrical reservoirs with
semi-ellipsoidal ends. Discretization of the tubular preform and mandrel (if
present) by finite elements and computed predicted geometry at the end of the
process.
Fig. 7.15. Forming a tubular preform into a cylindrical reservoir with semi-ellipsoidal ends.
Finite element predicted distribution of effective stress (MPa) after 45 mm and 90 mm dis-
placement of the upper profile shaped die.
Figure 7.16 shows the finite element predicted distribution of average stress σm
at the end of the forming process. As seen, larger compressive values of the
average stress are found at the semi-ellipsoidal tubular ends and justify the
need to employ internal mandrels for avoiding collapse by wrinkling along the
circumferential direction.
In contrast, the polar openings of the reservoir and the opposite regions
located at the internal mandrel show evidence of tensile average stresses.
Fig. 7.16. Forming a tubular preform into a cylindrical reservoir with semi-ellipsoidal ends.
Finite element predicted distribution of average stress (MPa) at the end of the forming
process.
and, as seen in the figure, the thickness variation along the cross-section of
the reservoirs shows significant growth as the circumferential perimeter
decreases, with values above 150% at the open poles.
The initial flat region of the graphic corresponds to nearly unstrained material
placed in the cylindrical region of the reservoir. The final thickness in this region
of the reservoir remains practically identical to the initial thickness of the preform.
The subsequent slight decrease in the variation of thickness is related to the por-
tion of the tubular preform that starts to bend in order to match the contour of the
die. Measurements and numerical predictions can even yield negative values, re-
sulting in local thicknesses smaller than that of the original preform, as can be ob-
served at 40 mm distance from the equatorial region.
The last part of the graphic (say, above 45 mm distance from the equator)
shows a significant growth rate in thickness variation. This is due to compression
in the circumferential direction and the significant increase of thickness at the po-
lar openings of the reservoir being very advantageous for subsequent installation
of valves and end caps by mechanical fixing or welding.
[Plot of thickness variation (%) versus distance from the equatorial region
(mm), with FEM and experimental curves.]
Fig. 7.17. Experimental and finite element predicted variation of thickness in the cross-
section of a cylindrical reservoir with semi-ellipsoidal ends.
$$I_p = \frac{pV}{m} \qquad (7.20)$$

where p is pressure, V is volume and m is the tank's mass. The performance
index is usually presented in J/kg, reflecting the fact that it measures a
stored energy per unit mass.
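A quick worked example of Eq. (7.20), with hypothetical tank figures (not from the chapter): with p in Pa and V in m³, the product pV is an energy in joules, so Ip comes out directly in J/kg.

```python
def performance_index(p_pa, volume_m3, mass_kg):
    """I_p = p V / m, Eq. (7.20). Since Pa * m^3 = J, the index is the
    stored energy per unit mass of the tank, in J/kg."""
    return p_pa * volume_m3 / mass_kg

# Hypothetical tank: 30 MPa service pressure, 50 litre volume, 20 kg mass.
print(performance_index(30.0e6, 0.050, 20.0))  # 75000.0 J/kg
```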
The typical construction type of storage vessels with high values of the per-
formance index is a hybrid solution in which a metallic liner is wrapped around
with epoxy-embedded composite fibre in an arrangement usually known as
‘composite overwrapped pressure vessels’ (COPVs). The metallic liner provides
mainly the shape, gas tightness and the tank’s toughness, while the composite
overwrapping provides the strength required to withstand the tank’s internal
pressure.
The application here under discussion, xenon storage for electric propulsion in
satellites, requires the production of a metallic liner for a COPV, matching the per-
formance of current state-of-the-art COPVs with significant (over 50%) cost and
manufacturing time reductions versus conventional manufacturing technologies.
For this particular application, aluminium alloys best fit the requirements of fa-
bricating seamless high pressure reservoirs with a high strength-to-weight ratio,
adequate toughness, low cost and considerable availability in seamless extruded
tube form, including a multiplicity of diameters and thicknesses. It is important to
mention that this manufacturing technique cannot be employed without access to
seamless extruded tube. Also important, from a manufacturing point of view, is
[Schematic labels: thruster (anode), cathode A, cathode B, pressure regulation,
power processing unit and regulation electronics.]
Fig. 7.18. Electric–ionic propulsion system layout in ESA’s Smart-1 probe. The xenon tank
is of US manufacture and a candidate for European replacement.
the very high heat-treated-to-annealed strength ratio (see Fig. 7.19), which
allows the forming process to be performed in the annealed condition, in turn
providing reduced polar apertures and good surface roughness characteristics
(important for sealing) while maintaining a relatively small press force. This
is not always easy to achieve with cold forming processes, while the subsequent
heat treatment, applied to the already formed spherical pressure vessels,
relieves the residual stresses remaining from the cold forming operation.
As is clear from Fig. 7.19, only aluminium alloys are capable of multiplying
their yield strength by a factor of 3 to 4 between the annealed and
heat-treated conditions, especially when compared to alternative metals for
this application: titanium alloys (mainly the α-β Ti-6Al-4V alloy, the current
industry benchmark), Inconel 718 and AISI 316 stainless steel.
Figure 7.20 shows a cylindrical reservoir with hemispherical ends made of
AISI 316 stainless steel that was successfully formed with an internal mandrel,
together with a similar AISI 316 reservoir that was not successfully formed
when the internal mandrel was made of MCP137.
[Bar chart of yield strength (MPa) in annealed ('O') versus treated conditions
for AA 6061 (T6), AA 7050 (T73X), AA 2219 (T851), AISI 316 (30% cold reduction)
and Ti-6-4 (STA).]
Fig. 7.19. Comparison, for different metallic alloys, of yield strength in annealed vs. treated
conditions. Aluminium alloys exhibit the smaller annealed strengths and greatest response
to treatment.
Fig. 7.20. Successful and non-successful forming of cylindrical reservoirs with hemispherical
ends made from AISI 316 stainless steel.
7.7 Conclusions
The proposed manufacturing process was developed to fit the specific
requirements of the InnovGas project, carried out between the Portuguese SME
Omnidea and the European Space Agency, which required substantial development
of manufacturing technologies capable of producing seamless high pressure
reservoirs. The process extends the tools and techniques commonly utilized in
tube forming in order to include two innovative features: the utilization of
sharp edge dies and of internal, recyclable mandrels made from low melting
point alloys.
Sharp edge dies and internal mandrels proved crucial to fabricate spherical and
cylindrical reservoirs from both aluminium and stainless steel, in a single forming
operation without the risk of collapse by local buckling and/or wrinkling. Local
buckling and wrinkling are associated with compressive instability in the axial and
circumferential directions during the forming process. Wrinkling can also be attri-
buted to the strain loading paths being close to uniaxial compression as has been
experimentally observed and numerically predicted by means of finite element
analysis.
The increase in thickness at the poles is useful for installing devices and
fixing the outlet ports; moreover, because overwrapping is much more complex
near the polar regions, the metallic liner alone might be required to withstand
the internal pressure there, thus requiring greater thickness. In addition, the
ultimate forming load for producing seamless reservoirs is small enough to
enable the process to be industrialized on a low-cost, small-capacity press.
References
[1] Hübner, A., Teng, J.G., Saal, H.: Buckling behaviour of large steel cylinders with pat-
terned welds. International Journal of Pressure Vessels and Piping 83, 13–26 (2006)
[2] Kawahara, G., McCleskey, S.F.: Titanium lined, carbon composite overwrapped pres-
sure vessel. In: 32nd AIAA/ASME/SAE/ASEE Joint Propulsion Conference, Lake
Buena Vista, FL, USA (1996)
[3] Teng, J.G., Lin, X.: Fabrication of small models of large cylinders with extensive
welding for buckling experiments. Thin-Walled Structures 43, 1091–1114 (2005)
[4] Lee, H.S., Yoon, H.S., Yoon, J.H., Park, J.S., Yi, Y.M.: A study on failure characteristic
of spherical pressure vessel. Journal of Materials Processing Technology, 164–165,
882–888 (2005)
[5] Ostwald, P., Muñoz, J.: Manufacturing processes and systems. John Wiley & Sons,
New York (1997)
[6] Fengman, H., Zheng, T., Ning, W., Zhiyong, H.: Explosive forming of thin-wall
semi-spherical parts. Materials Letters 45, 133–137 (2000)
[7] Rosa, P.A.R., Alves, L.M., Martins, P.A.F.: Experimental and numerical modelling of
tube end forming processes. In: Davim, J.P. (ed.) Finite Element Methods in
Manufacturing Processes, pp. 93–136. ISTE–Wiley (2010)
[8] Alves, M.L., Rodrigues, J.M.C., Martins, P.A.F.: Simulation of three-dimensional
bulk forming processes by the finite element flow formulation. Modelling and Simu-
lation in Materials Science and Engineering – Institute of Physics 11, 803–821 (2003)
[9] Alves, L.M., Martins, P.A.F., Pardal, T.C., Almeida, P.J., Valverde, N.M.: Plastic de-
formation technological process for production of thin-wall revolution shells from tu-
bular billets. Patent request no. PCT/PT2009/000007, European Patent Office (2009)
[10] Everhart, M.C., Stahl, J.: Reusable shape memory polymer mandrels. In: Proceedings
of the SPIE - The International Society for Optics and Photonics, vol. 5762, pp. 27–
34 (2005)
[11] Alves, L.M., Pardal, T.C.D., Martins, P.A.F.: Nosing thin-walled tubes into axisym-
metric seamless reservoirs using recyclable mandrels. Journal of Cleaner Produc-
tion 18, 1740–1749 (2010)
[12] Alves, L.M., Nielsen, C.V., Martins, P.A.F.: Revisiting the Fundamentals and Capa-
bilities of the Stack Compression Test. Experimental Mechanics (in press, 2011)
[13] Alexander, J.M.: An approximate analysis of the collapse of thin cylindrical shells
under axial loading. Quarterly Journal of Mechanics and Applied Mathematics 13,
10–15 (1960)
[14] Allan, T.: Investigation of the behaviour of cylindrical tubes subject to axial compres-
sive forces. Journal of Mechanical Engineering Science 10, 182–197 (1968)
[15] Rosa, P.A.R., Rodrigues, J.M.C., Martins, P.A.F.: External inversion of thin-walled
tubes using a die: experimental and theoretical investigation. International Journal of
Machine Tools and Manufacture 43, 787–796 (2003)
[16] Kobayashi, S., Oh, S.I., Altan, T.: Metal forming and the finite element method. Ox-
ford University Press, New York (1989)
[17] IANGV International Association for Natural Gas Vehicles,
http://www.iangv.org
[18] Report to Congress, Effects of a Transition to a Hydrogen Economy on Employment
in the United States, Department of Energy, US (2008)