Advanced Manufacturing Systems and Technology
Series Editors:
The Rectors of CISM
Sandor Kaliszky - Budapest
Mahir Sayir - Zurich
Wilhelm Schneider - Wien
The Secretary General of CISM
Giovanni Bianchi - Milan
Executive Editor
Carlo Tasso - Udine
ADVANCED MANUFACTURING
SYSTEMS AND TECHNOLOGY
EDITED BY
E. KULJANIC
UNIVERSITY OF UDINE
ISBN 978-3-211-82808-3
DOI 10.1007/978-3-7091-2678-3
ORGANIZERS
University of Udine - Faculty of Engineering - Department of
Electrical, Managerial and Mechanical Engineering - Italy
Centre International des Sciences Mecaniques, CISM - Udine - Italy
University of Rijeka - Technical Faculty - Croatia
CONFERENCE VENUE
CISM - PALAZZO DEL TORSO
Piazza Garibaldi, 18 - UDINE
PREFACE
E. Kuljanic
HONOUR COMMITTEE
S. CECOTTI, President of Giunta Regione Autonoma Friuli-Venezia Giulia
M. STRASSOLDO DI GRAFFEMBERGO, Rector of the University of Udine
G. BIANCHI, General Secretary of CISM
S. DEL GIUDICE, Dean of the Faculty of Engineering, University of Udine
J. BRNIC, Dean of the Technical Faculty, University of Rijeka
C. MELZI, President of the Associazione Industriali della Provincia di Udine
SCIENTIFIC COMMITTEE
E. KULJANIC (Chairman), University of Udine, Italy
N. ALBERTI, University of Palermo, Italy
A. ALTO, Polytechnic of Bari, Italy
P. BARIANI, University of Padova, Italy
G. BIANCHI, CISM, Udine, Italy
A. BUGINI, University of Brescia, Italy
R. CEBALO, University of Zagreb, Croatia
G. CHRISSOLOURIS, University of Patras, Greece
M.F. DE VRIES, University of Wisconsin Madison, U.S.A.
R. IPPOLITO, Polytechnic of Torino, Italy
F. JOVANE, Polytechnic of Milano, Italy
I. KATAVIC, University of Rijeka, Croatia
H.J.J. KALS, University of Twente, The Netherlands
F. KLOCKE, T.H. Aachen, Germany
W. KONIG, T.H. Aachen, Germany
F. LE MAITRE, Ecole Nationale Supérieure de Mécanique, France
E. LENZ, Technion, Israel
R. LEVI, Polytechnic of Torino, Italy
B. LINDSTROM, Royal Institute of Technology, Sweden
V. MATKOVIC, Croatian Academy of Science and Arts, Croatia
J.A. McGEOUGH, University of Edinburgh, UK
M.E. MERCHANT, IAMS, Ohio, U.S.A.
G.F. MICHELETTI, Polytechnic of Torino, Italy
B. MILCIC, INAS, Zagreb, Croatia
S. NOTO LA DIEGA, University of Palermo, Italy
J. PEKLENIK, University of Ljubljana, Slovenia
H. SCHULZ, T.H. Darmstadt, Germany
N.P. SUH, MIT, Mass., U.S.A.
H.K. TONSHOFF, University of Hannover, Germany
B.F. von TURKOVICH, University of Vermont, U.S.A.
K. UEHARA, University of Tokyo, Japan
A. VILLA, Polytechnic of Torino, Italy
ORGANIZING COMMITTEE
E. KULJANIC (Chairman)
M. NICOLICH (Secretary)
C. BANDERA, F. COSMI, F. DE BONA, M. GIOVAGNONI, F. MIANI,
M. PEZZETTA, P. PASCOLO, M. REINI, A. STROZZI, G. CUKOR
SPONSORSHIP ORGANIZATIONS
Presidente della Giunta Regione Autonoma Friuli-Venezia Giulia
Croatian Academy of Science and Arts, Zagreb
C.U.M. Community of Mediterranean Universities
SUPPORTING ORGANIZATIONS
Comitato per la promozione degli studi tecnico-scientifici
University of Udine
Pietro Rosa T.B.M. s.r.l., Maniago
CONTENTS

Preface

Trends in Manufacturing
by M. E. Merchant ... 1

by E. Kuljanic ... 23

High-Speed Machining in Die and Mold Manufacturing
by H. Schulz ... 37

by T. Mikac ... 291

Texture Evolution During Forming of the Ag 835 Alloy for Coin Production
by F. De Bona, M. Matteucci, J. Mohr, F.J. Pantenburg and S. Zelenika ... 487

Part IX Quality

Model-Based Quality Control Loop at the Example of Turning-Processes
by O. Sawodny and G. Goch ... 785

The Effect of Process Evolution on Capability Study
by A. Passannanti and P. Valenti ... 793

Measuring Quality Related Costs
by J. Mrsa and B. Smoljan ... 801
M. E. Merchant
Institute of Advanced Manufacturing Sciences, Cincinnati, OH,
U.S.A.
KEYNOTE PAPER
ABSTRACT: In the period from the beginning of organized manufacturing in the 1700s to the
1900s, increasingly disparate departmentalization in the developing manufacturing companies resulted
in a long-term evolutionary trend toward an increasingly splintered, "bits-and-pieces" type
operational approach to manufacturing. Then in the 1950s, the "watershed" event of the advent of
digital computer technology and its application to manufacturing offered tremendous promise and
potential to enable the integration of those bits-and-pieces, and thus, through computer integrated
manufacturing (CIM) to operate manufacturing as a system. This initiated a long-term technological
trend toward realization of that promise and potential. However, as the technology to do that
developed, it was discovered in the late 1980s that that technology would only live up to its full
potential if the engineering of it was integrated with effective engineering of the human-resource
factors associated with the utilization of the technology in the operation of the overall system of
manufacturing in manufacturing companies (enterprises). This new socio-technological approach to
the engineering and operation of manufacturing has resulted in a powerful new long-term trend -- one
toward realistic and substantial accomplishment of total integration of both technological and
human-resource factors in the engineering and operation of the overall system of manufacturing in
manufacturing enterprises. A second consequence of this new approach to the engineering of
manufacturing is a strong imperative for change in the programs of education of manufacturing
engineers. As a result, the higher education of manufacturing professionals is now beginning to
respond to that imperative.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
1. INTRODUCTION
In my keynote paper presented at AMST'93 I discussed broad socio-technical and specific
manufacturing long-term trends which had evolved over the years and were at work to
shape manufacturing in the 21st century. Since that time those trends have not only evolved
further, but are playing an even more active and better understood role in shaping
manufacturing. In addition, they are shaping not only manufacturing itself, but also the
education of tomorrow's manufacturing engineers. Therefore, in this paper, which is
somewhat of a sequel to my 1993 paper, we will explore the nature and implications of that
further evolution and understanding.
What has happened in these areas is strongly conditioned by all of the long-term trends in
manufacturing that have gone before, since the very beginnings of manufacturing as an
organized industrial activity. Therefore, we will begin with a brief review of those trends,
duplicating somewhat the review of these which was presented in the 1993 paper.
2. THE EARLY TREND
Manufacturing as an organized industrial activity was spawned by the Industrial Revolution
at the close of the 18th century. Manufacturing technology played a key role in this, since it
was Wilkinson's invention of a "precision" boring machine which made it possible to bore a
large cylinder to an accuracy less than "the thickness of a worn shilling". That precision was
sufficient to produce a cylinder for an invention which James Watt had conceived, but had
been unable to embody in workable form, namely the steam engine. Because of Wilkinson's
invention, production of such engines then became a reality, providing power for factories.
As factories grew in size, managing the various functions needed to carry on the operation
of a manufacturing company grew more and more difficult, leading to establishment of
functional departments within a company. However, the unfortunate result of this was that,
because communication between these specialized disparate departments was not only poor
but difficult, these departments gradually became more and more isolated from one another.
This situation finally led to a "bits-and-pieces" approach to the creation of products,
throughout the manufacturing industry.
3. A WATERSHED EVENT
Then, in the 1950s, there occurred a technological event having major potential to change
that situation, namely the invention of the digital computer. This was indeed a watershed
event for manufacturing though not recognized as such at the time. However, by the 1960s,
as digital computer technology gradually began to be applied to manufacturing in various
ways (as for example in the form of numerical control of machine tools) the potential of the
digital computer for manufacturing slowly began to be understood. It gradually began to be
recognized as an extremely powerful tool -- a systems tool -- capable of integrating
[Figure: schematic of manufacturing as a system — product design (for production), production planning (programming), production control (feedback, supervisory, adaptive optimizing), production equipment (including machine tools), production processes (removal, forming, consolidation), finished products (fully assembled, inspected and ready for use)]
technology has the potential to bring to manufacturing. The most significant of these were
found to be the following.
However, a puzzling and disturbing situation also emerged, namely, these potential benefits
were able to be realized fully by only a few pioneering companies, worldwide! The reason
why this should be so was not immediately evident. But by the late 1980s the answer to this
puzzle, found by benchmarking the pioneering companies, had finally evolved. It had
gradually become clear that while excellent engineering of the technology of a system of
manufacturing is a necessary condition for enabling the system to fully realize the potential
benefits of that technology, it is not a sufficient condition. The technology will only perform
at its full potential if the human-resource factors of the system are also simultaneously and
properly engineered. Further, the engineering of those factors must also be integrated with
the engineering of the technology. Failure to meet any of these necessary conditions defeats
the technology! In addition, it was also found that CIM systems technology is particularly
vulnerable to defeat by failure to properly engineer the human-resource factors. This fact is
particularly poignant, since that technology is, today, manufacturing's core technology.
5. ENGINEERING OF HUMAN-RESOURCE FACTORS IS INTRODUCED
Efforts to develop methodology for proper engineering of human-resource factors in
modern systems of manufacturing gradually began to be discovered and developed.
Although this process is still continuing, some of the more effective methodologies which
have already emerged and been put into practice include:
- empower individuals with the full authority and knowledge necessary to the carrying out of their responsibilities
- use empowered multi-disciplinary teams (both managerial and operational) to carry out the functions required to realize products
- empower a company's collective human resources to fully communicate and cooperate with each other.
Further, an important principle underlying the joint engineering of the technology and the
human-resource factors of modern systems of manufacturing has recently become apparent.
This can be stated as follows:
So develop and apply the technology that it will support the
user, rather than, that the user will have to support the
technology.
6. A NEW APPROACH TO THE ENGINEERING OF MANUFACTURING EMERGES
Emergence of such new understanding as that described in the two preceding sections is
resulting in substantial re-thinking of earlier concepts, not only of the CIM system, but also
of the manufacturing enterprise in general. In particular, this has led to the recognition that
these concepts should be broadened to include both the technological and the human-resource-oriented
operations of a manufacturing enterprise. Thus the emerging focus of that
concept is no longer purely technological.
This new integrated socio-technological approach to the engineering and operation of the
system of manufacturing is resulting in emergence of a powerful long-term overall trend in
world industry. That trend can be characterized as one toward realistic and substantial
accomplishment of total integration of both technological and human-resource factors in
the engineering and operation of an overall manufacturing enterprise.
The trend thus comprises two parallel sets of mutually integrated activities. The first of
these is devoted to development and implementation of new, integrated technological
approaches to the engineering and operation of manufacturing enterprises. The second is
devoted to the development and implementation of new, integrated, highly human-resource-oriented
approaches to the engineering and operation of such enterprises. To ensure
maximum success in the ongoing results of this overall endeavor, both sets of activities must
be integrated with each other and jointly pursued, hand-in-hand.
7. IMPLICATIONS FOR EDUCATION OF MANUFACTURING ENGINEERS
Because the new approach and long-term trend described above are having a revolutionary
impact on the engineering of manufacturing, these also have very considerable implications
for the education of future manufacturing engineers. Quite evidently, these professionals
must not only be educated in how to engineer today's and tomorrow's manufacturing
technology, as at present. They must now also be educated in how to engineer the human-resource
factors involved in development, application and use of that technology in practice.
Further, they must also be educated in how to effectively engineer the interactions and the
integration of the two.
The imperative that they be so educated stems from the fact that, if they are not, the
technology which they engineer will fail to perform at its full potential, or may even fail
completely. Thus, if we do not so educate them, we send these engineers out into industry
lacking the knowledge required to be successful manufacturing engineers.
The higher education of manufacturing professionals is now beginning to respond to this
new basic imperative. For example, consider the tone and content of the SME International
Conference on Preparing World Class Manufacturing Professionals held in San Diego,
California in March of this year. (It was attended by 265 persons from 27 different
countries.) The titles of some of the conference sessions are indicative of the conference's
tone and content; for instance:
8. CONCLUSION
A radical metamorphosis is now underway in the engineering and operation of
manufacturing throughout the world; much of that is still in its infancy. The main engine
driving that metamorphosis is the growing understanding that, for the engineering of
manufacturing's technologies to be successful, it must intimately include the engineering of
manufacturing's human-resource factors as well. Understanding and methodology for
accomplishing such engineering are still in early stages of development. Programs of
education of manufacturing engineers to equip them to be successful in practicing this new
approach to the engineering and operation of manufacturing are even more rudimentary at
this stage today.
However, this new approach is already beginning to show strong promise of being able to
make manufacturing enterprises far more productive and "human-friendly" than they have
ever been before.
That poses to all of us, as manufacturing professionals, an exciting challenge!
KEYNOTE PAPER
KEY WORDS: Titanium alloys, nickel-based alloys, turning, ceramics, PCD, PCBN
ABSTRACT: At present, the majority of tools used for turning titanium- and nickel-based alloys are
made of carbide. An exceptionally interesting alternative is the use of PCD, whisker-reinforced
cutting ceramic or PCBN tools. Turning nickel-based alloys with whisker-reinforced cutting
ceramics is of great interest, mainly for commercial reasons. A change from carbides to PCD for
turning operations on titanium-based alloys and to PCBN for nickel-based alloys should invariably
be considered if the advantages of using these cutting materials, e.g. higher cutting speeds, shorter
process times, longer tool lives or better surface quality, outweigh the higher tool costs.
1. INTRODUCTION
Titanium- and nickel-based alloys are the materials most frequently used for components
exposed to a combination of high dynamic stresses and high operating temperatures. They
are the preferred materials for blades, wheels and housing components in the hot sections of
fixed gas turbines and aircraft engines (Fig. 1). Current application limits are roughly 600 °C
for titanium-based alloys, 650 °C for nickel-based forging alloys and 1050 °C for nickel-based
casting alloys [1].
Because of their physical and mechanical properties, titanium- and nickel-based alloys are
among the most difficult materials to machine. Cutting operations are carried out mainly
with HSS or carbide tools. Owing to the high thermal and mechanical stresses involved,
these cutting materials must be used at relatively low cutting speeds.
[Figure: aircraft engine cross-section indicating application areas of titanium- and nickel-based alloys (low-pressure compressor, high-pressure turbine)]
[Figure: bar charts of density, thermal conductivity, Young's modulus, specific heat and thermal expansion for Ti99,7G, TiAl6V4G and Ck45V]
Fig. 2: Physical and mechanical properties of pure titanium, TiAl6V4 and Ck45 tempering steel [3]
yet come close to attaining (Fig. 2). Even high-strength steels with yield point values
of approximately 1,000 MPa only achieve about half the ratio reached by TiAl6V4 titanium
alloy [3].
One important physical property governing the machinability of titanium alloys is their low
thermal conductivity, amounting to only about 10-20 % that of steel (Fig. 2). In consequence,
only a small proportion of the generated heat is removed via the chips. As compared
to operations on Ck45 steel, some 20-30 % more heat must be dissipated via the
tool when working TiAl6V4 titanium alloy, depending on the thermal conductivity of the
cutting material (Fig. 3, top left). This results in exceptionally high thermal stresses on the
cutting tools, significantly exceeding those encountered when machining steel (Fig. 3, top
right). In terms of cutting operations on titanium alloys, this means that the cutting tools are
subjected not only to substantial mechanical stresses but also to severe thermal stress [3, 4].
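The shift in heat load can be made concrete with a toy heat balance. Only the qualitative point — less heat carried away by the chips means more heat into the tool — comes from the text; the partition fractions and the cutting power below are assumed for illustration, chosen so that the tool share comes out roughly 25 % higher for the titanium alloy, in line with the 20-30 % stated above.

```python
# Illustrative heat balance for the cutting zone (all numbers assumed).
def tool_heat_watts(cutting_power_w, chip_fraction, workpiece_fraction):
    """Heat flowing into the tool = total heat minus chip and workpiece shares."""
    return cutting_power_w * (1.0 - chip_fraction - workpiece_fraction)

P = 1500.0  # assumed total cutting power converted to heat, in W

# Assumed partitions: steel chips carry away a large share of the heat,
# titanium chips (low thermal conductivity) carry away noticeably less.
q_steel = tool_heat_watts(P, chip_fraction=0.75, workpiece_fraction=0.10)
q_ti    = tool_heat_watts(P, chip_fraction=0.71, workpiece_fraction=0.10)

print(f"steel: {q_steel:.0f} W into tool, titanium alloy: {q_ti:.0f} W into tool")
print(f"increase: {100 * (q_ti / q_steel - 1):.0f} %")
```

With these assumed fractions the tool share rises from 225 W to about 285 W, i.e. roughly the 20-30 % increase quoted in the text.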
Another characteristic feature of cutting operations on titanium alloys under conventional
cutting parameters is the formation of lamellar chips. These are caused by a constant alternation
between upsetting and slipping phenomena in the shearing zone (Fig. 3, bottom left).
Owing to this discontinuous chip formation, tools are exposed to cyclic mechanical and
thermal stresses whose frequency and amplitude depend directly on the cutting parameters.
The dynamic components of cutting force may amount to some 20-35 % of the static
components. The mechanical and thermal swelling stress may promote tool fatigue or failure
through crack initiation, shell-shaped spalling, chipping out of cutting material particles
or cutting edge chipping [3, 4].
F. Klocke
Turning processes on TiAl6V4 rely mainly on carbides in the K20 cutting applications
group. The usual range of cutting speeds is 50 to 60 m/min for roughing and 60 to
80 m/min for finishing. Oxide-ceramic-based cutting materials cannot be considered for this
task (Fig. 3, bottom right) [2-4]. Monocrystalline or polycrystalline diamond tools have
proved exceptionally useful for machining titanium alloys (Fig. 3, bottom right). The
diamond tools are characterized by great hardness and wear resistance, excellent thermal
conductivity as compared to other cutting materials (Fig. 3, top left), low thermal expansion
and low face/chip and flank/workpiece friction [4-7].
[Figure: Fig. 3 — distribution coefficient of heat flow into the tool vs. cutting speed for TiAl6V4 and Ck45 (top left); cutting temperatures (top right); lamellar chip formation (bottom left); wear rates of PCD, carbide HW-K10, PCBN and oxide ceramic at vc = 61 m/min, f = 0.125 mm (bottom right)]
The wear-determining interactions between the work material and the cutting material during
PCD machining of titanium alloys are extraordinarily complex. They are characterized
by diffusion and graphitization phenomena, thermally-induced crack initiation, surface
destruction due to lamellar chip formation and possible formation of a wear-inhibiting titanium
carbide reaction film on the diamond grains [5, 6].
Owing to these varied interactions between the cutting and work materials, the performance
potential of PCD cutting materials in titanium machining processes is heavily dependent on
the composition of the cutting material. Of particular interest are the composition of the
binder phase, its volumetric proportion and the size of the diamond grains [6].
Crater wear is a main criterion for assessing the performance potential of a PCD cutting
material in a titanium machining operation. Flank wear is of subordinate importance,
especially at high cutting speeds.
The lowest crater wear in plain turning tests on TiAl6V4 titanium alloy was measured for a
PCD grade with SiC as the binder. Crater wear was heavily influenced by binder content
and grain size in the case of PCD grades with cobalt-containing binders (Fig. 4). The
greatest crater wear was observed for the PCD grade with the highest binder content and
the smallest grain size [6, 7].
[Figure: crater wear vs. cutting length (up to 1200 m) for three PCD grades in plain turning of TiAl6V4; compositions: PCD 1 — 92 % diamond, 8 % binder, 6-10 µm grit; PCD 2 — 92 % diamond, 8 % binder, 2-6 µm grit; PCD 3 — 80 % diamond, 20 % binder, 0.5-1 µm grit; vc = 110 m/min, aP = 2.0 mm, f = 0.1 mm, tool SPGN 120308, coolant emulsion]
Fig. 4: Crater and flank wear in turning operations on TiAl6V4 as a function of the PCD grade [6, 7]
Because of its catalyzing effect, cobalt also encourages graphitization of the diamond. The
result is low resistance of the cutting material to abrasive wear. This is demonstrated very
clearly by scratch marks in PCD cutting materials annealed at different temperatures. Unlike
low-binder, large-grain types, high-binder, small-grain types leave a clear diamond scratch
track on a specimen annealed at 800 °C (Fig. 5). Cobalt and diamond also have different
coefficients of thermal expansion, favouring the development of thermal expansion cracks.
This is particularly observable with fine-grained types. In combination with dynamic
stressing of the cutting material through lamellar chip formation, these cracks make it easier for
single PCD grains or even complete grain clusters to detach from the binder [5-7].
The low crater wear on the SiC-containing or large-grain, cobalt-containing PCD grades
may be due to the formation of a wear-inhibiting titanium carbide film on the diamond
grains. It is suspected that a diffusion-led reaction occurs between titanium from the work
material and carbon from the tool in the crater zone of the face at the beginning of the
machining process. The resulting titanium carbide reaction film adheres firmly to the face of the
diamond tool and remains there throughout the remainder of the machining operation. Since
the diffusion rate of carbon in titanium carbide is lower by several powers of ten than that of
carbon in titanium, tool wear is slowed down substantially [3, 8].
To achieve the lowest possible crater wear, PCD grades with large diamond grains, low
cobalt content or a β-SiC binder phase are therefore preferable for turning operations on
titanium alloys.
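As a sketch, the selection guidance above can be encoded in a small helper. The scoring heuristic and the grade records are illustrative, not from [6, 7]; only the qualitative ordering — SiC binder best, and for cobalt binders low binder content and coarse grains preferred — follows the text.

```python
def crater_wear_rank(binder, binder_vol_pct, grain_um):
    """Lower score = lower expected crater wear when turning TiAl6V4.

    Encodes only the qualitative findings quoted in the text: a SiC binder
    is best; for cobalt binders, high binder content and fine grains are
    penalized. The weighting itself is an arbitrary illustration.
    """
    if binder == "SiC":
        return 0.0
    # cobalt-containing binders: penalize binder content and fine grains
    return binder_vol_pct + 10.0 / grain_um

# Illustrative grade records (binder type, binder vol.-%, mean grain size in um)
grades = [
    ("PCD 1", "Co", 8, 8.0),
    ("PCD 2", "Co", 8, 4.0),
    ("PCD 3", "Co", 20, 0.75),  # highest binder content, finest grains
    ("PCD 4", "SiC", 8, 5.0),
]
ranked = sorted(grades, key=lambda g: crater_wear_rank(g[1], g[2], g[3]))
print([name for name, *_ in ranked])
```

The SiC-bonded grade ranks first and the fine-grained, binder-rich grade last, matching the ordering of crater wear reported above.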
[Figure: overall views and cross-section profiles of scratch tracks on PCD 1 (92 % diamond, 8 % binder, 6-10 µm grit) and PCD 3 (80 % diamond, 20 % binder, 0.5-1 µm grit) after annealing at 700 °C and 800 °C]
Fig. 5: Overall views and cross-section profiles of the scratch track on the surface of PCD cutting materials as a function of annealing temperature [6, 7]
2. MACHINING NICKEL-BASED ALLOYS WITH CERAMICS AND PCD
Inconel 718 and Waspaloy are among the most important and frequently-used nickel-based
alloys. Both materials are vacuum-melted and precipitation-hardenable. They are characterized
by their great high-temperature resistance, distinctly above that of steels and titanium
alloys (Fig. 6).
[Figure: chemical compositions (fractions in wt.-%) of Inconel 718 and Waspaloy and their high-temperature strength vs. temperature, compared with TiAl6V4 and a high-alloyed steel]
Fig. 6: Chemical composition of Inconel 718 and Waspaloy and comparison of their high temperature strength with that of TiAl6V4 and high-alloyed steel
In general, the nickel-based alloys belong to the group of hard-to-machine materials. Their
low specific heat and thermal conductivity as compared to steels, their pronounced tendency
to built-up edge formation and strain hardening and the abrasive effect of carbides and intermetallic phases result in exceptionally high mechanical and thermal stresses on the cutting
edge during machining. Owing to the high cutting temperatures which occur, high-speed
steel and carbide tools can be used only at relatively low cutting speeds. The usual range of
turning speeds for uncoated carbides of ISO applications group K10/20 on Inconel 718 and
Waspaloy is vc = 20-35 m/min.
[Table: cutting ceramics for machining nickel-based alloys — oxide ceramic; mixed ceramic; whisker-reinforced ceramic (Al2O3 + SiC whiskers); silicon nitride ceramics (Si3N4 + MgO, Y2O3 and Si3N4 + Al2O3 + Y2O3, i.e. Sialon); PCBN + binder]
Alternatives to carbides for lathe tools are cutting ceramics and polycrystalline cubic boron
nitride (PCBN) [9-12]. These two classes of cutting material are characterized by high red
hardness and high resistance to thermal wear. As compared to carbides, they can be used at
higher cutting speeds, with distinctly reduced production times and identical or improved
machining quality.
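The practical effect of the higher cutting speeds can be illustrated with the standard cutting-time formula for one longitudinal turning pass. The part dimensions below are assumed for illustration; the two speeds are representative of the carbide and whisker-ceramic ranges quoted in this section.

```python
import math

def turning_time_min(diameter_mm, length_mm, feed_mm_rev, vc_m_min):
    """Cutting time for one longitudinal turning pass.

    Spindle speed  n = 1000 * vc / (pi * D)   [rev/min]
    Cutting time   t = L / (f * n)            [min]
    """
    n = 1000.0 * vc_m_min / (math.pi * diameter_mm)  # rev/min
    return length_mm / (feed_mm_rev * n)

# Assumed part: 80 mm diameter, 200 mm turned length, f = 0.25 mm/rev.
t_carbide = turning_time_min(80, 200, 0.25, 30)    # carbide, vc ~ 30 m/min
t_ceramic = turning_time_min(80, 200, 0.25, 250)   # whisker ceramic, vc ~ 250 m/min
print(f"carbide: {t_carbide:.1f} min, whisker ceramic: {t_ceramic:.1f} min")
```

Since cutting time scales inversely with cutting speed, the ceramic pass here is over eight times faster — the "drastic reduction in machining times" referred to below.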
Within the group of Al2O3-based cutting materials, mixed ceramics with TiC or TiN as the
hard component are used particularly for finish turning operations (vc = 150-400 m/min,
f = 0.1-0.2 mm) and ceramics ductilized with SiC whiskers (CW) for finishing and
medium-range cutting parameters (vc = 150-300 m/min, f = 0.12-0.3 mm). Oxide
ceramics are unsuitable for machining work on nickel-based alloys, owing to intensive notch
wear (Fig. 7). The Sialon materials have proved to be the most usable representatives of the
silicon nitride group of cutting ceramics for roughing work on nickel-based alloys (vc = 100
to 200 m/min, f = 0.2 to 0.4 mm). PCBN cutting materials are used mainly for finishing
work on nickel-based alloys.
Fig. 8: Characteristic wear modes during turning of nickel-based alloys with ceramics
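For quick reference, the cutting-parameter windows quoted above can be collected in a small lookup table. The ranges are taken directly from the text; the checking helper around them is illustrative.

```python
# Cutting-parameter windows for turning nickel-based alloys, as quoted in the text.
RANGES = {
    "mixed ceramic (finishing)":       {"vc_m_min": (150, 400), "f_mm": (0.10, 0.20)},
    "whisker ceramic (finish/medium)": {"vc_m_min": (150, 300), "f_mm": (0.12, 0.30)},
    "Sialon (roughing)":               {"vc_m_min": (100, 200), "f_mm": (0.20, 0.40)},
    "carbide K10/20":                  {"vc_m_min": (20, 35),   "f_mm": None},
}

def in_range(material, vc, f=None):
    """Check whether (vc, f) falls inside the quoted window for a material."""
    r = RANGES[material]
    lo, hi = r["vc_m_min"]
    ok = lo <= vc <= hi
    if f is not None and r["f_mm"] is not None:
        flo, fhi = r["f_mm"]
        ok = ok and flo <= f <= fhi
    return ok

print(in_range("Sialon (roughing)", 150, 0.3))   # within the roughing window
print(in_range("carbide K10/20", 100))           # far above the carbide window
```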
Characteristic for the turning of nickel-based alloys with cutting ceramics or PCBN is the
occurrence of notch wear on the major and minor cutting edges of the tools (Fig. 8). In
many applications, notch wear is decisive for tool life. Notching on the minor cutting edge
leads to a poorer surface finish, notching on the major cutting edge to burring on
the edge of the workpiece. Apart from the cutting material and cutting parameters, one of
the main influences on notching of the major cutting edge is the tool cutting edge angle.
This should be as small as possible. A tool cutting edge angle of κr = 45° has proved
favourable for turning operations with cutting ceramics and PCBN.
Current state-of-the-art technology for turning Inconel 718 and Waspaloy generally relies
on whisker-reinforced cutting ceramics. They have almost completely replaced Al2O3-based
ceramics with TiC/TiN or Sialon for both finishing and roughing operations. This
trend is due to the superior toughness and wear behaviour of the whisker-reinforced cutting
ceramics and the higher cutting speeds which can be used. These advantages result in longer
reproducible tool lives, greater process reliability and product quality and a drastic
reduction in machining times as compared to carbides (Fig. 9). The arcuate tool-life curve is
typical for turning operations on nickel-based alloys with cutting ceramic or PCBN. It
results from various mechanisms which dominate wear, depending on the cutting speed.
Notch wear on the major cutting edge tends to determine tool life in the lower range of
cutting speeds, chip and flank wear in the upper range. The arcuate tool-life curves indicate
that there is an optimum range of cutting speeds. The closer together the ascending and
descending arms of the tool-life curves, the more important it will be to work in the
narrowest possible range near the tool-life optimum.
[Figure: tool-life curves for turning Inconel 718 (aP = 3 mm, f = 0.25 mm, emulsion) comparing SiC-whisker-reinforced ceramic with micrograin carbide over cutting speeds of about 10 to 500 m/min]
Fig. 9: Comparative tool lives: turning Inconel 718 with carbide and whisker-reinforced ceramic
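The arcuate tool-life behaviour described above can be imitated with a toy two-mechanism wear model: one wear-rate term that falls with cutting speed (notch wear) and one that rises (chip and flank wear), with the reciprocal sum giving tool life. All constants are hypothetical, chosen only to produce a curve of the described shape with a maximum in the mid-speed range.

```python
def tool_life(vc, A=25.0, p=1.2, B=2e-8, q=2.5):
    """Toy tool-life model T(vc) in minutes (all constants hypothetical).

    1/T = A * vc**(-p)  (notch-wear term, dominant at low speed)
        + B * vc**q     (chip/flank-wear term, dominant at high speed)
    """
    return 1.0 / (A * vc ** -p + B * vc ** q)

# Scan a representative speed range and locate the tool-life optimum.
best_v = max(range(50, 501), key=tool_life)
print(f"optimum near vc = {best_v} m/min, T = {tool_life(best_v):.1f} min")
```

Because life falls off on both sides of the maximum, a narrow optimum band of cutting speeds emerges, exactly the situation the arcuate curves in Fig. 9 describe.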
Excellent machining results are obtained with PCBN-based tools in finish turning work on
nickel-based alloys. Because of their great hardness and wear resistance, PCBN cutting
materials can be used at higher cutting speeds. These range from 300 to 600 m/min for
finish turning on Inconel 718 and Waspaloy (Fig. 10). As shown by the SEM scans in
Fig. 11, the wear behaviour of PCBN tools at these high cutting speeds is no longer
determined by notch wear, but chiefly by progressive chip and flank wear. The high
performance of the tools is assisted by the specialized tool geometry. Of interest here are
the large corner radius, which together with the low depth of cut ensures a small effective
tool cutting edge angle of κeff = 30°, the cutting edge geometry, which is not bevelled but
has an edge rounding in the order of rn = 25-50 µm, and the tool orthogonal rake of
γo = 0°.
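The link between a large corner radius, a low depth of cut and a small effective cutting edge angle can be checked with one common approximation, κeff = arccos(1 − aP/rε), valid when the cut lies entirely on the corner radius. The corner radius of 2.2 mm used below is an assumption; the text states only aP = 0.3 mm and κeff = 30°.

```python
import math

def effective_edge_angle_deg(corner_radius_mm, depth_of_cut_mm):
    """Effective tool cutting edge angle when aP <= r_eps (one common
    approximation for round-nosed tools): kappa_eff = arccos(1 - aP / r_eps)."""
    if depth_of_cut_mm > corner_radius_mm:
        raise ValueError("cut not fully on the corner radius")
    return math.degrees(math.acos(1.0 - depth_of_cut_mm / corner_radius_mm))

# aP = 0.3 mm as in the text; corner radius 2.2 mm assumed for illustration.
print(f"kappa_eff = {effective_edge_angle_deg(2.2, 0.3):.1f} deg")
```

With these values the approximation reproduces the quoted κeff of about 30°, which illustrates why a large corner radius combined with a small depth of cut keeps the effective edge angle low.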
[Figure: tool life vs. cutting speed (about 100-1000 m/min) for several PCBN grades of differing microstructure and composition; process: turning, work material Inconel 718, cutting parameters aP = 0.3 mm, f = 0.15 mm, coolant emulsion]
Fig. 10: Influence of microstructure and composition on the performance of PCBN grades
Apart from higher available cutting speeds and excellent wear behaviour, PCBN cutting
materials achieve longer tool lives, allowing parts to be finished in a single cut and reliably
attaining high shape and size accuracies over a long machining time. Because of their
high performance, PCBN cutting materials represent a cost-effective alternative to
conventional working of nickel-based alloys with carbides or cutting ceramics, despite high
tool prices.
The choice of a material grade suited to the specific machining task is of special importance
for the successful machining of nickel-based alloys with PCBN cutting materials (Fig. 10
and 11). There are often substantial differences between the PCBN cutting materials
available on the market in terms of the modification and the fraction of boron nitride, the
grain size and the structure of the binding phase. The resulting chemical, physical and
mechanical properties have a decisive influence on the wear and performance behaviour of
PCBN tools. Fine-grained PCBN grades with a TiC- or TiN-based binder and a binder
fraction of 30 to 50 vol.-% have proved suitable for finishing operations on Inconel 718 and
Waspaloy.
[Fig. 12 shows flank wear (up to about 300 µm) after grooving of Waspaloy with different cutting materials. Cutting materials: HW: carbide (K10/K20), CW: whisker-reinforced ceramic, PCBN: polycrystalline cubic boron nitride; cutting speeds vc = 30 (HW), 200 (CW) and 300 (PCBN) m/min; feeds f = 0.05 - 0.1 mm; groove width: 4.2 mm; groove depth: 6 mm; turning path per groove: 160 m; coolant: emulsion.]
Fig. 12: Tool wear during grooving of Waspaloy as a function of the cutting material
Plastic deformation of the microstructure occurs in the surface zone of the workpiece as a
result of the machining operation, causing hardening and increased final hardness of the
work material. The extent of deformation and the size of the hardness increase are dependent on the cutting parameters, tool geometry and tool wear.
Plastic deformation of the surface zone is associated with a change in grain shape. Characteristic for the turning of nickel-based alloys is a pronounced arcuate deformation of the
grain boundaries against the workpiece rotation (Fig. 14). Numerous similarly arcuate slip
lines occur within the individual grains due to slipping of atomic layers along specific crystallographic planes. The plastic deformation of the work material microstructure visible
under the optical microscope generally extends to a depth of 10 or 20 µm from the surface.
It is usually confined to grains lying directly at the surface, but may extend over several
grain layers at a greater workpiece depth, depending on the severity of thermal effects in the
surface zone.
The surface zone hardness increase caused by plastic deformation can be determined by
means of microhardness measurements. Inclusions, grain boundaries and other micro-inhomogeneities result in scatter of the individual measurements. Especially where there are
successive machining operations, hardening of the work material may lead to increased
stress on the tool and to greater wear.
Microhardness measurements reveal a significant increase in hardness in the surface zone
(Fig. 14). This amounts to roughly 100 - 200 Vickers units as compared to the hardness of
the uninfluenced base material, depending on the cutting parameters.
[Fig. 13 shows the surface roughness of the grooved Waspaloy surfaces for the same cutting materials and conditions as in Fig. 12: HW: carbide (K10/K20), CW: whisker-reinforced ceramic, PCBN: polycrystalline cubic boron nitride; vc = 30 - 300 m/min, f = 0.05 - 0.1 mm; groove width: 4.2 mm, groove depth: 6 mm, turning path per groove: 160 m; coolant: emulsion.]
Fig. 13: Surface finish of grooved Waspaloy as a function of the cutting material
Manufacturing-induced changes in the surface zone can have a substantial effect on component properties. This applies particularly to the fatigue strength of dynamically-stressed
parts. In view of the high standards of safety and reliability demanded for jet engine parts,
the extent to which any change in cutting material and cutting parameters affects part
properties is of decisive importance.
Comparative studies of components produced with PCBN and cutting ceramic tools at high
cutting speeds showed no significant negative effects of these cutting parameters on the
surface zone structure as compared to machining with carbide tools. This conclusion applies
not only to the influencing of the surface zone but also to dynamic stressing of the components. This is evident from a comparison of the mean values and standard deviations in the
number of cycles to failure in pulsating tensile stress tests on fatigue test specimens
(Fig. 15). Under the test conditions, the use of PCBN tools leads to a demonstrable but
slight increase in the number of cycles to failure. The smaller number of cycles to failure for
the specimens machined with whisker-reinforced oxide ceramics is due principally to the
increase in the feed rate by a factor of three.
[Fig. 14 shows microhardness profiles (HV 0.025, roughly 400 - 650) versus distance from the machined surface (10 - 1000 µm) after face turning. Upper panel: cutting material PCBN (polycrystalline cubic boron nitride), cutting parameters vc = 400 m/min, ap = 0.1 mm, f = 0.2 mm; lower panel: cutting material CW (whisker-reinforced ceramic), cutting parameters vc = 400 m/min, ap = 0.3 mm, f = 0.2 mm.]
[Fig. 15 shows the number of cycles to failure (up to about 160 x 10^3) of face-turned turbine disk specimens of Waspaloy in fatigue tests under pulsating tensile stress. Cutting materials and parameters: HW (carbide): vc = 30 m/min, f = 0.1 mm; PCBN (polycrystalline cubic boron nitride): vc = 400 m/min, f = 0.1 mm; CW (whisker-reinforced ceramic): vc = 400 m/min, f = 0.3 mm; threshold frequency: f = 29 Hz; room temperature.]
Fig. 15: Fatigue strength under pulsating tensile stress of turned Waspaloy specimens
REFERENCES
1. Esslinger, P., Smarsly, W.: Intermetallische Phasen; Neue Werkstoffe für fortschrittliche Flugtriebwerke, MTU FOCUS 1(1991), p. 36-42
2. König, W.: Fertigungsverfahren Band 1. Drehen, Fräsen, Bohren. 3. Auflage, VDI-Verlag, Düsseldorf, 1990
3. Erinski, D.: Metallkundliche Aspekte, technologische Grenzen und Perspektiven der Ultrapräzisionszerspanung von Titanwerkstoffen, Promotionsvortrag, RWTH Aachen, 1990
4. Kreis, W.: Verschleißursachen beim Drehen von Titanwerkstoffen, Dissertation, RWTH Aachen, 1973
5. Bomcke, A.: Ein Beitrag zur Ermittlung der Verschleißmechanismen beim Zerspanen mit hochharten polykristallinen Schneidstoffen, Dissertation, RWTH Aachen, 1989
6. Neises, A.: Einfluß von Aufbau und Eigenschaften hochharter nichtmetallischer Schneidstoffe auf Leistung und Verschleiß im Zerspanprozeß mit geometrisch definierter Schneide, Dissertation, RWTH Aachen, 1994
7. König, W., Neises, A.: Turning TiAl6V4 with PCD, IDR 2(1993), p. 85-88
8. Hartung, P. D., Kramer, B. M.: Tool Wear in Titanium Machining, Annals of the CIRP, Vol. 31/1(1982), p. 75-80
9. König, W., Gerschwiler, K.: Inconel 718 mit Keramik und CBN drehen, Industrie-Anzeiger 109(1987)13, p. 24/28
10. Lenk, E.: Bearbeitung von Titan- und Nickelbasislegierungen im Triebwerksbau, Vortrag anläßlich des DGM-Symposiums "Schneidwerkstoffe, Spanen mit definierten Schneiden", Bad Nauheim, 1982
11. Vigneau, J.: Cutting Materials for Machining Superalloys, VDI-Berichte 762, p. 321-330, VDI-Verlag, Düsseldorf, 1989
12. Narutaki, N., Yamane, Y., Hayashi, K., Kitagawa, T.: High-Speed Machining of Inconel 718 with Ceramic Tools, Annals of the CIRP, Vol. 42/1(1993)
E. Kuljanic
University of Udine, Udine, Italy
KEYNOTE PAPER
1. INTRODUCTION
In recent decades, manufacturing technology and the manufacturing environment have
undergone significant changes, and the changes that will occur in the near future will be
even more dramatic. The trend in manufacturing is towards the intelligent machining
system, able to utilize experience, indispensable data and know-how accumulated during
past operations, to accumulate knowledge through learning and to accommodate ambiguous inputs.
The development of the intelligent machine tool is shown in Figure 1 [1]. What
manufacturing may look like in the 21st century can be seen from M. E. Merchant's keynote
paper presented at AMST'93 [2].
(Published in: E. Kuljanic (Ed.), Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.)
[Figure 1 traces the development of the intelligent machine tool: manual machine; powered machine with analog/mechanical control (mechanism); NC machine with digital control (NC/servo, actuators, sensors), bringing efficiency, speed and accuracy; AC machine with sensor feedback of the machining process (objective function/constraint); and finally the intelligent machine tool with decision making through learning.]
F. W. Taylor was the first researcher to carry out extensive machinability testing, and
according to M. E. Merchant [2] he was one of the most creative thinkers. In his well known
work [4], presented at the ASME Winter Conference in New York exactly ninety years ago,
Taylor raised three questions: "What tool shall I use? What cutting speed shall I use?
What feed shall I use?", with the following comment: "Our investigations, which were
started 26 years ago with the definite purpose of finding the true answer to these
questions, under all the varying conditions of machine shop practice, have been carried on
up to the present time with this as the main object still in view."
It is significant to point out that, after so many years and with so many new facilities
available, such as the electron microscope, computers, machining systems, etc., we still have
serious difficulties in finding the right answers to these questions. Perhaps we have to find
new approaches to answering them.
vc T^m = C    (1)

or

T = K vc^kv    (2)
where T is tool life in min, C and K are constants, vc is cutting speed in m/min, m and kv
are exponents, and m = -1/kv. In order to obtain equations (1) and (2), tool wear curves
first have to be determined using experimental tool wear data measured periodically after
the effective cutting time, for example, after 5, 10, 15, etc. minutes. This procedure is given
to point out that one tool wear curve is obtained with one tool. Yet, there are still
misunderstandings in obtaining the tool wear curve.
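The procedure just described can be sketched in a few lines of code: flank wear is measured at successive cutting times on one tool, and the tool life is read off the wear curve at the chosen wear criterion. The wear values below are illustrative, not measured data.

```python
import numpy as np

# One tool wear curve, obtained with one tool: flank wear land VB
# measured after successive effective cutting times (illustrative values).
t_min = np.array([5.0, 10.0, 15.0, 20.0, 25.0])    # effective cutting time, min
vb_mm = np.array([0.08, 0.14, 0.19, 0.26, 0.34])   # flank wear land VB, mm

# Tool life T = cutting time at which VB reaches the wear criterion,
# here VB = 0.3 mm, found by linear interpolation on the wear curve.
T = float(np.interp(0.3, vb_mm, t_min))
print(T)  # 22.5 min for these sample values
```

Repeating this at several cutting speeds yields the (vc, T) pairs from which the constants in equations (1) and (2) are fitted.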
The extended tool life equation is:
T = K vc^kv f^kf ap^ka    (3)

where f is feed in mm/rev, ap is depth of cut in mm, and kf and ka are exponents.
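As a minimal numerical sketch, equation (3) can be evaluated directly. The constant and exponents below are invented for illustration (real values must be fitted from wear tests); they are negative so that tool life falls as speed, feed and depth of cut increase.

```python
def tool_life(vc, f, ap, K=6.0e9, kv=-3.5, kf=-0.8, ka=-0.3):
    """Extended tool life equation (3): T = K * vc^kv * f^kf * ap^ka.

    vc: cutting speed in m/min, f: feed in mm/rev, ap: depth of cut in mm.
    K, kv, kf, ka are illustrative placeholders, not fitted values.
    """
    return K * vc**kv * f**kf * ap**ka

T1 = tool_life(vc=200.0, f=0.2, ap=2.0)
T2 = tool_life(vc=300.0, f=0.2, ap=2.0)  # higher speed gives shorter tool life
```

Note the much stronger sensitivity to cutting speed than to feed or depth of cut, which is typical of Taylor-type equations.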
2.2. SHORT MACHINABILITY TESTS
Taylor's machinability tests are called conventional or long machinability tests. Since this
procedure is material and time consuming, short or quick tests of machinability were
introduced.
There are different criteria for quick machinability testing: drilling torque or thrust, drilling
time or rate of penetration, energy absorbed in a pendulum-type milling cut, temperature of
the cutting tool or chip, the degree of hardening of the chip during removal, the cutting
ratio of the chip, ease of chip disposal, etc.
In the 1950s and 1960s some new machinability testing methods came out. For example, a
quick tool life testing method was developed by applying face turning. The face turning is
done at a constant number of revolutions, starting the cut at a smaller diameter and moving
to a greater diameter. In this way the cutting speed increases as the diameter increases
according to:
vc = (π D n) / 1000    (4)

where D is the instantaneous diameter in mm and n is the number of revolutions per minute.
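Equation (4) is straightforward to compute; the sketch below evaluates the cutting speed at two diameters for an assumed spindle speed, showing how vc grows during the face-turning test (the numbers are an illustrative choice, not test values from the paper).

```python
import math

def cutting_speed(d_mm, n_rpm):
    """Equation (4): cutting speed vc in m/min for workpiece diameter
    d_mm in mm and rotational speed n_rpm in rev/min."""
    return math.pi * d_mm * n_rpm / 1000.0

# At a constant 1000 rev/min, moving from 50 mm to 150 mm diameter
# triples the cutting speed.
v_inner = cutting_speed(50.0, 1000.0)    # about 157 m/min
v_outer = cutting_speed(150.0, 1000.0)   # about 471 m/min
```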
Since the workpiece material properties influence tool life or machinability, another
approach is to correlate tool life directly with easily measured properties of the materials.
One of the first publications on machinability rating of steels was by J. Sorenson and W.
Gates [6]. They made a graphical representation of the general relation of machinability
ratings - relative cutting speeds - to hardness for hot-rolled SAE steels, Figure 3. A 100%
rating was given to cold-rolled SAE 1112 steel.
[Figure 3 is a bar chart of Brinell hardness (roughly 140 - 300) for a series of hot-rolled SAE steel grades, with the corresponding machinability ratings (relative cutting speeds, from about 24% to 100%) marked for each grade.]
Table 1. Machinability rating of various metals
[Columns give the AISI grade, the machinability rating in % and the Brinell hardness range, for Class I (ferrous, rating 70% and higher) and Class IV steels. Most row pairings are garbled in this copy; legible examples include C1109 (85%, 137-166 HB), B1112 (100%, 179-229 HB) and B1113 (135%) in Class I, and A2515+ (30%, 179-229 HB) in Class IV. A "+" marks steels annealed prior to cold drawing or cold rolling in the production of the steel specially mentioned.]
The second approach expresses machinability ratings in terms of an equivalent cutting
speed. The cutting speed number is the cutting speed which causes a given flank wear land
in 60 minutes. Such a cutting speed was called the economical cutting speed. However, a
tool life of 60 minutes is not economical anymore. The economical tool life, i.e. the optimal
tool life for minimum machining cost, is about 10 minutes or less in turning. Therefore, the
corresponding cutting speed is much higher than the one for a tool life of 60 minutes.
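The order of magnitude quoted above can be reproduced with the classical minimum-cost tool life formula T = (1/m - 1)(t_ct + C_t/C_m), where m is the Taylor exponent, t_ct the tool change time, C_t the tool cost per edge and C_m the machine-and-operator rate per minute. The numbers below are illustrative assumptions, not data from the paper.

```python
def economical_tool_life(m, t_change_min, tool_cost, machine_rate):
    """Minimum-cost tool life (classical formula):
    T = (1/m - 1) * (t_ct + C_t / C_m).

    m: Taylor exponent, t_change_min: tool change time in min,
    tool_cost: cost per cutting edge, machine_rate: cost per minute
    (tool_cost and machine_rate in the same currency).
    """
    return (1.0 / m - 1.0) * (t_change_min + tool_cost / machine_rate)

# Illustrative values for carbide turning: m = 0.25, 1 min tool change,
# tool cost per edge twice the per-minute machine rate.
T_opt = economical_tool_life(m=0.25, t_change_min=1.0, tool_cost=2.0, machine_rate=1.0)
# T_opt = 9 min, consistent with the "about 10 minutes or less" above.
```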
The third approach to machinability ratings uses relative cutting speed values where the
ratings are given as letters, Table 2. "A" indicates a high permissible cutting speed and
"D" a lower cutting speed.
Table 2. Machinability rating of stainless steels (hot-rolled, annealed)

AISI No.   Machinability rating   Hardness Brinell
AISI 410   C                      135-165
AISI 416   A                      145-185
AISI 430   C                      145-185
AISI 446   C                      140-185
AISI 302   D                      135-185
AISI 303   B                      130-150
AISI 316   D                      135-185
The fourth approach is the correlation of tool life with the microstructure of the metal.
Generally speaking, hard constituents in the structure result in poor tool life, and vice versa.
In addition, tool life is usually better when the grain size of the metal is larger.
The correlation of the microstructure of steels with tool life was studied by Waldman [8].
Average relations of tool life and surface finish to the microstructure of steel were reported
as "good", "fair", "fair to good" and "poor", Table 3.
Table 3. Average relation of tool life and surface finish to microstructure of steels

Class of steel              Structure                        Tool life   Surface finish
Low-carbon steels           Cold-drawn, small grain size     Good        Good
                            Normalized                       Good        Fair
Mild medium-carbon steel    Perlitic, moderate grain size    Good        Good
                            Perlitic, small grain size       Fair        Good
                            Perlitic, large grain size       Good        Fair
                            Spheroidized                     Fair        Poor
In 1986 J. Kahles proposed to conduct a survey in the CIRP, primarily in industry around
the world, to ascertain the principal machinability data needs of industry. The results of the
survey "Machinability Data Requirements for Advanced Machining Systems" were
published in 1987 [17], and a discussion of the results was presented by the author at the
1st International Conference AMST'87 [18]. The information was gathered from industries
in twelve countries. The conclusion is that reliable machinability data on tool life, chip
control, surface finish and surface integrity are indispensable for efficient computer
integrated manufacturing.
2.5. COMPUTERIZED MACHINABILITY DATA SYSTEMS
The purpose of computerized machinability data systems is the systematic and rapid
selection of machinability data for the user, i.e. to facilitate numerical control and
conventional machining.
The development of computerized machinability data systems started in the United States
and elsewhere at the beginning of the 1970s. One of the first machinability data systems
was developed at Metcut Research Associates Inc. - Machinability Data Center in
Cincinnati. This system considers both the cost of operating the machine tool and the cost
of the tool and tool reconditioning, and the total cost and operating time are calculated. In
order to achieve this, tool life data related to cutting speed - tool life equations for various
sets of machining parameters are required.
A series of programs was designed in different firms to help set time standards. One such
series was developed at IBM under the name "Work Measurement Aids": Program 1 -
Machinability Programming System, and Program 2 - Work Measurement Sampling. The
basis of the Machinability Programming System is that machining operations can be
related to a standard operation using a standard material and tool. Different machining
operations are related to the standard operation using speed and feed adjustment factors.
The factors are: base material, speed and feed, and adjustment factors for each operation
and for the conditions of cut. The Machinability Programming System allows for the user's
specification of material data, machine group speeds and feeds, and operation factors.
ABEX Computerized System was developed using the Metcut tool life equations and the
manipulative functions of the IBM Work Measurement Aids program to describe the part,
store the available rates on a wide range of machine tools, and generate cutting conditions
on short cycle operations [19]. In the ABEX system, shop-generated machining data are
organized and selected on the basis of "operation family" concepts, while tool life is defined
on the basis of the tool's ability to hold specified tolerances and surface finish. For
example, in turning, workpiece diameter is often such a parameter, and all operations
involved in obtaining a given range of diameters make up the "operation family". The
ABEX system provides for the selection of metal cutting conditions based on strategies
other than minimum cost or maximum production.
A mathematical model to determine cutting speed for carbide turning and face milling was
applied in the General Electric "Computerized Machinability Program" [20]. The constants
and exponents for the tool life equations were developed from empirical data obtained in
laboratory and shop tests. The factors considered in the tool life equations were: tool
material and tool geometry, hardness of the workpiece material, surface conditions, feed,
depth of cut, flank wear and machinability rating. The General Electric Computerized
Machinability Program calculates speeds and feeds for minimum part cost or maximum
production rate, together with the corresponding tool life and the required power.
At the same time the EXAPT Computerized Machinability Data System was also developed
[19]. This system is an NC part programming system containing geometrical and
technological features that select operation sequences, cutting tools and collision-free
motions. The EXAPT processors determine optimum machining conditions from economic,
empirical and theoretical metal cutting conditions. The system stores information on
materials, cutting tools and machine tools. Important and extensive work on machinability
testing has also been done by INFOS.
Thus, the basic data have to be obtained by testing for computerized machinability data
systems or for conventional machinability data. The machinability testing methodology will
depend on the predominant technologies and new conditions in the 21st century. Therefore,
let us examine the nature and promise of the new technologies and conditions.
3. NEW TECHNOLOGICAL CONDITIONS IN MANUFACTURING
M. E. Merchant pointed out at AMST'93 [2] that "the digital computer is an extremely
powerful systems tool [which] made us recognize that manufacturing is a system in which
the operation can be optimized, and not just of those individual activities, but of the overall
system as well. Thus today, as the world industry approaches the 21st century, it is engaged
in striving toward accomplishment of computer integration, automation and optimized
operation of the overall manufacturing enterprise. However, in pursuing that overall trend it
is increasingly recognizing the dual nature of the concept of the CIM enterprise,
encompassing both technological and managerial operations." We will discuss only some
technological methodologies.
3.1. TECHNOLOGICAL METHODOLOGIES
According to M. E. Merchant [2] the first evolving methodology is concurrent
engineering - concurrent engineering of the conception and design of a product and of the
planning and execution of its manufacturing production and servicing. The second
important evolving methodology is that of artificial intelligence in manufacturing.
By applying the concurrent engineering methodology, product costs are reduced and
industrial competitiveness is increased. About 70 percent of the cost of the manufacturing
production of a product is fixed when its design is completed. Since the material of the
components has to be chosen in the design, the "frozen" cost of manufacturing can be
reduced by selecting a corresponding material with better machinability. The purpose is
thus to have more reliable machinability data.
Concurrent engineering also yields other cost savings and shortens the lead time between
the conceptual design of a product and its commercial production.
According to [2], "artificial intelligence probably has greater potential to revolutionize
manufacturing in the 21st century than any other methodology known to us today".
Artificial intelligence has the potential to transform the non-deterministic system of
manufacturing into an intelligent manufacturing system which is "capable of solving within
certain limits, unprecedented, unforeseen problems on the basis even of incomplete and
imprecise information" [21] - information characteristic of a non-deterministic system.
The realization of this tremendous potential of artificial intelligence to revolutionize
manufacturing in the 21st century is surely a most challenging undertaking. It will require
revolutionary developments in the technology of artificial intelligence and massive research
and development efforts in manufacturing [2]. However, the rewards will be magnificent.
3.2. BETTER UNDERSTANDING OF MACHINING AND OTHER PRODUCTION PROCESSES
One part of this "massive research in manufacturing" will be carried out for a better
understanding of material removal processes and for machinability testing of conventional
and new materials. The possibilities for doing such research are already enormous. The
identification of the machining process can be made easier and more reliable by applying
statistics, design of experiments, modeling, identification of the machining process by the
energy quanta and the entropy [22], new software, new computer generations, etc.
For example, this makes it possible to determine more reliable models, such as the tool life
equation including significant interactions proposed by the author in 1973 [23]. The
equation could include the stiffness effect of the machining system, the effect of the number
of teeth in the cutter in milling, and their interactions. Such a tool life equation, determined
from the experimental data [23] by applying new analysis facilities, is:
T = 211.789105 vc^-4.0225 fz^-1.4538 z^-10.2674 S^-1.3292 exp(2.3913 ln vc ln z +
+ 0.3380 ln vc ln S + 0.8384 ln z ln S + 0.0190 ln vc ln fz ln S - 0.1972 ln vc ln z ln S)
(5)
where T is tool life in min, vc is cutting speed in m/min, fz is feed per tooth in mm, z is the
number of teeth and S is stiffness in N/mm. The multiple regression coefficient is R = 0.91862.
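Determining such an equation amounts to one least-squares fit in logarithmic coordinates. The sketch below fits a reduced model with a single interaction term, ln T = b0 + b1 ln vc + b2 ln fz + b3 ln vc ln fz, to synthetic data; the coefficients, ranges and scatter are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
vc = rng.uniform(100.0, 400.0, n)   # cutting speed, m/min
fz = rng.uniform(0.05, 0.30, n)     # feed per tooth, mm

# Synthetic "measured" tool lives from an assumed log-linear law with
# one vc-fz interaction term plus a little measurement scatter.
b_true = np.array([12.0, -2.5, -0.9, 0.1])
X = np.column_stack([np.ones(n), np.log(vc), np.log(fz), np.log(vc) * np.log(fz)])
lnT = X @ b_true + rng.normal(0.0, 0.05, n)

# Least-squares fit in log coordinates recovers the exponents and the
# interaction coefficient; tool life itself is exp(X @ b_fit).
b_fit, *_ = np.linalg.lstsq(X, lnT, rcond=None)
```

The same scheme extends directly to the full model (5) by adding the z and S columns and their interaction products.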
The time needed to determine this equation is less than a second, while thirty years ago it
took days without a computer. Thus it is easy to determine any such equation. There is also
a need to obtain empirical surface roughness equations and a better understanding of chip
formation and chip control.
4. INTEGRATED MACHINABILITY TESTING CONCEPT
Integrated machinability testing is an approach in which tool life data and/or tool wear,
tool wear images, machining conditions and significant output data, such as dimension
changes of the machined workpiece, surface roughness, chip form etc., are registered and
analyzed in an unmanned system, i.e., in an intelligent machining system.
The analysis of the obtained data could be done for different purposes. From the tool wear
or tool life data, the tool life equations can be determined and applied to optimize cutting
conditions on the intelligent machining system. Secondly, an integrated machinability data
bank could be built up by directly transferring machinability data from the intelligent
machining system. The data obtained in this way could be used and analyzed for other
purposes, for example, for process planning, for design, etc.
Integrated machinability testing can be used for roughing and finishing machining. It
should be pointed out that F. W. Taylor's 26 years of research [4] were aimed only at
roughing work. He emphasized in Part 1 of [4]: "our principal object will be to describe the
fundamental laws and principles which will enable us to do roughing work in the shortest
time. Fine finishing cuts will not be dealt with." In integrated machinability testing,
however, the emphasis is on finishing or light roughing work, due to the trend that the
dimensions of forgings and castings, or of workpieces produced by other methods, are
closer to the final dimensions of the part.
The main data that should be quoted to determine the conditions of machining are as
follows:
- machine tool and fixturing data
- workpiece data: material, heat treatment, geometry and dimensions
- tool characteristics: tool material, tool geometry
- coolant data
These data will be registered automatically.
The cutting conditions (cutting speed, feed and depth of cut) will be chosen by a computer
applying the design of experiments. The advantage of applying the design of experiments
under industrial conditions is that data are obtained from the machining process in a shorter
time. For example, the number of needed tests is reduced from about fifteen to five or
seven when the Random Strategy Method [24] is applied using a computer random number
generator.
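A minimal sketch of such computer-generated random test plans follows; the parameter ranges and the number of tests are illustrative assumptions, not values taken from [24].

```python
import random

def random_test_plan(n_tests=6, seed=42,
                     vc_range=(100.0, 400.0),   # cutting speed, m/min
                     f_range=(0.1, 0.4),        # feed, mm/rev
                     ap_range=(0.5, 3.0)):      # depth of cut, mm
    """Draw a small random set of cutting conditions for tool life tests,
    in the spirit of a random-strategy experiment plan."""
    rng = random.Random(seed)
    return [(round(rng.uniform(*vc_range), 0),
             round(rng.uniform(*f_range), 2),
             round(rng.uniform(*ap_range), 1))
            for _ in range(n_tests)]

plan = random_test_plan()  # 6 (vc, f, ap) test points instead of a full grid
```

Each tuple in the plan is one machining test; the resulting (conditions, tool life) pairs feed the regression fit of the tool life equation.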
The tool wear and tool life will be determined by intelligent sensor systems with decision
making capability [25]. The tool wear images and the dimensions of tool wear, such as VB,
the average flank wear land, will be measured, analyzed and saved automatically in a
computer. The surface roughness, the hardness of the machined surface, perhaps the
surface integrity, and the dimensions of the machined workpiece will be analyzed and
saved too. It will be possible to analyze the chip form and to classify it according to the
ISO numeric coding system [11]. From such data, tool wear or tool life equations such as
(3) or (5), or other relationships for the identification of the machining process, will be
easily determined by regression analysis.
The tool life equations obtained by integrated machinability testing can be used for different
purposes: first, for in-process optimization of machining conditions; secondly, for building
up a machinability data bank with more reliable data. Such a data bank would receive
machinability information, such as tool life equations, directly from intelligent machining
systems, so the data in it would be more reliable. The machinability data bank would be
self-regenerative, and even a small factory could have a proper machinability data bank.
5. CONCLUSION
In accordance with the considerations presented in this paper, we may draw some
conclusions about what machinability testing might look like in the 21st century. Based on
what we already know, it will be possible to integrate machinability testing with machining
in an intelligent machining system under industrial conditions, without direct human
involvement.
The integrated machinability testing (IMT) concept is an approach in which tool life data
and/or tool wear, tool wear images, machining conditions, and the significant output data
such as dimension changes of the machined workpiece, surface roughness, chip form etc.,
are registered and analyzed in an unmanned system, i.e. in an intelligent machining system.
The machinability data and other information obtained from machining under new
industrial conditions on such machining systems will be used for in-process optimization of
machining conditions and self-monitoring. For this purpose, quantitative machinability
models describing the process physics should be applied to intelligent machining systems.
An integrated machinability data bank could be built up even in a small factory by directly
transferring machinability data, tool wear images and other relevant information from the
intelligent machining system. Thus the reliability of the machinability data could be
increased. The effect of the stiffness of the machining system and other significant factors
will be included. The data of the integrated machinability data bank will be useful in design
for material selection to decrease the "frozen" cost (Chapter 3.1), for process planning, etc.
In order to improve intelligent machining systems and make them more reliable, machining
and process physics researchers should be included more in control teams.
By applying the integrated machinability testing concept in intelligent machining systems,
we could, after a hundred years, find more adequate answers to F. W. Taylor's questions
"what speed shall I use" and "what feed shall I use".
REFERENCES
1. Moriwaki, T.: Intelligent Machine Tool: Perspective and Themes for Future
Development, Manufacturing Science and Engineering, ASME, New York, 68(1994)2,
841-849
2. Merchant, M.E.: Manufacturing in the 21st Century, Proc. 3rd Int. Conf. on Advanced
Manufacturing Systems and Technology AMST'93, Udine, 1(1993), 1-12
3.
4. Taylor, F.W.: On the Art of Cutting Metals, Transactions of the ASME, 28(1906)
5. Kuljanic, E.: Effect of Initial Cutting Speed on Tool Life in Short Tool Life Test,
Strojniski Vestnik XIII, 1967, 92-95
7. Boston, O.W., Oldacre, W.H., Moir, H.L., Slaughter, E.M.: Machinability Data from
Cold-finished and Heat-treated SAE 1045 Steel, Transactions of the ASME, 28(1906)1
8. Waldman, N.E.: Good and Bad Structures in Machining Steel, Materials and Methods,
25(1947)
9. A Treatise of Milling and Milling Machines, Sec. 2, The Cincinnati Milling Machine
Co., 1946
10. Machinability Data Handbook, Metcut Research Associates Inc., Cincinnati, Ohio,
First Ed. 1966, Second Ed. 1972, Second Printing 1973
11. Kuljanic, E.: Machinability Testing for Advanced Manufacturing Systems, Proc. 3rd
Int. Conf. on Advanced Manufacturing Systems and Technology AMST'93, Udine,
1(1993), 78-89
12. Testing for Face Milling, Internal publication, CIRP, Paris, 1977
13. ISO International Standard: Tool Life Testing in Milling Part 1 - Face Milling, 8688/1,
Stockholm, 1985
14. ISO International Standard: Tool Life Testing in Milling Part 2 - End Milling, 8688/2,
Stockholm, 1985
15. ISO International Standard: Tool Life Testing with Single Point Turning Tools,
Stockholm, 1977
16. Muhren, C., Eriksson, U., Skysted, F., Ravenhorst, H., Gunarsson, S., Akerstom, G.:
Machinability of Materials Applied in Volvo, Proc. 2nd Int. Conf. on Advanced
Manufacturing Systems and Technology AMST'90, Trento, 1(1990), 208-219
17. Kahles, J.: Machinability Data Requirements for Advanced Machining Systems,
Progress report No.2, CIRP S.T.C. Cutting, Paris, 1987
18. Kuljanic, E.: Machining Data Requirements for Advanced Machining Systems, Proc. of
the Int. Conf. on Advanced Manufacturing Systems and Technology AMST'87, Opatija,
1987, 1-8
19. N/C Machinability Data Systems - Numerical Control Series, SME, Dearborn,
Michigan, 1971
20. Weller, E.J., Reitz, C.A.: Optimizing Machinability Parameters with a Computer, Paper
No. MS66-179, Dearborn, Michigan, American Society of Tool and Manufacturing
Engineers, 1966
21. Hatvany, J.: The Efficient Use of Deficient Information, Annals of the CIRP,
32(1983)1, 423-425
22. Peklenik, J., Dolinsek, S.: The Energy Quanta and the Entropy - New Parameters for
Identification of the Machining Processes, Annals of the CIRP, 44(1995)1, 63-68
23. Kuljanic, E.: Effect of Stiffness on Tool Wear and New Tool Life Equation, Journal of
Engineering for Industry, Transactions of the ASME, Ser. B, (1975)9, 939-944
24. Kuljanic, E.: Random Strategy Method for Determining Tool Life Equations, Annals of
the CIRP, 29(1980)1, 351-356
25. Byrne, G., Dornfeld, D., Inasaki, I., Ketteler, G., Konig, W., Teti, R.: Tool Condition
Monitoring (TCM) - The Status of Research and Industrial Application, Annals of the
CIRP, 44(1995)2, 541-568
26. Kuljanic, E.: Materials Machinability for Computer Integrated Manufacturing, Proc.
2nd Int. Conf on Advanced Manufacturing Systems and Technology AMST'90, Trento,
1(1990), 31-45
H. Schulz
Technical University of Darmstadt, Darmstadt, Germany
KEYNOTE PAPER
Molds and dies have to be manufactured at low cost and in ever shorter times. For this reason, the entire product generation process must be examined for opportunities to reduce the time from idea to final product while simultaneously minimizing costs [1-5]. As can be seen from Fig. 1, there are various approaches, with high speed cutting (HSC) of the mold contour being of special importance. However, high speed cutting of steel and cast-iron molds is basically reasonable only for the finishing or pre-finishing operations (Fig. 2). Machining of steel or cast-iron molds must therefore be split up into
rough-machining, on efficient standard NC machines as before, and
finishing or pre-finishing on high speed machines.
[Fig. 1: approaches for reducing time and cost in mold manufacturing - manufacturing with defined cutting edges, manual finishing, CAD/CAM, HSC]
[Fig. 2: breakdown of machining costs in series die and mold production: roughing 12 %, pre-smoothing 25 %, smoothing 25 %, manual finishing 16 %, adapting/finishing 22 %, total 100 %; remainder: material and other costs]
2. CUTTING STRATEGIES
The major objective in metal cutting is the closest possible approximation to the final contour, especially for free-form surfaces. Since high-speed cutting permits feeds five to eight times higher than usual, the cutter lines can, for the same finishing time, be spaced five to eight times closer than in conventional milling. As can be seen from Fig. 3, this results in a much better approximation to the final contour. For completion, only minor corrections of dimensions and surfaces are required, which as a rule are made manually. The manual rework time for smoothing and polishing is thus substantially reduced. For high-speed machining of molds and dies, two basic strategies can therefore be used:
1. High speed cutting itself does not reduce the mechanical manufacturing time, but owing to the close approximation to the final contour, manual rework is reduced substantially.
2. HSC can also be used to reduce the manufacturing times in finishing and pre-finishing operations, but in that case the potential of choosing very close line widths is not fully exploited. This increases rework times to a certain extent.
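The effect of a closer line space on the contour approximation can be illustrated with the usual scallop-height relation for a ball-nose cutter on a locally flat surface; this is a generic geometric sketch, not a formula from the paper, and the numerical values are illustrative only.

```python
import math

def scallop_height(line_space_mm: float, tool_diameter_mm: float) -> float:
    """Cusp (scallop) height left between adjacent cutter paths of a
    ball-nose tool on a flat surface: h = R - sqrt(R^2 - (s/2)^2)."""
    r = tool_diameter_mm / 2.0
    return r - math.sqrt(r * r - (line_space_mm / 2.0) ** 2)

# Tool diameter 10 mm, as in Fig. 3: reducing the line space shrinks the
# scallop height, and hence the manual smoothing allowance, roughly with
# the square of the reduction factor.
for s in (0.8, 0.4, 0.1):
    print(f"line space {s} mm -> scallop {scallop_height(s, 10.0) * 1000:.2f} um")
```

Since h is approximately s²/(8R) for small s, a line space five times closer leaves a scallop about twenty-five times smaller, which is why the remaining manual rework shrinks so strongly.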
[Fig. 3: approximation to the desired contour - oversize, line space and real contour; theoretical roughness (mm) vs. line space (0.2...0.8 mm) for a tool diameter D = 10 mm]
High speed machining has many advantages, but one essential disadvantage: tool wear, which increases substantially with increasing cutting speed. It is therefore necessary to bring this negative effect under control economically by the following means:
1. choice of a suitable cutting material and
2. choice of a suitable milling strategy.
[Fig. 4: tool life vs. tilting angle (-10°...+40°) for a ball-head tool Ø 20 mm, z = 1; material 40CrMnMo7 (1.2311), cutting material K05, overhang 60 mm, vc = 300 m/min, fz = 0.2 mm, VBmax = 0.4 mm]
[Figs. 5 and 6: attainable cutting speeds vc (m/min) and feeds per tooth fz (mm) for different cutting materials - P25 uncoated, P40/50 TiN-coated, K05, oxide ceramics, Si3N4, cermet (also ion-implanted), CBN - at wear criteria VB = 0.2...0.3 mm; technology: down-cut/drawing cut, tilting angle +15°, infeed 1 mm, line space 0.7 mm (0.5 mm with CBN)]
[Figure: tool life (m) vs. rise per tooth (0.1...0.25 mm) for tool overhangs l = 30, 40, 45 and 60 mm; material 40CrMnMo7 (steel), ball head cutter Ø 6 (10) mm, z = 2, cutting material cermet, VB = 0.2 mm; technology: down-cut/drawing cut, tilting angle +15°, cutting speed 300 m/min, line space 0.5 mm (0.6 mm with Ø 10), cutting depth 0.5 mm]
Fig. 7. Cutting length depending on cutting speed and feed
[Figure: tool wear for drawing cut vs. down-cut; material 40CrMnMo7, ball head tool Ø 20 mm, z = 1, P40/P50 TiN-coated, overhang 60 mm]
[Figure: tool life vs. milling strategy - tilting into the feed plane (pushing/drawing cut) vs. tilting crosswise to the feed plane, down-cut; material 40CrMnMo7, ball-head cermet tool; Ø 20 mm, z = 1 (overhang 60 mm, vc = 300 m/min, fz = 0.2 mm, line space 0.7 mm) and Ø 10 mm, z = 2 (overhang 50 mm, spindle speed 20,000 1/min, rise per tooth 0.1 mm, line space 0.6 mm, cutting depth 0.5 mm, cutting in the direction of curvature); VBmax = 0.3 mm]
Typical requirements for a high speed milling machine for die and mold manufacturing are:
spindle power: 7...12 kW
spindle speed: 30,000...40,000 1/min
rapid motion X, Y, Z: 60...80 m/min
machining feed X, Y, Z: 30...50 m/min
acceleration: 20...30 m/s2
measuring system resolution: 0.1 µm
as well as adequate positioning accuracy, dynamic path accuracy and path definition.
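As a plausibility check on these machine data, spindle speed, number of teeth and feed per tooth are tied directly to the table feed by v_f = n · z · f_z; the tool values below are assumed for illustration.

```python
def feed_rate_mm_min(n_rpm: float, z: int, fz_mm: float) -> float:
    """Table feed v_f = n * z * f_z (mm/min)."""
    return n_rpm * z * fz_mm

# A two-fluted cutter (assumed) at 40,000 1/min and 0.2 mm feed per tooth
# needs a table feed of 16 m/min; the 30...50 m/min machining feed quoted
# above therefore corresponds to feeds per tooth of roughly 0.4...0.6 mm.
print(feed_rate_mm_min(40_000, 2, 0.2) / 1000.0, "m/min")   # -> 16.0 m/min
```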
4. CONCLUSION
Application of high speed cutting in die and mold manufacturing results in substantial time reductions. In extreme cases, as shown by a practical example, overall time reductions can be obtained even when the finishing time is slightly longer than on a conventional NC machine. For a mold rough-machined to 0.5 mm oversize, the finishing time on a conventional NC machine amounted to a total of 36 hours and the manual reworking time to 70 hours, giving a total manufacturing time of 106 hours. By comparison, the HSC machine required a finishing time of 40 hours and a manual reworking time of just 14 hours, i.e. a total of 54 hours. The new technology thus reduced the total time by almost 50 % (Fig. 11). For economic evaluation it is therefore indispensable to consider the production planning process as a whole.
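The time balance of this example can be restated as a small calculation (all numbers taken from the text above):

```python
# Times in hours from the practical example above.
conventional = {"NC finishing": 36, "manual rework": 70}
hsc = {"HSC finishing": 40, "manual rework": 14}

total_conventional = sum(conventional.values())   # 106 h
total_hsc = sum(hsc.values())                     # 54 h
reduction = 1.0 - total_hsc / total_conventional
print(total_conventional, total_hsc, f"{reduction:.0%}")   # -> 106 54 49%
```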
[Fig. 11: comparison of total manufacturing times (h): normal NC-machining 106 h (NC-pre-smoothing, NC-smoothing, manual smoothing, manual spot-grinding, manual polishing) vs. HSC-machining 54 h]
5. REFERENCES
KEYNOTE PAPER
1. INTRODUCTION
An ever increasing emphasis on the production of components with "zero defects" is nowadays observed in the automotive and aerospace industries; "zero defects", in fact, means no discards and allows a significant cost reduction. These reasons have generated an increasing interest in developing a general approach to the design of processes aimed at preventing the occurrence of defects.
In the industries named above, cold forming processes are largely used since they make it possible to obtain near-net shape or net shape forged parts characterised by shape and dimensions very close to the final desired ones, thus requiring little or no subsequent machining.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
During cold forming processes several classes of defects can arise; generally, a defect occurs when the forged component does not conform to the design specifications, making it unsuitable for the purpose for which it was designed [1].
In particular, the defects most frequently encountered in bulk metal forming processes can be classified as follows:
- ductile fractures (cracks), occurring inside or on the external surface of the forged component;
- flow defects, which include imperfections such as buckling or folding, linked to plastic instability phenomena;
- shape and dimensional inaccuracies, caused by incomplete filling of the die cavity or by distortions induced by residual stresses;
- surface imperfections, linked to an inappropriate quality of the manufactured surface, unsuitable for the purpose for which the component was designed;
- unacceptable modifications of the mechanical properties of the material, due to the microstructural transformations occurring during the forming process.
Among these classes of defects the first is surely the most important, since its incidence in forged components can have very serious consequences for the automotive and aerospace industries. The production of a defective forged part can in fact generate costs to the manufacturer at several different levels: first of all, if the defect is easily discernible to the naked eye, the costs are linked only to the discarding of the defective component and, in a traditional industrial environment, to the development and experimental testing of a new forming sequence. The costs to the manufacturer increase if the defect is not detectable until all the subsequent processing has been carried out; finally, the worst level of consequence is reached when the forged component fails in service, causing the loss of the facility and sometimes of human lives.
The above considerations show the importance of a new and advanced type of metal forming process design, namely a design to prevent process- and material-related defects. Such a design should be based on general methods able to assist the designer, indicating the role of the process and material parameters in the process mechanics in order to avoid the occurrence of defects.
Actually, the causes of crack initiation in bulk forging processes are complicated, and often there is no unique cause for a defect. Despite these difficulties, the most important causes of defects can be identified in the metallurgical nature of the material, with particular reference to the distribution, geometry and volume fraction of second phase particles and inclusions; in the tribological variables, including die and workpiece lubrication; and finally in the forming process operating parameters, i.e. the forming sequence, the number of steps in the sequence and the geometry of the tools in each step.
Consequently, in the last two decades the attention of most researchers has been focused on understanding the origin of ductile fractures; the mechanism which limits the ductility of a structural alloy has been explained by Gelin et al. [2] and by Tvergaard [3], taking into account the nucleation and growth of a large number of small microvoids. The voids mainly nucleate at second phase particles by decohesion of the particle-matrix interface and grow due to the presence of a tensile hydrostatic stress; as a consequence the ligaments between them thin down until fracture occurs by coalescence.
These phenomena, generally called "damage" or "plastic ductile damage" [4], determine a progressive deterioration of the material, as far as both the elastic and the plastic properties are concerned, decreasing its capability to resist subsequent loading.
Several criteria and theories have been developed and proposed in the literature which take into account the fracture mechanism described above and are aimed at following the evolution of damage and consequently at evaluating the level of "soundness" of the forged material.
On the other hand, other researchers [5,6] have observed that the limitations of the forming operations are linked mostly to plastic instability phenomena, which lead to the localization of the deformations in shear bands or necking zones inside or on the surface of the forged component. Such a strain localization, in fact, favours the growth of damage and leads to ductile fracture. For this reason the attention of these researchers has been focused on the prediction of plastic instability phenomena: sufficient criteria for stability have been proposed, essentially based on the evolution of the incremental work in the neighbourhood of an equilibrium configuration.
In this paper the attention of the authors has been focused on the approaches based on the prediction of damage, which have been applied to a wide range of cold forging processes and have shown good predictive capability. In the next paragraph these approaches will be discussed in detail and a proper classification will be carried out. Finally, some applications to typical bulk cold metal forming processes will be described.
2. MODELLING OF DAMAGE
The aim of quantitatively evaluating the level of damage induced by a plastic forming process has been pursued by several researchers all over the world in the last two decades. Among the approaches published in the literature, three main groups can be identified: the first is based on ductile fracture criteria; the second is founded on the analysis of damage mechanics, by means of specific yield functions for damaging materials and proper models able to analyse the evolution of damage in the stages of nucleation, growth and coalescence of microvoids; and the third is based on the use of the porous materials formulation, in which the yield conditions initially proposed for the analysis of forming processes on porous materials are employed.
The three fundamental approaches are described in the following, highlighting the advantages offered by each of them and discussing their differences, which mainly concern the capability to take into account the influence of damage occurrence on the plastic behaviour of the material and the possibility to consider the nucleation of new microvoids.
2.1 Models based on the use of ductile fracture criteria.
These approaches were the first to be proposed, and still today they represent a powerful industrial tool to assess the "state" of the material, in terms of damage, by comparison with the critical value of the criterion at fracture.
The ductile fracture criteria proposed in the literature depend on the stress and strain conditions occurring in the workpiece during the forming process and have been formulated taking into account the role, described above, of the plastic straining and of the hydrostatic stress in the fracture mechanism. Moreover, the ductile fracture criteria are generally expressed as integrals along the plastic strain path. The first of them, proposed by Freudenthal [8], is written as:
\int_0^{\bar{\varepsilon}_f} \bar{\sigma} \, d\bar{\varepsilon} = C    (1)
Ductile fracture occurs when the plastic deformation energy reaches the critical value C. The validity of this criterion has recently been confirmed by Clift et al. [9], who performed several experiments to compare the results of several ductile fracture criteria and found that the Freudenthal criterion predicted the locations of cracks better than the others. However, the Freudenthal criterion does not take into account the effect of hydrostatic stress, which is known to affect ductile fracture significantly.
The mean stress is in fact regarded as mainly responsible for the occurrence of fracture in the criterion proposed by Oyane [10], which is written in the following form:
\int_0^{\bar{\varepsilon}_f} \left( 1 + A \frac{\sigma_m}{\bar{\sigma}} \right) d\bar{\varepsilon} = C    (2)
The criterion has shown good capability to predict the occurrence of central bursting in drawing processes [11]. Cockcroft and Latham [12] proposed that the maximum principal tensile stress, over the plastic strain path, is mainly responsible for fracture initiation. They postulated that fracture occurs when the integral of the largest principal tensile stress over the plastic strain path to fracture equals a critical value specific to the material:
\int_0^{\bar{\varepsilon}_f} \sigma_1 \, d\bar{\varepsilon} = C    (3)
An empirical modification of the Cockcroft and Latham model has been proposed by Brozzo et al. [13] in order to explicitly include the dependence of ductile fracture on the hydrostatic stress. The Brozzo model is shown in the following equation:

\int_0^{\bar{\varepsilon}_f} \frac{2\sigma_1}{3(\sigma_1 - \sigma_m)} \, d\bar{\varepsilon} = C    (4)

where \sigma_1 is the maximum principal stress and \sigma_m the mean (hydrostatic) stress.
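All four criteria of eqs. (1)-(4) accumulate a stress-dependent weight along the equivalent plastic strain path and compare the result with a critical constant C. A minimal sketch of such an accumulation, on an assumed and purely illustrative stress-strain history at one material point:

```python
def damage_integral(weight, eps_bar):
    """Trapezoidal accumulation of the integral of `weight` over eps_bar."""
    total = 0.0
    for i in range(1, len(eps_bar)):
        total += 0.5 * (weight[i] + weight[i - 1]) * (eps_bar[i] - eps_bar[i - 1])
    return total

# Illustrative strain path and stress histories (MPa) at one material point.
n = 41
eps = [0.8 * i / (n - 1) for i in range(n)]          # equivalent plastic strain
sig_eq = [300.0 + 150.0 * e ** 0.25 for e in eps]    # equivalent (flow) stress
sig_1 = [0.9 * s for s in sig_eq]                    # maximum principal stress
sig_m = [0.3 * s for s in sig_eq]                    # mean (hydrostatic) stress
A = 0.4                                              # Oyane constant (assumed)

freudenthal = damage_integral(sig_eq, eps)                                 # eq. (1)
oyane = damage_integral([1.0 + A * m / s for m, s in zip(sig_m, sig_eq)], eps)  # eq. (2)
cockcroft = damage_integral(sig_1, eps)                                    # eq. (3)
brozzo = damage_integral(
    [2 * s1 / (3 * (s1 - sm)) for s1, sm in zip(sig_1, sig_m)], eps)       # eq. (4)
# Each accumulated value is compared with its own critical constant C.
print(freudenthal, oyane, cockcroft, brozzo)
```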
2.2 Models based on the analysis of the damage mechanics.
In this second group of approaches a specific yield function for the damaging material is adopted, which depends on the second invariant of the deviatoric stress tensor and on the void volume fraction f. The Gurson yield criterion belongs to the so-called "microscale-macroscale" approach [2], i.e. the macroscopic yield function has been derived starting from microscale considerations: Gurson, in fact, started from a rigid-perfectly plastic upper bound solution for spherically symmetric deformations around a single spherical void, and proposed his yield function for a solid with a randomly distributed volume fraction of voids f in the form:
\frac{\sigma_{eq}^2}{\sigma_0^2} + 2 f \cosh\left( \frac{3\sigma_m}{2\sigma_0} \right) - 1 - f^2 = 0    (5)

where

\sigma_{eq} = \sqrt{\frac{3}{2} s_{ij} s_{ij}}    (6)

is the macroscopic effective stress, with s_ij the macroscopic stress deviator, \sigma_m the mean stress and \sigma_0 the yield stress of the matrix (void-free) material. In eq. (5) the effect of the mean stress on the plastic flow when the void volume fraction is non-zero can be easily distinguished, while for f = 0 the Gurson criterion reduces to the von Mises one.
The components of the macroscopic plastic strain rate vector can be calculated by applying the normality rule to the yield criterion written above.
Subsequently, the Gurson yield criterion was modified by Tvergaard [19] and by Tvergaard and Needleman [20], who introduced further parameters, obtaining the following expression:

\frac{\sigma_{eq}^2}{\sigma_0^2} + 2 q_1 f^* \cosh\left( \frac{3\sigma_m}{2\sigma_0} \right) - 1 - (q_1 f^*)^2 = 0    (7)
In the above expression the parameter q_1 allows the interactions between neighbouring voids to be taken into account: Tvergaard in fact obtained the above criterion by analysing the macroscopic behaviour of a doubly periodic array of voids, using a model which takes into account the nonuniform stress field around each void. The value of q_1 was assumed equal to 1.5 by Tvergaard himself. The parameter f*(f), on the other hand, was introduced in substitution of f in order to describe more accurately the rapid decrease of stress-carrying capability of the material associated with the coalescence of voids; f*(f) is defined as:

f^*(f) = f    for f \le f_c
f^*(f) = f_c + \frac{f^*_u - f_c}{f_F - f_c} (f - f_c)    for f > f_c    (8)

where f^*_u = 1/q_1, f_c is a critical value of the void volume fraction at which the material stress-carrying capability starts to decay very quickly (easily discernible in a tensile test curve) and f_F is the void volume fraction corresponding to the complete loss of stress-carrying capability. Again, the constitutive equations associated with the Tvergaard and Needleman yield criterion can be determined by means of the normality rule.
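A minimal numerical sketch of the Tvergaard-Needleman yield function of eq. (7) together with the f*(f) mapping of eq. (8); q1 = 1.5 as quoted above, while the values of fc and fF are assumed for illustration only:

```python
import math

Q1 = 1.5                 # Tvergaard's void interaction parameter (see text)
F_C, F_F = 0.15, 0.25    # critical and final void volume fractions (assumed)

def f_star(f: float) -> float:
    """Accelerated damage variable f*(f) of eq. (8)."""
    f_u = 1.0 / Q1
    if f <= F_C:
        return f
    return F_C + (f_u - F_C) / (F_F - F_C) * (f - F_C)

def gtn_yield(sig_eq: float, sig_m: float, sig_0: float, f: float) -> float:
    """Tvergaard-Needleman yield function of eq. (7); Phi = 0 on the
    yield surface, Phi < 0 inside it."""
    fs = f_star(f)
    return ((sig_eq / sig_0) ** 2
            + 2.0 * Q1 * fs * math.cosh(1.5 * sig_m / sig_0)
            - 1.0 - (Q1 * fs) ** 2)

# For f = 0 the criterion reduces to von Mises: a stress state on the von
# Mises surface gives Phi = 0 regardless of the mean stress.
print(gtn_yield(300.0, 100.0, 300.0, 0.0))   # -> 0.0
```

Note how any f > 0 makes the cosh term active, so that a tensile mean stress expands the plastic flow and drives further damage, exactly the coupling the prose above describes.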
Finally, a model based on the analysis of the damage mechanics must be able to take into account the evolution of damage in its various stages (nucleation, growth, coalescence). At the end of each step of the deformation process the values of the void volume fraction calculated inside the workpiece under deformation have to be updated: since the variation of the void volume fraction depends both on the growth or closure of the existing voids and on the possible nucleation of new voids, the void volume fraction rate has to be written in the form:

\dot{f} = \dot{f}_{growth} + \dot{f}_{nucleation}    (9)

It is very simple to calculate the first term of eq. (9): the principle of conservation of mass applied to the matrix material, in fact, relates the rate of change of the void volume fraction to the volumetric strain rate \dot{\varepsilon}_v, i.e.:

\dot{f}_{growth} = (1 - f) \dot{\varepsilon}_v    (10)
On the other hand, several researchers have focused their attention on the nucleation of new voids. On this topic two main formulations have been proposed: the former is based on the assumption that nucleation is mainly controlled by the plastic strain rate:

\dot{f}_{nucleation} = A \dot{\bar{\varepsilon}}    (11)

while in the latter it is hypothesised that the nucleation of new voids is governed by the maximum normal stress transmitted across the particle-matrix interface, i.e.:

\dot{f}_{nucleation} = B (\dot{\bar{\sigma}} + \dot{\sigma}_m)    (12)

since, as suggested by Needleman and Rice [21], the sum of the effective and the hydrostatic stress can be assumed as a good approximation of the maximum normal stress transmitted across the particle-matrix interface.
As concerns the parameters A and B, Chu and Needleman [22] have proposed that void nucleation follows a normal distribution about a mean equivalent plastic strain or a mean maximum normal stress: as an example, assuming the strain controlled nucleation model, eq. (11) can be rewritten in the form:

\dot{f}_{nucleation} = \frac{f_n}{s \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{\bar{\varepsilon} - \varepsilon_{mean}}{s} \right)^2 \right] \dot{\bar{\varepsilon}}    (13)
where s is the standard deviation which characterises the normal distribution, \varepsilon_{mean} is the mean strain for nucleation and f_n is the volume fraction of voids which could nucleate if sufficiently high strains are reached. Voids are nucleated only in the zones where tensile hydrostatic stresses occur.
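In a step-by-step simulation the evolution equations (9)-(13) reduce to a simple per-increment update of the void volume fraction; the sketch below uses the strain-controlled nucleation model of eq. (13), with all material constants assumed for illustration:

```python
import math

S, EPS_MEAN, F_N = 0.1, 0.3, 0.04   # assumed Chu-Needleman parameters

def nucleation_rate(eps_bar: float) -> float:
    """A(eps_bar) of eq. (13): normal distribution about EPS_MEAN."""
    return F_N / (S * math.sqrt(2.0 * math.pi)) * \
        math.exp(-0.5 * ((eps_bar - EPS_MEAN) / S) ** 2)

def update_f(f, eps_bar, d_eps_bar, d_eps_v, tensile_hydrostatic=True):
    """One increment of eq. (9): growth term of eq. (10) plus the
    strain-controlled nucleation term of eq. (13); nucleation is
    activated only where the hydrostatic stress is tensile."""
    df = (1.0 - f) * d_eps_v                        # eq. (10)
    if tensile_hydrostatic:
        df += nucleation_rate(eps_bar) * d_eps_bar  # eq. (13)
    return f + df

# Illustrative monotonic loading path at one material point.
f, eps = 0.001, 0.0
for _ in range(100):
    f = update_f(f, eps, d_eps_bar=0.005, d_eps_v=0.001)
    eps += 0.005
print(f)   # f has grown through both growth and nucleation
```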
The introduction of the yield condition for the damaging material and of the damage evolution model into a finite element formulation permits the evolution of the void volume fraction variable to be followed, and consequently the level of "soundness" of the forged components to be evaluated, by comparing the current value of the void volume fraction with the critical value fc, which corresponds to the coalescence of the microvoids and to the steep decay of the stress-carrying capability of the material.
However, the above considerations highlight that the use of a model based on the analysis of the damage mechanics requires a complete rewriting of the numerical code used to simulate the forming process. Moreover, the formulation depends on several parameters, as regards both the yield condition and the damage evolution model, which must be properly selected in order to obtain a suitable prediction of fracture occurrence [23,24]. Actually this fact could represent a very important advantage of this type of approach, since it is possible to "calibrate" the model with respect to the actual state and properties of the material taken into account; on the other hand, it makes the use of complex analytical tools necessary in the stage of parameter selection. This aim has been pursued by Fratini et al. [25], applying an inverse identification algorithm based on an optimization technique which determines the material parameters by comparing numerical and experimental results and searching for the best match between them. In particular, the load vs. displacement curve of a tensile test on a sheet specimen has been employed to optimize the comparison between the numerical and the experimental results, and consequently to achieve the desired material characterisation.
The above considerations justify the observation that, even if the models based on the analysis of the damage mechanics are certainly the most correct from the theoretical point of view, their practical use in an industrial environment is limited to very few applications, while they encounter larger interest in the academic and research fields.
2.3 Models based on the porous material formulation.
In some recent papers a further approach for the prediction of ductile fractures has been proposed, based on the formulation generally employed to simulate forming processes on powder materials. Actually, in those processes the powder material is compacted and consequently its relative density increases; on the contrary, the application of the porous materials formulation to the analysis of ductile fracture initiation is aimed at predicting the reduction of the relative density (corresponding to the increment of the void volume fraction) associated with the occurrence of the defect.
The porous materials formulation is based on the yield condition initially proposed by Shima and Oyane [26], which depends on the first invariant of the stress tensor, on the second invariant of the deviatoric stress tensor and on a scalar parameter which is, instead of the void volume fraction f as in the previous models, the relative density R.
The Shima and Oyane yield function can be written in the form:
The associated evolution of the relative density follows from the conservation of mass of the matrix material:

\frac{\dot{R}}{R} = -\dot{\varepsilon}_v    (15)
From the above considerations it follows that the porous materials formulation can be applied to the prediction of fractures only by assuming the existence of an initial relative density lower than 100%. In other words, it is necessary to hypothesise that all the voids that could nucleate do nucleate as soon as a material element enters the plastic zone; subsequently, depending on the stress conditions, they can grow or close. This assumption can be interpreted as a particular plastic strain controlled nucleation case, in which the plastic strain for nucleation coincides with the tensile yield strain [28]. Furthermore, such an assumption requires an appropriate choice of the initial relative density, depending on the purity of the considered material.
Nevertheless, the application of this approach to the prediction of central bursting in the drawing process of aluminum alloy and copper rods has provided very good agreement between the numerical and the experimental results, as will be shown in the next paragraph [23,29,30].
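Since R = 1 - f, the porous-material formulation tracks the same damage information as the void volume fraction models. A sketch of following eq. (15) over a sequence of reductions and flagging the critical density (the per-pass dilatation and the critical value are assumed for illustration, while R0 is the value quoted later in the text for copper):

```python
R0 = 0.9998      # initial relative density assumed for pure copper (see text)
R_CRIT = 0.997   # critical relative density from the tensile test (assumed)

def update_R(R: float, d_eps_v: float) -> float:
    """First-order update of eq. (15), dR/R = -d(eps_v), per increment."""
    return R * (1.0 - d_eps_v)

# Note that R = 1 - f, so eq. (15) is the counterpart of the growth term
# (1 - f) * d(eps_v) used in the void volume fraction models.
R, burst = R0, False
for reduction in range(6):        # six passes, as in the drawing sequences
    R = update_R(R, 0.001)        # illustrative dilatation per pass
    if R < R_CRIT:
        burst = True              # coalescence: central bursting predicted
        break
print(R, burst)
```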
3. APPLICATIONS
A very large number of applications of numerical methods belonging to the three fundamental groups described in the previous paragraph can be found in the literature. Here a particular problem will be considered, namely the occurrence of the well known central bursting defect during extrusion and drawing.
Central bursts are internal defects which cause very serious problems for the quality control of the products, since it is impossible to detect them by means of a simple surface inspection of the workpiece. In the past, some experimental and theoretical studies [31,32,33,34] were performed in order to understand the origin of the defect and consequently to determine the influence of the main operating parameters (reduction in area, die cone angle, friction at the die-workpiece interface and material properties) on the occurrence of central bursts. The development of powerful numerical codes and suitable damage models has provided a new fundamental tool for pursuing this aim. Some researchers [11,14,35] have used ductile fracture criteria, others have applied models based on the analysis of the damage mechanics [23,28], and finally some researchers belonging to the group of the University of Palermo have used the model based on the porous materials formulation, and in particular on the Shima and Oyane yield condition [29,30]. Some interesting results obtained by the latter group are presented below.
First of all, the research focused on the drawing process of aluminum alloy (UNI 3571) specimens. The influence of the operating parameters (reduction in area and semicone die angle) on the occurrence of central bursting has been investigated using both a damage mechanics model founded on the Tvergaard and Needleman yield condition and the porous materials formulation, based on the Shima and Oyane plasticity condition. In order to compare the numerical predictions obtained with the two approaches, no nucleation of new microvoids has been considered in the former model and, taking into account the high percentage of inclusions and porosities of the material, the value of the initial void volume fraction has been fixed equal to 0.04 (i.e. the relative density has been fixed equal to 0.96).
A tension test has been performed, both to obtain the constitutive equation of the material to be used in the numerical simulation, and to determine the elongation ratio and the necking coefficient at fracture; these values, in fact, are necessary in order to evaluate, by comparison with the numerical simulation of the tensile test, the critical value of the void volume fraction (or of the relative density). In particular, the numerical analysis has been stopped after a total displacement of the nodes of the specimen (assumed clamped to the testing machine) equal to the elongation at fracture, and the maximum void volume fraction reached at this stage (or the minimum relative density) has been assumed as the critical value fc corresponding to the rapid decay of the stress-carrying capability of the material.
Subsequently, the numerical analysis has been applied to several drawing processes, characterised by different reductions in area and die cone angles: for each of them the maximum achieved value of the void volume fraction has been calculated and, by comparison with the critical value, it has been evaluated whether the coalescence of voids, i.e. the initiation of ductile fracture, should occur or not. The numerical results have shown good agreement with the experimental ones.
Subsequently, in order to test the suitability of the employed models on an actual industrial problem, the research focused on the prediction of bursting occurrence in the drawing process of commercially pure copper specimens. This material, in fact, is typically used in industrial drawing operations, since it is characterised by a very low value of initial porosity and presents a small amount of inclusions; consequently, depending on the operating parameters (i.e. the reduction in area and the semicone die angle), defects may occur only after several drawing steps.
The prediction of defect occurrence has been carried out employing the model based on the porous materials formulation and assuming an initial value of the relative density equal to R = 0.9998. Again, this parameter has been selected by simulating the tensile test and choosing the value of R which gives good agreement between the numerical and the experimental results in terms of elongation ratio and necking coefficient.
Several drawing sequences, characterised by different values of the operating parameters, have been taken into account, according to Table 1:
Table 1 - Tested combinations of reduction in area RA and semicone die angle a

            a = 8    a = 12    a = 15    a = 20    a = 22
RA = 10%      X                   X         X
RA = 20%      X        X          X         X
For each sequence, six subsequent reductions are carried out, maintaining the same reduction in area and semicone die angle throughout.
The numerical simulations have supplied the relative density distributions inside the specimen along the deformation path. In particular, depending on the operating parameters, tensile mean stresses occur in the zone of the specimen close to the symmetry axis, determining a reduction of the relative density.
The relative density trends for several reduction in area - semicone die angle couples are reported in Fig. 1: for some operating conditions only a slight variation of the relative density with respect to the initial value occurs, while for other conditions a steep decay of the relative density arises after a few reductions. Moreover, in the latter cases, in correspondence with the large reduction of the relative density, the numerical simulation stops due to the presence of a non-positive definite stiffness matrix. This condition is the indication of the plastic collapse of the material, i.e. of the occurrence of the defect.
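The collapse indicator mentioned above, i.e. the loss of positive definiteness of the tangent stiffness matrix, can be detected by attempting a Cholesky factorization, which exists only for positive definite matrices; a toy sketch on 2 x 2 matrices (the matrix values are purely illustrative):

```python
def is_positive_definite(k) -> bool:
    """Attempt a Cholesky factorization of the symmetric matrix k
    (list of rows); the factorization exists iff k is positive definite."""
    n = len(k)
    low = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(low[i][p] * low[j][p] for p in range(j))
            if i == j:
                d = k[i][i] - s
                if d <= 0.0:
                    return False       # non-positive pivot: not PD
                low[i][i] = d ** 0.5
            else:
                low[i][j] = (k[i][j] - s) / low[j][j]
    return True

k_sound = [[4.0, 1.0], [1.0, 3.0]]      # healthy tangent stiffness
k_collapsed = [[4.0, 5.0], [5.0, 3.0]]  # indefinite: plastic collapse
print(is_positive_definite(k_sound), is_positive_definite(k_collapsed))   # -> True False
```

This is exactly the check an implicit finite element solver performs implicitly at each factorization of the global stiffness matrix, which is why the simulation stops when collapse sets in.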
[Fig. 1 - Relative density trends: relative density (0.96...1.00) vs. number of reductions for the tested combinations RA = 10% (a = 8, 15, 20) and RA = 20% (a = 8, 12, 15, 20)]
The same combinations of the operating parameters reported in Table 1 have been experimentally tested, repeating each sequence 10 times.
Table 2 reports the results of the experimental tests: for each combination of the operating parameters, the number of defective specimens at each drawing step is reported.
Table 2 - Number of defective specimens per drawing step (10 specimens per sequence)

                     Drawing steps:  1st   2nd   3rd   4th   5th   6th
RA = 10%, a = 8
RA = 10%, a = 20
RA = 20%, a = 8
RA = 10%, a = 15
RA = 20%, a = 12
RA = 20%, a = 15
RA = 20%, a = 20
The experimental results confirmed the numerical predictions: no defects were detected in the "safe" sequences, i.e. those characterised by a smooth variation of the calculated relative density.
Very recently an approach based on the analysis of the damage mechanics has been applied
by some researchers belonging to the group of the University of Palermo to the prediction
of tearing in a typical sheet metal forming process, namely the deep drawing of square
boxes [36]. The yield condition for damaging materials proposed by Tvergaard and
Needleman and a strain controlled nucleation model have been introduced into a finite
element explicit code in order to follow the evolution of the void volume fraction variable
during the deep drawing operation.
First of all the model has been characterised with reference to the steel used in the
experimental tests by means of the inverse identification approach described in reference
[25]. Subsequently it has been used to predict the onset of tearing in the deep
drawing process as the blank diameter and the lubricating conditions were varied. In
particular the void volume fraction trend has been analysed, and it has been assumed that
tearing occurs when the calculated void volume fraction reaches the critical value fc
associated with voids coalescence.
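The tearing criterion just described reduces to monitoring the computed void volume fraction against the critical value fc; a minimal sketch (fc and the history values are illustrative placeholders, not data from the paper):

```python
def first_tearing_step(f_history, f_c):
    """Return the index of the first increment at which the void volume
    fraction reaches the critical value f_c (voids coalescence), or None
    if tearing is never predicted."""
    for step, f in enumerate(f_history):
        if f >= f_c:
            return step
    return None

# Illustrative void volume fraction evolution over forming increments:
f_history = [0.001, 0.004, 0.012, 0.031, 0.055]
print(first_tearing_step(f_history, f_c=0.03))  # -> 3
print(first_tearing_step(f_history, f_c=0.10))  # -> None
```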
(Plot omitted: numerical and experimental curves for the (G) and (T) conditions versus the ratio Da/Dp, ranging from about 1.5 to 2.1.)
4. REFERENCES
[1] Dodd, B., Defect in Cold Forging, Final Report of the Materials and Defects Sub-group
of the International Cold Forging Group, Osaka, 1993.
[2] Gelin, J. C., Predeleanu, M., Recent Advances in Damage Mechanics: Modelling and
Computational Aspects, Proc. of Numiform '92, pp. 89-98, 1992.
[3] Tvergaard, V., Influence of Void Nucleation on Ductile Shear Fracture at a Free
Surface, J. Mech. Phys. Solids, vol. 30, no. 6, pp. 399-425, 1982.
[4] Predeleanu, M., Finite Strain Plasticity Analysis of Damage Effects in Metal
Forming Processes, Computational Methods for Predicting Material Processing
Defects, pp. 295-307, 1987.
[5] Hill, R., A General Theory of Uniqueness and Stability in Elastic-Plastic Solids, J.
Mech. Phys. Solids, vol. 6, pp. 236-249, 1958.
[6] Storen, S., Rice, J. R., Localized Necking in Thin Sheets, J. Mech. Phys. Solids,
vol. 23, pp. 421-441, 1975.
[7] Barcellona, A., Prediction of Ductile Fracture in Cold Forging Processes by FE
Simulations, Proc. of X CAPE, 1994.
[8] Freudenthal, A. M., The Inelastic Behaviour of Solids, Wiley, New York, 1950.
[9] Clift, S. E., Hartley, P., Sturgess, C. E. N., Rowe, G. W., Fracture Prediction in
Plastic Deformation Processes, Int. J. of Mech. Sci., vol. 32, no. 1, pp. 1-17, 1990.
[10] Oyane, M., Sato, T., Okimoto, K., Shima, S., Criteria for Ductile Fracture and their
Applications, J. of Mech. Work. Tech., vol. 4, pp. 65-81, 1980.
[11] Alberti, N., Barcellona, A., Masnata, A., Micari, F., Central Bursting Defects in
Drawing and Extrusion: Numerical and Ultrasonic Evaluation, Annals of CIRP,
vol. 42/1, pp. 269-272, 1993.
[12] Cockcroft, M. G., Latham, D. J., Ductility and the Workability of Metals, J. Inst.
Metals, vol. 96, pp. 33-39, 1968.
[13] Brozzo, P., De Luca, B., Rendina, R., A New Method for the Prediction of
Formability Limits in Metal Sheets, Proc. of the 7th Conference of the International
Deep Drawing Research Group, 1972.
[14] Ayada, M., Higashino, T., Mori, K., Central Bursting in Extrusion of
Inhomogeneous Materials, Advanced Technology of Plasticity, vol. 1, pp. 553-558,
1987.
[15] McClintock, F., Kaplan, S. M., Berg, C. A., Ductile Fracture by the Hole Growth in
Shear Bands, Int. J. of Fract. Mech., vol. 2, p. 614, 1966.
[16] Rice, J., Tracey, D., On the Ductile Enlargement of Voids in Triaxial Stress Fields,
J. of Mech. Phys. Solids, vol. 17, 1969.
[17] Osakada, K., Mori, K., Kudo, H., Prediction of Ductile Fracture in Cold Forming,
Annals of the CIRP, vol. 27/1, pp. 135-139, 1978.
[18] Gurson, A. L., Continuum Theory of Ductile Rupture by Void Nucleation and
Growth: Yield Criteria and Flow Rules for Porous Ductile Media, J. of Eng. Mat.
Tech., vol. 99, pp. 2-15, 1977.
[19] Tvergaard, V., Ductile Fracture by Cavity Nucleation between Larger Voids, J. Mech.
Phys. Solids, vol. 30, no. 4, pp. 265-286, 1982.
[20] Needleman, A., Tvergaard, V., An Analysis of Ductile Rupture in Notched Bars, J. of
Mech. Phys. Solids, vol. 32, no. 6, pp. 461-490, 1984.
[21] Needleman, A., Rice, J.R., in Mechanics of Sheet Metal Forming, edited by D.P.
Koistinen et al., Plenum Press, New York, p. 237, 1978.
[22] Chu, C.C., Needleman, A., J. Eng. Mat. Tech., vol. 102, pp. 249-256, 1980.
[23] Alberti, N., Barcellona, A., Cannizzaro, L., Micari, F., Predictions of Ductile
Fractures in Metal Forming Processes: an Approach Based on the Damage
Mechanics, Annals of CIRP, vol.43/1, pp.207-210, 1994.
[24] Alberti, N., Cannizzaro, L., Micari, F., Prediction of Ductile Fractures Occurrence in
Metal Forming Processes, Proc. of the II AITEM Conference, pp.157-165, 1995.
[25] Fratini, L., Micari, F., Lombardo, A., Material Characterization for the Prediction of
Ductile Fractures Occurrence: an Inverse Approach, accepted for publication in the
Proceedings of the Metal Forming '96 Conference.
[26] Shima, S., Oyane, M., Plasticity Theory for Porous Metals, Int. J. of Mech. Sci., vol.
18, p. 285, 1976.
[27] Kobayashi, S., Oh, S. I., Altan, T., Metal Forming and the Finite Element Method,
Oxford University Press, 1989.
[28] Aravas, N., The Analysis of Void Growth that Leads to Central Bursts during
Extrusion, J. of Mech. Phys. Solids, vol. 34, no.1, pp.55-79, 1986.
[29] Alberti, N., Borsellino, C., Micari, F., Ruisi, V.F., Central Bursting Defects in the
Drawing of Copper Rods: Numerical Predictions and Experimental Tests,
Transactions of NAMRI/SME, vol.23, pp.85-90, 1995.
[30] Borsellino, C., Micari, F., Ruisi, V.F., The Influence of Friction on Central Bursting
in the Drawing Process of Copper Specimens, Proceedings of the International
Conference on Advances in Materials and Processing Technologies (AMPT'95),
pp.1230-1239, 1995.
[31] Avitzur, B., Analysis of Central Bursting Defects in Extrusion and Wire Drawing,
Trans. ASME, ser. B, vol.90, pp.79-91, 1968.
[32] Orbegozo, J. I., Fracture in Wire Drawing, Annals of CIRP, vol. 16/1, pp. 319-322,
1968.
[33] Avitzur, B., Choi, C. C., Analysis of Central Bursting Defects in Plane Strain
Drawing and Extrusion, Trans. ASME, ser. B, vol.108, pp.317-321, 1986.
[34] Moritoki, H., Central Bursting in Drawing and Extrusion under Plane Strain,
Advanced Technology of Plasticity, vol.l, pp.441-446, 1990.
[35] Hingwe, A. K., Greczanik, R. C., Knoerr, M., Prediction of Internal Defects by Finite
Element Analysis, Proc. of the 9th International Cold Forging Congress, pp. 209-216, 1995.
[36] Micari, F., Fratini, L., Lo Casto, S., Alberti, N., Prediction of Ductile Fracture
Occurrence in Deep Drawing of Square Boxes, accepted for publication in the Annals of
CIRP, vol. 45/1, 1996.
KEYNOTE PAPER
Progress in metal forming operations depends, among other factors, upon improved
evaluation of single and combined effects of process parameters on product, entailing
reliable estimation of material properties, both initial and as modified during production.
The interplay between theory and experiment leads to enhanced modeling capability, and in
turn to identification of areas with sizable potential for improvement in terms of process and
product quality. Steel wire drawing is no exception, particularly in view of the ever
increasing demands on production rate, product integrity, and process reliability. Some
results obtained over a decade of applied research work performed on this subject at
Politecnico di Torino are presented in this paper.
As a first step in process modeling, theoretical description of the basic wire drawing
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
D. Antonelli et al.
operation was relied upon to provide an assembly of single die functional models for
simulation of an extended range of multipass production machines. Reduced drawing stress
was evaluated (in terms of yield stress) according to either the classic slab method, or the
upper-bound method, leading to fairly close results. Strain hardening, and related issues
concerning central bursting, were also modeled according to established theory [Avitzur
1963, Chen et al. 1979, Godfrey 1942, MacLellan 1948, Majors 1955, Yang 1961].
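For reference, the classic slab-method estimate of the drawing stress for a single conical die pass can be sketched as follows (standard textbook form with Coulomb friction, neglecting back tension and redundant work; the notation is generic, not necessarily the authors' own):

```python
import math

def drawing_stress_ratio(r: float, alpha_deg: float, mu: float) -> float:
    """Slab-method estimate of drawing stress over mean flow stress for one
    conical die pass: sigma_d / Y = ((1 + B) / B) * (1 - (1 - r)**B),
    with B = mu / tan(alpha) and r the fractional reduction in area."""
    B = mu / math.tan(math.radians(alpha_deg))
    return (1.0 + B) / B * (1.0 - (1.0 - r) ** B)

# Example: 20% reduction, 8 deg semicone angle, friction coefficient 0.05
print(round(drawing_stress_ratio(0.20, 8.0, 0.05), 3))  # ~0.291
```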
Fig. 1. Computed percentage reduction r %, drawing force Fz and back tension Fb plotted
(for a constant die angle 2a) versus step no. NS for a 19 pass drawing machine.
Fig. 2. Typical die configuration, a), and relevant hoop strain pattern, b), as evaluated with
FEM. Strain levels shown range between -6 µε (A) and 24 µε (Q).
Software developed accordingly, catering for easy, user-friendly comparative evaluation of
drawing sequences on a what-if basis, see e.g. Fig. 1 [Zompi et al. 1990], proved practical
enough to warrant regular exploitation in an industrial environment for process planning and
troubleshooting work.
Elastic stress distribution in the die was also modeled, typical results being shown in Fig. 2,
and checked at selected locations against strain gage results, the latter involving some tricky
undertaking as mechanical and thermal strains have by and large the same order of
magnitude, not to mention low strain levels and boundary condition problems. Quantitative
information concerning scatter in drawn wire size and drawing force was also obtained,
showing that while average drawing force increases slightly with drawing speed, scatter was
found to be by and large constant over a 10:1 speed range [Zompi et al. 1991].
Fig. 3. Computed mean stress, a), and equivalent plastic strain pattern, b), corresponding to
die angle α=12°, reduction r=16%, friction coefficient µ=0.05 and strain hardening
coefficient 0.32.
As detailed analysis of effects of the drawing process on wire was sought, single pass
operation was further modeled with FEM and run over a comprehensive array of
combinations of operating parameters and material properties, namely die cone angle,
percentage reduction, friction and strain hardening coefficients. Stress and strain patterns
covering large plastic deformation were obtained, see Fig. 3, with features which enabled
validation against established theoretical models; response variables included drawing force,
peak normal stress and normal stress gradient.
Concise polynomial models were developed, explaining with a handful of terms between
95% and 99% of the variation according to the response variable considered, a common
feature being the predominant influence of quadratic and interaction terms, whose effects
were frequently found to exceed those pertaining to first order ones. A detailed description
of the modifications in mechanical properties produced by drawing was thus obtained, and
peculiarities such as those leading to a drawn wire diameter smaller than that pertaining to
the die throat were demonstrated, supported by experimental evidence, the influence of
contact length being also a factor to be reckoned with [Zompi et al. 1994].
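A quadratic response-surface model of this kind, with linear, squared and interaction terms, can be fitted by ordinary least squares; a minimal two-factor sketch with synthetic data (the study used four factors; the response values here are invented purely to exercise the fit):

```python
import numpy as np

# Two illustrative factors (die semicone angle, reduction) on a 3x3 grid.
X = np.array([[a, r] for a in (6.0, 9.0, 12.0) for r in (0.15, 0.20, 0.25)])

def quadratic_design(X):
    """Columns: intercept, linear, squared and interaction terms."""
    a, r = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), a, r, a**2, r**2, a * r])

# Invented response generated from a known quadratic law, so the fit
# should recover the coefficients [1, 0.5, -2, 0.1, 0, 3] exactly.
y = 1.0 + 0.5 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 0] ** 2 + 3.0 * X[:, 0] * X[:, 1]

beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
print(np.round(beta, 6))
```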
Realistic simulation of the manufacturing process, covering quality related aspects, entails
however evaluation of cumulative effects of consecutive drawing passes on wire properties,
inclusive of such defect evolution as may trigger either in process breakage or delayed
failure. Since minor flaws may eventually cause sizable effects, refined material testing
procedures are needed, particularly because some crucial information is available only in the
neighborhood of plastic instability in tensile tests [Brown and Embury 1973, Goods and
Brown 1979].
2. MULTIPASS WIREDRAWING MODEL
A numerical model consisting of a rigid die interacting with a deformable wire was built in
order to describe the stress and strain patterns over a number of drawing steps, also in view
of predicting impending wire fracture by monitoring void propagation. Symmetry permits
two dimensional representation; nominal parameters were selected to match those of die
sets exploited in laboratory drawing tests. Initial model wire shape is a cylinder 5.5 mm dia.
with a tapered portion engaging the first die, a stub section 4.86 mm dia. being provided for
drawing force application. The smallest length consistent with reaching stationary drawing
conditions was selected to keep computing time under control.
The mesh is made up of some 400 quadrangular axisymmetric linear isoparametric elements
with full integration; boundary conditions include axial symmetry and constant displacement
increments over the leading (right hand) edge of the model. An elasto-plastic flow curve
material model with strain hardening is assumed, with void induced damage described
according to established models [Gurson 1977, Tvergaard 1984]. Coulomb friction is
assumed, with a coefficient µ=0.08 arrived at by matching test results with theoretical
estimates [Avitzur 1964, Yang 1961]. Some approximation is entailed, as friction is known
to vary somewhat over the set of drawing steps, being typically larger in the first one
owing mainly to coil surface conditions as affected by decarburization due to previous
manufacturing operations.
The combined effects of reduced model length, rigid end displacement and finite number of
elements induce some fluctuation in computed drawing force. Model size and discretization
are a compromise between accuracy and computational time, the latter being fairly sizable
(about 2 hours per pass on an IBM RISC/6000 platform) as the implicit Euler scheme
resorted to for damage model integration requires rather small integration steps, say of the
order of a few hundredths of the smallest element, if discretization errors are to be kept
within reasonable bounds.
The model is remeshed whenever element distortion exceeds a given limit, and a tapered section
matched to next die angle is introduced after every pass. Length increases due to drawing
must be offset in order to avoid unnecessarily large run times after the first drawing steps.
Ad hoc routines were developed to automate remeshing, a prerequisite for industrial
application.
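A remeshing trigger of this kind can be sketched as a simple distortion metric per quadrilateral element, e.g. the worst deviation of its interior angles from 90 degrees (both the metric and the 40 degree limit are illustrative choices, not the ones used in the paper):

```python
import math

def max_angle_deviation(quad):
    """Worst absolute deviation (degrees) of a quad's interior angles from 90.

    quad: four (x, y) corner points listed in order around the element.
    """
    worst = 0.0
    for i in range(4):
        p_prev, p, p_next = quad[i - 1], quad[i], quad[(i + 1) % 4]
        v1 = (p_prev[0] - p[0], p_prev[1] - p[1])
        v2 = (p_next[0] - p[0], p_next[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angle = math.degrees(math.acos(dot / norm))
        worst = max(worst, abs(angle - 90.0))
    return worst

def needs_remesh(quad, limit_deg=40.0):
    return max_angle_deviation(quad) > limit_deg

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sheared = [(0, 0), (1, 0), (1.9, 1), (0.9, 1)]  # heavily sheared element
print(needs_remesh(square))   # False
print(needs_remesh(sheared))  # True
```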
3. DRAWING AND TENSILE TESTS
Laboratory tests were performed on C70 steel wire specimens ranging from 5.5 to 2.75 mm
diameter, the latter being obtained from the former in six steps of cold drawing. Main data
concerning the dies used (throat dia., angle, reduction ratio and average true strain) are
given in Table 1.
Table 1. Main die parameters and related values
Die No.   d_out [mm]   α [°]   r %    ε = ln(A0/A)
1         4.90         10.2    20.6   0.23
2         4.30         10.7    23.0   0.49
3         3.90         10.7    17.7   0.69
4         3.35         10.8    26.2   0.99
5         3.05         13.6    17.1   1.18
6         2.75         17.6    18.7   1.39
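The last column of Table 1 follows directly from the diameters, since ε = ln(A0/A) = 2 ln(d0/d) for round wire drawn from d0 = 5.5 mm; a quick check:

```python
import math

d0 = 5.5  # initial wire diameter, mm
d_out = [4.90, 4.30, 3.90, 3.35, 3.05, 2.75]  # die throat diameters, mm

# Cumulative true strain after each die: eps = ln(A0/A) = 2*ln(d0/d)
eps = [round(2 * math.log(d0 / d), 2) for d in d_out]
print(eps)  # [0.23, 0.49, 0.69, 0.99, 1.18, 1.39] as in Table 1
```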
Fig. 4. Strain hardening exponent, yield and ultimate stress obtained from tensile tests at
each drawing step.
Material properties were evaluated from tensile tests (after straightening specimens if
required), at least three tests being performed at every drawing step. Tensile and drawing
tests were carried out on a hydraulic material testing machine, with continuous monitoring
of force, strain and crosshead displacement. Flow, yield (at 0.2% strain) and ultimate
stresses were measured, and the strain hardening exponent n was evaluated, according to
ASTM E 111 and E 646-91, see Fig. 4.
The flow curve of the material was obtained by averaging several experimental tensile test
curves; sizable prestraining of wire occurred before testing. As a consequence some scatter
in Young's modulus was found, as underlined elsewhere [Hajare 1995]. An inherent
limitation of the tensile test lies in the small value of uniform plastic strain obtained before
the start of necking and then failure. After the onset of necking precious little data may be
obtained with conventional techniques but for the local necking strain, leading to rather
poor accuracy in the evaluation of the flow curve. As simulation shows that the equivalent
strain reached in proximity of the die-wire interface exceeds by far what can be reached in
tensile tests, extrapolation becomes necessary, entailing sizable uncertainties.
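Such an extrapolation is commonly based on a power (Hollomon) law, sigma = K * eps**n, fitted to the pre-necking data in log-log coordinates and extended to large strains; a minimal sketch with illustrative data points (placeholder numbers, not the C70 measurements):

```python
import math

# Illustrative (true strain, true stress [MPa]) pairs from before necking:
data = [(0.02, 620.0), (0.05, 760.0), (0.10, 900.0)]

# Fit sigma = K * eps**n by linear regression in log-log coordinates.
xs = [math.log(e) for e, _ in data]
ys = [math.log(s) for _, s in data]
m = len(data)
x_bar, y_bar = sum(xs) / m, sum(ys) / m
n = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
K = math.exp(y_bar - n * x_bar)

# Extrapolate to the large equivalent strains reached near the die-wire
# interface, well beyond the uniform-elongation range of the tensile test.
sigma_at_eps_1 = K * 1.0 ** n
print(round(n, 3), round(K, 1), round(sigma_at_eps_1, 1))
```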
Φ = Φ(σeq², σ̄², σm, f) = 0

where σeq, σ̄ and σm are respectively the von Mises equivalent stress for the damaged
material, the yield stress of the sound material, and the mean normal stress, f being a scalar
state function representing the relative void volume fraction at any point of the specimen,
thus indicating in a simple way the path towards ductile failure. The yield condition now
depends on hydrostatic pressure and void volume; an increase of either mean normal stress
and/or void volume fraction reduces the equivalent stress and consequently the
stress-carrying capability.
state equation defining porosity rate has the form:
df = df_nucleation + df_growth
df_nucleation = df(dεeq, dσm),   df_growth = df(f, dεv)

where εeq is the equivalent plastic strain and εv the volumetric strain. Change in porosity
prior to failure is due to the combined effect of void nucleation and growth. Nucleation rate
can be strain and/or stress controlled depending on the distribution of void nucleating
Table 2. Damage model parameters (Gurson-Tvergaard) used in multipass simulation
(fitted values: 0%, 30%, 1.2, 5%, 15%, 20%).
(Fig. 5, plot omitted: numerical and experimental nominal stress versus nominal strain curves, up to about 10% nominal strain.)
dilatancy, while a negative one tends to close cavities. Eventually, coalescence occurs when
porosity exceeds an established threshold, while upon reaching a higher ultimate level
catastrophic failure is initiated.
The damage model was identified by fitting FEM simulations of the wire tensile test to
experimental data with an iterative process; initial values were taken from the literature
[Tvergaard and Needleman 1984]. Model parameters were selected as corresponding to the
best fit to experimental stress-strain plots in the plastic range (see Fig. 5) and most closely
matching the reduction of area at fracture due to necking. MARC and ABAQUS codes were
used, both catering for the damage model considered. Realistic values were obtained (see
Table 2), well within the range of values reported in the literature [Chu and Needleman
1980, Tvergaard 1982]. Strain-controlled nucleation is adopted since the major ductile
mechanism in an axisymmetric geometry is assumed to rely on void formation by small
particle-matrix decohesion and failure by coalescence [Brown and Embury 1973].
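For the record, the Gurson-Tvergaard yield surface has the standard form (σeq/σ̄)² + 2 q1 f cosh(3 q2 σm/2σ̄) − 1 − (q1 f)² = 0; the sketch below solves it for the admissible equivalent stress and reproduces the softening effect of porosity and hydrostatic tension described in the text (q1, q2 and the stress values are illustrative, not the fitted parameters of Table 2):

```python
import math

def sigma_eq_at_yield(sigma_bar, sigma_m, f, q1=1.5, q2=1.0):
    """Equivalent stress admissible on the Gurson-Tvergaard yield surface:
    (s_eq/s_bar)^2 + 2*q1*f*cosh(1.5*q2*s_m/s_bar) - 1 - (q1*f)^2 = 0."""
    val = 1.0 + (q1 * f) ** 2 \
        - 2.0 * q1 * f * math.cosh(1.5 * q2 * sigma_m / sigma_bar)
    return sigma_bar * math.sqrt(val)

# Growing porosity lowers the stress-carrying capability...
lo_f = sigma_eq_at_yield(500.0, 250.0, f=0.01)
hi_f = sigma_eq_at_yield(500.0, 250.0, f=0.05)
print(lo_f > hi_f)  # True
# ...and so does increasing hydrostatic tension at fixed porosity.
print(sigma_eq_at_yield(500.0, 0.0, f=0.05) > hi_f)  # True
```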
Fig. 6. Computed contour plots showing mean normal stress a), equivalent plastic strain b),
and void volume density c), at the fourth drawing step.
Fig. 7. Plot showing drawing forces computed (with and without damage) and measured
over six consecutive passes.
Fig. 8. Mean normal stress in wire at three characteristic locations versus true strain.
End effects due to finite length are apparent; steady state is however approached in the
central part of the model towards the end of the drawing step, see plots b) and c). A plot
showing over six steps computed and measured drawing forces (Fig. 7) indicates a
substantial agreement of the latter with damage model results (average deviation of the
order of 7%, with a maximum of 15%).
Scatter in test results and numerical fluctuation due to discretization would not justify
seeking a substantially better agreement. On the other hand, the plain model exhibits not
only larger differences (20% on the average) but also a marked discrepancy in trend, see
step no. 6. Alternations between tension and compression along the wire axis due to the
die's complex action are apparent in the mean normal stress, accounting for incremental
damage and its propagation over consecutive steps.
Process induced damage evolution presents some peculiar aspects susceptible of explanation
in terms of interplay between hydrostatic stress and equivalent strain time and space
histories. In a nutshell, while void nucleation is controlled (on a probabilistic basis) by a
strain threshold, void growth is conditioned by the sign of hydrostatic stress. Therefore void
nucleation near wire surface, while experiencing a rather sharp increase with the first step, is
progressively curtailed by large compressive stresses due to die action which more than
offset the tensile ones due to drawing force (Fig. 8), thus bringing about void closure. See
for instance numerical results obtained on defect formation in rod drawing [Zavaliangos et
al. 1991].
On the other hand the delay in void nucleation on wire axis is more than offset by the
accumulated action due to subsequent steps, as the effect of hydrostatic tension occurring
right under the die is only in part countered by steady state compressive stresses apparent
after exit, see Fig. 8. Remark also that midway between surface and axis trends observed for
void evolution duplicate closely those pertaining to surface, as opposed to what takes place
along the axis after the fourth step (Fig. 9).
These findings point towards preferential void growth around the wire axis, thus providing
a mechanism in agreement with established experimental evidence [Goods and Brown
1979], and supported by microhardness tests performed across wire sections, see Fig. 10,
which shows evidence of peculiar radial hardness gradients, and of strain hardening
matching only to some extent the flow stress increase induced by consecutive drawing
passes. It may be worth remarking that the void volume fraction on the wire axis after six
steps only approaches the threshold for coalescence, a stage likely to be reached within a
few additional steps.
Fig. 9. Evolution of damage in multipass drawing on wire axis, outer surface and midway.
Remark crossover at second step and change of trend after fourth.
73
Fig. 10. HV microhardness values (kgf/mm²) obtained at points equispaced along radius r
over wire cross-sections (1=centre, 5=outer), corresponding to six drawing steps.
Risk of process disruption due to wire breakage may be readily estimated to a first
approximation according to the well known Warner probabilistic model [Haugen 1968],
taking ultimate stress as strength and flow stress as load. Main parameters of the relevant
distributions are defined by material properties, inclusive of damage related terms, and strain
hardening characteristics. Assuming for the sake of expediency both stresses to be
independent, normally distributed random variables (a rash statement, to put it mildly), the
maximum admissible ratio k of flow stress to ultimate stress consistent with a given risk of
tensile failure in process may be readily evaluated, and the feasibility of any given drawing
sequence rated accordingly.
Taking e.g. for both stresses a coefficient of variation of 5% over a representative wire
length, for risks at 1, 0.1, 0.01% level ratio k would amount in turn to 0.85, 0.80, 0.77.
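Under the stated assumptions the admissible ratio k solves (1 − k) = z · c · sqrt(1 + k²), where c is the common coefficient of variation and z the standard normal quantile for the given risk; a sketch reproducing the quoted figures:

```python
from statistics import NormalDist

def admissible_ratio(risk: float, cv: float = 0.05) -> float:
    """Largest flow/ultimate stress ratio k compatible with a given risk of
    tensile failure, for independent normal load and strength sharing the
    same coefficient of variation cv (Warner-type model), via bisection."""
    z = NormalDist().inv_cdf(1.0 - risk)
    lo, hi = 0.0, 1.0
    for _ in range(60):
        k = 0.5 * (lo + hi)
        # failure probability below 'risk' <=> (1 - k) > z*cv*sqrt(1 + k^2)
        if (1.0 - k) > z * cv * (1.0 + k * k) ** 0.5:
            lo = k
        else:
            hi = k
    return k

for risk in (1e-2, 1e-3, 1e-4):
    print(round(admissible_ratio(risk), 2))  # 0.85, 0.80, 0.77
```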
However at 30 m/s drawing speed a 1% risk of failure per km of wire corresponds to a
MTBF of the order of an hour, hardly appealing at factory floor level since it would entail a
downtime almost as large as production time.
The argument is quite simple; the fly in the ointment is the requirement of reliable
information about the main variance components pertaining to flow and ultimate stresses,
and about departures from the normal form of the relevant tails of distributions involved.
Extreme value distribution (which was found to fit rather well experimental data on yield
stress) may be adopted to provide better approximation; a fairly substantial body of
experimental evidence is however required if confidence intervals are not to become
meaningless because of excessive width.
6. DISCUSSION
Multipass wire drawing was found to be amenable to comprehensive numerical modelling,
capable of describing the evolution throughout the process of material properties, inclusive
of damage related aspects. Exploitation of appropriate FEM software and computational
techniques enables detailed, full field evaluation of progressively superimposed
stress/strain patterns and some of their effects on the product. Results obtained provide a
mechanism explaining how localization of defects on the wire axis takes place, and which
factors are to be reckoned with in order to keep failures under control within a given
drawing program.
Unrestricted predictive capability is however not yet achieved, owing to material property
related problems. Identification of the parameters defining initial void distribution and
evolution of damage may be performed only with substantial uncertainty on the basis of
traditional tests; and the smaller the wire gauge, the higher the testing skills required to
achieve adequate quality of results. Microhardness tests may add valuable information.
Conventional specifications are simply inadequate for identification of constitutive
equation parameters, and rather demanding tests are required to detect and evaluate
substantial variation in dynamic plastic deformation properties. As tests at high strain rate
up to large plastic deformation are typically performed in compression, extension of results
to wire drawing, entailing substantial tensile loading, is by no means an easy undertaking,
especially in consideration of the peculiarly non symmetric behavior of metals due to void
presence. Time dependent defect evolution, liable to cause delayed rupture either at rest or
under tension much lower than nominal load carrying capability, is another major issue to
be addressed.
Effects observed were rather consistent; some however apply strictly to the sample
examined, and to the relevant production lot. Coils from different heats, let alone from
different steel mills, are routinely found to perform far from uniformly in multipass drawing
machines on production floor, notwithstanding nominal material properties being well
within tolerance limits. Close cooperation between steel mill and wire manufacturer is
required in order to enhance final product quality without adversely affecting costs.
REFERENCES
Alberti, N., Barcellona, A., Cannizzaro, L., Micari, F., 1994, Prediction of Ductile Fractures
in Metal-Forming Processes: an Approach Based on the Damage Mechanics, Annals of
CIRP, 43/1: 207-210
Anand, L., Zavaliangos, A., 1990, Hot Working - Constitutive Equations and
Computational Procedures, Annals of CIRP, 39/1: 235-238
Avitzur, B., 1963, Analysis of Wire Drawing and Extrusion Through Conical Dies of Small
Cone Angle, Trans. ASME, J. Eng. for Ind., Series B, 85: 89-96
Avitzur, B., 1964, Analysis of Wire Drawing and Extrusion Through Conical Dies of Large
Cone Angle, Trans. ASME, J. Eng. for Ind., Series B, 86: 305-315
Avitzur, B., 1967, Strain-Hardening and Strain-Rate Effects in Plastic Flow Through
Conical Converging Dies, Trans. ASME, J. Eng. for Ind., Series B, 89: 556-562
Avitzur, B., 1983, Handbook of Metal Forming Processes, J. Wiley, New York
Benedens, K., Brand, W.O., Muesgen, B., Sieben, N., Weise, H.R., 1994, "Steel Cord:
Demands placed on a High-Tech Product", Wire Journal International, April, pp. 146-151.
Bray, A., Fortin, G., Franceschini, F., Levi, R., Zompi, A., 1994, Messung der
mechanischen Eigenschaften von Feinstdrähten, Draht, v. 45, n. 3: 181-188
Brown, L.M., Embury, J.D., 1973, The Initiation and Growth of Voids at Second Phase
Particles, Proc. 3rd Int. Conf. on Strength of Metals and Alloys, Inst. of Metals: 164-169
Chen, C.C., Oh, S.I., Kobayashi, S., 1979, Ductile Fracture in Axisymmetric Extrusion and
Drawing, Trans. ASME, J. Eng. for Ind., Series B, 101: 23
Chu, C.C., Needleman, A., 1980, Void Nucleation Effects in Biaxially Stretched Sheets,
Journ. Eng. Materials and Technology, 102: 249-256
Godfrey, H.J., 1942, "The Physical Properties of Steel Wire as affected by Variations in the
Drawing Operations", ASTM Trans., 42: 513-531.
Goods, S.H., Brown, L.M., 1979, The Nucleation of Cavities by Plastic Deformation, Acta
Metallurgica, 27: 1-15
Gurson, A.L., 1977, Continuum Theory of Ductile Rupture by Void Nucleation and
Growth: Part I - Yield Criteria and Flow Rules for Porous Ductile Media, Journ. Eng.
Materials and Technology, 99: 2-15
Hajare, A.D., 1995, Elasticity in Wire and Wire Product Design, Wire Industry, 5:271-3
Haugen, E.B., 1968, Probabilistic Approaches to Design, J. Wiley, New York
MacLellan, G.D.S., 1948, A Critical Survey of the Wire Drawing Theory, Journal of the
Iron and Steel Institute, 158: 347-356
Majors, H., Jr., 1955, "Studies in Cold-Drawing- Part 1: Effect of Cold-Drawing on Steel",
Trans. ASME, 72/1:37-48.
Negroni, P.O., Thomsen, E.G., 1986, A Drawing Modulus for Multi-Pass Drawing, Annals
of CIRP, 35/1: 181-183
Puttick, K.E., 1959, Ductile Fracture of Metals, Phil. Mag., 8th series, 4: 964-969
Rogers, H. C., 1960, The Tensile Fracture of Ductile Metals, Trans. Met. Soc. AIME, 218:
498-506
Siebel, E., 1947, Der derzeitige Stand der Erkenntnisse über die mechanischen Vorgänge
beim Drahtziehen, Stahl und Eisen, 66/67, 11/22: 171-180
Thomsen, E.G., Yang, C.T., Kobayashi, S., 1965, Mechanics of Plastic Deformation in
Metal Processing, Macmillan, New York
Tvergaard, V., 1982, Ductile Fracture by Cavity Nucleation between Larger Voids, J.
Mech. Phys. Solids, 30: 265-286
Tvergaard, V., 1984, Analysis of Material Failure by Nucleation, Growth and Coalescence
of Voids, Constitutive Equations, Willam, K.J. ed., ASME, New York
Tvergaard, V., Needleman, A., 1984, Analysis of the Cup-Cone Fracture in a Round Tensile
Bar, Acta Metallurgica, 32: 157-169
Yang, C.T., 1961, On the Mechanics of Wire Drawing, Trans. ASME, J. Eng. for Ind.,
Series B, 83: 523-530
Zavaliangos, A., Anand, L., von Turkovich, B.F., 1991, Towards a Capability for Predicting
the Formation of Defects During Bulk Deformation Processing, Annals of CIRP, 40/1: 267-271
Zimerman, Z., Avitzur, B., 1970, Analysis of the Effect of Strain Hardening on Central
Bursting Defects in Drawing and Extrusion, Trans. ASME, J. Eng. for Ind., Series B, 92:
135-145
Zompi, A., Levi, R., Bray, A., 1990, La misura dell'attrito nella trafilatura a freddo di fili in
acciaio di piccolo diametro, Politecnico di Torino (unpublished report)
Zompi, A., Cipparrone, M., Levi, R., 1991, Computer Aided Wire Drawing, Annals of
CIRP, 40/1: 319-332
Zompi, A., Romano, D., Levi, R., 1994, Numerical Simulation of the Basic Wire Drawing
Process, Basic Metrology and Application, Barbato, G. et al. eds., Levrotto & Bella, Torino,
187-192
H.K. Tonshoff
University of Hannover, Hannover, Germany
C. Bode and G. Masan
IPH-Inst. f. Integrierte Produktion Hannover gGmbH, Hannover,
Germany
KEYNOTE PAPER
2. MEASURING PLANNING
Measuring planning is based on the workpiece geometry and the inspection plan. The
measuring plan guides the operator through all steps of the measuring process. If the
desired CMM is integrated in a production line, measuring planning should be done
offline.
AUGE did basic research on measuring planning [2]. GARBRECHT and EITZERT analyzed
how the placing of measuring points affects the reproducibility of measuring results [3,4].
KRAUSE et al. developed a system for technological measuring planning in order to achieve
reproducible measuring results [5]. This system takes into account fixed probes and
standard features like planes, cylinders, circles, etc. Additional devices for CMMs (e.g.
rotary tables or indexable probes), freeform surfaces, workpiece alignment and fixturing
were not considered.
(Fig. 1, diagram omitted: measuring planning comprises measuring technology, probe cluster, and workpiece alignment and fixtures.)
operation scheduling. Based on this knowledge we developed a process plan for measuring
planning and methods to compute probe cluster and workpiece alignment taking into
account all kinds of geometry and CMM (Fig. 1).
3. COMPUTATION OF PROBE CLUSTER AND WORKPIECE ALIGNMENT
Determination of probe cluster and workpiece alignment ensures that at least one probe of
the cluster can approach a specific measuring point of the workpiece. Therefore, the
computation of the probe cluster depends on the measuring points and the workpiece
geometry. Besides, there is a close relationship between probe cluster design and workpiece
alignment. A redesign of the probe cluster requires a change of the alignment and vice
versa.
The probe cluster especially has a significant impact on the efficiency of the measurement. For example, the set-up time required is proportional to the number of probes. If probes are changed during measurement, each change lengthens the running time. Since a high number of probes decreases the quality of the entire probe cluster, one main objective of measuring planning is to reduce the number of required probes.
3.1 ACCESS AREAS OF A WORKPIECE
Each measuring feature has a distinct property called its access area. Within the access area at least one probe can approach the measuring element without colliding with the workpiece. An access area describes all possible probe orientations for touching the assigned measuring point.
The theoretical definition of an access area can
be given by modelling a straight probe with
infinitesimal diameter.
Definition 1:
An access line Rr of a measuring point is a half-line starting from the measuring point and not intersecting the workpiece.
Every access line represents one possible probe orientation. Consequently an access area includes all access lines of a measuring point (Def. 2). The so-called set-form of an access area allows any operations defined in set theory. (Fig. 2: Access area assigned to a measuring point)
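The set-form above can be sketched by discretising probe orientations into a finite set; the orientation labels and blocking sets below are illustrative assumptions, not taken from the paper:

```python
# Discretise probe orientations on the unit sphere; an access area is
# then the subset of orientations whose access line does not hit the
# workpiece (Def. 1).  The set-form (Def. 2) makes collision-free
# probing a plain set operation: one probe orientation serves several
# measuring points iff the intersection of their access areas is
# non-empty.

ORIENTATIONS = {"+x", "-x", "+y", "-y", "+z"}  # coarse toy discretisation

def access_area(blocked):
    """Access area of a measuring point = all orientations not blocked
    by the workpiece (toy model with an infinitesimally thin probe)."""
    return ORIENTATIONS - set(blocked)

def common_probe(*areas):
    """Orientations usable for every given measuring point."""
    result = set(ORIENTATIONS)
    for a in areas:
        result &= a
    return result

# Hypothetical blockings: a point on a top face vs. one in a side pocket.
top = access_area(blocked={"-y"})
pocket = access_area(blocked={"+x", "-x", "+z"})
print(common_probe(top, pocket))  # → {'+y'}
```

With a fine discretisation the same set intersections approximate the continuous access areas of the paper.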
⋂ (i=1..n) a_i ≠ ∅
To design an optimal probe cluster, the number of probing groups must be minimized. A
capable algorithm to perform grouping was presented by CARPENTER/GROSSBERG and
MOORE [3,5]. Probing groups are created by joining the 'closest' access areas. If two access
areas intersect, they are close. We expressed the 'distance' A between a probing group G
and an access area z as follows:
Δ(G, z) = A1 − A2 + A3/(16π)      (1)
Primarily, the distance is determined by A1, the number of elements of the probing group G. Since the algorithm repeats grouping, A2 = 1 must be subtracted if the access area z is already an element of G. Secondarily, the surface measure A3 of the intersection of G and z is considered.
As MOORE showed, the distance scale is crucial for the behaviour of the algorithm. We showed that the scale described above satisfies the stability conditions proposed by MOORE [7]. Additionally, BODE proved that the algorithm requires linear time [6].
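The grouping step can be sketched as follows. Note the assumptions: the exact form of Eq. (1) is a reconstruction from a garbled source, access areas are modelled as finite sets of orientations with A3 taken as the cardinality of the intersection, and joining the group with minimum Δ is a guess at the 'closest' criterion:

```python
import math

def group_access_areas(areas):
    """Greedy grouping of access areas into probing groups.

    Each access area is a set of discretised probe orientations; two
    areas are 'close' when their intersection is non-empty.  The
    distance follows the reconstructed Eq. (1):
        Delta(G, z) = A1 - A2 + A3/(16*pi)
    with A1 = number of elements of group G, A2 = 1 if z is already an
    element of G (0 otherwise), A3 = measure of the intersection.
    """
    groups = []  # each: {"members": [...], "orientations": set(...)}
    for idx, z in enumerate(areas):
        best, best_delta = None, None
        for g in groups:
            common = g["orientations"] & z
            if not common:
                continue  # non-intersecting areas are never joined
            a1 = len(g["members"])
            a2 = 1 if idx in g["members"] else 0
            a3 = len(common)
            delta = a1 - a2 + a3 / (16 * math.pi)
            if best_delta is None or delta < best_delta:
                best, best_delta = g, delta
        if best is None:
            groups.append({"members": [idx], "orientations": set(z)})
        else:
            best["members"].append(idx)
            best["orientations"] &= z
    return groups

areas = [{"+z", "+x"}, {"+z", "+y"}, {"-x"}]
groups = group_access_areas(areas)
print(len(groups))  # → 2: areas 0 and 1 share "+z"; area 2 needs its own probe
```

Each resulting group can be served by one probe oriented inside the group's common access area, so the number of groups bounds the number of required probes.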
3.3 PROBE ORIENTATIONS
In general, any type of probe or CMM has unusable probe orientations. A general approach to distinguish the unusable from the preferred probe orientations is a quality function. The quality function assigns to each probe orientation a value representing its usability. Since the value of a probe orientation is bound to the CMM configuration, the quality function is expressed in the machine's coordinate system.
Especially when using indexable probes, unusable probe orientations become important. An indexable probe cannot adjust to probe orientations inside a cone around the machine's z-axis. This cone contains the sleeve to which the probe body is mounted.
Preferred probe orientations usually correspond (or are close) to the axes of the CMM. These probe orientations are easy to maintain during the measurement.
BODE developed quality functions for various
types of inspection devices [2].
Quality functions are conveniently developed using the spherical coordinate system K{φ, θ, r = 1}. The following example was designed for a parallel-system CMM equipped with indexable probes (Eq. 2, Fig. 4: Plot of quality function). The indexable probe imposes unavailable orientations as described above.
(2)
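The body of Eq. (2) is not recoverable from the source; the sketch below is therefore only a plausible quality function that reproduces the two stated properties (a forbidden cone around the machine's +z axis, preference for axis-aligned orientations). The cone half-angle and the axis-preference term are assumptions:

```python
import math

def quality(phi, theta, cone_half_angle=math.radians(30)):
    """Toy quality function on the unit sphere K{phi, theta, r = 1}.

    Orientations inside a cone around the machine's +z axis (the probe
    sleeve) are unusable; orientations close to the machine axes are
    preferred.  Returns 0 for unusable orientations, up to 1 for the
    best ones.
    """
    if theta < cone_half_angle:       # inside the sleeve cone: unusable
        return 0.0
    # preference for axis-aligned orientations: peaks at phi = k*pi/2
    axis_bonus = abs(math.cos(2 * phi))
    return 0.5 + 0.5 * axis_bonus

print(quality(0.0, 0.1))           # → 0.0 (inside the sleeve cone)
print(quality(0.0, math.pi / 2))   # → 1.0 (along the +x axis)
```

In measuring planning such a function would be evaluated for each candidate probe orientation of a probing group, and the orientation with the highest quality retained.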
Commonly, teach-in programming takes about 2 hours. Although teach-in programming requires less time, the necessity of a real workpiece devalues this procedure. Besides, teach-in programming hinders production taking place on the CMM.
Both conventional offline programming and teach-in programming set up a probe cluster
consisting of seven probes (Fig. 5).
To utilize the new methods, 28 access areas were directly derived from the CAD model. Six more access areas than features were created for cylindrical features. The construction of access areas and determination of workpiece alignment and probe cluster took about one hour. The computed workpiece alignment was similar to the previous one, but by a rotation of 180° the number of required probes was reduced to four. Thus it is obvious that much time is saved by applying the new methods (Table 1, Fig. 5).
teach-in | conventional offline progr. | offline progr. supported by new methods
2 h      | 10 h                        | 2 h
1.5 h    | 8 h                         | 1 h
20 h     | 6 h                         | 6 h
5. CONCLUSIONS
The examination of the currently performed measuring planning revealed a time-consuming procedure. This time consumption is mainly caused by the determination of probe cluster and workpiece alignment.
To reduce the time required for measuring planning we developed three consecutive
methods. The efficiency of these methods has been proven in the automotive industry by
applying these methods in the development of a car body part. The resulting effect was that
a high proportion of time required during the measuring planning process has been saved.
The modified sequence outlined above will have an impact on the further development of
measuring planning. After integration of the methods into CAD systems the designer is
able to validate his work with respect to quality assurance [10]. This may lead to integrated
Computer Aided Measuring Planning systems (CAMP).
6. REFERENCES
1. Hahn, H.: Wirtschaftlichkeit von Mehrkoordinaten-Meßgeräten mit unterschiedlicher Automatisation. Der Stahlformenbauer, Vol. 8 (1991) No. 6, p. 77/78
G.F. Micheletti
KEYNOTE PAPER
ABSTRACT:
The role and influence of eco-design on new product conception is investigated
along four main guidelines as Life Cycle: Analysis, Engineering, Assessment,
Development concept.
Friendly attitudes are the basic ethical and psychological factors. New design strategies require "ad hoc" methods and solutions to reach the improvements. Which are the foreseeable reactions of the entrepreneurs? Besides the CIM role, a more profitable approach could be envisaged: concurrent design, supported by eco-auditors and eco-experts within the company, in order to face the "eco-labelling" prescriptions.
The total cost of the Life Cycle shall be included in the industrial costs in view of the heavy social costs that will be imposed for waste collection.
A prospect suggests a practical classification.
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
1. INTRODUCTION
I would like to start, in order to better investigate the role and the influence of eco-design on new product conception, by taking into consideration four main guidelines along which Life Cycle Research and Development should enhance its achievements:
- the Life Cycle Analysis, which has been considered as a "balance energy-environment", taking from the more conventional energy balance the basic study lines, owing to the role of the environment as an essential parameter in each phase of the production process;
- the Life Cycle Engineering, which has been designated not only as "engineering" but as the "art of designing" a product, characterised by a proper Life Cycle derived from a combined choice of design, structural materials, treatments and transformation processes;
- the Life Cycle Assessment, which is the instrument able to visualise the consequences of the choices on the environment and on the resources, assuring the monitoring and the control not only of products and processes, but also of the environmental fallout;
- the Life Cycle Development Concept, which is centred on the product and summarises the techniques to be adopted in the design phase, keeping in mind from the very first instant of the creative idea the fallout that will influence the environment both during the period of the product's use and in the following phase, when the functional performance no longer exists but the product still exists, to be dismantled or converted.
A friendly attitude
Let me add one more ethical and psychological factor, where some basic friendly attitudes are emphasised:
- A product must become a friend of the environment;
- A process should itself behave as a friend of the environment;
- A distribution to the market will act as a friend of the environment;
- A use of the product shall assure its behaviour as a friend of the environment;
- A re-use and a final dismantling of the components will finally validate all the previous steps as friends of the environment.
In contrast, let me add the unfriendly attitudes, enemies against which the environment asks to be defended:
- non-biodegradable and toxic materials;
- incompatible treatments;
- non-destroyable components;
- non-dismantlable joints and groups.
All the above positive and negative factors form the basis for the increasing standardised prescriptions, starting from the famous "State of the Art" of Life Cycle Engineering design, whose idea-design subsequently spread all over the world on the occasion of the Earth Summit (Agenda 21, Action Program of the United Nations) held in Rio in 1992, where the responsibilities, especially of the more developed countries, were pointed out for a "sustainable global development".
As a direct consequence the ISO 14001 standard has been prepared, becoming operative during the present year 1996 as an Environmental Management Standard, to which specific standards for industry have to be added, taking into account the more general ones, to give space to an Environmental Management System (EMS).
For instance, within the frame of the Life Cycle Engineering/Design, several
more relevant points have been identified:
- to reduce the quantity of materials used for industrial production, through rationalisation and simplification in design and dimensioning, responding to an analysis of the effects that every type of material induces on the environment, while at least preserving, and possibly improving, the functionality of the products;
- to centre the study not only on the production process (as normally happens), but on the whole effects of the product's existence, by using the resources in the best way together with the protection of the environment.
Nevertheless there is still an evident gap between rules, expectations and realities.
I would like to introduce some concrete considerations.
New Design strategies
It is clear that new design strategies still have to be invented and tested, as well as new "ad hoc" methods and instruments, properly addressed to the features, that could come from universities and from international initiatives.
- the environmental policy, having recently gained great importance, was not sufficiently considered versus the standards already established within the CIM strategy.
This is why the interpretation that today seems more substantial within a CIM framework is as follows:
- CIM must be regarded as an important means for the factory, where decisions remain the responsibility of people, with the help of computers;
- the organisation of an enterprise now induces a more flexible configuration, enabling easier dynamics in the companies;
- the management sets out to adopt engineering techniques able to address the problems of the different factory areas in a punctual way, using the specialised co-operation coming from different and complementary competencies, including environmental labelling;
- the product design, if and when Life Cycle criteria are adopted, can be carried forward in a global CIM way towards the subsequent steps such as manufacturing, marketing and use and, even beyond, can allow the identification of ways of dismantling and reusing.
A more profitable approach
A more profitable approach to the new environmental problems could be offered by the "concurrent engineering strategy", showing better integration of the results and a fruitful stimulation, through the establishment in the company of groups involving experts of, and from, different areas, sustained by their personal experience and inclined to investigate together many issues of different nature.
The expression "concurrent or simultaneous engineering", in the actual case of designing, should properly be indicated as "concurrent design": a denomination that has been accepted at an international level since 1989 (being adopted in the USA for the first time) to define a solution, very complex and ambitious, to support the competitiveness of industries, the improvement of quality, the reduction of times and the observance of the environmental rules.
This allows newly conceived Working Groups to be put in action, including experts such as designers, production engineers, quality managers and market operators, plus experts in the area of the eco-system, with the possibility for them to be active from the first moment in which a new product is born, avoiding in this way the subsequent corrections that lead to time delays.
So the companies should be ready to welcome internal (and/or external) environmental experts in their teams as eco-auditors and eco-experts, in order to assure, from the design phase and during the set-up of the operation cycle, the criteria with which we are dealing here.
Types of impact and criteria of impact include: consumption of resources; consumption of soil (erosion; damage to ecosystems, landscape etc.); human toxicity; eco-toxicity; global effects on the atmosphere; other effects.
CONCLUSION
Each enterprise shall select the items that pertain to its production; the basic recommendation is to start the analysis very seriously.
Some examples are already available that give useful demonstrations of how some industrial sectors started their respective "Life Cycle Assessment" (some are reported in the Bibliography); other sectors will take their first steps.
One "must" is addressed to each company: every delay in facing the environmental obligations is not a benefit but a penalisation and a lack of ethics (whose price will be paid later).
The Life Cycle already has an important role in industrial management.
Eco-management is showing the importance of parameters little considered until yesterday, brought into evidence today, and tomorrow belonging to the general sensitivity of the people.
Producers and users are involved together in respecting the ecological problems that, with the finished product, come back to the single components.
What the Life Cycle imposes requires a co-operation involving "concurrent engineering" together with "eco-engineering", which goes far beyond the current laws on the various emissions and stimulates studies for modelling causes, behaviours and effects.
Until now the laws have concentrated on heavy risks, accidents and contaminations, but it will be necessary to prevent, through design, in such a way as to eliminate the causes of risks and accidents by means of the proper technologies; to this purpose it will be more and more necessary to create and update a database of materials and components with their chemical and physical properties, toxicological and eco-toxicological properties, biodegradability etc. In the meantime many perspectives and hopes are connected with the introduction of clean technology.
Bibliography
1. "Concurrent Engineering", G. Sohlenius, Annals of CIRP, Vol. 41/2/1992
7. "Life-Cycle Engineering and Design", Leo Alting, Jens Brobech Legarth, Annals of CIRP, Vol. 44/2/1995
15. "Plastics Waste - Recovery of Economic Value", J. Leidner, Ed. M. Dekker, N.Y., 1981.
16. "Le materie plastiche e l'ambiente", edited by AIM, Ed. Grafis, Bologna, 1990
17. "Recupero post-consumo e riciclo delle materie plastiche", F. Severini, M.G. Coccia, Ed. IVR, Milano, 1990.
18. "Recycling and Reclaiming of Municipal Solid Wastes", F.R. Jackson, Noyes Data Corp., 1975.
19. "Resource Recovery and Recycling Handbook of Industrial Wastes", M. Sittig, Noyes Data Corp., 1975.
20. "Problematiche nel riciclo dei materiali plastici", P. La Mantia, Macplas 116, p. 67, 1990.
21. "Macromolecules", H.G. Elias, Plenum Press, N.Y., 2nd Ed. 1994, Vol. II, p. 858.
22. "Recycling of Plastics", W. Kaminsky, J. Menzel, H. Sinn, Conserv. Recycling 1, 91, 1976.
23. "Sorting of Household Waste", A. Skordilis, Ed. M. Ferranti, G. Ferrero, Elsevier Appl. Sci. Publ., London, 1985.
24.
25. "The LCA Sourcebook, a European Business Guide to Life Cycle Assessment", SustainAbility, SPOLD Ltd., 1993.
26. "The Fiat Auto Recycling Project: Current Developments", S. Di Carlo, R. Serra, Fiat Auto, Auto Recycle Europe '94.
27. "The Life Cycle Concept as a Basis for Sustainable Industrial Production", L. Alting, L.
[Figure: use and recycling, national and global problems (Uni Erlangen, Prof. Feldmann)]
1. INTRODUCTION
Machining is one of the oldest processes for shaping components and, due to its versatility and precision, achieved through continual innovation, research and development, has become an indispensable process in manufacturing industry. In more recent years machining has led the way towards the 'revolution' in modern computer-based manufacturing through developments in computer controlled machining systems and
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
flexible automation. These modern automated systems have been forecast to increase the proportion of the total available production time a component spends being machined from 6% to 10% in conventional (manual) systems to 65% to 80%, making machining more important than ever before [1, 2]. As a consequence, the long recognised need for improved and reliable quantitative predictions of the various technological performance measures, such as the forces, power and tool-life, in order to optimise the economic performance of machining operations has also become more pressing than ever before if the full economic benefits of the capital intensive modern systems are to be achieved [3]. This pressing need for reliable quantitative performance data has recently been re-emphasised in a CIRP survey [4].
The estimation or prediction of the various technological performance measures
represents a formidable task when the wide spectrum of practical machining operations
such as turning and milling as well as the numerous process variables such as the tool and
cut geometrical variables, speed and material properties for each operation are considered
[3]. Furthermore the establishment of comprehensive machining performance data,
preferably in the form of equations, is an on-going task to allow for new developments in
tool designs, materials and coatings as well as workpiece materials. Alternative approaches
to machining performance prediction have been noted in the research literature and
handbooks although the dearth of such data remains. It is interesting to note that a new
CIRP Working Group on 'Modelling of Machining Operations' has been established to
investigate the alternative modelling approaches to performance prediction [5, 6].
In this paper the alternative approaches to force, torque and power prediction for the
important face milling operations will be reviewed and compared. Particular attention will
be placed on the 'Unified Mechanics of Cutting Approach' developed in the author's
laboratory [3] and the establishment of equations for these performance measures essential
for developing constrained optimisation analysis for selecting economic cutting conditions
in process planning [7, 8].
These techniques enable a larger number of variables to be included for a given amount of
testing and also provide estimates of the scatter or variability at defined levels of
confidence [7, 9, 10].
Recent reviews of the forces in milling operations have shown that although empirical equations for some of the force components and the power were reported for peripheral milling, very few equations were found for the popular face milling operations [11, 12]. Some earlier work by Roubik [13] using a planetary-gear torque-meter has shown that the tangential force Ftang in face milling was dependent on the feed per tooth ft and the axial depth of cut aa when tested independently, as given by eqs. (1) and (2), while Doolan et al [14], using a 'fly cutter' and a strain gauge mounted on the tooth flank, used multi-variable regression analysis to arrive at eq. (3) for the tangential force Ftang in terms of ft, aa and the cutting speed V
Ftang = Kf · ft^0.766      (1)
Ftang = Kd · aa^0.940      (2)
Ftang = K · ft^0.745 · aa^0.664 · V^0.049      (3)
where Kf, Kd and K were empirical constants.
Interestingly, in a recent comprehensive Chinese handbook [15] (drawing on CIS data and handbooks) an empirical equation for the tangential force which allows for the majority of influencing variables in peripheral, end and face milling has been presented, as shown below
Ftang = K · aa^xa · ft^xf · ar^xr · Nt / (D^yd · N^yn)      (4)
where K, xa, xf, xr, yd and yn are 'empirical' constants and aa, ar, ft, Nt, D and N are the axial and radial depths of cut, the feed per tooth, the number of teeth, the cutter diameter and the spindle speed. The values of the empirical constants have been given for a variety of
common work materials and tool materials. Furthermore this equation enabled the torque Tq and the power P to be evaluated from the known cutter radius (D/2) and peripheral cutting speed V, i.e.
Tq = Ftang · (D/2),      P = Ftang · V      (5, 6)
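The plumbing of eqs. (4) to (6) can be sketched as below. All numeric values are placeholders, not handbook data: the real constants are tabulated per work/tool material pair, and units must be kept consistent (e.g. D in metres if the torque is wanted in N·m):

```python
def tangential_force(K, aa, ar, ft, Nt, D, N, xa, xf, xr, yd, yn):
    """Eq. (4): empirical tangential force.
    K, xa, xf, xr, yd, yn are material-dependent handbook constants;
    aa, ar = axial/radial depths of cut, ft = feed per tooth,
    Nt = number of teeth, D = cutter diameter, N = spindle speed."""
    return K * aa**xa * ft**xf * ar**xr * Nt / (D**yd * N**yn)

def torque_and_power(F_tang, D, V):
    """Eqs. (5, 6): Tq = Ftang*(D/2), P = Ftang*V,
    with V the peripheral cutting speed."""
    return F_tang * (D / 2.0), F_tang * V

# Placeholder constants and conditions, for illustration only:
F = tangential_force(K=100.0, aa=3.0, ar=40.0, ft=0.15, Nt=6,
                     D=0.1, N=500.0, xa=1.0, xf=0.75, xr=1.0,
                     yd=1.0, yn=0.1)
Tq, P = torque_and_power(F, D=0.1, V=2.5)
```

The power-law form is what makes eq. (4) convenient for the constrained optimisation analyses mentioned below: taking logarithms turns it into a linear expression in the unknown exponents.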
Thus, despite the very large number of variables considered, comparable equations for the three practical force components, i.e. the average feed, side and axial forces, were not available.
In general these empirical equations provided estimates for the average forces,
torque and power but did not estimate the fluctuating forces in face or other milling
operations. Furthermore few equations have been found for all the average force
components so necessary for machine tool and cutting tool design, vibration stability
analysis and constrained optimisation analyses for economic selection of milling conditions
in process planning.
Despite the many disadvantages of the empirical approach noted above, an important advantage is the form of the empirical equations established, which greatly enhances the development of constrained mathematical optimisation strategies for many operations [7, 8].
number of teeth Nt, axial depth of cut aa and feed per tooth ft can be readily interpreted (i.e. linear increases in the forces with increases in these variables), the effects of the tooth cutting edge angle κr, radial depth of cut ar, cutter axis 'offset' u (from the workpiece centre-line) and the cutter diameter D are not always obvious in view of the complex trigonometric functions involved. In addition, the effects of the tooth normal rake angle γn and inclination angle λs as well as the cutting speed V are not explicitly expressed, since these are embedded in the modified mechanics of cutting analysis 'area of cut' and 'edge force' coefficients, i.e. the Kc's and Ke's in the equations. Thus despite the comprehensive nature of the predictive model and the average force and torque equations, computer assistance is required to study the effects of a number of variables on the average forces. The effects of these operation variables on the average feed force (Fx)avg, side force (Fy)avg, axial force (Fz)avg and torque (Tq)avg are shown in Fig. 1, where the trends have been qualitatively and quantitatively verified [12, 20].
Fig. 1 Typical average force and torque trends in face milling (zero tooth run-out).
(7)
(8)
(9)
(10)
(11)
θc = sin⁻¹((ar − D + 2u)/D) + sin⁻¹((ar − 2u)/D)      (12)
(13, 14)
Despite the generic and comprehensive nature of the 'unified mechanics of cutting approach' there is a need to establish equations for the average forces, torque and power of the type used in the traditional 'empirical' approach discussed above. Such equations would explicitly show the effect of each operation variable, of use in machine tool and cutting tool design. Furthermore the simpler form of equations is admirably suited to the development of constrained optimisation analyses and strategies for selecting economic cutting conditions in process planning, as noted above [7, 8].
From Fig. 1 it is apparent that all the predicted trends are either independent of the operation variables or vary monotonically, suggesting that empirical-type equations can be fitted to the model predictions using multi-variable linear regression analysis of the log transformed data. Since γn and λs can be positive or negative, these variables have been expressed as (90° − γn) and (90° − λs) to ensure the logarithm of a positive number is used in the regression analysis. Similarly the offset u has been incorporated in the radial depths of cut about the cutter axis, Ri (= (ar/2) + u) and Ro (= (ar/2) − u). In addition it is noted in Fig. 1(e) that there is a limiting negative offset u (with the cutter axis closer to the tooth entry than exit) beyond which the average feed force (Fx)avg is always positive.
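The log-transform fitting procedure can be sketched as follows. For clarity this sketch fits a single variable with plain least squares; the paper's multi-variable case stacks one log-column per operation variable, and the data here are synthetic stand-ins for the rigorous model predictions:

```python
import math

def fit_power_law(xs, fs):
    """Fit F = K * x**e by linear regression on log-transformed data:
    log F = log K + e * log x.  Returns (K, e)."""
    lx = [math.log(x) for x in xs]
    lf = [math.log(f) for f in fs]
    n = len(xs)
    mx = sum(lx) / n
    mf = sum(lf) / n
    # ordinary least-squares slope and intercept on the log-log data
    e = (sum((a - mx) * (b - mf) for a, b in zip(lx, lf))
         / sum((a - mx) ** 2 for a in lx))
    K = math.exp(mf - e * mx)
    return K, e

# 'Predictions' generated from an exact power law are recovered exactly:
xs = [0.5, 1.0, 2.0, 4.0]
fs = [3.0 * x ** 1.1 for x in xs]
K, e = fit_power_law(xs, fs)
print(round(K, 6), round(e, 6))  # → 3.0 1.1
```

When the model predictions only approximately follow a power law, the same regression yields the best-fitting exponents, and the residuals give the percentage deviations discussed below in connection with Fig. 2.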
Using the predictive model and data base for milling S1214 free machining steel with a TiN coated carbide tooth cutter, an extensive numerical study involving 5184
(Fx)avg = Kx (90° − γn)^egx (90° − λs)^elx κr^ekx D^edx Nt^enx Ri^eix Ro^eox aa^eax ft^efx V^evx      (15)
(Fy)avg = Ky (90° − γn)^egy (90° − λs)^ely κr^eky D^edy Nt^eny Ri^eiy Ro^eoy aa^eay ft^efy V^evy      (16)
(Fz)avg = Kz (90° − γn)^egz (90° − λs)^elz κr^ekz D^edz Nt^enz Ri^eiz Ro^eoz aa^eaz ft^efz V^evz      (17)
(Tq)avg = Kq (90° − γn)^egq (90° − λs)^elq κr^ekq D^edq Nt^enq Ri^eiq Ro^eoq aa^eaq ft^efq V^evq      (18)
Table 1 The constants and exponents of the fitted empirical-type equations (S1214 work material and TiN coated carbide tool).

         | K          | eg    | el    | ek     | ed     | en    | ei    | eo     | ea    | ef    | ev
(Fx)avg  | 1.24x10^5  | 4.11  | -1.04 | 0.394  | -0.945 | 0.995 | 1.11  | -0.164 | 1.028 | 0.698 | 0.256
(Fy)avg  | 1.2946     | 1.328 | 0.069 | -0.078 | -0.913 | 0.999 | 0.376 | 0.537  | 0.988 | 0.849 | 0.071
(Fz)avg  | 9.29x10^-11| 3.98  | 3.52  | -1.66  | -1.01  | 0.991 | 0.518 | 0.495  | 0.852 | 0.669 | 0.279
(Tq)avg  | 7.25x10^-4 | 1.35  | 0.064 | -0.077 | -0.006 | 0.999 | 0.507 | 0.500  | 0.988 | 0.848 | 0.071
From a study of the exponents in Table 1 it can be deduced that the effects of the different operation variables on the average forces and torque are generally consistent with the trends in Fig. 1. In addition, the predictive capability of the approximate empirical-type equations with respect to the rigorous model predictions has been assessed in terms of the percentage deviation (e.g. %dev = 100 × (Empirical pred. − Model pred.)/Model pred.). The histograms of the percentage deviations in Fig. 2 show that the average %dev is very close to zero for all force components and torque, with the largest scatter, from −10.8% to 13.5%, occurring for the average feed force (Fx)avg. Thus the predictive capability of the simpler empirical-type equations can be considered to be very good.
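The assessment statistic is straightforward to compute; the numbers below are made-up illustrative pairs, not values from the study:

```python
def percentage_deviations(empirical, model):
    """%dev = 100*(Empirical pred. - Model pred.)/Model pred. for each
    prediction pair, the per-case statistic histogrammed in Fig. 2."""
    return [100.0 * (e - m) / m for e, m in zip(empirical, model)]

# Illustrative prediction pairs (empirical-type eq. vs. rigorous model):
devs = percentage_deviations([0.98, 1.05, 2.10], [1.00, 1.00, 2.00])
mean_dev = sum(devs) / len(devs)
print([round(d, 1) for d in devs], round(mean_dev, 2))  # → [-2.0, 5.0, 5.0] 2.67
```

A mean deviation near zero with a narrow spread, as reported for all four fitted quantities, indicates that the power-law equations track the rigorous model closely over the studied variable ranges.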
[Histogram data: mean %dev = 0.1%, 0.01%, 0.12% and 0.01% for panels (a) to (d).]
Fig. 2 Histograms of the percentage deviations between equation and model predictions.
[Histograms of percentage deviations for (Fx)avg, (Fy)avg and (Fz)avg, with means 6.69%, 1.39% and 1.06%.]
5. CONCLUSIONS
There is a pressing need for reliable quantitative predictions of the forces, torque,
power and tool-life in face milling, and machining in general, to optimise machining
conditions and gain maximum economic benefits of the increased productive times in
modern computer based manufacturing.
The traditional 'empirical' approach to force prediction is laborious, expensive and
primarily considers the tangential force and power for some variables although the
'empirical' equations are most suitable for economic optimisation. The semi-empirical
'mechanistic' approach is more comprehensive but does not consider all the fluctuating and
average force components and relies on some special milling tests and computer assistance
for quantitative predictions. The 'unified mechanics of cutting approach' is the most
comprehensive and generic approach which allows for all the forces, torque and power;
encompasses the 'mechanistic' approach, but results in complex equations. Nevertheless
this generic approach can be used to develop comprehensive 'empirical-type' equations for
use in CAD/CAM applications and economic optimisation.
The importance of fundamental cutting analyses and data bases in predictive
modelling of machining operations is highlighted in this work.
ACKNOWLEDGMENTS
The authors wish to acknowledge the financial support offered by the Australian
Research Council (ARC) in this and other projects run in the authors' laboratories.
REFERENCES
1. Merchant, M.E., "Industry-Research Integration in Computer-Aided Manufacturing", Int. Conf. on Prod. Tech., I.E. Aust., Melbourne (1974).
2. Eversheim, W., König, W., Weck, M. and Pfeifer, T., Tagungsband des AWK'84, Aachener Werkzeugmaschinen-Kolloquium, (1984).
3. Armarego, E.J.A., "Machining Performance Prediction for Modern Manufacturing", 7th Int. Conf. Prod./Precision Engineering and 4th Int. Conf. High Tech., Chiba, Japan, K52, Keynote paper, (1994).
4.
KEYWORDS:
ABSTRACT:
Reaming is a process that is widely used in industry with very little theoretical
modelling being carried out. In this paper the cutting action of the reaming operation is presented
by explaining the thrust and torque involved. A model based on an orthogonal theory of
machining and variable flow stress is presented in order to predict the thrust and torque involved in
the cutting process. A comparison of the predicted and experimental results gives good correlation
and thus indicates that the procedure used is viable.
1.0 INTRODUCTION
Reaming is an internal machining operation which is normally performed after drilling to
produce holes with better surface finish and high dimensional accuracy. A reamer consists
of two major parts. The first part is the chamfer length for material removal and the second
part, the helical flute section, carries out the sizing operation of the hole. During a reaming
operation the chamfer will first remove the excess material left from the drilling operation
which is then followed by the helical flutes which size the hole precisely and produce a
good surface finish. In analysing the thrust force and cutting torque in a reaming operation
the first step will consider the action of the chamfer length. To study the forces acting on
the chamfer, it is necessary to investigate the cutting action of a single tooth in the reamer
(Figure 1). The investigation of the single tooth showed that the cutting edge represented a
[Figure 1: Reamer geometry: helical part, chamfer length, section A-A.]
An experimental investigation was carried out to observe the thrust, the torque and the chip formation during the reaming operation. The experiments involved reamer sizes of φ10.0 mm to φ16.0 mm with varying drill sizes (φ9.5 mm to φ15.5 mm) for each reamer. A total of 18 experimental observations were obtained for a plain carbon steel with a chemical composition of 0.48%C, 0.021%P, 0.89%Mn, 0.317%Si, 0.024%S, 0.07%Ni, 0.17%Cr, 0.02%Mo, 0.04%Cu, 0.03%Al. An example of the experimental conditions is as follows: drilled hole size = φ9.5 mm or φ9.65 mm; reamer = φ10.0 mm; rotational speed = 140 rpm; feed-rate = 0.252 mm/r. The experimental results were measured using a Kistler 9257 two component force/torque dynamometer and Kistler 5001 charge amplifiers connected to a PC-based data acquisition system using an RTI815 A/D card and an 80386 computer with accompanying software. A total of 1000 data points for each component were collected over a 30 second period and an example of the results obtained is shown in Figure 2. From this figure it is clear that the thrust force increases quickly to its maximum value when the chamfer length is in full contact with the workpiece. The thrust force remains fairly constant throughout the cutting period until the reamer exits the hole at the
Figure 2. Thrust (N) and torque (Ncm) versus cutting time (sec).
Tm = Tc + Tb                                                              (1)

where Tm is the total torque in Ncm, Tc is the cutting torque in Ncm due to the chamfer length and Tb is the rubbing torque in Ncm due to the rubbing effect of the helical part of the reamer. It is expected that Tc will be constant during the reaming process while Tb will change with specimen length or hole depth and the frictional condition at the interface for a given cutting condition. In order to verify the experimental observations another experiment was carried out to compare the thrust and torque obtained when reaming a φ9.65mm hole created by a larger drill. In this case the amount of material removed is less than in the previous cutting condition because a larger drill was used. The results obtained indicate a reduction in the thrust and torque, with the results showing 55N for the thrust force and 85Ncm for the cutting torque, Tc. This is a reduction of 15N for the thrust and 20Ncm for the torque, which is expected due to less material being removed. There was also a difference in the magnitude of the maximum rubbing torque attained in the tests. The difference in magnitude is 53Ncm, with the test for the smaller width showing a higher rubbing component of 135Ncm compared to 82Ncm for the larger width. This result is interesting in that the smaller width indicates a higher rubbing component, which could be due to the variation in hole size arising from the smaller width and the smaller amount of material removed during cutting. The helical part then carries out the extra material removal.
Since the material removal in reaming is seen to be similar to the turning operation a
model is developed to predict the thrust and torque in reaming taking into account the
variable flow stress orthogonal machining theory developed by Oxley and co-workers [1].
3.0
To predict the thrust and cutting torque it is essential to know the actual geometry of the cutting edge of a reamer. The cutting edge can be modelled as an oblique tool with a specific geometry [3]. Once this geometry is known, then for a single straight cutting edge in oblique machining, the method uses the experimental observations (i) that for a given normal rake angle and other cutting conditions, the force component in the direction of cutting, Fc, and the force component normal to the direction of cutting and machined surface, FT, are nearly independent of the cutting edge inclination angle, i, and (ii) that the chip flow direction, ηc, satisfies the well known Stabler's flow rule (ηc = i) over a wide range of conditions. It is assumed that Fc and FT can be determined from a variable flow stress orthogonal machining theory by assuming a zero inclination angle irrespective of its actual value, and with the rake angle in the orthogonal theory taken as the normal rake, αn, of the cutting edge. The tool angles associated with the cutting edge of the reamer, together with the predicted values of Fc and FT and the values of ηc and i, are then used to determine FR, the force normal to Fc and FT which results from a non-zero inclination angle, from the relation
(2)
For a tool with a non-zero side cutting edge angle, Cs, the force components FT and FR no longer act in the feed and radial directions. Therefore, the force components are redefined as P1, P2 and P3, of which the positive directions are taken as the velocity, negative feed and radially outward directions as shown in Fig. 1. For the equivalent cutting edge these are given by the following equations

P1 = Fc
P2 = FT cos Cs + FR sin Cs
P3 = FT sin Cs - FR cos Cs                                                (3)
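The resolution in equation (3) can be sketched directly in code. The force values used in the demonstration below are illustrative only, not taken from the paper:

```python
import math

def resolve_forces(f_c, f_t, f_r, c_s_deg):
    """Resolve the tool-face force components into P1 (velocity direction),
    P2 (negative feed direction) and P3 (radially outward direction) for a
    side cutting edge angle Cs, following equation (3).
    Forces in N, angle in degrees."""
    c_s = math.radians(c_s_deg)
    p1 = f_c                                          # velocity direction
    p2 = f_t * math.cos(c_s) + f_r * math.sin(c_s)    # negative feed direction
    p3 = f_t * math.sin(c_s) - f_r * math.cos(c_s)    # radially outward
    return p1, p2, p3

# Illustrative values: Fc = 100 N, FT = 50 N, FR = 10 N, Cs = 45 degrees.
p1, p2, p3 = resolve_forces(100.0, 50.0, 10.0, 45.0)
```

With Cs = 0 the transformation collapses to P2 = FT and P3 = -FR, i.e. the feed and radial directions are recovered, which is a quick sanity check on the signs.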
In predicting the force components Fc and FT, the orthogonal machining theory as described by Oxley [1] is used. The chip formation model used in predicting the forces is given in Figure 3.
gives a shear flow stress in the chip material at the interface equal to the resolved shear stress, as the assumed model of chip formation is then in equilibrium. Once the shear angle is known then Fc, FT etc. can be determined. Details of the theory and its applications have been given by Oxley [1]. In addition, the thermal properties of the work material used in the experiments, which are also needed in applying the machining theory, are found from relations given by Oxley.
In order to work out the equivalence of the reamer cutting edge to a single point tool the following is carried out. The feed per revolution is converted into a feed per tooth or cutting edge, f1, and the cutting velocity is worked out. Since the chamfer has a negative 45° chamfer angle (Cs) it is necessary to convert the feed per tooth into an equivalent undeformed chip thickness, t1, for the principal cutting edge by using t1 = f1 × cos Cs, and the width of cut, w = (radial difference between the reamer and hole size) ÷ cos Cs. These values are then input into the orthogonal theory to determine values of Fc and FT. These values are then used with the inclination angle of 10° to determine FR, and then the values of P1, P2 and P3 are determined. From these values the cutting torque, Tc, and the thrust force Fthrust per cutting edge are determined using the following relations
(4)
where r is the radius of the reamer. From these values the total cutting torque and thrust force are calculated by multiplying the values in equation (4) by the number of cutting edges in the reamer. These predicted values are now compared with the experimental results obtained.
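The whole per-tooth chain described above can be sketched as follows. The number of cutting edges, the per-edge force components P1 and P2, and the readings Tc = P1·r and Fthrust = P2 for equation (4) (whose body is not reproduced here) are all assumptions made for illustration:

```python
import math

def reamer_totals(feed_per_rev, n_teeth, reamer_d, hole_d, chamfer_deg, p1, p2):
    """Illustrative per-tooth geometry and force/torque totals for the chamfer
    cutting edges. p1 and p2 are the per-edge force components (N) that would
    come from the orthogonal theory; here they are treated as given inputs.
    Diameters and feeds in mm; returns t1 (mm), w (mm), torque (Ncm), thrust (N)."""
    f1 = feed_per_rev / n_teeth                      # feed per cutting edge, mm
    c_s = math.radians(chamfer_deg)
    t1 = f1 * math.cos(c_s)                          # undeformed chip thickness
    w = ((reamer_d - hole_d) / 2.0) / math.cos(c_s)  # width of cut
    torque_per_edge = p1 * (reamer_d / 2.0) / 10.0   # N*mm -> Ncm, assuming Tc = P1*r
    total_torque = n_teeth * torque_per_edge
    total_thrust = n_teeth * p2                      # N, assuming Fthrust = P2
    return t1, w, total_torque, total_thrust

# Conditions from the paper (0.252 mm/rev feed, phi10 reamer, phi9.5 hole);
# six teeth and the force values are hypothetical.
t1, w, tq, th = reamer_totals(0.252, 6, 10.0, 9.5, 45.0, 100.0, 50.0)
```

The division by 10 converts N·mm to Ncm, matching the torque units used throughout the paper.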
4.0
The experiments carried out used the same rotational speed and feed per revolution as the initial experiments, but the reamer diameters were varied from φ10mm to φ15mm with the width of cut ranging from 0.07mm to 0.5mm due to the different drill sizes used to create the original hole. The experimental results obtained are shown in Figure 4, where the predicted and experimental values are plotted together. As can be seen from Figure 4 the correlation between the experimental and predicted results is good given that the model is based on the orthogonal machining theory for single point tools. The maximum difference seen is between the predicted and experimental thrust forces (Figure 4a), with the biggest difference being 105% for the φ10mm reamer and the 0.31mm width of cut. However it must be noted that this attempt at predicting the thrust force has given values of the same magnitude. The differences in the data can be explained by inconsistency in the generation of the drilled hole, as the drill could have an inconsistent cutting action along its flutes and thus create an uneven surface for the reamer to follow. The results for the thrust of the larger reamer (φ15.0mm) are in good agreement, with the maximum variation being only 20%.
The results in Figure 4b indicate excellent agreement between the predicted and experimental cutting torques. The biggest variation observed between the results is approximately 35%, when the width of cut is very small. This seemingly large variation could again be due to the drilling action of the previous tool causing variations in the width of cut and thus in the material removed. Overall the results presented here indicate that the variable flow stress theory of machining is capable of predicting the thrust and torque in reaming.
Figure 4a. Predicted (Pr) and experimental (Ex) thrust force (N) versus reamer diameter (mm) for widths of cut from w = 0.07mm to w = 0.5mm.
5.0 CONCLUSION
The work in this paper indicates that reaming is not a simple operation but involves two types of operation, for material removal and for sizing of the hole. The reaming operation involves a cutting action by the chamfer length, followed by a rubbing action of the helical part to create the precise hole. The thrust force is fairly constant during the operation, with the torque being made up of two components as given by equation (1). The rubbing component increases as the length of the reamer in contact with the hole increases. This continues until the hole is fully cleaned out by the reamer, as indicated by the results in Figure 2. The prediction of the thrust and torque using the variable flow stress machining theory has been successful; however, further work is required to improve the correlation. Finally, the rubbing component needs further investigation to fully understand the reaming operation, and this is currently being carried out.
Figure 4b. Predicted (Pr) and experimental (Ex) cutting torque (Ncm) versus reamer diameter (mm) for widths of cut from w = 0.07mm to w = 0.5mm.
6.0 ACKNOWLEDGMENTS
The authors wish to thank Mr Ron Fowle for his help with the experimental work.
7.0 REFERENCES
1.
2.
3.
Lin, GCI, Mathew, P, Oxley, PLB, and Watson, AR, Predicting Cutting Forces for
Oblique Machining Conditions, Proc Instn Mech Engrs, 196, No 11 (1982), 141-148.
1. INTRODUCTION
The principal properties required of modern cutting tool materials for a high production rate
and high precision machining include good wear resistance, toughness and chemical stability
under high temperatures.
S. Lo Casto et al.
Tool failure is also usually attributed to excessive wear on tool flank and rake face, where
the tool is in close contact with the workpiece and the chip, respectively.
For this reason many tool materials have been developed in the last ten years, such as
ceramics. For a long time alumina ceramics have held great promise as cutting tool materials
because of their hardness and chemical inertness, even at high temperatures. However, their inability to withstand mechanical and thermal shock loads makes them unpredictable for most cutting operations.
In order to improve the toughness of alumina ceramics, various research has recently been done [1,2,3,4,5]. This includes transformation toughening by the addition of zirconia, the incorporation of significant amounts of titanium carbide, and reinforcement with silicon carbide whiskers [6, 7, 8, 9, 10]. Recently this group has been supplemented by silicon nitride.
In our previous papers [11, 12] we reported the performance and wear mechanisms of some ceramic materials when cutting AISI 1040. In the light of the results obtained after machining new materials for special uses, it became very relevant to study the effect of nickel and chromium on tool life. These elements are present in special refractory steels.
The AISI 310 nickel based alloy is one of the most frequently employed materials for equipment subjected to high chemical wear at working temperatures of up to 1100°C.
Generally the AISI 310 steel belongs to the group of "hard to machine" materials. Its low
specific heat, thermal conductivity and hardness as compared to AISI 1040 steel, its
pronounced tendency to form a built-up-edge, strain hardening and the abrasive effect of
intermetallic phases result in exceptionally high mechanical and thermal stresses on the
cutting edge during machining. Due to the high cutting temperatures reached, sintered
carbide inserts can be used only at relatively low cutting speeds. Because of these
difficulties, recent improvements have made some ceramic tool grades suitable for
machining nickel-based alloys.
For these reasons the purpose of this paper is to report the performance of some ceramic materials when cutting AISI 310 steel, a nickel based alloy, in continuous cutting with cutting speeds ranging from 1 m/s to 4 m/s.
2. EXPERIMENTAL SECTION
A set of tool life tests, in continuous dry turning with three-dimensional cutting conditions,
was performed on AISI 310 steel whose characteristics are reported in Tab. I
The material worked was a commercial tube with an outer diameter of 250mm and an inner one of 120mm. A piece of this tube, approximately 750mm long, was fixed between chuck and tail stock. The commercially available ceramic materials selected for the tests, according to the insert number SNG 453, were as follows:
- Zirconia-toughened alumina (Al2O3-ZrO2 7%vol.), in the following called "F";
- Mixed-based alumina (Al2O3-TiN-TiC-ZrO2), in the following "Z";
- Alumina reinforced with SiC whiskers (Al2O3-SiCw), in the following "W";
- Silicon nitride (Si3N4), in the following "S";
- Sintered carbide grade P10 (WC-TiC-Co), in the following "C".
The inserts were mounted on a commercial tool holder having the following geometry:
- rake angle γ = -6°
- clearance angle α = 6°
- side cutting edge angle ψ = 15°
- inclination angle λ = -6°
The tests were carried out with the following parameters:
- depth of cut: d = 2.0mm;
- feed: f = 0.18mm/rev;
- speeds: v1 = 1.3m/s; v2 = 2.1m/s; v3 = 3.3m/s.
In each test the cutting tool wear level was periodically submitted first to a classical control by profilometer and then to observation of the rake face and flank by a computer vision system. Each image of the cutting tool observed was digitized by a real time video digitizer board. Finally the image thus obtained was stored on an optical WORM disk. With this technique one can always measure the flank wear and observe and check the crater dimensions.
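A minimal sketch of what such a vision-based flank wear measurement does, assuming a pre-thresholded binary image and a known pixel scale (neither of which is specified in the paper):

```python
import numpy as np

def flank_wear_vb(wear_mask, mm_per_pixel):
    """Estimate the flank wear land width Vb from a binary image in which
    worn pixels are 1. Vb is taken as the maximum extent of worn pixels
    measured down each image column (perpendicular to the cutting edge).
    This is only a sketch of the measurement principle, not the authors'
    actual image-processing pipeline."""
    per_column = wear_mask.sum(axis=0)        # worn pixels in each column
    return per_column.max() * mm_per_pixel    # widest wear land, in mm

# Synthetic example: a wear land 30 pixels deep at a 0.01 mm/pixel scale.
mask = np.zeros((100, 200), dtype=int)
mask[0:30, 50:120] = 1
vb = flank_wear_vb(mask, mm_per_pixel=0.01)   # 0.3 mm
```

A real system would first segment the worn region from the tool image (thresholding, edge detection); the reduction to a Vb value would then look much like this.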
3. ANALYSIS OF THE RESULTS
The most important observation is that in all tests carried out with AISI 310 a large groove at the end of the depth of cut begins immediately and grows rapidly, at a rate depending on the cutting tool material. This can be thought of as due to the abrasive effect of intermetallic phases on all the tool materials used.
During the tests it was observed that the only material which showed crater and flank wear was type S, while the tool materials of types C, F, W and Z showed a large groove at the end of the cut. For this reason it was decided to stop the tests when the height of the primary groove reached approximately 2.0mm.
Type S tool material shows a high level of flank wear, Vb, and a low level of groove, Vbn, at a speed of 1.3m/s, Fig. 1. With the increase of speed the flank wear increases. At 3.3m/s the flank wear reaches a level of 2.1mm after 30" of cutting.
Type W tool material shows a slow increase of groove and a very low flank wear at the lower speed, Fig. 2. With the increase of speed, flank wear and groove grow more quickly. At the speed of 3.3m/s tool life is reduced to 210", with very little crater wear even at the higher speed.
Type F and Z tool materials show a very short life even at the lower speed. At 1.3m/s the groove was 2.2mm for F and 2.4mm for Z after 240" of cutting. The flank wear was very
low at all speeds. At a higher speed both tool materials reached the maximum groove level
after 30" .
Fig. 1 - Wear of S-type tool vs cutting time (Vb and Vbn in mm, at 1.3, 2.1 and 3.3 m/s).
Fig. 2 - Wear of W-type tool vs cutting time (Vb and Vbn in mm, at 1.3, 2.1 and 3.3 m/s).
Type C tool material, Fig. 3, was very interesting at low and medium speed. At 1.3m/s the flank wear was 0.2mm after 2400" of cutting. After the same time the groove reached the level of 1.9mm. At the speed of 2.1m/s, after 2400" of cutting, the flank wear reached the level of 0.8mm and the groove the level of 1.5mm. At the speed of 3.3m/s the flank wear was predominant and after 400" of cutting reached the level of 1.1mm, with the groove after the same time at the level of 0.3mm.
Fig. 3 - Wear of C-type tool vs cutting time (Vb and Vbn in mm, at 1.3, 2.1 and 3.3 m/s).
During cutting the chip became hotter at its external extremity. This was probably due to
the abrasive effect of intermetallic phases.
In the tests the limits imposed by regulations regarding flank wear and groove were exceeded; the work was continued as long as the workpiece was well finished.
4. CONCLUSIONS
After the tests carried out with ceramic tool materials when cutting AISI 310 steel we can conclude:
- type S tool material wears very quickly and is the only one which displays crater and flank wear;
- types F and Z tool materials have a very short life and only display groove wear;
- type W tool material was very interesting at low and medium speeds;
- type C tool material was the most interesting because of its long life, 40min at low speed and 35min at medium speed. At high speed the life shortens to 9min.
The tool materials of types C, F, W and Z do not display crater wear.
ACKNOWLEDGEMENTS
This work has been undertaken with the financial support of the Italian Ministry of University and Scientific and Technological Research.
REFERENCES
1. Chattopadhyay A.K. and Chattopadhyay A.B.: Wear and Performance of Coated Carbide
and Ceramic Tools, Wear, vol. 80 (1982), 239-258.
2. Kramer B.M.: On Tool Materials for High Speed Machining, Journal of Engineering for
Industry, 109 (1987), 87-91.
3. Tonshoff H.K. and Bartsch S.: Application Ranges and Wear Mechanism of Ceramic
Cutting Tools, Proc. of the 6th Int. Conf. on Production Eng., Osaka, 1987, 167-175.
4. Huet J.F. and Kramer B.M.: The Wear of Ceramic Tools, 10th NAMRC, 1982, 297-304.
5. Brandt G.: Flank and Crater Wear Mechanisms of Alumina-Based Cutting Tools When
Machining Steel, Wear, 112 (1986), 39-56.
6. Tennenhouse G.J., Ezis A. and Runkle F.D.: Interaction of Silicon Nitride and Metal
Surfaces, Comm. of the American Ceramic Society, (1985), 30-31.
7. Billman E.R., Mehrotra P.K., Shuster A.F. and Beeghly C.W.: Machining with Al2O3-SiC Whisker Cutting Tools, Ceram. Bull., 67 (1980) 6, 1016-1019.
8. Exner E.L., Jun C.K. and Moravansky L.L.: SiC Whisker Reinforced Al2O3-ZrO2
Composites, Ceram. Eng. Sci. Proc., 9 (1988) 7-8, 597-602.
9. Greenleaf Corporation: WG-70 Phase Transformation Toughened Ceramic Inserts,
Applications (1989), 2-3.
10. Wertheim R.: Introduction of Si3N4 (Silicon Nitride) and Cutting Materials Based on it,
Meeting C-Group of CIRP, Palermo, (1985).
11. Lo Casto S., Lo Valvo E., Lucchini E., Maschio S., Micari F. and Ruisi V.F.: Wear
Performance of Ceramic Cutting Tool Materials When Cutting Steel, Proc. of 7th Int.
Conf. on Computer-Aided Production Engineering, Cookeville (U.S.A.), 1991, 25-36.
12. Lo Casto S., Lo Valvo E., Ruisi V.F., Lucchini E. and Maschio S.: Wear Mechanism of
Ceramic Tools, Wear, 160 (1993), 227-235.
M. Beltrame
P. Rosa TBM, Maniago, Italy
E. Kuljanic
University of Udine, Udine, Italy
M. Fioretti
P. Rosa TBM, Maniago, Italy
F. Miani
University of Udine, Udine, Italy
KEY WORDS: Diamond Machining, Titanium Alloys, Milling, Blades, PCD, Gas Turbine
ABSTRACT: Is milling of titanium alloy turbine blades possible with a PCD (polycrystalline diamond) cutter, and what surface roughness can be expected? In order to answer the question, basic considerations of diamond tools machining titanium alloys, chip formation and experimental results in milling of titanium alloy TiAl6V4 turbine blades are presented. The milling results for a "slim" turbine blade prove that milling with a PCD cutter is possible. Tool wear could not be registered after more than 100 minutes of milling. The minimum surface roughness of the machined blade was Ra = 0.89 μm. Better results are obtained when wet milling is performed. Therefore, finishing milling of titanium alloy TiAl6V4 turbine blades with a PCD cutter is promising.
1. INTRODUCTION
Contemporary technology relies much on the exploitation of new and advanced materials.
Progress in Materials Science and Technology yields year by year new applications for new
materials. The field of gas turbine materials has experienced the introduction of several
advanced materials [1] for both the compressor and the turbine blades: respectively titanium
and nickel based alloys have met thorough industrial success. Compressor blades are used
with high rotational speeds; materials with high Young modulus E and low density are
required to obtain a high specific modulus, which is the ratio of the two and is one of the
M. Beltrame et al.
key factors in controlling the rotational resonance. TiAl6V4 (IMI 318), an alloy with a mixed structure of α (hexagonal close packed) and β (body centred cubic) phases, with a room temperature proof stress [2] of 925 MPa and a relative density of 4.46 kg/dm³, is now almost universally used for blades operating up to 350°C.
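As a rough illustration of the specific modulus argument (the density is taken from the text; the Young's modulus value below is a typical handbook figure for TiAl6V4, not from this paper):

```python
def specific_modulus(e_gpa, density_kg_dm3):
    """Specific modulus E/rho, here in GPa per (kg/dm^3). For a given blade
    geometry, a higher value pushes the rotational resonances up."""
    return e_gpa / density_kg_dm3

# TiAl6V4: density 4.46 kg/dm^3 (from the text); E ~ 114 GPa is an assumed
# handbook value.
ratio = specific_modulus(114.0, 4.46)   # roughly 25.6
```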
Titanium alloys are generally machined with uncoated carbide tools at speeds that have been increased in the last decades much less than those employed in steel cutting. A possibility to apply PCD tools in turning of titanium-based alloys is presented in [3]. As far as the authors know there are no publications on milling titanium alloy turbine blades with a PCD cutter.
Is milling of titanium alloy turbine blades possible with a PCD (polycrystalline diamond) cutter, and what surface roughness can be obtained? To answer the question we will present some basic considerations of diamond tools machining titanium alloys, chip formation and experimental results in milling of titanium alloy TiAl6V4 turbine blades, obtained at Pietro Rosa T.B.M., a leader in manufacturing compressor gas and steam turbine blades.
2. BASIC CONSIDERATIONS OF DIAMOND TOOLS MACHINING TITANIUM
Cutting forces in titanium machining are comparable to those required for steels with similar mechanical strength [4]; however, the thermal conductivity, compared to the same class of materials, is just one sixth. A disadvantage is that the typical shape of the chip allows only a small surface area of contact. These conditions cause an increase in the tool edge temperature. Relative machining times increase more than proportionally with Brinell hardness in shifting from the pure metal to α alloys to α/β to β alloys, as in the following table [5]:
Table 1. Ratios of machining times for various titanium alloys

Titanium alloy           Turning     Face Milling   Drilling    Brinell
                         WC Tools    WC Tools       HSS Tools   Hardness
Pure metal  Ti           1.4         0.7            0.7         175
Near α      TiAl8Mo1V1   2.5         1.4            1           300
α/β         TiAl6V4      2.5         3.3            1.7         350
β           TiV13Cr11Al3 10          10             -           400
In roughing of titanium alloys with a 4 mm depth of cut and a feed of 0.2 mm/rev, the cutting speeds are influenced not only by the hardness but also by the workpiece material structure, as seen in Figure 1.
Kramer et al. [6, 7] have made an extensive analysis of the possible requirements for improved tool materials that should be considered in titanium machining. According to this analysis a tool material should:
- promote a strong interfacial bonding between the tool and the chip to create seizure conditions at the chip-tool interface,
- have low chemical solubility in titanium to reduce the diffusion flux of tool constituents into the chip,
- have sufficient hardness and mechanical strength to maintain its physical integrity.
Polycrystalline diamond (PCD) [8] possesses all these requirements. The heat of formation of TiC is among the highest of all the carbides [9] (185 kJ/mol), the chemical solubility is low, even if not negligible (1.1 atomic percent in α Ti and 0.6 atomic percent in β Ti), and, compared with single crystal diamond, PCD has indeed enough hardness, along with a superior mechanical toughness. PCD is thus a material worth considering for machining titanium alloys, if correct cutting conditions are chosen. The correct cutting conditions can be found only by experiments.
Figure 1. Titanium roughing cutting speed vs Brinell hardness, with a 4 mm depth of cut and feed of 0.2 mm/rev. A - α alloys, WC tools; B - α and α+β alloys, HSS tools; C - β alloys, HSS tools.
3. EXPERIMENTAL APPARATUS AND PROCEDURE
3.1. Workpiece and Workpiece Material
The workpiece is a compressor blade of a gas turbine, Figure 2. Such a "slim" blade was chosen on purpose to have an extremely low stiffness of the machining system. The effect of stiffness of the machining system on tool wear in milling was considered in [10].
The material of the workpiece is TiAl6V4 titanium alloy, heat treated to HB 400, usually used for turbine construction.
3.2. Machine Tool and Tool
The milling experiments were performed on a CNC five axis milling machine at the Pietro Rosa facilities in Maniago, P = 16 kW, with a Walter end milling cutter, 32 mm in diameter, with 3 PCD inserts (Figure 3).
Figure 2. Compressor blade workpiece (191.22 mm), Section A-A, with measuring points 1, 2 and 3.
4.2. Tool Wear
An investigation of tool wear in milling was done in [11]. The characteristics of diamond tools are high hardness and wear resistance, low friction coefficient, low thermal expansion and good thermal conductivity [12].
In these experiments no crater or flank wear was observed after 108 minutes of dry milling, Figure 5. The same results were obtained, Figure 6, in wet milling at the same cutting conditions.
There is no difference between the new cutting edge and the edge after 108 minutes of dry or wet milling. There is an explanation for such behaviour of PCD tools when turning titanium alloys [3]: the formation of a titanium carbide reaction film on the diamond tool surface protects the tool, particularly from crater wear. Further work should be done for a better understanding of this phenomenon.
4.3. Surface Roughness
Surface roughness is one of the main features in finishing operations. The surface roughness
was measured at three points: 1, 2 and 3 on both sides of the blade, Figure 2. The minimum
value of surface roughness was Ra = 0.89 μm, measured in the feed direction, and the average value was Ra = 1.3 μm in both dry and wet milling. It can be seen that the obtained surface roughness is low for such a "slim" workpiece and for a milling operation.
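For reference, Ra is the arithmetic mean deviation of the profile from its mean line. A minimal sketch with a made-up profile (a pure sine of amplitude A has Ra = 2A/π, so an amplitude of 1.4 μm gives roughly the 0.89 μm reported above):

```python
import numpy as np

def roughness_ra(profile_um):
    """Arithmetic mean roughness Ra of a measured profile (heights in
    micrometres), evaluated about the profile mean line."""
    z = np.asarray(profile_um, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Synthetic profile: two full periods of a 1.4 um amplitude sine wave.
x = np.linspace(0.0, 4.0 * np.pi, 10000)
ra = roughness_ra(1.4 * np.sin(x))   # approx. 2 * 1.4 / pi = 0.89 um
```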
5. CONCLUSION
Based on the results and considerations presented in this paper, we may draw some conclusions about milling of titanium alloy turbine blades with a PCD cutter. The answer to the question raised at the beginning, whether milling of titanium alloy turbine blades may be performed with a PCD (polycrystalline diamond) cutter, is positive.
Crater or flank wear of the PCD cutter does not occur after 108 minutes of milling.
The minimum surface roughness of the machined surface is Ra = 0.89 μm, and the average value is Ra = 1.3 μm, measured in the feed direction.
Milling of TiAl6V4 with a PCD cutter can be done dry or wet. However, it is better to apply a coolant.
In accordance with the presented results, milling of the titanium based alloy TiAl6V4 blade with a PCD cutter is suitable for finishing operations. This research is to be continued.
ACKNOWLEDGMENTS
The authors would like to express their gratitude to Mr. S. Villa, Technical Manager of
WALTER- Italy. This work was performed under sponsorship of WALTER Company.
REFERENCES
1. Duncan, R.M., Blenkinsop, P.A., Goosey, R.E.: Titanium Alloys in Meetham, G.W.
(editor): The Development of Gas Turbine Materials, Applied Science Publishers,
London, 1981
2.
Polmear, I.J.: Light Alloys, Metallurgy of the Light Metals, Arnold, London, 1995
3. Klocke, F., Konig, W., Gerschwiler, K.: Advanced Machining of Titanium and Nickel-Based Alloys, Proc. 4th Int. Conf. on Advanced Manufacturing Systems and Technology
AMST'96, Udine, Springer Verlag, Wien, N.Y., 1996
4. Chandler, H.E.: Machining of Reactive Metals, Metals Handbook, Ninth Edition,
ASM, Metals Park Ohio, 16(1983)
5. Zlatin, N., Field, M.: Titanium Science and Technology, Jaffee, R.I., Burte, H.M.,
Editors, Plenum Press, New York, 1973
6. Kramer, B.M., Viens, D., Chin, S.: Theoretical Considerations of Rare Earth
Compounds as Tool Materials for Titanium Machining, Annals of the CIRP, 42(1993)1,
111-114
7. Hartung, P.D., Kramer, B.M.: Tool Wear in Titanium Machining, Annals of the CIRP,
31(1982)1, 75-79
8. Wilks, J., Wilks, E.: Properties and Applications of Diamond, Butterworth
Heinemann, Oxford, 1994
9.
Toth, L.E.: Transition Metal Carbides and Nitrides, Academic Press, New York, 1971
10. Kuljanic, E.: Effect of Stiffness on Tool Wear and New Tool Life Equation, Journal of
Engineering for Industry, Transaction of the ASME, Ser. B, (1975)9, 939-944
11. Kuljanic, E.: An Investigation of Wear in Single-tooth and Multi-tooth Milling, Int. J.
Mach. Tool Des. Res., Pergamon Press, 14(1974), 95-109
12. Konig, W., Neise, A.: Turning TiAl6V4 with PCD, IDR 2(1993), 85-88
S. Dolinsek
University of Ljubljana, Ljubljana, Slovenia
ABSTRACT: In the following paper some results of the on-line identification of the cutting process at the macro level of orthogonal turning are presented. The process is described by the estimation of the transfer function, defined by output-input energy ratios. The estimated parameters of the transfer function (gain, damping) vary significantly with different levels of tool wear and provide a possibility for effective and reliable adaptive control.
1. INTRODUCTION
Demands for machining cost reduction (minimization of operator assistance and production times) and improvements in product quality are closely connected with successful monitoring of the cutting process. Thus, building up an efficient method for on-line tool condition monitoring is no doubt an important issue and of great interest in the development of fully automated machining systems. In detail, we describe a reliable and continuous diagnosis of the machining process (tool failures, different levels of tool wear and chip shapes), observed under different machining conditions and applied in practical manufacturing environments. A great effort has been spent during the last decade in researching and introducing different applications of tool monitoring techniques [1]. Numerous research works have addressed these questions, related to the complexity of the
cutting process, but marketable monitoring applications are still too expensive and unreliable; they are mostly applied as tool condition monitoring techniques. A complete monitoring system usually consists of sensing, signal processing and decision making. According to the different approaches to monitoring problems, methods can be divided into two categories: model based and feature based methods. A comprehensive description of the different methods is depicted in Fig. 1 [2]. The most widely used are feature based methods, where some features extracted from sensor signals are observed to identify different process conditions. In model based methods sensor signals are outputs of the process, which is modeled as a complex dynamic system. These methods consider the physics and complexity of the system and they are the only alternative in modeling a machining system as a part of the complex manufacturing system [3]. However, they have some limitations: real processes are nonlinear, time variant and difficult to model.
Fig. 1: Classification of monitoring methods: estimation of model parameters from the process outputs leads to the identification of the process condition.
For the orthogonal cutting model, presented in Fig. 2, the input and output energies are
expressed in the form of their time series parameters [5]:
(2)
(3)
Fig. 2: Energy model of the cutting process [6] and its practical orthogonal implementation [7] (chip and surface interfaces indicated).
Fig. 2 also shows the practical realization of orthogonal cutting in the case of side turning of
a tube, together with the measuring points needed to access the input-output parameters in the energy
equations. The cutting process can thus be described by on-line estimation of the
input and output energies and their spectral estimates. The transfer function is defined as
follows [8,6]:
(4)
In the transfer function equation, G_uiuo represents the cross power spectrum estimate
between the input and output energies, and G_uiui the input energy power spectrum estimate.
The estimated transfer function can be described in terms of its parameters: gain
(amplitude relationship) and damping (impulse response).
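The spectral estimation in equation (4) can be sketched numerically. The following is a minimal Python illustration (not the authors' implementation): it forms a naive, single-block periodogram estimate of H(f) = G_uiuo(f)/G_uiui(f) from two short energy records; the toy signals and the magnitude threshold are assumptions made for the example.

```python
import cmath
import math

def dft(x):
    """Naive DFT; adequate for a short illustrative record."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))
            for f in range(n)]

def transfer_function(u_in, u_out):
    """H(f) = G_uiuo(f) / G_uiui(f): cross power spectrum between the input
    and output energies divided by the input auto power spectrum
    (single block, no averaging)."""
    ui, uo = dft(u_in), dft(u_out)
    return [(ui[f].conjugate() * uo[f]) / (ui[f].conjugate() * ui[f])
            for f in range(len(u_in)) if abs(ui[f]) > 1e-6]

# toy check: if the output energy is twice the input, the gain |H| is 2
u_in = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
u_out = [2.0 * x for x in u_in]
gains = [abs(h) for h in transfer_function(u_in, u_out)]
```

In practice the spectra would be averaged over many blocks (Welch-style) before forming the ratio, which is what gives the coherence and signal-to-noise figures discussed later.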
3. EXPERIMENTAL SET UP DESCRIPTION AND TIME SERIES ENERGY
ASSESSMENTS
For the verification of the presented model it is necessary to build up a proper machining
and measuring system. The sensing system for accessing the parameters in the energy equations
consists of a force sensor, cutting edge acceleration (velocity, displacement) sensors and a
cutting speed sensor. By their characteristics, they do not interfere within the studied
frequency range of the cutting process. The greatest problem lies in measuring the chip
flow speed. On-line measurement has so far not been realized, so the speeds had to
be derived from the interrelation between chip thickness and cutting speed. To record all
measured parameters simultaneously in real time, a sophisticated measuring system was
used. Fig. 3 shows the basic parts of the equipment for signal processing, together with a description
of the workpiece material, cutting tool geometry and the selected range of cutting
conditions.
Fig. 3: Experimental set-up. The forces Fx(t), Fy(t) and the other signals are recorded by an HP 3567A spectrum/network analyzer (HP-IB interface, HP Measurement Software). Workpiece material: ISO C45E4, normalized; insert: TNMA 220408, grade P15. Cutting conditions: vc = 150 m/min, f = 0,193 mm/rev, a = 2 mm. Instrumentation: frequency range 6,4 kHz, resolution 1600 lines, 40 averages.
The power spectra of the measured process parameters (cutting force and displacement
velocity) are shown in Fig. 4. From this study we can conclude that the energy of the cutting
process is, in the case of a real turning process, distributed mainly in the range of the
natural frequencies of the cutting tool tip.
Fig. 4: Power spectra of force and displacement velocity, compared with the tool-tip
modal characteristics in the input direction.
The fluctuations of the input-output energies are determined from real-time series records of
the measured parameters in the energy equations. Fig. 5 shows an example of time series records
of the input parameters (cutting speed, acceleration, computed displacement speed, the
difference between the cutting and displacement speeds, and the input force) and the
calculated time series record of the input energy. Similar results have also been obtained for
the output energies. An analysis of the stochastic time series records of the cutting force and
displacement velocity signals shows stability, normality and sufficient reproducibility of the
measuring results. Changes in cutting conditions significantly influence the static and
dynamic characteristics of the parameters in the energy equations.
Fig. 5: Time series of the measured parameters (input speed, speed difference, acceleration, displacement speed) and input energy evaluation.
Fig. 6: Power spectra of the input and output cutting forces for a new (VB = 0 mm) and a worn (VB = 0,2 mm) tool.
4. TRANSFER FUNCTION ESTIMATION
From the estimated power spectra of the input and output energies, their relation functions and
the transfer function were obtained. As presented in Fig. 7, the estimated cross power spectra
show a common signal component in the frequency range of 2 to 2,5 kHz, where good
coherence (between 0,75 and 0,85) and signal-to-noise ratios (between 5 and 10) exist.
The estimated transfer function of the cutting process can be analyzed qualitatively
with respect to its structure and quantitatively with respect to its parameters.
The shape of the transfer function is a characteristic of the process in connection with the
structural characteristics of the machining system in a closed loop. In the amplitude relationship,
its shape shows a certain gain as a consequence of the cutting process, as well as multimodal
responses of the tool tip and dynamometer. The damping of the transfer function was
obtained from its impulse response, which is a damped one-sided sine wave.
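As a rough numerical sketch (our illustration, not the paper's procedure), the damping of such a one-sided damped sine can be extracted from the impulse response via the logarithmic decrement between successive positive peaks; the signal parameters below are invented for the example.

```python
import math

def damping_from_impulse(h, dt):
    """Estimate the decay rate s of a one-sided damped sine
    h(t) = A * exp(-s t) * sin(w t) from the logarithmic decrement
    between the first two positive peaks."""
    peaks = [(i, v) for i, v in enumerate(h)
             if 0 < i < len(h) - 1 and h[i - 1] < v > h[i + 1] and v > 0]
    (i1, p1), (i2, p2) = peaks[0], peaks[1]
    delta = math.log(p1 / p2)        # logarithmic decrement
    period = (i2 - i1) * dt          # time between adjacent peaks
    return delta / period

# synthetic impulse response: 250 Hz damped sine, decay rate 40 1/s
dt = 1e-4
h = [math.exp(-40.0 * i * dt) * math.sin(2 * math.pi * 250.0 * i * dt)
     for i in range(200)]
s_est = damping_from_impulse(h, dt)
```

The peak-picking quantizes the peak times to the sampling grid, so the estimate is only approximate at coarse sampling rates.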
Fig. 7: Estimates of the power spectra (output energy), cross energy spectrum, transfer function and impulse response of the cutting process.
In the region of a strong connection between the energies we can locate changes in the gain and
damping of the transfer function as an influence of the cutting conditions [11]. These
parameters could therefore be a basis for the identification of the process and a criterion for
on-line adaptation of the cutting conditions towards optimal cutting circumstances [12].
Tool wear is certainly one of the most unfavorable phenomena in cutting processes.
Research on turning with a worn tool indicated an increase in the power spectrum of the input energy
and a decrease of the output energy. The most significant changes are in the transfer function
parameters, while the shape remains unchanged. Fig. 8 indicates the influence of
different tool wear levels on the parameters of the estimated transfer function. A decrease in gain
and an increase in damping are the identification characteristics which confirm unfavorable
cutting conditions and the accuracy of the identification process.
Fig. 8: Gain and damping values of the transfer function for different tool wears (VB = 0 to 0,20 mm).
5. CONCLUSIONS
The results of the on-line identification of the cutting process on the macro level in side
orthogonal turning are presented. A cybernetic concept of the machining system, proposed
as a basis of identification by J. Peklenik, treats the cutting process in a closed loop with the
machine tool. The process is described by the estimate of the transfer function defined by
the output-input energy ratios. The shape of the estimated transfer function is a
characteristic of the process. Corresponding to changes in tool wear, the gain and damping
values of the transfer function also change. A decrease in gain and an increase in
damping are identification characteristics which indicate unfavorable cutting conditions
and a strong connection to the cutting process characteristics. The proposed on-line
identification analysis of the cutting process has therefore confirmed the applicability of
the proposed models; however, more experimental verification under different cutting
conditions is needed before practical use.
6. REFERENCES
1. Byrne, G., Dornfeld, D., Inasaki, I., Ketteler, G., Teti, R.: Tool Condition Monitoring, The Status
of Research and Industrial Application, Annals of the CIRP, 44 (1995), 2, 24-41
2. Du, R., Elbestawi, M.A., Wu, S.M.: Automated Monitoring of Manufacturing Processes,
ASME, Journal of Engineering for Industry, 117 (1995), 3, 121-141
3. Serra, R., Zanarini, G.: Complex Systems and Cognitive Processes, Springer-Verlag, Berlin, 1990
4. Peklenik, J., Mosedale, T.: A Statistical Analysis of the Cutting System Based on an Energy
Principle, Proc. of the 8th Intern. MTDR Conference, Manchester, 1967, 209-231
5. Mosedale, T.W., Peklenik, J.: An Analysis of the Transient Cutting Energies and the Behavior of
the Metal-Cutting System using Correlation Techniques, Advances in Manufacturing Systems, 19, 1971, 111-141
6. Peklenik, J., Jerele, A.: Some Basic Relationships for Identification of the Machining Process,
Annals of the CIRP, 41 (1992), 1, 129-136
7. Dolinsek, S.: On-line Cutting Process Identification on Macro Level, Ph.D. thesis, University of
Ljubljana, 1995
8. Merchant, M.E.: Mechanics of the Metal Cutting Process, Journal of Applied Physics, 16
(1945), 3, 267-275
9. Bendat, J., Piersol, A.: Engineering Applications of Correlation and Spectral Analysis, John
Wiley and Sons Ltd, New York, 1980
10. Ewins, D.J.: Modal Testing: Theory and Practice, John Wiley & Sons, London, 1984
11. Dolinsek, S., Peklenik, J.: An On-line Estimation of the Transfer Function for the Cutting
Process, Technical paper of NAMRI/SME, 27 (1996), 34-40
12. Kastelic, S., Kopac, J., Peklenik, J.: Conceptual Design of a Relation Data Base for
Manufacturing Processes, Annals of the CIRP, 42 (1993), 1, 493-496
1. INTRODUCTION
In industrial applications where flexible manufacturing systems are employed, one of the
most important tasks is to monitor the tool status in order to replace the tool as it loses its cutting
capability. Amongst the various methods of controlling the tool status, a subdivision can be
made between on-line and off-line methods.
The Authors have already conducted both on-line (related to machine tool vibrations) [1]
and off-line (related to direct measurement of the flank wear by means of a microscope)
analyses, showing how, sometimes, it is rather difficult to correctly correlate the actual tool
status with the physical variables chosen to monitor the system. The simplest way to
conduct an off-line check of the tool status is to measure either the flank or the crater wear level
(see Figure 2) or to detect the presence of a cutter breakage. This operation poses many
problems when it is conducted automatically on-line [2, 3, 4, 5].
A suitable mechanism for automatic recognition of the tool wear level can be found by
applying a neural network trained to perform image recognition. The neural network
proposed by the authors is a multi-layer one where both the input nodes (518) and the
second layer (37) perform non-linear operations.
Figure 1 - On-line monitoring scheme: data acquisition and continuous monitoring of a physical variable.
2. NEURAL NETWORKS
The field of Artificial Neural Networks has received a great deal of attention in recent
years from the academic community and from industry. Nowadays A.N.N. applications are found
in a wide range of areas, including Control Systems, Medicine, Communications, Cognitive
Sciences, Linguistics, Robotics, Physics and Economics [6].
As a definition, we can say that an A.N.N. is a parallel, distributed information processing
structure consisting of processing elements (which can possess a local memory and can
carry out localised information processing operations) interconnected via unidirectional
signal channels called connections. Each processing element (called a neuron) has a single
output connection that branches into as many collateral connections as desired. Each of
them carries the same signal, which is the processing element's output signal. The processing
element output signal can be of any mathematical type desired. The information within each
element is processed under a restriction of complete locality: it depends only on the
current values of the input signals arriving at the processing element via incoming
connections and on the values stored in the processing element's local memory.
Figure 3 - Basic neuron scheme: inputs x1 ... xn, weights, bias and threshold unit.
Figure 4 - Network structure: input signals, hidden layer and output layer.
3. THE BACK-PROPAGATION ALGORITHM
The activation function for the basic processing unit (neuron) in a B.P. network is the
sigmoid function (please refer to Figure 3):

o = 1 / (1 + e^-(Σi wi xi - θ))    (1)
140
The unknowns in this expression are represented by w_i and θ for each neuron in each
layer. The error of the network can be evaluated by comparing the outputs obtained by the net
(o_po) with the desired ones (d_po), extended to all the neurons of the output layer. This means
that, for one of the patterns (p) used to teach the net, the error can be defined as:
E_p = 1/2 Σ_o (d_po - o_po)²    (2)

and, summed over all patterns,

E = Σ_p E_p = 1/2 Σ_p Σ_o (d_po - o_po)²    (3)
The target of the learning is to minimise this error, i.e. the weights are corrected
repeatedly until convergence. The algorithm [7] states that the change to be
made to each weight is proportional to the derivative of the error with respect to that
weight:

Δ_p w_ij = -η ∂E_p/∂w_ij    (4)

where w_ij is the weight between the output of unit i and unit j in the next layer, and η
represents the learning rate of the process.
It is our intent to use such a network (once adequately taught) to determine automatic
recognition of the tool status.
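As an illustration of equations (1)-(4), the following sketch trains a tiny sigmoid network with the weight-update rule Δw = -η·∂Ep/∂w. The 2-2-1 topology, the two toy patterns and the initialisation are assumptions chosen for brevity; they stand in for the 518-input, 37-hidden-node network used by the authors.

```python
import math
import random

random.seed(0)

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# tiny 2-2-1 network (the authors' net has 518 inputs and 37 hidden nodes)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
t1 = [random.uniform(-1, 1) for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(2)]
t2 = random.uniform(-1, 1)
eta = 0.5                                   # learning rate

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) - t1[j])
         for j in range(2)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(2)) - t2)
    return h, o

patterns = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]   # toy teaching set

def total_error():                          # E = sum_p 1/2 (d_p - o_p)^2
    return sum(0.5 * (d - forward(x)[1]) ** 2 for x, d in patterns)

e_start = total_error()
for _ in range(5000):
    for x, d in patterns:
        h, o = forward(x)
        delta_o = (d - o) * o * (1.0 - o)   # output-layer error term
        for j in range(2):
            delta_h = delta_o * w2[j] * h[j] * (1.0 - h[j])
            w2[j] += eta * delta_o * h[j]   # Dw = -eta * dE/dw
            for i in range(2):
                w1[j][i] += eta * delta_h * x[i]
            t1[j] -= eta * delta_h
        t2 -= eta * delta_o
e_end = total_error()
```

The error decreases monotonically for this separable toy problem, mirroring the convergence behaviour exploited in the training procedure described below.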
4. THE ARCHITECTURE OF THE SYSTEM DEVELOPED
Figures - Steps 2 to 6 of the image processing procedure, with the origin of the coordinate system for the captured image.
1. the original digital image is recorded on the hard disk of the personal computer; the
images are stored using 256 grey levels;
2. the maximum value of the grey level of the background is used in order to obtain a
black background; the mean grey level is also calculated;
3. all the non-black pixels modify their grey level in order to obtain a mean value equal to
127 (256/2 - 1);
4. a suitable procedure (starting from the origin of the co-ordinate system) finds the tool
edge and then the tool corner by analysing the grey level of the image's pixels;
5. once the image is cleaned up and the corner of the cutter is identified, a rectangle
(width equal to 2 mm and height equal to 0.2 mm) is drawn around the tool edge; the
number of pixels contained in this rectangle depends on the scale along the x and y
axes: there are 185 x 70 pixels; this means that, using all these pixels, the input layer
of the A.N.N. would consist of 12950 neurons (too large a number);
6. the pixels contained in this area are therefore tessellated into squares of 5 x 5 pixels,
taking the mean value of the grey level for each square; the tassels are then 37 x 14;
7. these grey levels are stored in a suitable ASCII file which represents one of the inputs
of the A.N.N.; using these files the network can be taught or can recognise the tool
status.
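Step 6 above (averaging 5 x 5 pixel squares into 37 x 14 tassels) can be sketched as follows; the synthetic image is an assumption for the example.

```python
def tessellate(pixels, tile=5):
    """Average grey levels over non-overlapping tile x tile squares;
    pixels is a list of equal-length rows, both dimensions divisible by tile."""
    out = []
    for r in range(0, len(pixels), tile):
        out.append([
            sum(pixels[r + dr][c + dc]
                for dr in range(tile) for dc in range(tile)) / (tile * tile)
            for c in range(0, len(pixels[0]), tile)
        ])
    return out

# a 70 x 185 pixel window around the tool edge -> 14 x 37 tassels
image = [[(r + c) % 256 for c in range(185)] for r in range(70)]
tassels = tessellate(image)
```

Reducing 12950 pixels to 518 tile means is what keeps the network's input layer manageable.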
6. NETWORK ARCHITECTURE
hidden neurons, 1 output neuron, 24 samples of both good and worn tools, a learning rate
equal to 0.5 and a maximum acceptable error equal to 0.01. In order to facilitate the
network convergence a suitable procedure was adopted. More specifically, the network is
trained using only one set of the samples (5 at the beginning); then, once convergence has
occurred, a new image is added to the sample set, which is used for a new convergence, and so
on until all the images are considered. The convergence of the network was obtained within
20 minutes. Figure 8 shows as an example one of the input images and the weight
distribution between the input and the hidden layers after the training phase.
Figure 8 - An example of the grey levels for one of the sampled images
and the weight distribution for the trained network
To test the network's capacity for recognising the tool status, 28 further images were
tested. The answers of the network are shown in Table 1. Only in one case did the
network give a wrong answer, and in two cases it was not sure of the tool status.
Table 1 - Network answers for the test images:
image no.:      15    16    17    18    19    20    21
network output: 0.000 0.999 0.999 0.999 0.999 0.957 0.995
answer:         WORN  GOOD  GOOD  GOOD  GOOD  GOOD  GOOD

image no.:      23    24    25    26    27    28
network output: 0.703 0.999 0.999 0.999 0.903 0.979
answer:         GOOD  GOOD  GOOD  GOOD  GOOD  GOOD
8. CONCLUSIONS
The present work has shown how it is possible to identify the tool status by means of
off-line analysis using a suitably trained neural network. The architecture proposed here has
led to a small number of interconnections (weights) even though the input layer has a large
number of neurons. The procedure of gradually increasing the number of images in the training
set has also facilitated the network convergence, reducing the computation time. Once the
network is trained, the answer to a new image is given in real time.
ACKNOWLEDGEMENTS
This work has been made possible thanks to Italian CNR CT11 95.04109 funds.
BIBLIOGRAPHY
1. E. Ceretti, G. Maccarini, C. Giardini, A. Bugini: Tool Monitoring Systems in Milling
2.
3.
4.
5.
6.
7.
G.M. Lo Nostro
University of Genoa, Genoa, Italy
G.E. D'Errico and M. Bruno
C.N.R., Orbassano, Turin, Italy
Lifetime of high speed steel (HSS) tools for steel machining usually exhibits coefficients of
variation in the range 0.30-0.45 [1-3]. Nevertheless, analyses of the cutting performance of taps
point out that tool life coefficients of variation may have higher values, in the range 0.6-0.8.
Since the lifetime of taps is affected by the pre-drilling process, it is interesting to investigate whether
a relationship exists between the wear of drills and the life obtained by taps.
In order to contribute to a deeper insight, the present paper is focused on an array of
experiments designed and performed as follows.
A set of twist drills is partitioned into subsets such that the elements in a subset are
characterised by values of wear belonging to a given range. A lot of steel workpieces is also
subdivided into groups such that each group is composed of workpieces machined using
drills belonging to a single subset only. Each group is further machined, until tap breakage,
using nominally equal taps. During the experiments, observations are made in terms of the
number of holes obtained by a tap before its breakage.
A statistical treatment of the experimental results allows a relationship to be found between the wear
of twist drills and the life of taps. This result also translates into optimisation criteria for the
complete tapping process.
2. EXPERIMENTAL DETAILS
Discs (thickness 16 mm) of steel 39NiCrMo3 UNI 7845 (HV50 = 283-303) are used for
workpieces prepared according to UNI 10238/4. On each workpiece 53 through holes are
obtained by wet drilling operations at cutting speed vc = 20 m/min and feed f = 0.08
mm/rev.
The workpieces are subdivided into 36 groups. Each group contains 8 specimens denoted
by numerals i (i=1, ..., 8). Moreover, 36 twist drills UNI 5620 (diameter 6.80 mm) are used to
perform the drilling operations such that a single drill per group is used. Hence 424 holes are
drilled by each drill (53 per specimen). The mean value of flank wear VB is measured on the
drill after the 53 holes are completed on a single specimen. Therefore a basic experiment
consisting in drilling 8 specimens in the sequence from 1 to 8 is iterated 36 times (i.e. the
number of the specimen groups).
After the drilling operations, the above specimens are tapped using taps M8 (UNI-ISO
54519). Tapping is performed at a cutting speed vc = 10 m/min. The same machine is used
for both wet drilling and wet tapping operations.
Tapping experiments are performed as follows. The drilled workpieces are grouped into 8
groups such that each group j (j=1, ..., 8) contains the 36 specimens denoted by i=j. All the
36 specimens of each single group are tapped by a single new tap: the tap allocated to
group j is denoted by j (j=1, ..., 8). The total number of holes on a complete set of 36
specimens is 1908. A tap is used until a catastrophic failure occurs. If a tap happens to fail
before a set of specimens is completely machined, the worn tap is replaced by a new tap,
giving rise to a second run of experiments. Such a situation occurs when tapping specimens
belonging to the groups from no. 4 to no. 8. In the case of group no. 1, 7 extra specimens
(for a total of 43 specimens) are tapped before the tap breakage. Results of the tapping
experiments are summarised in Table 1 in terms of the number m of holes threaded during a tap
lifetime.
Table 1 - Number m of holes threaded during a tap lifetime.
tap no.:     1    2    3    4   5   6   7   8
m (1st run): 2120 1621 1101 735 653 645 517 566
m (2nd run): -    -    -    957 899 789 823 844
The individual values of flank wear measured on the drills are collected in sets, such that 8 sets of
36 data each are obtained. Each set is processed in order to check whether the relevant data fit a
normal distribution, using a Kolmogorov-Smirnov (K-S) test. Table 2 reports the mean
values of VB (mm) along with the standard deviations σ and the parameter D2 relevant to the
distributions obtained.
Table 2 - A synopsis of statistical data.
group no.:   1      2      3      4      5      6      7      8
mean VB, mm: 0.076  0.121  0.164  0.204  0.228  0.259  0.291  0.330
σ:           0.0152 0.0283 0.0299 0.0285 0.0295 0.0334 0.0341 0.0514
D2:          0.157  0.175  0.191  0.134  0.144  0.181  0.097  0.173
Since all D2 values are less than 0.27, the critical value at the confidence level 1-α = 99%,
the results of the K-S test are always positive.
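A minimal sketch of such a normality check (our illustration; the paper's exact K-S variant and critical value come from its own tables): the D statistic is the largest distance between the empirical CDF and a normal CDF fitted to the sample; the synthetic wear values below are assumptions.

```python
import random
from statistics import NormalDist, mean, stdev

def ks_statistic(data):
    """Kolmogorov-Smirnov D: maximum distance between the empirical CDF of
    the sample and a normal CDF with the sample's own mean and deviation."""
    n = len(data)
    nd = NormalDist(mean(data), stdev(data))
    return max(max((i + 1) / n - nd.cdf(x), nd.cdf(x) - i / n)
               for i, x in enumerate(sorted(data)))

# 36 synthetic flank wear values around 0.20 mm (illustrative only)
random.seed(1)
sample = [random.gauss(0.20, 0.03) for _ in range(36)]
d2 = ks_statistic(sample)    # accepted as normal if below the 0.27 critical value
```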
The mean flank wear VB (mm) observed on the drills may be plotted versus the number of
drilled holes n on the basis of a 3rd order polynomial [4]:
Equation (1) has a correlation coefficient r = 0.913, quite a high value given the sample size.
Table 3 - List of n25 values and correlation coefficients of the relevant 3rd order polynomials.
Drill no.: 1     2     3     4     5     6     7     8     9     10    11    12
n25:       327   286   355   295   325   341   270   363   338   231   212   270
r:         0.958 0.972 0.941 0.966 0.948 0.979 0.974 0.989 0.980 0.966 0.962 0.943

Drill no.: 13    14    15    16    17    18    19    20    21    22    23    24
n25:       332   297   274   247   376   301   301   308   310   231   270   257
r:         0.978 0.963 0.972 0.960 0.952 0.984 0.924 0.971 0.987 0.956 0.989 0.971

Drill no.: 25    26    27    28    29    30    31    32    33    34    35    36
n25:       196   291   243   197   242   283   290   292   305   246   327   278
r:         0.983 0.986 0.943 0.967 0.974 0.977 0.957 0.961 0.972 0.983 0.997 0.965
Following the above treatment, if the individual curves VB = VB(n) pertaining to each single
drill are also interpolated by 3rd order polynomials, the number of holes n25 obtained until
reaching VB = 0.25 mm can be estimated. The set of these 36 values of n25 fits a Weibull
distribution characterised by the parameters α = 7.33 and β = 305.0. The significance of
this distribution, according to a K-S test, results in a confidence level higher than 99%
(D2 below 0.27, the critical value). The estimated values of n25 are listed in Table 3 along with
the correlation coefficients.
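With the reported parameters α = 7.33 and β = 305.0, the fitted Weibull distribution can be evaluated directly; a small sketch (our illustration, not taken from the paper):

```python
import math

def weibull_cdf(x, alpha, beta):
    """P(n25 <= x) for a Weibull distribution with shape alpha, scale beta."""
    return 1.0 - math.exp(-((x / beta) ** alpha))

alpha, beta = 7.33, 305.0    # parameters reported for the n25 data
# probability that a drill reaches VB = 0.25 mm within 250 holes
p250 = weibull_cdf(250.0, alpha, beta)
# at x = beta the CDF of any Weibull equals 1 - 1/e (about 0.632)
```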
If the data of the tapping experiments are cross-correlated with the data of the drilling experiments, the
following equation can be obtained with a regression coefficient r = 0.897:

m = 21750.673 / n^0.596    (2)

A correlation between VB, mm (UNI ISO 3685) and m can be obtained by the following
equation, with a regression coefficient r = 0.896:

m = 233.884 / VB^0.852    (3)
Equations (2) and (3) have prediction limits for the individual values respectively given by
the following equations (4) and (5):
(4)
(5)
It is likely that the tap life decrease with increasing twist drill wear is due to the greater
work hardening that the worn drills produce on the hole walls. The hole diameter
reduction due to drill wear might also be supposed to be a cause of this situation, but such a
conjecture is discarded on the basis of the following procedure, performed (before tapping)
on 3 holes in each test specimen. A statistical analysis of measurements of pairs of
diameters, taken at 1/3 and 2/3 of the distance from the hole bottom respectively, points out
that the maximum difference between the mean diameters of the holes made by a new drill and by a
worn drill is only 0.0207 mm. This reduction corresponds to a removed-material
increase of less than 3%. Accordingly, the effect of hole diameter reduction on tap life is
deemed negligible, at a first level of approximation.
Figure 2: Number of tapped holes vs. the number of pre-drilled holes n (top) and vs. the drill's flank wear VB (bottom).
In order to verify the extent of work hardening on the hole walls, the 3 above-mentioned
pre-drilled holes of eight specimens are sectioned. In these sections, 40 Vickers
microhardness measurements (loading mass 50 g) are performed at a distance of 0.1 mm
from the wall. Four specimens in group no. 1 (i.e. drilled by new twist drills) are compared
with four corresponding specimens in group no. 8 (i.e. drilled by drills at the maximum wear
condition). The results of this investigation are summarised in Tables 4-5. Table 4 reports the
confidence intervals (at the level 1-α = 0.95) of the mean values of the microhardness
measurements.
Table 4 - Confidence intervals of microhardness mean values.
specimen | group no. 1 | group no. 8
#1       | 293.0-304.5 | 312.0-324.4
#2       | 333.5-348.9 | 356.2-377.8
#19      | 332.7-345.5 | 350.3-362.6
#20      | 314.0-339.0 | 347.3-367.6
Table 5 reports the confidence intervals (at the level 1-α = 0.999) of the differences
(according to UNI 6806) between the mean HV values of the specimens in groups no. 1 and no. 8.
Table 5 - Confidence intervals of mean microhardness differences.
group no. 1 vs. group no. 8 | confidence intervals
specimens pair #1           | 5.37-33.63
specimens pair #2           | 13.00-38.50
specimens pair #19          | 2.32-32.38
specimens pair #20          | 15.60-46.20
The life fraction f1 expended by a tap when tapping 1 out of the n holes pre-drilled by the same single
drill can be estimated by use of equation (6), where m is obtained from equation (2):

f1 = 1/m    (6)
In general terms, the results of the previous analysis can be applied to estimate the tool life ratio
f_x+y attained by a new tap in tapping a number y of holes that have been pre-drilled by a
drill used for a total of n = x+y holes. An estimate for f_x+y is given by:

f_x+y = Σ (k=x+1 to x+y) k^0.596 / 21750.673    (7a)

subject to

f_x+y ≤ 1    (7b)
In particular, when a new (or re-sharpened) drill is used to pre-drill a total of y holes, the
following conditions apply:

x = 0, and y = n = m    (7c)

f_y = Σ (k=1 to y) k^0.596 / 21750.673    (7d)
From equation (7d), the quantity y = 700 is obtained such that f_y = 1: this value of y
represents the maximum number of holes attainable by a tap during its whole lifetime.
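Equation (7d) can be evaluated numerically to recover this limit; a short sketch of the equations above (our illustration):

```python
TAP_CONST = 21750.673        # from equation (2): m = 21750.673 / n**0.596

def tap_life_fraction(y, x=0):
    """f_{x+y} of eq. (7a): fraction of tap life consumed in tapping holes
    x+1 ... x+y, each pre-drilled as the k-th hole of one and the same drill."""
    return sum(k ** 0.596 for k in range(x + 1, x + y + 1)) / TAP_CONST

# smallest y with f_y >= 1: the most holes one tap can thread (eq. 7d)
f, y = 0.0, 0
while f < 1.0:
    y += 1
    f += y ** 0.596 / TAP_CONST
```

The loop stops close to y = 700, consistent with the value reported above.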
An optimal value for y can be derived on the basis of economical considerations. If the costs
related to accidental tap breakage are not taken into account, the cost C for tapping y holes
already pre-drilled by a single drill can be estimated. Assuming that a single new tap is used
for tapping y holes, the following equation can be derived:
(8)
where:
cd is the cost of a new drill;
cr is the cost for re-sharpening a worn drill;
ct is the cost of a tap;
nr is the number of expected re-sharpening operations;
Ts is the time required for substitution of a worn drill;
M is the machine-tool cost per time unit;
fy is the exploited ratio of tap life.
It is worth noting that if a tap fails before the drill, then nr represents the number of re-sharpening
operations performed on the drill before the tap failure occurs: in this case (nr + 1)·fy = 1, and the
last term in equation (8) should be replaced by the quantity ct/(nr + 1).
If the costs are expressed in Italian liras and the time in minutes, the following cases can be developed by
introducing in equation (8) the values cr = 500, ct = 15000, Ts = 0.5, and:
(a) cd = 5000, M = 1000;
(b) cd = 3250, M = 1000;
(c) cd = 5000, M = 50;
(d) cd = 5000, M = 1500.
The relationships C = C(y) obtained by equation (8) are plotted in Figure 3, where the curves
relevant to cases (b) and (c) appear overlapped.
A minimum in a plot in Figure 3 indicates the number of holes beyond which it is convenient to
replace a worn twist drill. It should be noticed that the minima in such plots may become even
more important if a term related to the cost of the expected damage due to tool failure is added to
equation (8).
Figure 3: Tapping cost C vs. the number y of tapped holes for cases (a)-(d).
4. CONCLUSIONS
The investigation performed proposes a comprehensive treatment of the tool life variability
which affects the performance of tools for internal threading. In the light of the experimental
results presented and discussed, the following conclusions may be drawn.
1. A statistical model may be developed which points out a possible relationship between
the wear of twist drills and the life of taps.
2. It is likely that the tap life decrease with increasing twist drill wear is due to the
greater work hardening that the worn drills produce on the hole walls.
3. A minimum cost criterion provides the basis for an optimal replacement strategy for worn
drills.
REFERENCES
1. Wager, J.G., M.M. Barash: Study of the Distribution of the Life of HSS Tools, Transactions
of the ASME, 11 (1971), 1044-1050.
2. Ramalingam, S.: Tool-life Distribution. Part 2: Multiple Injury Tool-life Model, Journal of
Engineering for Industry, 8 (1977), 523-531.
3. Lo Nostro, G.M., E.P. Barlocco, P.M. Lonardo: Effetti della Riaffilatura di Punte Elicoidali in
HSS ed in ASP, nude o rivestite, La Meccanica Italiana, 12 (1987), 38-45.
4. De Vor, R.E., D.R. Anderson, W.J. Zdebelik: Tool Life Variation and its Influence on the
Development of Tool Life Models, Transactions of the ASME, 99 (1977), 578-584.
For reasons of rationalization of the process, the high costs of purchasing some of the newest
cutting fluids have to be compensated by a suitable increase in the production rate or by
smaller tool wear, and justified by the environmental effects: the new cutting fluids have
to be environment-friendly, with a possibility of recycling. The results of the research are
presented in diagrams and tables serving as guidelines, whereby the suitability and economy
of particular cutting fluids in thread cutting were assessed by the pondered values method.
(Figure: inputs and outputs of the manufacturing process - material, energy, water and knowledge enter; products, service, dust, waste water, special dust, hydrocarbons and waste for elimination leave.)
Cutting fluids tested: KUTEOL CSN 5 (Teol, Ljubljana) - semi-synthetic fluid; cutting oil; TEOLIN AIK (Teol, Ljubljana) - 20% emulsion.
(Figure: structure of an emulsion - oil phase, molecules of water and oil, surface-active molecules, additives in water, molecules of soluble material.)
Table 2. Chemical structure and mechanical properties of steel C45E4
Chemical structure %: C 0.42-0.50; Si 0.15-0.35; Mn 0.50-0.80; P max. 0.035; S max. 0.035
Flow stress N/mm²: min. 420; Tensile stress N/mm²: 670; elongation %: min. 16
Table 3. Chemical structure and mechanical properties of alloy AlMgSiPbBi (design. T8)
Chemical structure %: Si 1.11; Mg 1.00; Pb 0.53; Bi 0.56; Fe 0.30
Flow stress N/mm²: 365; Tensile stress N/mm²: 387; elongation %: 11.4
Another aim of this paper was to establish suitable combinations of cutting speeds and
cutting fluids from the point of view of machinability. The experiments were planned with a
one-factor plan by the Box-Wilson method inside the cutting speed limits. The mathematical model of the
machinability function, which is used for searching the response inside the experimental
space, is a potential function:

M = C·vc^p    (1)

where:
C, p are machinability parameters,
vc is the cutting speed as the control factor and
M is the moment as the process state function.
The moment is the output which results from measurements of the tapping process. The cutting speed varied from vc,min = 12 m/min to vc,max = 25 m/min. An SKF M10 tap was used with a shape for common holes of depth l = 10 mm, selected considering the plan of experiments and the optimal cutting geometry for the Al-alloy or steel.
4. MEASUREMENT RESULTS
Measurement results are shown directly in Tables 4 and 5 through the matrix plan and statistical values.
Table 4. Tapping into Al-alloy AlMgSiPbBi with 20% AIK emulsion

No.  x0   x1   vc (m/min)   M (Nm)   y = ln M   y^2
1    1    +1   25.13        3.740    1.319      1.74
2    1    -1   12.57        3.325    1.202      1.44
3    1     0   17.59        3.618    1.286      1.65
4    1     0   17.59        3.692    1.306      1.71
5    1     0   17.59        3.740    1.319      1.74
6    1     0   17.59        3.723    1.315      1.73

Sum of y^2 = 10.01; sum of zero-point y0^2 (rows 3-6) = 6.83
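The machinability parameters C and p of model (1) can be estimated from the Table 4 measurements by ordinary least squares on the logarithmic form ln M = ln C + p ln vc. A minimal sketch in pure Python (function and variable names are illustrative):

```python
import math

# Measured tapping moments from Table 4 (Al-alloy, 20% AIK emulsion):
# pairs of (cutting speed vc in m/min, moment M in Nm)
data = [(25.13, 3.740), (12.57, 3.325), (17.59, 3.618),
        (17.59, 3.692), (17.59, 3.740), (17.59, 3.723)]

def fit_power_model(points):
    """Least-squares fit of M = C * vc**p via ln M = ln C + p * ln vc."""
    xs = [math.log(v) for v, _ in points]
    ys = [math.log(m) for _, m in points]
    n = len(points)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    p = sxy / sxx                      # exponent of the power function
    c = math.exp(ybar - p * xbar)      # multiplicative constant
    return c, p

C, p = fit_power_model(data)
print(f"M = {C:.2f} * vc^{p:.3f}")   # exponent comes out close to 0.17
```

The same routine applies unchanged to the Table 5 data, giving the corresponding model for the other workpiece material.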
Table 5. Tapping into steel Ck 45 with 20% AIK emulsion

No.  x0   x1   vc (m/min)   M (Nm)    y = ln M   y^2
1    1    +1   12.57        10.787    2.378      5.65
2    1    -1    6.28         9.581    2.260      5.11
3    1     0    8.79        11.569    2.448      5.99
4    1     0    8.79        10.522    2.353      5.37
5    1     0    8.79        11.182    2.414      5.83
6    1     0    8.79        10.697    2.370      5.62

Sum of y^2 = 33.57; sum of zero-point y0^2 (rows 3-6) = 22.81
Equations which represent the relationship between the cutting force moment and the cutting speed are:

- for steel using a 20% Teolin AIK emulsion:

M = 7.385 * vc^0.17   (2)

- for steel using Kuteol CSN 5 cutting oil:

M = 10.34 * vc^-0.029   (3)

- for the Al-alloy using a 20% Teolin AIK emulsion:

M = 2.24 * vc^0.17   (4)

- for the Al-alloy using Kuteol CSN 5 cutting oil:

M = 5.323 * vc^-0.182   (5)
The model requires 24 test variations, carried out with four repetitions at the zero point and distributed according to the matrix. The coefficients of regression were calculated and the results verified by the Fisher criterion F_rLF, the dispersion of the experimental results' mean values (S_LF^2) being compared with the dispersion about the regression line (S_R^2):

F_rLF = S_LF^2 / S_R^2   (6)

The condition of suitability is fulfilled if F_rLF < F_t; F_t = 9.55 is obtained from the table for the given degrees of freedom. The calculation of F_rLF yielded the following results:
- Ck 45 / 20% emulsion
- Ck 45 / oil
- Al-alloy / 20% emulsion
- Al-alloy / oil
which show the adequacy of these models for each workpiece material / cutting fluid combination used.
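The adequacy check can be sketched generically: the residual sum of squares about the fitted line is split into a pure-error part (from the repeated zero-point runs) and a lack-of-fit part, and the ratio of their mean squares is compared with the tabulated F value. The function below is a generic sketch of such a lack-of-fit test, not the authors' exact computation; the regression coefficients in the example are illustrative:

```python
import math

def lack_of_fit_F(points, predict, n_params):
    """Split residuals about a fitted model into lack-of-fit and
    pure-error parts; return the mean-square ratio F = s_LF^2 / s_e^2."""
    groups = {}                       # replicate observations per x level
    for x, y in points:
        groups.setdefault(x, []).append(y)
    n = len(points)
    m = len(groups)                   # number of distinct x levels
    ss_pe = sum(sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
                for ys in groups.values())
    ss_res = sum((y - predict(x)) ** 2 for x, y in points)
    ss_lf = ss_res - ss_pe
    df_lf, df_pe = m - n_params, n - m
    return (ss_lf / df_lf) / (ss_pe / df_pe)

# Example: Table 4 in logarithmic coordinates (x = ln vc, y = ln M),
# with an illustrative fitted line y = 0.812 + 0.167 x
pts = [(math.log(25.13), 1.319), (math.log(12.57), 1.202),
       (math.log(17.59), 1.286), (math.log(17.59), 1.306),
       (math.log(17.59), 1.319), (math.log(17.59), 1.315)]
f = lack_of_fit_F(pts, lambda x: 0.812 + 0.167 * x, n_params=2)
```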
Diagrams following from equations (2) to (5) are shown below in Figs. 3 and 4.
[Fig. 3: moment M (Nm) vs. cutting speed vc (m/min), log-log scale; material: steel Ck 45, tool: tap M10 (type E 348), cutting speed 12 to 25 m/min.]
[Fig. 4: moment M (Nm) vs. cutting speed vc (m/min), log-log scale; material: AlMgSiPbBi, tool: tap M10 (type E 358), cutting speed 12 to 25 m/min.]
5. CONCLUSIONS
Besides the cutting speed, the cutting fluid has a big influence on the forces and moments when tapping threads into steel Ck 45 and Al-alloy AlMgSiPbBi. An appropriate combination of cutting speed and cutting fluid is important for the efficiency and required quality of machining.
[Table: recommended cutting speeds; used tap: M10]

Workpiece        | Cutting fluid                                      | Recommended vc (m/min)
steel Ck 45      | 20% emulsion Teolin AIK with water, Q = 0.5 l/min  | 12
steel Ck 45      | cutting oil Kuteol CSN 5, Q = 0.5 l/min            | 25
AlMgSiPbBi (T8)  | 20% emulsion Teolin AIK with water, Q = 0.5 l/min  | 6
AlMgSiPbBi (T8)  | cutting oil Kuteol CSN 5, Q = 0.5 l/min            | 12
We can see that the moment decreases with cutting speed when Kuteol CSN 5 oil is used, but the usage of this oil is limited because of its content of chlorinated hydrocarbons. We must therefore return to the primary goal of this paper, which states ecological effects as the main factor. The achievement of goals related to a clean environment and the improvement of working and living conditions definitely requires more costly production.
REFERENCES
1. E.M. Trent: Metal Cutting, Butterworths, London, 1984
2. K. Mijanovic: Research of quality of TiN coated taps, Master thesis, Faculty of Mechanical Engineering, Ljubljana, 1992
3. L. Morawska, N. Bofinger, M. Maroni: Indoor Air, an Integrated Approach, Elsevier, Oxford, 1995
4. G. Douglas, W.R. Herguth: Physical and Chemical Properties of Industrial Mineral Oils Affecting Lubrication, Lubrication Engineering, Vol. 52, No. 2, January 1996, 145-148
5. W.D. Hewson, G.K. Gerow: Development of New Metal Cutting Oils With Quantifiable Performance Characteristics, Lubrication Engineering, Vol. 52, No. 1, January 1996, 31-38
6. K. Heitges: Metallwerkerlehre, Band 2, Umformen mit Maschinen, Arbeiten an Werkzeugmaschinen, Lehrmittelverlag Wilhelm Hagemann, Dusseldorf, 1972
7. A. Salmic: Measurement of Tapping Forces, Diploma thesis, 1996, Ljubljana, Slovenia
8. J. Kopac, M. Sokovic and F. Cus: QM and costs optimization in machining of Al alloys, in: Total Quality, Creating Individual and Corporate Success, Institute of Directors, Excel Books, First Edition, New Delhi, 1996, 150-156
ABSTRACT: Two methods of intensification of the drilling process are described in the paper. The first one is the application of different deposited or thermochemically produced layers on twist drills made from high speed steel. Three coatings were applied: titanium carbide, titanium nitride and diamond-like thin layers. In addition, vacuum nitriding was applied. Tool life was investigated when cutting carbon steel, laminated glass fibre and hardboard. In the second part of the paper the possibility of minimising cutting forces and torque in drilling is discussed. The influence of different modifications of the chisel edge was investigated experimentally. The results of these investigations and the conclusions are presented in the paper.
1. INTRODUCTION
Drilling is one of the most popular manufacturing methods of making holes. Twist drills are the most common tools, used in mass production as well as in small batch production. They are made more and more frequently from cemented carbides but, mainly for economical reasons, drills made from high speed steel are still in use. This concerns, for instance, drilling holes of small diameters, where the risk of destruction of brittle cemented carbide drills is very high.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
Commercially available HSS twist drills have some disadvantages. Their wear resistance is often insufficient, which makes their life too short for economical application. This can be observed in cutting non-metallic materials, which may consist of highly abrasive particles or have poor thermal conductivity; in consequence, the intensity of tool wear is very high. Also, the geometry of commercial twist drills is not optimal for effective drilling. For example, the relatively long chisel edge makes cutting forces high, which increases the risk of tool breakage, increases tool wear and decreases drilling accuracy.
All of this makes investigations of HSS twist drills with modifications increasing their life and decreasing cutting forces important for the economical application of these tools.
2. INVESTIGATION OF TOOL LIFE OF THERMOCHEMICALLY TREATED TWIST
DRILLS
One of the ways of improving tool wear resistance is the application of thermochemical treatment to the working surfaces of the tools. Many methods of such treatment exist. This paper deals with methods in which the temperature of treatment is below the tempering temperature of high speed steels. In such cases there are no structural changes, thermal stresses, cutting edge deformations, etc. in the tool material.
Commercially available twist drills made of SW7M HSS, 3.8 mm in diameter, were investigated. The drills were divided into five groups. Four groups were additionally treated according to the information given in Table 1. The fifth group of drills remained without additional treatment, for comparison. For statistical reasons each group consisted of nine drills.
Table 1. Thermochemical treatment methods applied to twist drills

Kind of treatment          | Reference
vacuum nitriding           | [1]
TiC layer                  | [2]
diamond-like carbon layer  | [3]
TiN layer                  | [4]
Measurements of tool wear for each drill were performed after every 50 holes, and drilling was continued until the tool wear limit was reached. In these cutting conditions flank wear dominated, cf. Fig. 1, and was taken as the tool wear indicator. Measurements were carried out on a microscope using a special device.
The following results of experiments, carried out according to [5], were obtained:
- comparison of drill wear characteristics for different workmaterials (1)
- comparison of the tool wear intensity coefficient K_VB (2)
where lw is the length of drilling [mm].
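The explicit form of the wear intensity coefficient in (2) is garbled in the source. The K_VB values quoted in Figs. 2-4 are, however, consistent with the flank wear per unit drilling length scaled by 10^4, so the sketch below uses that assumed definition:

```python
def wear_intensity(vb_mm, lw_mm):
    """Assumed tool wear intensity coefficient: flank wear VB per
    drilling length lw, scaled by 1e4 (consistent with Figs. 2-4)."""
    return vb_mm / lw_mm * 1e4

# e.g. untreated drills in glass fibre laminate reach VB = 0.25 mm
# at a drilling length of about 800 mm
k = wear_intensity(0.25, 800)   # about 3.1, as quoted in Fig. 2
```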
Tool wear characteristics, averaged for each group of twist drills, in drilling glass fibre laminate are shown in Fig. 2. Flank wear VB = 0.25 mm was taken as the limiting value for these cutting conditions, because higher values cause burr appearance, which is unacceptable in this finishing operation.
Experimental average tool wear characteristics for hardboard drilling are shown in Fig. 3. Tests were stopped at lw = 800 mm because of loading of the drill flutes with workmaterial and the appearance of the tool wear trend shown in Fig. 3.
[Fig. 2: flank wear VB (mm, up to 0.25) vs. drilling length lw (mm, up to 800) for the five drill groups; legend: no treatment (K_VB = 3.1), TiC, vacuum nitriding, diamond-like, TiN; further K_VB values shown: 2.0, 1.8, 1.2.]
Fig. 2 Twist drill wear vs. drilling length for drilling glass fibre laminate
[Fig. 3: flank wear VB (mm, up to 0.7) vs. drilling length lw (mm, up to 240) for the five drill groups; legend: no treatment (K_VB = 27.3), TiC, vacuum nitriding, diamond-like, TiN.]
Fig. 3 Twist drill wear vs. drilling length for drilling hardboard
Experimental average tool wear characteristics for mild steel drilling are shown in Fig. 4. A tool wear limit VB_lim = 0.4 mm was assumed.
[Fig. 4: flank wear VB (mm, up to 0.8) vs. drilling length lw (mm, up to 120) for the five drill groups; K_VB values shown: 64.6 (no treatment), 46.0, 45.0, 33.0, 20.6; legend: no treatment, TiC, vacuum nitriding, diamond-like, TiN.]
Fig. 4 Twist drill wear vs. drilling length for drilling mild steel
The chisel edge modification coefficient is defined as:

CEMC = (1 - lo/ls) * 100%   (6)

where ls is the unmodified chisel edge length and lo is the chisel edge length after modification.
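Numerically, CEMC follows directly from the two measured lengths in (6); a minimal sketch (the example lengths are illustrative):

```python
def cemc(l_unmodified_mm, l_modified_mm):
    """Chisel edge modification coefficient, eq. (6):
    percentage reduction of the chisel edge length."""
    return (1.0 - l_modified_mm / l_unmodified_mm) * 100.0

# e.g. a chisel edge shortened from 2.0 mm to 1.2 mm gives CEMC of 40 %
c = cemc(2.0, 1.2)
```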
Investigations were performed on the stand shown in Fig. 6. Feed force Ff and torque M were measured by means of a 4-component Kistler 9272 dynamometer. The following cutting conditions were applied:
- workmaterial: carbon steel, 0.45% C, 210 HB,
- twist drill diameters: 7.4 mm, 13.5 mm, 21 mm,
- cutting speed: 0.4 m/s (constant),
- feed rates: 0.1 mm/rev, 0.15 mm/rev, 0.24 mm/rev,
- cutting fluid: emulsion.
[Fig. 5: chisel edge modification - material removed from the chisel edge of the drill.]
[Fig. 6: experimental stand - drill, workpiece, Kistler dynamometer and computer-based data acquisition.]
[Figs. 7 and 8: feed force Ff (N) and torque M (Nm) vs. CEMC (%) for 13.5 mm twist drills at f = 0.1, 0.15 and 0.24 mm/rev.]
[Figs. 9-12: feed force and torque vs. CEMC (%) for 7.4 mm and 21 mm twist drills at f = 0.1 and 0.15 mm/rev.]
The most detailed investigations were carried out for 13.5 mm twist drills, Figs. 7 and 8. It is clearly seen from these figures that chisel edge modification has a very significant influence on the feed force Ff, Fig. 7, while its influence on the torque is hardly observed, Fig. 8. The feed force decreases with increasing CEMC, and this decrease is more intensive in the range of smaller values of this coefficient (0%-40%). For higher values of CEMC the feed force decreases only slightly.
It is also seen from Fig. 7 that the feed rate has an influence on the feed force vs. CEMC changes. For a small feed rate (f = 0.1 mm/rev) the feed force decreases by 44% of its initial value, while for a feed rate of 0.24 mm/rev only a 24% decrease is observed.
For the torque, a maximum drop of 12% is observed, Fig. 8, over the whole range of investigated chisel edge modifications and feed rates for 13.5 mm twist drills.
Similar trends are observed for 7.4 mm, Figs. 9 and 10, and 21 mm, Figs. 11 and 12, twist drill diameters. The greatest changes of cutting forces are observed for the smaller feed rates. Also, a more intensive decrease of cutting forces is observed in the initial range of chisel edge modification.
CONCLUSIONS
1. Investigations presented in the first part of the paper showed the advantages of thermochemical treatment applied to twist drills to increase their functional properties - tool life and wear resistance - in drilling of selected workmaterials.
2. Investigations showed that titanium nitride and diamond-like layers deposited on the working surfaces of the drill are the most effective ones.
3. Chisel edge modification has an important influence on the feed force and a relatively small influence on the cutting torque.
4. A chisel edge modification of about 40% is sufficient to reduce cutting forces significantly. Further chisel edge reduction has practically no influence on cutting forces.
REFERENCES
1. Gawronski, Z., Has, Z.: Azotowanie prozniowe stali szybkotnacej metoda NITROVAC'79, Proceedings of II Conference on Surface Treatment, Czestochowa (Poland), 1993, 255-259.
2. Wendler, B., et al.: Creation of thin carbide layers on steel by means of an indirect method, Nukleonika, 39 (1994) 3, 119-126.
3. Mitura, S., et al.: Manufacturing of carbon coatings by RF dense plasma CVD method, Diamond and Related Materials, 4 (1995), 302-305.
4. Barbaszewski, T., et al.: Wytwarzanie warstw TiN przy wykorzystaniu silnopradowego wyladowania lukowego, Proceedings of Conference on Technology of Surface Layers, 1988, Rzeszow, 88-98.
ABSTRACT: A commercial cermet grade SPKN 1203 ED-TR is PVD coated. The following thin layers are deposited by an ion-plating technique: TiN, TiCN, TiAlN, and TiN+TiCN. Dry face milling experiments are performed with application to steel AISI-SAE 1045. A comparative performance evaluation of uncoated and coated inserts is provided on the basis of flank wear measurements.
1. INTRODUCTION
Cermets, like hardmetals, are composed of a fairly high amount of hard phases, namely Ti(C,N), bonded by a metallic binder that in most cases contains at least one of Co and Ni [1]. The carbonitride phase, usually alloyed with other carbides including WC, Mo2C,
TaC, NbC and VC, is responsible for the hardness and the abrasive wear resistance of the
materials. On the other hand, the metal binder represents a tough, ductile, thermally
conducting phase which helps in mitigating the inherently brittle nature of the ceramic
fraction and supplies the liquid phase required for the sintering process.
Cermet inserts for cutting applications can conveniently machine a variety of work materials
such as carbon steels, alloy steels, austenitic steels and grey cast iron [2-11].
A question of recent interest is to assess if the resistance of cermet cutting tools to wear
mechanisms may be improved by use of appropriate hard coatings.
Controversial conclusions are available in the relevant technical literature, since positive
results are obtained in some cutting conditions but unsatisfactory results may also be found,
especially with application to interrupted cutting processes [12-16].
The present paper deals with the influence of some Physical Vapour Deposition (PVD) coatings on the performance of a cermet tool when milling blocks of normalised carbon steel AISI-SAE 1045. The following thin layers are deposited by an ion-plating technique [17]: TiN, TiCN, TiAlN, and TiN+TiCN.
Dry face milling tests are performed on a vertical CNC machine tool. The cutting
performance of the uncoated and coated inserts is presented and compared in terms of tool
life obtained until reaching a threshold on mean flank wear.
2. EXPERIMENTAL CONDITIONS
The cermet used as substrate in the present work is a commercial square insert with a chamfered cutting edge preparation (SPKN 1203 ED-TR) for milling applications (ISO grade P25-40, M40). The insert micro-geometry is illustrated in Figure 1. This insert was the most reliable one found in a previous comparative work [4] among a set of similar commercial cermet inserts for milling applications. This cermet has the following percentage volume composition [3, 4]: 52.04 Ti(C,N), 9.23 Co, 5.11 Ni, 9.41 TaC, 18.40 WC, and 5.80 Mo2C, and a hardness of 91 HRA.
As regards PVD coatings, three mono-layers (thickness 3 um) of TiN, TiCN, and TiAlN and a multi-layer (thickness 6 um) of TiN+TiCN are deposited by an industrial ion-plating process [17].

Table I. Nominal characteristics of the coatings (industrial data).

                                      TiN    TiCN   TiAlN
thickness (um)                        1-4    1-4    1-4
hardness (HV 0.05)                    2300   3000   2700
friction coefficient                  0.4    0.4    0.4
operating temperature (C)             600    450    800
thermal expansion coeff. (10^-6/K)    9.4    9.4    -
The deposition temperature is around 480 C, which allows adhesion to the substrate while avoiding deformation and hardness decay. The nominal values of some main characteristics of these coatings are given in Table I, according to industrial data. Ti-based coatings are used as a thermal barrier against the temperature rise during the cutting process; they reduce friction between cutting edge and workpiece, chemical-physical interactions between insert and chip, crater and abrasive wear, and built-up-edge formation. Further, TiCN is valuable in application to difficult-to-cut materials, and TiAlN provides a raised resistance to high temperature and to oxidation.
As far as machining trials are concerned, dry face milling tests are performed on a vertical CNC machine tool (nominal power 28 kW). Workpieces of normalised carbon steel AISI-SAE 1045 (HB 190) were used in the form of blocks of 100 x 250 x 400 mm3. In such conditions, the length of a pass is L = 400 mm, and the time per pass T = 31.4 s.
Table II. Machining conditions.

Cutting parameters:
- cutting speed vc: 250 m/min
- feed fz: 0.20 mm/tooth
- cutter diameter: 130 mm
- entering angle Kr; rake angles
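The quoted pass time follows from standard face milling kinematics (spindle speed from vc and the cutter diameter, table feed from fz and the six inserts). A sketch with the nominal values above:

```python
import math

def pass_time_s(vc_m_min, D_mm, fz_mm, z, L_mm):
    """Time for one milling pass from standard kinematics:
    n = 1000*vc/(pi*D) [rev/min], vf = fz*z*n [mm/min], T = L/vf."""
    n = 1000.0 * vc_m_min / (math.pi * D_mm)   # spindle speed, rev/min
    vf = fz_mm * z * n                         # table feed, mm/min
    return L_mm / vf * 60.0                    # pass time, seconds

T = pass_time_s(vc_m_min=250, D_mm=130, fz_mm=0.20, z=6, L_mm=400)
# about 33 s with the nominal values; the paper quotes 31.4 s,
# presumably reflecting the actual approach/overtravel used
```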
Table III. Flank wear and tool life results.

Coating    | mean flank wear, mm (st. dev.) | tool life, min | tool life, % of uncoated
uncoated   | 0.218 (0.0259)                 | 19.7           | 100
TiN        | 0.189 (0.0107)                 | 23.9           | 121.5
TiCN       | 0.241 (0.0119)                 | 18.3           | 93.2
TiAlN      | 0.189 (0.0141)                 | 23.4           | 119.0
TiN+TiCN   | 0.193 (0.0252)                 | 23.3           | 118.3
Percentage variations of tool life obtained by use of the coatings are also shown in Table III, with reference to the uncoated insert. The evolution of maximum flank wear VB_B, measured every 3.15 minutes (i.e. every 6 cuts) during the milling operations, is plotted in Figure 2.
[Figure 2: maximum flank wear VB_B (mm, 0.050-0.275) vs. cutting time (min, 0-35) for the uncoated insert and the TiN, TiCN, TiAlN and TiN+TiCN coated inserts.]
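Tool life values such as those in Table III can be read from wear curves like those of Figure 2 by linearly interpolating the cutting time at which VB_B first reaches the chosen threshold. A sketch (the wear series below is illustrative, not measured data):

```python
def tool_life(times, wear, vb_limit):
    """Linearly interpolate the cutting time at which flank wear
    first reaches vb_limit; times/wear are matched ascending series."""
    for (t0, w0), (t1, w1) in zip(zip(times, wear),
                                  zip(times[1:], wear[1:])):
        if w0 < vb_limit <= w1:
            return t0 + (t1 - t0) * (vb_limit - w0) / (w1 - w0)
    return None  # threshold not reached within the test

# illustrative series sampled every 3.15 min, as in the experiments
t = [3.15 * i for i in range(1, 8)]
vb = [0.08, 0.11, 0.14, 0.17, 0.20, 0.23, 0.26]
life = tool_life(t, vb, 0.25)   # crossing between the 6th and 7th points
```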
Focusing on positive results: since the wear plots in Figure 2 relevant to TiN, TiAlN, and TiN+TiCN show similar behaviours, it can be deduced that the multi-layer does not improve the cutting performance of the mono-layers.
It is worth noting that the TiN and TiAlN mono-layers exhibit quite high operating temperatures, respectively 600 C and 800 C (Table I). In interrupted cutting, this characteristic is more important than hardness: actually TiCN has the highest hardness value (3000 HV 0.05), but also the lowest operating temperature, 450 C (Table I). In dry milling operations the cutting insert is subject to high cutting temperature, and to thermal shock as well as to mechanical impacts. Using a cermet substrate, the resistance to mechanical impacts (which relates to toughness) is controlled mainly by molybdenum carbide in the substrate composition (the volume percentage of Mo2C is 5.80% in the cermet used as substrate). The resistance to thermal shock is controlled both by the substrate (particularly by tantalum carbide and niobium carbide) and by the coating: in this case, the cermet substrate has 9.41% by volume of TaC, while NbC is not included in the composition.
4. CONCLUSIONS
In the light of the experimental results obtained in dry face milling tests using diverse PVD
coatings of a cermet insert, the following conclusions can be drawn.
1. Results of cutting performance are rather scattered: if the uncoated cermet insert is taken as the baseline, tool lives vary from -7% to +22%.
2. The best performing coating is TiN, while the worst one is TiCN.
3. Efficiency of PVD coatings of cermet inserts for interrupted cutting is related to the
possibility of raising the substrate's resistance to temperature-controlled wear
mechanisms.
REFERENCES
1.
2.
3.
4.
5. Destefani, J.D.: Take Another Look at Cermets, Tooling and Production, 59 (1994) 10, 59-62.
6. Doi, H.: Advanced TiC and TiC-TiN Base Cermets, Proceedings 1st International Conference on the Science of Hard Materials, Rhodes, 23-28 September 1984 (E.A. Almond, C.A. Brookes, R. Warren eds.), 489-523.
7. Porat, R., A. Ber: New Approach of Cutting Tool Materials, Annals of the CIRP, 39 (1990) 1, 71-75.
8. Thoors, H., H. Chandrasekaran, P. Olund: Study of Some Active Wear Mechanisms in a Titanium-based Cermet when Machining Steels, Wear, 162-164 (1993), 1-11.
9. Tonshoff, H.K., H.-G. Wobker, C. Cassel: Wear Characteristics of Cermet Cutting Tools, Annals of the CIRP, 43 (1994) 1, 89-92.
10. Wick, C.: Cermet Cutting Tools, Manufacturing Engineering, December (1987), 35-40.
11. D'Errico, G.E., E. Guglielmi: Anti-Wear Properties of Cermet Cutting Tools, presented at the International Conference on Tribology (Balkantrib'96), Thessaloniki (Greece), 4-8 June 1996.
12. Konig, W., R. Fritsch: Physically Vapor Deposited Coatings on Cermets: Performance and Wear Phenomena in Interrupted Cutting, Surface and Coatings Technology, 68-69 (1994), 747-754.
13. Novak, S., M.S. Sokovic, B. Navinsek, M. Komac, B. Pracek: On the Wear of TiN (PVD) Coated Cermet Cutting Tools, Proceedings International Conference on Advances in Materials and Processing Technologies AMPT'95, Dublin (Ireland), 8-12 August 1995, Vol. III (M.S.J. Hashmi ed.), 1414-1422.
14. D'Errico, G.E., R. Chiara, E. Guglielmi, F. Rabezzana: PVD Coatings of Cermet Inserts for Milling Applications, presented at the International Conference on Metallurgical Coatings and Thin Films ICMCTF'96, San Diego-CA (USA), April 22-26, 1996.
15. D'Errico, G.E., E. Guglielmi: Potential of Physical Vapour Deposited Coatings of a Cermet for Interrupted Cutting, presented at the 4th International Conference on Advances in Surface Engineering, Newcastle upon Tyne (UK), May 14-17, 1996.
16. D'Errico, G.E., R. Calzavarini, B. Vicenzi: Performance of Physical Vapour Deposited Coatings on a Cermet Insert in Turning Operations, presented at the 4th International Conference on Advances in Surface Engineering, Newcastle upon Tyne (UK), May 14-17, 1996.
17. Schulz, H., G. Faruffini: New PVD Coatings for Cutting Tools, Proceedings of International Conference on Innovative Metalcutting Processes and Materials (ICIM'91), Torino (Italy), October 2-4, 1991, 217-222.
N. Tomac
Narvik Institute of Technology, Norway
K. Tonnessen
SINTEF, Norway
F.O. Rasch
NTNU, Trondheim, Norway
1. INTRODUCTION
Magnesium alloys have found growing use in transportation applications because of their low weight combined with good dimensional stability, damping capacity, impact resistance and machinability.
Magnesium alloys can be machined rapidly and economically. Because of their hexagonal metallurgical microstructure, their machining characteristics are superior to those of other structural materials: tool life and the limiting rate of material removal are very high, cutting forces are low, the surface finish is very good and the chips are well broken. Magnesium dissipates heat rapidly, and it is therefore frequently machined without a cutting fluid.
The high thermal conductivity of magnesium and the low cutting power requirements result in a low temperature in the cutting zone and chip.
Literature indicates that magnesium is a pyrophoric material. Magnesium chips and fines will burn in air when their temperature approaches the melting point of magnesium (650 C). However, magnesium sheet, plate, bar, tube, and ingot can be heated to high temperatures without burning [1].
It has been described [2] that when the cutting speed increases to over 500 m/min, a build-up of material may occur on the flank surface of the tool. This phenomenon may lead to a high deterioration of surface finish, an increase in cutting forces and a higher fire hazard. When machining with a firmly adhered flank build-up (FBU), sparks and flashes are often observed. It has been reported that FBU formation is essentially a temperature-related phenomenon [3].
Since water-base cutting fluids have the best cooling capabilities, they have been used in cutting magnesium alloys at very high cutting speeds to reduce the temperature of the workpiece, tool and chip [4]. In addition, the coolant always plays a major role in keeping the machine tool at ambient temperature and in decreasing the dimensional errors resulting from thermal expansion. Water-base cutting fluids are cheaper and better coolants, and also easier to handle in comparison with other cutting fluids.
Despite these excellent properties, water-base cutting fluids are historically not recommended in the machining of magnesium, due to the fact that water reacts with magnesium to form hydrogen gas, which is flammable and explosive when mixed with air. Our research shows that magnesium alloys can be machined safely with appropriate water-base cutting fluids. The FBU problem was completely eliminated and the ignition risk presented by the magnesium chips was minimized. Frequent inspection of the machine tool area with a gas detector did not show any dangerous hydrogen level. The engineers concluded that the prohibition of magnesium machining with water-base cutting fluids is no longer justified [5].
In this paper, further attempts have been made to examine hydrogen formation from wet chips. The major outcome of the present work is the determination of a safe method for storage and transport of wet magnesium chips.
[Table: properties of hydrogen gas]

Flammability limits in air, % by volume: 4.0-75.0
Ignition temperature, C: 585
Flame temperature, C: 2045

The flammability limits express the dependence on the gas concentration: if the concentration of the gas in air is outside these limits, the mixture will not ignite and burn.
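A monitoring routine only needs to compare a measured hydrogen concentration against these limits; a minimal sketch with the thresholds from the table above:

```python
H2_LOWER_LIMIT = 4.0    # % by volume in air, lower flammability limit
H2_UPPER_LIMIT = 75.0   # % by volume in air, upper flammability limit

def is_flammable(h2_percent):
    """True if an air/hydrogen mixture of the given concentration
    lies inside the flammability limits."""
    return H2_LOWER_LIMIT <= h2_percent <= H2_UPPER_LIMIT

# a reading of 1.5 % H2 is below the lower limit; 30 % is well inside
safe = is_flammable(1.5)      # False
danger = is_flammable(30.0)   # True
```

In practice, alarms are of course set well below the lower flammability limit, as done by the gas detector described in the following sections.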
[Figure 1: schematic representation of the areas where hydrogen is generated - 1. cutting zone of the machine tool, 2. working area, 3. chip-transportation and storage system; ventilation indicated for each area.]
Several commercial Mg alloys were used as test materials. The magnesium chips used in this study were produced in continuous fine turning. The depth of cut of 0.4 mm and the feed per revolution of 0.1 mm were held constant. The chips formed when cutting magnesium are easily broken into short lengths with small curvatures. The hexagonal crystal structure of magnesium is mainly responsible for the low ductility and results in segmented chips. The ratio between the volume and weight of the chips is low, due to the very short chip lengths and the low density of magnesium.
For continuous monitoring of the levels of hydrogen gas generated, the EXOTOX 40 portable gas detector was used. It is also designed to register when the lower flammable limit is reached, such that attention is drawn to this fact. The development of the hydrogen concentration in a container completely filled with wet magnesium chips was measured with the experimental equipment illustrated in Figure 2.
[Figure 2: experimental set-up - chip container (30 dm3) completely filled with wet chips, gas monitor (EXOTOX 40) with display, window and ventilation hole.]
5. RESULTS
5.1 RATE OF HYDROGEN FORMATION
The hydrogen gas formation can be estimated from the test results given in Figure 3, which shows the percentage of hydrogen generated for distilled water and two cutting fluids. Of these three fluids, the cutting fluid with pH 9.5 gave the best results.
The rate of corrosion of magnesium, and consequently the generation of hydrogen gas in aqueous solutions, is severely affected by the hydrogen ion concentration, or pH value. The generation of hydrogen gas is several times higher at low pH values. Therefore the pH value of the cutting fluid used in the machining of magnesium should be chosen as high as possible. It is important to note, however, that to avoid health hazards the pH value should not exceed about 9.5.
[Figure 3: measured values of generated hydrogen concentration (%) vs. time (min, 0-60) in the closed container filled with wet magnesium chips, for distilled water and two cutting fluids; transportation time from cutting zone to container: 10 min.]
The generation of hydrogen from wet magnesium chips is also highly affected by the temperature of the applied cutting fluid. The hydrogen gas formation is about 25 times higher at 65 C than at 25 C. An increase in temperature affects the chemical composition and physical properties of the fluid. The composition of the cutting fluid is affected by changes in the solubility of the dissolved magnesium, which cause an increase of the pH value. Simultaneously, the volume of generated hydrogen rises [4].
182
~
..._.,
-.=....
~
c.
<!)
>
r;t)
~
1;11}
- .....
~
.,_..............,...,~t
::r:
~
~
;z:
~
~
Q.;
2
4
6
EXPOSURE TIME, t {hours)
oc
183
[Figure 5: effect of drying time of wet magnesium chips on hydrogen formation before storing in a slightly vented container. Container: 30 dm3; fluid: water, pH 7.0; chips, dry: 2210 g; remaining water: 1950 g; ventilation: hole 2 mm. Curves compare wet chips taken directly from the machine tool to the chip container with chips dried in air before testing.]
[Figure 6: hydrogen formation (%) from wet magnesium chips stored some days after machining, vs. exposure time (days, 0-12).]
6. CONCLUSION
The analysis of the experimental results presented above provides some useful information on the formation of hydrogen gas from wet magnesium chips. It has been found that:
1. A cutting fluid with a high pH-value must be selected. The generation of hydrogen
gas in aqueous solutions is strongly influenced by the hydrogen ion concentration or
pH-value. Cutting fluids with higher pH-values will generate less hydrogen gas.
2. To maintain safe machining operations and handling of wet chips, accumulation of
hydrogen gas should be prevented in the machine tools and the containers for chip
storage.
3. A 50 h exposure of newly generated chips to indoor or outdoor atmospheres in
isolated areas will allow the formation of a gray, protective film which terminates the
generation of hydrogen gas.
4. Chips should be stored in nonflammable, ventilated containers in isolated areas.
Three 25 mm holes at the top of the containers and barrels should be sufficient to
avoid dangerous hydrogen concentrations.
5. In addition to the available information on safety when handling magnesium,
the following additional safety procedures are suggested:
Separation of the adhered cutting fluid from wet magnesium chips with the aid of chip centrifuges.
Turnings and chips should be dried before being placed in containers.
Wet magnesium chips should be stored and transported in ventilated containers
and vehicles.
Further investigation is desirable in order to determine the design of the ventilation systems in relation to the quantity of metal and water. There is also a need for additional investigation of the influence of environmental temperature and ventilation conditions on hydrogen formation.
7. ACKNOWLEDGMENT
This research was sponsored by the Royal Norwegian Council for Scientific and Industrial Research, the Nordic Fund for Technology and Industrial Development, and Norsk Hydro.
8. REFERENCES
1.
2.
3.
4.
5.
6.
7.
8.
1. INTRODUCTION
In grinding, thermal damage of the workpiece must be reliably avoided. The result of the
grinding process strongly depends on the choice of input variables and on interferences;
during grinding these interferences are usually vibrations and temperature variations. Besides
the machine input variables and the initial state of the workpiece, the topography of the
grinding wheel is of substantial importance for the achievable machining result. The quality
of the workpiece is mainly determined by geometrical characteristics. Furthermore, the
integrity of the surface and sub-surface is of importance [1-3].
In industrial applications various techniques are used to check the surface integrity of
ground workpieces. Laboratory techniques like X-ray diffraction, hardness testing or
metallographic inspection are time-consuming and expensive, while etching tests are
critical with regard to environmental pollution. Visual tests and crack inspection are used
extremely rarely because of their low sensitivity. Direct quality control of ground
workpieces is possible with the micromagnetic analysing system. Based on the generation of
Bloch-wall motions in ferromagnetic materials, a Barkhausen-noise signal is evaluated to
describe the complex surface integrity state after grinding. White etching areas, annealing
zones and tensile stresses can be separated by using different quantities. Acoustic emission
(AE) systems installed in the workspace of the grinding machine are suitable for detecting
deviations from a perfect grinding process. The AE signal is influenced by the acoustic
behaviour of workpiece and grinding wheel. Furthermore, monitoring of the grinding
process is possible by evaluating forces and power consumption. Laser triangulation systems
mounted in the workspace make it possible to describe the micro- and macro-geometrical
wear state of grinding wheels.
changes, the position of the reflected and focused light on the PSD also changes. The bearing
ratio curve (b.r.c.) is calculated from surface roughness profiles recorded along the grinding
wheel circumference. The point of interest is the variation in slope of Abbott's b.r.c. For this
purpose a new German standard has been developed with special parameters for describing
the shape of the curve. These parameters divide the total peak-to-valley height of a surface
profile into three portions corresponding to three heights: RK, the kernel or core roughness;
RPK, the reduced peak height; and RVK, the reduced valley depth.
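The Rk family of parameters comes from the German standard DIN 4776 (later ISO 13565-2), which derives RK, RPK and RVK from the minimum-slope 40 % secant of the Abbott bearing ratio curve. A simplified sketch of that evaluation (filtering and the exact standard procedure omitted; illustrative only):

```python
# Simplified sketch of the Rk/Rpk/Rvk evaluation from a roughness profile
# (after DIN 4776 / ISO 13565-2): find the 40 % material-ratio secant of the
# Abbott bearing ratio curve with the smallest slope, extend it to 0 % and
# 100 %, and derive the three heights. Not the exact standard procedure.
import numpy as np

def rk_parameters(profile):
    """Return (Rk, Rpk, Rvk) from a 1-D array of profile heights."""
    z = np.sort(np.asarray(profile, float))[::-1]   # bearing ratio curve
    n = len(z)
    w = int(round(0.4 * n))                         # 40 % material-ratio window
    drops = z[:n - w] - z[w:]                       # height drop of each secant
    i = int(np.argmin(drops))                       # flattest 40 % secant
    slope = (z[i + w] - z[i]) / (w / n)             # height per unit ratio
    t_i = i / n
    z0 = z[i] - slope * t_i                         # secant extended to 0 %
    z1 = z[i] + slope * (1.0 - t_i)                 # secant extended to 100 %
    rk = z0 - z1                                    # core roughness
    mr1 = float(np.mean(z > z0))                    # material ratio at z0
    mr2 = float(np.mean(z > z1))                    # material ratio at z1
    a1 = float(np.sum(np.clip(z - z0, 0.0, None))) / n   # peak area
    a2 = float(np.sum(np.clip(z1 - z, 0.0, None))) / n   # valley area
    rpk = 2.0 * a1 / mr1 if mr1 > 0 else 0.0        # reduced peak height
    rvk = 2.0 * a2 / (1.0 - mr2) if mr2 < 1 else 0.0  # reduced valley depth
    return rk, rpk, rvk
```

A sawtooth profile (linear bearing curve) gives Rk equal to the full height with near-zero Rpk and Rvk, while deep isolated valleys show up only in Rvk.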
[Fig. 1: Laser triangulation sensor mounted in the grinding machine (laser, coolant, position
sensitive detector, grinding wheel); measured quantities: sensor signal and bearing ratio.]
magnetization directions are separated by Bloch walls. Two kinds of Bloch walls are
distinguished: the 180° walls with comparatively large wall thickness and the 90° walls
with small wall thickness. An exciting magnetic field causes Bloch-wall motions and
rotations. As a result, the total magnetization of the workpiece changes. With a small coil
of conductive wire at the surface of the workpiece, the change of magnetization due to the
Bloch-wall movements can be registered as an electrical pulse, fig. 2. This magnetization is
not a continuous process; rather, the Bloch walls move in single sudden jumps. Prof.
Barkhausen was the first to observe this phenomenon in 1919. In his honour, the signal
obtained from the sum of all movements is called Barkhausen noise. The magnetization
process is characterized by the well-known hysteresis loop. Irreversible Bloch-wall
motions lead to a remaining magnetization at zero field intensity H, called the remanence Br.
To eliminate this remanence, a certain field intensity, the coercivity Hc, must be
applied [4,5].
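As an illustration of the mechanism (a toy model, not the paper's measurement chain): wall segments pinned at random threshold fields release in sudden jumps as the exciting field is swept, so the magnetization is a staircase whose increments are the Barkhausen noise a pickup coil would register as pulses:

```python
# Toy Barkhausen model: Bloch-wall segments pinned at random threshold
# fields release in sudden jumps as the external field H is swept upward,
# so M(H) is a staircase; the jump sizes are the "Barkhausen noise".
import random

random.seed(1)
N = 500                                  # pinned wall segments
thresholds = sorted(random.uniform(0.0, 1.0) for _ in range(N))

def magnetize(h_values):
    """Return (magnetization curve, jump sizes) for an increasing H sweep."""
    released = 0
    m_curve, jumps = [], []
    for h in h_values:
        newly = sum(1 for t in thresholds[released:] if t <= h)
        released += newly
        jumps.append(newly)              # Barkhausen pulse amplitude
        m_curve.append(released / N)     # normalized magnetization
    return m_curve, jumps

h_sweep = [i / 100.0 for i in range(0, 101)]
m, noise = magnetize(h_sweep)
```

The staircase character of `m` and the irregular pulse train `noise` mirror the discontinuous magnetization described above.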
[Fig. 2: Magnetic excitation of the workpiece and the resulting hysteresis loop B(H);
irreversible Barkhausen jumps, influenced by compression and tension, produce the magnetic
Barkhausen noise registered by a receiver at the workpiece.]
workpiece influences the Barkhausen noise. Fig. 3 shows a comparison of micromagnetic, X-ray analysis and hardness testing.
[Fig. 3: Micromagnetic quantities, X-ray residual stress measurement and hardness as
functions of thermal load.]
the measurement reacts with a delay, because the thickness of this white layer has to reach
a specific value; otherwise the diamond penetrates through the rehardening zone.
4. INDUSTRIAL APPLICATION
Many investigations have been made adapting the sensor systems described above for
industrial applications and testing their potential. In particular, the laser triangulation sensor
for monitoring grinding wheel wear and the micromagnetic analysing system for
monitoring ground workpieces have been applied with high efficiency to avoid thermal
damage.
[Fig. 4: Micromagnetic measurement (measuring time = 1 min) compared with X-ray
measurement and the result of the etching test (thermal damage). Grinding conditions:
Q'w = 1.8 mm³/(mm·s), vc = 50 m/s, ds = 426 mm, q = 50, ts = 1.2 s; workpiece:
case-hardened steel 16 Mn Cr 5, dw = 77 mm.]
[Fig. 5: Reduced peak height RPK (min/max of traces 1-6, and trace 7) versus number of
ground bearing rings. Grinding conditions: Q'w = 1.8 mm³/(mm·s), vc = 50 m/s,
ds = 426 mm, q = 50, ts = 1.2 s; workpiece: case-hardened steel 16 Mn Cr 5, dw = 77 mm.]
Furthermore, optical monitoring of the grinding wheel was carried out during the
investigations described above. Fig. 5 presents the microscopic parameter reduced peak
height RPK and the result of the workpiece etching test as functions of grinding wheel
wear. A distinction is made between traces one to six and trace seven, which is located near
an edge of the grinding wheel. Regarding the low increase of RPK on traces one to six, no
significant change can be recognized. On the other hand, RPK measured on trace seven
increases significantly. It is obvious that wheel wear was not uniform over the whole width
of the grinding wheel. The corresponding trace on the workpiece was proved by the etching
test to be thermally damaged. Evidently the measured increase of the reduced peak height
RPK is linked to thermal damage of the bearing rings. By comparison with a simulation, the
increase of RPK can be interpreted as grain obstruction. Monitoring the grinding wheel
topography makes it possible to detect wheel wear and to find the optimum time for
dressing or for changing the grinding wheel.
ABSTRACT:
This study investigated the truing of vitrified-bond CBN grinding wheels. A rotary cup-type diamond
truer was used to condition grinding wheels under positive as well as negative dressing speed ratios.
Cutting speeds between 30 and 120 m/s were used in an external cylindrical grinding mode. The
grinding wheel surface was recorded during all stages of truing and grinding using 2D surface
topography measurements. The performance of each truing condition on wheel topography, grinding
behavior, and on workpiece topography and integrity were compared. Dressing speed ratio has been
found to be a suitable parameter for appropriate CBN grinding wheel preparation.
1. INTRODUCTION
Grinding is perhaps the most popular finishing operation for engineering materials which
require smooth surfaces and fine tolerances. However, in order to consolidate and extend the
position of grinding technology as a quality-defining finishing method, improvements in the
efficiency of grinding processes have to be made. Powerful machinery and, especially,
extremely wear-resistant superabrasives such as diamond and cubic boron nitride (CBN)
are available and are opening up economic advantages.
CBN has excellent thermal stability and thermal conductivity, and its extreme hardness and
wear resistance give it the potential for high-performance grinding of ferrous materials,
hardened steels, and alloys of hardness Rc 50 and higher. Four basic types of bond are used
for CBN grinding wheels: resinoid, vitreous, metal, and electroplated. The vitreous bond
material, also known as glass or ceramic bond material, is regarded as the most versatile of
bonds. It provides high bonding strength, while allowing modifications to the strength and
chip clearance capability by altering the wheel porosity and structure. But the most
important advantage of vitrified-bond CBN grinding wheels is the ease of conditioning.
The conditioning of a wheel surface is a process comprised of three stages: truing,
sharpening and cleaning. Truing establishes roundness and concentricity to the spindle axis
and produces the desired grinding wheel profile. Sharpening creates the cutting ability and
finally, cleaning removes chip, grit, and bond residues from the pores of the grinding wheel
[1]. The variations in the conditioning process can lead to different grinding behavior even
with the same grinding parameters [2]. By selecting an appropriate conditioning process and
by varying the conditioning parameters, the grinding wheel topography can be adapted to
grinding process requirements. Higher removal rates require sharp grit cutting edges and,
especially, adequate chip space, whereas good workpiece surface qualities require a larger
number of grit cutting edges [3]. The grinding wheel behavior during the machining process
is influenced strongly by the grinding wheel topography, which is dictated by the
conditioning operations, as shown by Pahlitzsch for conventional abrasives [4].
Therefore, an investigation was conducted to examine the link between grinding wheel
topography and grinding performance, as an initial step towards the development of an
in-process grinding wheel topography monitoring system for adaptive process control.
2. EXPERIMENTAL PROCEDURE
Plunge grinding experiments were performed on an external cylindrical grinder. Three
vitrified-bond CBN grinding wheels with the following specifications were used:
CB 00080 M 200 VN1 (diameter ds = 100 mm, width bs = 13 mm),
CB 00170 M 200 VN1 (diameter ds = 100 mm, width bs = 13 mm),
B 126-49-R0200-110-BI (diameter ds = 125 mm, width bs = 10 mm)
Prior to grinding, conditioning of the grinding wheel surface was conducted with a rotary
cup-type diamond truer (diameter dd = 50.8 mm, contact width bd = 1.75 mm), which was
mounted on a liquid-cooled high-frequency spindle. The orientation of the truing device was
set up for cross-axis truing. By turning the truer spindle through 20°, the circumferential
velocities in the wheel-truer contact could be uni-directional (downdressing, qd > 0) or
counter-directional (updressing, qd < 0), see Figure 1. Three dressing speed ratios were
used to true the grinding wheels:
uni-directional, downdressing: qd = +0.75,
uni-directional, downdressing: qd = +0.35,
counter-directional, updressing: qd = -0.5.
The traverse truing lead of the grinding wheel across the truing cup was set at
fad = 0.23 mm/U, and a radial truing compensation of aed = 1.27 µm was incremented every pass.
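The kinematics behind the dressing speed ratio can be sketched numerically: qd is the signed ratio of truer to wheel circumferential speed in the contact (positive for uni-directional downdressing, negative for counter-directional updressing). The wheel surface speed of 30 m/s used below is an assumed value for illustration, not stated in the text:

```python
# Sketch of the dressing-speed-ratio kinematics: given a target qd, the
# wheel surface speed during truing (assumed 30 m/s here) and the cup
# truer diameter dd = 50.8 mm, compute the required truer spindle speed.
import math

def truer_speed_rpm(qd, wheel_surface_speed, truer_diameter_m):
    """Truer spindle speed (rpm) for a given dressing speed ratio qd."""
    v_truer = abs(qd) * wheel_surface_speed          # m/s at the truer rim
    return v_truer * 60.0 / (math.pi * truer_diameter_m)

# the three ratios used in the experiments:
for qd in (+0.75, +0.35, -0.5):
    rpm = truer_speed_rpm(qd, 30.0, 0.0508)
    print(f"qd = {qd:+.2f}: {rpm:7.0f} rpm")
```

The sign of qd only selects the rotation sense; the magnitude sets the spindle speed.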
[Figure 1: Grinding wheel, workpiece and truer arrangement for down-dressing: rotary
cup-type diamond truer, bolt, quill, spacer, workpiece, wheel spindle and CBN grinding
wheel.]
The workpieces had a diameter of dw = 92.1 mm and a width of bw = 6.35 mm. A total of
23.5 mm was machined off the diameter in four separate stages: first a self-sharpening grind
with a diametric stock removal of 0.5 mm, then a grinding sequence with a diametric stock
removal of 3 mm, followed by two diametric stock removals of 10 mm each. The rotations
of workpiece and grinding wheel were in opposite directions, with a fixed workpiece speed
of vw = 2 m/s. The specific material removal rate was fixed at Q'w = 2.5 mm³/(mm·s),
without a sparkout at the end of each grind, for the 84 grinding tests outlined in Table 1.
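As a rough cross-check under stated assumptions (simple ring-volume geometry, input values taken from this section), the specific material removal V'w accumulated in a stage and the corresponding grinding time at the fixed Q'w can be estimated:

```python
# Back-of-envelope estimate: specific material removal V'w (mm^3 per mm of
# grinding width) for a diametric stock removal on a cylindrical workpiece,
# and the grinding time at a fixed specific removal rate Q'w.
# Simple geometry only; kinematic details of the plunge are ignored.
import math

def specific_removal(dw_mm, diametric_stock_mm):
    """V'w in mm^3/mm for removing diametric_stock_mm off diameter dw_mm."""
    d_final = dw_mm - diametric_stock_mm
    return math.pi / 4.0 * (dw_mm ** 2 - d_final ** 2)

def grind_time_s(v_w_specific, qw_specific):
    """Grinding time (s) at specific removal rate qw_specific mm^3/(mm s)."""
    return v_w_specific / qw_specific

# first (self-sharpening) stage: 0.5 mm off a 92.1 mm diameter at 2.5 mm^3/(mm s)
vw1 = specific_removal(92.1, 0.5)
print(round(vw1, 1), "mm^3/mm,", round(grind_time_s(vw1, 2.5), 1), "s")
```

The full 23.5 mm stock corresponds to a cumulative V'w just under 3000 mm³/mm, consistent with the V'w scale reported for the force plots.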
Surface topography measurements were performed on both the grinding wheel and the
associated workpiece. The measurements were taken in an axial direction using a contact
stylus instrument after truing (on the wheel) and each grinding stage (on wheel and
workpiece). During each grinding test measurements of grinding power, normal force, and
tangential force were taken. The grinding ratio, the ratio of removed workpiece material to
grinding wheel wear volume, was measured after each test series, and microhardness
measurements were conducted to check for thermal damage of the ground workpiece
surfaces.
Table 1. Grinding experiments

Wheel                   Cutting    down dressing   down dressing   up dressing
                        speed vc   qd = +0.75      qd = +0.35      qd = -0.5
CB 00080 M 200 VN1      30 m/s     Test 1-4        Test 9-12       Test 17-20
                        75 m/s     Test 5-8        Test 13-16      Test 21-24
CB 00170 M 200 VN1      30 m/s     Test 25-28      Test 33-36      Test 41-44
                        75 m/s     Test 29-32      Test 37-40      Test 45-48
B 126-49-R0200-110-BI   30 m/s     Test 49-52      Test 61-64      Test 73-76
                        75 m/s     Test 53-56      Test 65-68      Test 77-80
                        120 m/s    Test 57-60      Test 69-72      Test 81-84
3. RESULTS
This experimental work was conducted to examine the influence of vitrified-bond CBN
grinding wheel topography on grinding behavior and on ground workpiece topography and
integrity.
In Figure 2 the specific normal force, F'n, is plotted versus specific material removal, V'w,
for cutting speeds vc = 30 m/s and vc = 75 m/s. Initial normal forces were high, as the wheels
were used directly after truing without a separate sharpening process, so that the required
adequate chip space between the CBN grits did not yet exist. Further grinding gradually
opened up the chip space and grinding forces reduced accordingly.
[Figure 2: Specific normal force F'n versus specific material removal V'w for down-dressing
(qd = +0.75, qd = +0.35) and up-dressing (qd = -0.5) at cutting speeds vc = 30 m/s and
vc = 75 m/s.]
The influence of dressing speed ratio, qd, on specific normal grinding force, F'n, was well
pronounced, see Figure 2. Significantly higher normal forces were measured for the
updressed wheels, leading to dull grinding due to the smooth wheel surface generated during
truing. The circumferential velocities of wheel and truer were directionally opposite during
truing and therefore led to increased shearing and decreased splitting of the CBN grits.
Bonding material covered the CBN grains and no cutting edges protruded out of the matrix.
The high normal forces for grinding with updressed wheels can result in the danger of
thermal workpiece damage. The downdressed wheels, especially those with a dressing speed
ratio of qd = +0.75, machined the workpieces with lower forces. The truing operations with a
positive dressing speed ratio subjected the CBN grits to higher compressive loads and
increased the likelihood of grit and bond material splitting. The rougher grinding wheel
surfaces resulted in higher specific material removal rates Q'w, without danger of thermal
workpiece damage.
During the tests it was observed that increasing cutting speeds led to lower grinding forces.
This was expected, since the average ground chip thickness decreased for higher cutting
speeds and the work done by each single cutting edge became smaller.
As stated earlier, surface roughness measurements were taken on the ground workpieces in
the axial direction. Examination of surface finish, Ra, showed influences of grinding wheel
grit size, cutting speed, vc, specific material removal, V'w, and dressing speed ratio, qd, on
workpiece roughness values. A graphical representation is given in Figure 3.
[Figure 3: Workpiece surface roughness Ra versus specific material removal V'w for the
three dressing speed ratios at cutting speeds vc = 30 m/s and vc = 75 m/s.]
The surface finish, Ra, of workpieces ground with low cutting speeds, vc, deteriorated with
increasing specific material removal, V'w. An explanation might be that low speed grinding
leads to an increase in chip thickness, consequently subjecting the abrasive grits to higher
loads. The high loads crush the vitrified bond material, forcing the loosely bonded grits to be
released. This rough grinding wheel surface produced the worst workpiece finish values. In
contrast, the higher cutting speed of vc = 75 m/s led to better workpiece surface finish
values. The values remained at the same level throughout the whole experiment but were
significantly affected by the dressing speed ratio, qd. The workpiece Ra values were always
best for samples ground with updressed wheels. The smooth grinding wheel surface
generated during truing with a negative dressing speed ratio, which caused the highest
normal force values earlier (see Figure 2), machined the best workpiece surfaces.
Figure 3 shows that there is a relationship between the dressing speed ratio, qd, and the
ground workpiece surface roughness. In addition, Figure 2 clearly shows the influence of
dressing speed ratio, qd, on normal grinding force, F'n, i.e. wheel sharpness. Tests were
conducted to find a correlation between grinding wheel topography and produced workpiece
topography. But the examined topography values, including wheel surface finish, Ra, average
peak-to-valley height, Rz, wheel waviness, Wt, and bearing ratio, tp, unfortunately did not
lead to correlations of sufficient confidence.
[Figure 4: G-ratio for down-dressing (qd = +0.75, qd = +0.35) and up-dressing (qd = -0.5)
at cutting speeds vc = 30 m/s and vc = 75 m/s.]
After the finishing grind of each experiment a workpiece was lightly ground to imprint both
the worn and the unworn part of the grinding wheel surface in the workpiece. This wear
sample was traced by the contact stylus instrument to measure the radial wear depth of the
wheel, from which the final G-ratio was calculated. Figure 4 shows the results. A
considerable improvement in tool life could be noted for the updressed wheels when used
with vc = 75 m/s. Conversely, the downdressed wheels wore up to twice as fast as the
updressed wheels. For the low cutting speed, vc = 30 m/s, the influence of dressing speed
ratio, qd, could be neglected. The higher loads caused by larger average chip thicknesses
seemed to be dominant, giving nearly constant G-ratios.
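The G-ratio evaluation described above can be sketched as follows; all numeric inputs are illustrative assumptions, not measurements from the paper:

```python
# Sketch of the G-ratio evaluation: the radial wear depth traced on the
# wear sample gives the wheel wear volume, which is compared with the
# workpiece volume removed. All numbers below are illustrative assumptions.
import math

def g_ratio(dw_mm, stock_diametric_mm, part_width_mm, n_parts,
            ds_mm, radial_wear_mm, wheel_contact_width_mm):
    """G = workpiece volume removed / grinding wheel volume worn."""
    v_work = (math.pi / 4.0
              * (dw_mm ** 2 - (dw_mm - stock_diametric_mm) ** 2)
              * part_width_mm * n_parts)
    v_wheel = math.pi * ds_mm * radial_wear_mm * wheel_contact_width_mm
    return v_work / v_wheel

# e.g. 4 parts of 92.1 mm diameter and 6.35 mm width, 23.5 mm stock each,
# against a 125 mm wheel showing 20 um radial wear over its 6.35 mm contact:
g = g_ratio(92.1, 23.5, 6.35, 4, 125.0, 0.020, 6.35)
```

G-ratios in the thousands, as this assumed case yields, are typical of the low wear that makes vitrified CBN economical despite the tool cost.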
4. CONCLUSIONS
This research has shown that the dressing speed ratio, qd, a parameter describing the ratio of
circumferential velocities in the contact zone between grinding wheel and truing tool,
influences the grinding behavior and tool life of vitrified-bond CBN grinding wheels as well
as the produced workpiece topography. These influences are as follows:
- Truing with a negative dressing speed ratio (updressing) generates a grinding wheel
surface which produces the best workpiece roughness, but grinds with high forces and
therefore limits grinding performance.
- CBN wheels trued with positive dressing speed ratios (downdressing) are sharp enough
to grind with higher specific material removal rates, Q'w, as long as the workpiece
roughness is acceptable.
- The influence of dressing speed ratio qd on ground workpiece topography increases for
higher cutting speeds vc, as grinding conditions lead to smaller average chip thicknesses.
- Grinding ratio measurements show that CBN wheels trued with a negative dressing
speed ratio qd (updressing) experience significantly less wear.
- The examination of grinding wheel topography after truing and during different stages
of grinding could not establish a reliable correlation between wheel and workpiece
topography, although a correlation evidently exists. Further investigation is therefore
necessary to find the appropriate wheel topography parameter which influences the
produced workpiece topography, in order to establish an in-process grinding wheel
topography monitoring system.
Vitrified-bond CBN grinding wheels are expensive tools, but they have a high performance
potential. The appropriate adjustment of conditioning parameters such as the dressing speed
ratio, qd, guarantees efficient use of these wheels.
ACKNOWLEDGMENTS
The authors would like to express thanks to Universal Superabrasives and Noritake for
providing the grinding wheels, to the Precise Corporation for providing the truer spindle,
and to Cincinnati Milacron for supplying grinding fluid. Thanks are also due to Dr. Richard
P. Lindsay for his advice during this project.
REFERENCES
1. Salje, E., Harbs, U.: Wirkungsweise und Anwendungen von Konditionierverfahren,
Annals of the CIRP, 39 (1990) 1, pp. 337-340
2. Rowe, W.B., Chen, X., Mills, B.: Towards an Adaptive Strategy for Dressing in Grinding
Operations, Proceedings of the 31st International MATADOR Conference, 1995
3. Klocke, F., Konig, W.: Appropriate Conditioning Strategies Increase the Performance
Capabilities of Vitrified-Bond CBN Grinding Wheels, Annals of the CIRP, 44 (1995),
pp. 305-310
4. Pahlitzsch, G., Appun, J.: Einfluss der Abrichtbedingungen auf Schleifvorgang und
Schleifergebnis beim Rundschleifen, Werkstattstechnik und Maschinenbau, 43 (1953) 9,
pp. 369-403
Prediction and control of grinding forces in this complex manufacturing method is a key
problem for making high quality gears effectively. Also, the possibility of calculating
grinding forces on the basis of grinding conditions is a basic problem in modelling of the
generating gear grinding process [1].
grinding wheel curvature, etc. also change, which makes calculations of grinding forces
much more difficult than in surface or cylindrical grinding. In fig. 3 an example of the
changes of cross-section area is shown for a tooth profile shaped in 10 generating strokes.
[Fig. 3 and Fig. 4: Cross-section area of the layer machined in subsequent generating
strokes, and measured grinding forces versus time.]
It is visible from this figure that in some initial generating strokes (five in this case)
relatively higher grinding forces are observed, which is due to additional material removal in
the root fillet area.
To calculate the tangential grinding force in each generating stroke of the grinding wheel,
the well-known equation (e.g. [2]) was adapted:

F't = F0 · heq^f    (1)

where F't is the tangential grinding force per unit grinding width, heq is the equivalent chip
thickness, and F0 and f are constants dependent on the work material.
Assuming that the grinding conditions in a particular stroke are similar to surface grinding
with variable grinding depth, and that the workspeed vw is constant, the following equation
was developed to calculate the tangential grinding force in generating stroke "i":

F_ti = F0 · vw^f · ∫ from 0 to bDi of ai(x)^f · vs(x)^(-f) dx    (2)

where bDi is the grinding width (variable in subsequent strokes), ai is the grinding depth
(variable over the grinding width), and vs is the wheelspeed (variable over the grinding
width due to the conical shape of the grinding wheel).
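The stroke force integral can be evaluated numerically; below is a minimal sketch using the trapezoidal rule, with assumed (illustrative) depth and wheel-speed profiles and the F0, f values for carbon steel from Table 1. The profiles are not data from the paper:

```python
# Minimal numerical sketch of the stroke-force model: tangential force in
# one generating stroke as F0 * vw^f * integral over the grinding width of
# a_i(x)^f * v_s(x)^(-f) dx. Depth and wheel-speed profiles are illustrative
# assumptions; F0 = 17, f = 0.3 are the carbon-steel constants.
import numpy as np

def stroke_force(f0, f, vw, x, depth, wheel_speed):
    """Trapezoidal-rule evaluation of the stroke force integral."""
    integrand = depth ** f * wheel_speed ** (-f)
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(x)) / 2.0)
    return f0 * vw ** f * integral

# illustrative stroke: 10 mm grinding width, parabolic depth profile up to
# 0.05 mm, wheel speed varying 25..27.5 m/s along the conical wheel profile
x = np.linspace(0.0, 10.0, 200)                    # position across width, mm
depth = 0.05 * (1.0 - ((x - 5.0) / 5.0) ** 2)      # mm
vs = np.linspace(25.0, 27.5, 200)                  # m/s
ft = stroke_force(17.0, 0.3, 0.16, x, depth, vs)   # N, order of 10 N
```

Forces of the order of 10 N for such a stroke are consistent with the measured per-stroke values reported below.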
To calculate grinding forces using equation (2) computer software was prepared and experiments were carried out to compare theoretical and measured values of grinding forces.
3. EXPERIMENTS
Experiments were carried out under the following conditions:
- workmaterials: carbon steel 0.55 %C, 650 HV; alloy steel 40H (0.4 %C, 0.5-0.8 %Mn,
0.8-1.1 %Cr), 600 HV;
- gear parameters: m = 5; z = 20 and 30; α = 20°;
- wheel: 99A80M8V, Φmax = 0.330 m;
- tangential feed vft: 0.165 and 0.330 m·min⁻¹;
- workspeed vw: 0.08-0.24 m·s⁻¹;
- grinding allowance ae: 5-135 µm;
- rotational speed of grinding wheel ns: 27.5 s⁻¹ (const.)
In each test the tangential grinding forces in subsequent generating strokes were measured
with a KISTLER 9272 dynamometer.
Preliminary tests were carried out to evaluate the model parameters - the constants F0 and f
in equation (1). The values of these constants are presented in table 1. The exponent f has
the same value for both workmaterials, but the constant F0 is higher for the alloy steel.
Table 1. Model parameters for different workmaterials

Workmaterial              F0    f
carbon steel (0.55 %C)    17    0.3
alloy steel 40H           20    0.3
Having determined the model parameters, it was possible to check the model validity. The
grinding forces measured over a wide range of grinding conditions were compared with the
calculated values.
In figures 5 and 6, experimental and calculated tangential forces are shown for different
grinding conditions. It is visible from these figures that the tangential grinding force changes
significantly from one generating stroke to another. It can also be seen that the calculated
values of the grinding forces coincide with those obtained from the experiments. The
coefficient of correlation between measured and calculated values was never lower than 0.9
in any test. This indicates that the developed model is correct and that calculations based on
equation (2) are accurate enough to determine grinding forces in the generating gear
grinding process.
The differences between the measured grinding forces in every two consecutive strokes of
the grinding wheel are due to the fact that, in the reciprocating movement of the grinding
wheel, one stroke is performed as up-grinding and the other as down-grinding. It is possible
to include this phenomenon in the model by evaluating two sets of constants in equation (1)
- one for up-grinding and the other for down-grinding.
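The model check above relies on the correlation coefficient between measured and calculated stroke forces; a minimal self-contained version of that computation, using synthetic illustrative force values (not the paper's measurements):

```python
# Pearson correlation between measured and calculated per-stroke forces,
# as used to validate the force model. The force lists are synthetic
# illustrative data, not the paper's measurements.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

measured   = [14.0, 11.5, 12.8, 10.0, 9.1, 8.4, 8.0, 7.7]   # N, per stroke
calculated = [13.2, 12.0, 12.1, 10.3, 9.5, 8.1, 8.2, 7.5]   # N, per stroke
r = pearson_r(measured, calculated)   # well above the 0.9 floor reported
```

A value of r above 0.9, as in every test reported here, supports the accuracy of the stroke-force model.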
4. CONCLUSIONS
The model of the generating gear grinding process described above makes it possible to
calculate the forces in generating gear grinding. The correctness of the model was proved
over a wide range of grinding conditions. The model will allow theoretical analysis of
surface layer creation, grinding wheel load, wear processes, etc. in this complicated
manufacturing process.
[Fig. 5: Measured and calculated tangential grinding forces Ft [N] in subsequent generating
strokes, panels a)-d).]
[Fig. 6: Measured and calculated tangential grinding forces Ft [N] in subsequent generating
strokes, panels a)-d), with the measured-calculated difference (M-C) also shown.]
REFERENCES
1. Kruszynski B.W.: Model of Gear Grinding Process. Annals of the CIRP, 44 (1995) 1,
321-324.
2. Snoeys R., Peters J.: The Significance of Chip Thickness in Grinding. Annals of the
CIRP, 23 (1974) 2, 227-237.
V. Ostafiev
City University of Hong Kong, Hong Kong
D. Ostafiev
University of Melbourne, Melbourne, Australia
emphasizing their close correlation. Under this feed variation some of the measured
parameters of the system have minimum or maximum points, which facilitates optimum
feed determination taking into account all machining conditions. Also, the maximum tool
life has been found to occur at the optimum feed. Increasing the cutting speed increases the
optimum feed, and its value changes for every pair of cutting tool and machined material.
The results show that turning at the optimum feed would improve surface roughness and
tool life by at least 25 percent.
1. INTRODUCTION
The main purpose of precision turning is to obtain a high-accuracy symmetrical surface with
the smallest roughness. The solution depends very much on the cutting conditions. Usually
the depth of cut is dictated by the part accuracy, but the cutting speed and feed should be
properly selected. It was found that a minimum of surface roughness occurs at some
optimum feed rate [1,2]. The optimum feed proposition for fine machining had been made
by Pahlitzsch and Semmer [1], who established the minimum undeformed chip thickness
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
with a set of gradually changing feeds for different cutting speeds. The feed change was
driven by means of the scanning frequency generator (1). The generator frequency was
monitored by the frequency counter (4) and connected to the CNC lathe (2) for the feed step
motor control (3). The frequency impulse was recorded at the same time as all other turning
parameters. The feed rate change was checked by a repeated idle movement of the cutting
tool after workpiece turning, and the necessary surface marks were made for different feed
rate values.
The dynamometer (5) for measuring the three cutting force components was connected
through the amplifier (6) to the same recorder (7) as the frequency counter (4), to allow more
precise comparison of their signals.
The e.m.f. measurement was made using a slip ring (8), preamplifier (9), amplifier (10) and
oscillograph (7). The same approach was used for vibration measurements with an
accelerometer (14) working in the frequency range of 5 Hz to 10 kHz through the amplifier
(15). The machining power signals from the motor (11) were measured by the wattmeter
(12) and sent through the amplifier (13) to the recorder (7). This setup permitted recording
all turning parameters simultaneously on one recorder, to analyse their interrelations while
gradually changing the cutting conditions.
The machined surface roughness was measured with a stylus along the workpiece, and the
readings were evaluated according to the surface marks made for the different feed rate
values. Additional experiments were made for tool life determination under fixed cutting
conditions within the range of feed rate and cutting speed variation.
The machining conditions were chosen as follows: feed rate 0.010-0.100 mm/r; cutting
speed 1.96-2.70 m/s; depth of cut 0.5 mm. The workpiece materials were carbon steel S45,
chromium alloy steel S40X and nickel-chromium-titanium alloy steel 1X18H9T. A standard
Sandvik Coromant cemented carbide cutting tip with a nose radius of 0.6 mm was used.
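For context on the roughness values reported in the results: the purely kinematic (feed-mark) peak-to-valley roughness of a round-nosed tool is Rmax ≈ f²/(8·rε). This is a standard textbook relation, not a formula from the paper; measured Rmax in fine turning is larger because of the minimum undeformed chip thickness, vibrations and material side flow:

```python
# Kinematic (feed-mark) peak-to-valley roughness for a round-nosed tool:
# Rmax ~ f^2 / (8 * r_e). Textbook relation for context; real fine-turning
# Rmax is higher due to minimum chip thickness effects and vibration.

def kinematic_rmax_um(feed_mm_per_rev, nose_radius_mm):
    """Kinematic Rmax in micrometres for feed (mm/r) and nose radius (mm)."""
    return feed_mm_per_rev ** 2 / (8.0 * nose_radius_mm) * 1000.0

# r_e = 0.6 mm tip, as used in these experiments:
for f in (0.010, 0.050, 0.090):
    print(f"f = {f:.3f} mm/r -> Rmax_kin = {kinematic_rmax_um(f, 0.6):.2f} um")
```

At f = 0.050 mm/r the kinematic value is only about 0.5 µm, an order of magnitude below the measured Rmax, which underlines why the optimum feed cannot be found from kinematics alone.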
3. RESULTS AND DISCUSSIONS
The experimental results of the changing turning parameters at different feed rates are shown
in Fig. 2. It can be observed from the figure that all parameters except the e.m.f. have
minimum and maximum values as the feed rate increases from 0.010 to 0.095 mm/r.
The vibration signal G changed twice: at the beginning it increased up to a feed rate of
0.020 mm/r, then it decreased to a minimum at a feed rate of 0.050 mm/r, and after that it
increased gradually. The surface roughness, in spite of the feed rate increasing from 0.010
to 0.050 mm/r, decreased from Rmax = 5.4 µm to Rmax = 3.6 µm, and then increased to
Rmax = 7.0 µm as the feed rate increased to 0.090 mm/r. The power P has a small minimum
at a feed rate close to 0.050 mm/r, indicating a less sensitive correlation with feed rate
changes. The tool life has a maximum of T = 72 min at a feed rate of 0.045 mm/r. However,
at feed rates of 0.006 and 0.080 mm/r the tool life decreased to 54-58 min (by about 23%).
Analysis of these results pointed out that surface roughness, cutting forces and power have
minimum values, and tool life a maximum, for feed rates in the range of 0.045-0.050 mm/r
at a cutting speed of 1.96 m/s. The surface roughness correlates closely with the vibration
signal and cutting force, so that on-line monitoring could be used for cutting condition
determination.
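The "express" optimum-feed determination described above reduces, in the simplest case, to sampling a monitored response over the swept feed and taking its minimum. The sample values below are read approximately from the trends in the text, purely for illustration:

```python
# Simplest form of the express optimum-feed search: sweep the feed, sample
# a monitored response (roughness, vibration or force), take the minimum.
# The sampled values approximate the trends described in the text.
feeds = [0.010, 0.020, 0.030, 0.040, 0.050, 0.060, 0.070, 0.080, 0.090]
rmax  = [5.4,   5.0,   4.5,   3.9,   3.6,   4.2,   5.1,   6.0,   7.0]  # um

def optimum_feed(feed_values, responses):
    """Feed at the minimum of the monitored response."""
    i = min(range(len(responses)), key=responses.__getitem__)
    return feed_values[i]

f_opt = optimum_feed(feeds, rmax)   # 0.050 mm/r for this sample data
```

In practice several responses (roughness, vibration, force, power) would be sampled in one sweep and cross-checked, since their minima nearly coincide.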
for cutting. But the optimum feed determination had been described only by a kinematic
solution. The properties of the cutting tool and workpiece materials under cutting, machining
vibrations and the workpiece design should certainly be taken into account in real machining
for the feed determination.
The micro-turning research [2] also shows that the change in Rmax is described by a folded
line whose breakpoint depends on the cutting conditions, tool geometry and cutting fluids.
But at present it takes a lot of time to find, on-line, the optimum cutting speed and feed
values for precision turning of different workpiece designs and materials. Because the
required precision of operations is gradually increasing, new express methods for
determining optimum precision cutting conditions need to be investigated and developed for
industrial application. According to industry demands for achieving optimum machining
performance in precision turning, the main concerns are the smallest surface roughness and
the highest workpiece accuracy.
2. EXPERIMENTAL EQUIPMENT AND PROCEDURES
According to many investigations [3, 4] there is a close correlation between machining
vibrations, cutting forces, tool life, surface roughness etc. while turning. Thus the
correlation coefficient between vibrations and surface roughness has been determined as
0.41-0.57 [5]. Nonmonotonic dependencies of cutting forces, surface roughness, tool life,
vibrations and e.m.f. have also been shown when the cutting feed and speed are changed
gradually. There are minimum and maximum values for almost every turning
parameter as the cutting conditions change. But to find the optimum cutting conditions
for precision turning, all parameters should be studied simultaneously to take into account
their complex interrelations and nonmonotonic changes. The ability of a CNC lathe to
change feed and speed gradually under program control, while measuring all the necessary
turning parameters simultaneously, opens a new opportunity for an express method of
optimum condition determination. To solve the problem, a special experimental setup was
designed as shown in Fig. 1. The setup was mounted on a precision CNC lathe.
[Fig. 2 - Turning parameters P [W], Fc [N], T [min], G [mV], Rmax [µm] and e.m.f. [µV] versus feed rate f [mm/r]]
To investigate the influence of cutting speed on the vibration signals, some experiments were
conducted with gradually changing feed rate at different cutting speeds. The results show
a general trend: the feed rate at which the vibration signal reaches its minimum increases with
increasing cutting speed. Thus the minimum vibration signal occurs at 0.050 mm/r for a cutting
speed of 1.96 m/s, at 0.060 mm/r for 2.3 m/s and at 0.068 mm/r for 2.7 m/s. Also, the
minimum peak is much more pronounced at the smaller cutting speed than at the
higher speeds. The surface roughness did not change much as the cutting speed
increased (from Rmax = 3.6 µm to 3.1 µm), but the tool life decreased significantly, by a factor
of 1.5-2. The tool wear investigation did not reveal any clear relationship with surface roughness
under these conditions.
Another experiment was conducted to determine the interrelations of the turning
parameters for different workpiece materials. The minimum vibration signals were
obtained at a feed rate of 0.038 mm/r for steel 1X18H9T, at 0.043 mm/r for the chromium
alloy steel S20X and at 0.045 mm/r for carbon steel S45. The latter two are closer to each other
because the corresponding materials have similar properties. Therefore, there is a positive
correlation between the vibration signal and the surface roughness: the smallest vibration
signal as well as the smallest surface roughness were found for carbon steel S45 (Rmax = 3.6 µm),
and the largest values (Rmax = 5.6 µm) were obtained for the nickel-chromium-titanium alloy steel 1X18H9T.
4. CONCLUSION
A new method for determining optimum cutting conditions for precision turning has been
developed. The method uses the high correlation between precision turning
parameters for their monitoring. The optimum cutting conditions can be specified more
precisely by determining the vibration signal minimum while the feed rate is changed
gradually. The vibration signal can easily be acquired on-line during cutting, so that all
machining conditions are taken into account in the optimum feed rate determination.
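The express method described above, gradually varying the feed while monitoring the vibration signal and taking its minimum as the optimum feed, can be sketched as follows. This is a minimal illustration: `measure_vibration` is a hypothetical stand-in for the real accelerometer channel, and the synthetic signal simply mimics the reported minimum near 0.050 mm/r.

```python
def find_optimum_feed(measure_vibration, feeds):
    """Return the feed rate at which the vibration signal is smallest.

    measure_vibration: callable returning the vibration signal for a feed
    feeds: iterable of feed rates [mm/r] to sweep, e.g. 0.010 ... 0.100
    """
    best_feed, best_signal = None, float("inf")
    for f in feeds:
        g = measure_vibration(f)
        if g < best_signal:
            best_feed, best_signal = f, g
    return best_feed

# Synthetic stand-in for the measured signal: minimum near 0.050 mm/r,
# mimicking the trend reported for v_c = 1.96 m/s.
feeds = [round(0.010 + 0.005 * i, 3) for i in range(19)]  # 0.010 .. 0.100
signal = lambda f: (f - 0.050) ** 2
print(find_optimum_feed(signal, feeds))  # feed with the minimum signal
```

In practice the sweep would be programmed on the CNC lathe and the signal read from the sensor chain, but the minimum-picking logic is the same.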
REFERENCES
1. Pahlitzsch, G. and Semmler, D.: Z. für wirtschaftliche Fertigung, 55 (1960), 242
2. Asao, T., Mizugaki, Y. and Sakamoto, M.: A Study of Machined Surface Roughness in
Micro Turning, Proceedings of the 7th International Manufacturing Conference in China,
1995, Vol. 1, 245-249
3. Shaw, M.: Metal Cutting Principles, Clarendon Press, Oxford, 1991
4. Armarego, E.J.A.: Machining Performance Prediction for Modern Manufacturing,
Advancement of Intelligent Production, Ed. E. Usui, JSPE Publication Series No. 1 (7th
International Conference on Production/Precision Engineering and 4th International
Conference on High Technology, Chiba, Japan, 1995), K52-K61
5. Ostafiev, V., Masol, I. and Timchik, G.: Multiparameters Intelligent Monitoring System
for Turning, Proceedings of SME International Conference, Las Vegas, Nevada, 1991,
296-301
R. Mesquita
Instituto Nacional de Engenharia e Tecnologia Industrial (INETI),
Lisboa, Portugal
E. Henriques
Instituto Superior Tecnico, Lisboa, Portugal
P.S. Ferreira
Instituto Tecnologico para a Europa Comunitaria (ITEC), Lisboa,
Portugal
P. Pinto
Instituto Nacional de Engenharia e Tecnologia Industrial (INETI),
Lisboa, Portugal
1. INTRODUCTION
New patterns of consumer behaviour together with the extended competition in a global
market call for the use of new manufacturing philosophies, simultaneous engineering
processes, flexible manufacturing, advanced technologies and quality engineering
techniques. The reduction of both the product development time and the manufacturing
lead time are key objectives in the competitiveness of industrial companies.
The reduction of lead time can be achieved through the use of integrated software
applications, numerical control systems and dynamic process, production and capacity
planning.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
[Fig. 1 - Components of product delivery time: job waiting time, setup time, machining time, lead time]
Within the different components of product delivery time (figure 1), machining and
waiting time could also be reduced, by adjusting the machining parameters for maximum
production rate and using an appropriate planning technique. However, it should be noted
that these components of delivery time will be considered invariable in this study.
2. ARCHITECTURE
In this context, a cooperative research project under development at INETI, IST and ITEC
aims at the integration of software applications in the areas of design, process planning,
computer aided manufacturing, tool management, production planning and control,
manufacturing information management and distributed numerical control.
This paper presents the architecture of a computer integrated and optimised system for
turning operations in numerical control machine-tools. The system includes a CAD
(Computer Aided Design) interface module, a CAPP (Computer Aided Process Planning)
and tool management module, a CAM (Computer Aided Manufacturing) package and a
DNC (Distributed Numerical Control) sub-system. A manufacturing information
management sub-system controls the flow of information to and from the modules. The
information required to drive each module is made available through a job folder. Figure 2
presents the sequence of manufacturing functions together with the information flow. One
of the key objectives of our integrated system is to reduce manufacturing lead time. The
manufacturing information should be generated with promptness, the machine setup time
should be reduced and the unexpected events at the shop floor (tools not available,
impossible turret positions, incorrect cutting parameters, etc.) should be minimised.
With the CAPP system under development [1, 2] it is possible to generate the
manufacturing information in a short time and to assure its quality and consistency.
[Fig. 2 - Sequence of manufacturing functions and information flow]
delivery at the workstation and tool tables administration for part program generation and
machine tool controller setup.
The integration of process planning and tool management can only be accomplished if a
central tool database exists. This calls for a tool database standard which is not yet
implemented in commercial systems. Consequently, it was decided to implement a solution
aiming to integrate to some extent the process planning and tool management functions,
using independent software applications [5].
A tool management system (Corotas from Sandvik Automation) is used, which supports
all tool-related sub-functions: tool identification, tool assembly, tool measuring and inventory
control. A Zoller V420 Magnum tool pre-setter is used for tool measuring. The proprietary
measuring programme, Multivision, was interfaced with the Corotas package.
The tool lists generated by the CAPP system convey all the data required for tool
management in the Corotas system, through a file formatted as required by the TM system
for data import. Since a common database does not exist, tool data exchange between the
CAPP system and the TM system is performed through import and export functions. A file export
facility, specifically designed for Corotas, was built.
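Such an export facility can be sketched as a simple file formatter. The actual Corotas import layout is proprietary and not reproduced in the text, so the semicolon-separated field set used here (tool id, description, quantity) is purely illustrative:

```python
import csv
import io

def export_tool_list(tools, stream):
    """Write a CAPP-generated tool list in a flat, delimiter-separated layout.

    Assumption: the real Corotas import format is not documented here, so
    the fields (id, description, qty) and the ';' delimiter are illustrative.
    """
    writer = csv.writer(stream, delimiter=";")
    for tool in tools:
        writer.writerow([tool["id"], tool["description"], tool["qty"]])

# Example: export a one-line tool list to an in-memory buffer
buf = io.StringIO()
export_tool_list([{"id": "T0101", "description": "CNMG 120408", "qty": 2}], buf)
print(buf.getvalue())
```

A real implementation would write to a file watched by the TM system's import function, one tool list per workstation.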
Corotas operates at the tool room level together with the pre-setter system. Once Corotas is
fed with the tool lists produced by the CAPP system (one tool list per workstation), the
toolkits (the sets of tools required to machine a part at a workstation) are defined, the tool
items are identified and the number of sister tools is calculated. The tool items are allocated
to the job, removed from their stock locations and assembled. The assemblies are labelled
(with the tool label code) and delivered for measurement and pre-setting at the tool pre-setter workstation.
The link between Corotas and Multivision is already implemented in Corotas. Tool
nominal values are sent to the tool pre-setter and the measured tool dimensions are
returned. After tool measuring, the actual dimensions of the tools are written to a file,
which is post-processed for the particular CNC controller and stored in the corresponding
job folder. The TM function supplies the job folder with all NC formatted tool offset tables
and tool drawings, and provides the workstation with the toolkits, properly identified, for
machine setup.
4. INFORMATION SYSTEM
Considering the target users, composed of SMEs, the underlying informatics base system
is a low-cost system, easily operated and maintained, widespread and open. A PC
network with a Windows NT server and Windows for Workgroups and DOS clients is
used. Industrial PCs (from DLoG) are attached to the machine-tools controllers for DNC
(Direct Numerical Control) and monitoring functions. Complementary ways of
communication are used - data sharing through a job central database and a message
system for event signalling, both windows based, and file sharing for the DOS clients.
In the proposed architecture, a job manager is at a central position and plays a major role as
far as the information flow and the synchronisation are concerned (Figure 3). Assisted by a
scheduler, the manager defines the start time of each task, attributes priorities, gathers the
documents and pushes the input documents to the corresponding workstation, using a job
management application. This application presents a view of the factory, keeps track of all
I/O documents and promotes document updates by tracing their dependency chains. The
system manages a hierarchical structure of information composed of objects such as job
folders, workstation folders, documents, workstations and queues.
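The hierarchical structure of job folders, workstation folders and documents can be sketched as a small data model. The class and field names below are illustrative assumptions, not the system's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Document:
    """A document held in a workstation folder (e.g. NC program, tool list)."""
    name: str

@dataclass
class WorkstationFolder:
    """Documents destined for one workstation."""
    workstation: str
    documents: List[Document] = field(default_factory=list)

@dataclass
class JobFolder:
    """Top of the hierarchy: one folder per job, grouping workstation folders."""
    job_id: str
    workstation_folders: List[WorkstationFolder] = field(default_factory=list)

    def documents_for(self, workstation):
        """Names of the documents to be pushed to a given workstation."""
        for wf in self.workstation_folders:
            if wf.workstation == workstation:
                return [d.name for d in wf.documents]
        return []

job = JobFolder("J-001", [WorkstationFolder(
    "lathe-1", [Document("part.nc"), Document("tool_offsets.txt")])])
print(job.documents_for("lathe-1"))  # -> ['part.nc', 'tool_offsets.txt']
```

The job manager's "push" step then amounts to resolving `documents_for` per workstation and delivering the files.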
At each workstation, the operator uses a task manager to select a job from the job queue, to
identify the required input documents, to run the application and to file the output
documents.
At the workstations located at the shop floor level, the operator uses his local task manager
(built with the DLOG development kit). After selecting the job from the queue, all the
required manufacturing information about the job and machine setup is available. Batch
size, due date, operations list, tool data, drawings, fixturing data, NC program, tool offsets
and instructions can be presented on the screen. If a CNC part program has to be modified
at this workstation, the required modifications are registered for evaluation and updating at
the process planning workstation.
[Fig. 3 - TMS: assembling / pre-setting]
A snapshot of the on-going production and of the use of productive capacity is built from
the information collected at the workstations. Machine signals are fed into the DLOG IPCs and
filtered so that relevant status changes are detected and sent to the monitoring station.
Additional software indicates order completion and issues messages for process and job diagnosis.
5. PROCEDURE DESCRIPTION
Typical Portuguese job shops manufacture small batches of medium to high complexity
parts, with the product delivery time being critical. The system described in this paper is
particularly suitable for the following company profile.
There is a limited number of qualified staff. Computer systems were selected and are used
to assist particular functions. Usually, there is no formal process plan with detailed
operations lists, cutting tool lists and optimised machining parameters (records of previous
experience are non-existent or out of date). All this information is selected
empirically, on the fly, at the CAM station. The generated NC program is always modified
at the machine controller. There is no tool database, and the amount and type of cutting
tools is limited by the part programmer's experience and knowledge. The tool management
function is rather limited and prone to errors or delays. Tool holders are located near the
machine-tools and there is no tool pre-setting device (dry cutting and tool offset changes are
compulsory). The comments included by the part programmer in the NC listing provide
the operator with the information required to select the needed tools (no availability
checking is made). The machine-tool operator has to assemble, mount (in an undefined turret
position) and measure all the cutting tools. The machining parameters are adjusted during
the operation (machining cost and time are not a constraint). The number of required tools
(sister tools) is unknown. Tool replacement is determined by the observed surface quality.
As a result of this procedure, machine setup, even for very simple parts, takes a long time,
and the whole process is highly dependent on the expertise of the planner, who is also the CAM
system operator.
The integrated and optimised procedure that can be used with the proposed system is
described in this section. A geometrical and technological model of the part is created in a
CAD system and exported through an IGES file. CAD files delivered by the customer can
also be used as input data for the process planning phase. The CAPP system interprets this
model, automatically selects the sequence of operations and the required machine-tools
(workstations) and generates the elemental sequence of operations, as presented in figure 4.
Tools are selected from a tool database containing only existing tools. A different set of
tools will be found by the system if a larger database of existing tools is available, since a
minimum time and cost criterion is used. Machining parameters are determined by the
optimisation process.
Concurrently with part programme generation, all the tool management functions are
performed, based on the same set of manufacturing information provided by the CAPP
system. This procedure enables the generation of the documents presented in figure 5, as
shown at the DNC terminals. Optimal and existing cutting tools, together with tool setting
data and ready to use part programmes are made available just-in-time at the workstation
for minimum setup time.
[Fig. 4 - Elemental sequence of operations generated by the CAPP system: 1 facing, 2 external cylindrical, 3 copy-in / copy-out]
Fig. 5- Operations list (a) and tool list (b), as viewed at the machine workstation
6. CONCLUSIONS
The implementation of an integrated system as described in this paper can contribute to the
competitiveness of SME job shops. Particularly, a higher machine productivity and shorter
product delivery time can be achieved. The consistency of the manufacturing information
can be improved and the process can be technologically optimised. The synchronisation of
tool related functions and the quality of the numerical control program allow the reduction
of the machine setup time. The described system can contribute to maintaining the consistency
of planned and achieved machining costs and times and to reducing the lead time.
REFERENCES
1. Mesquita, R. and Henriques, E.: Modelling and Optimization of Turning Operations,
Proc. 30th Int. MATADOR Conf., UMIST, Manchester, 1993, 599-607
2. Nunes, M., Henriques, E. and Mesquita, R.: Automated Process Planning for Turning
Operations, Proc. 10th Int. Conf. Computer-Aided Production Engineering, Palermo,
Univ. Palermo, 1994, 262-272
3. Mesquita, R. and Cukor, G.: An Automatic Tool Selection Module for CAPP Systems,
Proc. 3rd Int. Conf. Advanced Manufacturing Systems and Technology, CISM, Udine,
1993, 155-165
4. Mesquita, R., Krasteva, E. and Doytchinov, S.: Computer-Aided Selection of Optimum
Machining Parameters in Multipass Turning, Int. Journal of Advanced Manufacturing
Technology, 10 (1995) 1, 19-26
5. Mesquita, R., Henriques, E., Ferreira, P.S. and Pinto, P.: Architecture of an Integrated
Process Planning and Tool Management System, Proc. Basys'96, Lisboa, 1996
G. Cukor
University of Rijeka, Rijeka, Croatia
E. Kuljanic
University of Udine, Udine, Italy
ABSTRACT: The basic idea of the paper is to overcome the limitation of estimating optimum cutting
conditions on the basis of either minimum unit production cost or minimum unit production time
alone. In order to evaluate both criteria simultaneously, together with their mutual dependence, the
concept of a double-criteria objective function is proposed. The approach adopted in solving the
constrained nonlinear minimization problem involves a combination of theoretical economic trends
and optimization search techniques. Finally, the popular multi-pass rough turning optimization strategy
of using equal cutting conditions for all passes is shown to be a useful approximation, but a more
rigorous computer-aided optimization analysis yields unequal cutting conditions per pass.
1. INTRODUCTION
The trend of present manufacturing industry toward the concept of Computer Integrated
Manufacturing (CIM) imposes Computer-Aided Process Planning (CAPP) as an imperative.
The development of a cutting conditions optimization module is essential to the
quality of CAPP software.
Optimization of a multi-pass rough turning operation means the determination of an optimal set of
cutting conditions that satisfies an economic objective within the operation constraints. The
solution to the problem of selecting optimum cutting conditions requires sophisticated
computer-aided optimization search techniques. However, the results obtained from the
optimization search will depend on the mathematical models of the process, and to a greater
extent on the optimization method used.
In principle, the cutting conditions are usually selected either from the viewpoint of
minimizing unit production cost or from the viewpoint of minimizing unit production time if
cost is neglected. It has also been recognized that between these two criteria there is a range
of cutting conditions from which an optimum point could also be selected in order to
increase profits in the long run. Furthermore, maximum profit is in reality the major goal of
industry.
This work has been developed under the assumption that an optimum economic balance
between the two criteria, the minimum unit production cost and the minimum unit
production time, will theoretically result in the maximum profit rate criterion. A solution using
the multi-criteria optimization procedure is introduced as the basis for further discussion.
2. MODELLING OF MULTI-PASS ROUGH TURNING OPERATIONS
The economic mathematical models of multi-pass rough turning operations have been
formulated by many investigators [1, 2, 3]. However, optimized cutting conditions in view
of a given economic objective are in general assessed on the basis of some form of tool life
equation [4]. In multi-pass turning, each pass j is characterised by the corresponding workpiece
diameter Dj [mm], depth of cut apj [mm], feed fj [mm/r] and cutting speed vcj [m/min]. The
unit production cost and the unit production time for ip passes can then respectively be
expressed as:
c_1 = c_0 t_1 + c_t \sum_{j=1}^{i_p} t_{mj} / T_j    (1)

t_1 = t_{np} + \sum_{j=1}^{i_p} ( t_{mj} + t_c t_{mj} / T_j + t_r )    (2)

with

D_i = D_0 - 2 \sum_{j=1}^{i-1} a_{pj}    (3)

t_{mj} = \pi D_j l / (1000 f_j v_{cj})    (4)

and

\sum_{j=1}^{i_p} a_{pj} = a_w    (5)
where c1 = unit production cost [$], t1 = unit production time [min], ct = tool cost per
cutting edge [$], l = length to be machined [mm], Tj = tool life [min], tnp = non-productive
time [min], tc = tool changing time [min], tr = tool reset time [min], D0 = initial workpiece
diameter [mm], and aw = total depth to be machined [mm].
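The per-pass cost and time structure can be evaluated numerically. The sketch below is a minimal illustration assuming the standard multi-pass turning cost/time structure (cutting time per pass plus tool-change and reset shares) and a Taylor-type tool life model; the numeric inputs are illustrative, not the paper's case study:

```python
import math

def unit_time_and_cost(passes, l, t_np, t_c, t_r, c0, ct, tool_life):
    """Evaluate unit production time t1 [min] and cost c1 [$].

    passes: list of (D_j [mm], f_j [mm/r], v_cj [m/min]) per pass
    tool_life: callable T(v_c, f) [min], a Taylor-type tool life model
    Assumption: the standard cost/time structure from the multi-pass
    turning literature is used here as a sketch.
    """
    t1 = t_np
    tool_share = 0.0
    for D, f, vc in passes:
        t_m = math.pi * D * l / (1000.0 * f * vc)   # cutting time of the pass
        T = tool_life(vc, f)
        t1 += t_m + t_c * t_m / T + t_r             # cutting + change share + reset
        tool_share += ct * t_m / T                  # tool cost share of the pass
    c1 = c0 * t1 + tool_share                       # machine/manpower + tool cost
    return t1, c1

# Taylor-type tool life model in the spirit of the numerical study
taylor = lambda vc, f: 1836.3 * vc ** -1.34 * f ** -0.16
t1, c1 = unit_time_and_cost([(100.0, 0.45, 80.0)] * 3, l=200.0,
                            t_np=5.0, t_c=0.5, t_r=0.25, c0=0.04, ct=1.0,
                            tool_life=taylor)
print(round(t1, 3), round(c1, 3))
```

Plugging per-pass (D, f, vc) triples into such a routine is exactly what the optimization search repeats at every trial point.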
In order to find a compromise solution, the above mentioned criteria can be aggregated into
a global one using the following two-criteria objective function [5]:

y(x) = w y_1(x)/y_1^* + (1 - w) y_2(x)/y_2^*    (6)

where w is the weight coefficient, and y_1^* and y_2^* represent the minimum values of the
corresponding criteria when considered separately.
It should be noted that the two-criteria optimization becomes one-criterion optimization for w = 1, from the
viewpoint of minimum unit production cost. If w = 0, then this is the optimization from the
viewpoint of minimum unit production time. For 0 < w < 1, a compromise solution is obtained,
giving an optimum economic balance between the unit production cost and time. Hence, the
cutting conditions at which this occurs will theoretically result in the maximum profit rate.
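The weighting logic, w = 1 for pure cost minimisation, w = 0 for pure time minimisation and intermediate w for a compromise, can be expressed directly. The normalised weighted-sum form below is the usual shape of such two-criteria functions and is assumed here as a sketch:

```python
def two_criteria_objective(y1, y2, y1_star, y2_star, w):
    """Weighted two-criteria objective:
    y = w * y1/y1* + (1 - w) * y2/y2*,
    where y1*, y2* are the single-criterion minima used for normalisation.
    """
    return w * y1 / y1_star + (1.0 - w) * y2 / y2_star

# At the single-criterion minima both normalised terms equal 1,
# so any weight yields y = 1; w = 1 reduces to pure cost minimisation.
print(two_criteria_objective(6.414, 22.567, 6.414, 22.567, 0.5))
```

The optimization search then minimises this scalar over the cutting conditions for a chosen w.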
The objective function (6) should be minimized while satisfying a number of process
constraints which limit the permissible values of the cutting conditions vcj, fj and apj for each
pass. Most of the constraints are quite obvious, such as:
min., max. depth of cut
min., max. feed
min., max. cutting speed
chip shape
min., max. spindle speed
available power of the machine
allowable cutting force
available/allowable chucking effect
rigidity of machining system
min. tool life, etc.
These constraints need not be discussed further. Some other variable bounds and
operation constraints are given in [6]. However, the following constrained nonlinear
minimization problem has to be solved:

minimize y(x)
subject to g_j(x) {<=, =} 0,  j = 1, ..., m    (7)
where m is the total number of nonlinear inequality and equality constraints. Since the
mathematical model of the multi-pass rough turning optimization problem has an explicit,
multivariable, nonlinear objective function constrained with nonlinear inequality constraints
and some equality constraints, the optimization method selected should fulfil the model
features.
3. OPTIMIZATION METHOD
The optimization method described herein is based on the optimization procedure for multi-pass
turning operations previously developed [2], but coupled with general nonlinear
programming methods.
For the successful operation of advanced and more sophisticated flexible machining
systems, it is essential that the cutting conditions selected are such that easily disposable
chips are produced. Since cutting speed has only a secondary effect on chip-breaking, when
compared with feed and depth of cut, the possible cutting region to satisfy the chip-breaking
requirement can be represented by the ap-f diagram. This diagram is defined as combinations
of depth of cut ap and feed f which produce easily disposable chips. Such diagrams are
generally available from cutting tool manufacturers and an example is shown in Figure 1.
[Fig. 1 - ap-f diagram with the feasible region, the optimum point (fopt, ap opt) and the feed axis up to fmax; Fig. 2 - workpiece with area ABCD to be machined]
Let us assume that, in the workpiece shown in Figure 2, area ABCD has to be machined. To
start with, the feasible and non-feasible regions are separated in the ap-f plane by a concave
curve of the most significant constraint acting on the process, as illustrated in Figure 1. The
point O at which the objective function (6) is a minimum is selected as the optimum point.
This point always lies on the boundary separating the feasible and non-feasible regions. In
order to ensure finding the global optimum, a combination of the Direct Search
of Hooke and Jeeves and Random Search techniques can be successfully applied [5].
Direct search is performed three times from different starting points, chosen with respect to
feasibility and criterion values out of randomly generated points. For handling the
constraints, the modified objective function method is implemented. The constraints are
incorporated into the objective function (6), which produces an unconstrained problem:

minimize y(x) + c_F \sum_{j=1}^{m} k(g_j(x))    (8)

A penalty function is used in order to apply a penalty to the objective function at non-feasible
points, thus forcing the search process back into the feasible region.
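The combination of the Hooke and Jeeves direct search with a penalty-modified objective can be sketched as follows. This is a minimal sketch, not the PIVOT implementation: the penalty form k(g) = max(0, g)^2, the penalty factor and the example problem are all illustrative assumptions.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Direct search of Hooke and Jeeves: exploratory moves along each
    coordinate, followed by pattern (extrapolation) moves; no derivatives."""
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = list(x0)
    for _ in range(max_iter):
        new = explore(base, step)
        if f(new) < f(base):
            # pattern move: extrapolate along the successful direction
            pattern = [2 * n - b for n, b in zip(new, base)]
            cand = explore(pattern, step)
            base = cand if f(cand) < f(new) else new
        else:
            step *= shrink            # no improvement: refine the mesh
            if step < tol:
                break
    return base

def penalised(y, constraints, c_f=1e6):
    """Modified objective: add a penalty k(g) = max(0, g)^2 for each
    violated inequality constraint g(x) <= 0 (penalty form assumed)."""
    return lambda x: y(x) + c_f * sum(max(0.0, g(x)) ** 2 for g in constraints)

# Example: minimise (x - 3)^2 subject to x <= 2; the optimum sits on the
# boundary x = 2, as the text notes for the ap-f feasible region.
obj = penalised(lambda x: (x[0] - 3.0) ** 2, [lambda x: x[0] - 2.0])
x_opt = hooke_jeeves(obj, [0.0])
print(round(x_opt[0], 3))
```

The penalty pushes trial points back toward the feasible side, so the search settles on the constraint boundary, mirroring the observation that the optimum point always lies on the boundary.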
The resulting shape of the workpiece after the first pass would be given by DEFB. This is
now considered in calculating the optimum cutting conditions for the second pass. An
identical ap-f diagram is again assumed, but the optimization search techniques may result
in different cutting conditions. This could be due to several reasons, one of which is the
change in the workpiece flexibility and another the reduced workpiece radius at which the
cutting forces are applied. This procedure of determining the optimum cutting conditions
for each pass is repeated until the sum of the optimum depths of cut equals the total depth
aw to be machined.
In addition, from Figure 1, it is obvious that the assumption of equal depths of cut is not
generally valid, especially if a considerable amount of material has to be removed with
roughing passes. Namely, when the total depth aw is outside the feasible region, as is usual,
the number of passes has to be increased until a plausible depth of cut inside the
feasible region is found. Only a limited number of ap values can be obtained. For instance, in
Figure 1, the first feasible depth of cut ap = aw/3 lies below the optimum point. It is evident
that by choosing this value a small error is made. However, multiplying this small error
by the total number of passes gives a bigger error which significantly affects the
reliability of the achieved optimum. Therefore, the assumption of equal depths of cut for all
passes is not expected to yield the optimal solution.
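The pass-splitting rule, increasing the number of passes until the resulting depth of cut falls inside the feasible band, can be sketched for the equal-depth approximation discussed above. The function name and the feasibility bounds are illustrative:

```python
def split_total_depth(a_w, ap_max, ap_min=0.0):
    """Smallest number of equal roughing passes whose depth of cut lies
    inside the feasible band [ap_min, ap_max] -- the 'equal depths'
    approximation the text argues is generally sub-optimal."""
    i_p = 1
    while a_w / i_p > ap_max:
        i_p += 1            # total depth infeasible: add another pass
    ap = a_w / i_p
    if ap < ap_min:
        raise ValueError("no feasible equal-depth split")
    return i_p, ap

# A total depth of 10.5 mm with a 4.0 mm feasible maximum splits into
# three equal passes of 3.5 mm.
print(split_total_depth(10.5, 4.0))  # -> (3, 3.5)
```

The per-pass optimization instead selects each depth on the feasible boundary, which is why it can outperform this equal-depth shortcut.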
4. NUMERICAL STUDY
Applying the above optimization method, the interactive program system PIVOT for multi-criteria
computer-aided optimization of cutting conditions in multi-pass rough turning
operations, applicable in a CAPP environment, has been developed. Detailed computer flow
diagrams are given in [5, 6].
The application of the PIVOT program system has been tested in industrial work
conditions. More elaborate description of the industrial setup in Figure 3 is given in [5]. The
optimization of each single pass (case a) was compared with the popular optimization
strategy of using equal cutting conditions for all passes and hence ignoring the decrease of
workpiece diameter Di (case b).
[Fig. 3 - Industrial setup: machine tool, workpiece material, tool and tool material]
Case  Pass  vc [m/min]  ap [mm]  f [mm]  t1 [min]  c1 [$]
a     1     84.200      3.755    .401    22.567    6.414
a     2     85.570      3.984    .376
a     3     85.409      2.761    .490
b     1     80.667      3.500    .456    23.583    6.975
b     2     80.667      3.500    .456
b     3     80.667      3.500    .456

Conditions for Cases a and b: tnp = 5 min, tc = 0.5 min, tr = 0.25 min,
c0 = 0.04 $/min, ct = 1 $, T = 1836.3 vc^-1.34 f^-0.16
5. CONCLUSIONS
In this study, the mathematical model of the multi-pass rough turning optimization problem
has been formulated. This model has the explicit, multivariable, nonlinear objective function
constrained with nonlinear inequality constraints and some equality constraints. Also, a
method for cutting conditions optimization has been proposed, based on which the PIVOT
program system was developed.
The approach adopted in solving the nonlinear constrained minimization problem involved a
combination of mathematical analysis of the theoretical economic trends and optimization
search techniques. The implemented optimization method is based on the multi-criteria
optimization under the assumption that an optimum economic balance between the unit
production cost and time yields a maximum profit rate. The estimation contains the weight
coefficient so that the level of the profit rate may be modelled.
The numerical study and results have supported the proposed optimization method and
highlighted the superiority of optimization of each single pass over the optimization strategy
of using equal cutting conditions for all passes. Therefore, any optimization method which
forces equal cutting conditions for all passes cannot be considered valid or applicable
to multi-pass rough turning operations on CNC lathes under the constraints
considered in this paper, except in conventional machining.
REFERENCES
1. Kals, H.J.J., Hijink, J.A.W., Wolf, A.C.H. van der: A Computer Aid in the Optimization
of Turning Conditions in Multi-Cut Operations, Annals of the CIRP, 27(1978)1, 465-469
2. Hinduja, S., Petty, D.J., Tester, M., Barrow, G.: Calculation of Optimum Cutting
Conditions for Turning Operations, Proceedings of the Institution of Mechanical Engineers,
199(1985)B2, 81-92
3. Chua, M.S., Loh, H.T., Wong, Y.S., Rahman, M.: Optimization of Cutting Conditions
for Multi-Pass Turning Operations Using Sequential Quadratic Programming, Journal of
Materials Processing Technology, 28(1991), 253-262
4. Kuljanic, E.: Machining Data Requirements for Advanced Machining Systems,
Proceedings of the International Conference on Advanced Manufacturing Systems and
Technology AMST'87, Opatija, 1987, 1-8
5. Cukor, G.: Optimization of Machining Process, Master of Science Thesis, Rijeka:
Technical Faculty, (1994)
6. Cukor, G.: Computer-Aided Optimization of Cutting Conditions for Multi-Pass Turning
Operations, Proceedings of 3rd International Conference on Production Engineering
CIM'95, Zagreb, 1995, D27-D34
A tool monitoring system is required to optimise the cutting operations. A cutting tool
needs to be changed for excessive tool wear or for a sudden tool breakage. Tool monitoring
systems have to be able to identify these two main faults, that is, tool breakage and
excessive tool wear. It is therefore necessary to develop sensitive, accurate and reliable
monitoring techniques.
E(n) = …

where E(n) is the expected number of produced parts, c0 is the manpower cost per minute
and ta is the working time per part.
For a given cutting speed it is possible to determine the reliability and the productive time and to
calculate the associated cost. As the cutting speed is varied, the relative cost curve presents a
minimum, which represents the optimal working conditions.
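The speed-scan logic can be sketched as a simple minimum search over candidate cutting speeds. The cost model below is an illustrative convex stand-in for the reliability-based cost curve, since the exact expression is not reproduced here:

```python
def optimal_cutting_speed(speeds, cost_at_speed):
    """Return the candidate cutting speed with minimum associated cost.

    cost_at_speed: user-supplied model combining tool reliability and
    productive time; any cost model can be plugged in (the curve used
    below is purely illustrative).
    """
    return min(speeds, key=cost_at_speed)

# Illustrative convex cost curve with a minimum near 150 m/min
speeds = range(50, 301, 10)
cost = lambda v: (v - 150) ** 2 / 1000.0 + 2.0
print(optimal_cutting_speed(speeds, cost))  # -> 150
```

In the text's setting, `cost_at_speed` would be built from the expected number of parts E(n), the manpower cost c0 and the working time per part ta.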
5. THE MATHEMATICAL MODEL
Once the signals of vibrations and tool flank wear are detected, there is the problem of
interpreting these signals as historical series by means of a mathematical model.
The mathematical model used is the ARIMA (Auto Regressive Integrated Moving Average) model.
This model was created to explain and forecast the behaviour of economic parameters (Box
& Jenkins) [7, 8]. By means of the ARIMA model the last datum of a series (zt), which
represents a time dependent phenomenon, can be expressed as a function of the previous
data plus a linear combination of random factors and of white noise (at):
232
Zt
=<!>1 Zt-1 + <!>2 Zt-2 + ... + <j>p Zt-p + ar+ 81 ar-1 + 82 ar-2 + ... + 9q ar-q
Yt
Both in monovariate and bivariate models it is necessary to define the model order, that is, the coefficient p of the autoregressive part and the coefficient q of the moving average part (model identification phase), and to estimate the model coefficients φ and θ (parameter estimation phase).
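As an illustration (not the authors' code), one step of the ARMA(p, q) recursion above can be evaluated directly; the coefficient values used here are invented for the example:

```python
# Sketch: one step of the recursion
# z_t = phi_1*z_(t-1) + ... + phi_p*z_(t-p) + a_t + theta_1*a_(t-1) + ... + theta_q*a_(t-q)
# Coefficient values are illustrative, not the ones identified in the paper.

def arma_step(z_hist, a_hist, phi, theta, a_t):
    """One step of the ARMA recursion.

    z_hist -- past series values, most recent first [z_(t-1), z_(t-2), ...]
    a_hist -- past noise terms, most recent first [a_(t-1), a_(t-2), ...]
    phi    -- autoregressive coefficients [phi_1, ..., phi_p]
    theta  -- moving-average coefficients [theta_1, ..., theta_q]
    a_t    -- current white-noise term
    """
    ar = sum(p * z for p, z in zip(phi, z_hist))
    ma = sum(t * a for t, a in zip(theta, a_hist))
    return ar + a_t + ma

# Example: ARMA(2, 1) step
# z_t = 0.6*1.0 + 0.2*0.5 + 0.05 + 0.3*0.1 = 0.78
z_t = arma_step([1.0, 0.5], [0.1], phi=[0.6, 0.2], theta=[0.3], a_t=0.05)
print(round(z_t, 6))
```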
The vibration model (ARIMA) enables the vibration level for the next operation to be
forecast. If the forecast level exceeds the limit level (defined by the correlation between
vibrations and wear) the tool has to be changed before performing the next operation. The
The forecast ARIMA model yields ẑ_t(l), the forecast of z_(t+l) made at time t on the basis of the observations up to z_t.
6. SIMULATIONS
The assumptions described above have been verified by means of a simulation of the
actual process. The series of the average wear data, detected in the University Labs, is used
to obtain the simulated wear series, under the hypothesis of a normal wear distribution
around the average values and with an increasing variance as the number of passes
increases (Figure 2).
The vibration series is generated by an autoregressive bivariate model previously
defined by means of the two starting series, using wear numerical values derived from the
wear series previously generated, as described above. So, for each simulated series the
correlated vibration series is obtained.
The equations of the ARIMA model, which best fit the vibrations and wear data
collected during the experiments at the University Labs, have been included in a simulation
programme. The results of the ARIMA model, in terms of definition of tool substitution,
have been used for the calculation of the cost function.
The simulation programme works as follows:
1. Wear and vibration curves are calculated for each working pass using the models previously defined;
2. For each pass the wear and vibration curves are compared with limit values defined "a priori": if the result is negative the next pass is performed;
3. If the comparison is positive there are two possibilities:
A) the wear curve exceeds the limit value before the vibrations; this is the case of a sudden tool breakage, and the last part produced is scrapped;
B) the vibration curve exceeds the limit value before the wear; the tool is changed and the last part produced is good;
4. Through the cost function described above, and on the basis of the total number of produced parts, the average unitary cost of the operation is calculated.
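A minimal sketch of this simulation loop; the wear and vibration models, the limits and the cost coefficients below are invented placeholders, not the models identified from the experimental series in the paper:

```python
# Illustrative sketch of the four-step simulation loop described above.
# All numeric values (wear increments, vibration model, costs) are invented.
import random

def simulate_tool_life(wear_limit, vib_limit, n_passes=100, seed=0):
    rng = random.Random(seed)
    wear, good_parts, scrapped = 0.0, 0, 0
    for k in range(1, n_passes + 1):
        # 1. wear and vibration for this pass (variance grows with pass number)
        wear += abs(rng.gauss(0.004, 0.001 * k ** 0.5))
        vibration = 20.0 + 40.0 * wear + rng.gauss(0.0, 1.0)
        # 2.-3. compare with the "a priori" limits
        if wear >= wear_limit and vibration < vib_limit:
            scrapped += 1     # 3A: wear exceeds first -> tool breakage, part scrapped
            break
        if vibration >= vib_limit:
            good_parts += 1   # 3B: vibration exceeds first -> tool changed, part good
            break
        good_parts += 1       # neither limit exceeded: perform the next pass
    # 4. average unitary cost = (machining + tool + scrap costs) / parts produced
    c_time, c_tool, c_scrap = 1.0, 15.0, 5.0
    parts = max(good_parts, 1)
    return (c_time * (good_parts + scrapped) + c_tool + c_scrap * scrapped) / parts

cost = simulate_tool_life(wear_limit=0.25, vib_limit=28.0)
print(round(cost, 2))
```

Repeating the run over many cycles and sweeping the vibration limit, as the text describes, would trace out the unitary cost curve whose minimum locates the best substitution strategy.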
The process is repeated for a fixed number of cycles; the average values of the unitary cost, obtained for the different parameters (accepted wear limits), are calculated. Then the vibration level is changed and further unitary cost curves are calculated.
The cost curve becomes more significant as the number of cycles increases, and the position of the minimum becomes more precise as a smaller increment of the limit vibration level is used. Figures 3, 4 and 5 show the comparison of the results obtained with the simulative and forecast methods at different wear limit levels. The results obtained with the forecast method are always slightly better than the simulative ones.
7. RESULTS
To check the validity of the method the cost curves, obtained with variable tool
substitution interval (ARIMA), Figure 6, have been compared with the cost curves
obtained with a fixed tool substitution interval (fixed number of parts), Figure 7.
The results show that the model based on the comparison between the vibration values and the relative imposed limits gives results slightly better than those obtained with a fixed number of parts. A bigger advantage can be obtained using a tool substitution method based both on the comparison of the vibration level for pass x with the imposed limit and on the comparison of the forecast value for pass x+1 with the same imposed limit. In more detail, the comparison with the forecast vibration value is performed first, then the comparison with the simulated vibration is calculated (in this way the effects of a wrong forecast are ignored).
8. CONCLUSIONS
Figures 6 and 7 show that the ARIMA model gives a better optimisation of the residual
tool life and lower costs than the strategy of changing the tool after a fixed number of parts
produced.
In conclusion, the model proposed by the authors seems to be a good method for the definition of the tool substitution interval in unmanned machining operations.
ACKNOWLEDGEMENTS
This work has been made possible thanks to Italian CNR CTll 95.04109 funds.
REFERENCES
1. J. Tlusty, G. C. Andrews: A Critical Review of Sensors for Unmanned Machining, Annals of the CIRP, Vol. 32/2/1983.
2. H. K. Tonshoff, J. P. Wulfsberg, H. J. J. Kals, W. Konig, C. A. van Luttervelt: Developments and Trends in Monitoring and Control of Machining Processes, Annals of the CIRP, Vol. 37/2/1988.
3. J. H. Tam, M. Tomizuka: On-line Monitoring of Tool and Cutting Conditions in Milling, Transactions of the ASME, Vol. 111, August 1989.
4. S. S. Sekulic: Cost of Cutting Tools and Total Machining Cost as a Function of the Cutting Tool Reliability in Automatic Flow Lines, Int. J. Prod. Res., Vol. 20, No. 2, 1982.
5. C. Harris, C. Crede: Shock and Vibration Handbook, Vol. 1, McGraw-Hill, 1961.
6. E. Ceretti, G. Maccarini, C. Giardini, A. Bugini: Tool Monitoring Systems in Milling Operations: Experimental Results, AMST'93, Udine, April 1993.
7. G. E. P. Box, G. M. Jenkins: Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.
8. L. Vajani: Analisi Statistica delle Serie Temporali, Vol. I & II, CLEUP, Padova, 1980.
Fig. 2 - Simulated wear series: forecast data and simulative data plotted against pass number.
Fig. 3 - Comparison between simulation and forecast costs (wear limit = 0.1).
Fig. 4 - Comparison between simulation and forecast costs (wear limit = 0.25).
Fig. 5 - Comparison between simulation and forecast costs (wear limit = 0.35).
Fig. 6 - Slopes of unitary cost at different wear limits for the forecast model (ARIMA).
Fig. 7 - Slopes of unitary cost at different wear limits for a fixed tool substitution interval.
KEY WORDS: Data and Knowledge Sharing, CE, CAD/CAPP Integration, STEP
ABSTRACT: This paper presents a methodology for object-oriented data and knowledge sharing in concurrent part design and process planning. The CE server, a special application, the interfaces, the dynamic data and knowledge records, and a logic model for expressing part/process-planning data and knowledge are presented. Here, activity-based solving logic is the key point for realizing data and knowledge sharing. ISO 10303 is referred to, and the EXPRESS language is used for modelling a logic model composed of four units of functionality. With the help of the mapping mechanism of the ONTOS database system, physical schemata are generated. The corresponding interfaces can be realized by using the clear text encoding of the exchange structure specified by ISO 10303 Part 21. The CE server is used for generating a special application, scheduling it, and sharing the dynamic data and knowledge related to a designed part during the whole procedure of concurrent part design and process planning. Finally, conclusions are given.
1. INTRODUCTION
Concurrent engineering (CE), also called simultaneous engineering, is considered a feasible and practical product development mode in the industrial applications of today and of the future. Since 1988, researchers have been focusing on CE issues and have provided many methodologies and application prototypes. Some large companies, such as GE, have used some of the CE methodologies in their industrial practice [1]. However, CE is a very complex systemized technique and involves a great number of technical, social, and cognitive problems, both already identified and yet to be discovered. It is still under development.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
A research project named "Models and Methods for an Integrated Product and Process Development" (SFB 361), supported by the Deutsche Forschungsgemeinschaft and carried out at RWTH Aachen under the lead of the Laboratory for Machine Tools and Production Engineering (WZL), addresses these issues of CE application. This paper focuses on one aspect of a sub-project of this project, that is, dynamic data and knowledge sharing for concurrent CAD/CAPP integration.
From a global viewpoint, CE research can emphasize one of the following three aspects, although some overlaps exist:
- high-level configuring, controlling, and scheduling mechanisms (such as activity-based modelling, activity planning, overall structural configuration and scheduling, etc.),
- data and knowledge sharing (D&K sharing), and
- specific low-level application tools or systems (such as DFM, CAD, CAPP, etc.).
Typical contributions to the first research area include a methodology and the PiKA tool developed at WZL [2], structured activity modelling by K. Iwata [3], etc. The second area includes the SHARE and SHADE projects [4][5], PDES/STEP-based data preparation for CAPP by Qiao [6], etc. As to the research on low-level application tools, many cases can be found on design for manufacturability [7][8][9], environmentally conscious design [10], modified process planning, etc. [9][11].
Mainly from the angle of dynamic data and knowledge sharing, together with activity-based modelling, this paper discusses the integration issues of the concurrent part design and process planning procedure. The main goals of this research are: developing a logic model for the unified representation of dynamic part/process-planning data and knowledge, achieving a physical realization of it, realizing the corresponding interface functions, and creating a CE data server and further extending it into an activity-based scheduling tool. In the following sections, the detailed contents are described: section 2 analyses the data and knowledge sharing procedure, section 3 presents the overall logic architecture of data and knowledge sharing, section 4 the unified part/process-planning integrated model, section 5 the methodology of interface development, and section 6 the realization of the CE server. Finally, conclusions are given.
2. ANALYSIS OF THE DATA AND KNOWLEDGE SHARING PROCEDURE
In order to analyze the procedure of data and knowledge sharing, we first explain the following concepts:
Dynamic Data and Knowledge: So-called dynamic data and knowledge are defined as those generated during the procedure of concurrent part design and process planning. They include the initial, intermediate and final data and knowledge related to a designed part, for example part geometric and technical information, activity-based management data, resource data, etc. In this paper, we formalize the following four types of data and knowledge:
- feature-based part/process-planning data,
- dynamic supporting resource data,
- dynamic knowledge, and
- activity-based management data.
Activity and Meta-activity: In this paper, an activity is defined as a procedure for finishing a specific basic design task in the concurrent integrated CAD/CAPP environment, such as designing the structural shapes of a part. An activity can be decomposed further into three meta-activities, that is, designing a basic task, analyzing and evaluating a basic task, and redesigning a basic task. Realizing a meta-activity requires a functional tool. In fact, each functional tool may have a specific data structure of its own. This means that exchanging data between meta-activities or functional tools with the help of the database is sometimes unavoidable.
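The generate/reuse cycle just described can be sketched as follows; the record keys and the dictionary-based database are illustrative assumptions, not the project's ONTOS implementation:

```python
# Sketch of the generate/reuse cycle: each meta-activity stores the dynamic
# data and knowledge it generated when it finishes, and the next meta-activity
# reads those records back when it begins. Keys are illustrative.

database = {}

def finish_meta_activity(name, records):
    """Functional tool stores the dynamic data/knowledge it generated."""
    database[name] = records

def begin_meta_activity(name, needs):
    """Functional tool reuses records produced by an earlier meta-activity."""
    return {k: database[needs][k] for k in database.get(needs, {})}

finish_meta_activity("design", {"feature": "hole", "diameter": 10.0})
reused = begin_meta_activity("analyse-evaluate", needs="design")
print(reused)  # {'feature': 'hole', 'diameter': 10.0}
```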
According to the above concepts, we may analyze how dynamic data and knowledge are generated and reused inside an activity (Fig. 1: General D&K exchange procedure for functional tools, covering the meta-activities of designing, analyzing/evaluating and redesigning). As shown in Fig. 1, when a meta-activity is finished, the corresponding functional tool generates dynamic data and knowledge and stores them in the database. When a meta-activity begins, the functional tool reuses dynamic data and knowledge from the database. It should be pointed out that no exchange exists, except for the generating operations, when several functional tools are combined into a special application. The application
tool library is used for describing the possible application systems, which contain their own data structures, such as a feature-based modeller, a process planning system, a manufacturability evaluation tool, etc. For the same reasons, an interface library is used. Fig. 3, furthermore, lists the main functions of the CE server and indicates the information relationships among the functional modules with arrow lines.
4. DEVELOPING A UNIFIED PART/PROCESS-PLANNING INTEGRATED MODEL
The realization of an interface depends on the requirements of the data transfer, the related entity group in the physical schemata, the data structure of the used application tool and its openness, etc. Here, the first item indicates which data need to be exchanged; the second declares which aspects of the physical schemata are dealt with; and the third determines, on the one hand, the relationship between the entity group in the physical schemata and the data structure of the used application tool and, on the other hand, whether interactive or automatic interface types are needed according to the openness of the application tool. As shown in Fig. 6, typically, an interface for the generation and reuse of dynamic data and knowledge can be created by using the clear text encoding of the exchange structure specified in ISO 10303 Part 21. Here, the mapping between the physical schema of the database and the clear text encoding of the exchange structure can be realized by combining hierarchical, modularized constructing blocks which deal with the corresponding classified entity or entity group. The smallest constructing blocks are entity-related. In addition, some of the constructing blocks are linked to either a sub-unit of functionality or a group of entities between which there are inheritance relationships. In our research, these constructing blocks are being programmed in C++ based on ONTOS. According to the logic model for concurrent part design and process planning, a hierarchical classification for constructing
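As a rough illustration of the clear text encoding mentioned above, the following sketch emits a minimal ISO 10303-21 exchange structure; the schema name and entity types are invented for the example and are not the project's actual part/process-planning schema:

```python
# Minimal sketch of ISO 10303-21 clear text encoding ("STEP physical file").
# Schema and entity names are illustrative placeholders.

def encode_part21(schema, entities):
    """entities: list of (type_name, [attribute strings]) in instance order."""
    lines = ["ISO-10303-21;",
             "HEADER;",
             "FILE_DESCRIPTION((''),'2;1');",
             "FILE_NAME('part.stp','1996-01-01',(''),(''),'','','');",
             f"FILE_SCHEMA(('{schema}'));",
             "ENDSEC;",
             "DATA;"]
    for i, (etype, attrs) in enumerate(entities, start=1):
        lines.append(f"#{i}={etype}({','.join(attrs)});")
    lines += ["ENDSEC;", "END-ISO-10303-21;"]
    return "\n".join(lines)

text = encode_part21("PART_PROCESS_SCHEMA",
                     [("FEATURE", ["'hole'", "10.0"]),
                      ("PROCESS_STEP", ["'drilling'", "#1"])])
print(text)
```

Each `#i=TYPE(...)` instance line corresponds to one entity of the physical schema, which is what the entity-related constructing blocks map to and from.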
On-line Dynamic Data and Knowledge Sharing: On-line sharing occurs during concurrent part design and process planning. In order to realize it, definition, planning, and scheduling are the three basic operations. In the aspect of definition, the activity space, the abstract feasible application tools and the abstract feasible interfaces are dealt with. Here, the activity space hierarchy, which deals with the activities, the corresponding meta-activities, triggering events, state sets, etc., can be illustrated by Fig. 8. Similarly, definitions for abstract application tools and interfaces can be obtained. In addition, we also provide a managing function so as to maintain the correctness of the above definitions. In the aspect of planning, a series of activities can be selected with the corresponding mechanism of the CE server in order to generate a scenario which specifies the static procedure for executing concurrent part design and process planning. In the aspect of scheduling, an event-driven intelligent scheduler based on rule-based reasoning is used. The formalization of the corresponding scheduling knowledge depends on the currently executed event and the state changes. The following is a rule for generating a new event of a meta-activity:
IF: (Current event of activity from scenario is Activity (Structural-Design-Stage, $W, Structural-Shapes)) & ...
Here, $W and &X denote variables. From this rule we can see that dynamic states play an important role in the procedure of formalizing knowledge. As shown in Fig. 9, the dynamic states during the execution of an event or a meta-activity can be generated by the initial state input, by intercepting demands for inserting an additional activity, and by the state generator. The state generator is the main source of dynamic states and runs after the application tool has been called and the corresponding operations have finished.
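A sketch of such an event-driven rule follows; apart from the names quoted in the rule fragment above, the event and state representation is an illustrative assumption:

```python
# Sketch of the rule-based scheduling idea: a rule matches the current event
# and the dynamic states and emits the next meta-activity event. The tuple
# representation and the "Analyse-Evaluate" follow-up are assumptions.

def rule_next_meta_activity(event, states):
    """IF the current activity event is a Structural-Shapes design task
    AND its design meta-activity has finished,
    THEN trigger the analyse/evaluate meta-activity."""
    kind, stage, _, task = event
    if (kind == "Activity" and stage == "Structural-Design-Stage"
            and task == "Structural-Shapes"
            and states.get("design") == "finished"):
        return ("Meta-Activity", stage, "Analyse-Evaluate", task)
    return None

event = ("Activity", "Structural-Design-Stage", "$W", "Structural-Shapes")
nxt = rule_next_meta_activity(event, {"design": "finished"})
print(nxt)
```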
Fig. 8 - Activity space hierarchy: the activity space is decomposed into individual activities.
interfaces derived from the standard text format of STEP can integrate application tools which contain their own data structures with the database, and that using the functions of the intelligent CE server, such as defining, planning, scheduling, and managing, assures the flexibility needed to reach the goal of CE application.
8. ACKNOWLEDGEMENTS
Dr. Ping-Yu Jiang would like to express his thanks for the financial support from the Alexander von Humboldt Foundation, and especially thanks Prof. Dr.-Ing. W. Eversheim, Dr.-Ing. Matthias Baumann, and Mr. Richard Graßler for the academic discussions and the use of the facilities of the Laboratory for Machine Tools and Production Engineering (WZL), Technical University of Aachen, Federal Republic of Germany.
9. REFERENCES
1. Lewis, J.W., et al.: The Concurrent Engineering Toolkit: A Network Agent for Manufacturing Cycle Time Reduction, Proc. of Concurrent Engineering Research and Application, Pittsburgh, PA, 1994
2. Bochtler, W.: Dr.-Ing. Diss. Manuscript, WZL der RWTH Aachen, 1996
3. Iwata, K., et al.: Modelling and Analysis of Design Process Structure in Concurrent Engineering, Proc. of the 27th CIRP Int. Seminar on Manufacturing Systems, 1995, 207-215
4. Toye, G., et al.: SHARE - A Methodology and Environment for Collaborative Product Development, Proc. of IEEE Infrastructure for Collaborative Enterprise, 1993
5. McGuire, J.G., et al.: SHADE - Technology for Knowledge-Based Collaborative Engineering, Technical Report, Stanford Univ., 1992, 17 p.
6. Qiao, L., et al.: A PDES/STEP-based Product Data Preparation Procedure for CAPP, Computers in Industry, 21(1993) 1, 11-22
7. Gupta, S.K. and Nau, D.S.: Systematic Approach to Analysing the Manufacturability of Machined Parts, Computer-Aided Design, 27(1995) 5, 323-342
8. Mill, F.G., et al.: Design for Machining with a Simultaneous Engineering Workstation, Computer-Aided Design, 26(1994) 7, 521-527
9. Jiang, Ping-Yu and Eversheim, W.: Methodology for Part Manufacturability Evaluation under CE Environment, to appear in Proc. of CESA'96, 1996
10. Johnson, M., et al.: Environmental Conscious Manufacturing - A Life-cycle Analysis, Proc. of 10th Int. Conf. on CAD/CAM, Robotics, and Factories of the Future, 1994
11. Herman, A., et al.: An Opportunistic Approach to Process Planning within a Concurrent Engineering Environment, Annals of the CIRP, 42(1993) 1, 545-548
12. ISO 10303 Part 1: Overview and Fundamental Principles, 1992
13. ISO 10303 Part 11: The EXPRESS Language Reference Manual, 1992
14. ISO 10303 Part 21: Clear Text Encoding of the Exchange Structure, 1993
15. Marczinski, G.: Verteilte Modellierung von NC-Planungsdaten: Entwicklung eines Datenmodells für NC-Verfahrenskette auf Basis von STEP, WZL der RWTH Aachen, Dr.-Ing. Diss., 1993, 135 Seiten
16. Chan, S., et al.: Product Data Sharing with STEP, in: Concurrent Engineering: Methodology and Application, edited by P. Gu and A. Kusiak, Elsevier, The Netherlands, 1993, 277-298
17. Gu, P. and Norrie, D.H.: Intelligent Manufacturing Planning, Chapman & Hall, London, 1995
18. Schützer, K.: Integrierte Konstruktionsumgebung auf der Basis von Fertigungsfeatures, Dr.-Ing. Dissertation, TH Darmstadt, 1995, 202 Seiten
1. INTRODUCTION
Nowadays, computer-aided precision planning functions have to be included in CAPP systems, especially for precision manufacturing. Although many authors have discussed this issue, few existing CAPP systems offer functions for automated tolerance assignment or tolerance verification and, furthermore, very little has been discussed from the point of view of the NC machine specification [1]. The application of NC machines in manufacturing makes the precision dimensional requirements of every surface dependent (apart from the class of the machine tool) only on the tool and workpiece positions with respect to the absolute machine tool co-ordinate system. Therefore, the tolerance
chains reflected in the part drawing in the design phase normally have a minor effect on the accumulation of tolerances in the manufacturing design phase.
The aim of this work is to develop an interactive computer-aided Clamping Selection and Tolerance Verification Module inside an integrated CAD-CAPP-CAM system [2], proposing the integration of process planning decisions, such as surface clamping selection and operational sequence determination, with algorithms that control and guarantee the part's precision requirements.
2. MODULE'S DESCRIPTION
At present, most modern NC lathes are equipped with a positioning resolution of 1 µm. Various machining errors in finishing turning, however, degrade the accuracy to a level of approximately 10 µm [3]. Such accuracy (which can be considered fully acceptable for most parts with normal precision requirements) can degenerate if, in the manufacturing design phase, we do not consider the influence of other factors, such as the part's deformation under cutting forces, set-up errors, tool change errors, tool wear, etc.
Substantial research has been carried out regarding optimal tolerance allocation; the majority of it is for tolerance design purposes, and only a few studies deal with manufacturing tolerances [1][4].
However, among the latter, most studies consider the manufacturing process as fixed, proposing the distribution of the tolerances over each phase of the machining process. Such approaches assure the precision requirements of the part's design without considering the role of precision requirements in manufacturing process design (process planning).
Furthermore, in most tolerance distribution approaches, the dimension chains are constructed by grouping all dimensions (tolerances) that form the sum dimension (tolerance) for a determined operational sequence. That is undoubtedly the right solution for manual conventional machine tools, but it may cause an unjustifiable accumulation of tolerances and, consequently, very tight tolerances for the individual operational dimensions in the manufacturing phase.
In this work, referred to symmetrical rotational parts, the authors propose to consider only the errors that derive from the change of the workpiece co-ordinate system (set-up errors) and from the "real" tool position (tool change errors and tool wear) with respect to the absolute co-ordinate system (machine tool co-ordinate system). The modules for these functions are:
- the clamping selection module, which analyses all possible set-up conditions, searching for the best one that can guarantee the part's precision requirements, and
- the tolerance control modules, called by the clamping selection module any time a precision requirement is violated.
2.1 CAD AND CAM INTERFACES
The implementation of a computer-aided module for clamping selection and tolerance verification certainly presupposes the existence of automatic interfaces with the CAD and CAM systems. An interface with the CAD system is implemented [2], extracting from 2D CAD wireframe models (IGES files) the geometrical, technological and precision information (Extract function of the Data pull-down menu, fig. 1).
Fig. 1 - GUI of the Tolerance Control Module: radial, axial and cylindrical tolerance tests on the potentially clamping surfaces, with test results and confidence levels.
In this study, AutoCAD 12 is used as the CAD system. The CAM modules can also generate script files for the NC Programmer of the Lathe Prod. Package V4.60 (NC Microproducts) [2]. The system is implemented in Visual C++ using graphical user interfaces (GUI) based on Microsoft Windows 3.11 (fig. 1 and fig. 4), permitting easy user interaction with the system by means of pull-down menu functions and window dialog-box interfaces. The interface with the user is realised in different model data views, depending on the user's demands and the quantity of information.
3. CLAMPING SELECTION MODULE
The choice of the clamping surface is one of the most important tasks in process planning, due to the great influence of the set-up position on the determination of the operational sequence. The data flow in the clamping selection module is presented in fig. 2. All part surfaces are scanned by the algorithm, which constructs a clamping matrix containing the surfaces that can geometrically serve as clamping surfaces (cylindrical surfaces satisfying a set of geometrical constraints, depending on the type of chuck used). Then, checking their capability, these surfaces are tested to guarantee all radial precision requirements.
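The construction of the clamping matrix can be sketched as a simple geometric filter; the constraint values for a three-jaw chuck below are invented placeholders, not the module's actual constraint set:

```python
# Sketch of building the clamping matrix: keep only the cylindrical surfaces
# that satisfy a set of geometrical constraints for the chuck. The diameter
# and length limits are illustrative placeholders.

def clamping_candidates(surfaces, d_min=20.0, d_max=120.0, l_min=8.0):
    """surfaces: list of dicts with 'kind', 'diameter', 'length' (mm)."""
    return [s for s in surfaces
            if s["kind"] == "cylindrical"
            and d_min <= s["diameter"] <= d_max
            and s["length"] >= l_min]

part = [{"kind": "cylindrical", "diameter": 56.0, "length": 30.0},
        {"kind": "cylindrical", "diameter": 10.0, "length": 12.0},  # too small
        {"kind": "face", "diameter": 56.0, "length": 0.0}]
print([s["diameter"] for s in clamping_candidates(part)])  # [56.0]
```

Each surviving surface would then be tested against the radial precision requirements, as described above.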
4. RADIAL TOLERANCE VERIFICATION MODULE
The algorithms implemented in the presented module take into consideration the uncertainties created by the elastic deflection of the part under the cutting forces, the influence of the clamping device, and tool wear errors.
Influence of cutting forces: The elastic deflection of the part depends upon the geometric construction of the part, its dimensions, the clamping system and the machining conditions (cutting forces). Normally, the most common clamping device used for cylindrical surfaces is a three-jaw chuck, and for such a clamping device we can consider the part as a shaft, fixed at the supposed clamping surface, under the cutting forces at the right hand of the machining surface (Fig. 3).
Constraints considered: cutting forces, machine accuracy, tool wear.

δ_f = (2·M_1 + M)·l³ / (12·E·J)
These values represent, for the tolerance control module, the predetermined error due to the influence of the cutting forces.
The algorithm calculates these values for all examined potentially clamping surfaces, considering as the i-section the right vertical extremity of the part surface related with a radial precision requirement (fig. 3).
f(x) = C_1·[1 - exp(C_2/x)]

where:
f(x) is the error function, x is the length of cutting, and (C_1, C_2) = f(V, a, f, tool material).

Fig. 3 - Elastic part deflection.
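The error function above can be evaluated directly; the coefficient values below are purely illustrative (in the module, C_1 and C_2 depend on the cutting speed V, the depth of cut a, the feed f and the tool material):

```python
# Evaluating the tool-wear error function f(x) = C1*(1 - exp(C2/x)) as
# printed in the text. C1 and C2 are invented placeholder values.
import math

def wear_error(x, c1=0.02, c2=-50.0):
    """Predetermined error (mm) for a cutting length x (mm)."""
    return c1 * (1.0 - math.exp(c2 / x))

print(round(wear_error(100.0), 5))  # 0.00787
```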
The respective data are stored in the database of the clamping selection module.
b - the consideration of the clamping device's influence and of the machining accuracy as fixed values, using predetermined error values (experimental results reported in the literature), or allowing the user to impose his own values according to the real accuracy of a specific NC machine tool (a user interface is implemented to change the system's default values).
If none of the clamping surfaces guarantees the precision requirements, the system advises the user and imposes a reduction of the cutting force, altering the feed rate and depth of cut, or permits the user to impose his own decisions by means of a dialog-box.
In the worst case, having evaluated the impossibility of guaranteeing the radial precision requirements with finishing operations, the clamping selection module adds to the process plan a finishing non-turning operation (e.g. grinding), determining the stock removal amount for such an operation.
and are activated when an axial precision requirement could be violated, depending on changes of the workpiece co-ordinate system relative to the absolute co-ordinate system (machine tool co-ordinate system). In the proposed solution, tolerance chains are created with the minimal number of components, taking into account that in NC machines, during the manufacturing process, the tool position relative to the different part surfaces depends only on the positions of the workpiece and machine tool co-ordinate systems.
If we consider Ai-j, the dimension between the surfaces i and j related with an axial precision requirement, two different situations can arise:
- both surfaces i and j can be realised in the same set-up (which depends on the position of the clamping surface); in this case Ai-j represents for the system both the design and the machining dimension;
- surfaces i and j must be realised in different set-ups; in that case this dimension is not guaranteed and the system composes the Ai-j dimension (tolerance), creating a tolerance chain with four components:
Ai-j = Ai-s + Aj-s + As + At
where:
- Ai-s - dimension between surface i and the set-up surface;
- Aj-s - dimension between surface j and the set-up surface;
- As - mean of the set-up errors;
- At - mean of the tool change errors.
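Numerically, composing an axial dimension across two set-ups as above is a plain sum; the nominal values and error means below are illustrative:

```python
# Sketch of composing the axial dimension A_ij across two set-ups, following
# A_ij = A_is + A_js + A_s + A_t above. All numeric values are illustrative.

def compose_dimension(a_is, a_js, setup_error_mean, tool_change_mean):
    """Axial dimension between surfaces i and j realised in different set-ups."""
    return a_is + a_js + setup_error_mean + tool_change_mean

a_ij = compose_dimension(a_is=40.0, a_js=25.0,
                         setup_error_mean=0.02, tool_change_mean=0.01)
print(round(a_ij, 2))  # 65.03
```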
The procedure for tolerance distribution is based on the unified model developed by Bjørke [7]. The actual determination of the individual tolerances uses a formula that distributes the sum tolerance among the n_1 dimensions with determinable tolerances, treating the n_2 dimensions with predetermined tolerances as fixed contributions.
In this approach, the normalised tolerance values T_wi for the beta distribution [7] have been determined on the basis of the selected confidence level, and the set-up errors and the tool change errors are considered as predetermined tolerances, using experimental results available in the literature.
If the same dimension takes place in different dimension chains, the system determines the smallest value as the tolerance for that dimension and considers that value as a predetermined tolerance for the other dimension chains.
After the examination of all axial precision requirements of the part drawing, the system determines as the clamping surface the cylindrical part surface that allows the largest tolerances (if all tolerances are larger than the process tolerances). Otherwise, the system advises the user and adds a finishing non-turning operation (e.g. grinding) to guarantee the violated tolerances. In every phase of the test, the clamping module gives the operator the test results, showing its decisions.
Fig. 1 presents the user interface of the Tolerance Control Module. The fourth column in the matrices of potentially possible clamping surfaces represents the confidence level for the clamping surfaces that can violate the axial precision requirements of the part drawing. The system generates alternative process plans for every possible clamping surface (fig. 4), helping the operator in his decisions if no possible set-up position can guarantee the precision requirements of the part's drawing.
5. CONCLUSIONS
The implemented system realises the integration of process planning functions, such as surface clamping selection and operational sequence determination, with algorithms that control and guarantee the part's precision requirements.
It constitutes an alternative tool to overcome some critical problems apparent in existing CAPP systems, such as:
- the reliability of existing CAPP systems in the definition of optimal process plans in conformance with the quality requirements of the drawing;
- the possibility for the system decisions to be controlled and influenced by alternative user decisions in every phase of process planning.
The step-by-step method used (pull-down menu functions situated in a GUI), permits not only good visibility of the system, but also the evaluation of different user decisions (generation of alternative process plans, evaluation of confidence levels, etc.).
Although the created system doesn't interfere in the product design phase, it can evaluate the correctness of the input information and, in case of incompleteness or incorrectness, automatically make the necessary changes in the input part drawing.
First Setup
- Rough contour
- Grooving Ø56 mm
- Rough contour
- Grooving Ø40 mm
- Finishing (Ø36 mm)
- Grooving Ø46 mm

Fig. 4 - GUI of process plans for the examined clamping surface (two-view model)
ACKNOWLEDGEMENTS
This research has been supported with Italian MURST funds. The authors thank Prof. Attilio ALTO for his precious suggestions and his enthusiastic support during the work.
REFERENCES
[1] H.C. Zhang, J. Mei, R.A. Dudek: Operational Dimensioning and Tolerancing in CAPP, Annals of the CIRP, Vol. 40/1/1991, pp. 419-422.
[2] L. Galantucci, M. Picciallo, L. Tricarico: CAD-CAPP-CAM Integration for Turned Parts based on Feature Extraction from IGES files, CAPE 10, Palermo, 1994.
[3] T. Asao, Y. Mizugaki, M. Sakomoto: Precision Turning by Means of a Simplified Predictive Function of Machining Error, Annals of the CIRP, Vol. 41/1/1992, pp. 447-450.
[4] B. Anselmetti, P. Bourdet: Optimisation of a workpiece considering production requirements, Computers in Industry, 21, 1993, pp. 23-34.
[5] M. Rahman, V. Naranayan: Optimization of Error-of-Roundness in Turning Processes, Annals of the CIRP, Vol. 31/1/1989, pp. 81-85.
[6] A. Dersha: Computer Aided Process Planning for Symmetrical Rotational Parts, Research Report for Post-Doct. Research Activity at Politecnico di Bari, Italy, 1996.
[7] O. Bjørke: Computer Aided Tolerancing, Tapir Publishers, Trondheim, 1978.
ABSTRACT: In the design of technological processes, i.e. in the determination of the process parameters, the selection of optimal parameters is an important factor, regardless of the optimization criterion. If first-order experimental plans are used for finding the response function of the measured variable (the criterion function), the problem is reduced to the determination of the boundary values of the individual parameters which correspond to the extreme value of the criterion function. The application of second-order experimental plans, however, creates certain difficulties, since in this case the criterion functions are nonlinear (they usually consist of second-order elements and interaction elements, besides the linear elements), namely saddle surfaces. The consideration of many parameters (three, four and more), and of many optimization criteria, creates an even greater problem. The paper deals with the optimization procedure for such functions by analyzing different cases regarding the number of parameters and the complexity of the model. The algorithm is tested on suitable practical examples. The article is accompanied by suitable graphs and diagrams.
1. INTRODUCTION
In the process of planning an experiment it is important to choose the correct model for the calculation of the criterion function. The selection of the model is defined by the nature of the analyzed technological process. The application and elaboration of the model of the welding processes results in data about the linear and interactive influence as well as the square contribution of each factor.

Published in: E. Kuljanic (Ed.), Advanced Manufacturing Systems and Technology, CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
The paper presents the mathematical model of the square function with two independent variables, generally formulated as:

Y(x) = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2    (1)

where: b0 is a constant; b1 and b2 are parameters describing the linear relationship between x1 and Y(x) or x2 and Y(x), respectively; b11 and b22 are parameters describing the square relationship between x1 and Y(x) or x2 and Y(x), respectively; b12 is a parameter describing the linear interaction between x1 and x2 in their relation to Y(x). If there are more factors in the design, equation (1) is expanded accordingly.
After the experimental data processing and the determination of the coefficients in equation (1), it is very important to select a graphical presentation of the criterion function that enables a relatively fast perception of the optimal solution of the problem. Since the three-dimensional presentation of the results will not always make this possible, it is necessary to convert equation (1) to a suitable form in a plane coordinate system.
The paper develops a method for the translation of criterion functions given in the form (1) into the canonical form, suitable for presentation in a plane coordinate system. The accuracy of this approach is verified and confirmed on a practical example.
2. DEVELOPMENT OF OPTIMIZATION METHOD
2.1. Method description
The square approximation of a real function, given by expression (1), can present different types of response surfaces, i.e. those with a maximum, a minimum or a saddle point, depending on the values of its coefficients.
To calculate the optimum of the given function (1), it is, in the first instance, necessary to determine its first partial derivatives:

∂Y/∂x1 = b1 + 2 b11 x1 + b12 x2    (2)

∂Y/∂x2 = b2 + 2 b22 x2 + b12 x1    (3)
If we equate expressions (2) and (3) to zero, the solution of the obtained equation system will produce the values of the stationary points. To determine the type of extreme of the response function in a stationary point, it is necessary to calculate the second derivatives of the given function:

∂²Y/∂x1² = 2 b11    (4)
∂²Y/∂x2² = 2 b22    (5)

∂²Y/∂x1∂x2 = b12    (6)

The type of extreme can now be determined from the solutions of the square equation (7):

(2 b11 - b')(2 b22 - b') - b12² = 0    (7)
where Ys is the value of the function Y(x) in the middle of the response surface, b'11 and b'22 are the transformed coefficients, and x'1, x'2 are the translated axes of the canonical form:

Y' - Ys = b'11 x'1² + b'22 x'2²    (8)

The interpretation of the canonical function types (8) is given in Table 1.
Table 1. Interpretation of the canonical function types (8)

Case | Relation | Sign of b'11 | Sign of b'22 | Contour type | Geometrical interpretation | Middle point
1 | b'11 = b'22 | - | - | circles | circular prominence | max.
2 | b'11 = b'22 | + | + | circles | circular subsidence | min.
3 | b'11 > b'22 | - | - | ellipses | elliptical prominence | max.
4 | b'11 > b'22 | + | + | ellipses | elliptical subsidence | min.
5 | b'11 = b'22 | + | - | hyperbolae | symmetrical saddle | saddle point
6 | b'11 = b'22 | - | + | hyperbolae | symmetrical saddle | saddle point
7 | b'11 > b'22 | + | - | hyperbolae | stretched saddle | saddle point
8 | b'22 = 0 | - |  | straight lines | stationary ridge | none
256
x2
x2
~~
xi
x2
x'2
rotation
x'1
xi
c:=:>
xi
The results of the experimental research [4] on the welding process and the obtained optimal parameters of the impulse MIG welding of the aluminum alloy AlMg3, with wire (Φ = 1.2 mm) made of AlMg5, were used to verify the selected model.
The equation of the response function for the width of penetration, given in paper [4], is the following:
Y = 9.456 + 0.211 x1 - 0.223 x1 x2 + 1.106 x1²    (9)

∂Y/∂x1 = 0.211 + 2.212 x1 - 0.223 x2 = 0    (10)

∂Y/∂x2 = -0.223 x1 = 0    (11)

Solution of this equation system determines the value of the stationary point:

x1 = 0, x2 = 0.946
The values of the second derivatives of the response function in the stationary point are the following:

∂²Y/∂x1² = 2.212,  ∂²Y/∂x2² = 0,  ∂²Y/∂x1∂x2 = -0.223

The expanded form of the square equation (7) for this example is expressed by:

b'² + 2.212 b' + 0.0497 = 0

with Y(0, 0.946) = 9.456. The equation of the response surface in canonical form, for the case of penetration, is represented as:

Y' - 9.456 = -2.189 x'1²    (12)
(Figure: width of penetration plotted against x1 over the range -1.5 to 1.5; ordinate values from 10 to 13.)
1. INTRODUCTION
Design and analysis of Flexible Manufacturing Systems (FMS) is most of all based on simulation, but, as is well known, this methodology does not provide optimal solutions. Usually, by optimal solution, we mean the optimum setting of the system parameters, related to a performance measure of the FMS. But, in many cases, some parameters, while influencing the system performance, are not controllable "in process". Their value can anyway be set during simulation, according to a plan of experiments. In order to obviate this drawback, J. S. Shang [1] proposed the use of a methodology developed by the Japanese researcher Taguchi [2]: he suggests studying the variability in the system performance induced by these uncontrollable (Noise) factors (NFs) in order to select the best setting (the least sensitive to that variability) of the controllable ones (CFs).
While the Taguchi method considers just qualitative or discrete variables, the statistical methodology of Response Surface (RSM) is used for quantitative and continuous input variables. Actually these two methods have also been applied sequentially [1]: first Taguchi, to set the qualitative variables and to consider the NFs, then RSM, to fine-tune the solution. But at least two problems arise immediately. First, the best setting of the qualitative factors found by the Taguchi method is left unchanged while applying RSM: this can be considered correct only after proving that no interaction exists between qualitative and quantitative variables. Second, the NFs of the first step are changed into CFs in the RSM, which is absolutely conflicting. In this research we propose a different approach to get the best solution to our problem, based on the belief that both the interactions between CFs and NFs and those within the CFs need to be studied.
2. SYSTEM CHARACTERISTICS AND SIMULATION MODEL
The layout of the FMS [3] under study is shown in fig. 1.
(Fig. 1: layout of the FMS, showing workstations WS1-WS11 and the staging areas.)
The system consists of 11 WorkStations (WSs), including a receiving department (WS1) and a storage/shipping department (WS11). All jobs enter the system through WS1 and leave it through WS11. Parts are handled inside the system by means of Automated Guided Vehicles (AGVs). Each AGV moves a part between the WSs along a unidirectional path layout. Along this path layout three staging areas are strategically positioned, each of which consists of three links able to accept one vehicle each. Each WS has limited input (BIj, j = 1, 2, ..., 10) and output (BOj, j = 2, 3, ..., 11) buffers where parts can wait before and after an operation. Five part types are processed concurrently. Parts arrive at BI1 (the WS1 input buffer) with an interarrival time of 4 minutes (the order of entry is established according to a dynamic loading rule, LR); they are then mounted on fixtures at WS1 and held in BO1 (the WS1 output buffer); finally they enter the system from BO1 on a first-come-first-served (FCFS) basis. In order to avoid system congestion, the parts residing in the BOj are transported only when the requested transport is feasible and a queue space at the destination WS is available. Parts visit the WSs according to their routing (Tab. 1). If no machine in the current WS is free, the part remains in its BI; as soon as a resource becomes idle, a part is chosen from the BI queue on the basis of a Shortest Processing Time (SPT) dispatching rule. After being processed
it waits in the BO until the required transport is feasible. A transport of a part is feasible if:
- an AGV is free;
- a place is available in the BI of the next WS according to the part routing (considering also the parts already en route to it);
- the part has the highest priority in the queue of the parts awaiting transport.
Usually central buffer areas are provided in order to prevent system locking or blocking; whether a capacitated system such as ours is liable to blocking and locking problems depends on the operating policies used to run it.
Upon the completion of a part transfer, the idle AGV sends its availability to the AGV_DR (AGV dispatching rule) controller. The AGV_DR controller, on the basis of the AGV_DR used, selects a new part; if no part can be transported or if the AGV queue is empty, it singles out the nearest staging area; if no place is free there, the AGV goes around the path layout until a task is assigned to it by the controller or a place in a staging area becomes free.
A SIMAN [4] discrete simulation model was developed to represent the FMS.
As observed by Garetti et al. [5], in actual production the plant will be found to operate both under steady-state conditions and in conditions of filling and emptying (transients), because the FMS is seen by the production planning system as a "machine" to which "a job" to be finished within a certain available time is assigned.
Moreover, the effectiveness of a loading rule is revealed not only in its steady-state performance, but also in the rapidity with which the steady-state performance is achieved; the loading rule is also responsible for the mix to be realised once the transient has disappeared. Therefore the simulations were performed considering the production horizon required to produce the entire volume (V = 300 units).
O'Keefe and Kasirajan [6] suggest using a steady-state approach to overcome the bias introduced in the results by the starting and ending conditions, even though they argue that most manufacturing systems never reach a steady state.
Tab. 1 - Part mix fractions (MIX levels 1 and 2), routings and processing times [min]

Part | Mix 1 | Mix 2 | Routing | Processing times
1 | .10 | .25 | 1-2-4-8-9-10-11 | 8.0-6.0-22.8-8.0-9.2-4.0-2.0
2 | .20 | .30 | 1-2-4-7-9-6-10-11 | 8.0-6.0-9.2-12.4-8.0-7.6-14.0-14.0-2.0
3 | .30 | .10 | 1-2-7-9-6-10-11 | 8.0-4.5-23.4-7.2-9.6-3.0-2.0
4 | .20 | .10 | 1-2-3-5-9-6-11 | 8.0-6.0-17.2-20.4-6.8-26.8-2.0
5 | .20 | .25 | 1-2-4-8-10-11 | 8.0-6.0-26.4-9.2-4.0-2.0

Tab. 2 - Number of resources per workstation (largely illegible in the source; entries survive for workstations 4, 7, 10 and 11)
VRi = Mi(t) / Mi

where Mi = vi V is the volume of part i to be realised (V is the total production) and […] the output variables considered are the flowtime and the WIP (work in process), while six are the input variables.
Among the CFs, two are qualitative and two are quantitative, while among the NFs we distinguish a qualitative variable and a quantitative one. Table 3 summarizes the levels assumed by each of these factors.
Table 3 - Factor levels

Factor | Levels
LR | MB, BNB
AGV_DR | FCFS, STD, LQS
AGV# | 4, 6, 8
BUF_DIM | 3, 4, 5
MIX | Level 1, Level 2
MTBF | 150, 300
4. RESULTS
According to the levels of each variable, a factorial design of 216 (2³ x 3³) runs has been considered. On both the output variables we performed an Analysis of Variance (ANOVA), one of which is shown in Table 4.
Table 4. Analysis of Variance on flowtime
Of course, the analysis has been performed only on the CFs, while the NFs are used to create replicates: in fact we aim at selecting the input setting corresponding not only to the minimum of the performance variable, but also the least sensitive to the variability induced by the NFs. As we want to keep this variability under control, the ANOVA table alone cannot help us unless we associate to it an analysis of means performed on the significant interactions. This analysis of means for the highest-order interaction AGV_DR*AGV# is shown in Table 5.
Table 5 - Analysis of means

AGV_DR | AGV# = 4 | AGV# = 6 | AGV# = 8 | ALL
FCFS | 120.14 (24.88) | 116.09 (25.63) | 112.40 (25.08) | 116.21 (25.05)
STD | 297.90 (26.66) | 132.01 (33.80) | 113.22 (25.70) | 181.05 (88.31)
LQS | 324.64 (24.90) | 127.69 (29.99) | 112.76 (25.42) | 188.36 (100.77)
ALL | 247.56 (94.79) | 125.26 (30.33) | 112.80 (25.05) | 161.87 (84.81)

In each cell we find the mean and, in parentheses, the standard deviation (SD) of the flowtime.
If we consider the columns of this table, we can immediately see that we cannot use 4 vehicles in the system, as we have large values both for the mean and for the SD (except for the cell FCFS-4); in the other two columns we do not find a high variability, especially in the last one, corresponding to 8 vehicles in the system. If we repeat the analysis omitting the observations corresponding to 4 AGVs, we find that AGV_DR is no longer significant, while the only significant interaction is LR*BUF_DIM, on which we perform an analysis of means. Examining the corresponding Table 6,
Table 6 - Analysis of means

LR | BUF_DIM = 3 | BUF_DIM = 4 | BUF_DIM = 5 | ALL
MB | 99.87 (8.24) | 99.85 (8.21) | 99.90 (8.25) | 99.87 (8.12)
BNB | 151.48 (31.42) | 135.74 (28.19) | 127.35 (20.58) | 138.19 (28.56)
ALL | 125.68 (34.59) | 117.79 (27.40) | 113.62 (20.80) | 119.03 (28.41)
we immediately see that we cannot use the second loading rule, to which both higher means and higher SDs correspond. Repeating the analysis with just the last three CFs and only the first loading rule, we get no more significant interactions and just AGV# as a significant factor. This means that, setting the LR at its first level and considering just the two highest levels for AGV#, we can select the vehicle dispatching rule arbitrarily, considering that we get better performance (lower flowtime and less induced variability) with 8 vehicles. The same results are obtained when performing the analysis on the other output variable.
In order to strengthen our analysis, we applied the Minimax criterion and the S/N ratio to the columns obtained by separating the four replicates (2 x 2) given by the NFs (the S/N ratio for a smaller-the-better output variable is -10 log((1/n) Σ(i=1..n) yi²)).
MINIMAX
We considered the maximum value of the output variables among the 4 replicates corresponding to each of the 2x3³ level sets of the CFs (the column MAX); we chose, as optimum input level set, the one corresponding to the minimum in the above-mentioned column MAX. According to this methodology, the optimum input level set is MB - FCFS - 8 - 4 (or 5), which agrees perfectly with the analysis performed in the above section.
S/N ratio
On the replicates described in the Minimax section we computed the S/N ratios, getting a column of 2x3³ values. We chose, as optimum input level set, the one corresponding to the maximum in the above-mentioned column. According to this methodology, the optimum input level sets are the same as those obtained in the MINIMAX analysis.
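Both selection criteria can be sketched as follows (the replicate values in the test are invented, not the simulation data of the paper):

```python
import math

def sn_smaller_better(ys):
    # S/N ratio for a smaller-the-better response:
    # -10 * log10( (1/n) * sum(y_i^2) )
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

def best_by_minimax(replicates):
    # replicates: {level_set: [output over the NF replicates]};
    # choose the level set whose worst replicate (column MAX) is smallest
    return min(replicates, key=lambda k: max(replicates[k]))

def best_by_sn(replicates):
    # choose the level set with the maximum S/N ratio
    return max(replicates, key=lambda k: sn_smaller_better(replicates[k]))
```

On well-behaved data the two criteria tend to agree, as they did in the analysis above: both penalize a level set with one very bad replicate.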
5. CONCLUSION
With all the methodologies applied in this research we have obtained a univocal solution to our problem. It may happen, however, that some interaction cannot be cut off; even so, highlighting it helps us to deal with it adequately.
REFERENCES
1. Shang, J. S.: Robust design and optimization of material handling in an FMS, Int. J. Prod. Res. (1995), 33 (9), 2437-2454.
2. Taguchi, G.: System of Experimental Design, (1987), Vol. 1 & 2, Quality Resources/Kraus & American Supplier Institute.
3. Taghaboni-Dutta, F. & Tanchoco, J. M. A.: Comparison of dynamic routeing techniques for automated guided vehicle systems, Int. J. Prod. Res. (1995), 33 (10), 2653-2669.
4. SIMAN IV Reference Guide, 1989, System Modeling Corp.
5. Garetti, M., Pozzetti, A., Bareggi, A.: An On-line Loading and Dispatching in Flexible Manufacturing Systems, Int. J. Prod. Res. (1990), 28 (7), 1271-1292.
6. O'Keefe, R. and Kasirajan, T.: Interaction between dispatching and next station selection rule in a dedicated Flexible Manufacturing System, Int. J. Prod. Res. (1992), 30 (8), 1753-1772.
7. Egbelu, P. J. & Tanchoco, J. M. A.: Characterization of automated guided vehicles dispatching rules, Int. J. Prod. Res. (1984), 22 (3), 359-374.
different models have been proposed for the optimization of single- or multi-pass operations. Geometric programming allows the analysis of the case of two-pass turning operations [1,5]; in this case the nonlinear functions that describe the optimization model can be linearized to build a simpler model, which can afterwards be analyzed with classical linear programming techniques. Another approach uses multi-objective programming to extend the optimization to multi-pass turning operations [2]. In this case the problem is approached without linearization of the functions, by simply transforming the constraint inequalities into constraint equations, which are afterwards solved assigning a well-defined precedence. These two examples represent two different applications in the solution of a Constraint Satisfaction Problem (CSP).
The constraint programming technique represents an alternative and more efficient way to solve a CSP, because it allows specific knowledge of the problem to be preserved without the need to separate the problem from its model of representation. Furthermore, this technique is characterized by some features which accelerate the search for the solutions: constraint propagation (which allows the reduction of the analysis domain), and solution search algorithms such as Tree Traversal and Backtracking. In this paper the concepts of constraint programming have been used for the parameter optimization of two-pass turning operations; the first part describes the constraint programming method and the fundamental equations that control the strategies for the selection of the optimum process conditions. Afterwards two examples are presented, in order to highlight the characteristics of the proposed method.
2 PROBLEM STATEMENT
Manufacturing is an activity which combines productive factors and product features. In chip removal processes, the manufacturing cost C_TOT of a part is composed of various items, which include the costs of the cutting phase, of the unproductive times, of the tool and of the tool change (CL, CI, CU, CCU). The total cost per piece can be expressed with the following equation [1,6]:

C_TOT = CL + CI + (CU + CCU)/Np    (1)

where Np is the number of pieces that the tool can machine before it is sharpened or replaced (Np = T/tL). Expression (1) can be made explicit as a function of the work place unit cost CP, the unproductive time tI, the tool change time tCU, the cutting time tL and the tool life T:

C_TOT = CP (tI + tL + tCU/Np) + CU/Np    (2)

The tool life follows a generalized Taylor relation:

Vc T^n f^m d^x = CT    (3)
where f, Vc and d are the cutting parameters (feed, cutting speed and depth of cut) and n, m, x and CT are constant values. The choice of the cutting parameters influences the cost equation: higher cutting speeds give, for example, lower working times, which mean a lower cutting phase cost; on the other hand the tool life is reduced, and this increases the tool costs due to wear and tool breakage. In addition to these economic considerations, the final product features are also characterized by quality and functional specifications; these features depend on the process capability and on the market requirements, which limit
the productive factors with constraints. In turning, constraints are represented, for example,
by the available machine spindle power P, the maximum radial deflection Dmax, the surface
roughness requirements Ra. The addition of the constraints modifies the initial problem,
because optimal values are obtained applying only economic considerations could not
satisfy the limitations imposed by the system. The operative parameters that effect the
analysis, the type of constraints that parameters have to respect, and the function cost to
optimize, arrange the chip removal process as a Constraint Satisfaction Problem.
3 THE CONSTRAINT PROGRAMMING
Constraint programming can analyze and solve problems which belong to the category of the CSPs [7]. In general a CSP is represented by a set of variables, their variability fields and a list of constraints; the search for the solution consists in the determination of the variable values which satisfy the whole set of imposed limitations. This CSP category is sometimes defined as a finite resources problem, because its goal is to find an optimal distribution of the resources (values of the variables which belong to the variability fields and satisfy the constraints). In the more complex case in which the CSP also has an objective function, the aim is the optimization of this function while respecting the operative limitations tied to the nature of the problem.
3.1 CONSTRAINT PROPAGATION AND REDUCTION OF ANALYSIS DOMAIN
One of the fundamental operations in constraint programming techniques is to identify, through specific knowledge of the problem, the variables which have a sensitive influence on the solution; for a complete definition of a variable of a problem it is necessary to assign its type (integer, real, boolean) and its variability range, that is, the set of values that the variable can assume. The choice of each variable needs to consider different elements, such as the required level of problem complexity and the importance of that variable in the model. It is evident that an increase in the number of variables increases the complexity, but at the same time extends the field of the analysis.
The constraints are another important element of the model; they define the existing relations among the variables. In traditional programming the constraints represent control relations, because it is first necessary to assign a value to a variable (included in its variability range), and then to verify whether this value satisfies the constraints. This method requires recursive phases of assignment and control of the variables, performed also for those values which don't respect the constraints. In the constraint programming technique, instead, the constraint assignment reduces the variability ranges automatically, without any preliminary value assignment and before the solution phase. If, for example, two integer variables x and y have the same variability field (0, 10) and must satisfy the relation x < y, the value 10 for x and the value 0 for y are not solutions of the given problem, because they don't respect the imposed constraint and thus they don't need to be considered.
The domain reduction of a variable, which is the exclusion of all the values that don't satisfy the constraints, occurs through constraint propagation, which is the extension of the domain reduction of a variable to the domains of the other ones. Let us consider for example three integer variables x, y and z, with variability range (0, 10), and two constraint equations x < y and x + y < z: the application of the first constraint reduces the x and y intervals respectively to (0, 9) and (1, 10), without effect on the z variable. The second constraint works on the new domains and involves a successive reduction of the x, y and z domains, respectively to (0, 8), (1, 9) and (2, 10). This effect of the propagation represents an important factor in decreasing the complexity of the CSP, because the domain reduction allows the solution phase to start using only variable values which respect the constraints.
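The x, y, z example can be reproduced with a small fixed-point propagation loop (an illustrative sketch, not the solver implementation used by the authors):

```python
def propagate(dom, tighteners):
    # dom: {var: [lo, hi]} integer interval domains; repeat until no
    # tightener changes a bound (a simple fixed-point loop)
    changed = True
    while changed:
        changed = any(t(dom) for t in tighteners)
    return dom

def lt(a, b):
    # tighten the bounds implied by a < b
    def t(d):
        ch = False
        if d[a][1] > d[b][1] - 1:
            d[a][1] = d[b][1] - 1
            ch = True
        if d[b][0] < d[a][0] + 1:
            d[b][0] = d[a][0] + 1
            ch = True
        return ch
    return t

def sum_lt(a, b, c):
    # tighten the bounds implied by a + b < c
    def t(d):
        ch = False
        if d[c][0] < d[a][0] + d[b][0] + 1:
            d[c][0] = d[a][0] + d[b][0] + 1
            ch = True
        if d[a][1] > d[c][1] - d[b][0] - 1:
            d[a][1] = d[c][1] - d[b][0] - 1
            ch = True
        if d[b][1] > d[c][1] - d[a][0] - 1:
            d[b][1] = d[c][1] - d[a][0] - 1
            ch = True
        return ch
    return t

doms = {"x": [0, 10], "y": [0, 10], "z": [0, 10]}
propagate(doms, [lt("x", "y"), sum_lt("x", "y", "z")])
# doms is now {"x": [0, 8], "y": [1, 9], "z": [2, 10]}
```

The loop reaches exactly the reduced domains quoted in the text.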
3.2 TECHNIQUES TO SEARCH THE OPTIMAL SOLUTIONS
The reduction of the variable domains and the constraint propagation are useful means to reduce the complexity of the model, but they are still not sufficient to solve it completely; many values of the variables could in fact potentially be solutions of the problem; moreover, the propagation of the constraints is not always able to exclude all the values which do not completely satisfy the problem. This indetermination is resolved in a following phase which uses opportune solving techniques.
The Tree Traversal technique [7,8,9] realizes, for example, a non-deterministic search of the solution, as a function of the available variables and the breadth of the variability domains; the process starts with the building of a tree structure, where the root is the variable selected for the beginning of the search, the nodes represent the other variables of the problem and the branches characterize the values assigned to the variables. Using the constraint propagation, the choice of the beginning branch determines a reduction of the domains of the residual variables; if all the constraints are respected, the whole of the actual values of the variables represents one of the solutions. If instead during the constraint propagation an inconsistency appears in the solution phase (the values of the variables don't succeed in satisfying the constraints), a phase of Backtracking follows [7,8,9]. In this case the original domains are restored, excluding the value of the variable that has determined the inconsistency, and a following value of the new domain is selected to continue the search phase. If this search has success, the solution is obtained, while in the contrary case a new phase of backtracking is activated.
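A minimal depth-first search with chronological backtracking over the reduced domains might look like this (a sketch; real solvers interleave propagation with the search):

```python
def guarded(pred, *names):
    # evaluate pred only when every variable it mentions is assigned,
    # so partial assignments are never rejected prematurely
    def c(a):
        if all(n in a for n in names):
            return pred(*[a[n] for n in names])
        return True
    return c

def backtrack(domains, constraints, assignment=None):
    # domains: {var: iterable of candidate values}; returns the first
    # complete assignment satisfying all constraints, or None
    a = dict(assignment or {})
    if len(a) == len(domains):
        return a
    var = next(v for v in domains if v not in a)
    for value in domains[var]:
        a[var] = value
        if all(c(a) for c in constraints):
            sol = backtrack(domains, constraints, a)
            if sol is not None:
                return sol
        del a[var]          # undo and try the next branch
    return None

sol = backtrack(
    {"x": range(0, 9), "y": range(1, 10), "z": range(2, 11)},
    [guarded(lambda x, y: x < y, "x", "y"),
     guarded(lambda x, y, z: x + y < z, "x", "y", "z")],
)
# -> {"x": 0, "y": 1, "z": 2}
```

Because the domains were already reduced by propagation, the first branch explored is immediately consistent.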
4 CSP IN CHIP REMOVAL PROCESSES
The optimization of chip removal processes can be treated as a CSP and then approached and solved with a constraint programming technique. Two examples are reported in this paper.
4.1 OPTIMIZATION OF TWO PASSES TURNING PROCESS
The goal is the optimization of the cutting speed and the feed in a two-pass turning operation (a roughing and a finishing pass) with a constant depth of cut in roughing (dR) and in finishing (dF). The elements of the problem are the variables (Vc and f), the constraints and the objective cost function. The hypothesized variability fields of f and Vc are respectively (0, 2) [mm/rev] and (0, 500) [m/min]. The objective cost function is obtained from equations (2) and (3):
C_TOT = K01 Vc^α01 f^β01 + K02 Vc^α02 f^β02 + CP tI    (4)
The constraints are related to the operation type and to the process (turning). In the roughing operation the feed is limited to a maximum value fmax:

f ≤ fmax    (5)

In the finishing operation the constraints are linked to the required surface roughness Ra:

K1 Vc^α1 f^β1 ≤ Ra    (6)

K2 f^β2 ≤ Ra    (7)

The constraints relative to the turning process concern the available machine spindle power P and the maximum radial deflection δmax (half of the dimensional tolerance on the diameter D), under the hypothesis of turning between a pointed centre and a counterpart centre; they are described by the following equations:

K3 Vc^α3 f^β3 d^γ3 ≤ P    (8)

K'4 Vc^α4 f^β4 (d + …)^γ4 [ Gc + Gp ((L - X)/L)² + Gcp (X/L)² + 64000 (L - X)² X² / (3 π E D⁴ L) ] ≤ δmax    (9)

where L is the turning length of the semifinished part, X is the distance of the tool from the pointed centre (hypothesized equal to L/2) and Gc, Gp and Gcp are respectively the compliances of the tool holder, of the pointed centre and of the counterpart centre [1]. The values of the constants are reported in Tables 1 and 2.
Table 1. Constant values of the constraints

n = 0.25   m = 0.29   x = 0.35
L = 500 [mm]   X = 250 [mm]   D = 50 [mm]
α1 = -1.52   β1 = 1.004   β2 = 1.54
α3 = 0.91   β3 = 0.78   γ3 = 0.75
α4 = -0.3   β4 = 0.6   γ4 = 0.9
Gc = 1.3 [mm/daN]   Gp = 0.15 [mm/daN]   Gcp = 0.75 [mm/daN]
P = 10 [HP]   K1 = 2.22x10⁴   K2 = 12.796   K3 = 5x10²   K'4 = 240

Table 2. Constant values of the objective function

dR = 4.15 [mm]   dF = 0.85 [mm]   dTOT = 5 [mm]
CU = 2000 [Lire/cut. edge]   CT = 300   CP = 400 [Lire/min]
tCU = 0.5 [min]   tps = 1.2 [min/piece]   tpf = 1.0 [min/piece]
α01 = 1   α02 = 1/n - 1   β01 = 1   β02 = m/n - 1
K01 = π L D CP / 1000
… C_TOTmin through a search in a range of C_TOTlower. This is realized with an additional constraint, which limits the C_TOT variable to the upper bound of the C_TOTlower range; the constraint propagation proceeds this time from C_TOT to the Vc and f variables, and has the goal of reducing the C_TOTlower range values.
The proposed approach has been realized using a high-level constraint programming language [9]. In the first phase the constraint assignment (5)-(9) and the domain reduction of the variables are realized using the functions IlcTell(expression1 operator expression2) and IlcSolveBounds(variables, precision), where expression1 and expression2 are respectively the analyzed function and the assigned constraint function, while operator represents the relation between the two expressions. The term precision represents the numerical approximation used during the domain reduction. The second phase is realized through an iterative procedure, which has the goal of converging toward the minimum of the C_TOTlower range value. The results obtained with this approach are reported in Table 3; in the same table the solutions obtained with the geometric programming technique [1] are also reported:
Table 3 - Optimum cutting conditions: constraint programming vs geometric programming

Constraint programming:
(C_TOT)Rough = [921.676, 922.036] [Lire]   (Vc)Rough = [218.855, 228.140] [m/min]   (f)Rough = [0.29936, 0.29964] [mm/rev]
(C_TOT)Finish = [2293.52, 2294.27] [Lire]   (Vc)Finish = [171.412, 173.973] [m/min]   (f)Finish = [0.14857, 0.14968] [mm/rev]

Geometric programming:
(C_TOT)Rough = 929.017 [Lire]   (Vc)Rough = 246 [m/min]   (f)Rough = 0.3 [mm/rev]
(C_TOT)Finish = 2729.26 [Lire]   (Vc)Finish = 188 [m/min]   (f)Finish = 0.11 [mm/rev]

(Figures 1 and 2: C_TOT plotted against cutting speed and feed over the calculated solution domains.)
Some considerations are necessary to explain the differences between the two methods. The solutions of the constraint programming are variable domains, because the search has been addressed toward the extraction of the narrowest range that contains C_TOTmin; this is represented in figures 1 and 2, which highlight the limited variability of the cost in the calculated domains of speed and feed.
All the values of Vc and f included in these intervals allow a cost belonging to the domain solution of C_TOT to be obtained, and this gives the process planner a certain flexibility in the choice of the cutting parameters, avoiding a sensitivity analysis of the solution in the case of small variations of the parameters. The domain breadth highlights the weight of the variable on the objective function C_TOT: where the domain breadth is larger, the sensitivity of the solution to changes of that variable is smaller. Another consideration is that the values obtained with geometric programming are higher than those obtained with constraint programming; this fact highlights that geometric programming finds more conservative solutions to the problem.
4.2 TOOL RELIABILITY
An important feature of the constraint programming technique is that it is easy to extend the model as the problem evolves: the addition of new variables and constraints is in fact translated into the input of new code lines in the original structure of the program. In the following example the tool reliability R(T) is introduced into the model of cost optimization of two-pass turning. The problem is solved by describing the constraints and the objective function using the three variables (Vc, f, R(T)), without the need to reduce the complexity by transforming the model into a two-variable one (for example speed and reliability for fixed values of feed [3]). The new C_TOT function takes into account the risks connected with premature tool failure using the time and cost penalties P_T and P_C [3,6].
(10)

P_R = (C_p·P_T + P_C)·(1 − R(T))/N_P    (11)

The introduction of the reliability variable R(T) presupposes knowledge of the statistical model which best describes the tool failure behavior; in the example the Weibull distribution has been used, with a null threshold parameter, and β and μ as shape and scale parameters (R(T) = exp(−(T/μ)^β)). On the basis of these hypotheses it is possible to express the expected N_P value and the expected tool life t_R with a given reliability:

t_R = T·exp[ln(ln(1/R))/β] / Γ(1/β + 1)    (12)

t_u = (t_R + [t_R]_{R=0.999}) / 2    (13)

N_P    (14)
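Equations (12) and (13) can be evaluated directly. The sketch below assumes that T denotes the mean tool life, so that the Weibull scale parameter is μ = T/Γ(1/β + 1); function and argument names are illustrative.

```python
import math

def t_reliable(T, beta, R):
    """Eq. (12): expected tool life reached with reliability R, assuming
    T is the mean tool life and beta the Weibull shape parameter.
    exp(ln(ln(1/R))/beta) is written equivalently as ln(1/R)**(1/beta)."""
    return T * math.log(1.0 / R) ** (1.0 / beta) / math.gamma(1.0 / beta + 1.0)

def t_u(T, beta, R):
    """Eq. (13): mean of t_R and its value at R = 0.999."""
    return 0.5 * (t_reliable(T, beta, R) + t_reliable(T, beta, 0.999))
```

As expected, demanding a higher reliability R shortens the usable tool life t_R.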
5 CONCLUSIONS
The optimization problems of cutting parameters in chip removal processes have been approached and solved by analyzing the model as a CSP and applying a constraint programming technique. The use of this method has highlighted some advantages with respect to other techniques, such as the coincidence of the problem with its representation model and the simplicity of implementation and extension. Peculiar characteristics of the proposed method are the reduction of the analysis domain using constraint propagation and backtracking; these techniques have made it possible not only to find the solution intervals of the variables which optimize the objective cost function, but also to weigh the sensitivity of the solution to each variable.
REFERENCES
1. Galante G., Grasso V., Piacentini M.: Ottimizzazione di una lavorazione di tornitura in presenza di vincoli, La Meccanica Italiana, 179 (1984), 45-49
2. Galante G., Grasso V., Piacentini M.: Approccio all'ottimizzazione dei parametri di taglio con la programmazione multiobiettivo, La Meccanica Italiana, 181 (1984), 48-54
3. Dassisti M., Galantucci L.M.: Metodo per la determinazione della condizione di ottimo per lavorazioni ad asportazione di truciolo, La Meccanica Italiana, 237 (1990), 42-51
4. Kee P.K.: Development of constrained optimisation analyses and strategies for multi-pass rough turning operations, Int. J. Mach. Tools Manufact., 36 (1996) 1, 115-127
5. Hough C.L., Goforth R.E.: Optimization of the second order logarithmic machining economics problem by extended geometric programming, AIIE Transactions, 1981, 151-15
6. Bugini A., Pacagnella R., Giardini C., Restelli G.: Tecnologia Meccanica - Lavorazioni per asportazione di truciolo, Città Studi Edizioni, Torino, 1995
7. Smith B.M., Brailsford S.C., Hubbard P.M., Williams H.P.: The Progressive Party Problem: integer linear programming and constraint programming compared, School of Computer Studies Research Report Series, University of Leeds (UK), Report 95.8, 1995
8. Puget J.F.: A C++ Implementation of Constraint Logic Programming, Ilog Solver Collected Papers, Ilog tech report, 1994
9. ILOG Reference Manual - Version 3.0, Ilog, 1995
K. Mertins et al.
1. INTRODUCTION
Today's automotive market is characterized by global competition. This situation forces car manufacturers around the world to redesign their corporate structures and to go new ways in performing all enterprise functions.
In the field of production, conceptual headlines like "Lean Production", "Agile Manufacturing" and "Market-In Orientation" reflect the requirements for new production management and control systems. Reducing the costs of production, increasing the stability of production against disturbances, and in-time delivery to the customer are the top goals to be reached. The high complexity of the production processes and logistics of automotive production requires extremely flexible mechanisms for production planning and control tasks to fulfill these requests.
During the last years IPK has developed new strategies for production planning and control which meet the special requirements of different types of production. Conducting industrial projects as well as international research projects, IPK is able to combine publicly funded research with experience from industrial cooperation. Thus research is driven by the actual needs of industrial partners, and industrial projects are based on the latest research results.
In 1995 IPK started a project with a car manufacturer from Asia. The target of the project was to develop a future-oriented production management system for a new plant in Asia. Based on an analysis of the actual state of production management systems in automotive industries and a detailed goal determination for the new system, in the first step the project partners designed a concept for a "pull-oriented" production management system which includes the planning of the production sequence, the production control, as well as the material handling tasks for body, paint and trim shop. In the next step a system specification was written as the basis for the system realization, which was completed at the beginning of this year. During the system design and realization phase a prototype and simulation system was implemented to support the development process and to evaluate the benefits of the new system [1].
2. THE TRADITIONAL APPROACH: PUSH-ORIENTATION
In today's automotive industries the common way is to look upon the production system as
one continuous production line from body to trim shop. This philosophy finds its
expression in one central production planning procedure for all shops (figure 1).
[Figure 1: central sequence planning and sequence control across body shop, WBS, paint shop, PBS, trim shop and yard]
Sequence Planning
Based on a daily production plan, which includes the customer orders to be produced during a certain day, the sequence planning creates the production sequence. This sequence determines the order in which the customer orders are processed in the production line. The goal of this planning procedure is to calculate the most efficient sequence for the total production process. So the sequence planning performs a global optimization of the daily production based on the criteria of the different production areas:
Trim Shop
For the assembly process at the trim shop the sequence planning has to consider the aspects of workload leveling (spreading of heavy-option cars within the sequence) and smoothing of part consumption to minimize the line-side buffer inventory.
Paint Shop
The main criterion for the optimization of the painting process is to minimize the costs for color changes in the painting booths.
As a first result of the planning procedure, the production sequence as a list of customer orders is fixed to initiate the production process at the point of "Body Shop In". Each customer order refers to a certain car specification which represents the instructions for the total production process from body to trim shop. After the first body shop operations (usually after "floor completion") each unique body, identified by a Vehicle Identification Number (VIN), is assigned to one customer order. This assignment, called christening, applies to the whole production and provides the basis for all further control actions.
Sequence Control
The main idea of sequence control in the traditional production management system can be summarized under the headline "Keep the Sequence". Based on the planned production sequence which triggers the production start at "Body Shop In", the main task of sequence control is to compensate possible sequence disturbances. For this reason the "White Body Storage" (WBS) between body and paint shop and the "Painted Body Storage" (PBS) between paint and trim shop are used to recover the originally planned sequence as the input of the succeeding shop. All these control actions are performed on the fixed assignment of a unique body (VIN) to a certain customer order.
Basically the traditional, push-oriented approach can be characterized by:
Basic Strategy
For this reason the IPK Berlin developed a concept for production management in automotive industries which focuses on decentralized, short planning and control loops.
The initial step of the development was to change the point of view on the car building process. Instead of looking upon the production as one continuous line, the project team defined the different shops as autonomous production units. As a result of this segmentation the new system includes separate planning and control procedures for body shop, paint shop and trim shop. To unlink the different shops into independent planning and control units, "shop products" were defined for each segment. In the new system the objects of the body shop's planning and control actions are the different body types to be produced, the paint shop is dealing with paint types as a combination of a body type and a color, and the trim shop products are defined by a paint type plus trim shop options. As a further step to achieve more autonomous and flexible units, a combined system of multiple christening and customer-anonymous production has been designed. In order to keep the flexibility of production management as high as possible, a basic philosophy of the new system is to deal with product types instead of unique customized cars as long as possible. For example, the body shop produces bodies of different body types. For the production management tasks it does not matter whether a certain body will be painted in red or blue at the paint shop. The body shop only needs the information which body types it has to produce. So the first step of the multiple christening is the identification of a body type. After the paint shop production management has decided the color for a certain body, the next christening point determines the paint type of the car. The last christening point is the point where the "classic" christening takes place. Instead of at the beginning of the body shop, the connection between customer (buyer) and a certain vehicle can be established at the "sign-off" as the last station of the trim shop. The implementation of customer/supplier relations between the different shops represents the main philosophy for the "shop-to-shop" communication within the new system. Each shop orders the preceding shop to deliver a certain amount of its different shop products at a certain point of time (figure 2).
[Figure 2: pull-oriented shop-to-shop ordering across body shop, WBS, paint shop, PBS, trim shop and yard, with the planned B/S, P/S and T/S sequences]
Sequence Planning
Based on the above described basic strategies for the new production management system, the appropriate logic for sequence planning and control was developed. The sequence planning generates optimal production sequences for trim, paint and body shop based on the demand determined in the daily production plan (figure 3). The sequence planning procedure is performed in the following main steps:
1. Based on the daily production plan, which includes the trim type of the ordered car for each customer order to be processed on the certain day, the optimal trim shop production sequence is created by taking all relevant trim shop criteria for workload leveling and smoothing of part consumption into consideration.
2. The generated trim shop sequence is divided into several time buckets. Each bucket contains a defined amount of bodies, which depends on the cycle time at the assembly lines and the capacity of the PBS. Considering only the paint type of each car, each bucket represents one order for the paint shop.
3. After the paint shop has received the orders from the trim shop, the paint shop production sequence of paint types is calculated. The sequence planning of the paint shop production management now rearranges the order of bodies corresponding to paint criteria for color grouping. In order to fulfill the demand of the trim shop, the paint shop sequence planning is only applied within the buckets.
4. After the paint shop production sequence is created, the orders for the body shop (containing the needed body types within a body shop time bucket) are derived.
5. The body shop sequence planning calculates its optimal sequence by rearranging the order of bodies within the bucket borders. This sequence initiates the production at the body shop.
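The five steps can be sketched in miniature as follows. The field names, the trivial trim-shop ordering, and plain sorting by paint or body type as stand-ins for the real leveling and color-grouping criteria are all illustrative assumptions, not the system's actual logic.

```python
def plan_sequences(daily_plan, bucket_size):
    """daily_plan: list of orders, each a dict with illustrative keys
    'trim', 'paint', 'body'. Returns (trim_seq, paint_seq, body_seq)."""
    # step 1: trim shop sequence (here simply the plan order; a real
    # system would level workload and smooth part consumption)
    trim_seq = list(daily_plan)
    # step 2: split the trim sequence into time buckets
    buckets = [trim_seq[i:i + bucket_size]
               for i in range(0, len(trim_seq), bucket_size)]
    # step 3: the paint shop rearranges only WITHIN each bucket,
    # grouping paint types to reduce color changes
    paint_seq = []
    for bucket in buckets:
        paint_seq.extend(sorted(bucket, key=lambda o: o['paint']))
    # steps 4-5: body shop orders are derived per bucket and again
    # rearranged only within the bucket borders
    body_seq = []
    for i in range(0, len(paint_seq), bucket_size):
        body_seq.extend(sorted(paint_seq[i:i + bucket_size],
                               key=lambda o: o['body']))
    return trim_seq, paint_seq, body_seq
```

Because each shop reorders only inside bucket borders, every bucket reaches the succeeding shop with exactly the cars it was promised.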
[Figure 3: sequence planning based on the daily production plan. Legend: T/S Sequence: planned sequence for Trim Shop; P/S Sequence: planned sequence for Paint Shop; B/S Sequence: planned sequence for Body Shop]
The main application fields of sequence control are the buffers between the different
production units (White Body Storage (WBS) between body and paint and Painted Body
Storage (PBS) between paint and trim shop). The sequence control compensates disturbances of the production process in the preceding shop to prevent negative effects on the succeeding shop's production. Corresponding to the concept of multiple christening and shop products, the basic control logic can be described as follows. Based on the actual inventory of bodies in the buffer (e.g. WBS), the control algorithm selects a body of a product type (e.g. body type) which is required to be introduced next in the succeeding shop (e.g. paint shop) to fulfill the planned production sequence of this shop. While determining the next body to be processed, the according step of christening takes place automatically. To react to major disturbances of production, the sequence control provides supporting functionality for decentralized re-planning of the affected part of the planned sequence.
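This buffer logic can be sketched with illustrative data structures (not the system's actual interfaces); binding the chosen body to the planned order corresponds to the christening step.

```python
def select_next_body(buffer, planned_orders):
    """From the actual buffer inventory (e.g. the WBS), pick a body of
    the product type that the succeeding shop's planned sequence
    requires next. Field names are illustrative."""
    required = planned_orders[0]['type']
    for i, body in enumerate(buffer):
        if body['type'] == required:
            chosen = buffer.pop(i)
            # the according christening step happens automatically here
            chosen['order'] = planned_orders.pop(0)['id']
            return chosen
    return None  # required type missing: trigger decentralized re-planning
```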
4. CONCLUSION
Today's automotive production requires a flexible production management system to react to unexpected events, which are a natural part of a more and more complex and unstable production environment. Traditional, centralized systems which are based on a strict line philosophy and a push-oriented strategy of sequence planning and control cannot fulfill these requirements. The IPK Berlin developed, in cooperation with a car manufacturer from Asia, a new type of production management system for automotive industries. The system uses a pull-oriented approach which is based on customer/supplier relations between different, autonomous production segments. Each production unit performs sequence planning and control actions independently, regarding the orders from the succeeding segment as the customer.
The system was implemented and is now in the phase of on-line testing at a new plant in Asia. Results of simulation runs performed during the development phase as well as first results from the plant tests have shown that the new system is very robust against disturbances of the production process. As a result the correspondence between the planned and the actually produced sequence increases. Significant benefits are a higher reliability of the production for after-assembly services and the possibility to reduce line-side buffer and warehouse inventory of parts.
REFERENCES
[1] Mertins, K.; Rabe, M.; Albrecht, R.; Rieger, P.: Test and Evaluation of Factory Control Concepts by Integrated Simulation. Accepted for 29th ISATA'96, Conference Proceedings, Florence, Italy, 1996.
[2] Mertins, K.; Rabe, M.; Albrecht, R.; Beck, S.; Bahns, O.; La Pierre, B.; Rieger, P.; Sauer, O.: Gaining certainty while planning factories and appropriate order control systems - a case study. Proceedings Seminar CAD/CAM'95, Bandung, 1995, p. 8B1 - 8B20.
D. Benic
University of Zagreb, Zagreb, Croatia
Production planning and control (PPC) is the essence of manufacturing management. The traditional view of manufacturing considers: man, machine, material and money. The purpose of the management is to determine the quantities and the terms at which some resource must or could be available. This is the basis for material resource planning (MRP) as the foundation for planning the operations and business activities. Some resources will be common to several activities (material), and some activities share or attach to the same resources (man or machine). The consequence of sharing resources is an overrun of the production lead time, because there are not enough free resources to enable a continuous job flow through the system. Traditional manufacturing management anticipates that problem, but does not provide an appropriate solution. It results in the consequence that
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
most MRP systems are not completely implemented in manufacturing, which leads to various strategies of manufacturing management (MRP II, OPT, JIT, etc.).
Changes in manufacturing demand appropriate conceptual and practical answers that enable all the advantages of the increasing performance of modern technologies. The paper considers some solutions related to an intelligence-based framework for solving some tasks in PPC. The emphasis is on production planning and scheduling in a tough methodological way that enables efficient decision support for use in the concept of intelligent manufacturing. The concept of intelligent manufacturing is the frame that couples process planning and manufacturing management with recent MS/OR methods and advances in AI. In addition to the traditional view of management, new factors are considered: method and time (figure 1). Some of the systems that enable some aspects of 'intelligent' manufacturing are given in [1], [2], [3] and [4]. This paper presents some results of the research that focuses on the conceptual and methodological aspects of modelling 'intelligent' decision support.
[Figure 1: traditional manufacturing vs. intelligent manufacturing]
[Figure: decision making - improvement - the problem]
The intelligent framework in manufacturing management that enables continuous and never-ending process improvement (Deming cycle, figure 3) is similar to the natural process of genetic adaptation. Such a framework, which came from the biological sciences, can be presented as a never-ending loop that is 'the wheel' of natural selection. This kind of optimisation is called the algorithm of genetic adaptation. It is an optimisation program that starts with some encoded procedure, mutates it stochastically, uses a selection process to prefer the mutants with high fitness, and perhaps a recombination process to combine properties of (preferably) the successful mutants.
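A minimal sketch of such a loop, assuming an illustrative bit-string encoding and a simple "onemax" fitness; population size, mutation scheme and stopping rule are arbitrary choices, not taken from the paper.

```python
import random

def genetic_adaptation(fitness, length=16, pop_size=20, generations=60, seed=1):
    """Loop of selection, stochastic mutation and recombination
    (stopped here after a fixed number of generations)."""
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection prefers high fitness
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rnd.sample(survivors, 2)
            cut = rnd.randrange(1, length)
            child = a[:cut] + b[cut:]            # recombination of successful mutants
            child[rnd.randrange(length)] ^= 1    # stochastic mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# example: evolve bit strings toward all ones ("onemax" fitness)
best = genetic_adaptation(sum)
```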
[Figure 4: the management of manufacturing tasks - manufacturing processes (material planning, capacity planning, scheduling, group technology / layout / FMC / FMS; material and information flows; money, energy, product, refuse) and planning methods (operational research: linear programming, integer programming, dynamic programming, queuing theory, discrete simulations; heuristics; artificial intelligence: constraint-based reasoning, fuzzy logic, neural networks, genetic algorithms)]
Holism is the key to success in manufacturing planning and control that guarantees acceptable solutions. Clear modelling means that the conceptual model is observed by focusing the model on narrow sub-areas that are closely connected with the system. Intelligent decision support is a superior framework for solving a wide range of combinatorial problems. All these problems can be represented as network systems that consist of nodes and arcs. The network entities are in direct and meaningful connection with the specific problem, especially with problems in manufacturing and transportation. Nodes in the network always represent some stationary or mobile resources (machines, robots, buffers, conveyors, vehicles, palettes, parts, jobs, etc.). Arcs represent their connections in the most common sense (material flow, transportation routes, etc.). The problems that can be solved are closely related to logistics: (i) the problem of finding the shortest (longest) path in the network, (ii) the general transportation problem, (iii) the assignment problem and (iv) the general scheduling problem. Compared to the traditional MS/OR and the usage of related methods (figure 4), an approach that uses AI is superior in solving combinatorial problems.
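For problem (i), finding the shortest path in such a node/arc network, a standard sketch using Dijkstra's algorithm (the dictionary encoding of arcs is an illustrative choice):

```python
import heapq

def shortest_path(arcs, source, target):
    """Dijkstra's algorithm on a node/arc network.
    arcs: dict node -> list of (neighbor, weight). Returns
    (total weight, path) or (None, []) if target is unreachable."""
    heap, visited = [(0, source, [source])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == target:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight in arcs.get(node, []):
            if nxt not in visited:
                heapq.heappush(heap, (dist + weight, nxt, path + [nxt]))
    return None, []
```

Nodes here may stand for machines or transport stages, and weights for distances or transportation intensities.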
3. THE FRAMEWORK OF INTELLIGENT DECISION SUPPORT
The large family of AI methods is probably the most promising tool for solving the problems of manufacturing planning and control. In this sense artificial neural networks, constraint-based reasoning (CBR) and genetic algorithms (GA) are applicable.
Artificial neural networks are computing systems that process information by their dynamic state in response to external inputs, and they have proven an effective approach for solving problems. One reason for their effectiveness is their surprising ability to serve as a substitute for heuristics. It is probably related to the fact that they are a natural basis of intelligence. The possibility of learning is a natural aspect of intelligence, and so it is of neural networks too. But when using them in cases where practically every problem is a new one, it is more than unreasonable to use all their capabilities. By disabling their capability to learn, they become a good source of algorithms that can give reasonable initial solutions. This finally results in a capability that stimulates a great deal of interest and leads to their usage in a wide range of combinatorial problems.
CBR is a powerful AI tool for solving problems of combinatorial optimisation and is based upon the constraint propagation technique as a process that requires a deductive framework. The clauses that declare constraint variables and their domains, as well as the clauses defining the constraints, are logical premises. The constrained values of the variables comprise the consequences of constraint propagation. Changes in domains trigger the propagation procedure, and the system maintains the consequences of deductions and the results of constraint propagation through the same premise-consequence dependencies. This scheme can retrieve and explain the results of constraint propagation as well as the consequences derived by the rule system. A practical way of applying the technique is to formulate the knowledge representation in first-order logic - logic that deals with the relationship of implication between assumptions and conclusions (IF assumption THEN conclusion). When solving the problem by the top-down mechanism, reasoning proceeds backward from the conclusion and repeatedly reduces goals to subgoals, until eventually all subgoals are solved directly by the original assertions. The bottom-up mechanism reasons forward from the hypotheses, repeatedly deriving new assertions from old ones, until eventually the original goal is solved directly by derived assertions. These kinds of problem solving approaches enable the system to exhibit some basic aspects of intelligence. The system is capable of explaining why a specific solution is selected. Also, the system can give an explanation of how the specific solution was found.
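The top-down mechanism can be sketched as a small backward-chaining routine. The encoding of rules and the sample facts are illustrative assumptions:

```python
def backward_chain(goal, rules, facts):
    """Top-down reasoning: repeatedly reduce the goal to subgoals until
    all are solved directly by the original assertions (facts).
    rules maps a conclusion to a list of alternative assumption lists
    (IF assumptions THEN conclusion)."""
    if goal in facts:               # solved directly by an assertion
        return True
    for assumptions in rules.get(goal, []):
        if all(backward_chain(g, rules, facts) for g in assumptions):
            return True             # every subgoal of this rule is solved
    return False
```

Recording which rule closed each subgoal would give the explanation facility described above.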
Applied to manufacturing, the algorithm of genetic adaptation is closely related to a system that learns and solves tasks through built-in decision procedures. Even though the level of 'intelligence' in today's manufacturing systems is close to the 'intelligence' of the most primitive organisms in nature, many tasks can be automated by appropriate procedures that consist of one or several methods that enable the 'intelligence'. The possibility of real-time (optimal) control seems important because it enables the system to react quickly to unpredictable disturbances. Another aspect that enables intelligent manufacturing is the hierarchy of the system functions and decision procedures as instances of a complex scheme of decision support. Also, only the concept of distributed management and control enables true simplicity and efficiency, resulting in increased effects for the system as a whole and for the system units as instances that solve specific tasks. The key to success lies in simplicity and is placed in the triangle of 'intelligent' unit, 'intelligent' method and holism.
4. THE SCHEDULING PROBLEM
Scheduling is the problem of appropriate operation sequencing for each product that enables successful completion of the time-phased production plan. It implies assigning specific operations to specific operating facilities with specific start and end times, aiming at a high percentage of orders completed on time, high utilisation of facilities, low in-process inventory, low overtime and low stockouts of manufactured items. The new attitude to product and component design that concerns concurrent engineering lies in the idea of schedulability as the key that enables a good schedule [5]. The design rules that enable good schedulability are: (i) minimising the number of machines involved in a process, (ii) assigning parts/products to machining/assembly cells, (iii) maximising the number of parallel operations, (iv) maximising the number of batches assigned to parallel machines and (v) allowing for the usage of alternative manufacturing resources.
The solution that we describe is based upon the usage of neural networks and CBR. A bi-directional Hopfield neural network (BHNN) identifies possible bottlenecks in work sequences and facilities. It produces the solution for the simple scheduling problem where a set of i jobs must be distributed to j facilities. Every j-th facility can be assigned to every i-th job and, generally, there are i·j possible alternatives. Each alternative represents one possible weighted connection between the input and output layer. Neurones in both levels are directly and recurrently connected between the output and input layer. Each output layer neurone computes the weighted sum of its inputs and produces an output signal v_i that is then operated on by a simple threshold function to yield the input signal v_j. To produce the best possible solutions, the system enables several strategies for determining the output node (min e(j), max e(j), min-max e(j) and max-min e(j)) and only one strategy, min e(i), to determine the input node. The e(j) represents the network energy function:

e(j) = Σ_i v_i·w_ij    (1)
The method guarantees efficiency in finding the initial solution (practically in one moment), which is then improved by the IF-THEN production rules that give the initial sequences:

IF the last job i at workplace j determines the lead time THEN
  IF job i has an alternative k at some other workplace and k ≠ j THEN
    IF rearranging the i-th job to workplace k minimises the lead time THEN
      rearrange the job and calculate the new lead time    (2)

The purpose of procedure (2) is to balance the utilisation of the facilities in each sequence and to reduce the sequence lead time as much as possible.
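A rough sketch of this two-phase idea: a greedy initial assignment in the spirit of the min e(j) strategy (a plain loop standing in for the Hopfield network), improved by a rule-(2)-style move of a job away from the facility that determines the lead time. All names are illustrative.

```python
def schedule(times):
    """times[i][j] = processing time of job i on facility j.
    Returns (assignment dict job -> facility, lead time)."""
    n_fac = len(times[0])
    load = [0.0] * n_fac
    assign = {}
    # phase 1: greedy initial solution - place each job where it adds
    # the least to the finishing load (min e(j) in spirit)
    for i, row in enumerate(times):
        j = min(range(n_fac), key=lambda k: load[k] + row[k])
        assign[i] = j
        load[j] += row[j]
    # phase 2: rule (2) style improvement - move a job off the facility
    # that determines the lead time whenever an alternative shortens it
    improved = True
    while improved:
        improved = False
        bottleneck = max(range(n_fac), key=load.__getitem__)
        for i, j in assign.items():
            if j != bottleneck:
                continue
            for k in range(n_fac):
                if k != j and max(load[j] - times[i][j],
                                  load[k] + times[i][k]) < load[bottleneck]:
                    load[j] -= times[i][j]
                    load[k] += times[i][k]
                    assign[i] = k
                    improved = True
                    break
            if improved:
                break
    return assign, max(load)
```

Each accepted move strictly lowers the bottleneck load, so the improvement loop always terminates.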
The production system as an intelligent simulation allows description of the specific knowledge and represents not only the objects being reasoned about but rules as well. This kind of simulation enables assigning additional preferences, such as activities connected with a specified facility, and fixing its start time, duration and/or end time. Two classes of rules take actions that reduce the lead time. The first one rearranges the orders of the activities:

IF (the R-th facility is a bottleneck in the (k+1)-th sequence) .and. (the activities of the R-th facility in the k-th sequence are distributed over various other facilities) THEN (rearrange the order of activities at the R-th facility by the end times of the activities in the previous (k+l)-th sequence, where l = 0, 1, 2, ..., N)

The second one consists of rules that enable reducing the slack times between sequences:

IF t_RT^k < t_RO^{k+1}    (3)
THEN t_RP^{k+1} = max{t_RT^k, t_RO^k + d_RO^k}    (4)

where t_RT^k is the maximal time T in the k-th sequence at the R-th facility, t_RP^k is the start time P of the k-th sequence at the R-th facility, t_RP^{k+1} is the start time P of the (k+1)-th sequence at the R-th facility, t_RO^k is the start time of the O-th activity in the k-th sequence at the R-th facility and d_RO^k is the duration of the O-th activity in the k-th sequence at the R-th facility. The main system procedure is an event-driven program. Rule (3) causes an event that triggers the following rules:
IF t_RO^k has changed THEN the end time of the O-th activity in the k-th sequence at the R-th facility is changed

IF the R-th facility in the k-th sequence has more than one activity THEN the start times and end times of all other activities in the k-th sequence at the R-th facility are changed using (3) and (4)

The main system procedure, proceeding successively from the end sequence to the begin sequence, rearranges the orders of activities at each facility by the previous rules and constructs a feasible solution that guarantees (at least) a good solution of the scheduling problem. Because the best possible solutions from the first phase are used, the system saves computer time by reducing the number of simulation experiments. A traditional system with four sequences and four jobs, where each job in each sequence can be performed by every one of four facilities, requires up to 1024 experiments for each priority strategy or their combination. Our system requires only one experiment, with no priorities concerned.
5. THE TRANSPORTATION PROBLEM
The transportation problem is an important aspect that must be seriously considered when developing practical solutions for 'intelligent' manufacturing. In this sense the task of finding the shortest path in the network is of great importance, because the claim is real-time control.
The solution we propose uses an AI production system with BHNN and CBR in a tough methodological frame capable of quickly identifying a good (in most cases optimal) solution. The networks that represent the transportation problems consist of nodes and arcs, where nodes represent the stages and arcs represent the possible transportation routes. Each node represents one stage of the transportation route and is connected, with weightings w_ij (transportation flows), with all other j nodes including the node itself (i = j). A node at the k-th stage is connected with each of the j nodes in the (k+1)-st stage and with each of the i nodes in the (k-1)-st stage. In this way there are i·j alternatives through the k-th stage. The complete network representation requires a coefficient that represents the current state of node activities. For the input signal v_i this state is fixed at level -1 and for the output signal v_j at level 1. The calculation of the energy level for each neurone follows (1) and corresponds with the energy activation function of the neural network, where w_ij connects the i-th and j-th neurones. The v_i value is 1 if the connection exists or 0 if the connection does not exist. To produce the initial solution, the system enables four strategies, as explained in the section that describes the solution of the scheduling problem. Backpropagation starts with the node that represents the output layer and the exit from the system. In this sense, the number of network layers depends on the number of nodes (transportation stages) and the transportation routes. The input network layer is the node that represents the input to the system. The procedure guarantees high efficiency in finding the initial solution for a system with one input and one output. The initial solution is then easy to improve by IF-THEN rules:
D. Benic
IF (two nodes on the transportation route that are not directly connected
have a direct connection) AND (the value of the transportation intensity function
is less than that along the transportation route) THEN
rearrange the transportation route by using this direct connection          (5)

IF (two nodes on the transportation route are directly connected) AND (the value of the
transportation intensity function is less when using an indirect route through one or
several nodes that are not on the transportation route) THEN
rearrange the transportation route by using this network connection         (6)
The iterative improvement of all initial solutions prevents the system from getting stuck in some local optimum and maximises the probability that the selected solution is the global optimum. Even though only formal mathematical programming can validate the optimality of the solution, the system guarantees that in most cases optimal solutions can be achieved very quickly.
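Rule (5) above can be sketched in a few lines of Python. The graph, arc weights and route below are hypothetical illustrations, not data from the paper; rule (6), the symmetric detour case, would be handled analogously.

```python
def route_cost(route, w):
    """Total transportation intensity along a route, given arc weights w."""
    return sum(w[(u, v)] for u, v in zip(route, route[1:]))

def improve_route(route, w):
    """Rule (5): if two non-adjacent nodes on the route have a direct arc
    that is cheaper than the partial route between them, take the shortcut."""
    improved = True
    while improved:
        improved = False
        for s in range(len(route) - 2):
            for e in range(s + 2, len(route)):
                direct = w.get((route[s], route[e]))
                via = route_cost(route[s:e + 1], w)
                if direct is not None and direct < via:
                    route = route[:s + 1] + route[e:]   # apply the shortcut
                    improved = True
                    break
            if improved:
                break
    return route

# Hypothetical three-stage network: the direct arc A->C beats A->B->C.
w = {("A", "B"): 2, ("B", "C"): 2, ("A", "C"): 3}
print(improve_route(["A", "B", "C"], w))   # -> ['A', 'C']
```

Repeating the scan until no rule fires mirrors the iterative improvement described above.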
6. CONCLUSION
The paper presents some ideas and results concerning the possibility of implementing AI in solving some problems of manufacturing planning and control. The results point to more than usable ideas, principles, methods and practical solutions that can easily fit into manufacturing management decision support. A framework with such methods is superior to the traditional MS/OR one, because it quickly produces reasonable (in most cases optimal) solutions. That is why it is applicable in real-time control of manufacturing systems and can easily fit into computer-assisted decision support. The structure of a solution that enables such intelligence-based support must be hierarchical, distributed and modular, to provide efficient decision support for all manufacturing units. Also, the framework of CBR enables some basic aspects of intelligence, such as the capability to explain why and how a specific solution is selected.
REFERENCES
1. Kusiak A.: Intelligent Manufacturing Systems, Prentice-Hall, 1990
2. Meng C.-C., Sullivan M.: LOGOS - A Constraint-directed Reasoning Shell for Operations Management, IEEE Expert, 6 (1991) 1, 20-28
3. Karni R., Gal-Tzur A.: Frame-based Architectures for Manufacturing Planning and Control, AI in Eng., 7 (1992) 3, 63-92
4. Bugnon B., Stoffel K., Widmer M.: FUN: A dynamic method for scheduling problems, EJOR, 83 (1995) 2, 271-282
5. Kusiak A., He W.: Design of components for schedulability, EJOR, 76 (1994) 1, 49-59
6. Underwood L.: Intelligent Manufacturing, Addison-Wesley, 1994
7. Freeman J.A.: Simulating Neural Networks with Mathematica, Addison-Wesley, 1994
8. Freuder E.C., Mackworth A.K.: Constraint-based Reasoning, The MIT Press, 1994
9. Benic D.: A Contribution to Methods of Manufacturing Planning and Control by Artificial Intelligence, Ph.D. Thesis in manuscript, University of Zagreb
T. Mikac
University of Rijeka, Rijeka, Croatia
KEY WORDS: Manufacturing system concept, Manufacturing system planning, Computer aid.
ABSTRACT: The process of manufacturing system planning is a very complex activity: the great quantity of data, the planning frequency, the complexity of manufacturing system models, the need for a shorter time of project elaboration and other factors of influence create a need to develop computer program support for planning. A particularly important phase of the planning process is the early phase of the project's elaboration, which represents the basic support for further detailed elaboration of the project. In this phase, for reasons of speed as well as of quality of the project concept, it is necessary to use an organized programming aid. In this paper, a software program for manufacturing system planning (PPS) is described. It was developed and tested on examples, and represents an efficient means for a more rational, creative and higher-quality planning activity.
1. INTRODUCTION
The present moment is characterized by an intensive development of science; in the field of industrial production the market changes are strong and frequent, and so the problem arises of planning adequate manufacturing systems (MS) which would satisfy those demands [1,2,3].
The complexity of the planning process itself is increased by the higher planning frequency, by manufacturing systems which are becoming more complex, by the need for a shorter project elaboration time, and by the demand for a high-quality project solution, reflected in the choice of objective criteria and of an optimal solution with minimal differences between planned and real characteristics.
Particularly important here is the early phase of planning, which refers to the definition of an approximate global project concept [4]; the realization of an adequate program aid for the activities related to this planning phase therefore contributes to the efforts of finding adequate answers to the problem described.
2. PPS SOFTWARE PROGRAM CONCEPT
A developed methodology for MS planning is used to achieve the following effects:
- an increase in the quality of project solutions
- a shortening of project elaboration time
- increased consistency of data and information, and the possibility to generate more variants of solutions for segmenting the manufacturing program assortment
- the joining of an adequate model of a basic manufacturing system (BMS) to the generated groups of products, with regard to the assortment, the quantity of applied manufacturing equipment and its layout, already in the early phase of planning
- cutting down planning costs through increased planning speed, productivity and quality in the early phase of planning
- improved information flow, relieving the planner of routine activities
The concept of the PPS software program aid is based upon:
- an adequate data-base
- numerical data processing
- interactive activity of the planner
Managing the activities related to the MS concept definition with the PPS program implies the use of commands that proceed partly automatically and partly through the interactive activity of the planner. The structure of the data-base makes simple and continual extension possible, as well as application on smaller computer configurations (PCs).
The interactive activity enables the planner to use his creativity and experience by choosing those global input parameters of the project task which are not automatically comprised in the program's algorithm. Thus it is possible to change project tasks and parameters, to realize a number of alternative solutions for groups of workpieces as segments of the total manufacturing program assortment, and to make an adequate choice of an associated BMS model as a basic module for a complex MS.
The computer aid is developed in the Pascal programming language, and the fundamental information flow of the PPS software program is represented in figure 1.
[Figure 1. Fundamental information flow of the PPS software: the data-bases (workpieces data, technological process data, production equipment data, BMS models data) and the project global parameters feed the Planning of Production System software, which is driven by command orders and produces reports and summary results.]
[Structure of the data-bases: PRODUCTION EQUIPMENT (code, price, amortisation time, layout, netto and brutto layout area); TECHNOLOGICAL PROCESSES (workpiece name, workpiece code, operation number); MANUFACTURING ASSORTMENT (name, code, annual quantity).]
[Equations (1)-(5), which define the operation times t_rij, the required quantities of manufacturing equipment s_rij and s_ic, and the corresponding levels of equipment exploitation, are not legible in the scanned source.]
The levels of exploitation, regarding the necessity of optimal grouping of adequate BMS models, differ by detailed criteria with regard to production control organization, that is, the workpieces' flow through the system, into line and return levels. Thus four levels of manufacturing equipment exploitation are identified, which serve not only as a workpiece grouping criterion but also as a basis for the future joining of BMS models to the groups so formed.
3.3. JOINING OF A BMS MODEL
To each of the workpieces groups formed by segmenting the assortment according to similarities of design, technological and productive characteristics, suitable BMS models, adequately formulated for this planning phase and characterized by a limited number of input parameters of the project task, are joined automatically by the computer.
The models are systematized with regard to a larger number of criteria: the distinction between line and return models, based on the way the manufacturing process is controlled and expressed by the possible number of operations on one manufacturing capacity; the quantity of manufacturing equipment in the BMS; the number of workpieces processed on the BMS; the characteristics of the workpieces' flow through the system, expressed indirectly by the correlation coefficient and the proposed equipment layout in the BMS; and, as a lower border, one of the equipment exploitation levels.
In the PPS program such models are entered into the model data-base through the interactive work of the planner, but the data-base can also be completed or reshaped for the purposes of a later planning phase or of final MS planning.
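The automatic joining step can be illustrated with a small rule-matching sketch. The model records, field names and threshold values below are hypothetical; the paper's actual criteria (operations per capacity, equipment quantity, workpiece count, correlation coefficient, exploitation level) are only partially reproduced here.

```python
# Hypothetical BMS model records: each model is admissible for a range of
# machine counts and a minimum correlation coefficient of the workpiece flow.
BMS_MODELS = [
    {"name": "LINE-1",   "flow": "line",   "machines": (2, 8),  "corr_min": 0.60},
    {"name": "RETURN-1", "flow": "return", "machines": (2, 12), "corr_min": 0.00},
]

def join_bms(group, models=BMS_MODELS):
    """Join the first BMS model whose criteria contain the group's characteristics."""
    for model in models:
        lo, hi = model["machines"]
        if lo <= group["machines"] <= hi and group["corr"] >= model["corr_min"]:
            return model["name"]
    return None  # no model fits: the data-base must be completed by the planner

# A group resembling GROUP 22 of figure 3 (6 machines, corr. coeff. about 0.62).
print(join_bms({"machines": 6, "corr": 0.62}))   # -> LINE-1
```

Returning None when no model matches corresponds to the case in which the planner has to extend the model data-base interactively.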
After the models have been entered in the model data-base, the computer automatically joins adequate BMS models to single workpieces groups based on the previously formulated and explained criteria, allowing the final listing of single BMS concept solutions as in figure 3. On it, a workpieces group and the joined BMS model, the quantity and type of applied manufacturing equipment with the price, annual load, total investment in equipment, and the levels of its exploitation (time and value) can be seen. In the solution's listing the range of correlation coefficients, as a characteristic for leading the manufacturing process, is also expressed. Based on this listing, with regard to the information from the manufacturing equipment data-base, it is also possible to calculate the total BMS surface required, as another criterion for optimizing the concept project solution.
[Figure 3. Solution listing for GROUP 22 (workpieces PRIRUBNICA-29, VILJUSKA-32; Wcnt = 24, Mcnt = 6; correlation coefficient min 0.623, max 0.705; annual load 13220,00):]

No   Machine type                      Price    Sr      Sp   Sr/Sp   Value
1    LATHE - TOK5                      280000   0.967   1    0.967   280000
2    RADIAL DRILLING MACHINE - BUR1     75000   0.979   1    0.979    75000
3    RO-TABLE - RST1                     5000   0.402   1    0.402     5000
4    MILLING MACHINE - GLH1            130000   0.633   1    0.633   130000
5    DRILLING MACHINE - BUJ2            40000   0.407   1    0.407    40000
6    IND. HARDENING MACHINE - INK1     200000   0.351   1    0.351   200000
                                                              Sum   730000
4. CONCLUSION
The program aid developed on the basis of the described procedure allows a quick, efficient and scientifically based elaboration of the MS project concept, as a basic ground for the further detailed continuation of MS planning.
A developed program comprises:
- the generation of a data-base
- segmenting of a CMS into BMSs, based on suitable grouping of similar workpieces (by design, technological and productive characteristics)
- dimensioning of the system by capacity
- defining and joining of BMS models suitable for the concept implementation phase, by which the group structure of the manufacturing equipment, the mode of manufacturing process control, the surface and the price are determined.
The quality is also represented by interactivity, especially when the planner's intervention is needed in order to include influence characteristics or project limitations that could not be quantified or algorithmically processed.
Generally, the application of such a program shortens the total planning time and increases the quality of the project solution.
List of symbols: t_rij, q_gj, s_rij, S_ic [the definitions are not legible in the scanned source].
REFERENCES
1. Vranjes, B.; Jerbic, B.; Kunica, Z.: Programska podrska projektiranju proizvodnih sistema, Zbornik radova Strojarstvo i brodogradnja u novim tehnoloskim uvjetima, FSB Zagreb, Zagreb, 1989, 159-164.
2. Muftic, O.; Sivoncuk, K.: Ekspertni sistemi i njihova primjena, Strojarstvo 31(1), 1989, 37-43.
3. Tonshoff, H.K.; Barfels, L.; Lange, V.: Integrated Computer Aided Planning of Flexible Manufacturing Systems, AMST'90, Vol. I, Trento, 1990, 85-97.
4. Mikac, T.: Grupiranje izradaka i izbor modela proizvodnog sustava, Zbornik radova 2. medunarodnog savjetovanja proizvodnog strojarstva CIM'93, Zagreb, 1993, 179-189.
1. INTRODUCTION
2. METHOD DEVELOPMENT
Firstly, it is necessary to adopt certain terminology in the field of arrangement of elements.
There are three basic concepts in arranging of elements:
a) element of arrangement - a geometrical object (triangle, quadrangle, polygon, circle, ellipse, or any plane surface originating from them) which is a part of a product or a semi-finished product. The element of arrangement can be defined with two or three data which relate to its dimensions: length (L) and width (W) for 2D arrangements, or length (L), width (W) and height (Z) for 3D element arrangements.
b) [cutting object - the definition is not legible in the scanned source]
c) refuse - the remainder of material after cutting out the elements.
[Figure: arrangements of circular elements of radius R on a plate, cases a) and b), shown in the x-y plane.]
η = 6 · R²π / (6R · 4R) = π/4 = 0.7854
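The efficiency figure can be checked numerically; the helper below simply restates the ratio of circle area to plate area (the function name is ours, not the paper's):

```python
import math

def packing_efficiency(n_circles, radius, plate_length, plate_width):
    """Share of the plate area covered by n equal circles."""
    return n_circles * math.pi * radius ** 2 / (plate_length * plate_width)

# Six circles of radius R on a 6R x 4R plate, as in the case above (R = 1).
print(round(packing_efficiency(6, 1.0, 6.0, 4.0), 4))   # -> 0.7854
```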
From the preceding section it follows that, considering maximum efficiency, case a) is more interesting than case b). The following are some specific features that will occur and should be considered:
a) the shape of the cutting object is not standardized,
b) the dimensions of the palette are standardized,
c) handling units will be folded over each other on the palette, in order to achieve a compact and solid cargo.
These specific features of prismatic units arrangement on a palette are shown in Fig. 5.
min CX                                                                       (1)
with conditions:
AX ≥ B
ΣX ≤ N
x_i = 0, 1, ..., n
No   Length (mm)   Width (mm)   Name   Rotation   Quantity
1    200           100          PL1    Y          1000
2    450           150          PL2    Y          1000
3    500           350          PL3    Y          1000
4    600           500          PL4    Y          1000
5    300           600          PL5    Y          1000
6    400           700          PL6    Y          1000
[Table of arrangement schemes: the generated schemes reach efficiencies of 73.24, 88.45, 88.65, 91.69, 92.94 and 96.09 %; the remaining columns of the table are not legible in the scanned source.]
The optimal solution of the problem is given in table 3. It can be concluded from the presented results that the required quantity of handling units has been achieved. Table 3 also gives the number of palettes of each scheme needed for transport, and data about the connection between handling units and the appropriate scheme of arrangement.
Table 3. Optimal solution (pieces of each element per palette of each scheme)

Scheme number      5     7     9    10    11    Required   Achieved
Scheme quantity  184   500   125    75   225
PL1                0     0     2    10     0        1000       1000
PL2                0     0     8     0     0        1000       1000
PL3                0     0     2     4     2        1000       1000
PL4                0     2     0     0     0        1000       1000
PL5                0     2     0     0     0        1000       1000
PL6                3     0     0     0     2        1000       1002
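The model of Eq. (1) can be reproduced for the data of table 3. The sketch below assumes SciPy (1.9 or later, for the HiGHS-based integer mode of `linprog`) is available; the per-scheme piece counts are taken from table 3, and minimizing the total number of palettes recovers the published total of 1109 palettes.

```python
import numpy as np
from scipy.optimize import linprog

# Rows: elements PL1..PL6; columns: schemes 5, 7, 9, 10, 11 (pieces per palette).
A = np.array([
    [0, 0, 2, 10, 0],   # PL1
    [0, 0, 8,  0, 0],   # PL2
    [0, 0, 2,  4, 2],   # PL3
    [0, 2, 0,  0, 0],   # PL4
    [0, 2, 0,  0, 0],   # PL5
    [3, 0, 0,  0, 2],   # PL6
])
b = np.full(6, 1000)    # required quantity of each element
c = np.ones(5)          # minimize the total number of palettes

# min c.x  s.t.  A.x >= b,  x integer >= 0   (A.x >= b written as -A.x <= -b)
res = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None),
              integrality=np.ones(5), method="highs")
print(int(round(res.x.sum())))   # -> 1109, as in table 3
```

The achieved quantities res.x @ A.T then meet or exceed the 1000 pieces required of each element, with the single-unit surplus on PL6 seen in table 3.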
3. CONCLUSION
When the modular principle of arrangement is not applied, the linear programming model proved to be very suitable for problems of optimal space utilization on a palette.
REFERENCES
1. B. Madarevic: Rukovanje materijala, Tehnicka knjiga, Zagreb, 1972.
2. P. Bauer: Planung und Auslegung von Palettenlagern, Springer-Verlag, Berlin, 1985.
3. N. Stefanic: Matematicki modeli kod krojenja materijala, Magistarski rad, FSB Zagreb, 1990.
M. Lucertini
University "Tor Vergata", Rome, Italy
F. Nicolo
Terza Universita, Rome, Italy
W. Ukovich
University of Trieste, Trieste, Italy
A. Villa
Polytechnic of Turin, Turin, Italy
KEY WORDS: production systems, flow lines, push systems, pull systems, perturbations.
ABSTRACT: A general, abstract model is proposed for a production system consisting of a flow line, made of several machines, which can produce different products. Based on this model, a formal definition of the push and pull concepts is provided. Different operating conditions are considered, and an abstract characterization of the set of all feasible production plans is proposed. Some of its general properties are then investigated: in particular, a minimum-WIP production plan is characterized, apt to tackle bounded perturbations of processing times.
INTRODUCTION
After the first enthusiasm that hailed the advent of Just-In-Time (JIT) and pull systems, many practitioners realized that "the techniques used to implement JIT are, in many ways, identical to those found in the 'out-dated' reorder-point and/or push" (cf. for instance [7]). Is there any "real" difference? And, if so, what is it? Several authors have tried to give their answers, but it is quite possible that a completely general and satisfactory answer cannot be given, for the following reasons.
Published in: E. Kuljanic (Ed.), Advanced Manufacturing Systems and Technology, CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.

One reason could be that many of the exciting performances attributed to Kanban are mainly due to the manufacturing environment where it is applied (see [5]). Another reason could be that performances of Kanban systems depend on the use of appropriate modelling and/or optimization techniques (see [1], [4], [7]). As a third reason, maybe
the most important one, the differences between push and pull are not as sharp as they may appear at first glance, because the two concepts are still not well defined: in fact, a traditional push system embedded in an MRP system computes release dates from the due dates by subtracting the lead time of the line, or of the more general production system; in a way, this is an application of the pull concept.
The last simple remark suggests that the real difference is that push systems have large or very large closed loops, while in pull systems loops are as small as possible. Whether or not this statement is true, it is clear that we need well-defined basic quantitative models, which allow us to analyze the system structure and to evaluate the opportunities it provides. The purpose of this paper is to give a first, very basic deterministic model in the simplest situation of a line of single-server stations (a chain of machines) that must process a given sequence of parts.
When we have a chain of machines with intermediate buffers, it is clear that, in some sense, push implies processing jobs at the earliest time, while pull implies processing jobs at the latest possible time. Does this mean that push always implies a forward
computation of the schedule from the release dates, and pull a backward computation
from the due dates? The simple model presented in this paper provides a motivated
answer to such a question. Furthermore, it also helps to understand at a very basic
level a few classical problems, such as the role of buffer sizes (see for instance [8]), and
the effect of Kanban on the Work-In-Process (see [3]).
We consider a production system consisting of a simple flow line, made of several machines, which can produce different products. The basic ingredients of the system are: n machines, denoted as j = 1, 2, ..., n, that can process parts to provide finished products, and m parts, denoted as i = 1, 2, ..., m, that have to be processed by the machines. Each part i has a release time a(i) and a due date b(i). Each part-machine pair (i, j) has a processing time d(i, j). Furthermore, there are n buffers (one before each machine), denoted as j = 1, 2, ..., n in the same order as the machines. Buffer j has a capacity k(j).
A machine can be busy, when there is a part on it, or idle. A waiting part can be either in buffer j, waiting to be processed by machine j, or on machine j, still waiting to be processed or after it has been completed. For the sake of simplicity, we always assume zero transportation times.
The state of a buffer at a given time instant is specified by the number of parts it contains. However, for our purposes it matters to discriminate between two relevant conditions only: the buffer is full when it accommodates a number of parts equal to its capacity; otherwise, it is non-full.
The system operates as follows. Each part always retains its identity. No assembling operations are performed between parts of the system, nor can a part be disassembled.
A GENERAL MODEL
a(i) release time of part i, i.e. the time at which part i becomes available to the first machine
b(i) due date of part i, i.e. the time at which part i must have left the last machine
d(i, j) processing time of part i on machine j
k(j) capacity of buffer j.
We choose the following variables to represent the way our system operates: x(i, j), the time at which part i enters machine j, and y(i, j), the time at which part i leaves machine j. These variables must satisfy:

x(i, 1) ≥ a(i)                 i = 1, 2, ..., m                                 (1)
y(i, n) ≤ b(i)                 i = 1, 2, ..., m                                 (2)
x(i, j) ≥ y(i-1, j)            i = 2, ..., m,  j = 1, 2, ..., n                 (3)
x(i, j) ≥ y(i, j-1)            i = 1, 2, ..., m,  j = 2, ..., n                 (4)
y(i, j) ≥ x(i, j) + d(i, j)    i = 1, 2, ..., m,  j = 1, 2, ..., n              (5)
y(i, j) ≥ x(i - k(j+1), j+1)   j = 1, 2, ..., n-1,  i = k(j+1)+1, ..., m        (6)
Eq. 1 says that each part cannot enter the first machine before it is made available. Eq. 2 requires that each part must have left the last machine before its due date. Then any part (except the first one) cannot enter a machine before the previous part has left the same machine, according to Eq. 3. Also, a part cannot enter a machine (except the first one) before it has left the previous machine, by Eq. 4. Furthermore, Eq. 5 states that each part cannot leave a machine before its processing time has elapsed since the moment in which it entered that machine. Finally, part i cannot leave machine j if the output buffer j+1 (i.e. the buffer following that machine, with k(j+1) places) has no empty positions. Considering part i, buffer j+1 is full if parts i-1, i-2, ..., i-k(j+1) lie in that buffer. The first of those parts to leave that buffer is i-k(j+1), according to the FIFO discipline. This is expressed by Eq. 6.
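Constraints (1)-(6) translate directly into a feasibility checker. The sketch below uses 0-based indices, with k[j] the capacity of the buffer preceding machine j (so k[0] is never used by Eq. 6); the example plan at the bottom is ours, not from the paper.

```python
def is_feasible(x, y, a, b, d, k):
    """Check Eqs. (1)-(6): x[i][j]/y[i][j] are the times at which part i
    enters/leaves machine j; k[j] is the capacity of the buffer preceding
    machine j (0-based; k[0] is unused by Eq. 6)."""
    m, n = len(x), len(x[0])
    for i in range(m):
        if x[i][0] < a[i]:                                   # (1) release date
            return False
        if y[i][n - 1] > b[i]:                               # (2) due date
            return False
        for j in range(n):
            if i > 0 and x[i][j] < y[i - 1][j]:              # (3) part order
                return False
            if j > 0 and x[i][j] < y[i][j - 1]:              # (4) machine order
                return False
            if y[i][j] < x[i][j] + d[i][j]:                  # (5) processing time
                return False
            if j < n - 1 and i >= k[j + 1] and \
               y[i][j] < x[i - k[j + 1]][j + 1]:             # (6) finite buffer
                return False
    return True

# Three parts on two machines, unit buffer before machine 2, d(i, j) = 2:
x = [[0, 2], [2, 4], [4, 6]]
y = [[2, 4], [4, 6], [6, 8]]
print(is_feasible(x, y, a=[0, 0, 0], b=[4, 6, 8],
                  d=[[2, 2]] * 3, k=[0, 1]))   # -> True
```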
OPERATING STRATEGIES
Due to its staircase structure, the Production Planning Problem can be solved in a
relatively easy way using iterative procedures.
We first consider two basic solution methods, which correspond to popular operating
strategies for our system: push and pull.
4.1
PUSH SYSTEMS
Definition 4.1  In a push system, input and output times are determined, starting from the release dates, in the ascending order of i and j. More precisely, for each piece (in the ascending order), all times are determined for each machine (again, in the ascending order).
According to the above definition, in a push system input and output times are determined from what happened in the past. Past events "push" the system forward from the past. So push systems are causal systems, in the sense that each event is completely determined by what happened in its past.
Note that, in determining input and output times in a push system, it is essential to scan pieces in an outer loop and machines in an inner loop. If the opposite sequence were adopted (scanning each piece for each machine), it could be impossible to determine some output times, since they depend on the input times of previous pieces (i.e. with lower index i) to the following machine, due to the finite capacity of buffers (see Eq. 6).
Once all times have been determined, their feasibility with respect to the due dates is verified. It is therefore natural to set times as early as possible. Such an earliest time assumption turns out to be an essential feature of push systems.
As a consequence, the constraints of Eqs. 3 and 4 of our model give the
push constraints

x(i, j) = max{ y(i, j-1), y(i-1, j) }                                          (7)
Eq. 7 says that part i enters machine j when it has left the previous machine (j-1) and the previous part (i-1) has left the same machine. These are called push constraints, since they express how what happens upstream in the line affects downstream operations.
The earliest time assumption transforms the constraints of Eqs. 5 and 6 into the
pull constraints

y(i, j) = max{ x(i, j) + d(i, j), x(i - k(j+1), j+1) }    j = 1, 2, ..., n-1,  i = k(j+1)+1, ..., m    (8)
Eq. 8 says that part i leaves machine j when it has finished processing and part i-k(j+1) has entered the following machine (thus making a place available in the (j+1)-th buffer). These constraints are called pull constraints, since they express how what happens downstream in the line affects upstream operations.
Note that the push strategy is described by both push and pull constraints. The
presence of pull constraints in this case is due to the fact that in a push system, input
and output times may be affected not only by what happens upstream, but also by
what happens downstream, due to the finite capacity of buffers. In the case of buffers
with an infinite capacity, the influence of downstream operations disappears.
So in general the push concept signifies that current events depend on previous events. "Previous" must be intended in a strict chronological sense in the definition of push systems, whereas in the definition of push constraints it refers to the order in which machines are visited. Such an ambiguity disappears with infinite buffers.
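The push strategy of Definition 4.1 — pieces in the outer loop, machines in the inner loop, earliest times via the push constraints (7) and pull constraints (8) — can be sketched as follows. Indices are 0-based and the instance at the bottom is ours, not from the paper.

```python
NEG_INF = float("-inf")

def earliest_schedule(a, d, k):
    """Push (earliest-time) schedule: a[i] release dates, d[i][j] processing
    times, k[j] capacity of the buffer preceding machine j (0-based)."""
    m, n = len(a), len(d[0])
    x = [[0] * n for _ in range(m)]
    y = [[0] * n for _ in range(m)]
    for i in range(m):              # outer loop: pieces, ascending
        for j in range(n):          # inner loop: machines, ascending
            enters_after = a[i] if j == 0 else y[i][j - 1]       # Eq. (7)
            frees_up = y[i - 1][j] if i > 0 else NEG_INF
            x[i][j] = max(enters_after, frees_up)
            blocked = (x[i - k[j + 1]][j + 1]                    # Eq. (8)
                       if j < n - 1 and i >= k[j + 1] else NEG_INF)
            y[i][j] = max(x[i][j] + d[i][j], blocked)
    return x, y

# Three identical parts, two machines, one buffer place before machine 2.
x, y = earliest_schedule(a=[0, 0, 0], d=[[2, 2]] * 3, k=[0, 1])
print(y)   # -> [[2, 4], [4, 6], [6, 8]]
```

Note that the blocking term in Eq. (8) only ever refers to pieces with a lower index on the following machine, which is exactly why the piece loop must be the outer one.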
4.2
PULL SYSTEMS
Pull systems are defined in a completely symmetric way with respect to push systems.
Definition 4.2  In a pull system, input and output times are determined, starting from the due dates, in the descending order of i and j. As for push systems, for each piece (outer loop) all times are determined for each machine (inner loop).
So in a pull system, event times are determined backward in time in order to guarantee that specified conditions (event times and due dates) will be met in the future. Future events "pull" the system forward from the future. In this sense, kanban can be considered a smart way to implement pull systems using feedback.
Once all times have been determined, their feasibility with respect to the release dates is verified. It is therefore sensible to set times as late as possible. Such a latest time assumption is the key feature of pull systems.
The conditions ruling a pull system may be derived from Eqs. 1-6 using a reasoning which parallels the one used for push systems. They are symmetric to the conditions of Eqs. 7 and 8, with x and y swapped, max replaced by min, and sum and difference swapped. The only asymmetry concerns the index of k in the pull constraints, which becomes just j for pull systems (by the way, the roles of push and pull constraints are also swapped). This is due to the fact that buffers are named after the machine they precede.
Given a, b, d and k, the set of all feasible production plans (x, y), that is, the set of all pairs (x, y) satisfying Eqs. 1-6, is denoted by Ω = Ω(a, b, d, k); it is a polyhedron in the space R^(m·n) × R^(m·n).

5.1

Now we present some interesting properties of the set Ω. The reader is referred to [6] for formal proofs.
Let (x^e, y^e) denote the input and output times obtained according to the earliest time assumption for given a, b, d and k, and (x^l, y^l) the corresponding times obtained according to the latest time assumption. Then

Theorem 5.1  Ω ≠ ∅  if and only if  (x^e, y^e) ∈ Ω.

Corollary 5.1  Ω ≠ ∅  implies  x^e ≤ x ≤ x^l and y^e ≤ y ≤ y^l for every (x, y) ∈ Ω.

5.2
The following results show the relation between the schedules of two sets of parts with
different processing times.
Theorem 5.2  Consider two sets of parts that have to be processed by the same system: suppose they have identical release and due dates, and processing times d and d̄, respectively, such that d(i, j) ≤ d̄(i, j) for all i, j. Then Ω = Ω(a, b, d, k) ⊇ Ω(a, b, d̄, k) = Ω̄.

As a special case, we have the following result

Corollary 5.2  Under the same conditions of Theorem 5.2, Ω̄ ≠ ∅ implies Ω ≠ ∅.

5.3
Now for each production plan we consider the WIP w(t) it produces at any given time t, i.e. the number of parts that are on a machine or in an intermediate buffer. Note that

w(t) = |W(t)|,  with  W(t) = {i : x(i, 1) ≤ t ≤ y(i, n)},                      (9)

i.e. W(t) is the set of parts within the system at time t.
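For a given plan, Eq. (9) is immediate to evaluate; the sketch below uses entry/exit times of our own choosing, not data from the paper.

```python
def wip(x, y, t):
    """w(t) of Eq. (9): the number of parts that have entered the first
    machine and not yet left the last machine at time t."""
    return sum(1 for i in range(len(x)) if x[i][0] <= t <= y[i][-1])

# Example entry/exit times for three parts on two machines.
x = [[0, 2], [2, 4], [4, 6]]
y = [[2, 4], [4, 6], [6, 8]]
print([wip(x, y, t) for t in range(9)])   # -> [1, 1, 2, 2, 3, 2, 2, 1, 1]
```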
Then the above results can be used for dealing with the following problem, where processing times are subject to nonstochastic uncertainty, i.e. they are only known to lie anywhere within given ranges (for other problems tackling such a kind of uncertainty, see for instance [2]).

Problem 1  Find a production plan (x°, y°) such that:
1. (x°, y°) solves the Production Planning Problem with given due dates b and buffer sizes k, suitable release dates a, and any processing times d ∈ D = {d : d ≤ d̄};
2. the WIP w°(t) produced by (x°, y°) at any time t is not larger, for all t, than the WIP w(t) produced by any other production plan satisfying 1.

Theorem 5.3  For any given a and k, Ω(a, b, d̄, k) is the largest set of production plans which are feasible for a, b, k and any processing time d ∈ D.

Now the solution of Problem 1 is immediate.
CONCLUSIONS
The chain model presented in the paper allows us to say that only in special situations does the earliest time schedule (push) imply a forward computation of the start times (pure push) and the latest time schedule a backward computation (pure pull). In the chain model these two situations correspond to infinite and zero buffer capacities, respectively.
In the simple context of this paper, all the possible schedules (feasible production
plans) lie within a polyhedron in the space of the times at which parts enter and leave
machines. In such a situation, pull produces the lowest WIP.
Of course, the model presented in this paper is just a simplified model, since in practical
cases several parallel machines may be available at each stage. Our results apply to
this case only if part sequences do not change on each machine, which is a rather strong
assumption in practice. Nevertheless, such a simple model turned out to be convenient,
for instance, to gain a deeper insight into relevant concepts such as push and pull.
REFERENCES
1. Bitran, G.R., and Chang, L., "A mathematical programming approach to a deterministic Kanban system," Management Science 33, 1987, 427-441.
2. Bianchini, F., Rinaldi, F., and Ukovich, W., "A network design problem for a distribution system with uncertain demands," SIAM Journal on Optimization, accepted for publication.
3. Karmarkar, U.S., "Kanban systems," Paper Series N. QM8612, Center for Manufacturing and Operations Management, The Graduate School of Management, University of Rochester, Rochester, N.Y., 1986.
4. Kimura, O., and Terada, H., "Design and analysis of pull systems: A method of multi-stage production control," International Journal of Production Research 19, 1981, 241-253.
5. Krajewski, L.J., King, B.E., Ritzman, L.P., and Wong, D.S., "Kanban, MRP, and shaping the manufacturing environment," Management Science 33, 1987, 39-57.
6. Lucertini, M., Nicolo, F., Ukovich, W., and Villa, A., "Models for Flow Management," International Journal of Operations and Quantitative Management, to be published.
7. Spearman, M.L., and Zazanis, M.A., "Push and pull production systems: Issues and comparisons," Operations Research 40, 1990, 521-532.
8. Villa, A., Fassino, B., and Rossetto, S., "Buffer size planning versus transfer line efficiency," Journal of Engineering for Industry 108, 1986, 105-112.
The knowledge of the behaviour of metals as a function of temperature, strain and strain rate is a basic step for (i) optimising the industrial process and (ii) dimensioning the forming machine tooling-set. To obtain data on the characteristics of the materials it is important that temperature, strain and strain rate replicate the parameters of the real process.
The precise characterisation of the material is a fundamental step when the forming process is designed for Ti and Ni alloy components, which have growing applications in hi-tech industries (aerospace and aeronautics). These alloys present a narrow range of temperature, strain and strain rate for formability. For these reasons, particular care should be taken in finely controlling these parameters during the test.
This paper is focused on the optimisation of a procedure for determining the behaviour of these alloys at forging conditions. The procedure should be set up in order to:
- assure homogeneity of deformation and temperature in the specimen during the whole compression test;
- evaluate true stress-true strain curves of titanium and nickel alloys in hot forging conditions.
2. THE EQUIPMENT IN PHYSICAL SIMULATION
Methods and equipment used in physical simulation depend upon the process to be studied. The physical simulator should offer a wide range of thermal and mechanical parameters, in order to replicate the operating conditions of the real process. When these requirements are satisfied, it is possible to replicate the thermo-mechanical history on the specimen and to determine the effects of thermal (temperature and heating/cooling rate) and mechanical parameters (strain and strain rate) on material and process.
There are different kinds of testing machine [1]:
- servohydraulic load frame connected to a furnace or an induction heater. This system is satisfactory for process applications where the temperature changes slowly during the process;
- cam plastometer. The simulation takes place at one defined temperature. It can be considered a single-'hit' device. Multiple-'hit' programs are possible in principle, but the time needed to change strain rates and temperature usually makes the replication of multi-stage forming processes very difficult or impossible;
- torsion testing machine. The tester provides shear data for simulation over a wide range of strain rates (up to 100/second) and strains. The most notable benefit of torsion testing is the large amount of strain possible without necking or barrelling, typically up to a strain of 5.0. The temperature is usually held constant or may be changed at slow rates.
A GLEEBLE 2000 System [2], installed at DIMEG's lab in Padua, has been utilised to conduct the tests presented in this paper. It is an electronically controlled, hydraulically operated testing machine used for:
- thermal and mechanical analysis of materials for research,
- quality control, and
- process simulation.
Fig. 1    Fig. 2
Fig. 1 shows a Ti-6Al-4V specimen that presented a non-uniform temperature distribution along the axis; Fig. 2 shows the final deformation of a specimen (same alloy) whose end surfaces were cooler than the mid section.
The Gleeble heating system is based on the Joule effect: the current passes through the specimen and heats it up. With this system isothermal planes are obtained in the specimen, but an axial gradient is introduced due to the cooling effect of the punches. In order to reduce the axial gradient, the following measures should be adopted:
- reducing heat loss from the end surfaces with a thermal barrier;
- increasing the electrical resistance of the end surfaces in order to increase temperature;
- reducing the mass ratio between punches and specimen.
The system is equipped with several adjustments, which allow operation on a wide range of specimens in terms of size, shape and resistivity. Thermal power should be selected according to the requirements relevant to specimen size, heating rate and thermal distribution. The combination of nine transformer taps with a switch for four different specimen sizes provides the most suitable power range depending on the specimen characteristics; the best switch combination gives the minimum thermal gradient along the specimen axis.
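The tap-and-switch selection just described is essentially a minimisation over the 9 × 4 available combinations. A minimal sketch of that selection logic (the gradient values below are invented for illustration, not measured data):

```python
# Hypothetical sketch: picking the transformer tap / size-switch pair
# that minimises the measured axial temperature gradient.

def best_combination(gradients):
    """Return the (tap, switch) pair with the smallest axial delta-T.

    gradients: dict mapping (tap, switch) -> measured delta-T in deg C.
    """
    return min(gradients, key=gradients.get)

# Illustrative measurements: 9 transformer taps x 4 size switches
measured = {(tap, sw): abs(tap - 5) * 10 + sw * 3
            for tap in range(1, 10) for sw in range(1, 5)}

tap, sw = best_combination(measured)
print(tap, sw)  # here tap 5, switch 1 gives the minimum gradient
```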
The following three kinds of test [7] have been conducted to reduce barrelling:
(i) heating tests for selecting the right thermal power of the system;
(ii) tests with conventional lubricant;
(iii) tests with different layers of lubricant.
(i) heating tests for selecting the right thermal power of the system
Several tests have been conducted on Ø12×14 mm specimens heated from 600 °C upwards without lubricant (scheme in Fig. 3). The measurement of the specimen axial gradient has been done in steady conditions (60 seconds after reaching the programmed temperature). Four thermocouples were spot welded inside four drilled holes in order to place them at the specimen core. The best combinations of switch positions and taps have been investigated.
Fig. 3
Fig. 4 (a) and (b) show respectively the results of measurements at 645 °C in the best and in the worst condition. It is evident that even in the best situation (a) the maximum ΔT, close to 35 °C, is not acceptable.
Isothermal conditions are important in flow stress measurement. When a thermal gradient exists along the axis of the specimen, barrelling occurs during deformation, no matter what lubricant is used (Fig. 5).
Fig. 4 – temperature measured at the four thermocouples, compared with the theoretical value, for two specimen-size switch settings (Size 2 and Size 4): (a) best and (b) worst condition.
(ii) tests with conventional lubricant
Three different lubricants, MoS2 powder, graphite foil and tantalum foil, have been tested [8] with the twofold aim of reducing:
- friction at high temperature;
- heat loss, by introducing a thermal barrier.
MoS2 powder is relatively simple to use when mixed with alcohol, but some difficulties may arise during heating because of the bad electric contact between specimen and punch surfaces. It is usually applied at temperatures below 600 °C, above which it breaks down into Mo oxide.
Graphite foil can be used above 600 °C, but only if diffusion does not become a problem. During tests at high temperature a piece of tantalum foil can be utilised between the specimen and the graphite foil as a diffusion barrier, protecting the graphite from high temperature and avoiding self-burning.
Tantalum foil is used for two reasons: it is a good thermal barrier and, due to its high resistivity, it increases the electrical resistance at the punch-specimen interface.
Fig. 6 – εφ/εl ratio versus time.
The best results have been obtained with tantalum-graphite foils, but they are not acceptable in terms of temperature distribution and final deformation: barrelling is still evident in Fig. 6, which shows the ratio between εφ (strain calculated with a crosswise gauge) and εl (strain calculated with a lengthwise gauge). In an ideal condition, assuming volume constancy and uniform deformation, the ratio should be equal to 1 during the whole test.
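The εφ/εl check follows directly from volume constancy: under uniform (barrel-free) deformation the crosswise and lengthwise true strains balance exactly. A small sketch, assuming εφ is taken as 2·ln(d/d₀) and εl as ln(h₀/h) (these definitions are an assumption here, not stated in the paper):

```python
import math

def strain_ratio(d0, h0, d, h):
    """Ratio between crosswise and lengthwise true strain of an
    upset cylinder. With volume constancy and uniform deformation,
    2*ln(d/d0) equals ln(h0/h), so the ratio is 1."""
    eps_phi = 2.0 * math.log(d / d0)   # crosswise (diametral) strain
    eps_l = math.log(h0 / h)           # lengthwise (axial) strain
    return eps_phi / eps_l

# Uniform upsetting of a phi 12 x 14 mm specimen to h = 10 mm:
h = 10.0
d = 12.0 * math.sqrt(14.0 / h)  # diameter given by volume constancy
print(round(strain_ratio(12.0, 14.0, d, h), 6))  # 1.0
```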
(iii) tests with different layers of lubricant
The solution tested by the authors is a kind of "sandwich" of lubricants. It had been noticed that, using tantalum-graphite foils, the thermal gradient along the specimen axis is reduced. Several tests with different combinations of lubricants have been performed. Fig. 7 shows the final solution:
- MoS2 applied on the specimen surface;
- a sandwich with two graphite foils and two tantalum foils on each side of the specimen.
Fig. 7
Fig. 8 – temperature versus time measured by thermocouples 1 and 2: (a) ΔT = 140 °C; (b) detail of the temperature evolution.
Good results have been obtained also in terms of deformation using the "sandwich": Fig. 9 presents the εφ/εl ratio during the test, which is almost constant and close to 1 (the ideal condition). Fig. 10 shows the deformation of the specimen during the test, where barrelling can be neglected.
Fig. 9 – εφ/εl ratio versus time.    Fig. 10 – deformation of the specimen during the test.
4. CHARACTERISATION OF Ti AND Ni ALLOYS
The developed test configuration allowed the characterisation of materials which present high sensitivity to temperature and strain rate [8].
True stress-true strain curves have been calculated for Ti-6Al-4V in the following conditions:
- Temperature: 850, 880, 910 °C
- Strain rate: 1, 3, 5, 7, 9 s⁻¹
Fig. 11 – true stress-true strain curves.
Four stages as described above have been performed at strain rates of 4, 5, 7 and 9 s⁻¹ (Fig. 12).
Fig. 12 – true stress-true strain curves of the multi-stage test.
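The true stress-true strain points behind such curves are typically derived from the recorded force and stroke. A sketch of that conversion, assuming volume constancy and negligible barrelling (the force and geometry values are illustrative, not data from the tests):

```python
import math

def true_stress_strain(F, h, d0, h0):
    """Convert force F [N] and current specimen height h [mm] into a
    true stress / true strain point for a compression specimen of
    initial diameter d0 and height h0, assuming volume constancy
    and uniform (barrel-free) deformation."""
    eps = math.log(h0 / h)                       # true (axial) strain
    area = (math.pi * d0 ** 2 / 4.0) * (h0 / h)  # current cross-section [mm^2]
    sigma = F / area                             # true stress [MPa]
    return eps, sigma

# Illustrative point: phi 12 x 14 mm specimen compressed to 10 mm at 20 kN
eps, sigma = true_stress_strain(F=20000.0, h=10.0, d0=12.0, h0=14.0)
```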
5. CONCLUSIONS
Some progress in the setting up of the procedure to characterise materials such as Ti and Ni alloys has been presented. The effects of different lubricants on temperature uniformity and barrelling have been investigated. A multi-layered lubricant (tantalum, graphite and MoS2) has been tested, giving good results due to its combined effects (thermal and diffusion barrier, lubricant, local increase of resistivity). Using this sandwich, the Ti-6Al-4V and Nimonic 80A materials have been characterised.
REFERENCES
1. H.E. Davis, G.E. Troxell, G.F.W. Hauck, The Testing of Engineering Materials, McGraw-Hill, 1982
2. Ferguson, H.F., Fundamentals of Physical Simulation, DELFT Symposium, Dec. 1992
3. Schey, J.A., Tribology in Metal Working: Friction, Lubrication and Wear, ASM, June 1984
4. K. Pohlandt, Materials Testing for the Metal Forming Industry, Springer-Verlag, 1989
5. George E. Dieter, Workability Testing Techniques, American Society for Metals, 1984
6. K. Pohlandt, S.N. Rasmussen, Improving the Accuracy of the Upsetting Test for Determining Stress-Strain Curves, Advanced Technology of Plasticity, 1984
7. Dal Negro T., De Vivo D., Ottimizzazione del Ciclo di Stampaggio a Caldo di Superleghe di Nickel e Titanio Mediante Simulatore Fisico di Processi Termomeccanici, degree thesis in Italian, DIMEG, June 1996
8. E. Doege, R. Schneider, Wear, Friction and Lubricants in Hot Forging, Advanced Technology of Plasticity, 1984
9. H.G. Suzuki, H. Fujii, N. Takano, K. Kaku, Hot Workability of Titanium Alloys, Int. Symposium on Physical Simulation of Welding, Hot Forming and Continuous Casting, Ottawa, 1988
preserve the superplastic condition throughout the process. In hot blow forming, the rate of pressurisation is normally established so that the strain rates induced in the sheet are maintained within the superplastic range of the material. In practice, the pressurisation rate is determined by trial-and-error techniques or by analytical methods.
The first engineering model was proposed by Jovane [4] who, on the basis of the membrane theory and elementary geometrical considerations, gave the pressure versus time relationship. Basically, this model assumes that: i) the material is isotropic and incompressible, ii) the regime is membranous, iii) the elastic strains are negligible, iv) the material is not strain-hardenable, and v) at any instant, the membrane is part of a thin sphere subjected to internal pressure, i.e. the curvature and thickness are uniform and a uniform biaxial tension state exists. Unfortunately, the hypothesis of a uniform sheet thickness of the dome is not consistent with the experimental results, which show a thinning of the sheet during forming owing to stress, strain and strain rate gradients. In practice, the indeformability of the die in the proximity of the periphery makes the circumferential strain negligible and lower than the meridian strain: then, due to the continuity law, the strain component along the thickness direction increases. In order to reduce such inconsistency, Quan and Jun [5] considered both the non-uniformity in sheet thickness and the anisotropy of the sheet along the thickness direction. Starting from a basic theory of plastic mechanics for continuous media and using both Rossard's viscoplastic equation [6] and a constitutive equation with a strain rate sensitivity varying with strain rate [5], they provided a theoretical analysis of free bulging that is more consistent with the real processes. In this model, the previous basic hypotheses [4] were assumed and, beyond the anisotropy in the thickness direction, the geometrical shape of the mid layer of the sheet as part of a spherical surface during the bulging process was also considered.
Although the complex analytical models and those based on the finite element method (FEM) [7,8] provide better accuracy, they require longer computation times. Therefore, from an engineering point of view and for its simplicity, the most interesting approach remains the model proposed in [4], which, however, needs to be improved.
Recently, a model was developed [9] that, besides the basic assumptions, assumes that the median part of the formed dome is spherical at any instant and that each meridian passing through the dome apex is uniformly stretched. However, this model leads to results that are very similar to the ones proposed by Jovane [4].
All the models proposed neglect bending of the sheet at the clamps; moreover, during deformation the rotation of the sheet is allowed at the extremities, similarly to a frictionless hinge. In this way, at any instant, the membrane maintains a spherical shape, also in the region next to the periphery. This does not correspond to real SPF processes, since it would mean building a die with zero entry radius. Such a die causes strong local stress concentrations, and the corresponding stress gradients can lead to dramatic thinning and rupture of the sheet. Therefore, in order to obtain a realistic model, the die entry radius cannot be neglected. The effect of the die entry radius is considered in the present paper and a new model incorporating this aspect is proposed. In particular, a sticking friction condition at the sheet-die interface is assumed since such a condition, although it represents a limit situation of the real process, provides a more realistic starting point than the frictionless one.
2. THE MODEL
The model, aimed at obtaining the pressurisation rate that maintains the superplastic
condition during the process, was developed according to the scheme in fig. 1.
Fig. 1 – scheme of the process: sheet, cylindrical die, thermocouple, back pressure.
The pressure versus time relation was obtained by using the equation of equilibrium of the forces applied to the membrane. The state of equilibrium depends on the applied pressure, material properties, strain levels and boundary conditions. The basic assumptions of the model are the following: a) the material is isotropic and governed by the σ = kε̇^m law, b) the volume is constant, c) the elastic strains are negligible, d) the material is not strain-hardenable, with very low yield strength, and e) the regime is membranous. Such hypotheses are similar to those used in previous models [4,5]. Besides these hypotheses, the present model assumes that: f) the sheet is rigidly clamped at the periphery, where bending is allowed around the die entry profile, and g) no sliding occurs at the die-sheet interface. The last condition causes a reduction in sheet thickness from the initial value to the uniform value in the spherical part of the membrane.
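The strain rate sensitivity exponent m in the σ = kε̇^m law can be recovered from two flow-stress measurements at different strain rates, since m = ln(σ₂/σ₁)/ln(ε̇₂/ε̇₁). A minimal sketch (the numbers are invented for illustration):

```python
import math

def strain_rate_sensitivity(sigma1, rate1, sigma2, rate2):
    """m from two flow-stress points, assuming sigma = k * rate**m."""
    return math.log(sigma2 / sigma1) / math.log(rate2 / rate1)

# Illustrative values: doubling the strain rate raises the stress by 2**0.5,
# which corresponds to m = 0.5 (a typical superplastic-range magnitude)
m = strain_rate_sensitivity(100.0, 1.0e-4, 100.0 * 2 ** 0.5, 2.0e-4)
print(round(m, 6))  # 0.5
```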
For the sake of simplicity, the process was divided into two stages. Stage 1 deals with the study of the optimum conditions that guarantee the superplastic flow of the membrane from the initial configuration to the instant when the free bulging region assumes the hemispherical shape. In stage 2, the membrane is in contact with the cylindrical die.

X = x/a ,   R = r/a     (1)
where x is the abscissa of B, which indicates the end point of the contact region between sheet and die, r is the curvature radius of the die entry and a is the radius of the initial undeformed blank (Fig. 2).
Fig. 2 – geometry of the free bulging region (blank diameter 2a).
By the constant volume law, the volume of the initial blank (V₀) is equal to the volume of the formed blank (V_AB + V_BC), which is calculated by means of the Guldino theorem:
(2)
Therefore:
(3)
where s₀ is the thickness of the undeformed blank; ρ, h₂ and s are, respectively, the radius, height and thickness of the part of the sphere representing the membrane in the region where no sheet-die contact occurs; and x_G is the position of the centre of gravity of the part of the sheet between B and C. In order to obtain an equation relating the thickness s to the parameters X and R, ρ and h₂ must be expressed as a function of X and R. Since:
x = r sin α ,   y = ρ sin α ,   x + y = a     (4)
it follows that:
ρ/a = R((1 − X)/X)     (5)
h₂ = (y/x) h₁     (6)
(7)
(8)
where cos α, which can be obtained versus x by considering the triangle FPB, is equal to:
cos α = (r² − x²)^(1/2) / r     (9)
Putting eqns. (8) and (9) into eqn. (7) and using the parameters X and R, one obtains:
(10)
The position of the centre of gravity of the arc CB, whose length is αr, is determined by means of the definition of the gravity centre:
(11)
By substituting eqns. (5), (10), and (11) into the constant volume law (eqn. (3)), the thickness s can be expressed versus the dimensionless parameters X and R:
(12)
In order to get the optimum superplastic condition, the equivalent stress (σ) and the equivalent strain rate (ε̇) must be kept equal to the values for which the material exhibits the highest elongation. Therefore, the forming pressure must vary versus time so that:
σ = pρ/(2s) = σ₀ = constant     (13)
where σ₀ is the optimum flow stress for SPF. By substituting eqns. (5) and (12) into eqn. (13) and solving it, the dimensionless pressure results:
(14)
The calculation of the dimensionless pressure versus time is difficult, since it requires the solution of the following differential equation:
ε̇ = dε/dt = −(1/s)(ds/dt) = ε̇₀ = constant     (15)
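Eqn. (15) integrates to an exponential thickness decay, s(t) = s₀ exp(−ε̇₀t), which together with eqn. (13) gives the pressure at any instant once the current radius ρ is known. A sketch built only on those two equations (all numeric values illustrative):

```python
import math

def pressure_schedule(s0, sigma0, rate0, rho_of_t, times):
    """Pressure p(t) = 2*s(t)*sigma0/rho(t) keeping the equivalent stress
    at sigma0 while the thickness decays as s0*exp(-rate0*t), i.e. the
    integral of eqn. (15). rho_of_t is assumed known at each instant,
    e.g. from the geometry of the current dome."""
    return [2.0 * s0 * math.exp(-rate0 * t) * sigma0 / rho_of_t(t)
            for t in times]

# Hemispherical configuration with constant rho, illustrative numbers:
p = pressure_schedule(s0=1.0, sigma0=10.0, rate0=1e-3,
                      rho_of_t=lambda t: 50.0, times=[0.0, 100.0])
```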
where ε̇₀ is the optimum strain rate for SPF. Eqn. (15) expresses the dependence of the dimensionless parameters on time. By substituting eqn. (13) into eqn. (15), it results:

τ = ∫ from X₀ to X of b(R,X) dX     (16)

where τ is the dimensionless time, X₀ (≠ 0) is the value of X at t = 0, and:

b(R,X) = [2R(1−X)²/(X(R²−X²)^(1/2)) − 4R(1−X)²(R−(R²−X²)^(1/2))/X³ − 4R(1−X)(R−(R²−X²)^(1/2))/X² + R(1−X)/(R²−X²)^(1/2)] / [2R((1−X)/X)²(R−(R²−X²)^(1/2)) + R((1−X)/(R²−X²)^(1/2)) + R(arcsin(X/R) − R + (R²−X²)^(1/2))]     (17)
Fig. 3 – geometry of the membrane in stage 2.    Fig. 4 – position of the centres of gravity x_G, x̄_G and x′_G.
From the constant volume law, the equality of the initial volume with the volume of the deforming sheet, calculated by the Guldino theorem, provides:
(18)
where x′_G is the distance from the centre of gravity of the part BCP to the axis O′D. It can be determined from fig. 4, where x_G and x̄_G represent the distances from the centre of gravity of the sections identified by the arc CD and by the straight line BC to the axis O′D, respectively. Therefore, if l_BC and l_CD are the lengths of BC and CD, respectively, it is:
(19)
where:
x_G = 2r/π ,   x̄_G = r     (20)
x′_G = 2r(r + w)/(πr + 2w)     (21)
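The closed form x_G = 2r/π in eqn. (20) is the classical centroid distance of a quarter-circle arc from the centre axis; a quick numerical check of that value:

```python
import math

def arc_centroid_distance(r, n=100000):
    """Numerically average the x-coordinate along a quarter-circle arc
    of radius r (midpoint rule); the result should match the closed
    form 2*r/pi used in eqn. (20)."""
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * (math.pi / 2) / n
        total += r * math.cos(theta)   # x-coordinate of the arc point
    return total / n                    # average over the arc length

r = 5.0
print(abs(arc_centroid_distance(r) - 2 * r / math.pi) < 1e-6)  # True
```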
By inserting eqn. (21) into eqn. (18) and solving with respect to the thickness s:
(22)
where W = w/a is the dimensionless length of the straight line between the two concavities of the membrane. Since:
ρ = a(1 − R) = constant     (23)
by substituting eqns. (22) and (23) into eqns. (13) and (15), the following relationships are obtained:
(24)
(25)
The integration of eqn. (25) from t₀ to t provides the dimensionless time versus W:
(26)
where τ₀ is the dimensionless time after which the membrane shape does not change any more.
2.3 Geometrical representation of the model
The manufacturing of very complex components by SPF requires an accurate control of the process parameters; in particular, the control can be made more effective by an accurate analysis of the dependence of the height H on the thickness s. In this framework, the analytical models are very helpful. Actually, even if achieved through simplifying assumptions, the models provide guidance to define the optimum conditions for superplastic flow. Therefore, the height H of the pole (figs. 2 and 3) in stage 1 can be expressed as:
(27)
while in stage 2:
H(t) = 1 + W(t)     (28)
Fig. 5 – dimensionless pressure versus dimensionless time τ for R = 0.05, 0.10 and 0.20, compared with Jovane's model.
Fig. 6 – dimensionless polar height H versus τ for R = 0.05, 0.10 and 0.20, compared with Jovane's model.
Fig. 8 – (a) strain rate [s⁻¹] and (b) polar height H [mm] versus time t [s]: analytical model compared with FEM results (R = 0.20).
4. CONCLUSION
An analytical model for predicting the pressurisation rate in an SPF process of thin sheets was proposed. With respect to previous models, it also allows the die entry radius to be taken into account. The process parameters, in terms of pressure, polar height and sheet thinning at the pole of the dome, vary with time and die entry radius. In particular, if at any instant the die entry radius decreases, the pressure and thinning factor decrease and the polar height increases: these parameters tend to the values obtained in other models, where bending of the sheet at the clamps is neglected, when R tends to zero.
The validity of the model was verified by means of FEM simulations, which have shown results that, in terms of strain rate, polar height and sheet thinning, are in excellent agreement with the analytical results provided by the model.
ACKNOWLEDGEMENTS
The authors express their warm thanks to Dr. Valeri Berdine for the helpful suggestions in the preparation of this paper. The financial support of MURST (40%) is acknowledged.
REFERENCES
[1] Padmanabhan K.A., Davies G.J.: "Superplasticity", Springer-Verlag, Berlin, 1980.
[2] Baudelet B.: "Industrial Aspects of Superplasticity", Mat. Sci. Eng., A137, 1991, 41-55.
[3] Pilling J., Ridley N.: "Superplasticity in Crystalline Solids", The Institute of Metals, London, 1989.
[4] Jovane F.: "An Approximate Analysis of the Superplastic Forming of a Thin Circular Diaphragm: Theory and Experiments", Int. J. Mech. Sci., 10, 1968, 403-427.
[5] Quan S.Y., Jun Z.: "A Mechanical Analysis of the Superplastic Free Bulging of Metal Sheet", Mat. Sci. Eng., 84, 1986, 111-125.
[6] Odqvist F.K.: "Mathematical Theory of Creep and Creep Rupture", Clarendon, Oxford, 1966.
[7] Chandra N., Rama S.C.: "Application of Finite Element Method to the Design of Superplastic Forming Processes", ASME J. Eng. Ind., 114, 1992, 452-458.
[8] Zhang K., Zhao Q., Wang C., Wang Z.R.: "Simulation of Superplastic Sheet Forming and Bulk Forming", J. Mat. Proc. Technol., 55, 1995, 24-27.
[9] Enikeev F.U., Kruglov A.A.: "An Analysis of the Superplastic Forming of a Thin Circular Diaphragm", Int. J. Mech. Sci., 37, 1995, 473-483.
[10] Ghosh A.K., Hamilton C.H.: "Influences of Material Parameters and Microstructure on Superplastic Forming", Metall. Trans., 13A, 1982, 733-743.
R.M.S.O. Baptista
Instituto Superior Tecnico, Lisboa, Portugal
P.M.C. Custodio
Instituto Politecnico de Leiria, Leiria, Portugal
KEY WORDS: Sheet Metal Forming, Deep Drawing, Tool Design, Computer Aided Process Planning (CAPP)
ABSTRACT: The paper discusses the philosophy of PROESTAMP, a Computer Aided Process Planning (CAPP) system for the design of deep drawing tools. PROESTAMP will interface with a finite element code and a material database that includes the experimentally determined mechanical, anisotropic and formability properties of the material, in order to enable the preliminary and final design of the deep drawing tools from the geometrical or physical model of the part. The CAPP system will include the capability to store, in a multimedia environment, a record of all of the above-mentioned procedures, including the experimental analysis of the strains imposed on the part and the try-out results. This integrated system will lead to the capability of producing sound parts at the first attempt and in a shorter period of time. Simultaneously, an improvement in design and product quality will be achieved.
1. INTRODUCTION
The designers of deep drawing tools are facing new and greater challenges. To start with, there is a need for a quick and reliable preliminary plan of the deep drawing tool so that the increasing number of budget requests can be met. This is a very critical phase because the company needs to meet budget requests within a short amount of time. Notice that budget deadlines are continuously being shortened and only a small number of budget requests become orders. So, there is a clear need to develop a quick and reliable tool for the design of
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
deep drawing tools that integrates the empirical knowledge of the designers and craftsmen
with a more scientific understanding achieved by a theoretical and experimental analysis of
the deep drawing process.
However, the challenges do not end with the order. After that phase, it is necessary to design the final plan, making use of modelling capabilities for the optimisation of the operative parameters, linked with the knowledge of the mechanical, anisotropic and formability properties of the material.
Finally, there is the phase of building and testing the deep drawing tool. That phase usually includes the production of acceptance parts or even a pre-series run. During the try-out of the tools some corrections are frequently made, usually based on the craftsmen's empirical knowledge and accumulated skill. No record is kept of the arising problems and of the solutions found.
In the past years a large effort has been put into the development and application of finite element codes for the simulation of the deep drawing process. Despite all of these developments, the design, as well as the tool try-out, is for the most part still done by process engineers in the traditional way. In order to take advantage of these "two worlds" (the more scientific knowledge of the process, achieved by theoretical modelling, and the know-how acquired by the technicians), a user-friendly system that integrates all of these benefits was envisaged.
The development of an integrated tool/methodology to face these challenges is the main objective of a three-year project recently launched by the authors and financially supported by JNICT. A Computer Aided Process Planning (CAPP) system that interfaces with a finite element code and a database with the experimentally determined mechanical, anisotropic and formability properties of the material will enable the preliminary and final design of the deep drawing tools from the geometrical or physical model of the part. The CAPP system will include the capability to store, in a multimedia environment, a record of all of the above-mentioned procedures, including the experimental analysis of the strains imposed on the part and the try-out results. This integrated system will lead to the capability of producing sound parts at the first attempt and in a shorter period of time. Simultaneously, an improvement in design and product quality will be achieved.
In process design, the goal is to define the most adequate process sequence that will transform the original blank geometry into the final part. This target is very up-to-date and requires a large and continuous development effort, as can be noticed in the continuing works of Tisza [1-3] and Karima [4, 5], for example. All of these works have as their main objective to transform the present "art" of tooling development, based on the technological skills developed by the artisans, into an experience-enhanced, science-based technology to be used by practitioners [6].
Fig. 1 – structure of the PROESTAMP system: part geometry (typical parts, external CAD systems, CMM/physical model), material properties database (mechanical, anisotropic, formability), multimedia database, product and process parameters (lubricants, die and punch corner radius), process development, standard tool components, presses and auxiliary equipment, force determination, circle grid analysis.
The process design begins with the definition of the geometry of the part to be produced. This task can be done in different ways:
- for typical parts, Fig. 2, the user selects the desired shape and gives the required dimensions;
- importing the part geometry from external CAD systems, using neutral interfaces (DXF and IGES at the moment);
- drawing directly in PROESTAMP; nevertheless, this capability does not intend to substitute a CAD system, but only to take advantage of the drawing capabilities implemented in the Edit functionality;
- from a physical model, a Coordinate Measuring Machine (CMM) will scan the model and export the data to a CAD system.
At any stage the user can change the part geometry and the system automatically updates all of the computations already done.
The next step is to compute the initial blank geometry, Fig. 3. This is done assuming
volume constancy in plastic deformation and average thickness constancy in deep drawing.
So, the initial blank is obtained by surface constancy.
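For a plain cylindrical cup, surface constancy gives a closed-form blank diameter. A minimal sketch of that computation (flat-bottomed cup, corner radii and trim allowance ignored, which the real system would have to account for):

```python
import math

def blank_diameter(d, h):
    """Initial blank diameter for a cylindrical cup of mean diameter d
    and wall height h, by surface-area constancy:
    pi*D0^2/4 = pi*d^2/4 + pi*d*h  =>  D0 = sqrt(d^2 + 4*d*h)."""
    return math.sqrt(d * d + 4.0 * d * h)

print(round(blank_diameter(40.0, 30.0), 2))  # 80.0
```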
At this moment the user shall indicate the material type. All the relevant data should be available in the material database; if not, the user can update the material database.
At this moment the system is able to propose the sequence of operations necessary for producing the part. The main criterion applied in this module is the use of the Limit Drawing Ratio (LDR) of the material. At the same time, all the relevant process parameters are taken from the database and a forecast of the forces involved is made. These tasks are performed under the preliminary tool design module.
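The LDR criterion lends itself to a simple sketch: the number of drawing operations follows from repeatedly applying the limit ratio until the target punch diameter is reached (the limit values used here are illustrative defaults, not material data from the paper):

```python
def drawing_stages(D0, d_final, ldr_first=2.0, ldr_redraw=1.3):
    """Number of drawing operations needed to reach punch diameter
    d_final from blank diameter D0, given a limit drawing ratio for
    the first draw and a smaller limit ratio for each redraw."""
    stages, d = 1, D0 / ldr_first    # first draw at the full LDR
    while d > d_final:
        d /= ldr_redraw              # each redraw reduces the diameter
        stages += 1
    return stages

print(drawing_stages(D0=120.0, d_final=40.0))  # 3
```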
Fig. 3 – initial blank computation (example: a = 100.00 mm, H = 70.00 mm).
Once this stage is reached, the system offers the user the possibility of keeping to the traditional way and moving directly to the final tool design, or of optimising the sequence of operations and the values of the process parameters. Since more and more FE programs become available for the analysis of the deep drawing process, the main idea is to implement a pre- and a post-processor module for an FE program, ABAQUS for example. The main criterion for the optimisation procedure is the comparison between the predicted strain distribution on the part and the Forming Limit Curve (FLC) of the material.
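The optimisation criterion can be sketched as a point-by-point comparison of predicted strains against the FLC; the linear FLC used below is purely illustrative, not a measured curve:

```python
def below_flc(strains, flc):
    """Check whether each predicted strain pair (minor, major) lies
    below the forming limit curve, given as a function
    major_limit = flc(minor)."""
    return all(major < flc(minor) for minor, major in strains)

# Illustrative FLC: major limit 0.35 at plane strain, rising on both sides
flc = lambda minor: 0.35 - 0.5 * minor if minor < 0 else 0.35 + 0.3 * minor
predicted = [(-0.10, 0.30), (0.05, 0.25)]
print(below_flc(predicted, flc))  # True
```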
Once the preliminary tool design is finished with this optimisation procedure, the final design consists mainly of interfacing the available geometric and operating data with a CAD system for the final design and with a CAPP/CAM system for planning the manufacturing of the tool.
Finally, the circle grid analysis and the FLC will be used during the try-out of the tool. All of these procedures will be registered and documented using multimedia facilities, including the video recording of tool details and of the main occurrences during the try-out.
3. CONCLUSIONS
A very ambitious three-year project, currently in its first half-year, for the development of a computer-assisted system for the design of deep drawing tools was presented.
This type of tool is very attractive because industrialists have to cut time to market and need to make very realistic quotations in a short time.
Since the system has so many modules, each of larger or smaller degree of difficulty, we decided to emphasise the development of the total philosophy, from the very beginning until the end, with the easiest possible cases, and afterwards to include the more complex cases, such as complex shapes.
The system will always offer the user the possibility of working in the traditional way he is used to, incorporating all of his know-how, as well as the possibility of gradually introducing a more scientific approach.
The most interesting particularity is that the designer will be able to design the whole process at the computer without needing to look for recommended values in manuals.
ACKNOWLEDGEMENTS
This project is sponsored by JNICT (Junta Nacional de Investigação Científica e Tecnológica), which is gratefully acknowledged.
REFERENCES
1. Tisza, M.: A CAD System for Deep-drawing, 2nd ICTP, Stuttgart, 1987, 145.
2. Tisza, M.: An Expert System for Process Planning of Deep-drawing, 4th ICTP Conference, Beijing, 1993, 1667.
3. Tisza, M.; Racz, P.: Computer Aided Process Planning of Sheet Metal Forming Processes, 18th Biennial Congress of IDDRG, Lisboa, 1994, 283.
4. Karima, M.: From Stamping Engineering to an Alternative Computer-Assisted Environment, Autobody Stamping Technology Progress SP-865, USA, 1991, 97.
5. Karima, M.: Practical Application of Process Simulation in Stamping, Journal of Materials Processing Technology, 46, 1994, 309.
6. Keeler, S.P.: Sheet Metal Stamping Technology - Need for Fundamental Understanding, Mechanics of Sheet Metal Forming, Plenum Press, N.Y., 1978, 3.
KEY WORDS: Metal Forming, Rapid Tooling, Layered Tools, FEM Simulation
ABSTRACT: The paper presents the concept of layered-construction metal forming tools. Such a
tool design allows the adaptation of its geometry when a set of products has to be manufactured.
Composing a forming tool from existing tool steel plates with different thicknesses enables
quick tool geometry changes. A decrease of tool production times and costs is possible with
computer-aided decision systems and through the use of existing tool plates.
The concept of layered forming tools is shown with the deep drawing of a set of thick
rotational cups. The determined real tool geometry has been estimated with FEM simulations and
compared with experiments to validate the FEM model and to verify the proper choice of tool
geometry.
1. INTRODUCTION
Nowadays industrial production tends towards the shortening of production times and
towards the ability to react quickly to market demands. Modern manufacturing technologies and computer-aided systems offer manufacturers increased production flexibility,
which is still very low in metal forming processes. Forming tools are mainly designed for
one product, which restricts manufacturing adaptability due to high tool costs. To fulfil the
demands for flexibility in metal forming, special tooling concepts have been developed.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
These tools are based on a layered or laminated structure, which enables fast and inexpensive manufacturing of tools (e.g. with laser beam cutting) [1,2,3] and increases their geometrical adaptability [4]. They can also be designed with a segmented structure to enable
changing of tool geometry by setting the tool segments into appropriate positions [5].
Another possibility is a geometrically adjustable pin structure [6] or tools with elastic
elements. All the above-mentioned tools are mainly used in prototyping and small-quantity
production, where there is great demand for fast, inexpensive and/or adaptable tooling
systems.
The paper presents a possibility of increasing the geometrical adaptability of forming
tools based on a layered tool structure, analysed using the deep drawing process. The
conventional tools used in this forming process are clearly unsuitable for any geometrical
adaptations caused by changes in product geometry.
2. FLEXIBILITY OF DEEP DRAWING TOOLS
The adaptation of tool geometry in deep drawing tools was realised with the development of laminated tools in the Forming Laboratory of the Faculty of Mechanical
Engineering, University of Ljubljana, Slovenia. The lamellas for the tools were laser beam
cut from 1 mm thick sheet steel. Manufacturing of these tools is very fast: the manufacturing times were five times shorter than for tools produced with conventional manufacturing
technologies [4]. A forming analysis of deep drawing with laminated tools without
blankholder for rotational and non-rotational cups from thick sheet steel was performed. It
showed that the laminated structure allows the replacement of a particular tool part (a worn
or fracture-damaged part of the drawing die) or even the use of different materials in the
same die.
Laminated tools also enable the adaptation of tool geometry according to the demands of
the forming process. Without larger design or manufacturing efforts some lamellas can
be changed or removed and new tool parts can be added between existing lamellas [4]. The
removal of some lamellas makes possible the manufacture of a limited set of geometrically
similar products. On the other hand it allows one to control and optimise the drawing
process by changing the punch-die clearance.
The active die geometry of the designed laminated tool is prescribed by a drawing
curve, which in the analysed tool has the form of a tractrix curve in order to achieve
minimal drawing forces. The tool geometry reduces the flexibility in terms of producing a
defined product geometry which does not match the existing die openings achievable by
removing some lamellas. An additional problem for tool flexibility is the stiffness of the
lamellas, which is very low in comparison to hardened tool steel. The low stiffness of the
lamellas requires a thicker calibration part, which receives increased local strains at the end
of the forming process and has to be changed on each variation of the bottom die opening.
The products chosen for the process analysis are rotational cups with outer diameters of
55, 60 and 65 mm made from 5 mm low-carbon deep drawing steel (DIN RSt13). FEM
simulations were performed for all three cup dimensions and experiments for the first two.
In order to increase the tool's flexibility and stiffness and to achieve the defined cup
diameters, a set of tool plates was developed to replace a part of the laminated tool. For
[Figure: tool parameters and decision criteria (geometrical fitting to the ideal tool
geometry, tool material, clamping system, positioning system), leading to the forming tool.]
drawing ratio of βmax = 2.8 (in special cases up to 3.25 [10]) and forming forces achieved
with a tractrix inlet profile could not be reached.
The FEM simulations of deep drawing were performed in two phases. In the first
phase the reference die with an inner die profile in the form of a tractrix curve was simulated
with two different die models, rigid and elastic (see fig. 2). A comparison of both models has
shown nearly identical results for forming forces, cup geometry and stress conditions. The
stress condition in the elastic model of the die did not indicate any local stresses which
would exceed the critical values (σdie,max < 320 N/mm² von Mises). Based on these
comparisons the rigid tool model was chosen for further simulations to shorten the
calculation times. All simulations were performed with the DEFORM 2D program [11] with
automatic mesh generation and remeshing.
Fig. 2: FEM model of rigid and elastic die with inlet shape in the form of a tractrix curve,
with force-travel diagrams (force up to 200 kN over a travel s of 0-120 mm) for both models.
The second phase of simulation was established to ascertain the proper choice of tool
geometry. Different combinations of tool plates and punches with different diameters were
analysed, where punch and die were treated as rigid objects. In order to reach fast fulfilment
of the convergence criterion, to shorten the calculation times and to avoid separation
problems on contact surfaces, the tool surfaces of all die parts were simplified. The
geometry of the tool plates was changed in the top and bottom sequence of the inner die
profile, where a fillet with a 5 mm radius has been chosen to blend the cone geometry of
neighbouring plates (see fig. 3). These simplifications have to be considered when
comparing the FEM simulation with the experimental results of deep drawing.
4. TOOL-SET DESIGN
The tooling set is designed with the following boundary conditions:
- the die plate opening is defined through the outer cup geometry,
- the geometry of the broken cone should best fit an ideal drawing profile in the form of a
tractrix curve,
- the dimensions of the tool plates should fit the tractrix die profile (see above condition)
in their input and output diameters (⌀70, ⌀65, ⌀60, ⌀55 mm),
- the geometry change from tractrix profile to broken cone profile should not increase the
die thickness,
- the replacement of each part of the die has to be fast and simple to perform.
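The fitting of cone segments to an ideal tractrix profile can be sketched numerically. The fragment below uses the standard tractrix parametrisation and the plate transition diameters quoted above; the inlet diameter of 70 mm, the bisection helper and all other details are illustrative assumptions, not the paper's actual design procedure.

```python
import math

def tractrix(a, t):
    """Point on a tractrix with radial offset parameter a [mm].
    z runs along the drawing axis, r is the radial offset above the
    die-opening radius; r -> 0 as t -> infinity."""
    z = a * (t - math.tanh(t))
    r = a / math.cosh(t)
    return z, r

def z_at_radius(a, r_target, t_hi=10.0, tol=1e-9):
    """Depth at which the tractrix offset has decayed to r_target
    (bisection on the monotonically decreasing r(t))."""
    lo, hi = 0.0, t_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tractrix(a, mid)[1] > r_target:
            lo = mid
        else:
            hi = mid
    return tractrix(a, lo)[0]

# Illustrative numbers: die inlet diameter 70 mm, opening 55 mm, so the
# radial offset decays from a = (70 - 55) / 2 = 7.5 mm along the depth.
a = 7.5
# Plate transition diameters from the boundary conditions: 70, 65, 60, 55 mm.
for d in (65.0, 60.0):
    r_off = (d - 55.0) / 2.0        # offset above the opening radius
    print(f"d = {d} mm -> depth z = {z_at_radius(a, r_off):.2f} mm")
```

The depths at which the profile passes each plate diameter would, under these assumptions, set the plate thicknesses of the broken-cone approximation.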
The designed set of tool plates is based on partial replacement of lamellas in laminated drawing dies to achieve better production accuracy and/or stiffness on particular parts
of the die. As simple a combination of tool plates as possible was achieved, starting from
the smallest outer cup diameter (⌀55 mm); each next bigger cup diameter in the set can be
produced by removing a single tool plate. In fig. 4 the tool flexibility achieved by removing
tool plates is shown together with the corresponding product geometry simulated with the
FEM program for each punch-die combination and with a constant blank diameter of
D0 = 100 mm.
Fig. 3: Tool geometry of the FEM model and of the experimental tool (real vs. simulated
geometry).
The stiffness of the tool plates, which results from their material (the plates are made of
hardened tool steel, the lamellas from steel sheets with 0.61% C), heat treatment and
thickness, which is 10-20 times larger in comparison with the lamellas, allows the use of
these plates also as bottom clamping plates.
The top and bottom geometry of the drawing profile of each plate is made with
transitional radii to decrease the local stresses in them (see fig. 3). The inlet and outlet radii
in the cone profile influence the forming process, which is evident from the experimentally
determined forming forces.
Fig. 4: Tool flexibility achieved with changing of tool plates (realised combinations and
other solutions).
The production of cup sets requires, in addition to the changing of the die opening, also the
changing of the punch diameter. In the presented work the increase of tool flexibility was
limited to the die design, and conventional drawing punches were used. Research on
tooling sets for fast and inexpensive punch geometry changes using special front plates is
in progress.
5. EXPERIMENTAL VERIFICATION
The experimental verification of the flexibility of the newly designed deep drawing tool
was performed with three punch-die combinations, whereas another three combinations
were only simulated with the FEM program. The chosen die-punch-blank diameter combinations are presented in Tab. 1.
For the blank material a flow curve of σf = 690.1 φ^0.221 [N/mm²] and an anisotropy of
r = 0.78 were used.
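Assuming the flow curve has the usual Hollomon form σf = C·φ^n with the constants given above, the drawing ratios β = D0/d of the cup set can be checked against the maximal drawing ratio βmax = 2.8 quoted earlier. This is an illustrative sketch, not part of the original analysis.

```python
# Drawing ratio beta = D0 / d for each cup in the set, compared with the
# maximal drawing ratio beta_max = 2.8 quoted for tractrix dies.
D0 = 100.0                       # constant blank diameter [mm]
beta_max = 2.8

for d in (55.0, 60.0, 65.0):     # outer cup diameters of the set [mm]
    beta = D0 / d
    print(f"d = {d} mm: beta = {beta:.2f} ({'ok' if beta < beta_max else 'too high'})")

def flow_stress(phi, C=690.1, n=0.221):
    """Flow stress [N/mm^2] at true strain phi (Hollomon-type fit)."""
    return C * phi ** n

print(f"sigma_f(0.5) = {flow_stress(0.5):.0f} N/mm^2")
```

All three drawing ratios stay well below βmax, consistent with the cups being drawable in a single stroke.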
The comparison of forming forces acquired by FEM simulation and experimentally
has shown some differences between the two methods. In fig. 5 two cups with different
outer diameters made with the presented flexible tooling set are shown. The experimental
force-travel diagrams show a greater decrease of the forming force when the workpiece has
passed the contact areas between two tool plates (fig. 6, indicated by an arrow). On the
other hand the FEM simulation did not show these force decreases, due to the
simplification of the rigid tool geometry model.
6. CONCLUSIONS
ratio and maximal drawing ratio for a particular tool geometry, in order to build up reliable
decision-making criteria for achieving the maximal drawing ratio with a chosen tool set,
will be the theme of future work.
ACKNOWLEDGEMENTS
We would like to thank the Ministry of Science and Technology, which through project
No. 12-7064-0782-96 supported the realisation of the presented work.
7. REFERENCES
1. Nakagawa, T.: Applications of Laser Beam Cutting to Manufacturing of Forming Tools -
Laser Cut Sheet Laminated Tool, 26th CIRP International Seminar on Manufacturing
Systems, LANE '94, 12-14 Oct. 1994, Erlangen, Germany, p. 63-80.
2. Kuzman, K.; Pepelnjak, T.; Hoffmann, P.: Flexible Herstellung von
Lamellenwerkzeugen mittels Laserstrahlschneiden, Blech Rohre Profile, 41 (1994) 4, p.
241-245 (in German).
3. Franke, V.; Greska, W.; Geiger, M.: Laminated Tool System for Press Brakes, 26th
CIRP Int. Seminar on Manufacturing Systems, LANE '94, Erlangen, Germany, p. 883-892.
4. Kuzman, K.; Pepelnjak, T.; Hoffmann, P.; Kampuš, Z.; Rogelj, V.: Laser-cut Sheets - one
of the Basic Elements for a Low Cost Tooling System in Sheet Metal Forming, 26th CIRP
International Seminar on Manufacturing Systems, LANE '94, Erlangen, Germany, (invited
paper), p. 871-882.
5. Nielsen, L.S.; Lassen, S.; Andersen, C.B.; Grønbæk, J.; Bay, N.: Development of a
flexible tool system for small quantity production in cold forming, 28th ICFG Plenary
Meeting, Denmark, 1995, p. 4.1-4.19.
6. Kleiner, M.; Brox, H.: Flexibles, numerisch einstellbares Werkzeugsystem zum Tief-
und Streckziehen, Umformtechnik, Teubner Verlag, Stuttgart, 1992, p. 71-85 (in German).
7. Balic, J.; Kuzman, K.: CIM Implementation in Forming Tools Production, Proc. of 2nd
Int. Conf. on Manufacturing Technology, Hong Kong, 1993, p. 361-366.
8. Brezocnik, M.; Balic, J.: Design of an intelligent design-technological interface and its
influence on integrational processes in production, Master Thesis, University of
Maribor, 1995, 91 p.
9. Kampuš, Z.; Kuzman, K.: Experimental and numerical (FEM) analysis of deep drawing
of relatively thick sheet metal, J. Mat. Proc. Tech., 34 (1992), p. 133-140.
10. Kampuš, Z.: Optimisation of dies and analysis of longitudinal cracks in cups made by
deep drawing without blankholder, 5th ICTP, October 1996, Ohio, USA, (accepted paper).
11. Scientific Forming Corp.: DEFORM 2D - Ver. 4.1.1, User's Manual, 1995.
INTRODUCTION
deformation can be performed at room temperature instead of at hot forging temperature. For
these reasons the laboratory tests are faster, easier and less expensive than a sub-scale
production process. Furthermore, the dies utilised in the test, reproducing the geometry
of the real dies, can be manufactured in resin, aluminium or Plexiglas (in the case of waxes
and plasticine) or carbon steels (in the case of lead).
Investigations based on model materials can be focused on different aspects, such as flow
behaviour (die filling, defect recognition), forming load requirements [4], parting line
location, flash design, die attitude optimisation, billet location, etc.
Fig. 1 The equipment for the reconstruction of forces and moment over the forging cycle
The direct analysis of the attitude of the resultant force and the mapping of its application
point can suggest modifications of die attitude, parting line location and billet positioning,
in order to reduce the lateral forces and moment acting on the dies.
cycle allow the recognition of
A comparison among simulations obtained using alternative forming dies gives information
on the effectiveness of solutions adopted in die design [7, 8] relevant to die filling, forming
load reduction, defect elimination and flash minimisation. Defects in material flow and in
die filling can be recognised by visual inspection of model material preforms and by using a
multi-colour layered billet. This approach proves particularly useful in the preliminary
phases of process design, when alternative solutions should be rapidly evaluated in order to
determine the optimal one, without manufacturing expensive die sets and testing them at
operative conditions.
Extreme care should be taken when the load of the real forming operation has to be
Fig. 2 Multi-step true stress - true strain curve of the wax (model material); true stress up
to about 0.14 N/mm² over true strain 0 to 1.6.
As concerns model material characterisation, cylindrical specimens have been upset; good
lubrication conditions should be assured in order to minimise barrelling, otherwise the
state of stress becomes triaxial. When the true strain (ε) is above 0.4, lubrication is no
longer effective; therefore, the characterisation test should be split into a number of steps,
each one performed to a strain of less than 0.4, reconstructing the lubricating film before
each step. The resulting multi-step true stress - true strain curve is presented in Fig. 2.
3. APPLICATION EXAMPLE
Physical simulation using a model material is applied to the study of a hot forged crane link.
This newly designed large crane link for earth moving machines (see Fig. 3) will be produced
on a three-stage vertical hot former. The main difficulties in forging this crane link concern
the die attitude and forming forces that are too high compared with the press loading
capacity. The material is 35MnCr5 steel forged in the range 1200-1250 °C. The forging
sequence dies used in the physical simulation are shown in Fig. 4; a flash trimming stage,
not shown, ends the sequence. The starting billet is 100x100 mm (square section), 350 mm long.
Fig. 3 Top view of the new-design crane link
Fig. 4 Dies for the simulation of the hot forged crane link (3 forming steps: preforming,
blocking and finishing)
In order to investigate both the die attitude and the required forging force, dies for the
physical simulation have been NC manufactured in resin at half scale with respect to the
dies designed for the real process.
As concerns the model material, a wax has been used which offers a behaviour similar to
that of the real material. In Fig. 5 the true stress - true strain curve of the wax is compared
with the true stress - true strain curve of 35MnCr5 steel (T = 1200 °C, strain rate 11 s⁻¹).
The curves of this steel have been obtained using the Gleeble 2000 thermo-mechanical
simulator in the range of temperature, strain and strain rate present in the process.
Fig. 5 Comparison of the true stress - true strain curve of 35MnCr5 steel (T = 1200 °C,
strain rate 11 s⁻¹) with the wax curve (steel stress on the left axis, 0-60 N/mm²; wax
stress on the right axis, 0-0.14 N/mm²; true strain 0-0.9).
Fig. 6 History of the three force components (Fx, Fy, Fz) over the die stroke [mm] (force
axis from -1500 to 19500 N).
A direct analysis of the forces, torque and application point plots gives information on the
correctness of the parting line definition, die orientation and billet positioning. The lateral
forces (Fx, Fy) in the finishing die, shown in Fig. 6, are negligible, and the main
contribution to the resultant force is due to the Fz component. The moments My and Mz
(see Fig. 7) are low due to the facts that the die is symmetric with respect to the y axis
(My ≈ 0) and the lateral forces (Fx, Fy) are negligible (Mz ≈ 0). The presence of the
moment Mx can be explained by the fact that the dies' parting line is not in a single plane.
[Fig. 7: History of the three moment components Mx, My, Mz [kN mm] over the die
stroke, 82-94 mm.]
(1/2 · Vol · ρ · v²)model / (1/2 · Vol · ρ · v²)real = (Vol · σ0 · ε)model / (Vol · σ0 · ε)real
where ρ is the density, v is the speed, Vol is the volume of the workpiece, σ0 is the flow
stress and ε is the strain.
The fulfilment of the similitude conditions allows the determination of the load for the real
process (Freal) on the basis of
Freal(ε) = Fmodel(ε) · σ0,real(ε) / σ0,model(ε)
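This load-scaling relation can be sketched in a few lines. The flow-stress values below are illustrative readings of curves like those in Fig. 5, not the paper's measured data.

```python
# Scale the measured model load to the real process via the flow-stress
# ratio at the same strain, F_real = F_model * sigma_real / sigma_model.
def scale_load(F_model, sigma_real, sigma_model):
    """All three arguments evaluated at the same true strain."""
    return F_model * sigma_real / sigma_model

# Illustrative numbers at some strain: model force 15 kN, wax flow stress
# 0.10 N/mm^2, steel flow stress at 1200 C roughly 55 N/mm^2.
F_real = scale_load(15.0, 55.0, 0.10)
print(f"F_real = {F_real:.0f} kN")
```

The geometric scale of the dies also enters the similitude conditions in general; this fragment only shows the flow-stress part of the scaling.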
The second approach [9], which gives a very approximate estimation, is based on the
following assumption: the maximum force is reached at the end of the forming process,
when the dies are filled and the flow of material is essentially located in the flash. In this
case the following relation can be used
Fmax = Kf · A · σf
where A is the projection area in the forming direction including the flash, Kf is the
complexity factor and σf is the material flow stress.
The complexity factor mainly depends on the geometry of the dies and preforms. Taking
into account this simplification, the Kf factor can be considered independent of the material
and it can be determined as
Kf = Fmax,m / (Am · σf,m) = 5.8
using the Fmax,m obtained from the physical simulation and σf,m as the flow stress of the
model material. The maximum force in the real process (Fmax,r) results to be
Fmax,r = Kf · Ar · σf,r = 28900 kN
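The complexity-factor estimate can be reproduced numerically. Here the real flow stress is an assumed illustrative value and the projection area is back-calculated from the quoted result, since neither is stated explicitly in the text.

```python
# Complexity-factor load estimate, F_max = K_f * A * sigma_f.
K_f = 5.8                     # from the physical simulation, as quoted
sigma_f_real = 55.0           # assumed steel flow stress [N/mm^2]
F_max_real = 28900.0          # kN, the result quoted in the text

# Back out the projection area implied by the quoted numbers.
A = F_max_real * 1e3 / (K_f * sigma_f_real)      # [mm^2]
print(f"implied projection area A = {A:.0f} mm^2")

# Forward check: the relation reproduces the quoted maximum force.
F_check = K_f * A * sigma_f_real / 1e3           # back to kN
print(f"F_check = {F_check:.0f} kN")
```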
CONCLUSIONS
A recent application of the physical simulation technique to the analysis and modelling of
hot forging operations has been presented. The paper has illustrated the equipment
developed and the procedure utilised in the investigation of hot forming by physical
modelling, which includes the monitoring of the force-and-moment history over the forging
cycle of complex parts. The application of this technique to the forging of a crane link has
been presented, demonstrating the power of this approach in the preliminary phases of
process design compared with expensive sub-scale trial production or time-consuming F.E.
simulations.
REFERENCES
1. Altan, T. and Lahoti, G.D.: Limitations, Applicability and Usefulness of Different
Methods in Analysing Forming Problems, Trans. of ASME, May 1970
2. Ferguson, H.F.: Fundamentals of Physical Simulation, DELFT Symposium, December
1992
3. Wanheim, T.: Physical Modelling of Metalprocessing, Procesteknisk Institut,
Laboratoriet for Mekaniske Materialeprocesser, Danmarks Tekniske Højskole, Denmark,
1988
4. Altan, T., Henning, H.J. and Sabroff, A.M.: The Use of Model Materials in Predicting
Loads in Metalworking, J. Eng. for Ind., 1970
5. Bariani, P.F., Berti, G., D'Angelo, L., et al.: Some Progress in Physical Simulation of
Forging Operations, II AITEM National Conference, Padova, September 1995
6. Bariani, P.F., Berti, G., D'Angelo, L. and Guggia, R.: Complementary Aspects in
Physical Simulation of Hot Forging Operations, submitted to 5th ICTP Conference, Ohio,
USA, 1996
7. Pihlainen, H., Kivivuori, S. and Kleemola, H.: Die Design in the Extrusion of Hollow
Copper Sections Using the Model Material Technique, J. Mech. Work. Tech., 1985
8. Myrberg, K., Kivivuori, S. and Ahlskog, B.: Designing Preforming Dies for Drop
Forging by Using the Model Material Technique, J. Mech. Work. Tech., 1985
9. Schey, J.A.: Introduction to Manufacturing Processes, McGraw-Hill, 1988
L. Francesconi
University of Ancona, Ancona, Italy
A. Grottesi, R. Montanari and V. Sergi
University "Tor Vergata", Rome, Italy
1. INTRODUCTION
The Italian Mint Service (Istituto Poligrafico e Zecca dello Stato) has developed two
different cycles for the production of commemorative coins: the normal cycle and the proof
one (Fig. 1). The proof cycle is used when a mirror-like surface is requested for the final
product.
Ingots (70 mm wide, 15 mm high and 300 mm long) are produced by continuous casting.
The microstructure of the ingots is not homogeneous and shows evident segregation of Cu
at the surface. In order to reduce segregation phenomena the ingots are annealed for 6 hrs at
600 °C in air (stage 2, normal cycle) or their surface is milled, removing 0.5 mm from each
side (proof cycle).
Following the normal cycle, the ingots are then cold rolled (total height reduction Δh = 12.5
mm), annealed for 6 hrs at 600 °C in air and again cold rolled into sheets 1.6 mm in thickness.
Blanks are cut from the sheets, annealed for 6 hrs at 600 °C in air, pickled with H2SO4 and
then hemmed.
The stages of the proof cycle are: cold rolling (Δh = 12.4 mm), annealing for 1.5 hrs at
600 °C under inert atmosphere, pickling with H2SO4 and hemming.
The total height reduction was obtained with a multi-pass 2-high non-reversing mill; the
stock is returned to the entrance of the rolls for further reduction by means of a platform.
[Flow chart, normal cycle: casting, annealing (6 hrs at 600 °C), rolling (from 15 to 2.5
mm), annealing (6 hrs at 600 °C), rolling (from 2.5 to 1.6 mm), annealing (6 hrs at
600 °C), coinage. Proof cycle: casting, milling (0.5 mm from each side), rolling (from 14
to 2.5 mm), annealing under inert atmosphere (1.5 hrs at 600 °C).]
Figure 1. Flow-chart showing the stages of the normal and proof cycles.
[Table: number of passes for each rolling stage.]
Before coining, the blank surface is enriched in Ag with respect to the mean chemical
composition, due to the milling (proof cycle) or to the pickling, which removes the Cu
oxide scale formed during the thermal treatments in air (normal cycle).
The microstructure evolution and the mechanical characterization during each stage of the
two cycles were the object of a previous work [1]. It was found that after the normal cycle
the hardness of the material is slightly higher and the microstructure less homogeneous,
but despite these differences the blanks show a better formability, which results in a longer
life of the dies.
This work aims to evaluate the grain orientation after continuous casting and its evolution
during processing.
2. EXPERIMENTAL
The studied alloy has the following chemical composition (wt. %): Ag 83.5 %, Cu 16.44 %,
P 0.06 %.
The alloy consists of Ag-Cu eutectic grains in an Ag-rich matrix [1].
For X-ray diffraction (XRD) measurements a SIEMENS D5000 diffractometer equipped
with an Euler cradle has been employed with Ni-filtered Mo-Kα radiation (λ = 0.71 Å). XRD
patterns of bulk specimens were collected by step scanning with 2θ steps of 0.005° and a
counting time of 20 s per step. The texture has been evaluated from the (111), (200) and
(220) pole figures of the Ag- and Cu-rich phases by the reflection method. The intensity
measurements have been performed in the ranges 0° < χ < 70° and 0° < φ < 355°, with a
step size of 5° and a counting time of 5 s for each step. The data have then been corrected
for background and defocusing. The experimental data were elaborated by a series expansion
method in order to obtain computed pole figures covering the whole χ range up to 90°.
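The size of the measurement grid follows directly from the stated tilt and azimuth ranges and step size; the small sketch below only recomputes those stated parameters (the derived total time is an arithmetic consequence, not a figure given in the text).

```python
# Pole-figure measurement grid: chi from 0 to 70 deg and phi from 0 to
# 355 deg, both in 5 deg steps, with 5 s counting time per point.
chi_steps = 70 // 5 + 1          # 15 tilt values
phi_steps = 355 // 5 + 1         # 72 azimuth values
points = chi_steps * phi_steps   # grid points per pole figure
time_h = points * 5 / 3600       # hours of counting per pole figure
print(f"{points} points, ~{time_h:.1f} h per pole figure")
```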
3. RESULTS AND DISCUSSION
XRD patterns of the samples after each stage of the two cycles are shown in Fig. 2.
The ingots can be considered texture-free, since the relative intensities of the reflections of
the Ag-rich and Cu-rich phases are similar to those of a sample with randomly oriented
grains.
The texture development and evolution after each stage of the two cycles may be explained
as the result of two different processes: the cold-rolling deformation and the subsequent
recrystallization due to the thermal treatments.
Figure 2. XRD patterns (intensity vs. 2θ, 15-40°) of the samples after each stage of the
normal and the proof cycles.
2. The final grain orientation is different in the two cases. Before coining, the normal cycle
gives rise to a {111} texture, whereas a mixed texture with a strong {110} component is
produced by the proof one.
3. The {111} texture of the blanks obtained by the normal cycle is preferable for the
subsequent coining.
Figure 3. Ag {111} pole figure after: (a) stage 3, (b) stage 4, (c) stage 5 and (d) stage 6.
Marked orientations: (110)[112], (110)[001], (110)[111], {111} pole (the other poles lie
on the circle at 70° from the central pole).
Figure 4. {111}, {200} and {220} pole figures of the Ag-rich phase after stage 8 (RD:
rolling direction).
4. As already found by other authors, the Ag-Cu eutectic was found to have orientation
relationships close to: {100}Ag ∥ {100}Cu and <010>Ag ∥ <010>Cu.
ACKNOWLEDGEMENTS
The authors would like to thank Dr. A. Tulli (I.P.Z.S. Italian Mint Service) for providing
the material.
REFERENCES
1. Grottesi, A., Montanari, R., Sergi, V., Tulli, A.: Studio Microstrutturale della Lega Ag
835 nei Vari Stadi del Processo di Fabbricazione delle Monete presso la Zecca dello Stato,
Proceedings 3rd AIMAT, in press
2. Calnan, E.A.: Deformation Textures of Face-Centred Cubic Metals, Acta Metallurgica, 2
(1954), 865-874
3. Dillamore, I.L., Roberts, W.T.: Rolling Textures in F.C.C. and B.C.C. Metals, Acta
Metallurgica, 12 (1964), 281-293
4. Smallman, R.E., Green, D.: The Dependence of Rolling Texture on Stacking Fault
Energy, Acta Metallurgica, 12 (1964), 145-154
5. Honeycombe, R.W.K.: Anisotropy in Polycrystalline Metals, in: The Plastic Deformation
of Metals, Edward Arnold, London, 1984, 326-341
6. Cantor, B., Chadwick, G.A.: Eutectic Crystallography by X-Ray Texture Diffractometry,
Journal of Crystal Growth, 30 (1975), 109-112
7. Davidson, C.J., Smith, I.O.: Interphase Orientation Relationships in Directionally
Solidified Silver-Copper Eutectic Alloy, Journal of Materials Science Letters, 3 (1984),
759-762
INTRODUCTION
Some recent trends in metal forming, for instance forging of complex geometries and/or
with close tolerances, highlight the importance of studying interface phenomena: friction,
wear and heat transfer. Recently at DIMEG a research work has been started on these topics.
Heat transfer is a process which heavily influences tool behaviour at work. Tool hardness
rapidly decays when the temperature is higher than the selected recovery temperature. For
this reason it is desirable that heat transfer between workpiece and tool be low in forging
processes. According to lubricant producers' desiderata, even a qualitative standard test
could be useful to compare the behaviour of different lubricants as concerns the thermal
barrier effect. In spite of these considerations and of the remarkable work already developed
[1-4], reliable data on heat transfer between tools and workpiece in forming processes are
not available.
Direct measurement of the heat transfer coefficient is not possible; therefore indirect
methods have been developed, usually combining different techniques. When experimental
tests are used, the starting point is the measurement of temperature through thermocouples
at selected points inside tools and workpiece. Locating measurement points inside the
workpiece is difficult during forging tests because the hot junctions shift from their original
positions due to material deformation. As a consequence, a temperature vs. time diagram
obtained from a test does not refer to a point but to a point path.
The present work relates to a first set of experiments developed to evaluate the heat transfer
coefficient at the tool-specimen interface under forging conditions. Some simplifications
were used. (i) 1100 aluminium was chosen as the specimen material to reduce the test
temperature and to allow the use of low-cost and easy-to-work tool materials, in this case
AISI 304 stainless steel. (ii) Specimens are elastically deformed, so point shifts inside the
specimen are so small that the positions of the measuring points inside the specimen can be
considered known throughout the test.
2. PROCEDURE
To evaluate the heat transfer coefficient between punch and specimen, an inverse analysis
technique is used, based on a combination of experimental tests and numerical simulations.
The test chosen for this aim is the upsetting of a round cylindrical specimen between flat
punches.
In this test the heat transfer is supposed to be one-dimensional, from the specimen to the
punches [1,2,3], that is to say the temperature is uniform on each plane parallel to the punch
faces (cross sections). With this assumption two phenomena are neglected: lateral cooling
and the distribution of the heat transfer coefficient at the interface. Lateral cooling
determines a radial temperature gradient in the components (punches and specimen), its
influence increasing in slow tests. The pressure distribution is not uniform in this kind of
test. It has been shown [5] that higher pressures determine higher values of the heat transfer
coefficient. As a consequence, the temperature distribution close to the specimen end
surfaces is also not uniform.
Locations for the hot junctions were chosen taking into account the following requirements:
- thermocouples should not be too close to each other, to reduce mutual influence,
- thermocouples should be far from the lateral surface and close to the interface, to reduce
the influence of lateral cooling.
In addition, all thermocouples were set at the same distance from the axis, to reduce the
influence of the pressure distribution at the interface.
Four thermocouples are used, two inside the specimen and two in one punch.
The procedure is made up of 5 steps, listed below.
1. Conduction of the experimental test, whose output is the four temperature vs. time diagrams given by the thermocouples.
2. Extraction of a "first attempt value" for the heat transfer coefficient, to be used in the numerical simulation. The "first attempt value" is based on the hypothesis of one-dimensional heat flow and a constant axial temperature gradient inside the components.
3. Development of the numerical simulation and derivation of the temperature vs. time diagrams in "control points" corresponding to the thermocouple locations.
4. Comparison between experimental and numerical diagrams, to decide whether the heat transfer coefficient should be increased or reduced in the numerical simulation.
5. Development of a new simulation using the new value of the heat transfer coefficient.
The procedure ends when good agreement between experimental and numerical results is reached.
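The iterative part of the procedure (steps 3-5) can be sketched as a simple search on K. This is an illustrative outline only: `run_fem_simulation` stands in for the actual FEM model, and all names and the step-halving update rule are our assumptions, not the paper's implementation.

```python
# Sketch of the five-step calibration loop described above.

def first_attempt_k(q_flux, t_specimen_surface, t_punch_surface):
    """Step 2: first attempt value of K from 1-D heat flow,
    K = q / (t_ss - t_sp)."""
    return q_flux / (t_specimen_surface - t_punch_surface)

def calibrate_k(k_initial, t_experimental, run_fem_simulation,
                tolerance=1.0, max_iterations=50):
    """Steps 3-5: adjust K until the simulated control-point
    temperature matches the experimental thermocouple reading."""
    k = k_initial
    step = 0.5 * k_initial
    for _ in range(max_iterations):
        t_simulated = run_fem_simulation(k)       # step 3: simulate
        error = t_simulated - t_experimental      # step 4: compare
        if abs(error) < tolerance:
            break
        # A higher K moves heat faster across the interface and cools the
        # specimen-side control point: if the simulation is too hot,
        # increase K; if too cold, reduce it (step 5), halving the step.
        k = k - step if error < 0 else k + step
        step *= 0.5
    return k
```

With a monotone surrogate in place of the FEM model, the loop converges in a few iterations; in the real procedure each call is a full simulation of the compression step.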
[Fig. 1. Testing procedure: heating step (specimen between the heating punches) and compressing step (sliding plates bring the compressing punches to face the specimen).]
EXPERIMENTAL SET-UP
A Gleeble 2000 high temperature testing system was used to conduct the experimental tests. The characteristics of the system useful for this work are (i) the possibility to conduct both mechanical and thermal tests, (ii) the possibility to choose stroke and temperature control, (iii) an integrated acquisition and registration system for thermal and mechanical data, and (iv) its control software, suitable to command the movement of non-standard devices.
Specimens are heated by the Joule effect, the resulting temperature distribution consisting of isothermal cross sections inside the specimen. Two independently controlled hydraulic pistons (stroke and wedge) are used to deform the specimens, the maximum load
being 20 tons. Up to six thermocouples may be connected to the acquisition system, four type-K (chromel-alumel) and two type-S (platinum-platinum rhodium). In the presented tests four type-K thermocouples were used.
To assure good test reproducibility, special equipment designed by the authors was used. A feeding ram is used for correct specimen positioning. Two sliding stainless steel plates are inserted between the hydraulic pistons and the punches. Their movement is driven by a software-controlled pneumatic piston. The plates are water cooled for high temperature tests. The resulting assembly allows the use of two different couples of punches: one copper couple (electrodes) for heating the specimens and one stainless steel couple for deforming them. This solution was chosen because heating by the Joule effect raises the temperature of the punches as well as of the specimen. Otherwise the punches would reach much higher temperatures, and the operating conditions during deformation would be farther from industrial forging conditions.
The testing procedure is shown in Fig. 1. The specimen is first carried by the feeding ram between the electrodes (heating step). Here it is heated to 430 °C, a usual forging temperature for aluminium. At the end of the heating step, the specimen is kept in position by the ram while the plates slide to face the punches to the specimen. The last steps are compression to the programmed stroke and specimen unloading.
NUMERICAL SIMULATION
[Schematic: specimen and punch exchanging the heat flux q at the interface.]
The heat flux per unit area exchanged at the interface is a function of the temperature difference between the two contacting surfaces:
q/A = f(ts1 - ts2)
where qs and qp denote the heat fluxes in the specimen and in the punch. The heat transfer coefficient is then defined as:
K = q / (tss - tsp)
where tss and tsp are the surface temperatures of the specimen and of the punch at the interface.
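The interface model of this section, a heat flux proportional to the temperature jump between the contacting surfaces, can be illustrated with a minimal explicit finite-difference sketch of the specimen-punch pair. The material data below are illustrative round numbers for aluminium and steel, not the paper's values, and the discretisation is ours.

```python
import numpy as np

def simulate_pair(k_interface, t_spec0=430.0, t_punch0=20.0,
                  n=10, dx=2e-3, dt=0.01, steps=1000):
    """Explicit 1-D conduction in specimen and punch, coupled at the
    interface by the flux q = K * (t_ss - t_sp)."""
    alpha_s, lam_s = 8.4e-5, 200.0   # aluminium diffusivity m^2/s, conductivity W/m K (illustrative)
    alpha_p, lam_p = 1.2e-5, 45.0    # steel punch (illustrative)
    spec = np.full(n, t_spec0)
    punch = np.full(n, t_punch0)
    for _ in range(steps):
        q = k_interface * (spec[-1] - punch[0])   # interface flux, W/m^2
        ns, np_ = spec.copy(), punch.copy()
        # interior nodes: plain explicit diffusion
        ns[1:-1] += alpha_s * dt / dx**2 * (spec[2:] - 2 * spec[1:-1] + spec[:-2])
        np_[1:-1] += alpha_p * dt / dx**2 * (punch[2:] - 2 * punch[1:-1] + punch[:-2])
        # far ends treated as insulated
        ns[0] += alpha_s * dt / dx**2 * (spec[1] - spec[0])
        np_[-1] += alpha_p * dt / dx**2 * (punch[-2] - punch[-1])
        # interface nodes: conduction from the neighbour plus the exchanged flux q
        ns[-1] += dt * (alpha_s * (spec[-2] - spec[-1]) / dx**2 - q * alpha_s / (lam_s * dx))
        np_[0] += dt * (alpha_p * (punch[1] - punch[0]) / dx**2 + q * alpha_p / (lam_p * dx))
        spec, punch = ns, np_
    return spec, punch
```

Running the sketch for two values of K reproduces the behaviour the calibration relies on: a higher K cools the specimen face and heats the punch face faster.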
[Figure: temperature (°C) vs. time (s) diagrams recorded by the thermocouples, 0-450 °C over 0-200 s.]
[Figure: temperature vs. time curves during the compression step, 136-158 s.]
RESULTS
The results of the experimental tests are sets of four temperature vs. time diagrams relevant to the thermocouple positions; an example is shown in Fig. 4. Each diagram is made up of four steps: (i) specimen heating, (ii) temperature homogenisation for 20 seconds, (iii) elastic compression, and (iv) keeping in position for 30 seconds.
The operating conditions of the tests were as follows:
- hot junctions 6 mm from the axis, dry interface, die initially at room temperature (T1);
- hot junctions 10 mm from the axis, dry interface, die initially at room temperature (T2);
- hot junctions 6 mm from the axis, dry interface, die starting from 75 °C (T3);
- hot junctions 6 mm from the axis, interface lubricated with MoS2, die starting from room temperature (T4).
Each test was conducted three times; data dispersion is within 9%. FEM was used to simulate the compression step, so the temperature vs. time diagrams of the control points derived from the simulations must be compared with the corresponding part of the experimental diagrams. One comparison is shown in Fig. 5, the percentage temperature difference being in all cases within 5%. Due to the high thermal conductivity of aluminium, the curves relevant to the specimen overlap. Details of the results can be found in [7].
Table 1 summarises the values of the heat transfer coefficient calculated for the different operating conditions.
The coefficient has similar values in T1 and T3. In T2 the thermocouples are closer to the lateral surface, so one possible explanation of the very high value of K could be the influence of lateral cooling inside the specimen, which has a much higher thermal conductivity than the punch. As concerns the value relevant to T4, it has been demonstrated also in other kinds of test that the heat transfer coefficient increases when the interface is lubricated; this result is confirmed in unpublished reports of other researchers working in the same field.
Test                                  K (W/m2 °C)
T1 (d=6 mm, dry int., room temp.)     3050
T2 (d=10 mm, dry int., room temp.)    6700
T3 (d=6 mm, dry int., 75 °C)          2700
T4 (d=6 mm, MoS2, room temp.)         4500
1. INTRODUCTION
This paper gives an analysis of the component forces which appear in the drawing of conical parts. In view of the complexity of the deforming process itself, the existing literature gives numerous solutions for the forces, based upon various approximations [1, 2].
This paper presents the procedure for determining the maximal component forces in the rotary drawing of conical parts with respect to the "sine law" (s1 = s0·sinα0) and with respect to a deviation from it.
The component forces are determined on the basis of the stresses found in the close neighborhood of the stressed zone and of the areas involved in the transmission of the respective component forces [3, 4]. In addition, the positions of the maximal component forces during the rotary drawing process are determined.
2. FORCES AT s1 ≠ s0·sinα0
A detailed analysis of the stress/deformation state in rotary drawing is given in Ref. [3]. Fig. 1 shows the meridian stresses during the deforming process, which have been used as the basis for determining the component forces. For the case s0 > s1 ≠ s0·sinα0 the immediate deforming zone consists of two zones, namely the first zone (I), where the reduction is done with respect to the diameter, and the second zone (II), in which the reduction is done with respect to the thickness.
[Fig. 1. Meridian stresses in the deforming zones I and II.]
The maximal component force in the direction of the axis p (FP) appears at the moment of the total grasp of the shaping arbor radius (R), which is experimentally proved [3]. The pressure roll path along the cone generating line, from the moment of touching the workpiece until the maximal force in the direction of the axis p is reached, is equal to:
(1)
[Fig. 2. Component forces (section A-A).]
The maximal stress in the direction of the axis p at the exit from zone II (reduction with respect to both workpiece diameter and thickness) is given by [3, 4]:
σpIImax = σpImax + 1.15·KIIsr·[1 + (μ/sinα)·(1 + ln(s0/s1))]·ln(s0/s1)·(1 + μγ)   (2)
where:
σpImax - maximal meridian stress at the exit from zone I (Fig. 1, 2),
KIsr, KIIsr - mean deformation resistances in zones I and II,
Ri, di, sn - instantaneous radius, diameter and thickness of the cone,
di = 2ri = d1 + 2R·tg(90° - αR)   (3)
αR = arccos[(ρw + s1)/(ρw + s0)],
γ0 = 90° - αR - α0,
FPmax = σPmax·AP   (4)
AP = [d1·dw/(d1 + dw)]·(v/n)·s1   (5)
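For illustration, the geometric quantities αR and γ0 appearing above can be evaluated numerically; the values of ρw, s0, s1 and α0 below are assumed for the example, not taken from the experiments in [3].

```python
from math import acos, degrees

rho_w = 6.0    # pressure roll radius, mm (assumed)
s0 = 3.0       # initial wall thickness, mm (assumed)
s1 = 1.5       # final wall thickness, mm (assumed)
alpha0 = 30.0  # cone half-angle, degrees (assumed)

# grasp angle of the pressure roll radius
alpha_R = degrees(acos((rho_w + s1) / (rho_w + s0)))
# auxiliary angle from the reconstruction above
gamma0 = 90.0 - alpha_R - alpha0
```

For these values αR is about 33.6° and γ0 about 26.4°; note that αR grows as the thickness reduction s0 - s1 grows.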
The maximal force in the direction of the axis q appears at the end of the process, that is, at the moment of disappearance of zone I:
σQ = 1.15·KIIsr·{[1 + (μ/sinα)·(1 + ln(s0/s1))]·ln(s0/s1) + sinα}   (6)
where the pressed surface AQ is determined from dw, d1, the pressure roll path (v/n), ρw, R and the angle α (8), with d1 the diameter of the cone at the end of the process and (v/n) the pressure roll path along the cone generating line; the maximal force component in the direction q is then:
FQmax = σQmax·AQ   (9)
Fτ = στ·Aτ   (10)
with Aτ given by (11) and
στ = (σP + σQ)/2   (12)
For the case s1 = s0 the maximal force component in the direction of the axis p (FP) appears at a distance:
(13)
σPmax = [1.1·KIsr·ln( ... /r1) + KIsr·s0/(2ρw + s0)]·(1 + μγ)   (14)
AP = [dw·d1/(d1 + dw)]·(v/n)·s0   (15)
The maximal force in the direction of the axis q appears immediately before the end of the process [3]. At that moment the stress is σP ≈ 0, so from an approximate plasticity condition the maximal stress in the direction q (Fig. 2) is obtained:
σQmax = σP + 1.15·Ksr = 1.15·Ksr   (16)
If the pressed surface in the direction of the axis q is taken to be:
AQ = √(2ρw)·[dw·d1/(d1 + dw)]·(v/n)   (17)
the maximal component force in the direction of the axis q is:
FQmax = σQmax·AQ   (18)
If we take into consideration that a plane deformation state appears during deforming, then the expression for the tangential stress is:
στ = (σP + σQ)/2   (19)
whereas the tangential force component is:
Fτ = στ·Aτ   (20)
where:
Aτ = √(2ρw·(v/n))   (21)
σQ = 1.15·KIIsr·{[1 + (μ/sinα)·(1 + ln(s0/s1))]·ln(s0/s1) + sinα}   (22)
According to equation (22) the stress has a constant value during the process. The experimental research performed [3] shows that the component forces gradually increase during the deforming process. This increase of the component forces is due to the increase of the contacting surface (d'1 + d''1). The greatest contacting surface occurs immediately before the end of the process and, consequently, the component forces are then the greatest.
The maximal component force in the direction of the axis p is:
FPmax = σP·AP   (23)
The area involved in the force transmission in the direction of the axis p at the very end of the process is:
(24)
The component forces in the directions of the axes q (FQ) and t (Fτ) can be determined from the same equations (7, 8, 9, 10, 11, 12), taking into consideration the constancy of the stress as well as the small increase of the forces due to the increase of the contacting surface between the pressure roll and the working cone.
The above expressions can be used for rotary drawing with pressure rolls having a radius (ρw) or a cone at the top (clearance angle α). The connection with the pressure roll radius is given by the equation:
diagrams were recorded for the conical parts whose thickness deviates from the "sine law", and especially for various materials and process parameters [3].
In order to view more clearly the course of the changes of the component forces, the diagrams are given versus the pressure roll path. Fig. 3 gives the diagrams of the component forces for a conical part when s1 ≠ s0·sinα0, whereas Fig. 4 gives the force diagrams for rotary drawing with respect to the "sine law" (s1 = s0·sinα0).
[Fig. 3, 4. Recorded diagrams of the component forces versus the pressure roll path.]
5. CONCLUSION
On the basis of the results obtained by theoretical elaboration as well as by experimental research, the following conclusions can be drawn:
- values of the maximal component forces (FP, FQ and Fτ) determined by theoretical analysis and measured experimentally from the recorded diagrams agree very well,
- positions of the component forces' maxima arrived at by theoretical analysis agree with the experimental values, since the theoretical assumptions are based on the analysis of the component forces during the experiment,
- the course of the component force changes in the recorded diagrams agrees with the theoretical assumptions (in view of the fact that sudden changes for particular processes could not be involved),
- the maximal component force in the direction of the axis p, for the case of a deviation from the "sine law", appears at the distance:
l0 = (s0/cosα0)·tgα0 + (ρw + s0 + R)·tgα0 + ρw·sinα0 + ρw·cosα0
measured from the moment of touching the workpiece,
- the maximal value of the component force in the direction of the axis q appears immediately before the end of the process, for hk = 0,
- the tangential force component has an approximately constant value during the deforming process,
- in the rotary drawing of conical parts with respect to the "sine law" there is a sudden increase of the component forces at the very beginning of the process, after which they retain an approximately constant value once the process is firmly established,
- a negligible increase of the component forces in rotary drawing with respect to the "sine law" appears due to an increase of the contacting surface (d'1 + d''1) during the deforming process.
1. INTRODUCTION
The Bor Copper Institute has for several years been developing technology for continuous wire casting, as well as for profiles of small cross-sections made of pure metals and their alloys, by crystallization above the molten metal. The first aim of developing such continuous casting technology is to meet the demands of the lacquer wire factory of Bor, that is, to manufacture a cast copper wire of 8 mm in diameter that can be directly subjected to cold plastic treatment without any previous hot treatment operations. In addition to the fact that it does not need to be treated in the hot state before drawing, the wire has to have good plastic properties so that it can undergo a high degree of reduction.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
V. Stoiljkovic et al.
In this way it can be used for fine drawing to diameters below 0.1 mm. In order to develop such technology, a great deal of experimental research is needed to define the effect of a great many factors upon casting velocity and copper wire quality. Part of this research is presented in this paper.
What has been tested is the effect of many casting parameters upon the process stability and the cast copper wire's quality, in order to separate the parameters that most affect the casting velocity and the wire's plastic properties. The determination of these factors forms the basis for new research aiming at increasing casting velocity and cast wire quality. The results of the research presented in this paper can also serve as the basis for choosing optimal parameters for obtaining the best quality wires, as well as for designing a new structure of the cooler. Besides, a new solution can be obtained for the wire drawing device in order to obtain much greater capacity and quality.
2. CASTING PROCEDURE
The continuous copper wire casting procedure developed at the Bor Copper Institute is one of the continuous casting procedures by crystallization above the molten metal (1).
[Fig. 1. Scheme of the continuous casting device.]
cooled by water. The hardened wire leaves the graphite die at high temperature. Within the cooler, vacuum is used in order to prevent the oxidation of the cast wire surface due to the high temperature. In addition to this function, the vacuum provides the necessary pressure differential within the cooler, enabling the molten metal to enter the graphite die. In order to prevent oxidation of the cast wire after it leaves the cooler, the temperature on its surface must be below 60 °C (2). This is provided by cooling the cast wire in the crystallizator's secondary part. The cast wire drawing is done according to a motion-pause pattern.
The process stability is ensured by adjusting the wire drawing velocity to the rate at which heat is led away from the wire's side surface.
2. DEFINING CASTING PROCESS AND WIRE QUALITY FACTORS
The process of continuous copper wire casting by crystallization above the molten metal is based on agreement between the thermodynamic parameters and those of cast wire motion (3). Both the hardening character and the cast wire quality are directly influenced by the degree of the achieved agreement. The determination of the optimal continuous casting regime by calculation is very difficult, because a series of unknown values appear that cannot be contained within one formula. Their definition requires experimental research.
The factors affecting the process stability and the cast wire quality are divided into the following five groups:
1. Parameters of the Crystallizator Cooling Water,
2. Parameters of Cast Wire Motion,
3. Structural Parameters,
4. Vacuum in the Casting Cooler, and
5. Parameters of the Molten Metal.
For the experimental needs, six measuring places have been determined at which the process input parameters (11) as well as the cast wire quality parameters (4) are recorded. Samples have been taken for every change of any casting parameter, as shown in Table 1.
[Table 1: recorded casting parameters and wire quality parameters.]
The casting parameters that cannot be shown in their real form, for the sake of protecting the technology, are denoted as Motion Parameters 1 and 2, Crystallizator Parameters A, B and C, and Vacuum Parameters (such as the time for drawing the wire in one cycle, the time for the wire's rest in one cycle, the value of the cross-section for crystallizator water-cooling, the structural parameter that is supposed to increase the crystallization front lifting, the structural parameter that increases the crystallizator's length, as well as the vacuum intensity). The feed value that cannot be shown in its real value is expressed through the drawing rolls' diameter. The shown values for this group of parameters are given in this way so as not to change the functional course of the obtained dependencies between the cast wire plastic properties and these values. The overall number of the performed experiments was 250; some of them are given in Table 2.
3. NEURAL NETWORKS
The basic aim of the obtained experimental results is to model the observed wire casting system. Since the system is described by means of many concrete input-output values, neural networks impose themselves as an ideal modelling method, primarily because of their machine-learning abilities.
[Fig. 2. Structure of a neuron and of the network: Oj = F(Σ wij·xi); F: activation function; feedforward and recurrent connections.]
The network acquires knowledge (learns) by adjusting the connections' weights in the network training process, which takes place according to the chosen learning algorithm. In the learning process, so-called training samples are used, that is, predetermined pairs of input and output values serving as the basis for the weights' adjustment.
Designing the neural network structure involves the following tasks:
- determination of the network structure: network topology (number of layers, number of neurons and their connections) and activation function,
- choice of the learning algorithm,
- choice of the training samples.
3.1. NEURAL NETWORK STRUCTURE
The basic task of the neural structure is to capture the way the tested characteristics of the cast wire depend on the variable casting parameters. On the basis of the Kolmogorov convergence theorem, a two-layer neural network (in addition to the input layer) has been applied, that is, a multilayer perceptron. The network has 11 neurons in the input layer (11 input parameters P1 - P11) and 4 neurons in the output layer (4 output parameters I1 - I4). The number of neurons in the hidden layer is determined experimentally. The neurons' activation function is sigmoidal. The network is trained with the same set of samples for a varying number of neurons in the hidden layer, and the best results (the least error) are achieved in the case of 12 neurons in the hidden layer. With fewer neurons in the hidden layer the network is not able to converge. With more neurons it only memorizes the samples (input-output pairs) and loses its generalization ability.
3.2. NEURAL NETWORK TRAINING ALGORITHM
Since neural networks acquire their knowledge about the problem by learning, the task of the training process itself is to enable the network to detect dependencies among the processed data, while simultaneously bridging the gap between particular examples and general relations and dependencies. The trained network models the mapping of a set of input data vectors (process parameters) into a set of output vectors (copper wire characteristics) and thus represents a model of the copper wire generation.
For the network training, a backpropagation algorithm has been used (4), based on backwards propagation of the error, as the most commonly used training algorithm for mapping networks.
As for the algorithm parameters (4), which cannot be discussed here for lack of space, it is important to note that the initial network weights are initialized at the value 1/(2 · number of neurons in the previous layer), while the learning rate is set at 0.3.
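The network and training rule described above can be sketched as follows: a minimal 11-12-4 sigmoid perceptron trained by backpropagation with learning rate 0.3. The uniform initialisation in the range ±1/(2·n_prev) is our reading of the stated initial value, and all class and function names are ours, not the authors'.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Sketch of the 11-12-4 multilayer perceptron described in the text."""

    def __init__(self, n_in=11, n_hidden=12, n_out=4, lr=0.3, seed=0):
        rng = np.random.default_rng(seed)
        scale_h = 1.0 / (2 * n_in)      # 1/(2 * neurons in previous layer)
        scale_o = 1.0 / (2 * n_hidden)
        self.w1 = rng.uniform(-scale_h, scale_h, (n_in, n_hidden))
        self.w2 = rng.uniform(-scale_o, scale_o, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.w1)   # hidden layer activations
        self.o = sigmoid(self.h @ self.w2)
        return self.o

    def train_step(self, x, target):
        """One backpropagation step on a single training sample."""
        o = self.forward(x)
        delta_o = (o - target) * o * (1 - o)                 # output-layer error
        delta_h = (delta_o @ self.w2.T) * self.h * (1 - self.h)
        self.w2 -= self.lr * np.outer(self.h, delta_o)
        self.w1 -= self.lr * np.outer(x, delta_h)
        return float(np.sum((o - target) ** 2))
```

Repeated `train_step` calls over the 250 experimental samples would drive the overall error below the stopping threshold mentioned below.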
3.3. CHOICE OF TRAINING SAMPLES
In order to obtain the necessary training samples, numerous experiments have been carried out in real conditions with the following parameters:
a) Casting parameters (system input values and, at the same time, neural network input values), shown in Table 1 in the first 11 columns;
b) Output characteristics of the manufactured wire (system output values and, at the same time, neural network output values), shown in the last 4 columns of Table 1.
The overall number of performed measurements is 250; namely, 250 various parameter combinations have been observed as affecting the copper wire quality. The test results are only points in a multidimensional space (11 input variables), though it is necessary to define the whole space. However, the values obtained as test results alone cannot lead us to any conclusion about the way particular parameters, as well as their mutual effects, influence the copper wire quality. Not even experts with great experience in this field are able to determine what manufacturing conditions are needed for obtaining copper wire of a particular quality. In practice the trial method is applied, a non-efficient and non-economical procedure since it wastes both material and precious time. Therefore, processing the obtained data by regression analysis and acquiring analytical expressions are not sufficient to make any reliable conclusion about the effect of the discussed factors. It is for this reason that knowledge acquisition from the known data is sought in the application of artificial intelligence, that is, in the use of neural networks.
The average number of iterations needed for the neural network to learn the given mapping is 100000. It is not pre-determined; instead, the network is trained until the overall error falls below 10.
In other words, the trained neural network represents a model of the copper wire production, described by the empirical connections between the system inputs and outputs. The neural network behaves as an adaptive system, since it learns by self-adjustment; when the learning process is over, it identifies and simulates the system for producing copper wire. This procedure is known as forward modelling, the mapping of the direct system dynamics. The result of forward modelling is a system model that identifies the observed nonlinear system in an extremely accurate and effective way. This means that the model is well suited to various experiments with the system behaviour; for instance, experiments aimed at discovering what wire characteristics will be obtained for arbitrarily chosen casting parameters. Likewise, the prediction can be output-input oriented, that is, oriented towards determining the system input parameter values that would give optimal output values, i.e. quality. This problem can also be formulated in another way: if the desired output values are given, then the values to be fed to the system inputs should be determined. This problem belongs to so-called inverse modelling.
4. INVERSE MODELLING
Inverse system modelling plays an important part in a wide class of control structures. There are many procedures for inverse modelling; they are discussed briefly in (5).
1. The simplest and, at the same time, most tiresome procedure is manual adjustment of the input values with control of the output values. Fine input adjustment is performed in order to obtain the desired output. This procedure is time-consuming, especially if the input is multidimensional.
2. A better procedure is so-called direct inverse modelling, based on the same training procedure as in forward modelling, though the roles of the system inputs and outputs are exchanged. Namely, the training samples are synthesized so that the outputs of the original training samples are brought to the network input: the system outputs are measured, whereas the system inputs are expected at the network output. Clearly, such a structure tends towards an effective mapping of the system's inverse model. This procedure, however, has some shortcomings, primarily when the system-defining mapping is not one-to-one, since then different combinations of the input signals can produce identical outputs.
3. The third inverse modelling procedure, which can overcome all the above-listed shortcomings, is known as specialized inverse learning.
The idea is quite clear from the mathematical standpoint. Let X and Y be the inverse and the direct system models, and let a, b and c be system inputs or outputs.
Since the inverse and direct models are mutually symmetrical, the number and character of the inverse network's inputs correspond to the number and character of the direct network's outputs and vice versa (only the numbers of hidden neurons need not be identical, since they depend on the training samples), so it is possible to couple them in such a way that one network's outputs are brought to the other's inputs.
If these are really complementary models (direct and inverse), then what is brought to the inverse model's input appears at the direct model's output. Thus this network combination has to realize the identity mapping, and in this sense it can be trained under supervision. This does not present any special difficulty.
It is important to note that, since the direct network has already been trained, only the weights belonging to the inverse network are adjusted, while the error used for adjusting the weights is propagated backwards through the direct model as well.
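The coupled arrangement can be sketched as follows. For clarity the trained, frozen direct model is replaced here by a known linear map y = A·x, so that "propagating the error through the direct model" is simply multiplication by the transpose of A; only the inverse model's weights W are adjusted. All names and values are illustrative, not the authors' implementation.

```python
import numpy as np

A = np.array([[0.5, 0.1, 0.0],
              [0.2, 0.4, 0.3]])  # stand-in for the trained (frozen) direct model: y = A @ x

def train_inverse(A, lr=0.05, epochs=5000, seed=1):
    """Specialized inverse learning: train W so that direct(inverse(y)) = y."""
    n_out, n_in = A.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, (n_out, n_in))   # inverse model: desired outputs -> proposed inputs
    for _ in range(epochs):
        y_goal = rng.uniform(0.0, 1.0, n_out) # desired system output (goal)
        x_hat = y_goal @ W                    # inverse model proposes system inputs
        y_hat = A @ x_hat                     # frozen direct model maps them to outputs
        err = y_hat - y_goal                  # error measured at the direct model's output...
        grad_x = A.T @ err                    # ...and propagated back through the direct model
        W -= lr * np.outer(y_goal, grad_x)    # only the inverse weights are adjusted
    return W

W = train_inverse(A)
```

After training, feeding a goal output y through the inverse model and then through the direct model reproduces y, i.e. the pair realizes the identity mapping, which is exactly the supervised criterion described above.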
5. OPTIMIZATION OF CAST COPPER WIRE PLASTIC PROPERTIES
Since forward modelling of the wire casting system is described in Section 3, the previously presented solutions can be used for obtaining the inverse system model that provides for generating quality in development. They can also be used to predict the above-listed process parameters in order to achieve the desired copper wire characteristics. The desired output values were determined from experience during the development of the technology, as the characteristics of a high-quality copper wire:
- Relative Elongation A5 ≥ 40%
- Results of Bending Test ≥ 70
- Results of Alternate Bending ≥ 10.
Procedure 3 from the previous section has been applied for the inverse modelling; procedure 1 has been used for testing. The obtained results matched each other to satisfactory accuracy.
The neural network structure used is shown in Fig. 3.
Namely, the inverse neural network's outputs are brought to the trained direct neural network. The direct network's outputs are compared with the inverse network's inputs, and by the above-described backpropagation algorithm the training is performed until the identity mapping is achieved.
[Fig. 3. Coupling of the inverse and the trained direct neural networks: the GOALS are brought to the inverse network, whose ACTIONS feed the inputs of the direct network.]
6. CONCLUSION
The newly formulated knowledge contained in the neural networks enables further development of new copper wires with pre-determined characteristics. Important results have thus undoubtedly been obtained for applying this technology, without which there is no high-quality drawn wire. They have confirmed, in the best possible way, both the importance and the possibilities that exist in the generation of product quality during development.
REFERENCES
[1] Murty, V.Y., Mallard, R.F.: Continuous casting of small cross sections, AIME, New York, 1981
[2] Hobbs, L., Ghosh, N.K.: Manufacture of copper rod, Wire Industry, 1 (1986), 42-44
[3] Bahtiarov, R.A.: Vlijanie temperaturi i skorosti litja na strukturu i svojstva slitkov splavov na mednoj osnove, Cvetnie metalli, 1 (1974), 68-71
[4] Fu, L.: Neural Networks in Computer Intelligence, McGraw-Hill, New York, 1994
[5] Miller III, W.T., Sutton, R.S., Werbos, P.J.: Neural Networks for Control, MIT Press, Cambridge, MA, 1991
M. Jurkovic
University of Rijeka, Rijeka, Croatia
1. INTRODUCTION
Flexible automated manufacturing has recently gained importance, particularly for small and medium batches of part production. We are faced with stronger requirements for fast adaptation to market conditions, for a higher level of accuracy and reliability of machining systems, and for raising small production batches to a higher quality level. Contemporary machining systems are required to provide flexibility and productivity simultaneously, and those conditions are at present met only by flexible machining systems (FMS). Flexible machining systems for profile and wire manufacturing are still insufficiently researched, developed and applied in practice [1, 2, 3, 4, 5]. On that account, it is necessary to dedicate particular attention to the determination of optimal
technology and to the development of FMS for complex profile and wire manufacturing. The first FMS, FLEXIPROF 40-7, presented in this work, was designed at the Mechanical Faculty and was put into trial operation in 1988 [6, 7]. This work deals with the structural design, technology, operating principle and control model of the FMS.
2. ELEMENTARY TECHNOLOGICAL BASE FOR DEVELOPMENT AND DESIGN OF FMS
2.1. Group technology and classification systems for rolling and drawing profiles
To realize a sufficient level of productivity together with flexibility, it is necessary to organize the manufacturing program assortment into homogeneous groups of profiles having technological and manufacturing similarity (Fig. 1).
The classification system and the profile classifier are the elementary technological base for the design of technologies and flexible technological modules (FTM). The classifier is defined by numerical code characters (Fig. 2), which enables computer-aided selection of materials, tools and technological methods (CAPP). The application of integral classification systems in describing profile production has a special importance in the technological preparation of production and in product design (CAD).
[Fig. 1. Homogeneous groups of profiles. Fig. 2. Structure of the numerical code of the profile classifier.]
Figure 3. Illustration of flexible technological modules (forming stations) for rolling and for roll drawing with 2, 3 and 4 tools
ΣFd = 0   (2)
Figure 4. Profile forming models: a. matrix drawing model, b. roll drawing model, c. rolling model
The limiting degree of deformation is used as a practical means in the determination of the formability limit of materials:
φe,limiting = f(σij) = f(3σm/σe) = f(I1/√(3·J2))   (3)
where:
σij - stress tensor,
σm - mean hydrostatic stress,
σe - effective stress,
I1 = σ1 + σ2 + σ3 = 3σm - first invariant of the stress tensor,
J2 = (1/6)·[(σ1 - σ2)² + (σ2 - σ3)² + (σ3 - σ1)²] = σe²/3 - second invariant of the deviatoric stress tensor.
As in cold forming processes of axially symmetric profiles φ2 = φ3 = -0.5·φ1 (that is, φ1 = -2·φ3), the equivalent strain can be represented in the form:
φe = φ1 = ln(A0/A1)   (4)
A compressive state of stress similar to that of rolling gives the maximum formability in metal forming processes:
φe,max = φe,rolling = 2.810 > φe,roll drawing = 1.650 > φe,matrix drawing = 1.520 = φe,min
The experimental values for the steel R St 42-1 (DIN 17006) are shown in Table 1.
Table 1. Results of experimental investigation of optimal technology (experimental values in %)
Num  Factor of optimization                              Matrix    Roll      Rolling  Aim function F
                                                         drawing   drawing
1    Maximal degree of deformation per machining         41.5      54.0      100     F = φms max
     station, φms
2    Total degree of deformation, φt = φe,limiting       54.1      58.7      100     F = φt max
3    Velocity of profiles, v                             60.5      85        100     F = v max
4    Number of machining stations, n                     185       160       100     F = n min
5    Strengthening of workpiece material, Δσ             130       108       100     F = Δσ min
6    Temperature of tool contact surface, t              153.8     107       100     F = t min
7    Process productivity, q                             35        68        100     F = q max
8    Energy consumption, E                               138       96        100     F = E min
9    Tool life, T                                        30        93        100     F = T max
In designing the forming process and the number of FTM, it is important to know the geometry and characteristic dimensions of the profile cross section, the initial workpiece material and the number of plastic forming phases. The number of forming phases, that is, the number of FTM, is defined by the expression:

$$n = \frac{\ln(A_0/A_n)}{\ln \lambda_m} \qquad (5)$$
The total elongation coefficient of the material is:

$$\lambda = \frac{A_0}{A_n} = \lambda_1 \lambda_2 \cdots \lambda_n = \lambda_m^{\,n} \qquad (6)$$

where:
$A_0$, $A_n$ - initial and final cross section of the workpiece,
$\lambda_1, \lambda_2, \ldots, \lambda_n$ - elongation coefficients of the forming phases,
$\varepsilon_n$ - mean strain of the workpiece,
$\lambda_m$ - mean elongation coefficient of the workpiece,
$1, 2, \ldots, n$ - forming phases.
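Equations (5) and (6) can be turned into a small sizing calculation; a minimal sketch (the cross-section areas and mean elongation coefficient below are illustrative values, not from the paper):

```python
import math

def number_of_forming_phases(A0, An, lam_m):
    """Number of forming phases (FTM) per Eq. (5): n = ln(A0/An) / ln(lam_m),
    rounded up to the next whole forming station."""
    return math.ceil(math.log(A0 / An) / math.log(lam_m))

# Example: reduce a 4:1 total cross-section ratio with a mean elongation
# coefficient lam_m = 2.0 per phase -> two forming stations are needed.
print(number_of_forming_phases(4.0, 1.0, 2.0))  # -> 2
```

Rounding up reflects that a fractional station is physically impossible; the mean coefficient per phase then becomes slightly smaller than the assumed λ_m.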
Some possibilities of forming the cross sections of profiles and wire are shown in Fig. 5.
[Figure 5: forming sequence variants A to G for profile and wire cross sections, annotated with the phase elongation coefficients λ (values from 1.0 to 4.0).]
Figure 6. Technical structure of the FMS for cold profile rolling

The main constructional-technological features of the CNC-FMS are:
- Number of technological modules: 7
- Cross section area of a product: 3 mm² to 100 mm² (maximum 150 mm²)
- Tolerances of the finished product: 0.1 to 0.02 mm
- Number of rolls per technological module: 2, 3, 4, and 6
- Production speeds: 100 m/min to 1000 m/min (maximum 1800 m/min)
- Roller diameter: 160 to 250 mm
- Electric motor power: 40 kW
5. CONCLUSIONS
The produced prototype and the CNC FMS realized in industry offer great possibilities for further development of machinery in this area of metalworking, as compared with processes having adverse principal stress schemes (matrix drawing) and with machining systems characterized by rigid conventional automation.
The research carried out shows that by using an optimal scheme of principal stress components, the formability potential of metallic materials increases substantially, which is of exceptional importance in designing technology, tools and machining systems. Thus, a similar compressive state of stress (cold rolling, φ_e,limiting = 2.81), compared with the heterogeneous state of stress (cold matrix drawing, φ_e,limiting = 1.52), increased the formability limit of the material by 185%.
The factors which determine the suitability of this FMS model are primarily of a technical nature (operational safety and reliability, unification, energy consumption, tool durability, and originality of the modular technique); of a technological nature (surface quality of the workpiece, optimal production method, accuracy of forming); of an economic nature (minimal operating costs and profitability); and of an organizational and informational nature (process control, stability and adjustment of working processes, process monitoring, ergonomic handling).
The developed flexible rolling line has a series of advantages in comparison with a conventional automatic line, i.e., a rigid automatic line:
- a high degree of automation and flexibility,
- working process productivity increased by up to 250%,
- preparatory-finishing and auxiliary times reduced by up to 50%,
- increased tool durability, that is, tool consumption reduced by up to 350%,
- increased machining accuracy,
- energy consumption reduced by up to 25%, etc.
In the following, a procedure for evaluating the clamping unit features is described, and the
transfer function of five clamping units is presented. Results of drilling experiments are
shown, revealing the behavior of the drill-clamping unit during the process, especially in
the penetration phase. The relations between the clamping unit features, the drilling
conditions and the hole-location accuracy will be demonstrated.
In milling, the role of the dynamic features of the clamping-end mill unit is demonstrated by applying two clamping units under different milling geometries. The relations between the (dynamic) cutting force excitation, the clamping dynamic features and the end-mill response are demonstrated. All the above aims at a better understanding of the role of the interface unit in high-performance machining, leading to improved cutting process planning as well as to a rational development of clamping units.
Fig. 1 Various clamping units (spring collet, chuck, screw)
Fig. 2
Fig. 3 [comparison curves for Weldon screw clamping, collet chuck and a theoretical clamp versus overhang length L (mm), 20 to 100 mm]
[Figures: drilling test records for collet and hydraulic chucks — force Fx (N), deflections ΔX, ΔY (V; 0.14 mm/V), and hole-location runout R (0.01 mm) versus spindle speed n (rpm).]
[Figures: clamping unit transfer functions — displacement/force magnitude (log scale) and phase (deg.) versus frequency (Hz), with marked frequencies between about 400 and 1240 Hz — and milling force and deflection records Fx(t), Fy(t), Dy(f) in the time and frequency domains.]
Each experiment was repeated 4-5 times, and the results were averaged.
Fig. 11 presents a qualitative description of the milling forces as measured in [5].
Fig. 9 presents the force and deflection functions of a "Weldon" chuck in slot milling, in the time and frequency domains; y - feed direction, x - perpendicular direction.
In this example one may observe the following:
a. The typical force function includes the main component at the tooth frequency (200 Hz) and smaller but still significant higher lobes (at 400 and 600 Hz). There are also a tool runout component (at 100 Hz) and some high-frequency dynamometer components.
b. The large magnification of the 400 and 600 Hz components of the exciting force, due to the clamping natural frequencies, is expressed in the deflection curves.
In Fig. 10, the force and deflection functions are shown for the "Weldon" chuck in side-wall milling. In this example, the following may be observed:
a. The force function is now different: the "y" force (feed direction) changes sign, so an additional 400 Hz (twice-tooth frequency) excitation occurs.
b. The clamping-unit transfer function amplifies this component in both the x and y directions.
The combination of milling geometry (force function shape), cutting conditions, and clamping unit features leads in this case to increased vibrations in both directions.
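The excitation frequencies discussed above follow directly from the spindle speed and the tooth count; a minimal sketch (the 6000 rpm, 2-tooth values are assumed for illustration and reproduce the 100 Hz runout and 200/400/600 Hz components mentioned in the text):

```python
def excitation_frequencies(rpm, teeth, harmonics=3):
    """Spindle (runout) frequency and tooth-passing frequency harmonics, in Hz."""
    spindle = rpm / 60.0                 # once-per-revolution (runout) component
    tooth = spindle * teeth              # tooth-passing frequency
    return spindle, [tooth * k for k in range(1, harmonics + 1)]

# Example: a 2-tooth end mill at 6000 rpm.
print(excitation_frequencies(6000, 2))  # -> (100.0, [200.0, 400.0, 600.0])
```

Comparing these frequencies against the clamping unit's transfer function indicates which force components will be magnified in the deflection response.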
In Table 1, the maximum values of force and vibration are presented. They were averaged over 4-5 experiments each, over four spindle rotations, in the time domain.
Force values in identical operations are the same for both chucks, which means that the influence of the force-deflection feedback mechanism is negligible (no chatter).
The difference in deflection between the two chucks is evident, and matches their transfer functions.
Table 1. Average maximum force and deflection values

Milling          No. of   Measured      Weldon          Collet
operation        tests    parameter     average value   average value
Slot milling              Fx - perp.    0.83            0.86
                          Fy - feed     0.68            0.68
                          Dx - perp.    0.89            0.62
                          Dy - feed     1.11            0.42
Side wall        5        Fx - perp.    0.84            0.80
milling                   Fy - feed     0.64            0.65
                          Dx - perp.    0.94            0.56
                          Dy - feed     1.03            0.40
6. REFERENCES
1. Tobias, S.A., 1965. Machine Tool Vibration. Blackie and Sons Ltd.
2. Weck, M., 1994. "Werkzeugmaschinen - Fertigungssysteme". VDI Verlag.
3. Gygax, P.E., 1980. "Cutting Dynamics and Process Structure Interactions Applied to Milling", Wear Journal, Vol. 62, pp. 161-185.
4. Rivin, E., Agapiou, J., 1995. "Toolholder/Spindle Interfaces for CNC Machine Tools", CIRP Annals, Vol. 44/1, pp. 383-387.
5. Lenz, E., Rotberg, J., Petrof, R.C., Stauffer, D.S., Metzen, K.D., 1995. "Hole Location Accuracy in High Speed Drilling. Influence of Chucks and Collets". The 27th CIRP Seminar on Design Control and Analysis of Mfg. Systems Proceedings, pp. 382-391.
6. Armarego, E.S., Deshpande, N.P., 1991. "Computerized End Milling Force Predictions with Cutting Models". CIRP Annals, Vol. 40/1, pp. 25-29.
7. Smith, S., Tlusty, J., 1991. "An Overview of Modeling and Simulation of the Milling Process", ASME Journal of Engineering for Industry, Vol. 113, pp. 169-175.
8. Rotberg, J., Shoval, S., Ber, A., 1995. "Fast Evaluation of Cutting Forces in Milling", accepted for publication in the Intl. J. of Advanced Manufacturing Technology.
9. Schulz, H., Herget, T., 1994. "Simulation and Measurement of Transient Cutting Force Signal in High Speed Milling", Production Engineering, R&D in Germany, Vol. 1/2, pp. 19-22.
[Figures: force and deflection spectra Fy(f), Dy(f) up to 850 Hz; milling force Fx (N) versus cutter position (deg.).]
KEY WORDS: Production Planning, Flexible Manufacturing Systems, Tooling, Mathematical Programming.
ABSTRACT: The problem of improving the saturation of an FMS cell is considered. Limited
tool buffer capacity turns out to be a relevant constraint. A hierarchic approach is formulated,
contemplating at the higher level the determination of "clusters", i.e., sets of parts that can
be concurrently processed. At lower levels, clusters are sequenced, linked, and scheduled.
While for the latter subproblems solution methods taken from the literature are used, two
original mixed integer programming problems are formulated to determine clusters. The
proposed methods are then discussed on the basis of computational experience carried out
on real instances.
INTRODUCTION
The present paper discusses some production programming and scheduling problems
studied for a flexible cell at the Trieste (Italy) Diesel Engine Division of Fincantieri, a
major Italian state company in the shipbuilding sector.
The Diesel Engine Division produces and assembles diesel and gas engines for ships and
electric generation plants. Its normal production is about 50 engines per year. Typical
costs for a single product range from about 3 to 6 million dollars for fast and slow
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
M. Nicolich
engines, respectively. Orders arrive about one year before the due date; then supplying raw parts requires six months, and production and assembly three months each.
The FMS cell under study is composed of four machines: two machining centers, a vertical lathe and a washer. Each machine has an input and an output buffer for parts. Also, each machine, except the washer, has a tool buffer with 144 positions. Tools are loaded and unloaded automatically. Pallets carrying parts or tools are moved automatically too.
The FMS cell is a crucial resource for the production system, as it can process parts as much as three times faster than ordinary job shops, due to its ability to comply with changes both in production mix and in lot size. However, the cost per hour of its machines is considerably higher (about 1.5 times) than the cost of a machine of an ordinary job shop. As a consequence, the FMS cell needs to be saturated as much as possible.
The cell started its production in 1991. It operates on three shifts, eight hours each, from 6 a.m. on Monday to 2 p.m. on Saturday; the night shift is not supervised by operators, thus giving about a 50% yield.
For each week from 1992 to 1995, the saturation level of the cell, defined as

saturation level = (produced parts, measured in processing hours) / (hours available for the three operating machines),

has been recorded. Although the nominal saturation level indicated by the supplier of the cell (Ex-cell-o, Germany) is 70%, the recorded value is about 50%.
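The definition above is easy to evaluate week by week; a minimal sketch (the 192 processing hours are an illustrative figure, and the 128-hour window per machine follows from the stated Monday 6 a.m. to Saturday 2 p.m. schedule):

```python
def weekly_available_hours(machines=3):
    """Hours available on the operating machines in one week:
    Monday 6 a.m. to Saturday 2 p.m., three 8-hour shifts per day."""
    hours_per_machine = 5 * 24 + 8          # 128 h from Mon 6:00 to Sat 14:00
    return machines * hours_per_machine

def saturation_level(processing_hours_produced, machines=3):
    """Saturation level as defined in the text."""
    return processing_hours_produced / weekly_available_hours(machines)

# 192 processing hours of produced parts over 384 available hours:
print(round(saturation_level(192.0), 2))  # -> 0.5
```

Note that the reduced yield of the unsupervised night shift shows up in the numerator (fewer processing hours produced), not in the available hours.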
In this section the original approach devised to improve the saturation level of the FMS
cell is discussed.
2.1
PROBLEM FORMULATION
The problem of improving the saturation level of the FMS cell has the following elements:
Input data: the monthly production plan, which specifies parts to be produced, assigns operations to machines and determines operation times;
Output decisions:
parts to be concurrently produced;
operation scheduling;
Objective: maximizing the saturation level of the cell;
Constraints: complying with the operational rules of the cell, in particular with its
limited resources, such as buffer capacities.
Concerning constraints, it turned out that a critical resource is the tool buffer capacity: since each part normally requires several dozen tools on each machine, the possibility of concurrently working on different part types is severely limited, which in practice reduces the flexibility of the cell. This is the central issue of the problem considered.
2.2
A HIERARCHICAL APPROACH
A hierarchic approach has been devised for the problem just formulated. It decomposes the whole problem into four simpler subproblems which, when solved in turn, give the solution of the whole problem. The subproblems are arranged in a hierarchic order (see for instance [1]), according to their detail level, from the most to the least aggregate. The solution of each level's subproblem provides the data for the subproblem of the following level. The subproblems are:
clustering: in this phase the set of all parts required by the production plan is partitioned into subsets ("clusters") of parts that may be concurrently processed;
cluster sequencing: in this phase the most appropriate sequence of the clusters which
are determined in the previous level is sought;
cluster linking: in this phase, according to the sequence decided in the previous level,
the transition from each cluster to the following one is determined;
scheduling parts within each cluster: in this phase each part is scheduled within
the cluster to which it was assigned in the first level.
The merit of the proposed hierarchic approach is that it allows tackling a complex problem through a sequence of simpler subproblems; on the other hand, this entails some degree of suboptimality, as the solution provided at higher levels affects the performance obtainable at lower levels and, as a consequence, for the whole problem. However, this is the price to be paid in order to deal with the whole problem, which is too complex to be affordable by any global approach.
The clustering and scheduling subproblems have been considered in detail, and will be
discussed here. The other two subproblems seemed to have less relevance, as far as
their impact on the global solution is concerned.
3 CLUSTERING MODELS
In this section the two models formulated for the clustering problem are presented. As has been pointed out, concurrent processing of different part types is limited by the capacity of the tool buffers of the operating machines. Two different models have been devised for this problem. They basically have the same decision variables and constraints, and differ only in the objective function: for a given production plan, the former minimizes the global lead time, while the latter seeks to produce a good machine balance.
The problem data for either model are as follows:
- the set P of part types to be produced in the considered horizon;
- the set M of machines;
- the set U of available tools;
- the set C of clusters;
- the aggregate production plan, in terms of the number N(p) of parts that must be produced for each part type p ∈ P;
- the processing times, in terms of the time T(p, m) required to process a part of type p ∈ P on machine m ∈ M;
- the tools needed for each part and each machine, in terms of a matrix a(p, u, m), defined as a(p, u, m) = 1 if tool u is required to process part type p on machine m, and 0 otherwise;
- the tool buffer size, in terms of the number of positions k(m) for each machine m ∈ M;
- the tool size, in terms of the buffer positions b(u) taken by each tool u ∈ U.
3.1
The decision variables are: x(p, c), the number of parts of type p assigned to cluster c; z(c, u, m), equal to 1 if tool u is loaded on machine m for cluster c, and 0 otherwise; T(m, c), the workload of machine m for cluster c; and T(c), the lead time of cluster c. The first model is:

$$\min \sum_{c \in C} T(c) \qquad (1a)$$
$$\sum_{c \in C} x(p,c) \ge N(p) \quad \forall p \in P \qquad (1b)$$
$$\sum_{p \in P} a(p,u,m)\, x(p,c) \le K\, z(c,u,m) \quad \forall u \in U,\ m \in M,\ c \in C \qquad (1c)$$
$$\sum_{u \in U} z(c,u,m)\, b(u) \le k(m) \quad \forall m \in M,\ c \in C \qquad (1d)$$
$$T(m,c) = \sum_{p \in P} T(p,m)\, x(p,c) \quad \forall m \in M,\ c \in C \qquad (1e)$$
$$T(c) \ge T(m,c) \quad \forall m \in M,\ c \in C \qquad (1f)$$
In practice, problem (1) takes a given number of clusters and assigns each part of the production plan to a cluster; furthermore, the tools to be loaded in each machine buffer are determined, complying with their capacities. Among the several possible assignments, the one minimizing the global lead time is sought. The lead time is given by processing times only, thus disregarding transportation, fixing and washing times, as they are at least one order of magnitude smaller. Thus the lead time of a cluster is the total processing time of the most loaded machine. Note that at this aggregation level, idle times due to the processing sequence are not considered.
In particular, (1e) expresses the total processing time for each cluster and each machine as the sum of the processing times of all parts assigned to that cluster on that machine. Then (1f) takes the lead time of the most loaded machine for each cluster. The global lead time is then minimized by (1a). Furthermore, (1b) guarantees that at least as many parts are produced as required by the production plan. Also, (1c) guarantees that all necessary tools are loaded on each machine (K is a suitably big number), and (1d) expresses the tool buffer capacity.
The model (1) allows finding the minimum number of clusters by solving it for increasing values of |C|: the first feasible solution gives the minimum number of clusters.
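To make the structure of model (1) concrete, here is a toy exhaustive analogue (illustrative only, not the paper's MIP or its CPLEX solution): one part per part type, unit tool size b(u) = 1, and brute-force enumeration of assignments instead of branch and bound:

```python
from itertools import product

def cluster_parts(parts, tools, times, buffer_cap, n_clusters):
    """Toy exhaustive analogue of model (1).
    tools[p][m]: set of tools part p needs on machine m (a(p,u,m) as sets);
    times[p][m]: processing time T(p, m); buffer_cap[m]: positions k(m).
    Returns (clusters, global_lead_time) or None if no assignment is feasible."""
    machines = list(buffer_cap)
    best = None
    for assign in product(range(n_clusters), repeat=len(parts)):
        clusters = [[p for p, c in zip(parts, assign) if c == k]
                    for k in range(n_clusters)]
        # (1c)-(1d): the tools of each cluster must fit in every machine buffer
        feasible = all(
            len(set().union(*[tools[p][m] for p in cl])) <= buffer_cap[m]
            for cl in clusters for m in machines)
        if not feasible:
            continue
        # (1e)-(1f): a cluster's lead time is the load of its most loaded machine
        lead = sum(max(sum(times[p][m] for p in cl) for m in machines)
                   for cl in clusters if cl)
        if best is None or lead < best[1]:
            best = (clusters, lead)
    return best

parts = ["A", "B", "C"]
tools = {"A": {"m1": {"t1"}, "m2": {"t9"}},
         "B": {"m1": {"t1", "t2"}, "m2": {"t9"}},
         "C": {"m1": {"t3", "t4"}, "m2": {"t8"}}}
times = {"A": {"m1": 2, "m2": 1},
         "B": {"m1": 3, "m2": 2},
         "C": {"m1": 2, "m2": 4}}
print(cluster_parts(parts, tools, times, {"m1": 2, "m2": 2}, 2))
```

In this instance, parts A and B share tool t1 on machine m1, so grouping them is the only way to respect the two-position buffers; part C must form its own cluster.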
Although model (1) correctly formulates the problem of minimizing the global lead time, it is rather heavy in the computational resources it requires. In particular, too many integer variables z(c, u, m) may result, even for practical problems of limited size. To overcome this drawback, another version of problem (1) has been devised, by introducing new decision variables y(p, c) expressing the type-cluster assignment, with y(p, c) = 1 if part type p is assigned to cluster c and 0 otherwise, together with the constraint

$$x(p,c) \le N(p)\, y(p,c) \quad \forall p \in P,\ c \in C, \qquad (2)$$
stating that if x(p, c) > 0, then part type p must be assigned to cluster c. Furthermore,
the constraint (1c) is modified as

$$a(p,u,m)\, y(p,c) \le z(c,u,m) \quad \forall p \in P,\ m \in M,\ u \in U,\ c \in C. \qquad (3)$$
Note that the variables z need not be integer anymore, since the l.h.s. of (3) can assume only the values 0 or 1. Furthermore, there is no need for the big K constant.
Since the number of integer variables is now much smaller than in (1), the modified model turns out to be much more convenient than (1) as far as computation times are concerned.
3.2
BALANCING WORKLOADS
The above model can be easily modified in order to balance machine workloads. It
suffices to introduce another constraint

$$T(m,c) \ge \alpha\, T(c) \quad \forall m \in M,\ c \in C, \qquad (4)$$

expressing that, for each cluster, the workload of each machine cannot be smaller than the load of the most loaded machine multiplied by the "balance factor" α. Considering α as a new variable to be maximized, a two-objective nonlinear programming problem results.
This problem may be conveniently tackled by ranking the objectives in a lexicographic way, with the balance factor at the higher level; then α can be seen as a parameter, and several problems with the additional constraint (4) can be solved for increasing values of α, until no feasible solution is found. The last solution found then minimizes the global lead time subject to the maximum balance factor.
The programming problems of the two above models have been solved using Cplex on a DEC Alpha 3000 workstation, for several instances taken from practical cases. CPU times and workloads, observed over a large number of trials performed on real cases, show that the proposed approach is both efficient and effective.
The subproblem at the second level of the hierarchy consists in determining the most appropriate sequence in which the clusters determined at the first level have to be processed. In order to determine this sequence, some factors are considered that were neglected at the upper, more aggregated level. In particular, tool loading times appear to be the appropriate performance index for cluster sequencing.
More specifically, for any pair of clusters, the number of non-common tools (i.e., tools that are used by only one of them) is a sensible way to assess the burden of tool changing when they are processed one after the other. Minimizing the total number of tool changes leads to a Travelling Salesman Problem [4], which can be conveniently solved by approximate methods [3]. However, since the number of clusters is in practice always very low for our application, no particular importance has been attributed to this problem.
Once clusters have been sequenced, they could be "linked", in the sense of processing concurrently the ending transient of one cluster with the beginning transient of the following one. However, this subproblem also appeared to be quite irrelevant, both because transients are relatively short and because clusters are quite different, thus leaving few margins for concurrent operations.
In this section, the problem is faced of finding the most appropriate schedule for the parts assigned to each cluster, as determined at the upper level.
For each part type, the production plan specifies the operations that have to be performed on each machine, and their sequence. Due to the reduced number of parts involved, it is natural to model the sequencing problem as a job shop problem [2].
Note that also in this case, being at a lower level of our hierarchic approach, more detailed elements are considered than at upper levels: in particular, the processing sequence on machines, which was not contemplated in determining clusters, is now taken into account.
For this subproblem, local dispatching rules have been considered [5]. In particular, six of them turned out to be quite interesting:
1. Earliest Due Date (EDD);
6 CONCLUSIONS
Motivated by the problem of improving the saturation level of a flexible cell of the Divisione Motori Diesel, Trieste, a general hierarchic approach has been proposed for the production planning and scheduling problem when tools are a scarce resource.
The subproblem corresponding to the upper level of the proposed approach has been
modelled as a mixed integer programming problem, and solved using a general purpose
mathematical programming package. The other subproblems have also been modelled
and standard solution methods have been proposed for them.
Several computational experiments have been performed on real instances, showing the
effectiveness of the proposed approach.
ACKNOWLEDGEMENTS
The authors are grateful to the managers of the Divisione Motori Diesel of Fincantieri for their support of this research.
REFERENCES
1. Bitran, G.R., and Tirupati, D.: Hierarchical Production Planning, in: Graves, S.C., Rinnooy Kan, A.H.G., and Zipkin, P.H. (eds.): Handbooks in Operations Research and Management Science, Vol. 4: Logistics of Production and Inventory, North-Holland, 1993.
2. Blazewicz, J., Ecker, K., Schmidt, G., and Weglarz, J.: Scheduling in Computer and Manufacturing Systems, Springer, 1993.
3. Junger, M., Reinelt, G., and Rinaldi, G.: The Traveling Salesman Problem, in: Ball, M.O., Magnanti, T.L., Monma, C.L., and Nemhauser, G.L. (eds.): Handbooks in Operations Research and Management Science, Vol. 7: Network Models, North-Holland, 1995.
4. Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., and Shmoys, D.B.: The Traveling Salesman Problem, Wiley, 1985.
5. Panwalkar, S.S., and Iskander, W.: A survey of scheduling rules, Operations Research 25, pp. 45-61, 1977.
T. Odanaka
Hokkaido Information University, Hokkaido, Japan
T. Shohdohji and S. Kitakubo
Nippon Institute of Technology, Japan
1. INTRODUCTION
With the progress of frontier science in recent years, large-scale complicated systems have appeared, and demands for flexibility, diversity, reliability and so forth have been put forward. Conventional concentrated control systems, which bring all information to one control center, are no longer able to meet these demands: an exponential increase in the scale of the control center would be inevitable in such a massively concentrated control system, so that if an accident happened in the control and communication structure, the whole control system would break down through its stoppage or aberration. Under these circumstances, a new concept of system able to work out these problems, under the name of autonomous decentralized systems (ads), has recently attracted attention [1].
In this new concept there is no control center that integrates the whole system; each element constituting the system is dispersed, but the system can carry out its duty through the cooperation of the elements in an autonomic way. Therefore, if something goes wrong in a part of the system or its environment changes, cooperative adjustment may be realized autonomously and the system may still fulfill its whole duty, so that interest in it is growing in all quarters. However, this approach is still on the way to a unification of system theory, and a composition principle to realize such an artificial ads has not been disclosed as yet.
One of the clues to this approach is the biological systems that exist in nature, and it will be necessary to incorporate this autonomous decentralized concept into system making in each field. Although several approaches to the formulation of ads are being considered, a production system is worked out here as one of the examples.
2. AUTONOMOUS DECENTRALIZED SYSTEMS
In the formulation of autonomous decentralized systems, the following two concepts are above all important:
(i) the intelligence level of an individual;
(ii) the relationship between fields, which constitutes order and cooperation among individuals.
The intelligence that an individual has means autonomic individual behavior, together with the intelligence that makes it possible to establish order while keeping cooperation with other individuals. It is essential for an individual to have some intelligence in order to behave so as to keep order as a whole, observing its own behavior and that of others, and to act according to the situation.
Let us next consider the relationship of cooperation between the total system and the individuals. In order for the system to accomplish a task, each element or individual that is a member of it has to be integrated and behave in a reciprocal fashion. Therefore, it will be important to formulate mathematical equations representing the relationship between the order of the system as a whole and the cooperation between the individuals.
3. JIT PRODUCTION
As mentioned before, in ads each unit is required to equip itself with autonomy, and the whole factory, simultaneously, with cooperation and harmony between the units. How to formulate a system with such adjustment processes is extremely important but difficult.
The pulling system in JIT (Just-In-Time) production does not generate the flow of the entire production automatically; rather, it has a scheduling program by MRP (material requirements planning), that is, simultaneous programmed instruction from the program center to all processes, all dealers and all parts manufacturers. Under the pulling system, each process is nothing but a so-called decentralized one. On the other hand, the program center provides only the final assembling process with production program information based on the orders of the selling shops, so that the program center does not have to send program information simultaneously to the processes preceding the final one.
Figure 1. Pull-type production flow: replenishments move downstream from the supplier toward the customers (lead times of 1 wk, 1 wk and 3 wks), while order information flows upstream against the demand.
In the processes upstream of the final process, the production program is carried out independently, only through the mutual exchange of information by Kanban [2], [3]. In other words, each process acts autonomously, but the total harmony of the whole factory can be maintained automatically (see Figure 1).
On the other hand, the headquarters prepares a prearranged program designed for all the processes, all dealers and all parts manufacturers. These programmed values are leveled to obtain the programmed production value per operation day, which is given as the daily programmed value of an operative production month. These values are not orders but estimates. Actual orders are issued by Kanban.
It should be noted that this kind of system is realized by combining scheduling by MRP with an operation program based on the pulling system of the Kanban formula. Autonomy in this approach is constrained to minute adjustments within 10 per cent above and below the scheduled value of the program center. These narrow ranges are the values of autonomy.
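The pulling principle described above can be sketched as a toy simulation (illustrative only; the stage count, initial kanban stock levels and the unlimited raw-material assumption are ours, not from the text):

```python
def simulate_pull(demand, stages=3, init_stock=2):
    """Toy kanban pull line. Stage 0 is the final assembly process, which alone
    receives the production program (the demand); each upstream stage only
    replenishes what its downstream successor withdrew in the same period."""
    stock = [init_stock] * stages
    shipped = []
    for d in demand:
        pull = min(d, stock[0])       # final process serves the program
        stock[0] -= pull
        shipped.append(pull)
        for s in range(1, stages):    # kanban: replenish the downstream withdrawal
            made = min(pull, stock[s])
            stock[s] -= made
            stock[s - 1] += made
            pull = made
        stock[-1] += pull             # last stage draws on raw material (assumed unlimited)
    return shipped, stock

print(simulate_pull([1, 2, 1]))  # -> ([1, 2, 1], [2, 2, 2])
```

No stage receives a central schedule; yet every stock level returns to its kanban quantity after each period, which is the "autonomy with total harmony" property discussed above.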
goods from the acquisition of the raw materials up to the delivery of the finished products to the ultimate users, and the related counter-flows of information that both record and control the movement of materials. The system can therefore be seen as the art of managing the flow of material and goods - getting the products where they are needed, at the time they are wanted, and at reasonable cost.
An important feature of the traditional art of managing the flow of materials in an enterprise
is the division of the responsibility into several functional subsystems. A major disadvantage
of this method is that optimization efforts sometimes remain confined to the individual
subsystems, while the effects upon the enterprise as a whole are not taken into consideration.
The consequences are frequently high inventories and long throughput times.
A systems view can help us to avoid these problems: it involves the consideration of the
total flow of material from the receipt of raw materials through the manufacturing and
processing stages up to the delivery of the finished products. There are two different
concepts for managing the flow of material and goods in production: the concept of material
stock optimization and the concept of material flow optimization. With the concept of
material stock optimization one can, through optimization of inventory, attempt to
guarantee quick deliveries, circumvent the unpleasant consequences of equipment
breakdowns, ensure constant utilization of production units even when demand varies, etc.
The traditional systems of production planning and control are based on this concept.
Recently the concept of material flow optimization has gained importance. Stock keeping is
now seen as the "root of all evil": when inventories are available, the management has little
incentive to improve the operating system, e.g. to reduce set-up times or to increase process
and product quality reliability [4].
Therefore the aim is first to eliminate waste (e.g. long set-up times) in the operating system
and afterwards to install new planning concepts, e.g. JIT. Although a stockless production is
the ultimate goal, trade-offs must be taken into consideration. Finally, production logistics
has to evaluate carefully the consequences of these trade-offs, thereby increasing the
competitive advantage of an enterprise. In the following section we will discuss the
fundamental concepts of production planning and control systems.
As we have seen, production systems span all activities concerned with planning and
controlling the material and goods flows from the raw material inventory to production
and between all the production units, up to the final-product inventory. We can distinguish
two different concepts for the realization of these tasks: the concept of material stock
optimization and the concept of material flow optimization. The traditional concept of
material stock optimization tries to attain a certain customer service level by investing in
inventories. The benefit of high inventories is that goods can be quickly delivered to the
customers, provided the right products are at hand. For a given service level the stock
should be as low as possible. That is the reason for the name: material stock
optimization.
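The relation between a target service level and the minimal stock can be illustrated with the standard safety-stock formula for normally distributed demand. This formula and all the figures below are textbook assumptions for illustration, not data from the text.

```python
# Illustrative safety-stock calculation: for normally distributed demand,
# the stock needed beyond mean lead-time demand grows with the safety
# factor z, so a target service level fixes the minimal required stock.
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level: float, sigma_daily: float, lead_time_days: float) -> float:
    z = NormalDist().inv_cdf(service_level)        # safety factor for the target level
    return z * sigma_daily * sqrt(lead_time_days)  # stock beyond mean lead-time demand

# Hypothetical example: daily demand std. dev. of 20 units, 9-day lead time.
for level in (0.90, 0.95, 0.99):
    print(level, round(safety_stock(level, 20.0, 9.0), 1))
```

The sketch shows why "as low as possible for a given service level" is a genuine optimization: each increment in the service level demands a disproportionate increase in stock.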
The drawbacks of the concept are:
1) High material and product stock piles tie up a great deal of turnover capital
and negatively affect the liquidity of the enterprise.
2) Inventories increase the risk of having unsellable products, since forecasts sometimes
prove incorrect.
3) High inventories prevent the weaknesses of production processes (long set-up times,
unsynchronized production processes, imperfect flexibility in production) from
becoming apparent.
Therefore waste is not eliminated. Inventory can accordingly be judged from two viewpoints:
1) In general, the defenders of the concept of material stock optimization regard inventory
in a positive manner: it makes it possible to guarantee smooth production and quick
deliveries and to avoid the negative consequences of breakdowns, and it enables a
constant utilization of production units, etc.
2) According to the opponents of this concept, stock piles are the "root of all evil", because
they prevent the elimination of long set-up times, unsynchronized production processes,
high reject rates, etc. These opponents prefer to let the optimization of the material flow
determine the modes of action.
These involve a continuous process of improvement. Examples are:
1) Minimization of set-up times, which enables smaller lot sizes to become economical.
2) New kinds and layouts of machines which are able to produce parts or even complete
products.
3) The synchronization of capacities.
4) The optimization of variants.
5) Quality assurance, etc.
A production-synchronous procurement is also an example of this kind of rationalization.
It reduces the throughput time dramatically, eliminates breakdowns and lowers stock piles.
In summary this mode of action should make a JIT production possible; at any given time
only the materials and products which are immediately needed at certain production stages
or by customers are produced.
Now we have to pose the following questions: under which conditions is a JIT production
economical? Can inventory be justified under certain circumstances?
In the following sections, we will try to answer these questions by taking a theoretical model
as a basis.
4.2 Theoretical Foundation
In parts and assembly industries, planning and controlling the flow of materials and goods is
an especially complex task. The products consist of many parts and processing takes place in
several production stages. The jobs must share capacities. An important function of
management is therefore the coordination and control of complex activities, including the
machine sequencing problem. The well-known objectives (minimal throughput times, low
inventories, high capacity utilization) are ultimately associated with the goal of minimizing the
[Figure 2. Demand information and production orders: Customers place demand; replenishments flow downstream from the Supplier (lead times 3 wks, 1 wk, 1 wk), while order information flows upstream at each stage.]
and the production order quantity $z_n(x_r)$ at the $r$-th system in the $n$-th period is, for $n = 1, 2, \ldots$, under some assumptions,

$$z_n(x_r) = \begin{cases} \bar{x}_n - x_r, & x_r \le \bar{x}_n, \\ 0, & \text{otherwise}, \end{cases} \qquad (2)$$

where

$$A_n(x_r) = \begin{cases} c(x_r - \bar{x}_n) + \alpha \int \left\{ q_{n-1}(x_r - s) - q_{n-1}(\bar{x}_n - s) \right\} \varphi(s)\, ds, & x_r \ge \bar{x}_n, \\ 0, & x_r < \bar{x}_n. \end{cases} \qquad (3)$$

The critical value $\bar{x}_n$ is computed as the unique solution of equation (6), and similarly in the central warehouse.
The physical meaning of the critical value $\bar{x}_n$ is the critical number of Kanban. The first
assumption we shall consider involves the stocking of only one item. We shall assume that
orders are made at each of a finite number of equally spaced times and immediately fulfilled.
After the order has been made and filled, a demand is made. This demand is satisfied as far as
possible, with excess demand leading to a penalty cost. The principal consequence of the
discussion is the following important theorem:
Theorem. If $h(t)$ and $p(x)$ are convex and increasing, $c(z) = cz$, and the number of
installations is $m$, then the optimal policy is of the form described in equation (2), where
$\bar{x}$ is determined so as to satisfy equations (4), (6), and (8). The functions $h(t)$ and $p(x)$ are the
expected holding cost function and the expected shortage cost function, respectively.
For numerical results, which space does not permit us to present here, please refer to
reference [6].
Our problem now is to attempt to incorporate a set-up cost associated with the
transportation of items from installation 2 to installation 1. It is therefore appropriate to ask,
still on the intuitive level, for the part played by the assumption of no set-up cost in the
previous policy. First of all, the lack of a set-up cost was responsible for the simple
description of optimal policies at the lower level in terms of a sequence of single critical
numbers $\bar{x}_3, \bar{x}_4, \ldots$. If a set-up cost in transportation were included in the problem, the
optimal policy would no longer be of this simple form.
The optimal policies are then of the $(S, s)$ type, with a pair of numbers, $S_n$ and $s_n$, relevant
for each period.
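An (S, s) type policy of the kind just mentioned can be sketched in code as follows. The values of S, s, the starting stock and the demand stream are illustrative assumptions, not parameters derived from the model above.

```python
# Sketch of an (S, s) ordering policy: when the inventory position falls to
# or below the reorder point s, order up to S; otherwise order nothing.
# A fixed (set-up) cost per order is what makes such a two-number policy
# preferable to a single critical number.

def order_quantity(x: float, S: float, s: float) -> float:
    """Order-up-to-S rule triggered at reorder point s."""
    return S - x if x <= s else 0.0

def simulate(demands, S=100.0, s=40.0, start=100.0):
    x = start
    history = []
    for d in demands:
        x += order_quantity(x, S, s)   # replenish at the start of the period
        x -= d                         # then satisfy the period's demand
        history.append(x)
    return history

# Hypothetical five-period demand stream.
print(simulate([30, 35, 20, 50, 10]))
```

Because nothing is ordered while the stock stays above s, orders are batched, so the fixed transportation set-up cost is incurred only occasionally rather than in every period.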
5. CONCLUSION
The statements above show that the efficiency of a JIT production depends on the
dimensioning of capacities. In cases where significant seasonal variations, long lead times or
long set-up times exist, stock-keeping can be more advantageous than a JIT production.
REFERENCES
1. Tanaka, R., Shin, S., and Sebe, N.: "Controllability for Autonomous Decentralized
2. Monden, Y.: Toyota Production System, Industrial Engineering and Management Press,
1983
3. Odanaka, T.: "On the Theoretical Foundation of Kanban in Toyota Production System,"
Memoirs of Tokyo Metropolitan Institute of Technology, No. 4, (1991), 79-91
4. Odanaka, T.: "On Approximation of Multi-Echelon Inventory Theory," Memoirs of
Tokyo Metropolitan Institute of Technology, No. 3, (1989), 43-46
5. Odanaka, T.: Optimal Inventory Processes, Katakura Libri, Inc., 1986
6. Odanaka, T., Yamaguchi, K., and Masui, T.: "Multi-Echelon Inventory Production
System Solution," Computers and Industrial Engineering, 27 (1994) 1-4, 201-204
E. Røsby
Norwegian University of Science and Technology, Trondheim, Norway
K. Tønnessen
SINTEF, Production Engineering, Trondheim, Norway
machinable. Aluminium profiles are more machinable in heat-treated tempers than in softer
tempers.
Typical products are aluminium profiles used in buildings and transportation, such as door
and window frames and components for cars, trains, airplanes and boats. The production
of aluminium parts must be economical even at small batches, since there is an increasing
demand for product complexity and a trend towards one-of-a-kind production.
The aim of the paper is to present the central criteria used when small and medium sized
enterprises are faced with the problem of choosing between different concepts for
machining extruded profiles. A systematic approach is presented and advice is given, based
on experience gained during a project run in a company.
The basic problem when planning the production of aluminium parts is to choose between
the 'high technology model' and the 'small factory model'. The 'small factory model' in
this sense utilizes simpler, often manually operated machines organized in manufacturing
cells. The 'high technology model' involves highly automated machines, and two different
concepts have evolved in recent years: the bench type and the flowline. The bench type
machine tool has one spindle, and the profile is fixed on the bench during machining. The
flowline has three to four spindles, and the profile is fed progressively into the work space
of the spindles.
2. SPECIAL FEATURES OF THE PROCESS
The process of machining extruded profiles has some significant features which separate it
from other machining operations. Parts requiring surface treatment must be anodized
before machining, because anodizing full-length profiles is more economical than anodizing
single parts. An added advantage of anodizing before machining is the simpler handling of
fewer parts. Anodizing requires electric contact points, which leave marks on the profile. The
marks must be cut off, thus making it impossible to anodize parts that are already
machined. Anodizing the parts after machining would of course reduce the risk of
scratches in the surface, but it would leave marks from the points at which the part is fixed to
the electric conductors.
Machining of extruded profiles differs from machining of solid or hollow bars, which are
usually machined in turning centres. Often the total surface of the parts produced from bars
is machined, leaving a surface integrity decided by the machining process itself. This is not
the case for the machining of separate holes and slots in profiles, as the main visible
surface is produced by the extruding process or the anodizing. The milling tool enters
perpendicular to the surface, leaving a slot suitable for fixing the profile to other parts or for
mounting other functions. The angle of the spindle can also be varied relative to the
surface of the profile.
What separates these machines from conventional CNC machines and milling centres? As
aluminium requires low machining forces, there is no need for powerful spindles. Since the
tolerances are not too narrow, there is no need for extremely rigid constructions. The
number of available tools is limited to a handful. The control system is simpler, since the
machined features are limited to holes, slots and cut-offs. Despite the small diameter of the
tool, high cutting speed is obtained through high spindle speed. Since the profiles are
usually machined in a hard temper, burrs are minimized when the tool is sharp.
3. CHOICE OF PRODUCTION CONCEPT
Faced with lack of capacity or failing profitability, the machining department is forced to
reorganize using the available equipment or to provide new machines. The following
procedure is recommended:
1. Study of products and processes. All the products to be machined must be examined to
reveal information on the following points: number of parts and variants per batch and
year, throughput times, technical requirements like tolerances and surface
specifications.
Output: requirements to be set to the machining concept.
2. Evaluate alternative solutions. Several 'high technology models' and 'small factory
models' must be regarded. Costs and savings must be calculated for each alternative.
Output: chosen machining concept.
3. Elaborate the solution. The chosen concept must be planned in detail.
Output: investment plans, layout, operator tasks and surrounding organization.
Several methods have been developed for the design of manufacturing cells using group
technology [2,3]. In the next chapter, different organization models and machining
concepts are discussed. The alternatives are the ones to be evaluated in point 2 above.
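Step 2 of the recommended procedure (calculating costs and savings for each alternative) can be sketched as a simple annual-cost comparison. The cost model, the concept names used as keys, and every figure below are hypothetical placeholders, not data from the project described in the text.

```python
# Hypothetical annual-cost comparison for step 2 of the procedure: each
# machining concept is scored by annualized investment (straight-line
# depreciation) plus labour and handling cost. All figures are placeholders.

def annual_cost(investment: float, years: float, labour: float, handling: float) -> float:
    return investment / years + labour + handling

concepts = {
    "small factory (cells)":  annual_cost(investment=200_000, years=10, labour=300_000, handling=50_000),
    "bench type (1 spindle)": annual_cost(investment=900_000, years=10, labour=120_000, handling=20_000),
    "flowline (4 spindles)":  annual_cost(investment=1_500_000, years=10, labour=80_000, handling=10_000),
}
best = min(concepts, key=concepts.get)
for name, cost in concepts.items():
    print(f"{name}: {cost:,.0f}")
print("chosen concept:", best)
```

A real evaluation would of course also weigh the qualitative requirements from step 1 (tolerances, surface specifications, batch sizes), which this one-number score deliberately ignores.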
4. WORK SHOP ORGANIZATION AND SPECIAL MACHINE TOOLS
4.1 Manual machines and functional layout
Traditionally, the production processes are carried out using manually operated machines.
After the design phase, a technical drawing of the product is the basis for the planning and
manufacture of the product. In the planning phase the sequence of the machining
operations is optimized to utilize the available resources in the best way possible. During
the planning of the operations, it is necessary to get as many products as possible out of
each profile. The waste material must be reduced to a minimum to ensure optimal
economical production. Usually, the length of the profiles is 6 metres. Often the first
operation is cutting the full-length profiles to ease handling in the workshop. The
subsequent operations are milling, drilling, tapping and countersinking, and finally
packing. As the numerous machines are arranged by function, this may be called a functional
layout. It is characterized by inefficient manual handling and intermediate storage of the
semifinished parts between stations. Logistics is of course a challenge when the
number of variants is high and the batch sizes are small. A high degree of manual handling
is monotonous and less effective than automated handling.
The machine tools may also be automated. For the cutting of the profiles, special saws with
programmable feeding are common. In the case of simple profile cross-sections, the
technique of cutting several profiles at a time is efficient. For boring multiple holes,
special machines with several spindles may be used. The spindles are then manually
positioned at the start of each batch.
Each cell is dedicated to producing one type of product, identified by group technology
analysis. The order of priority is 'first in, first out'. The product is machined to a finished
state before leaving the cell. There is a limited number of machine tools inside each cell, to
keep it surveyable and easy to control. The organization of the cells is supposed to be
simpler, placing more responsibility at cell level. Splitting the production of a batch is not
allowed, as that increases the necessary control and is an unwanted effect.
Organizing the production department as manufacturing cells will increase the number of
necessary machine tools and often create excess production capacity. However, simple
machine tools are not too expensive, and the cost of capital is less than the labour cost. The
problem of intermediate stock is reduced, but not eliminated: each operation will still need
handling and waiting. The structure is, on the other hand, more flexible. In the case of
profiles that are already surface treated, manual handling can be gentle, to avoid surface
scratches. Visual inspection can be done on site. The choice of organization affects the need
for new machines. If the workshop is organized in cells, there is less need for investments in
highly automated machines. Manual mounting of the workpiece into the machines makes it
possible to machine products that are already bent to their final form.
The following two chapters focus on some technical aspects of the two different concepts
in the 'high technology model'. The latest generation of these two types of machine tools
are highly automated CNC machines, designed to finish the machining of the parts once the
profile has been mounted into the machine. Because the aluminium alloys used for extrusions
of this type have good machinability, high-speed spindles are used in both concepts.
4.3 Bench type with one spindle
One of the main types of advanced machine tool concepts is the bench type, with one
spindle on a gantry and a fixed workpiece. This concept holds the workpiece
fixed in one position until the machining of all the parts out of one profile is finished.
There is only one spindle, which can move in six axes and approach the aluminium profile
from up to six sides, by pulling the last finished workpiece forward and apart from the rest
of the profile, see Figure 1.
Advantages of this system are high machining precision and good surface quality of the
profile. The front side of the aluminium profile is the free surface and the other sides are
clamping surfaces. By fixing the workpiece this way, the risk of introducing scratches on
visible surfaces is reduced, as the profile never moves relative to the supporting surfaces. A
drawback is long transfer times, because the spindle support must move to all positions.
Mounting and clamping are critical to avoid scratches in the anodized layer. Manual
mounting ensures careful handling and control of the clamping. It is possible to use pallets
to fix complicated sections.
4.4 Flowline concept with moving workpiece
Multi-spindle machining with a moving workpiece is the other concept in the 'high
technology model'. Normally, the profiles are automatically loaded into the gripper from the
loading magazine. Up to four spindles machine four sides of the profile simultaneously.
Each spindle can move in three directions. It is not possible to machine the profiles from
the two sides at the ends of the workpiece. A gripper holds and feeds the profile to position
it near the milling tools. When the milling process is completed, the part is cut off in the
integrated saw and stored in the unloading magazine. Cutting of the profiles can be done at
arbitrary angles. The magazines are manually loaded and unloaded, see Figure 2.
Chips between the fixtures and the profile can reduce the positioning accuracy. The risk of
scratches in the anodized surfaces is also higher, because chips can attach to the
surface of the profile. During feeding and clamping this can cause damage and rifts, even
though the clamping system is made of soft material and the profile is fed on rubber
cylinders. This may be partly avoided by reducing the amount of cutting fluid and
machining as dry as possible. It is only possible to machine straight profiles; products that
are already cut and bent cannot be clamped in this machine. The concept can be integrated
with a CAD/CAM system: the CNC codes for the machining can be generated from the
CAD drawing of the product.
Figure 2. Flowline concept with moving workpiece and multiple spindles (Extech)
variations along the length of the profile. This of course affects the tolerances of the
machined product.
It is difficult to set specific requirements for an anodized or lacquered surface after
machining. The requirements should be easy to verify, in order to speed up control. Often
the demands are qualitative, like 'no visible scratches allowed', but such criteria are
diffuse and not uniquely defined. The production of machined profiles is thus kept on the
'safe side' to avoid scratches and surface damage. Only visible surfaces need to be free of
marks, but handling must be gentle, and this slows down the production rate.
Machining of materials with a low Young's modulus, like aluminium alloys, normally gives
large burrs. It is difficult to define and quantify the size of machining burrs without
thorough inspection, and at the same time it is hard to predict the resources required to
deburr the parts. In practice this is no problem, because the burrs are small. Nevertheless,
it is important to know parameters like burr root thickness and burr length, and to know
how different cutting parameters influence burr formation.
Modern machining concepts, like those described here, use a milling tool for most
machining operations, except cutting and tapping. Small-diameter holes are drilled with a
flute drill, but larger holes are milled with a circular motion after the milling tool has
penetrated the profile wall. In this way the burrs formed as the tool enters the material are
removed as the tool enlarges the hole to the specified diameter. The large burrs formed by a
drilling tool as it enters and penetrates the workpiece are thus avoided. The two
machining concepts, bench type and flowline, both use this technique and are equal
when it comes to problems regarding burr formation.
6. CONCLUSIONS AND COMMENTS
Even though automated machines are available on the market, the concept of
machining profiles using simpler machines organized in manufacturing cells is still an
alternative when investments have to be kept at a minimum. New machines should not be
introduced without evaluating the possibilities of the present equipment. Manual
handling operations can be combined with visual quality control of the surfaces, and
traditional production concepts are on the safe side when it comes to scrap caused by
surface damage.
If the conclusion is to invest in the 'high technology model', there are basically two
different solutions; in fact there are many, since the suppliers are numerous. The dilemma
is to choose a concept which does not violate the surface requirements. The bench type
machine offers the possibility of clamping profiles that are already cut and slightly bent. To
be on the 'safe side' when it comes to scratches on anodized surfaces, the bench type should
be preferred, since the profile does not move relative to the clamping equipment.
If the products are always machined starting from a full-length profile, the flowline concept
is effective. Since there is some scepticism about using this system for high-quality surfaces,
a test using the actual products should be conducted.
ACKNOWLEDGEMENT
Thanks are due to the Research Council of Norway for financial support.
REFERENCES
1. Metals Handbook, American Society for Metals, 8th Edition, 1967, vol. 3 Machining
2. Kamrani, A. K., Parsaei, H. R.: A methodology for the design of manufacturing
systems using group technology, Production Planning and Control, 1994, vol. 5, no. 5,
450-464
3. Onyeagoro, E. A.: Group technology cell design: a case study, Production Planning and
Control, 1995, vol. 6, no. 4, 365-373
M. Braglia
University of Brescia, Brescia, Italy
1. INTRODUCTION
Manufacturing facilities which make use of new technologies and philosophies, such as
group technology (GT) or flexible manufacturing systems (FMSs), require more attention
in the design activities than in the past. The development of an optimal machine layout
constitutes an important step in designing manufacturing facilities, due to the impact of the
layout on material handling cost/time, machine flexibility, and productivity of the
workstations. According to the GT philosophy, the FMS facility layout problem involves
three steps: (i) part family and machine cell formation, (ii) detailed layout within the cell,
and (iii) the arrangement of the cells. The first problem concerns the clustering of
machines, based on the part families, into manufacturing cells (or departments). The second
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
one concerns the optimal placement of the facilities within each cell. Finally, the third one is
similar to the block layout problem.
Our aim is to present a heuristic expressly developed to treat the second step in the
particular case of the linear single-row machine layout, one of the most widely implemented
schemes in FMSs (Figure 1). In this configuration, machines are arranged along a straight
line and a material handling device (MHD) moves the items from one machine to another.
The success of this configuration is due to the greater efficiency of the handling device flow
path [1]. Of course, a circular machine layout (see Figure 2), where the equipment is
placed on a circumference served by a robot positioned in the centre of the circle, can also be
treated with our heuristic. To this end, it is sufficient to rectify the path of the
handling robot.
[Figure 1. Linear single-row layout: machines 1, 2, 3, ..., n arranged along a line served by a material handling device (MHD). Figure 2. Circular layout: machines M1-M5 placed on a circumference served by a central robot.]
The single row layout problem (SRLP) is NP-complete and is therefore addressed with
heuristics. In this paper we propose a new heuristic for minimising the backtracking
movement in an FMS. (We recall that backtracking means the movement of some parts from a
machine to another one that precedes it in the sequence of placed machines.) To this end,
we use data accumulated in a flow matrix F (also referred to as a "travel chart", "cross
chart", or "from-to chart") and we consider the distance between two adjacent machine
locations equal to one. The fundamental quantity to be determined is therefore the frequency
f_ij of parts displacements between machine i (MI) and machine j (MJ) in the cell.
Rigorously, the mathematical formulation of the problem is the following [2]. Given a flow
matrix F, whose generic element f_ij is the total number of parts moves from machine i to
machine j, the model requires each of M unique machines to be assigned to one of M
locations along a linear track in such a way that the total backtracking distance is
minimized. Assume, as mentioned, that machine locations and facilities are equally spaced.
Then the general form of the model may be stated as follows:
minimize

$$c(x) = \sum_{i=1}^{M} \sum_{k=1}^{M} \sum_{j=1}^{M} \sum_{h=1}^{M} d_{ikjh}\, x_{ik}\, x_{jh}$$

subject to

$$\sum_{i=1}^{M} x_{ik} = 1, \quad k = 1, 2, \ldots, M$$
$$\sum_{k=1}^{M} x_{ik} = 1, \quad i = 1, 2, \ldots, M$$
$$x_{ik} = 0 \text{ or } 1, \quad \text{for all } i, k.$$

The decision variable $x_{ik}$ equals 1 if machine i is assigned to location k, and 0 otherwise.
The distance parameter $d_{ikjh}$ is defined by

$$d_{ikjh} = f_{ij}\, \delta_{kh}, \quad \text{where} \quad \delta_{kh} = \begin{cases} k - h, & k > h, \\ 0, & \text{otherwise}. \end{cases}$$
The constraints ensure that each machine is assigned to one location and that each location
has one machine assigned to it.
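The objective function above can be evaluated directly for any given assignment of machines to locations; a short sketch follows. The 3-machine flow matrix used in the example is an illustrative assumption, not data from the paper.

```python
# Backtracking cost of a machine arrangement under the stated model:
# flow F[i][j] from machine i to machine j contributes F[i][j] * (k - h)
# whenever machine i sits at location k and machine j at an earlier
# location h (k > h); unit spacing between adjacent locations is assumed.

def backtracking_cost(layout, F):
    """layout[k] is the machine placed at location k (0-indexed)."""
    loc = {m: k for k, m in enumerate(layout)}          # machine -> location
    return sum(F[i][j] * (loc[i] - loc[j])              # delta_kh = k - h
               for i in loc for j in loc
               if F[i][j] and loc[i] > loc[j])

# Illustrative 3-machine flow matrix (hypothetical values).
F = [[0, 5, 2],
     [1, 0, 4],
     [3, 0, 0]]
print(backtracking_cost([0, 1, 2], F))   # cost of the sequence 0, 1, 2
print(backtracking_cost([2, 0, 1], F))   # cost of an alternative sequence
```

A heuristic for the SRLP can use such a function as its evaluation step, comparing candidate sequences of placed machines.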
Backtracking adversely impacts the movement cost and productivity of a manufacturing
facility, as it causes the movement in the flow line to resemble that in a job shop
layout. Moreover, material handling devices different from those in use may be required,
and queues may appear. The study of the facility layout is important, as increased machine
flexibility and product diversification create additional complexity in scheduling and material
handling. Several heuristics have been developed to this end (e.g., [1], [4], [6], [7]). An
excellent review on this subject can be found in [3].
where F_KC_1 and F_NEW are the backtracking values provided by the KC_1 heuristic and by
the present heuristic, respectively. All the values are obtained as averages over 100
repetitions and refer to 6x5=30 classes of problems, characterized by different problem sizes
(i.e., numbers of machines) and flow densities (i.e., the percentage of zero terms in the flow
matrix). The maximum number of machines we have considered is 30, which is quite
sufficient for this kind of problem. In fact, the number of machines in a cell or a line on the
facility floor is usually small (e.g., [3]). The non-zero frequencies are randomly generated
from a uniform distribution U[1,20]. For each element f_ij a random number between 0 and
1 is generated. If this random number is greater than the flow density value, we sample the
non-zero frequency, otherwise f_ij = 0. The distance between two machines is always equal
to 1, and clearance is included in this distance.
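The test-instance generation just described can be sketched as follows; the function name, the integer sampling of U[1,20] and the seed handling are implementation assumptions on top of the stated scheme.

```python
# Sketch of the test-instance generator described in the text: with
# probability equal to the flow density (the fraction of zero entries),
# f_ij is set to 0; otherwise it is drawn uniformly from {1, ..., 20}.
import random

def random_flow_matrix(m: int, density: float, seed: int = 0):
    rng = random.Random(seed)
    F = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            if i != j and rng.random() > density:    # entry survives zeroing
                F[i][j] = rng.randint(1, 20)         # non-zero frequency, U[1, 20]
    return F

F = random_flow_matrix(5, density=0.75)
zeros = sum(v == 0 for row in F for v in row)
print(zeros, "zero entries out of", 25)
```

Averaging a heuristic's result over many such matrices, for each (size, density) pair, reproduces the kind of experiment reported here.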
As one can see, for low densities the average improvements of our method are not
negligible. Our heuristic also outperforms the KC_1 algorithm for medium/high densities
(i.e., 0.75) on problems with few machines (i.e., 5, 10 and 15). The KC_1 heuristic is
better for very high densities (i.e., 0.99), although the gap narrows as the problem size
grows.
It is worth noting that the results of this new heuristic are obtained with attractive
computation times: the response times required are less than one second, even for
problems with 30 machines (values obtained using a DIGITAL DEC 3000/600 Alpha
computer).
[Figure 3. Average improvement of the present heuristic over KC_1 as a function of the number of machines, for different flow densities.]
The results of Figure 3 are confirmed by the values of Table 1, which reports the number of
instances in which our heuristic is found to give the best solution for each class of 100
problems. As one can see, there are regions where our heuristic dominates the KC_1
heuristic. We recall that one speaks of dominance when a heuristic produces better
solutions with higher probability. (Concerning the concept of dominance, the reader may
refer to [9].)
Machines   Density 0.99   0.75      0.50      0.25      0.10
5          45 (8)         64 (12)   65 (19)   48 (41)   16 (84)
10         40 (0)         60 (0)    71 (1)    73 (1)    68 (18)
15         34 (0)         47 (0)    57 (0)    72 (0)    77 (2)
20         23 (0)         29 (0)    52 (0)    77 (0)    77 (0)
25         13 (0)         30 (0)    36 (0)    65 (0)    72 (0)
30         14 (0)         16 (0)    36 (0)    78 (0)    70 (0)

Tab. 1: Number of instances in which the new heuristic is found to give the best solution
(in brackets, the number of instances in which the heuristic is found to give the same
solution as the KC_1 procedure)
5. CONCLUSIONS AND REMARKS
We have presented an algorithm for finding the optimal placement of machines in a single
row layout, that is, the configuration which minimises the backtracking distance travelled by
a material handling device in a flexible manufacturing system.
Our algorithm is found to behave very well, with attractive computation times, and to
outperform another recent algorithm developed for the same problem when considering
low flow matrix densities, or medium/high densities (i.e., 0.75) with few machines (i.e., 5,
10 and 15). In this sense, we can consider the two heuristics as complementary to one
another in the (Size, Density) problem space (Figure 4).
Future work should address the development of new versions of the heuristic able to
minimise other cost functions, such as the weighted sum of bypassing and backtracking, or
the total movement distance (time) of the material handling device.
ACKNOWLEDGEMENTS
This work has been supported by MURST and CNR. The calculations have been made on a
DIGITAL DEC 3000/600 Alpha computer of the Department of Physics, University of
Parma.
[Fig. 4: Classes of problems in the (Size, Density) space where each of the two heuristics dominates.]
REFERENCES
1. Heragu, S.S., and Kusiak, A: Machine layout problem in flexible manufacturing systems,
Operations Research, 36 (1988), 258-268.
2. Sarker, B.R., Wilhelm, W.E., and Hogg, G.L.: Backtracking and its amoebic properties
in one-dimensional machine location problems, Journal of the Operational Research Society,
45 (1994), 1024-1039.
3. Hassan, M.M.D.: Machine layout problem in modern manufacturing facilities,
International Journal of Production Research, 32 (1994), 2559-2584.
4. Hollier, R.H.: The layout of multi-product lines, International Journal of Production
Research, 2 (1963), 47-57.
5. Groover, M.P.: Automation, Production Systems, and Computer Integrated
Manufacturing, Prentice-Hall International Inc., 1987.
6. Sarker, B.R., Han, M., Hogg, G.L., and Wilhelm, W.E.: Backtracking of jobs and
machine location problems, in Progress in Material Handling and Logistics, edited by White,
J.A., and Pence, I.W., Springer-Verlag, Berlin, 1991.
7. Kouvelis, P., and Chiang, W.: Optimal and heuristic procedures for row layout
problems in automated manufacturing systems, Working paper, The Fuqua School of
Business, Duke University, 1992.
8. Kouvelis, P., and Chiang, W.: A simulated annealing procedure for single row layout
problems in flexible manufacturing systems, International Journal of Production Research,
30 (1992), 717-732.
9. Lin, S.: Heuristic programming as an aid to network design, Networks, 5 (1975), 33-43.
10. Afentakis, P., Millen, R.A., and Solomon, M.M.: Dynamic layout strategies for flexible
manufacturing systems, International Journal of Production Research, 28 (1990), 311-323.
a_i  = average lead time of the i-th item lot produced in the system [days];
m_i  = average value of raw materials per lot for the i-th item [$/lot];
l_i  = average value of direct costs per lot for the i-th item [$/lot];
ΔT_i = average time between the arrivals of two consecutive item i lots to the system
[days/lot].
In fact, if the value is supposed to be added to item lots linearly with time as shown in fig. 1,
then [2]:

    WIP_i = (a_i / ΔT_i) (m_i + l_i / 2)   [$]   (1)
The value of WIP_i is therefore represented by the area of the trapezium in fig. 1. The
evaluation formula (1) shows how WIP_i [$] is the product of the average number (Q_i) of lots
in the system provided by Little's law (Q_i = a_i/ΔT_i) and the average value of each lot. On
average 50% of the final value of each lot is already present and 50% remains to be added.
The most difficult parameter to determine in equation (1) is the average lead time. Hence the
reason why lead time evaluation formulas for job-shop and flexible manufacturing systems
are proposed in the following sections.
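Formula (1) is easy to check numerically. The sketch below is ours, not from the paper (the function name is invented); the numbers reproduce the job-shop column of table 5, where a = 28 days, m = 54 $/lot, l = 36 $/lot and ΔT = 28 days/lot give a WIP of 72 $:

```python
# Sketch of equation (1): WIP_i = (a_i / dT_i) * (m_i + l_i / 2).

def wip(a, m, l, dT):
    """Average work-in-process value [$] for one item.

    a  : average lead time of a lot [days]
    m  : average raw-material value per lot [$/lot]
    l  : average direct costs per lot [$/lot]
    dT : average time between lot arrivals [days/lot]
    """
    q = a / dT              # average number of lots in the system (Little's law)
    return q * (m + l / 2)  # each lot carries m plus, on average, half of l

print(wip(28, 54, 36, 28))  # -> 72.0
```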
[Figure: the value of a lot grows linearly from m_i up to m_i + l_i [$/lot] over the lead
time, with Q_i = a_i/ΔT_i lots in the system.]
Fig. 1 - WIP when value is added to item lots linearly with time
3. LEAD TIME EVALUATION IN A JOB-SHOP SYSTEM
It would be useful to express lead time as a function of some known parameters relative to
process cycles and job-shop characteristics. The authors propose the evaluation formula
explained in section 3.1 and provide empirical test results in section 3.2.
3.1 MODELLING
When considering the i-th item belonging to the j-th product family manufactured in the
system:
    a_i = n_i / X_j   (2)

where:
n_i = number of operations in the cycle of the i-th item;
X_j = average number of operations completed per day on each lot of the j-th family
[operations/(lot·day)].
Parameter n_i can be easily deduced from each item routing, while particular attention must be
paid to the number of operations completed per day on each lot of the j-th family (X_j).
The first to mention a similar parameter was Corke [3], who discussed the possible utility
of the average number of operations per lot and day in lead time evaluation for job-shop
systems. He considered this parameter a characteristic of the whole job-shop, without
distinguishing product families, and ascribed it values ranging between 0.2 and 0.4
operations/(lot·day). Also on the grounds of empirical evidence (as shown in the following
section), it appears that no significant errors are made when a single value is ascribed to
parameter X, provided that the items are characterized by a high degree of homogeneity in
their manufacturing cycles. When, on the other hand, different families are processed in the
job-shop system, X must be related both to the manufacturing system management and to the
product characteristics.
In fact, from relation (2) it follows that:

    (3)

where:
t_ih = duration of the h-th operation of the item i cycle, including run time and set-up
times.
Parameter X can therefore be related to the average duration of a cycle operation. However,
the machines in the system differ in processing times, work loads and therefore queue
lengths. Thus, lead time depends on which machines are used to process the i-th item. Since
items belonging to the same family have similar manufacturing cycles, the error introduced
by considering the average duration of an operation for a whole family is less than that given
by calculating the average duration of an operation for all the products the system produces.
Hence the reason why the authors relate X to item families and not to the whole system as
Corke proposes.
From relation (2) it follows that, for a given item family, lead times in a job-shop system
differ approximately only in the number of cycle operations, i.e. the number of displacements
in the system. Lot size is not involved in (2): most of the time an item spends in a job-shop
system is wasted waiting in queue to be processed, independently of its size. Moreover, items
belonging to the same family are usually manufactured in lots of similar size. Omitting the
variability of lead time with lot size does not therefore introduce a considerable
approximation error.
3.2 EMPIRICAL TESTING
The authors tested relation (2) in a local firm, which produces centrifugal ventilators and
where component parts manufacturing is organized as a job-shop system.
Data on several families of items were collected over a three-month period. To obtain the
value of X_j for each j-th family, the number of operations performed for the family and
the number of its lots present in the system were measured every day. X_j was then calculated
as the mean of the values obtained each day by dividing the number of operations performed by
the number of lots in the system.
The authors furthermore measured the average lead time for each item produced in the
observed period and deduced the number of their cycle operations from their routings.
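The measurement procedure can be sketched as follows; the daily counts below are invented for illustration, while the averaging rule and relation (2) come from the text:

```python
# X_j is the mean over the observation days of
# (operations performed for the family) / (lots of the family in the system);
# lead time is then estimated with relation (2): a_i = n_i / X_j.

daily_ops  = [12, 10, 11, 9]   # operations performed each day (hypothetical)
daily_lots = [35, 30, 33, 28]  # lots of the family in the system each day

X_j = sum(o / q for o, q in zip(daily_ops, daily_lots)) / len(daily_ops)

def estimated_lead_time(n_ops, X):
    """Relation (2): lead time [days] = cycle operations / X."""
    return n_ops / X

# An item with 3 cycle operations, evaluated with X_j = 0.34 as in Table 1:
print(round(estimated_lead_time(3, 0.34), 1))  # -> 8.8
```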
The results of the four most representative families are shown in the tables below, which
also provide lead time evaluation when a single value of X (X=0.29) is calculated for the
whole system.
Part       Number of    Actual lead   Estimated lead time   Error    Estimated lead time   Error
number     operations   time [days]   [days], Xj = 0.34              [days], X = 0.29
10100110   3            9.9           8.8                   12.5%    10.3                  3.9%
11000110   3            9.85          8.8                   11.9%    10.3                  4.4%
12800110   3            6.6           8.8                   25.0%    10.3                  35.9%
17130110   3            10.1          8.8                   14.8%    10.3                  1.9%
20100110   2            5.2           5.9                   11.9%    6.9                   24.6%
22200110   4            10.3          11.8                  12.7%    13.8                  25.4%
23200110   4            9.9           11.8                  16.1%    13.8                  28.3%
25000110   2            6.1           5.9                   3.4%     6.9                   11.6%
26300110   2            5.75          5.9                   2.5%     6.9                   16.7%
29000110   2            5.3           5.9                   10.2%    6.9                   23.2%
average    2.8          7.9           8.24                  12.1%    9.64                  17.59%

Table 1 - Actual and estimated lead times for side panel family
Part       Number of    Actual lead   Estimated lead time   Error    Estimated lead time   Error
number     operations   time [days]   [days], Xj = 0.29              [days], X = 0.29
11600130   1            3.2           3.4                   5.9%     3.4                   5.9%
11800130   1            3.3           3.4                   2.9%     3.4                   2.9%
12000130   1            3.1           3.4                   8.8%     3.4                   8.8%
12200130   1            3.8           3.4                   11.8%    3.4                   11.8%
12800130   1            3.8           3.4                   11.8%    3.4                   11.8%
13100130   1            3.2           3.4                   5.9%     3.4                   5.9%
14000130   1            3.4           3.4                   0.0%     3.4                   0.0%
14500130   1            3.45          3.4                   1.5%     3.4                   1.5%
15000130   1            3.6           3.4                   5.9%     3.4                   5.9%
16300130   1            2.5           3.4                   26.5%    3.4                   26.5%
average    1            3.33          3.4                   8.1%     3.4                   8.1%

Table 2 - Actual and estimated lead times for back panel family
Part       Number of    Actual lead   Estimated lead time   Error    Estimated lead time   Error
number     operations   time [days]   [days], Xj = 0.21              [days], X = 0.29
21000200   4            18.8          19.0                  1.1%     13.8                  36.2%
23190200   3            12.1          14.3                  15.4%    10.3                  17.5%
24000200   6            21.6          28.6                  24.5%    20.7                  4.3%
24090200   3            12.2          14.3                  14.7%    10.3                  18.4%
25090200   3            12.1          14.3                  15.4%    10.3                  17.5%
26300200   5            23.4          23.8                  1.7%     17.2                  36.0%
27100200   4            20.9          19.0                  10.0%    13.8                  51.4%
28000200   4            18.4          19.0                  3.2%     13.8                  33.3%
29000200   4            18.9          19.0                  0.5%     13.8                  37.0%
29092200   2            11.8          9.5                   24.2%    6.9                   71.0%
average    3.8          17.02         18.08                 11.07%   13.09                 32.26%

Table 3 - Actual and estimated lead times
Part       Number of    Actual lead   Estimated lead time   Error    Estimated lead time   Error
number     operations   time [days]   [days], Xj = 0.82              [days], X = 0.29
11000260   6            6.1           7.3                   16.4%    20.7                  70.5%
12000260   4            5.6           4.9                   14.3%    13.8                  59.4%
12200260   4            5.6           4.9                   14.3%    13.8                  59.4%
13100260   6            6.0           4.9                   22.4%    20.7                  71.0%
18000260   6            6.4           7.3                   12.3%    20.7                  69.1%
19000260   6            6.0           7.3                   17.8%    20.7                  71.0%
42091260   3            3.65          3.7                   1.4%     10.3                  64.6%
43600260   3            3.4           3.7                   8.1%     10.3                  67.0%
55094260   4            4.0           4.9                   18.4%    13.8                  71.0%
57194260   4            4.2           4.9                   14.3%    13.8                  69.6%
average    4.6          5.09          5.38                  13.97%   15.86                 67.26%

Table 4 - Actual and estimated lead times
Like X, SF too must be related to each product family processed by the system, since a
different number of fixtures and tools can be provided to each family.
Therefore lead time for a flexible manufacturing system can be evaluated in the following
way:

    a_i = r_i / SF_j   (4)

where:
a_i  = average lead time of the i-th item lot [days];
r_i  = machining time per lot of the i-th item (r_i = S_i c_i) [hours/lot];
SF_j = "scheduling factor" of the j-th family [hours/(lot·day)].
A formal comparison between lead time in a job-shop (JS) and in a flexible manufacturing
system (FMS) can thus be made.
There appears to be a correspondence between the average number (Xj) of operations per lot
performed daily for a given family in a job-shop and the "scheduling factor" (SFj) for a
FMS.
Since in a job-shop environment most of the time is spent waiting for the items to be
processed by each machine of the routing, lead time is substantially determined by the number of operations to be performed, i.e. the number of displacements from one centre to
another, and can be considered lot size independent (see equation 2). On the other hand, in a
flexible manufacturing system inter-operation waiting times are not considerable and lead
time appears to be strictly related to lot size (see equation 4). Hence, for a given family,
lead time is a fixed parameter in a job-shop system, but a variable one in a flexible
manufacturing system.
5. PASSING FROM A JOB-SHOP TO A FLEXIBLE MANUFACTURING SYSTEM
By analyzing relationships (1), (2) and (4) it is possible to identify which factors concur in
reducing work-in-process when passing from a job-shop to a flexible manufacturing system
with identical capacity.
Considering the item data (m, l, ΔT) unchanged and omitting, for simplicity, the subscript
related to the i-th item being analyzed, it can be written:
    WIP_FMS / WIP_JS = [a_FMS (m + l/2) / ΔT] / [a_JS (m + l/2) / ΔT]
                     = a_FMS / a_JS = (r / SF_j) (X_j / n)   (5)
Hence, the reduction of WIP is associated with the shorter lead time obtained when passing
from one system to the other.
A first reduction can be ascribed to the different ways of processing in the two systems: a
job-shop has fixed lead times, while a flexible manufacturing system has variable ones, with
no queue time between operations (see equation 5). A numerical example is given in table 5,
which shows how the lead time can be shortened from 28 days in the job-shop to 9 days in
the flexible manufacturing system (see columns 2 and 3).
A further reduction can be obtained by reducing lot size, thanks to the lot-size-dependent
nature of lead time in a flexible manufacturing system. If k is the "size-reducing factor",
then lead time can be shortened by the same factor until a single-unit lot is reached (see
columns 5 and 6 of table 5).
    r_JS / r_FMS = k   (6)
Table 5

                                    Job-Shop   FMS,           FMS, with      FMS, with
                                               no lot-size    lot-size       lot-size
                                               reduction      reduction      reduction
                                               (k = 1)        (k = 9)        (k = 99)
Independent variables
S_i = lot size [units]              99         99             99/9 = 11      99/99 = 1
c_i [hours/unit]                    0.63       0.63           0.63           0.63
n_i [operations/lot]                7          -              -              -
X [operations/(lot·day)]            0.25       -              -              -
SF_j [hours/(lot·day)]              -          7              7              7
m_i [$/lot]                         54         54             54/9           54/99
l_i [$/lot]                         36         36             36/9           36/99
ΔT_i [days/lot]                     28         28             28/9           28/99
Dependent variables
r_i = S_i c_i [hours/lot]           63         63             63/9           63/99 = 0.63
capacity = r_i/ΔT_i [hours/day]     2.25       2.25           2.25           2.25
a_i = n_i/X_j [days]                28         -              -              -
a_i = r_i/SF_j [days]               -          9              9/9 = 1        9/99 = 0.091
Q_i = a_i/ΔT_i [lots]               1          0.32           0.32           0.32
WIP_i = a_i(m_i + l_i/2)/ΔT_i [$]   72         23.1           2.6            0.23
    a_FMS / a_JS = [1 / (k SF_j)] (r_JS X_j / n)   (7)
The terms in brackets represent characteristics of the job-shop system being abandoned,
while k and SF describe the flexible manufacturing system to which one passes and
represent the variables granting a WIP reduction. The scheduling factor, in fact, takes into
account the different nature of a flexible manufacturing system as compared to a job-shop
one, which is instead characterized by the average number of operations per lot and day (X).
The size-reducing factor describes the possibility of reducing lot-size thanks to the lower setup times in FMS.
From equation (7) it can be deduced that minimum lead time, and consequently minimum
level of work-in-process, is related to the maximum values of the size-reducing factor (k)
and the scheduling factor (SF).
The maximum value which can in theory be given to k is equal to the lot size (in table 5,
k = 99); the actual value assumed by this parameter does not, however, depend on FMS
characteristics but rather on the requirements of the upstream and downstream stages.
On the other hand, the value of the scheduling factor is related to the amount of investment
in fixtures and tools. If the flexible manufacturing system is provided with the maximum
degree of flexibility, then SF is equal to the work capacity of the system.
When trying to limit the investments needed for the FMS, the value of SF that is associated
with the same lead time of the job-shop being abandoned must be considered as a lower
bound.
It is, therefore, possible to determine the minimum acceptable value for SF as follows:
    a_FMS ≤ a_JS  ⇒  (r / k) / SF_j ≤ n / X_j  ⇒  SF_j,min = r X_j / (k n)   (8)
An investment in fixtures and tools leading to a scheduling factor lower than SF_min provides
the flexible manufacturing system with a lead time worse than that of the job-shop.
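As a numerical cross-check of relations (2), (4) and (8), the sketch below uses the table 5 figures (r = 63 hours/lot, X = 0.25, the 28-day job-shop lead time and the 9-day FMS lead time); n = 7 and SF = 7 are back-calculated from those values, and the function names are ours:

```python
# Sketch of relations (2), (4) and (8):
# job-shop: n = cycle operations, X = operations/(lot*day)
# FMS:      r = machining hours per lot, SF = scheduling factor,
#           k = size-reducing factor.

def a_js(n, X):          # relation (2): job-shop lead time [days]
    return n / X

def a_fms(r, SF, k=1):   # relation (4) with lot size reduced by k
    return (r / k) / SF

def sf_min(r, X, k, n):  # relation (8): minimum acceptable scheduling factor
    return r * X / (k * n)

r, SF = 63.0, 7.0        # 63 hours/lot; SF back-calculated as 63/9
n, X = 7, 0.25           # gives the 28-day job-shop lead time of table 5

print(a_js(n, X))        # -> 28.0 (days in the job-shop)
print(a_fms(r, SF))      # -> 9.0  (days in the FMS, k = 1)
print(a_fms(r, SF, k=9)) # -> 1.0  (day, after lot-size reduction)
# With SF = SF_min the FMS only matches the job-shop lead time:
print(a_fms(r, sf_min(r, X, k=1, n=n)))  # -> 28.0
```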
REFERENCES
1. Little, J.D.C.: A proof for the queuing formula: L = λW, Operations Research, 9 (1961),
383-387.
R. Baldazo et al.
1. INTRODUCTION
[Figure: off-line integration allows testing before the physical application, with
programming and production carried out simultaneously.]
In the automobile industry, stopping production involves big economic losses. Design
changes are common, and daily production must be quite high in order for installations to
be as profitable as possible, and in order to obtain a competitive price for the vehicle.
Simulation and Off-Line programming have an impact on all these areas, taking production
closer to the final objective, that is, obtaining higher benefits.
The advantages that this kind of technology offers are summarized below:
Time improvement:
Installation designs and modifications are validated before their physical
application.
Stopping at the assembly line during the robot programming is avoided.
Robot programs are optimized, improving thus the cycle times.
Quality improvement in:
Designs.
Job positions.
Programs.
Information organization.
Final product.
Cost improvement, which, in turn, involves a benefit increase.
However, as we will show later, this is not as easy as it looks, and introducing this
technology involves an important change in mentality, not only as far as robot
programming (which requires more extensive training) is concerned, but also as far as the
whole production environment (which will have to adjust to the new production
philosophy) is concerned.
2. STAGES IN SIMULATION AND OFF-LINE PROGRAMMING
When applying Simulation and Off-Line programming techniques, and assuming that it is
first necessary to compile all kinds of information (both geometrical and distributional
information, and information about the process being performed at the working cell), a
number of steps should be followed:
[Figure: Phase I of the methodology, leading to the final layout and on to Phase II.]
Graphic representation of the cell components, which will be designed within the
system by importing geometrical data from other CAD systems, or by using elements
available in the system libraries.
Plant layout definition.
Validation of this design, checking the robots' accessibility, tool-kits, tweezers, grips,
etc.
Once the final layout has been obtained, a second stage would start, comprising:
The definition of the trajectories to be followed by all movable elements.
Simulation.
Optimization: detection of collisions among the elements, mistakes in the design of
tool-kits, grips, tweezers, etc., positioning mistakes, and cycle time analysis.
[Figure: the layout and the process parameters feed the path definition, followed by
simulation and off-line programming.]
Once the trajectories are defined, and once the correct functioning of the system has been
verified, the robot programs will be obtained, and thus the Off-Line Programming stage
would start:
Post-processing: translation of the program from the language which is characteristic of
the Simulation system into the language which characterizes the robot controller.
Communication with the robot.
Program adjustment (calibration), given the differences between the real world and the
Simulation.
From this point onwards, once the program is already installed within the robot's controller,
production can start.
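The post-processing step can be pictured with a toy translator; both the neutral instruction set and the controller dialect below are invented for illustration, since each real controller family has its own proprietary language:

```python
# Toy post-processor: translate a neutral simulation-language program into
# a fictitious robot-controller dialect. Real systems ship one such
# translator per controller family.

NEUTRAL_PROGRAM = [
    ("MOVE", 120.0, 45.5, 300.0),   # linear move to x, y, z [mm]
    ("WELD", 2),                    # close the welding tweezers, 2 spots
    ("MOVE", 0.0, 0.0, 500.0),
]

def post_process(program):
    """Emit one controller instruction per neutral instruction."""
    out = []
    for instr in program:
        if instr[0] == "MOVE":
            _, x, y, z = instr
            out.append(f"LIN P[{x:.1f},{y:.1f},{z:.1f}]")
        elif instr[0] == "WELD":
            out.append(f"SPOT N={instr[1]}")
        else:
            raise ValueError(f"untranslatable instruction: {instr[0]}")
    return out

for line in post_process(NEUTRAL_PROGRAM):
    print(line)
```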
Very often, data are available in different forms: they may reside in different CAD
systems (2D or 3D), or on paper. When the information is available in a CAD system, the
problems that we find are the typical ones concerning information transfer among
different CAD systems (which involves the need for translators). When the information is
available only as paper drawings, the design will have to be carried out, which involves
an increase in both time and cost.
Given that Simulation and Off-Line systems have a module for component design
(a CAD module) available, the solution to this information-transfer problem will be
provided by the evolution of these programs, which will progressively incorporate
translators in order to facilitate the direct transfer of information coming from other
systems.
Information duplication
On the other hand, we found that the time devoted to modeling was longer than intended,
due to the problem mentioned in the preceding section, and also due to the repetition of
parts of the geometry which were already present in other components. This was difficult
to control, given the high number of components and component elements which make up
a cell.
Creating a component data base, integrated within the system, has been posited as a
solution to get rid of this lack of co-ordination which affects geometric data. The goal is to
facilitate the cell design, taking advantage of prior designs, with similar sub components,
by means of a simple codification.
This data base will contain those elements which may have a repeated role in the modeling
of a robotized system (grips, tool-kits, engines, etc.), accumulating the effort made for prior
developments, in order to minimize component modeling costs.
Just as robot libraries exist, the future tendency would be for suppliers of tool-kits, grips,
tweezers, etc., to provide this kind of libraries.
3.3. Simulation
Among the problems which are Simulation-specific, the presence of elements which do
not appear in the Simulation (such as electrical or refrigeration conduits, both in the
robot and in the welding tweezers) should be considered. These elements cannot be
modeled, since they are not rigid; as they are interposed between the robot access points
and the welding tweezers, trajectory modification is necessary.
These systems do not take into consideration the loading capacity of robots during the
Simulation either; that is, if a given robot is not able to support the weight of some welding
tweezers, this will not be apparent in the Simulation. Thus, a parallel study should be
carried out in order to ensure it.
3.4. Programming
Both robots and Simulation systems have their own programming languages. All this
language variety makes numerous specific translators necessary. Depending on the kind of
robot controller used in the system, a bi-directional relation should be set up between the
programs obtained in the neutral language of the Simulation and Off-Line Programming
system and the specific languages of the robots. Such translators do not yet reach the
necessary quality when translations are performed directly from the neutral Simulation
language of the system to that of the robot.
5. REFERENCES
1. Baldazo R., Alvarez M.L., Burgos A.: Simulación y Programación Off-Line de Líneas
Robotizadas; XII Congresso Brasileiro e II Congresso Iberoamericano de Engenharia
Mecânica, 1995.
2. Sorrenti P., May J.P.: Simulación y Programación de Sistemas Robotizados de
Soldadura; AI/RR n. 66, pp. 60-66, 1992.
3. Owens, J.: Microcomputer-Based Industrial Robot Simulator and Off-Line
Programming System; Robotics Today, Vol. 8, no. 2, 1995.
4. Rivas, J.: Una Herramienta que puede Ahorrar Tiempo y Dinero; Especial CIM, n. 243,
pp. 75-78, 1994.
5. Megahed, Said M.: Principles of Robot Modelling and Simulation; John Wiley &
Sons, 1993.
6. Readman, Mark C.: Flexible Joint Robots; Prentice Hall, 1991.
7. ABB Industria, S.A.: Simulación Gráfica de Células Robotizadas; Revista Española de
Electrónica, pp. 38-40, 1991.
8. Williams: Manufacturing Systems; Halsted Press, John Wiley & Sons, 1988.
C. Pascolo
CO.R.EL. Italiana, Udine, Italy
P. Pascolo
University of Udine, Udine, Italy
KEYWORDS:
ABSTRACT: This report attempts to identify numerical models suitable for describing the
technological networks underpinning industrial plants and for realizing production line monitoring
systems.
Two previous reports submitted to the proceedings of the 1993 AMST Conference highlighted the
shortcomings of models available on the market and at the same time proposed an innovative method
based on the GEER model.
This study presents the results obtained from applications of the GEER model over two years and
compares them with other systems currently available on the market to assess the model's efficiency.
Graphic representations of sites and plants are usually based on a layer approach. This involves
artificial linking of databases to the graphic layer in order to manage alphanumeric data that are at
least as important as the graphic data themselves.
Technical managers have pointed out that in these systems the graphic data, which are of course
necessary to the operator, provide a visual representation only and need to be supported by an
alphanumeric component. Moreover, the representation-layer technique has caused considerable
difficulties both in creating multi-user systems and in safeguarding congruency with the data and
graphic layers.
Many different types of data are processed by technical management, including site representations,
alphanumeric data, scale drawings and drafts as well as information useful for the maintenance,
upgrading and links with internal networks.
During the course of our research work, it became necessary to define a specific model. Other
models using representation layers linked to relational databases did not meet the needs that had
emerged. A new model, called GEER, was therefore developed along with a compatible
user-interface language.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
These ideas led us toward the definition of a system that would be able to describe and manage CIM
plants and would perform appropriate actions when necessary.
One relevant aspect of the management of industrial plants is related to the ability to properly
describe both the layout of the plant and the interactions that take place between its components.
Management activities require a uniform approach to this information, which is denied by traditional
database systems.
Information systems with the ability to adequately define, manage, store and retrieve structured,
georeferenced multimedia information will provide technical managers with the capability they
require.
Such systems will be able to manage CIM plants and to perform appropriate actions in response to
sensory stimuli.
1. INTRODUCTION
Application programs currently available can organize the descriptive graphic database in a
number of ways: layer-oriented, object-oriented or structured object-oriented.
This distinction is more than merely methodological. As will be shown, the procedures used
to organize graphic databases influence the implementation phase to such a degree that in
many cases it becomes impossible to achieve the management objective identified.
The foregoing assertion will be borne out by the following examination of the differences in
layer-oriented, object-oriented and structured object-oriented data structures.
[Figure 1: a machine layer and a sensor layer are superimposed to form the on-screen
representation.]
2. COMPARISON OF MODELS
The most significant examples for model comparison are to be found in information-technology
solutions adopted to implement descriptive information systems for industrial
plants and their technological and installation network systems.
In layer-oriented systems, each layer contains an undifferentiated set of bitmap-coded
graphic data (lines, points, etc.) representing all items belonging to a single class (machine
layer, electric cable layer, etc.; see fig. 1).
Where the technical office wishes to interact with a single object, it will have to retrieve all
objects belonging to the same class, or in other words the entire layer, with consequent
penalties of various kinds. This is because in layer-oriented systems, each layer contains an
undifferentiated set of bitmap-coded graphic data representing all the items that belong to
the same class. In contrast in object-oriented systems, each item is distinct within the
database and may therefore be treated independently.
The substantial difference between these two approaches influences the response speed of
the various application programs (see table 1). The excessive time complexity of
layer-oriented search algorithms is a consequence of their having to perform readings on bulk
memory, whereas the more sophisticated search algorithms for structured object-oriented
management perform readings mainly on random-access memory, which has
access times that are five orders of magnitude faster.
In addition, it is also possible with object-orientation to set up functional links of belonging,
existence and location between objects that are functionally dependent on each other (see
fig. 2).
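The contrast can be sketched with a toy data layout (ours, not the GEER implementation): the layer is one undifferentiated list that must always be scanned whole, while the object store keys each item individually:

```python
# Toy contrast between the two organizations (not the GEER model itself).

# Layer-oriented: one undifferentiated list per class; touching one cable
# means scanning (and rewriting) the whole layer.
cable_layer = [("line", 0, 0, 5, 0), ("line", 5, 0, 5, 3), ("line", 0, 1, 9, 1)]

def layer_find(layer, predicate):
    return [item for item in layer if predicate(item)]  # full scan, always

# Object-oriented: each item is distinct and addressable on its own,
# with its alphanumeric data attached to the same record.
cables = {
    "C-001": {"geometry": [("line", 0, 0, 5, 0)], "voltage": 220},
    "C-002": {"geometry": [("line", 5, 0, 5, 3)], "voltage": 380},
}

print(len(layer_find(cable_layer, lambda it: it[1] == 5)))  # whole layer scanned
print(cables["C-002"]["voltage"])  # direct access to one object -> 380
```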
[Figure 2: functional links of belonging, existence and location between objects.]
The next figure highlights the differences between layer-oriented (fig. 3b) and
object-oriented (fig. 3c) graphic data acquisition and memorization procedures.
Figure 3b shows that it is necessary to manipulate the entire file in order to represent and/or
update one item. In contrast, in figure 3c only the second record is retrieved (see also table 1,
response times).
In layer-oriented systems, matters are complicated each time the graphic part interacts with
management data.
a) On-screen representation.
b) Cable-route layer file: distribution of lines inside the file as accepted in a
layer-oriented system.
c) Cable-route object files: each item is a distinct record.
Figure 4. In layer-oriented systems, the geocode is the link between the graphic datum and
the alphanumeric datum.
Figure 5. In object-oriented systems, the graphic datum and the alphanumeric datum
are memorized in different files.
As may be seen from fig. 4, the links between the alphanumeric data (machines and cables)
and the graphic files are the geocodes. Geocodes have no structural links with the graphic
part because individual graphic items are not physically distinct from each other (see
fig. 3b).
The above figures also show that it is not possible to create a multi-user facility. Two or
more workstations cannot operate simultaneously on items belonging to the same class
because these are not distinguished within the database (see fig. 3b).
It should also be noted that this model poses severe problems regarding database reliability
(consistency and congruence). It is notoriously difficult to maintain consistency over
separate sections of databases where each section refers to the same items memorized,
however, on different archive structures (see fig. 4).
458
r?!\,
\t!5)
\
'\ ,,
\
\\
',LY
~ \~
/\7 /// //," /
------~~~-----_-_--_-____4/~/~/
~.....
..................
.......
.....
--- --
-..-..
......
..........
.............
.....
Name
..... .......
Figure 7. In structured object-oriented systems, data of all types regarding one object
are memorized in a single variable-length logical record.
[Figure: updates per year versus scale for the structured organization (about 1000 control
points) and the layer organization (about 100 control points).]
Table 1

                              LAYERS      OBJECTS       STRUCTURED OBJECTS
Response time (no index)      3'11"       23"           15"
Response time (with index)    54"         8"            6"
Multi-user facility           NO          NOT ALWAYS    YES
Reliability                   CRITICAL    CRITICAL      YES
REFERENCES
1. Pascolo C.: Costruzione di un sistema informativo per la gestione del territorio
("Construction of an information system for territory management"), from the proceedings
of the "Seconda Conferenza Nazionale Informatica", CISPEL, Santa Flavia (Palermo), 1990.
2. Pascolo C., Pascolo P.: The graphic extended entity relationship data model and its
applications in plants design and management, from the proceedings of "The Third
International Conference on Advanced Manufacturing Systems and Technology", CISM,
Udine, 1993.
3. Pascolo C., Pascolo P., Casco G., Nalato N.: Plant management applications of
CORAD/GEER, from the proceedings of "The Third International Conference on Advanced
Manufacturing Systems and Technology", CISM, Udine, 1993.
4. Pascolo C., Pascolo P.: Seeking quality in G.I.S., from the proceedings of "The Fifth
European Conference on Geographical Information Systems", EGIS, Paris, 1994.
OCTREE MODELLING
IN AUTOMATED ASSEMBLY PLANNING
available. The most widespread absolute model is the CAD model of the product, directly
obtained during the design stage.
In the relational model only positional information is available. Typical positional
information is the contact or alignment between two elements along an axis of the
coordinate system. Relational models are often simpler than absolute ones: they usually
have to store less information. However, using these models the product is not completely
defined and only simple analyses are possible. Absolute models are more complex, but
with this approach a more accurate simulation of the disassembly process can be performed
(for instance, the disassembly trajectory can be studied).
This paper proposes an absolute modelling method based on octree encoding [2,3,4,5].
Using this method, together with a simple relational model, the usual performance of an
automated assembly planning system can be improved.
2. OCTREE MODELLING: BASIC CONCEPTS
Octree modelling is based on a volume digitising process. The simplest method to
digitise a volume is to divide the space into a grid of equal blocks (voxels), each with a
specific code. In this domain, the object is represented by the codes of all blocks included
in its volume. This method is really simple, but it has a great disadvantage: the number of
blocks increases strongly with the volume of the object and with the resolution of the
model. The grid of blocks becomes denser as the accuracy of the desired details in the
model increases, resulting in a time-consuming procedure that requires a large amount of
memory to store the model.
To reduce this problem the octree representation method can be used.
An octree model is generated by an automatic recursive process: the process starts by
considering a cubic work space including the object to be modelled. This work space is
subdivided into eight equal blocks (octants). The intersection between each octant and the
object is analysed. Three conditions are possible:
1. no intersection exists between the octant and the object (white octant);
2. an intersection exists between the octant and the object, but the octant is not completely
included in the object (grey octant);
3. the octant is completely included in the object (black octant).
In case 1 the octant is discarded. In case 2 the octant is again subdivided into eight sub-octants and the process continues. In case 3 the octant is added to the octree model without
further subdivision.
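The recursive white/grey/black classification above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the object is assumed to be given as a point-inclusion predicate, and octants are classified by crude point sampling (a real system would intersect the octants with the CAD model, as described later in section 3.1). All names are hypothetical.

```python
# Minimal octree construction sketch (illustrative, not the authors' code).
# The object is a point-inclusion predicate; octants are classified as
# white (discarded), black (stored) or grey (subdivided further).

def classify(inside, x, y, z, size, samples=4):
    """Classify a cubic octant by sampling a grid of points inside it."""
    step = size / samples
    hits = total = 0
    for i in range(samples):
        for j in range(samples):
            for k in range(samples):
                total += 1
                hits += inside(x + (i + 0.5) * step,
                               y + (j + 0.5) * step,
                               z + (k + 0.5) * step)
    if hits == 0:
        return "white"
    return "black" if hits == total else "grey"

def build_octree(inside, x, y, z, size, min_size, model):
    """Recursively subdivide the cube at (x, y, z); collect black octants."""
    kind = classify(inside, x, y, z, size)
    if kind == "black":
        model.append((x, y, z, size))       # case 3: store the whole octant
    elif kind == "grey" and size > min_size:
        half = size / 2.0                   # case 2: eight sub-octants
        for dx in (0, half):
            for dy in (0, half):
                for dz in (0, half):
                    build_octree(inside, x + dx, y + dy, z + dz,
                                 half, min_size, model)
    # Case 1 (white), and grey octants at the minimum size, are discarded,
    # so the model stays totally included in the object volume.

# Example: model a sphere of radius 40 centred in a 128-unit work space.
sphere = lambda px, py, pz: (px - 64) ** 2 + (py - 64) ** 2 + (pz - 64) ** 2 <= 1600
blocks = []
build_octree(sphere, 0, 0, 0, 128, 4, blocks)
print(len(blocks), "blocks, sizes:", sorted({b[3] for b in blocks}))
```

Note how the saving arises: black octants of large size are stored whole, and only grey octants trigger further subdivision, so the block count concentrates near the object surface.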
The memory and computing time savings are a direct consequence of the inclusion in
the model of octants of different sizes: a black octant included in the object does not
generate smaller blocks. A further generation of blocks occurs only when a grey octant is
detected (i.e. a higher resolution is required). In this way the total number of blocks in the
model of the object can be drastically reduced. The process stops when the minimum
dimension, previously stated for the blocks, is reached. At the end of the process the set of
[Figure: CAD model and corresponding octree model.]
Although the relational model is not sufficient to perform every possible analysis of the
product, it is partially kept. In particular, the information concerning contact and
connection relationships among the elements is conveniently used in order to speed up
the sequence detection process.
In the next paragraphs further features of octree encoding oriented to solid modelling for
assembly planning are discussed and some meaningful examples are proposed.
3.1 CAD INTERFACE
To create the octree model of the product, appropriate software linking the CAD system
and the automatic assembly planning system has been developed. This software (written in C
language, like the rest of the system) can create CAD command files. By running these files
on the CAD system each block is detected and the octree model is automatically created.
For the creation of the octree model the steps reported in section 2 are strictly followed.
The CAD system is only used during the octree model construction phase. Thus, using octree
models stored in binary files, the assembly planning system can perform
solid modelling operations even when running on computers where a CAD package is not
installed.
3.2 MODEL ACCURACY
A critical problem in octree encoding is the evaluation of the model accuracy, which
depends on the minimum dimension of the blocks used to represent the object. In the octree
modelling two contrasting requirements are present:
- for accurate modelling, the minimum dimension of the blocks should be as small as
possible, which means that a high number of blocks in the model is required;
- for fast running of the software, the number of blocks in the model should be as
small as possible.
Fig.2 - Different kinds of details with their critical dimension (d) related to
block edge length (b): detail with parallel faces (d = b), cylindrical detail
(d = √2·b), spherical detail (d = √3·b).
Therefore a minimum block dimension, which allows each detail in the product to be
considered using the minimum number of blocks, has to be determined.
Three typical situations are reported in Fig.2: details with parallel faces (blocks),
cylindrical details and spherical details. To correctly model these details, the following features
have to be considered: in the first case the edge length, in the second case the diagonal of a
face, in the third case the diagonal of the block. To ensure correct modelling in all
situations, the following relation must be fulfilled:
b_min < 0.5 · d_min / √3     (1)
where b_min is the minimum block dimension and d_min is the minimum detail dimension.
Furthermore, b_min must be an acceptable value to create an octree grid, i.e. it must be equal to
work_space_dimension/2^n, with n an integer number. d_min has to be multiplied by 0.5
because the particular situation reported in Fig.3 can occur. To show the situation clearly a
2D representation is used, but relation (1) is valid in 3D. If the object is positioned in
the octree grid as in Fig.3.a, the current resolution is sufficient and the minimum dimension
of the block is equal to the minimum dimension of the detail. But if the object is
positioned as in Fig.3.b, at least one higher level of blocks (i.e. one further subdivision)
is required to take the detail into consideration.
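Relation (1) can be turned into a small numeric check: starting from the work space dimension, halve the block edge until it falls below 0.5·d_min/√3. This is a sketch; the function name and example values are illustrative.

```python
import math

def min_block_dimension(d_min, workspace):
    """Largest admissible block edge b_min = workspace / 2**n that
    satisfies relation (1): b_min < 0.5 * d_min / sqrt(3)."""
    limit = 0.5 * d_min / math.sqrt(3)
    n = 1
    while workspace / 2 ** n >= limit:
        n += 1
    return workspace / 2 ** n, n

# Example: a detail of 10 mm minimum dimension in a 512 mm work space.
b_min, n = min_block_dimension(10.0, 512.0)
print(b_min, n)  # 2.0 mm, reached after n = 8 subdivision levels
```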
[Fig.3: detail of dimension d_min positioned in the octree grid: case (a) and case (b).]
It follows that a fundamental operation the system has to perform is the detection of
interferences between elements moving along a disassembly direction.
Because an interference during the disassembly of an element occurs when it is aligned
with any other element along the current disassembly direction, the interference detection
can be performed by an alignment check (Fig.4.a).
The octree model supports this operation because the position of each block in the model is
available from its code. Therefore it is simple to check whether any alignment between
components exists by checking the alignments of the blocks in the octree models (Fig.4.b).
Fig.4 - Alignment check: actual situation (a) and simulation with octree models
(b), where all the blocks not aligned with the current higher-level blocks are not
involved in the check procedure.
The problem is the number of alignment checks to perform. If the model of the current
element contains N blocks and the model of the rest of the product contains M blocks, one
interference control requires N*M alignment checks. An octree model often
contains more than 1,000 blocks, and during a disassembly sequence detection session
hundreds of interference controls have to be performed. So the total number of alignment
checks can be excessive for an effective use of the system.
To avoid this problem a particular algorithm has been developed: the blocks stored in the
lower-level file are used to fill a 3-D matrix. The matrix reproduces the spatial grid of the
octree model at the current higher level. If an element of the matrix is included in a lower-level
block, that element is filled with 1; otherwise it is filled with 0. Hence, each control
between files is very fast, because for each higher-level block only a short series of checks
between elements of the matrix has to be performed. Thus, the whole interference control
between elements A and B is subdivided into n*m controls, where n and m are the number of
files (i.e. levels) forming the octree models of A and B, respectively. The value n*m is
always much lower than N*M: with 9 levels (i.e. 9 files) a workspace of 512×512×512 mm
can be modelled with an accuracy of 1 mm.
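The matrix-based control described above can be sketched as follows. This is an illustrative toy version, not the authors' code: blocks are axis-aligned cubes of unit cells, the disassembly direction is fixed along +z, and all names and the grid size are hypothetical.

```python
# Sketch of the interference check via a 3-D occupancy matrix:
# blocks are (x, y, z, size) cubes on a grid of unit cells.

def fill_matrix(blocks, dim):
    """Mark every unit cell covered by a block of the obstacle model."""
    m = [[[0] * dim for _ in range(dim)] for _ in range(dim)]
    for x, y, z, s in blocks:
        for i in range(x, x + s):
            for j in range(y, y + s):
                for k in range(z, z + s):
                    m[i][j][k] = 1
    return m

def interferes(moving_blocks, matrix, dim):
    """True if any block of the moving element is aligned, along the +z
    disassembly direction, with an occupied cell of the obstacle matrix."""
    for x, y, z, s in moving_blocks:
        for i in range(x, x + s):
            for j in range(y, y + s):
                for k in range(z + s, dim):   # cells above the block
                    if matrix[i][j][k]:
                        return True
    return False

# Element B (the rest of the product) occupies a slab above element A.
dim = 16
B = [(0, 0, 12, 4), (4, 0, 12, 4)]   # two 4-cell blocks at height 12
A_under = [(2, 1, 0, 2)]             # element A directly below the slab
A_clear = [(10, 10, 0, 2)]           # element A off to the side
m = fill_matrix(B, dim)
print(interferes(A_under, m, dim))   # True: aligned along +z with B
print(interferes(A_clear, m, dim))   # False: free to move upward
```

Filling the matrix once and reusing it for every block of the moving element is what turns the N*M pairwise checks into the much cheaper n*m file-against-file controls.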
Because the octree model is totally included in the object volume, the octree model is
smaller than the modelled object; in this situation, small volume interferences can go
undetected when using octree models. To avoid this problem an enlarged octree model of the
element to be disassembled can be used. This enlargement can also be used to reduce the
number of detected sequences: if, during its disassembly, an element moves close to
another one, the enlargement can generate an interference; in this way critical
sequences can be discarded by an appropriate enlargement. The enlargement factor is set
by the user considering, for instance, the tolerance of critical elements or the trajectory
error of the robot.
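A minimal sketch of such an enlargement, assuming axis-aligned blocks grown symmetrically on every side by the user-set factor (the function name and the growth strategy are illustrative, not taken from the paper):

```python
# Enlarge an octree-derived block model so that near-miss trajectories
# register as interferences during the disassembly simulation.

def enlarge_blocks(blocks, factor):
    """Grow each (x, y, z, size) block by 'factor' on every side."""
    return [(x - factor, y - factor, z - factor, s + 2 * factor)
            for x, y, z, s in blocks]

model = [(4.0, 4.0, 0.0, 2.0)]
print(enlarge_blocks(model, 0.5))  # [(3.5, 3.5, -0.5, 3.0)]
```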
4. EXAMPLE OF APPLICATION
The chosen benchmark is a manually operated stop valve currently produced in industry.
The high production volumes make it suitable for automatic assembly. A section of the
CAD model is reported in Fig.5.
In the last 10 years the Faculty of Mechanical Engineering in Slavonski Brod has developed
AISP, an information system for different production enterprises.
Today, the aim of information systems has been changing: from separate programs, called
information islands, to program packages of universal application (Computer Aided
Design - CAD, Computer Integrated Office - CIO, Computer Aided Manufacturing - CAM,
...) and Management Information Systems - MIS, towards Computer Integrated Manufacturing - CIM [1].
AISP has been developed for manufacturing firms in the metal processing, electrical, timber
and construction industries, erection companies, the processing industry, the meat industry
and bread production.
N. Majdandzic et al.
[Figure 2: Software integration - word processors and spreadsheets, CIM planner and
calculator, business and production subsystems (FINIS, RINIS, BAZAP, DEPTO, NAZAL,
PLAPE, PRAPE, ODKAP), 2D/3D CAD, CAM (APT, EXAPT, IMKE), NC/CNC machines, robots, TIS
and expert system, connected over LAN/WAN.]
The first level of integration is the connection of the business subsystems (FINIS, RINIS)
and the production subsystems (PROKA, DEPTO, NAZAL, PLAPE, OSKVE, PRAPE,
ODKAP).
The second level of integration is the connection of AISP and the CAD/CAM system.
The third level of integration is the connection of AISP, CAD/CAM, CIO (Computer Integrated
Office), the expert system and the production capacities (NC and CNC machines, robots and
flexible technology systems).
2. CONTENTS OF THE DEPTO SUBSYSTEM
The DEPTO subsystem organizes the data and programs needed to define the structure of
products, operations of technology and standards of materials.
The DEPTO subsystem contains three modules:
- the Product Definition module,
- the Technology module,
- the Materials module.
The Product Definition module holds data on a product (description of a product,
illustration of a product, manufacturing elements, composition of a product, required
auxiliary materials, required operating supplies).
The Materials module holds data on the required materials, on the cutting out scheme for the
common starting material, and on variant materials.
The Technology module (TEHNO) will be described in some detail here.
2.1 THE TECHNOLOGY MODULE (TEHNO)
The TEHNO module has been developed to design technology for conventional and
numerically controlled machines, for manual workplaces, and for protection and
heat treatment technology.
The TEHNO module content is given in figure 3.
The GRAFM program system contains programs and data for the graphic design of technologic
sketches and for the preparation of drawings from the CAD program for NC program
development (CAM).
The KONTE program system holds organized data needed for the operation of the
subsystems: production planning, production and selling control (determination of a product's
manufacturing price). It organizes data on technologic operations, variants of a technologic
process, technologic activities, modes of operation and the required standard and non-standard tools.
The NUSTE program system contains programs and data needed to develop programs for
operation of the numerically controlled machines.
The OPTRA program system contains programs and data for optimization of modes of
operation of machines in accordance with the performance possibilities of the machines and
tools.
The OPTPA program system automatically selects the optimal technologic process.
The IZDET program system makes it possible to generate and print technologic
documentation, while the POSTP program system contains programs for postprocessing of
the programs for the operation of NC machines.
[Figure 3: Content of the TEHNO module - GRAFM, KONTE, NUSTE, OPTRA, OPTPA, IZDET
and POSTP program systems.]
In the Definition of product, technology and materials module, structure of the product is
defined.
After that, the OPKROJ program, in an interactive mode of operation, determines the layout
of parts on the common starting material, defined by its dimensions. The layout
procedure also integrates the gas cutting elements, so that after determining the layout the
technologic parameters are also obtained and sent back to the DEPTO subsystem, to be used
as data for determining the operation and time of cutting and for defining the list of
materials through the cutting out scheme.
The Selling and calculation subsystem - PROKA takes standards from DEPTO and prices from
the accounting subsystem RINIS, and then calculates the cost price. Based on the monthly
plan, it supplies the elements necessary for making an order in the Purchasing and selling
subsystem - NAZAL.
[Figure: Data flow for the cost price calculation - DEPTO supplies the cutting out scheme
and quantities; the cost price is built from basic material, cooperation, labour, gross
salaries, amortization and energy; orders, terms and inventories are managed; RINIS
supplies prices and final expenses.]
3. CONCLUSION
With increased power and responsibilities, managers also show more interest in introducing
information technology into their firms. The concepts of development and similarity of
subsystems in various types of manufacturing companies are defined. The necessary
integrations of applications package programs into the AIS of a company are given as well
as one of the possible strategies of the program integration into the CIM system.
REFERENCES
1.
2.
1. INTRODUCTION
The rapid growth of science and computer technology has a dramatic impact on the
development of production automation technologies. Nearly all modern production systems
are implemented today using computer systems. This tendency also places very strong
requirements on the properties of constructive materials. The optimization of classical
materials and their characteristics has a limit, so new solutions were sought by developing
qualitatively new materials. Composite materials are unique materials that can reconcile
the requirements of good reliability, light weight, and high static and dynamic
characteristics.
The use of composite materials helps the development and production of a new body in a
short period and without heavy investment. Due to the heavy involvement of manual
work, the overall design and manufacturing process of the vehicle is characterized by a
great deal of lead time and man-hours. Design and production of different types of vehicles
in small series is a rapidly growing business for small and medium-size companies.
By analyzing those parameters and using PCs and PC-based software, a new integrated
CAD/CAM/CAE system was developed. Each phase of this system is illustrated through the
development and production of an all terrain vehicle. The new body was
manufactured from composite materials and installed on an existing metal chassis and driving
group, with appropriate changes caused by the specific demands of the new vehicle design.
3. COMPOSITE MATERIALS
In the research and development of new advanced design materials, a most special
place is taken by composite materials. In the literature [3,4] which treats this
subject, a composite material is defined as an artificial material system composed of
a mixture or combination of two or more macroconstituents differing in form and/or
material composition and essentially insoluble in each other.
Composite materials are two-phase materials: on the microstructure level they consist of
two materials, the reinforcing fibers and the matrix resin. On the macrostructure level the
composite material is built up of layers made of reinforcing fibers bound by
the matrix resin. Engineering methods of calculation were developed on the macrostructure
model and, accordingly, the layer is taken as the basic type of element in the finite
element method [4,7,8].
CAD/CAM/CAE System for Production All Terrain Vehicle with Composite Materials
[Figure: Structure of the integrated CAD/CAM/CAE system - from the idea (marketing;
standards and recommendations; software for 2D/3D modeling) through conceptual design and
geometry modeling (wire frame, surface, solid) to FEM structural analysis with
optimisation, evaluation and presentation (graphical presentation and animation; review),
drawings, computer aided process planning, computer aided manufacturing (CNC code) and
computer aided control planning.]
Several theories for analyzing composite materials are described in the literature [3,4]. One
of them is the rule of mixtures, which belongs to the classical theories for analyzing
composite materials. This method describes the analytical way to calculate the elastic
modulus (Young's modulus) and the tensile strength. Before calculating and analyzing with
this method, the mechanical properties of the fiber and matrix, as well as their volume
ratios, must be known.
E = α·E_f·V_f + E_m·V_m     (1)

σ_c = β·σ_fu·V_f + (σ_m)_ε_fu·V_m     (2)

where:
E_f [MPa] - Young's modulus of the fiber
E_m [MPa] - Young's modulus of the matrix
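As a worked example of equation (1), with α = 1 and illustrative property values (typical E-glass fiber and polyester matrix moduli, not figures taken from this paper):

```python
# Rule-of-mixtures sketch for the longitudinal elastic modulus,
# E = alpha * E_f * V_f + E_m * V_m.
# E-glass fiber and polyester matrix values below are assumptions.

def rule_of_mixtures(E_f, E_m, V_f, alpha=1.0):
    """Longitudinal modulus of a unidirectional layer; V_m = 1 - V_f."""
    V_m = 1.0 - V_f
    return alpha * E_f * V_f + E_m * V_m

E = rule_of_mixtures(E_f=72000.0, E_m=3500.0, V_f=0.30)  # MPa
print(round(E))  # 24050 MPa
```

The fibers dominate the result even at a 30% volume ratio, which is why the fiber content column in Table 1 tracks the Young's modulus so closely.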
and body are developed using AutoCAD drafting and modeling software. Such an approach
enables the following phases of the development process, after verification by the
structural analysis, avoiding the need for multiple redrafting during the different phases
of the new vehicle development.
5. STRUCTURAL ANALYSIS
The design process of a structural entity of the vehicle includes its stiffness and
strength analysis. The finite element method is the most widely used method in world
practice. The main objective of the analysis is to develop a suitable model of the
construction that defines the stress and strain state of the overall system as realistically
as possible. These states refer to the possible loads of the vehicle which could occur in
certain regimes during its exploitation.
The main characteristic of the presented composite body is the use of a polyester composite
structure of different thicknesses, reinforced at certain parts with ribs and closed profiles,
made by processing a polyester resin reinforced with glass or other types of fibers. The
mechanical properties of the obtained composite structure are directly related to the
components used. They depend on: the mechanical properties of the reinforcing fibers and
matrix resin, the percentage of reinforcing fibers in the structure, the degree of adhesion,
and the section and orientation of the fibers in the matrix. Table 1 presents the minimal
mechanical properties of reinforced polyester composite materials [3,4].
Table 1. Minimal mechanical properties of polyester composite materials

thickness [mm] | tensile strength [MPa] | flexural strength [MPa] | Young's modulus [MPa] | density [g/cm3] | volume ratio of fibers
5              | 45                     | 112                     | 4900                  | 1.8             | 25%
6.5            | 84                     | 133                     | 6300                  | 1.8             | 28%
8              | 95                     | 140                     | 7000                  | 1.8             | 30%
9              | 105                    | 154                     | 8400                  | 1.8             | 34%
Increased resistance and absorption of impact energy can be achieved by the choice
of matrix resin, fillers and reinforcing fibers. The composite body features good
mechanical properties in relation to weight, good anticorrosion and insulation properties,
the possibility of manufacturing big parts (decreasing the duration of the assembly process)
and reduced noise.
For the purpose of this analysis the ALGOR commercial software package has been used,
because it meets the necessary requirements and has references for this type of analysis
[9]. The ALGOR software package has a module for exchanging DXF files with AutoCAD.
The basic geometry of the vehicle model, including chassis and body, is transferred from
AutoCAD through the use of DXF. To get a more realistic picture of the
stress and strain state, the chassis and composite body are interconnected with joint
During load modeling, the arrangement of the useful loads and the loads caused by vehicle
equipment has been chosen very carefully. The most appropriate design solution, based on
the strength properties of the whole vehicle as well as of certain characteristic sections,
has been determined by several successive iterative analyses.
6. EVALUATION AND FINAL GEOMETRY MODEL
After each iterative analysis, the basic geometry model undergoes certain specified
corrections which lead to the final geometry model. As a result, a 3D wire frame model is
transferred from ALGOR FEA to AutoCAD via DXF. Surface and solid models in
AutoCAD are formed with the use of 3D modeling and AME (Advanced Modeling
Extension). Those models are used in the further phases of the computer integrated design
and production system [10,11].
The graphical visualization and animation of the vehicle have a great influence on the
evaluation of the overall design. Investors and other technical persons show interest in
the aesthetics and value of the new design, and they understand a design presented in
three-dimensional models more easily. These techniques help to shorten the time needed for
manufacturing a physical prototype model, save the investments needed to start the
production process, allow appropriate changes to the geometry model under market pressure
to be made much faster, and make it simple to explain and resolve misunderstood parts and
details of the vehicle on the computer graphical display. Using the 3D Studio software
program, some deficiencies in the design were eliminated, and the new vehicle is presented
for marketing purposes in Figures 5 and 6 [11].
The solid model, combined with the wire frame model and the drafting capabilities of
AutoCAD, is used for elaborating the final technical documentation. This documentation
gives a detailed description of the vehicle in the form of 2D technical drawings.
8. CONCLUSION
The presented integrated CAD/CAM/CAE system for the development, design and production
of a vehicle with composite materials provides possibilities for reducing the vehicle
development time and the labor required, and for improving quality. Taking into consideration
the widely used PC, PC-based AutoCAD software and the well-known ALGOR FEA software,
computer based development and production of vehicles with a composite materials body
becomes a reality for small enterprises as well.
REFERENCES
1. Spur, G., Krause, F.: CAD-Technik, Carl Hanser Verlag, 1984
2. Groover, M.P., Zimmers, E.W.: CAD/CAM: Computer-Aided Design and
Manufacturing, Prentice Hall, 1984
3. Calcote, R.L.: The Analysis of Laminated Composite Structures,
Van Nostrand-Reinhold, 1969
4. Sabodh, K.G., Svalbonas, V., Gurtman, G.A.: Analysis of Structural Composite
Materials, Marcel Dekker, New York, 1973
5. Pawlowski, J.: Vehicle Body Engineering, Business Books Limited, London, 1969
6. Dukovski, V., Dudeski, Lj., Vrtanoski, G.: Computer Based Development and
Production System for Vehicles with Composite Materials Body, Ninth World
Congress IFToMM, Milan, 1995
7. Rao, S.S.: The Finite Element Method in Engineering, 2nd ed., Pergamon Press, 1988
8. Advani, S.G., Gillespie, J.W.: Computer Aided Design in Composite Material
Technology III, Elsevier, London, 1992
9. ALGOR FEA System: Processor and Stress Decoder Reference Manual,
Pittsburgh, 1991
10. AutoCAD Release 12 User's Guide, Autodesk Inc., 1993
11. Elliot, S., Miller, P., Pyros, G.: Inside 3D Studio Release 3, NRP, Indiana, 1994
F. De Bona
University of Udine, Udine, Italy
M. Matteucci
C.N.R. Sezione di Trieste, Trieste, Italy
J. Mohr and F.J. Pantenburg
Institut fuer Mikrostrukturtechnik, Forschungszentrum Karlsruhe,
Karlsruhe, Germany
S. Zelenika
Sincrotrone Trieste, Trieste, Italy
F. De Bona et al.
MEMs are robotics [3], molecular engineering [7], fiber [8] and integrated [9] optics, fluid
technology [10] and microconnector arrays [11].
The success of micromechanics has been made possible by the development of a huge
variety of microfabrication processes (see Tab.l). Most of these techniques have been
derived from "traditional" microelectronics technology. In fact the first mass fabrication of
MEMs started with the process of wet-chemical etching, successively replaced by
anisotropic dry-etching processes by means of low-pressure plasma or ion beams. At
present microelectronics derived technologies in the MEMs area are essentially of two
types: surface processes and bulk processes.
Tab.1: Unit processes
- Photolithography
- Micro stereolithography (IH)
- Beam machining processes
- Etching techniques
- Deposition techniques
- Bonding techniques
- Micro electro-discharge machining (EDM)
- Mechanical micromachining
Surface processes (see Tab.1) are those that make it possible to obtain a silicon
microstructure by means of deposition of thin films, as an additive technique, and
selective etching of the thin films, as a subtractive technique. The thin film system usually
consists of a structural layer on top of a sacrificial layer; in this way the etching of the
sacrificial layer allows tridimensional surface structures to be made, such as microbeams,
microsprings and laterally mobile microelements [6].
Bulk microfabrication is based on photolithographic etching techniques. The most popular
materials for bulk micromachining are silicon, glass and quartz. Even if wet chemical
etching is still the dominating bulk machining technique, dry etching techniques are
rapidly growing. The main drawback of bulk micromachining, compared to surface
micromachining, is that it requires double sided processing of the wafer to make
tridimensional structures [6].
As microelectronics generally deals with planar silicon structures, the main limitations of
microelectronics derived technologies are related to the fact that traditional mechanical
materials such as metals can not be used and truly tridimensional structures can not easily
be obtained. Recently, to overcome these drawbacks several new "non-traditional"
[Figure: The LIGA process - 1. lithography (irradiation and development of the resist),
2. electroforming (metal deposition into the structure of the irradiated resist),
3. molding process (mold filling with the molding mass on a metal gate plate, demolding
of the microstructure made of plastic), 4. second electroforming process (gate plate as
electrode, yielding a microstructure made of metal).]
which are heated by the absorption of the intense synchrotron flux. Additional valves and
stoppers are then used for safety reasons.
[Plot: curves for SR, SR + 280 µm Be, SR + 480 µm Be, and SR + 480 µm Be + 200 µm PMMA.]
Fig. 2: Power spectrum of the synchrotron radiation (SR) from a bending magnet of Elettra,
upstream and downstream of the vacuum windows, the mask membrane and the resist.
Synchrotron radiation has a broad spectrum, covering a range from visible
light to hard X-rays; generally this spectral distribution has to be modified, as it is not
optimized for the irradiation process. Using PMMA as resist material, the minimum and
the maximum dose required to obtain a good development and no irradiation below the
mask absorber pattern are well known [8]; therefore, for a given resist height and
filtering configuration, the actual doses at the surface and in the depth of the resist can be
evaluated and compared with the required values. If the synchrotron radiation power
spectrum of the source is known, the power spectrum after the filtering
elements (Be windows, mask membrane, mask absorber, etc.) can be evaluated using
Beer's law (Fig.2); then, integrating in energy and multiplying by the absorption
coefficient of the resist material, the dose of irradiation and the irradiation time at
different resist depths can easily be obtained. If the ratio between the minimum and the
maximum dose is too high, extra filters must be added, even if this increases the
irradiation time. However, the suppression of the longer wavelengths always has to be
performed; such radiation is useless for irradiation purposes, as it has a short penetration
depth and is absorbed by the mask membrane, producing an undue heating of the mask,
with consequent expansion and therefore a reduction of the accuracy of the fabrication
process.
Recently it has been observed that the high energy components of the radiation spectrum
also have to be reduced, as they could produce secondary electrons at the substrate interface,
inducing undesired irradiation processes [15]. In that case it is necessary to remove
the high energy photons by using grazing incidence mirrors [16].
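The filtering computation described above can be sketched as follows: each filter attenuates the power at photon energy E by exp(-μ(E)·t), per Beer's law. The toy spectrum and the attenuation-coefficient function below are made-up illustrative numbers, not Elettra or beryllium data.

```python
import math

# Beer's law sketch: the power spectrum after a stack of filters is
# P(E) * exp(-sum_i mu_i(E) * t_i).

def filtered_spectrum(spectrum, filters):
    """spectrum: {energy_keV: power}; filters: list of (mu_of_E, thickness_cm)."""
    out = {}
    for energy, power in spectrum.items():
        attenuation = sum(mu(energy) * t for mu, t in filters)
        out[energy] = power * math.exp(-attenuation)
    return out

# Toy source spectrum and a filter whose mu falls with energy, so it
# suppresses the long wavelengths (low energies) most, as in the text.
source = {1.0: 10.0, 4.0: 8.0, 16.0: 2.0}           # keV -> W per unit angle
mu_filter = lambda E_keV: 50.0 / E_keV ** 3          # cm^-1, illustrative
after = filtered_spectrum(source, [(mu_filter, 0.028)])  # 280 um of filter
print({e: round(p, 3) for e, p in after.items()})
```

Integrating the filtered spectrum over energy and multiplying by the resist absorption coefficient would then give the dose at a given depth, as the text describes.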
4. FIRST MICRO-STRUCTURES OBTAINED AT ELETTRA
Irradiations were performed at the Elettra synchrotron radiation source, operating at an
energy of 2 GeV with an electron beam current of 200 mA. The X-ray beam used in the
experiment had a vertical aperture of 1 mrad and a horizontal aperture of 7 mrad. The
power spectrum of the beam is represented in Fig.2; two beryllium windows (thickness:
200 µm and 80 µm respectively) were used. Fig.2 shows the power spectrum before and
after these filtering elements, the mask membrane and the resist. The area below these
curves corresponds to the power per unit angle. Although the overall beam power is 57 W,
a great part of it (25 W) is absorbed by the first beryllium window, which therefore has to
be water cooled. The residual power is absorbed by the second beryllium window,
by the mask, and by the resist and the substrate. The irradiations were performed using
Jenoptik's DEX X-ray scanner mounted 12 meters from the source. The developing
process was performed according to [17].
Several test micro-structures made of PMMA have been produced. Fig. 3 shows a gear
wheel with involute teeth; the diametral pitch is 200 µm, the thickness is 100 µm. As the
close-up view clearly shows (Fig. 4), the tooth surface is very smooth and regular. It can be
noticed that these are "negative" structures; in fact, as described previously, electrodeposition
should be performed in order to obtain the final "positive" metallic structure.
Deep X-ray lithography permits very high aspect ratios to be obtained. This concept is
stressed in Figs. 5, 6 and 7. Fig. 5 shows a cross with a smallest bar width of 10 µm, Fig. 6
shows a single standing wall of the same width and Fig. 7 shows column structures with a
diameter of 20 µm and a height of 200 µm.
All these structures were obtained using a beryllium mask membrane (thickness: 200 µm)
patterned with gold, and a PMMA resist with a thickness of 200 µm.
Fig.8 shows a grating structure used for an IR-spectrometer developed at the
Forschungszentrum Karlsruhe. In this case a beryllium membrane with a thickness of
500 µm was used. The great precision of the fabrication process is evident here: the grating
steps have a height of 2 µm and a thickness of 100 µm (aspect ratio: 50); wall
roughness lower than 10 nm can also be observed.
Several other test structures (microconnectors, micro turbines, etc.) have been produced. In
all these cases good accuracy has always been obtained, confirming that Elettra is
very well suited for performing this technique. Future work will consist of experiments
dedicated to further optimization of the synchrotron spectrum, particularly concerning
the reduction of the high energy components, while in parallel the first micromechanical
products are under detailed design.
ACKNOWLEDGEMENTS
The authors wish to thank the Jenoptik company for providing the X-ray scanner.
This work was partially supported by E.C. grant ERBCHRX-CT-930394/130.
REFERENCES
[1) Lim, G., Minami, K., Sugihara, M.: Active Catheter with Multilink Structure Based on
Silicon Micromachining, Proc. IEEE Micro Electro Mechanical Systems Conf.,
Amsterdam (NL), (1995), 116-121
[2] Rapoport. S.D., Reed, M.L. and Weiss, L.E.: Fabrication and Testing of a Microdyamic
Rotor for Blood Flow Measurements, J. Micromech. Microeng., 1 (1991), 60-65
[3] Dario, P., Valleggi, R., Carrozza, M.C., Montesi, M.C., and Cocco, M.: Microactuators
for Microrobots: a Critical Survey, J. Micromech. Microeng., 2 (1992), 141-157
[4] Brown, A.S.: MEMs: Macro Growth for Micro Systems, Aerospace America, October
(1994), 32-37
494
F. De Bona et al.
[5] Liu, C., Tsao, T., Tai, Y.C., Leu, T.S., Ho, C.M., Tang, W.L., Miu, D.: Out-of-Plane
Permalloy Magnetic Actuators for Delta-Wing Control, Proc. IEEE Micro Electro
Mechanical Systems Conf., Amsterdam (NL), (1995), 7-12
[6] Ohlckers, P., Hanneborg, A., Nese, M.: Batch Processing for Micromachined Devices, J.
Micromech. Microeng., 5 (1995), 47-56
[7] Drexler, K.E.: Strategies for Molecular System Engineering, in: Nanotechnology,
edited by Crandall, B.C. and Lewis, J., MIT Press, (1994), 115-143
[8] Ehrfeld, W. and Lehr, H.: Deep X-Ray Lithography for the Production of Three-dimensional
Microstructures from Metals, Polymers and Ceramics, Radiat. Phys. Chem., 3
(1994), 349-365
[9] Uenishi, Y., Tsugai, M., Mehregany, M.: Micro-Opto-Mechanical Devices Fabricated
by Anisotropic Etching of (110) Silicon, J. Micromech. Microeng., 5 (1995), 305-312
[10] Shoji, S., Esashi, M.: Microflow Devices and Systems, J. Micromech. Microeng., 4
(1994), 157-171
[11] Rogner, A., Eicher, J., Munchmeyer, D., Peters, R.P. and Mohr, J.: The LIGA
Technique - What Are the New Opportunities, J. Micromech. Microeng., 2 (1992), 133-140
[12] Dario, P., Carrozza, M.C., Croce, N., Montesi, M.C., and Cocco, M.: Non-Traditional
Technologies for Microfabrication, J. Micromech. Microeng., 5 (1995), 64-71
[13] Becker, E.W., Ehrfeld, W., Munchmeyer, D., Betz, H., Heuberger, A., Pongratz, S.,
Glashauser, W., Michel, H.J. and von Siemens, R.: Production of Separation-Nozzle
Systems for Uranium Enrichment by a Combination of X-Ray Lithography and
Galvanoplastics, Naturwissenschaften, 69 (1982), 520-523
[14] Mohr, J., Bacher, W., Bley, P., Strohrmann, M. and Wallrabe, U.: The LIGA-Process
- A Tool for the Fabrication of Microstructures Used in Mechatronics, Proc. 1er Congres
Franco-Japonais de Mecatronique, Besancon, France (1992)
[15] Pantenburg, F.J., Mohr, J.: Influence of Secondary Effects on the Structure Quality in
Deep X-ray Lithography, Nucl. Instr. and Meth., B97 (1995), 551-556
[16] Pantenburg, F.J., El-Kholi, A., Mohr, J., Schulz, J., Oertel, H.K., Chlebek, J., Huber,
H.-L.: Adhesion Problems in Deep-Etch X-Ray Lithography Caused by Fluorescence
Radiation from the Plating Base, Microelectr. Eng., 23 (1994), 223-226
[17] Mohr, J., Ehrfeld, W., Munchmeyer, D. and Stutz, A.: Resist Technology for Deep-Etch
Synchrotron Radiation Lithography, Makromol. Chem., Macromol. Symp., 24
(1989), 231-240
Arata [4]. Rykalin's analysis was limited to a circular laser source, such as a Gaussian source,
and to the determination of the temperature on the surface of the cutting front. Olson [5]
analysed the cutting front very carefully, plotting isothermal lines and then
determining the thickness of the molten and recrystallized layers of the workpiece material.
One of his important findings is that in the case of a high temperature gradient, a thin layer of
molten and recrystallized material and a small thickness of the heat-affected zone are
obtained, which assures a good and uniform quality of the cut.
Rajendran [6] used thermoelements for temperature measurement in the vicinity of the
cutting front and then analysed the so-called temperature cycles. He found that the cut
quality is strongly related to the temperature gradient in heating and cooling.
Chryssolouris [7] gave a survey of various ways of sensing individual physical phenomena
in the workpiece material during laser cutting processes. He studied various possibilities of
temperature measurement and acoustic emission perception, and presented various methods
of on-line monitoring of the temperature in the cutting front and of acoustic emission in the
workpiece material, together with a survey of methods for controlling laser cutting processes.
Nuss et al. [8] studied the deviations in the size of round blanks in laser cutting of different
steels with a CO2 laser in pulsed and/or continuous operation. The deviation was
assessed with regard to the precision of NC-table control and the direction of light polarisation.
Toenshoff and Samrau [9] and Bedrin [10] investigated the quality of the cut by measuring the
roughness at varying laser source power and varying workpiece speeds. They also studied
the quality of the cut while changing the position of the optical system focus with respect to the
workpiece surface. Thomassen and Olsen [11] studied the effects produced on the quality
of the cut by changing the nozzle shape and oxygen gas pressure.
On the basis of research investigations which consisted of temperature measurements with
thermocouples, we established the temperature cycles, from which we defined the
temperature gradient and constructed the temperature fields with isotherms. From these data it
was possible to assess the quality of the laser cut. In addition to temperature measurement
with a thermocouple in the vicinity of the cutting front, an infrared pyrometer was used to
measure the temperature in the cutting zone itself [12].
In addition to the temperature signals, the surface of the cut was also analysed. The data in
the histograms of the surface of the cut show that the surface signal variance increases
with increasing roughness. From the normalised autocorrelation function we can note that
the surface signal variance is related to roughness [13, 14].
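The link between the variance of the surface signal and its normalised autocorrelation can be illustrated with a short numerical sketch; the two profiles below are synthetic stand-ins for measured cut surfaces, not data from the study:

```python
import math
import random

def variance(x):
    """Population variance of a sampled signal."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def normalized_autocorrelation(x, max_lag):
    """Normalised autocorrelation R(k) of a profile; R(0) = 1 by construction."""
    m = sum(x) / len(x)
    d = [v - m for v in x]
    c0 = sum(v * v for v in d)
    return [sum(d[i] * d[i + k] for i in range(len(d) - k)) / c0
            for k in range(max_lag)]

# Two synthetic cut-surface profiles: same waviness, different roughness
rng = random.Random(0)
smooth = [math.sin(0.05 * i) + 0.05 * rng.gauss(0, 1) for i in range(2000)]
rough  = [math.sin(0.05 * i) + 0.80 * rng.gauss(0, 1) for i in range(2000)]

print("variance (smooth):", variance(smooth))
print("variance (rough) :", variance(rough))   # larger for the rougher profile
print("R(0..2) smooth   :", normalized_autocorrelation(smooth, 3))
```

The rougher (less correlated) profile shows the larger variance and the faster-decaying autocorrelation, which is the qualitative relation reported in [13, 14].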
2. EXPERIMENTAL PROCEDURE AND RESULTS
This article presents some of the results obtained in indirect temperature measurement
by means of the infrared radiation density emitted from the cutting front. The aim of the
temperature measurement is to monitor the thermal phenomena in the cutting front in the
laser cutting process with coaxial supply of oxygen as the auxiliary gas, which provides,
besides the laser beam energy, additional exothermal heat that further affects the
temperature changes in the cutting front.
[Table: cutting speeds v [mm/s] used in the tests, four levels per material thickness, ranging from 20 to 50 mm/s]
Fig. 1 shows the measuring set-up for measurement of IR radiation from the cutting front,
including components for capturing, storage, analysis, and assessment of the temperature
signals. A temperature signal is proportional to the energy flow density of the infrared
radiation spreading from the cutting front, which is detected by a sensor for IR radiation. The sensor
consists of a photodiode CONTRONIK-BPX 65 with an electromagnetic radiation
sensitivity range from 0.4 to 1.0 µm, an amplifier, and a transformer. During the laser
cutting process, the sensor is directed towards the cutting front so that it intercepts the
radiation from the cutting front and its surroundings and transforms it into a temperature
signal TS expressed in millivolts [mV]. The value of the temperature signal in cutting a
given kind of material and a given thickness depends on:
- the density of IR radiation of the overheated material, of the molten pool, and of the plasma in
the cutting front due to energy input by the laser beam;
- the additional density of IR radiation from the cutting front due to exothermal reactions.
The thickness variation of the molten layer along the cutting front depends on the
temperature signal variation, i.e. its temperature. The expected frequencies of variation in
the molten layer thickness depend on the cutting speed and vary between 100 and 800 Hz
[3]. The temperature signal from the photodiode is finally led into a 100 MHz digital
oscilloscope, where it is digitized and stored on a floppy disk for subsequent statistical
processing.
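The statistical processing of such a digitized record - the mean TS and the standard deviation STD-TS used throughout the next section - can be sketched as follows; the sample values in mV are hypothetical:

```python
import statistics

def temperature_signal_stats(samples_mv):
    """Mean value TS and standard deviation STD-TS of a digitized record [mV]."""
    mean_ts = statistics.fmean(samples_mv)
    std_ts = statistics.pstdev(samples_mv)   # population standard deviation
    return mean_ts, std_ts

# Hypothetical record captured by the 100 MHz oscilloscope (values in mV)
record = [152, 160, 171, 148, 166, 158, 173, 149, 161, 155]
mean_ts, std_ts = temperature_signal_stats(record)
print(f"TS = {mean_ts:.1f} mV, STD-TS = {std_ts:.1f} mV")
```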
Fig. 2 shows bar charts of the mean values of temperature signals TS in mV (black) and
the magnitudes of the standard deviations of temperature signals STD-TS in mV (white) after
cutting austenitic stainless steel of various thicknesses.
A comparison of the data provided by the temperature signal, or temperature in the cutting
front, in cutting the investigated materials at various cutting speeds shows that:
- The mean values of temperature signals in cutting low-carbon structural steel range
from 155 mV up to 175 mV, with a standard deviation of about 14 mV.
- The mean values of temperature signals in cutting austenitic stainless steel of various
thicknesses range from 145 mV up to 185 mV, with a standard deviation of about
40-45 mV. The sole exception is the temperature signal in cutting thicknesses of 1.0 mm
and 1.5 mm at the highest speeds, where the mean value of the temperature signal
decreases even to 110 mV and 75 mV respectively.
- The low mean values of temperature signals measured in cutting austenitic stainless
steel of thicknesses 1.0 mm and 1.5 mm at the highest cutting speed can be
considered a warning of trouble in cutting caused by too low an energy input.
[Bar chart: temperature signal TS (black) and its standard deviation STD (white), 0-500 mV, steel Cr/Ni 18/10, plotted over cutting speed v = 20-50 mm/s]
Fig. 2: Bar charts of the temperature signal measured for various steels, various
material thicknesses, and various cutting speeds
A consequence of the low energy input is a low quality of the cut. At a specific laser source
power, critical cutting speeds for individual kinds of materials and different thicknesses can
be defined. The critical cutting speed is the highest cutting speed which, with regard to the
value of the temperature signal, still ensures a good quality of the cut. Cutting speeds
lower than the critical one produce too high an energy input in the cutting front, which
results in an increased cut width and in the formation of a deeper heat-affected zone.
Fig. 3 shows the coefficient of variation of the temperature signal from the cutting front in
cutting low-carbon steel with a thickness of 2 mm and austenitic stainless steel of
various thicknesses, i.e. 0.6, 0.8, 1.0, and 1.5 mm. The coefficient of variation CV of the
temperature signal provides more information than the standard deviation of the
temperature signal; it is therefore recommended for the description of signal variation, given by
CV = (STD / TS) · 100 [%]
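In code the computation is a one-liner; the example pair of values is hypothetical, chosen within the ranges quoted above for low-carbon steel:

```python
def coefficient_of_variation(mean_ts_mv, std_ts_mv):
    """Coefficient of variation of the temperature signal: CV = STD / TS * 100 [%]."""
    return std_ts_mv / mean_ts_mv * 100.0

# Hypothetical low-carbon steel values: TS = 165 mV, STD-TS = 14 mV
cv = coefficient_of_variation(165.0, 14.0)
print(f"CV = {cv:.1f} %")
```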
speeds for smaller material thicknesses since the mean values of the temperature signal at
higher cutting speeds are even higher than those at lower cutting speeds, which is the result
of reduction in cut width. Thus a critical cutting speed of 45 mm/s could be chosen for a
material thickness of 0.8 mm; for the smallest material thickness, however, additional tests
should be carried out at higher cutting speeds in order to determine its critical value.
The procedure for determining the critical cutting speed is as follows:
- A specified cutting speed is selected, the temperature is measured by means of the IR
radiation density in the cutting front, and its mean value is determined.
- The procedure is repeated with continuous or stepwise changes of the cutting speed.
- When the mean value of the temperature signal decreases rapidly, the critical cutting speed
has been reached; in accordance with our criteria, it represents the optimum cutting speed.
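The scan above can be sketched as a simple search for the speed at which the mean temperature signal drops sharply; the 20 % drop threshold and the data are hypothetical choices for illustration:

```python
def critical_cutting_speed(speeds, mean_ts, drop_fraction=0.2):
    """Return the last speed before the mean TS falls by more than
    drop_fraction relative to the previous step; speeds must be ascending."""
    for prev, cur, v_prev in zip(mean_ts, mean_ts[1:], speeds):
        if cur < (1.0 - drop_fraction) * prev:
            return v_prev              # rapid decrease: previous speed is critical
    return speeds[-1]                  # no rapid decrease observed in the scan

# Hypothetical scan for a thin sheet: TS collapses beyond 45 mm/s
speeds = [35, 40, 45, 50]              # mm/s
mean_ts = [150, 148, 145, 110]         # mV
print(critical_cutting_speed(speeds, mean_ts))
```

In an on-line implementation the same test would run after each speed step, so the scan can stop as soon as the drop is detected.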
The proposed procedure allows the determination of the critical cutting speed during the
cutting process itself, which can be utilized for controlling the process. The procedure is
both practicable and simple. By changing the laser source power and/or the optical and kinematic
conditions it is possible to determine the optimum laser cutting conditions. The same
optimization procedure of the laser cutting process may also be used for other related
materials.
signal measurement via IR radiation density. The described critical cutting speeds are the
optimal cutting speeds for the analysed laser cutting process and austenitic stainless steel.
REFERENCES
1. Rosenthal, D.: Mathematical Theory of Heat Distribution during Welding and Cutting,
R. Cebalo
University of Zagreb, Zagreb, Croatia
A. Stoic
University of Osijek, Osijek, Croatia
KEY WORDS : Laser beam cutting parameters, cut quality, cutting costs
ABSTRACT: It is possible to reduce laser beam cutting costs by correct determination of the cutting
parameters. Production costs have many sources, and maximum reduction is achievable by a multifactor
analysis of the influencing factors. In this paper, the influence of cutting speed and cutting gas pressure is
analyzed. Each of these parameters strongly influences the obtained cut quality. Surface roughness, as an
indicator of cutting ability, is usually a functional value dependent on the cutting parameters. For its
optimization it is necessary to determine the optimal cutting parameters. A further reduction of production costs
is possible by optimization of material utilization and of the laser cutting head path. A reduction of cutting time
(number of machine hours) is obtainable by selecting a good cutting order of the cutouts and by reducing the
number of free movements.
1. INTRODUCTION
The basic approach of modern production is to manage material and time rationally. The development
and application of CNC machine tools demand the definition of optimal parameters and, for that
definition, the interdependence of the parameters in the form of a mathematical model.
Conventional determination of technological parameters on the basis of the technologist's experience and
the machine producer's suggestions is not recommended. That is the reason for the application of
optimization methods. The first step in the optimization process is the definition of a mathematical model,
which is very complex because of the effect of many influencing factors on the cutting process. A more accurate
mathematical model includes linear and nonlinear parts, defined by a function of second order. In
practice, these models are sufficient for the description of technological processes, if the possibility of
description exists at all [1].
Mathematical models are given by a goal function and are the basis of the optimization process
(determination of optimal parameters), in which we determine the extreme value of the goal
function. Analyses of the influencing factors on cut edge quality are well represented in the literature, but
mathematical models given by a goal function are rather infrequent.
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
[Diagram: inputs v, h, p enter the model of the cutting process; outputs Ra = Ramin = f(v, h, p)opt and Ra ≠ Ramin]
Fig 1. Scheme of input/output parameters
2.2. TEST PLAN
A large number of cut-ability tests use a method of test planning. According to previous testing,
the mathematical model for roughness determination is given by a power function of the parameters:

Ra = C · ∏ (i=1..k) pi^βi    (1)
For varying the chosen parameters (independent variables) v, h, p on five levels it is necessary to
perform N = 2^3 + 6 + 6 = 20 tests (factorial, axial and centre points).
According to the central composite test plan it is possible to obtain a regression equation defined by a
polynomial of second order:

Y = b0 + Σ (i=1..k) bi·Xi + Σ (i<j) bij·Xi·Xj + Σ (i=1..k) bii·Xi²    (2)
what can be expressed as follows :
(3)
where:

Xi = (Pi − Pim) / ((Pimax − Pimin) / 2)    (4)

in which Pim is the mid-value of the interval, X1 is the coded value of the cutting speed (1.5 m/min < v < 3 m/min), X2 is the coded value of the cutting gas
pressure (0.8 bar < p < 2 bar) and X3 is the coded value of the measuring location (0.8 mm < h < 2.2 mm).
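The coding of eq. (4) and its inverse can be written directly; a minimal sketch (the function names are ours):

```python
def code(p, p_min, p_max):
    """Coded value of eq. (4): x = (p - p_mid) / ((p_max - p_min) / 2)."""
    p_mid = (p_min + p_max) / 2.0
    return (p - p_mid) / ((p_max - p_min) / 2.0)

def decode(x, p_min, p_max):
    """Inverse transformation: coded value back to the physical parameter."""
    return (p_min + p_max) / 2.0 + x * (p_max - p_min) / 2.0

# Cutting speed v in [1.5, 3.0] m/min  ->  X1 = (v - 2.25) / 0.75
print(code(2.25, 1.5, 3.0))   # centre of the interval -> 0.0
print(code(3.0, 1.5, 3.0))    # upper factorial level  -> 1.0
```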
Table 1. Regression coefficients, standard errors, t-values and significance levels

Variable   Regression coefficient   Standard error   t-value    Significance level
b0              4.1664                 0.2918        14.2776        0.0000
X1             -0.9514                 0.1936        -4.9142        0.0003
X2              0.3755                 0.1936         1.9398        0.0671
X3              1.6896                 0.1936         8.7273        0.0000
X1X2           -0.6301                 0.2529        -2.4909        0.0243
X2X3           -0.2644                 0.2529        -1.0451        0.2972
X1^2            0.4931                 0.1884         2.6165        0.0192
X2^2            0.9351                 0.1884         4.9621        0.0003
X3^2            0.2488                 0.1884         1.3204        0.1942
The regression equation is:

Ra = 4.166441 - 0.95141X1 + 0.37555X2 + 1.689623X3 - 0.630125X1X2 - 0.26438X2X3 + 0.49306X1² + 0.93507X2² + 0.2488X3²    (5)
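Equation (5) is easy to evaluate and differentiate in code; as a quick consistency check, the partial derivatives with respect to X1 and X2 nearly vanish at the optimum reported later in the text (X1 = 1.1477, X2 = 0.28626, at fixed X3 = 0.7):

```python
def ra(x1, x2, x3):
    """Surface roughness Ra [um] from the regression equation (5)."""
    return (4.166441 - 0.95141 * x1 + 0.37555 * x2 + 1.689623 * x3
            - 0.630125 * x1 * x2 - 0.26438 * x2 * x3
            + 0.49306 * x1 ** 2 + 0.93507 * x2 ** 2 + 0.2488 * x3 ** 2)

def grad(x1, x2, x3):
    """Partial derivatives of Ra with respect to X1 and X2."""
    d1 = -0.95141 - 0.630125 * x2 + 2 * 0.49306 * x1
    d2 = 0.37555 - 0.630125 * x1 - 0.26438 * x3 + 2 * 0.93507 * x2
    return d1, d2

d1, d2 = grad(1.1477, 0.28626, 0.7)
print(f"dRa/dX1 = {d1:.5f}, dRa/dX2 = {d2:.5f}")   # both close to zero
```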
[Scatter plot comparing measured Ra (µm) with values calculated from the regression model; r² = 0.93355]
Fig. 2 Comparison of theoretical (calculated from the regression model) and measured
surface roughness results
The increase of surface roughness in dependence on the measuring location is shown in fig. 3.
[Plot: Ra versus measuring location h (mm); curves for coded values of X3 (0.7 and ±1.682), with the DIN 2310 reference indicated]
Fig. 3. Increase of surface roughness
[Response-surface plot of surface roughness Ra (µm) over the coded variables X1 = (v − 2.25)/0.75 and X2 = (p − 1.4)/0.6]
The optimum (minimum) surface roughness follows from the stationarity conditions:

Ra = Ramin if ∂Ra/∂Xi = 0    (6)

or:

∂Ra/∂X1 = 0, ∂Ra/∂X2 = 0 and X3 = 0.7    (7)

which, applied to the regression equation (5), gives the linear system:

-0.9514 + 0.98612X1 - 0.630125X2 = 0    (8)

0.18785 - 0.630125X1 + 1.87014X2 = 0    (9)

From the equation system (8) and (9) follow the optimum values of the parameters and the surface
roughness:
- coded parameters X1o = 1.1477 and X2o = 0.28626
- physical values vopt = 3.11 m/min and popt = 1.57 bar
- surface roughness Ra = Ramin = 5.0089 µm.
utilization. Laser cutting hour costs are relatively high in comparison with flame cutting and
similar technologies. That is the reason for quantifying the cutting hour costs and the possibilities of
reducing them. Cutting costs are charged to machine hour costs. The sources of machine hour costs due to cutting
head movement are: free movement, contour working movement, and entry into contour working.
A reduction of free movement is achievable by changing the cutting order of the cutouts and by
reducing the number of free movements. For each layout of cutouts it is possible to determine several cutting
orders. The possible number of different cutting orders, i.e. of permutations, depends on the number of cutouts:

V = N!    (5)

where N is the number of cutouts.
For determining all possible paths and calculating the machine hours (for many cutouts) a lot
of time is required. In this example the maximum number of variants is limited to 500 random ones.
The calculated machine hours are shown in a histogram, from which the similarity of the machine-hours frequency curve
to a Gaussian curve is noticed. The mean value of the calculated machine hours of all
500 variants is t̄ and the shortest value is t_sh (fig. 4).
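The random-sampling strategy described above can be sketched as follows; the cutout coordinates and the straight-line traverse-time model are hypothetical stand-ins for the real layout data:

```python
import math
import random

def path_time(order, points, speed=1.0):
    """Free-movement time for visiting the cutouts in the given order,
    starting from the origin (straight-line traverses at constant speed)."""
    pos, total = (0.0, 0.0), 0.0
    for i in order:
        total += math.dist(pos, points[i]) / speed
        pos = points[i]
    return total

def sample_orders(points, n_samples=500, seed=1):
    """Evaluate n_samples random cutting orders; return (mean, shortest) time."""
    rng = random.Random(seed)
    idx = list(range(len(points)))
    times = []
    for _ in range(n_samples):
        rng.shuffle(idx)
        times.append(path_time(idx, points))
    return sum(times) / len(times), min(times)

# Hypothetical layout of 8 cutouts (8! = 40320 possible orders, 500 sampled)
pts = [(1, 1), (4, 1), (2, 3), (5, 4), (0, 5), (3, 6), (6, 2), (2, 5)]
mean_t, best_t = sample_orders(pts)
print(f"mean machine time = {mean_t:.2f}, shortest sampled = {best_t:.2f}")
```

The histogram of the sampled times corresponds to the frequency curve of fig. 4; the shortest sampled variant plays the role of t_sh.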
[Fig. 4: Histogram of machine-hour frequencies for the 500 random cutting orders, with a fitted Gaussian density φ(z)]

z = (t_i − t̄) / σ    (10)
[Plot: total costs and material costs (cost units 1.00-3.00) versus the varied quantity (values 12-20)]
4. CONCLUSIONS
Using the test plan method and data processing according to multiple regression analysis, it is possible
to determine the functional dependence between surface roughness and the cutting parameters (cutting
speed and cutting gas pressure). By differentiating the goal function it is possible to find the cutting
parameters which result in the minimum surface roughness value. After simulating the disposition
and calculating the cutting head path for the cutouts it is possible to determine the minimum cutting costs.
Optimal costs are the combination of cost portions between the cost for maximum material utilization and
the cost for the minimum cutting head path.
LITERATURE
[1]
[2]
[3]
[4]
The process which was investigated can be named LOW-VOLTAGE EDM (LV-EDM) and
is characterised by a peculiar mechanism of pulse formation and by control of the transient
nature of their development [1,2].
The following points are significant in this paper:
The stationary electrical arc fed by a direct-current circuit can be converted into spontaneously interrupted pulses, which are generated under specific conditions of relative movement of the electrodes or of cross-sectional flow of the technological fluid, involving the arc in a migrating
slipping along the electrode surfaces. Pulse generation depends on a number of input data,
such as the velocity of arc slipping, the open voltage and the gap size, and is of a random nature, being a
mixture of arc discharges and short circuits. The phenomenon which is responsible for the
interruption of the current circuit always occurs at the anode solid metal-molten metal boundary and
is determined by the specific current density across the discharge channel. The generated pulses
[Fig. 1: Oscillogram of the generated pulses, indicating the short-circuit current Is.c., the arc current Iarc, the open-circuit level Io, the pulse durations, and zones 1 and 2]
Short circuits take place when the gap is bridged by molten metal moving in
the electrical field from anode to cathode.
The bridge of molten metal exists for a very short time and is then replaced by intervals or
arc discharges. The discontinued current circuit can be restored again by a bridge or by an arc
discharge, which occurs with a certain ignition delay.
A bridge, as a rule, ceases the arc burning. A bridge or a discharge, in turn, interrupts the interval.
The pulsation is distinguished by the duration of each element and by the ratio of their numbers.
Fig. 2 shows these qualitative relationships, derived from the oscillogram in Fig. 1.
The duration of the pulse elements depends on the open voltage of the current circuit (Ui) and on the gap
size (Sn). When Ui reaches some critical value, the pulsation completely disappears.
With the reduction of Ui below 6 V the arc cannot appear, and the process is characterised
only by the existence of bridges.
To assess the technological peculiarities of LV-EDM, the ratio Rn = Narc/Ns.c. must be analysed. Here Narc and Ns.c. are the numbers of arc discharges and short circuits per
time unit. The ratio Rn is subject to the random situation in the gap, but its average value
depends on Ui and Sn (frontal gap).
The curve Rn characterises the change of these relations. With the change of Ui, Rn is
subject to regeneration. In zone (1), which is related to low Ui, the bridge pulses
prevail and Rn < 1, whereas in zone (2) Rn > 1. Bridge and arc pulses differ in their
contribution to metal removal and electrode wear; therefore a ratio Rn can exist which
provides a reasonable removal rate and controlled wear.
Control of the output data can also be provided by the variation of Sn at a fixed U.
2.2. PHYSICAL EVOLUTION OF THE GENERATED PULSES
To clarify the nature of the circuit interruption, the following investigation was carried out. An
experimental device was constructed which provided the installation of two metal electrodes, one
being solid iron [Fe], the second of molten lead [Pb]. The bull-end size of the electrode and the
temperature of the molten metal were fixed. This model was meant to imitate the interface
boundary between the solid and the molten metal of the electrodes.
It was found that the circuit can be spontaneously interrupted when the current density in the
boundary zone reaches a certain critical value. Thereafter, current pulsations inevitably
occur, similar to those observed above [Fig. 1].
In fact, during the pulse heating, boiling at the anode boundary zone was observed. The
gas bubbles, being charged, rush to the opposite electrode, thus setting the molten metal
in motion. Therefore a force of electrical nature exists which can enter into interaction with the
mechanical links stipulated by the boundary interphase absorption.
The reciprocal action of the two electrical fields can infringe their equilibrium, and therefore destruction of the boundary links will occur. As was found, the separation of molten and solid
metal can be observed when the current density reaches a certain value. A number of experiments
have shown that it can be greatly increased whenever the interphase mechanical links are reinforced.
(2)
which shows that the arc diameter and, therefore, the current density can be controlled when the velocity [Vn] is changed.
Discharges, along with the metal bridges, can be set into migration by a magnetic
field, by a powerful liquid stream across the gap [1], or by the relative motion of the electrodes.
Concentration of energy in the discharge channel involves changes of the pulse duration,
a growing pulse frequency, and the process of pulse generation becomes more stable. Any controlled factor which provides energy concentration can change the ratio:

Rn = Narc / Ns.c.    (3)
4. CONCLUSIONS
- A low-voltage process of pulse generation exists, which is determined by a
peculiar physical phenomenon occurring at the solid-molten metal interphase boundary.
- Spontaneous interruption of the current circuit appears when the specific current density
across the discharge channel acquires a certain critical value.
- The existence of the self-excited current pulsation is stipulated by the applied open voltage,
which also determines the pulse duration, their physical peculiarities and frequency.
- A spectrum of pulse generation with different abilities of wear compensation and
removal-rate efficiency can be created by programmed and cycled changes of the open
voltage or the gap size.
- The multi-channel nature of the process can provide the realisation of high power and enables
currents of thousands of Amperes to be incorporated into the gap.
- The scope of the LV-process for industrial application is illustrated by the described
technologies.
5. REFERENCES
1. G. Mescheriakov, N. Mescheriakov, V. Nosulenko: Physical and Technological Control of Arc Dimensional Machining, CIRP Annals, vol. 37, 1, 1988.
2. G. Mescheriakov: Electro-physical Processes in the Electro-Pulse Metal Cutting, CIRP
Annals, vol. XVIII, 1970.
1. INTRODUCTION
LAM (acronym of Laser Assisted Machining) denotes a turning operation in which
the material is heated, at least to a temperature of 500 K (as stated in [1]), in order to
lower the mechanical characteristics of the material and make the machining easier. Up to
now all the studies performed (e.g. [1], [2], [3] and [4]), and also experiments carried out
at CNR-ITIA in Milano, agree on the possibility of decreasing the cutting force by up to
30%.
The mentioned studies concentrate only on analyses of an experimental kind, evidencing the
lack of an analytic tool for the optimization of the whole set of parameters involved. This work
proposes a numerical model, realized by the finite volume method, to investigate the temperature
distribution inside a sample machined with laser assistance. The developed model is a useful
tool for identifying almost all working parameters (cutting speed, depth of cut, feed, laser
specific power, laser spot dimensions, etc.); the targets during development were to create an
easy-to-use model that allows an agile adaptation of the grid to the specific geometry and,
finally, that allows the distribution to be obtained with reasonably small calculation times (about
30 min on a DEC 3000 AXP model 400 workstation).
2. EXPERIMENTAL CONDITIONS
The experimental conditions we referred to during the model definition follow. A laser assisted
machining operation on a cylindrical sample of Inconel 718, 74.16 mm in diameter, is
considered. The physical characteristics of the nickel-based alloy used are: specific heat
equal to 501.3 J/(kg K), density equal to 8220 kg/m3 and thermal conductivity depending on
temperature in accordance with the equation

k = 11.068 + 1.5964E-02 T

obtained by interpolation of the experimental behaviour.
The laser supplies a power of 384 W (absorbed by the material) and is focused on an
elliptical spot with axes 1.7 mm and 2.0 mm long. A tungsten carbide tool is used, for which
the builder furnished a constant thermal conductivity of 50 W/(m K); the rake angle is equal
to 6°. The feed is equal to 0.25 mm/rev and the tool-to-spot distance is 5 mm.
This study considers 3 values of depth of cut (1, 2 and 3 mm) with speed values of 15, 25,
35 and 45 m/min.
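The temperature-dependent conductivity used for the workpiece can be coded directly from the interpolation above; the probe temperatures in the loop are arbitrary illustration points:

```python
def k_inconel718(temp):
    """Thermal conductivity [W/(m K)] from the linear fit k = 11.068 + 1.5964e-2 * T."""
    return 11.068 + 1.5964e-2 * temp

# Conductivity at a few illustrative temperatures
for t in (20.0, 500.0, 1000.0):
    print(f"T = {t:6.1f}  ->  k = {k_inconel718(t):6.2f} W/(m K)")
```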
3. THE MODEL
The numerical model is realized with the software PHOENICS 2.1 by CHAM Ltd., and
considers only a part of the whole cylinder: this portion has to be large enough to
ensure the correctness of the solution, but not too large, in order to avoid a very long
calculation time. The domain obtained with the above rules for a depth of cut equal to 2 mm
is 2 mm long in the axial direction, 2.5 mm long in the radial direction and embraces an angle of 1
rad (see fig. 1); the mesh is made of 32x40x70 cells along the just mentioned directions.
Three heat sources are considered: the laser beam, one placed in the chip formation zone
and one due to the tool-chip friction placed at their interface. The laser is represented as a
square source (side equal to 1.6 mm) with release of a constant power on the whole surface.
The deformation zone, or primary zone, is characterized by a shear angle φ of 31.7° (see
e.g. [5]) and is 0.183 mm in width (see [6]); the integral value of the heat released per
unit time is given by the product Fs x Vs of the cutting force component along the shear
plane times the velocity component along the same direction.
The friction, or secondary, zone dimensions are identified by the depth of cut and the length of
contact, set to 0.5 mm. In the secondary zone the energy dissipation takes place both at
the interface between tool and chip because of friction, and into a volume of finite depth
due to deformation.
[Sketch of the computational domain, with the laser (photon) direction indicated]
fig. 1
As the depth of the secondary zone is very small (about 0.02 mm), the heat released in
depth is neglected, so that only a source on the surface is considered; the power supplied is given by
the product F x V, where F and V are respectively the cutting force and the velocity component
along the tool face. In the model, the heat generated in the secondary zone is introduced as a
constant source on the whole surface considered, while for the primary zone the force is
related to temperature, following the formula (given e.g. in [7])
where A is the surface of contact between tool and uncut chip and ks is the shear flow
stress, considered a function of temperature as shown in the graph in fig. 2
(experimental and interpolated curves).
In our model the tool is not directly represented, but its presence is accounted for by
subdividing the heat generated in the secondary zone into two parts, proportionally to
the material and tool thermal conductivities; this implies the assumption that the tool
dimensions are great enough, with respect to the chip, that the heat fluxes are not
obstructed.
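The conductivity-proportional split of the secondary-zone heat can be sketched as follows; the total power of 100 W is a placeholder, while the conductivities are the values quoted above:

```python
def partition_heat(q_total, k_chip, k_tool):
    """Split a heat source between chip and tool proportionally to the
    thermal conductivities, as done for the secondary zone in the model."""
    q_chip = q_total * k_chip / (k_chip + k_tool)
    return q_chip, q_total - q_chip

k_tool = 50.0                           # W/(m K), tungsten carbide tool
k_chip = 11.068 + 1.5964e-2 * 500.0     # workpiece fit evaluated at T = 500
q_chip, q_tool = partition_heat(100.0, k_chip, k_tool)
print(f"chip: {q_chip:.1f} W, tool: {q_tool:.1f} W")
```

Because the tool conductivity is considerably larger than the workpiece's, most of the secondary-zone heat is assigned to the tool side under this rule.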
[Plot: shear flow stress ks (N/mm²), 0-1200, versus temperature T (°C), 0-1200; experimental and interpolated curves]
fig. 2
4. THE VALIDATION
The validation of the model was realized by comparison between the calculated distribution
of temperature and the one obtained by experiment. The experiments were performed with a
cylinder of titanium machined so that it was identical in shape to the cylinder worked during the
LAM sessions. The surface created by the tool is heated by a laser beam (with the same
characteristics used for LAM) and the temperature reached in the centre of the spot is read
by an optical pyrometer available at ITIA. The model was changed in accordance with the
new conditions; it is to be noted that the modifications involve only some constant
parameters used for the description of the material. The pyrometer reads the temperature
inside a square spot (0.2 mm in size) and can perform up to 5000 acquisitions per second. The
measurement of the temperature is based on the emission of radiation by the hot surface: the radiation
emitted is focused on a diffraction grid and two images are created on two diodes,
corresponding to two different wavelengths (950 and 650 nm). These give as output an
electrical signal linearly proportional to the incident radiation (i.e. to the temperature). The
first diode is utilized to acquire a low temperature range (1000-2000 °C), the second a
high range. In order to be sure to measure a temperature higher than the threshold of
1000 °C, the pyrometer acquisition spot coincides with the center of the laser spot.
The comparison between the temperature read by the pyrometer and that calculated shows the
very good precision of the model (maximum error within 5% of the actual
value).
The validation of a model realized for Inconel 718 by experiments on titanium is correct since,
as already mentioned, the only differences between the two models are some
constant parameters, which do not modify the shape of the temperature distribution (only the local
values).
5. ANALYSIS OF CALCULATION RESULTS
The obtained results are shown in the following pictures as graphs of temperature on the cutting
edge.
Figures 3, 4 and 5 show the distribution of temperature for depths of cut equal to 1 mm, 2
mm and 3 mm respectively and for the velocity values indicated in each picture. These
graphs show that:
- the depth of cut must not be too high with respect to the spot dimensions, because the laser
effects are negligible in the external zones of the tool-created shoulder if the above
condition is not satisfied;
- the increase of velocity lowers the maximum value of the temperature and yields an
increasingly uniform distribution on the cutting edge;
- the maximum temperature is reached in the middle of the cutting edge, except for the depth of
1 mm; for this value an effect of heat accumulation takes place, and so the maximum
shifts toward the upper edge of the tool-created surface.
[Fig. 3: temperature distribution along the cutting edge for cutting speeds of 15, 25, 35 and 45 m/min (depth of cut 1 mm)]
[Fig. 4: temperature distribution along the cutting edge (depth of cut 2 mm)]
[Fig. 5: temperature distribution along the cutting edge (depth of cut 3 mm)]
These distributions are obtained under the hypothesis that the laser beam hits the shoulder
exactly in the middle. This condition cannot be rigorously respected during machining,
because of vibrations of the lathe or of the final focusing lens. For this reason two more
situations were investigated, by shifting the laser spot slightly upward or downward. If the
beam dimensions and those of the surface created by the tool are very different, these
shifts do not change the temperature distribution significantly, apart from a shift of the
curves. If the dimensions are nearly the same (depth of cut equal to 2 mm for the conditions
tested here), the spot shift causes great modifications in the temperature distribution (see fig. 6).
[Fig. 6: temperature distribution along the cutting edge with the laser spot centred on the upper edge, the middle and the lower edge of the shoulder]
The results were also analyzed to study the heat flux inside the chip, in order to identify the
machining conditions that minimize tool wear (which depends exponentially on temperature,
as stated in [8]). For this purpose we ran some simulations which consider only the laser
as heat source. Fig. 7 shows the distribution of temperature on the cutting edge and on a
parallel line placed immediately inside the chip (at the smallest distance allowed by the mesh).
The curves in the graph show that an increase of speed decreases the heat flux directed from
chip to tool. This condition is positive because it allows the tool to machine in the same
conditions it would experience when working a material of low mechanical characteristics
without laser assistance.
[Fig. 7: temperature distribution on the cutting edge and on a parallel line immediately inside the chip]
6. CONCLUSIONS
From the discussion in the previous section, it is possible to conclude that:
- it is necessary to machine with high laser power (at the limit of melting in the hottest
point of the laser spot), so as to lower the material resistance as much as possible;
- it is necessary to machine at the maximum speed (at least in the range considered), so as
to obtain a distribution as uniform as possible, and also to reduce the laser heat flux from
chip to tool;
- the laser spot must be focused in the middle of the shoulder created by the tool, and their
dimensions have to be almost the same;
- the feed and the spot-tool distance have to be properly chosen, in order to let the heat
supplied by the laser spread sufficiently inside the material (in this sense the two
parameters mentioned are functions of the cutting speed, too).
REFERENCES
1. I.Y. Smurov, L.V. Okorokov: "Laser assisted machining"
2. M. Albert: "Laser on a lathe", Modern Machine Shop, May 1983, pp. 50-58
3. S. Copley, M. Bass, B. Jau, R. Wallace: "Shaping materials with lasers", in Laser
Materials Processing, North-Holland Publishing Company, 1983
4. F. Cantore, S.L. Gobbi, M. Modena, G. Savant Aira: "L'assistente laser", Rivista di
Meccanica, 1993, pp. 92-99
5. M.E. Merchant: "Mechanics of the metal cutting process. II. Plasticity conditions in
orthogonal cutting", Journal of Applied Physics, Vol. 16, 1945, pp. 318-324
6. P.L.B. Oxley: "Mechanics of machining - An analytical approach to assess
machinability", John Wiley and Sons, 1989
7. G.F. Micheletti: "Tecnologia meccanica - Vol. I - Il taglio dei metalli", Unione
Tipografico-Editrice Torinese, 1975
8. N.H. Cook, P.N. Nayak: "The thermal mechanics of tool wear", Transactions of the
ASME - Journal of Engineering for Industry, 1966, pp. 93-100
1. INTRODUCTION
Small hole drilling is a rather difficult process in most technological applications [1,2,3,4].
Producing deep small holes (d < 1 mm, h/d > 10) with conventional machining processes is
often hindered by the removal of metal and other particles, by tool cooling and especially
by the stiffness of the tool [5]. Some of the non-conventional machining processes, like
laser beam machining (LBM) or electron beam machining (EBM), enable the production of
deep and small holes, but the problem of cost and insufficient accuracy remains. The
electro-discharge machining (EDM) process is convenient for machining electrically
conductive materials, and has been used to produce irregular shapes in the workpiece since
the beginning of EDM technology. The present technological difficulties in small hole
drilling with EDM are:
- working accuracy
- electrode material
- clamping and positioning the electrode
- attaining an acceptable dielectric pressure in deep small holes
- the selection of the dielectric
- the selection of the working conditions

Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
2. SMALL HOLE DRILLING WITH EDM
The key technological problem in producing deep small holes with EDM is to attain
efficient working conditions in the working gap by flushing out the produced particles
[2,3]. For this purpose we developed a special chamber-device (Figure 1) which enables the
EDM process to act locally, without submerging the workpiece in the dielectric, and
achieves a higher dielectric pressure in the gap.

Figure 1: Dielectric flow in the chamber-device (electrode and workpiece shown schematically)

The technological database for the existing EDM sinking machine is based on standard
experiments (electrode diameter d = 20 mm). The optimal working conditions for small
hole drilling (d < 1 mm) differ essentially from the standard conditions. We chose 5
different working conditions to define proper working parameters on the tested Ingersoll
SOP sinking machine (Tab. 1).
[Figure 2: recorded pulses (up to about 200 A) versus time t [µs], 0-300 µs]
[Figure 3: The removal rate Vw, the wear Ve and the relative wear Ve/Vw (logarithmic scale) versus working regimes, for standard EDM sinking and for small hole drilling]
The removal rate alone is not adequate for estimating the machining performance. The
small hole drilling process is better characterized by vp [mm/min], the electrode
penetration speed (Figure 4). We achieved the best machining parameters with regime 4.
These results were applied in the next step of the research: the recognition of the small
hole drilling process.
[Figure 4: electrode penetration speed vp (0-4 mm/min) versus working regime, with and without the chamber-device; Vw determined by weight measurement]
These results enable us to define the discrepancy between the standard EDM process and
the deep small hole drilling process with EDM (Figure 5).
[Figure 5: pulse probability (0-1) versus pulses in progression]
At this point the research was focused on the study of voltage pulses in progression only.
The process classification was based on the computed pulse area, which is a significant
process attribute, since the small hole drilling process is completely different from the
standard process. We therefore described both processes with examples (8 pulses in
progression) consisting of the probabilities of certain pulse areas (-1.6 to 18 mVs). The
examples representing the tested working regimes were grouped into three classes:
INEFFECTIVE, LESS EFFECTIVE and EFFECTIVE process. We evaluated the most
effective working conditions (regime 4) in the case of small hole drilling. The decision to
use the probability of a defined pulse area as a process attribute derived from the pulse
analysis. For example: a pulse with an area of 17 mVs is a typical open voltage, a pulse
with an area of 8 mVs is an effective discharge with a long discharge time (60% of te), a
pulse with an area of 1 mVs is an arc, and a pulse with an area of 0 mVs is a short circuit.
The samples of probability trends enable the weighting of the process (Figure 6). It is
obvious that the small hole drilling process is completely different from the standard ED
sinking process: the portion of effective pulses is about half of the portion attained in
standard EDM sinking. The process optimization parameters for the small hole drilling
process need to be prescribed separately.
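The pulse-area classification described above can be sketched as follows. The typical areas (17 mVs open voltage, 8 mVs effective discharge, 1 mVs arc, 0 mVs short circuit) are taken from the text; the class boundaries, placed halfway between those typical values, are an assumption for illustration, as is the trapezoidal integration of the sampled voltage.

```python
def pulse_area(u, dt):
    """Integrate a sampled voltage pulse u [mV] with time step dt [s]
    by the trapezoidal rule; returns the pulse area in mVs."""
    return sum((a + b) * 0.5 * dt for a, b in zip(u, u[1:]))

def classify(area_mvs):
    """Map a pulse area [mVs] to the pulse types named in the text.
    The thresholds (midpoints between the cited typical areas)
    are a hypothetical choice, not taken from the paper."""
    if area_mvs > 12.5:
        return "open voltage"
    if area_mvs > 4.5:
        return "effective discharge"
    if area_mvs > 0.5:
        return "arc"
    return "short circuit"
```

With counts of each class over 8 pulses in progression, the relative frequency of "effective discharge" gives the portion of effective pulses used to rate a working regime.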
[Figure 6: probability of pulse area (-1.6 to 18 mVs) for 8 pulses in progression; four panels comparing standard EDM sinking with small hole drilling (d = 0.65 mm), where an EFFECTIVE process corresponds to Ao > 30%]
533
- at certain conditions it is possible to work out a hole with diameter d=0.65mm and depth
h=6.5mm in the time t<3.5min. This results enable comparison with other competitive
machining processes directly;
- the process recognition enables to define the optimization parameters. The accomplished
experiments contributed to the development of a Technological knowledge data base
(TKDB) and the decision system for the process control.
5. LITERATURE
1. Masuzawa, T.; Kuo, C.; Fujino, M.: Drilling of Deep Microholes using Additional
Capacity, Bull. Japan Soc. of Prec. Engg., Vol. 24, No. 4, Dec. 1990, 275-276
2. Masuzawa, T.; Tsukamoto, J.; Fujino, M.: Drilling of Deep Microholes by EDM, Annals
of the CIRP, Vol. 38/1/1989, 195-198
3. Toller, D.F.: Multi-Small Hole Drilling by EDM, ISEM 7, 1983, 146-155
4. Znidarsic, M.; Junkar, M.: Crater to Pulse Classification for EDM with the Relative
Electrode to the Workpiece Motion, EC'94, Poland, 1994
5. Roethel, F.; Junkar, M.; Znidarsic, M.: The Influence of Dielectric Fluids on EDM Process
Control, Proceedings of the 3rd Int. Machinery Monitoring & Diagnostic Conference, Dec. 9-12,
Las Vegas, USA, 1991, 20-24
6. Junkar, M.; Sluga, A.: Competitive Aspects in the Selection of Manufacturing Processes,
Proceedings of the 10th Int. Conference on Applied Informatics, 10-12 February, Innsbruck,
Austria, 1992, 229-230
7. Junkar, M.; Filipic, B.: Grinding Process Control Through Monitoring and Machine
Learning, 3rd Int. Conference "Factory 2000", University of York, UK, 27-29 July, 1992,
77-80
8. Junkar, M.; Kamel, I.: Modeling of the Surface Texture Generated by Electrical
Discharge Machining, Proceedings of the 12th IASTED International Conference on
Modeling, Identification and Control, Innsbruck, Austria, Acta Press, 141-142
9. Junkar, M.; Filipic, B.; Znidarsic, M.: An AI Approach to the Selection of Dielectric
in Electrical Discharge Machining, presented at the 3rd Int. Conf. on Advanced
Manufacturing Systems and Technology, AMST 93, Udine, 1993, 11-16
S. Trajkovski
University "Sv. Kiril i Metodij", Skopje, Macedonia
In hot machining, heat is applied to the workpiece material in order to reduce the shear
strength in the vicinity of the shear zone. Briefly, the technique of electric contact heating
consists of passing a relatively large current (AC or DC) between the cutting tool and the
workpiece, heat being generated in the workpiece material by the Joule effect.
Austenitic manganese steel is characterized by a high resistance to abrasive wear and a
considerable work-hardening effect, which causes a very poor machinability. By applying
additional heat in the cutting zone, the aim is primarily to reduce the work-hardening
effect and the shear strength of the machined material, in order to obtain a better
machinability.
2. EXPERIMENTAL CONDITIONS
2.1 Workpiece material
Cast specimens of manganese steel (heat treated) were used, with the following chemical
composition: 1.2% C, 11.7% Mn, 0.66% Cr, 0.96% Si, and the following mechanical
characteristics: 580 MPa tensile strength, 372 MPa yield strength, HB 200-220, 31%
relative elongation and 24% reduction of area.
2.2 Tool material
SINTAL throwaway cemented carbide tips were used: HV-08 (ISO-K10), HV-20 (ISO-K20),
SV-08 (ISO-P10), SV-20 (ISO-P20), and tips with a TiC+TiN coating: TNC-HPLUS and
TNC-SPLUS (on the base of ISO-K25 and ISO-P20), with tip geometry SNMA 120408 and,
for TNC-HPLUS, SPUN 120308.
When the carbide tips are clamped on the tool holder, the principal cutting angles are as
follows:
- for the SNMA tool tips: rake angle γ = 6°, inclination angle λ = -6°;
- for the SPUN tool tips: γ = 6°, λ = -6°; and
- for both tool tips: clearance angle α = 6°, principal cutting edge angle κ = 75° and
auxiliary cutting edge angle κ1 = 15°.
2.3 Experimental arrangement
Figure 1 shows the experimental arrangement for electric contact heating. A relatively
large D.C. current passes through the chip-tool interface during a turning operation, a
welding-type transformer-rectifier being connected across the tool and the workpiece. The
current was measured by a calibrated shunt connected to a mV meter (600 A / 60 mV).

Fig. 1 Circuit diagram for D.C. electric contact heating: 1 - workpiece, 2 - cutting tool,
3 - tool isolation, 4 - D.C. rectifier, 5 - graphite-copper brush, 6 - V meter, 7 - mV meter,
8 - calibrated shunt, 9 - switching device
3. TOOL-LIFE
The tool life tests were performed in turning operations using a new short-time method
developed by the author [4]. The results obtained show that the tool tips of grade P (SV-08
and SV-20) are more wear resistant, especially on the rake face, but they are less reliable:
after a short period of machining (3-5 min), failure of the cutting edge appears. Fig. 2
shows the wear process on the rake face and the flank face of the cutting tool.

Fig. 2 Tool wear development in machining austenitic manganese steel with electric
contact heating: a) for HV-20 SNMA; b) for TNC-HPLUS SNMA

The results obtained in the tool life tests are expressed as the ratio of the tool life in hot
machining (Th) to the tool life in conventional machining (Tc). Figure 3 shows the change
of the relative tool life (Th/Tc) with the heating current in machining with TNC-HPLUS
SNMA tool tips. From Fig. 3 the existence of two optimal currents (one at 80 A, the other
at 50 A) can be observed, at which the maximal relative tool life is obtained.
[Fig. 3: relative tool life Th/Tc versus heating current I (A), 0-100 A; curve shown for v = 20 m/min]
Fig. 3 Effect of heating current density on relative tool life for different cutting speeds
With the increase of the cutting speed above 60 m/min (for the given conditions), the
optimal current shifts toward smaller values.
Table 1 gives the cutting speeds calculated from the Taylor equations obtained from the
tool life tests with TNC-HPLUS SNMA tool tips for different currents I (A), and the vh/vc
ratio, where vc is the cutting speed in conventional machining and vh in hot machining.
Figure 4 presents the change of vh/vc with the current for different tool lives.
Table 1: Cutting speeds (v) and the relative increase of cutting speeds (vh/vc) for different tool lives

I (A) | Taylor's equation | v (m/min), T = 30 min | v (m/min), T = 60 min | vh/vc, T = 30 min | vh/vc, T = 60 min
0     | v·T^0.33 = 94     | 30.6 | 24.3 | 1    | 1
30    | v·T^0.36 = 115    | 33.8 | 26.3 | 1.1  | 1.08
60    | v·T^0.41 = 178    | 44.2 | 33.2 | 1.44 | 1.37
80    | v·T^0.29 = 126    | 47   | 38.4 | 1.54 | 1.58
100   | v·T^0.3 = 116     | 41.8 | 34   | 1.37 | 1.4
[Fig. 4: relative cutting speed vh/vc (1.0-1.6) versus heating current I (A), 0-100 A, for T = 30 and 60 min; TNC-HPLUS SNMA 120408]
[Fig. 5: surface finish versus heating current I (A) for HV-20 SNMA, TNC-HPLUS SNMA 120408 and TNC-HPLUS SPUN 120308, at cutting speeds of about 21.2 and 29.9 m/min; panels a) and b)]
[Fig. 6: surface finish versus feed rate (0.06-0.22) for currents I = 0, 60 and 80 A]
Fig. 6 Effect of feed rate on surface finish for different current densities
Figure 6 shows that the influence of the feed rate on the surface finish is much smaller in
hot machining than in conventional machining. This is a favorable effect of electric
current heating, which makes it possible, for a given Ra, to obtain a higher production rate.
5. PRODUCTION RATE
In production practice, the production rate is commonly determined as the number of
pieces (or operations) produced per unit time, called the "cyclic production rate":

Qc = 60 / To = 60 / (Tc + Tn)   (piece/h)   (1)

where:
To - operation (cycle) time (min)
Tc - cutting time (min)
Tn - auxiliary time (min)
For turning operations machining a cylindrical surface with diameter D (mm) and length
L (mm), the cutting time can be calculated from the equation:

Tc = π D L / (1000 v s)   (min)   (2)

Assuming the auxiliary time Tn = 0, the "technological production rate" (Qt) can be
expressed as:

Qc = Qt = 60 · 1000 v s / (π D L) = C s v   (3)

where:
v - cutting speed (m/min)
s - feed rate (mm/rev)
C - constant depending on the size of the machined surface.
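Equations (2) and (3) can be checked numerically; the workpiece dimensions and cutting data below are illustrative values, not taken from the experiments.

```python
import math

def cutting_time(D, L, v, s):
    """Turning cutting time Tc [min], eq. (2): Tc = pi*D*L / (1000*v*s),
    with D, L in mm, v in m/min and s in mm/rev."""
    return math.pi * D * L / (1000.0 * v * s)

def technological_rate(D, L, v, s):
    """Technological production rate Qt [pieces/h], eq. (3),
    i.e. 60 / Tc with auxiliary time Tn = 0."""
    return 60.0 / cutting_time(D, L, v, s)

# illustrative workpiece: D = 50 mm, L = 100 mm, v = 40 m/min, s = 0.2 mm/rev
Tc = cutting_time(50, 100, 40, 0.2)        # about 1.96 min per piece
Qt = technological_rate(50, 100, 40, 0.2)  # about 30.6 pieces/h
```

Since Qt is proportional to the product v·s, a hot-machining gain in either speed (for a given tool life) or feed (for a given Ra) translates directly into production rate.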
From equation (3) it can be seen that an increase of the technological production rate (Qt)
in hot machining can be obtained by increasing the cutting speed for a given tool life (T)
and the feed rate for a given surface finish quality (Ra).
The relation between the surface finish Ra and the cutting speed and feed rate can be
expressed, using the experimental results, with the following equations:

Ra = 166 vc^-0.386 sc   (4)

(5)

where qE0 is the specific current density (A/mm²).
Solving equations (5) and (4) for sh, one obtains:

sh = (Ra/1013) … sc … vh   (6)

and

(7)

According to equation (3) and equations (6) and (7), the relative production rate can be
written as:

(8)

For T = 30 min,

(9)
[Figure: relative production rate versus specific current density qE0 (A/mm²), 200-1000 A/mm², for T = 30 and 60 min and Ra = 3 and 6 µm]
The higher the cutting speed, the less additional heat is needed to obtain the optimal
cutting temperature for the maximal relative tool life, relative cutting speed and relative
production rate in hot machining.
REFERENCES
1. Trajkovski, S.: Investigation of machinability of high manganese steels at elevated
temperatures, Research Project supported by the State Department of Science, Skopje,
Macedonia, 1983
2. Trajkovski, S.: Machinability of austenitic manganese steel at elevated temperatures using
electric current heating, Proceedings of the 7th ICPR, Windsor, Canada, 434-440
3. Trajkovski, S.: Distribution of heat sources in the cutting zone in hot machining by
electric current heating, Proceedings of the 9th ICPR, Cincinnati, Ohio, USA, Vol. II, 1987,
1619-1623
4. Trajkovski, S.: Flank wear intensity method for determination of tool life equation,
Proceedings of the 9th ICPR, Cincinnati, Ohio, USA, Vol. II, 1987, 1770-1775
5. Trajkovski, S.: Influence of heat treatment on machinability of high manganese steel,
Proceedings of the AMST-87, Opatija, Croatia, 1987, 79-83
E. Capello
C.N.R., Milan, Italy
M. Monno and Q. Semeraro
Polytechnic of Milan, Milan, Italy
1. INTRODUCTION
The suitability of Abrasive Water Jet (AWJ) machining for the glass industry is well
known, and several applications can be found in many sectors. In fact, AWJ is mainly
used in glass machining where traditional manufacturing processes fail, that is, for
example, in cutting complex plane profiles (with curvature radii < 50 mm) or in
multi-layered glass cutting. Due to the intrinsic flexibility of AWJ systems, several sectors
of the glass industry might adopt an AWJ system for different operations, such as cutting,
piercing or drilling.
Only a few studies concerning the efficiency of cutting brittle materials with AWJs have
been presented in the literature, and the aspects concerning the "quality" of the generated
kerf have been only partially investigated up to now. This paper presents an experimental
study of the influence of the cutting parameters on the efficiency and quality of the cut.
The study is divided into three parts. The first part aims at defining a set of macro- and
micro-geometric parameters that can be used to characterise, or "measure", the quality
aspects of the generated kerf. The identified parameters are: a quality score, obtained
using a quantitative classification of the cracks and flaws caused by AWJ cutting; the kerf
taper; and parameters related to the topography and morphology of the side surfaces of
the kerf (waviness and roughness).
The second part of the paper aims at the identification and validation of a statistical
prediction model for the "threshold feed rate" (TFR), that is, the maximum cutting head
feed rate at which the jet completely cuts the workpiece (and does not generate a blind
groove). The threshold feed rate is one of the most interesting process variables from the
industrial and operative point of view, because productivity is a critical factor when
comparing AWJ cutting to other cutting processes.
Finally, the third part is an analysis and characterisation of the quality of the generated
kerf in terms of taper and surface finish. In this study the influence of the process
variables has been investigated. A family of statistical models has been identified and
validated, which shows the possibility of establishing a direct relationship between the
process variables and the quality of the cutting results.

Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
2. QUALITY OF THE CUT
In order to perform a statistical analysis of the influence of the process variables on the
quality of the cut, a set of indexes has been identified to quantitatively "measure" the
quality of the resulting kerf.
Damage and not-through-passing cuts: As a first approach, a class model has been
defined (see figure (1)). An acceptable cut (that is, through-passing and without severe
damage) can be classified at least in class 2; it should be noticed that traditional
manufacturing processes reach class number 0 or 1 at most.
Taper: The taper of the kerf has been defined as (see figure (2)):

T = (a - c) / (2 t)   (1)

Roughness and waviness: The surfaces generated by AWJ cutting are characterised by
the presence of striations in the lower part of the kerf, while the upper part is dominated
only by roughness (see figure (3)). Therefore, in order to qualify the AWJ surfaces it is
necessary to evaluate both roughness and waviness parameters at at least two different
kerf depths. The investigated parameters are Ra, Rq, Rt, Rz, Rsk, Wt and Sm.
W = ρm ((a + c) / 2) s t   (2)
The same quantity can be expressed in terms of the erosion process as the product of the
abrasive mass flow rate ma, the erosion ratio gm/ga (the ratio between the mass of material
gm eroded per unit time and the abrasive mass ga delivered to the workpiece per unit time)
and the time τ (duration of the cut):

W = (gm / ga) ma τ   (3)

Since τ = s / u, the mass removed per unit length of cut is:

W / s = (gm / ga) (ma / u)   (4)

At the threshold feed rate:

WTFR / s = (gm / ga) (ma / uTFR)   (5)
where uTFR is the TFR. Substituting equation (5) into equation (2), and considering that
the kerf taper has been evaluated using formula (1), one obtains:

uTFR = (gm / ga) ma [ρm t (a - TTFR t)]^-1   (6)

As stated before, the kerf taper TTFR and the width a of the upper side of kerfs obtained at
the TFR are almost constant; therefore, the TFR increases if the erosion ratio gm/ga
increases, that is, if the mass of material eroded per unit of abrasive mass increases.
The influence of the grain size and of the process parameters on the erosion ratio is not
known, since many physical aspects of the erosion phenomena, due to their complexity,
are still unknown. In order to identify a predictive model of the TFR it is therefore
necessary to use a statistical model based on experimental data.
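The erosion balance behind the TFR (the removed mass per unit time equals the erosion ratio times the abrasive mass flow) can be sketched numerically; the values below are invented for illustration, and the mean kerf width is taken as (a + c)/2, per the definitions in the text.

```python
def threshold_feed_rate(erosion_ratio, m_a, rho, t, a, c):
    """Feed rate at which the jet just cuts through, from the balance
        rho * ((a + c) / 2) * t * u = erosion_ratio * m_a
    (material removed per unit time = erosion ratio x abrasive flow).
    Units: m_a [kg/s], rho [kg/mm^3], t, a, c [mm] -> u [mm/s]."""
    mean_width = (a + c) / 2.0
    return erosion_ratio * m_a / (rho * mean_width * t)

# illustrative case: soda-lime glass (rho = 2.5e-6 kg/mm^3), 10 mm thick,
# 5 g/s abrasive, hypothetical erosion ratio 1%, kerf widths a = 1.2, c = 0.8 mm
u = threshold_feed_rate(0.01, 0.005, 2.5e-6, 10, 1.2, 0.8)  # -> 2.0 mm/s
```

The expression makes the qualitative statement in the text explicit: for a fixed kerf geometry, the TFR scales linearly with the erosion ratio gm/ga.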
3.2 Predictive Model of the TFR
The previous analysis has been used to select the process variables that significantly
influence the TFR and should be included in a predictive model. It has been found that
the effect of the abrasive mass flow rate on the TFR is very small and can be neglected.
Moreover, since the second order interactions between the process variables are
significant, a linear double-logarithm model has been analysed and validated:

(7)

where ai (i = 0..4) are the regression parameters. The linear regression has led to the
following exponential equation, which is the empirical representation of equation (7) and
can be used to evaluate the influence of the process variables on the erosion ratio:

(8)

The model has been validated and the regression coefficient is R² = 0.986; the residuals are
not correlated (confidence level 98.7%) and normally distributed (confidence level
88.3%). The pure error test, which expresses the coherence between the model and the
data, has been executed and verified.
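A double-logarithm model of this kind amounts to ordinary least squares on log-transformed data. In the sketch below the data, the regressors (thickness t, pressure P, grain size G) and the exponents used to generate them are synthetic stand-ins for the experimental measurements, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "experiments": thickness t [mm], pressure P [MPa], grain size G [mesh]
t = rng.uniform(4, 20, 40)
P = rng.uniform(200, 300, 40)
G = rng.choice([50, 80, 120], 40)

# hypothetical true exponents, used only to generate noiseless data
u_tfr = 3.0 * t ** -0.9 * (P / 100) ** 1.2 * (G / 100) ** 0.4

# linear regression in log space: ln u = a0 + a1 ln t + a2 ln P + a3 ln G
X = np.column_stack([np.ones_like(t), np.log(t), np.log(P), np.log(G)])
coef, *_ = np.linalg.lstsq(X, np.log(u_tfr), rcond=None)
# coef[1:] recovers the exponents (-0.9, 1.2, 0.4) exactly for noiseless data
```

Exponentiating the fitted model gives the power-law ("exponential") form corresponding to equation (8).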
4. ANALYSIS OF THE QUALITY OF THE KERF
4.1 DOE and ANOVA
Aiming at the investigation of the quality of AWJ kerfs, a new experimental plan has
been designed (see table (2)). The experimental cuts have been performed in a random
sequence, in order to reduce the effect of any possible systematic error. Each cut has
been replicated twice. The quality of the kerf has been measured in terms of taper and
surface finish. In particular, the surface parameters used to characterize the quality of the
kerf surfaces are the ones identified in the first part of the work. These surface parameters
have been evaluated using a Perthen S6P profile recording instrument (sampling length
17.5 mm, cut-off length 2.5 mm). The results of the ANOVA are reported in table (3).
These results clearly show that:
1. The grain size is the most relevant process variable for all the quality factors.
2. The kerf taper is deeply influenced by the feed rate and the grain size.
3. Ra, Rz, Rq and Rt (referred to in the following as the Ra family) are strongly influenced
by the four variables and by some interactions, while the values of Rsk are extremely
widespread and none of the process variables has a significant influence on them.
4. Wt and Sm are influenced neither by the mass flow rate nor by the interactions, and the
variance explained by the remaining factors is a small part of the global variance. This
implies that a predictive model would not be very significant and of small practical interest.
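A one-way slice of such an ANOVA (for instance, the effect of grain size on Ra) reduces to a standard F test, sketched below with a self-contained implementation; the data are synthetic, not the experimental measurements.

```python
def f_oneway(*groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares. Each argument is one group of
    observations (e.g. Ra values at one grain size level)."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# three hypothetical grain-size levels; the third clearly shifts the mean
F = f_oneway([1, 2, 3], [2, 3, 4], [7, 8, 9])  # = 31.0
```

A large F relative to the F(df_b, df_w) distribution corresponds to the high confidence levels (e.g. 99.5%) reported in table (3).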
Based on these results, a second experimental plan has been designed and executed,
using the scheme reported in table (4). The roughness parameter family Ra has been
measured at different kerf depths h. The collected data have been used to perform a
linear regression between the process variables and the roughness parameters. The aim of
this regression analysis is to find a common predictive model with a simple mathematical
structure for all the roughness parameters, that is, a model for the Ra family.
4.2 Predictive model of the surface finish
The search for a significant model common to the Ra family has led to the definition of
the following equations:

ln Ra = …   (9)

(10)

for the Olivina sand, where, as explained, Ra stands for the family of roughness
parameters, and ai are the regression coefficients for each component of the family. As
can be seen, the presence of the grain size G in the model for the Garnet abrasive
(equation (9)) has led to a complicated model structure. In fact, the grain size deeply
influences the effect of the other process parameters (second and higher order
interactions). On the contrary, the model for Olivina sand (equation (10)) is simply the
exponential model, since this abrasive is commercially available only in mesh #70. Tests
of the hypotheses of applicability of the linear regression have been executed and
verified. The values of the regression coefficients are reported in table (5), together with
the estimated parameters. Table (6) summarises the trend between each parameter and the
predicted value. As expected, the signs remain the same for Olivina sand and Garnet. As
an example, figures (4, 5) show the predicted vs. actual values of Ra, and the residuals vs.
actual values, for the Garnet model. The other parameters of the Ra family show similar
graphs.
5. CONCLUSIONS
The overall problem addressed in the paper is the influence of the process variables on
the efficiency and quality of AWJ cutting of glass. To this end an experimental study
has been conducted in order to investigate the relationship between six process variables
and the threshold feed rate (TFR) and the quality of the kerf (taper and surface
morphology). From the analysis of the experimental data it can be observed that:
- The efficiency of AWJ machining of glass (related to the TFR) strongly depends
on the glass thickness, on the water pressure and on the abrasive grain size.
- The quality of the machining strongly depends on the abrasive grain size and on the
feed rate.
- The abrasive mass flow rate mildly affects the efficiency and the quality of the cut.
From these results it can be stated that the critical parameters that must be carefully
selected are the feed rate and the grain size. In particular, the grain size has a deep
influence on the efficiency and the quality of the cut: a decrease in the grain size leads to
an increase of the efficiency of the machining and of the quality of the results (surface
roughness and kerf taper). Moreover, in order to predict the efficiency and the quality of
AWJ machining, a set of experimental models has been identified and validated. The
proposed models can be used to predict the results of the machining and to evaluate the
suitability of AWJs for the glass industry.
ACKNOWLEDGEMENTS
This work was carried out with the funding of the Italian M.U.R.S.T. (Ministry of
University and Scientific and Technological Research) and CNR (National Research
Council of Italy). The authors are grateful to Dott.sa G. Boselli for her help in reviewing
the final manuscript.
[Figures 1-5 and Tables 2-6: the cut-quality class model, the kerf taper geometry, surface profiles showing striations (waviness), predicted vs. actual Ra values; the experimental plans (Garnet mesh 50-120 and Olivina sand mesh 70, pressure 200-300 MPa, feed rate 0.2-0.8 uTFR, mass flow rate 0.3-0.6); the ANOVA significance levels for the taper, the Ra family, Rsk, Wt and Sm; the regression coefficients a0-a4 for the Garnet and Olivina sand models; and the signs of the predicted trends]
1. INTRODUCTION
The increased market competition, as well as the need to ensure a desired level of product quality as one of the demanding requirements of modern manufacturing, is directly related to the introduction of new, high technologies. In this sense, laser technology has very quickly imposed itself as indispensable in almost all manufacturing industries. It has found wide application in the metal working industry for material forming, measurement and quality control. In material forming, laser machines are mostly used for contour cutting of sheet metal, drilling, surface hardening and welding. Laser cutting is one of the largest applications in the metal working industry. It is based on precise sheet cutting by means of a focused laser beam. The laser beam is a new universal cutting tool able to cut almost all known materials. With respect to various other
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
procedures (such as gas cutting, plasma cutting, sawing and punching), its advantages are numerous: a narrow cut, a minimal heat-affected area, a proper cut profile, smooth and flat edges, minimal deformation of the workpiece, the possibility of applying high velocities, the manufacture of intricate profiles and fast adaptation to changes in manufacturing programs. That is why comprehensive research into both the theoretical basis and the experimental aspects of laser cutting is being carried out.
2. LASER CUTTING
Laser cutting is based on applying highly concentrated light energy, obtained by laser radiation, to form metal by melting or evaporation. Laser cutting processes make full use of the heat actions (heating, melting, evaporation) produced by the laser beam acting on the workpiece surface.
(a) focused laser beam; (b) nozzle cone; (c) cut width; (d) distance of the nozzle cone from the workpiece surface; (e) cutting velocity; (f) focused laser beam at the focus; (g) molten material flow; (h) heat zone; (i) cutting front; (k) molten material particles
Fig. 1: Schematic Drawing of Laser Cutting Process
The laser beam's effect upon a workpiece material can be divided into several characteristic phases:
absorption of the laser radiation in the workpiece surface layer and transformation of the light energy into heat,
heating of the workpiece surface layer at the place subjected to the laser beam,
melting and evaporation of the workpiece material,
removal of the break-up products, and
workpiece cooling after the completion of the laser beam's effect.
A desired cut is obtained by moving the laser beam along a given contour. Since the evaporated and molten material should be removed from the affected zone as soon as possible, laser cutting is performed with a coaxial current of the process gas. The gas blowing increases the cutting velocity by as much as 40%. Fig. 1 gives the schematic drawing of the laser cutting process.
By combining the laser as the light radiation source with the machine providing motion, together with the applied numerical control system, it is possible to achieve continuous sheet cutting along a predetermined contour.
A very important indicator of laser cutting is the balance of power in the cut zone and the heat consumption conditioned by it. The balance of power in laser cutting is given by the expression:
PL + PS = PR + PO + PP    (1)
where: PL - laser radiation power; PR - power used for inducing the molten state; PO - power led away by the molten material and process gas; PP - power lost due to its passing through the workpiece; PS - power obtained by the exothermic reaction.
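The balance can be illustrated numerically. The sketch below solves Eq. (1) for the power PR available for inducing the molten state; all wattages are hypothetical round numbers, not values measured in this work.

```python
# Illustrative power balance for laser cutting, after Eq. (1):
#   PL + PS = PR + PO + PP
# All numeric values below are hypothetical, for demonstration only.

def useful_power(p_laser, p_exothermic, p_removed, p_transmitted):
    """Power PR available for inducing the molten state, from the
    balance PL + PS = PR + PO + PP."""
    return p_laser + p_exothermic - p_removed - p_transmitted

# Hypothetical example: 500 W laser with oxygen-assisted cutting.
p_r = useful_power(p_laser=500.0, p_exothermic=250.0,
                   p_removed=120.0, p_transmitted=80.0)
print(p_r)  # W available for melting
```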
Part of the laser beam power is lost by passing through the workpiece and by being carried away with the molten material and process gas. Still, its greatest part is absorbed and used for melting and evaporating the material at the cut point. The absorbed energy quantity depends mostly on the thermal and physical properties of the workpiece material, which therefore also govern the choice of material. Absorption is essential only at the first moment of the interaction between the laser beam and the workpiece material. Later on, heat diffusion is of crucial influence.
In the laser cutting operation, in addition to the heat obtained by focusing the laser beam, the process gas is also used to remove the molten material from the cutting zone, to protect the lenses from evaporation and to aid the burning process. The useful power can be increased when the process gas is oxygen, due to the exothermic reaction. The energy balance of the exothermic reaction is given in the following equations:
Fe + 1/2 O2 => FeO - 3,43 kJ/g    (2)
3FeO + 1/2 O2 => Fe3O4 - 1,29 kJ/g    (3)
Thus this energy represents a greater part of the overall energy used for melting the workpiece material. In the cutting process, during the focused laser beam's movement with respect to the workpiece, a part of the power PP is used in the cut direction, since it preheats the cut place. Due to the rapid heating and cooling in the cut zone, the molten surface layer
hardens, thus affecting the cut quality. As a function of the power density there is a break-up starting point at which the process of material removal begins. What is often meant by breaking up is reaching the melting point on the workpiece surface. At greater powers the material is removed by evaporation.
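A rough order-of-magnitude check supports the statement that the exothermic energy is a large part of the melting energy. The material property values below are assumed textbook-style round numbers, not data from this paper; only a fraction of the melt actually oxidizes, so the computed ratio is an upper bound.

```python
# Rough estimate of the exothermic contribution of Fe + 1/2 O2 -> FeO
# (about 3,43 kJ per gram, from Eq. (2)) relative to the energy needed
# to heat and melt steel. Property values are assumed, for illustration.
SPEC_HEAT = 0.65e3   # J/(kg K), averaged specific heat of steel (assumed)
LATENT = 272e3       # J/kg, latent heat of fusion of iron (assumed)
DELTA_T = 1500.0     # K, heating from room temperature to melting (assumed)
EXO = 3.43e6         # J/kg released by Fe -> FeO, from Eq. (2)

melt_energy = SPEC_HEAT * DELTA_T + LATENT   # J per kg of steel melted
share = EXO / melt_energy
print(f"exothermic energy ~ {share:.1f}x the melting energy per kg oxidized")
```

Even if only part of the melt oxidizes, the reaction can clearly supply a substantial share of the total melting energy, as the text states.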
For laser cutting, the most commonly applied machines are CO2 lasers (90% of all sheet cutting lasers, for continuous and pulsed working regimes), due to their high productivity in modern manufacturing. With respect to their effectiveness, CO2 lasers are among the top ones in laser technology. Their efficiency ratio reaches 20% and their radiation wavelength of 10,6 um is absorbed by a great number of materials. They are used for cutting all sorts of metals (carbon steels, stainless steel, alloyed steels, aluminium, copper, brass, titanium, etc.) and non-metals (plastics, rubber, leather, textile, wood, cardboard, paper, asbestos, ceramics, graphite, etc.).
3. WORKING QUALITY
Working quality obtained by laser cutting is determined by the shape and dimension precision as well as by the cut quality. The workpiece shape and dimensional accuracy are determined by the characteristics of the coordinate working table as well as by the control unit quality, as in the case of an NC or CNC laser sheet cutting machine. The cut quality refers to the cut geometry, the cut surface quality and the physical and chemical characteristics of the material in the surface cut layer.
The physical and chemical characteristics of the material in the surface cut layer refer to the surface layer formed in the laser cutting process due to the heat effect of the laser beam upon the workpiece material. What is observed is the material's microstructure as well as its hardness, residual strains, oxide layer thickness and slag deposits. Fig. 2 gives a schematic drawing of the laser cut.
The cut width is an essential characteristic of the laser cutting process, giving it an advantage over other sheet cutting procedures. The cut width in metals is small; it ranges from 0,1 to 0,3 mm when cutting steel sheets. The cut width increases along with the sheet thickness. The inclination of the cut sides also determines the cutting quality. The cutting of material by means of the focused laser beam is characterized by a narrowing of the cut. Its size depends on many factors, primarily on the focal distance of the focusing lenses as well as on defocalization, in addition to the properties of the workpiece material and the laser beam's polarization. In order to determine the cut sides' inclination quantitatively, the cut sides' inclination tolerance (u) and the cut sides' inclination angle (b) are used. The cut edges at the laser beam entrance side are rounded due to the Gaussian distribution of radiation intensity over the laser beam cross-section. The rounding of the edges is very small; the cut edge rounding radius ranges from 0,5 mm to 0,2 mm when cutting steel sheets, and it increases along with a rise in sheet thickness.
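One common way to relate the inclination tolerance u and the inclination angle b is through the kerf geometry. The sketch below assumes a symmetric kerf whose half-width difference over the sheet thickness gives the angle; the function name and all numeric values are hypothetical illustrations, not measurements from this paper.

```python
import math

# Illustrative kerf geometry: inclination angle of the cut sides estimated
# from the kerf widths at the beam entry and exit sides and the sheet
# thickness. Assumes a symmetric kerf; all numbers are hypothetical.

def inclination_angle(w_entry_mm, w_exit_mm, thickness_mm):
    """Inclination angle of one cut side, in degrees."""
    u = (w_entry_mm - w_exit_mm) / 2.0   # inclination tolerance per side, mm
    return math.degrees(math.atan(u / thickness_mm))

beta = inclination_angle(w_entry_mm=0.3, w_exit_mm=0.2, thickness_mm=2.0)
print(round(beta, 2))  # small angle, consistent with a narrow laser kerf
```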
The laser cut surface reveals a specific form of unevenness. Appearing as either semicircular grooves or regular grooving, it is the consequence of the focused laser beam shape, the cutting velocity and the formation process, as well as of the removal and hardening of the molten material at the cut place. Observation of the cut surface reveals two zones: the upper one, in the area of the laser beam entrance side, and the lower one, in the area of the laser beam exit side. The former is a finely worked surface with regular grooves whose mutual distance is 0,1-0,2 mm, while the latter has a rougher surface characterized by deposits of both molten metal and slag. That is why the roughness of the cut surface is measured at a distance of one third of the sheet thickness from the upper cut edge. The cut surface roughness in the direction of the laser beam differs from that in the direction perpendicular to the laser beam axis, that is, in the cutting direction. The former is of no crucial importance in considering the problem of cut surface roughness, since the laser is applied to thin sheet cutting. The latter is a more obvious phenomenon that can be observed and analyzed. The parameters most often used for assessing the surface roughness are the ten point height of irregularities (Rz) and the mean arithmetic profile deviation (Ra).
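The two parameters named above can be sketched directly from their definitions: Ra is the mean absolute deviation from the profile mean line, and the ten-point Rz is the difference between the mean of the five highest peaks and the mean of the five deepest valleys. The sample heights below are illustrative, not measured profiles.

```python
# The two roughness parameters named in the text, computed from sampled
# profile heights (micrometres). The data are illustrative only.

def ra(profile):
    """Mean arithmetic deviation of the profile from its mean line."""
    mean = sum(profile) / len(profile)
    return sum(abs(z - mean) for z in profile) / len(profile)

def rz(profile):
    """Ten-point height: mean of the 5 highest peaks minus the mean
    of the 5 deepest valleys."""
    s = sorted(profile)
    return sum(s[-5:]) / 5.0 - sum(s[:5]) / 5.0

z = [0.4, -0.3, 0.8, -0.6, 0.2, -0.1, 0.5, -0.7, 0.9, -0.2,
     0.3, -0.5, 0.6, -0.4, 0.1, -0.8, 0.7, -0.9, 0.0, 0.2]
print(ra(z), rz(z))
```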
Laser cutting is a high-temperature process causing noticeable yet small heat damage to the material surrounding the cut zone, that is, an insignificant change of the basic properties of the workpiece material. The changes induced in the material by the laser radiation can take various forms. They may involve the crystal structure, appear as micro- and macro-cracks on the material surface, or as zones molten together or evaporated. A great number of metals are characterized by two or more crystal structures stable at various temperatures. High temperatures cause polymorphous modifications to change into one another, along with a change of the properties of the basic material. Since laser cutting is a thermal way of cutting, the structure of the material changes in the cut zone. Changes of hardness in the surface cut layer are due to the fact that the
[Table: empirical relations, in power-law form with laser power PL, cutting velocity v and sheet thickness s, for the cut width Sr (mm), inclination angle (degrees), edge rounding radius r (mm), roughness Rz and Ra (um) and slag height hs (mm).]
unit CNC ZIT 500M. The technical characteristics of the CO2 laser are: radiation wavelength 10,6 um, continuous power regulation range 0,2-1,3 kW, continuous work regime, beam divergence less than 4 mrad, beam diameter 22 mm, mode TEM00, circular polarization. The optimal laser power is 0,5 kW. The focusing system lens is 28 mm in diameter with a focal distance of 125 mm. The nozzle cone opening is 1,6 mm. The material used for the examination is low carbon steel USt 13 / Werkst. No. 1.0333.5 (DIN). The work process is carried out with oxygen of 98% purity.
Fig. 3 shows the change of cutting velocity with sheet thickness for various dross heights at a laser power of 500 W and a process gas pressure of 80 kPa. Fig. 4 shows the change of cutting velocity with laser power for a variety of sheet thicknesses at a process gas pressure of 80 kPa and a dross height of 0,2 mm. The cutting velocity decreases rapidly when sheets of greater thickness are cut. While cutting a sheet of a given thickness, the cutting velocity can be increased at the expense of increasing the allowed dross height. The cutting velocity can also be increased along with the laser power. However, it has to be remembered that the laser almost always works at an optimal radiation power, so that this way of increasing the cutting velocity is not desirable.
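The trends described above (velocity falling with sheet thickness, rising with laser power) are often captured with an empirical power law. The constants C, a, b below are hypothetical placeholders, not values fitted in this paper; the sketch only demonstrates the qualitative behaviour.

```python
# Illustrative empirical model v = C * P^a / s^b for the cutting velocity.
# Constants are hypothetical placeholders, not fitted values from this work.

def cutting_velocity(p_kw, s_mm, C=4.0, a=1.0, b=1.2):
    """Cutting velocity in m/min for laser power p_kw (kW) and
    sheet thickness s_mm (mm); assumed power-law form."""
    return C * p_kw**a / s_mm**b

v_thin = cutting_velocity(p_kw=0.5, s_mm=1.0)
v_thick = cutting_velocity(p_kw=0.5, s_mm=4.0)
print(v_thin, v_thick)  # thicker sheet -> lower velocity
```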
[Figs. 3 and 4: cutting velocity vs. sheet thickness s (mm) and vs. laser power, for dross heights h up to 1,0 mm; material USt 13 (DIN) / Werkst. No. 1.0333.5, CO2 cw laser, lens focal distance 125 mm, nozzle cone 1,6 mm, process gas O2.]
4. CONCLUSION
Technological problems related to the application of laser machines to continuous sheet cutting stem from insufficient knowledge of laser technique application, as well as from the absence of sufficiently reliable practical data and knowledge about the parameters influencing the work process itself. Therefore, in order to contribute practical data, this paper gives results of experimental research referring to the determination of: quality indicators of the cuts obtained by laser cutting, changes of cutting velocity with sheet thickness for various dross heights at a particular laser power, and changes of cutting velocity with laser power for various sheet thicknesses at a particular dross height.
REFERENCES
1. Lazarevic, D., Radovanovic, M.: Nonconventional Methods; Metal Forming by Removal, Mechanical Engineering Faculty, Nis, 1994
2. Radovanovic, M.: Automatic Design of Laser Technology, Ph.D. dissertation, Mechanical Engineering Faculty, Nis, 1996
M. Ferrari
University of California, Berkeley and San Francisco, CA, U.S.A.
W.H. Chu
University of California, Berkeley, CA, U.S.A.
T.A. Desai
University of California, Berkeley and San Francisco, CA, U.S.A.
J. Tu
University of California, Berkeley, CA, U.S.A.
1. INTRODUCTION
The inadequacy of conventional insulin-therapy for the treatment of the chronic disease of
Type I Diabetes has stimulated research on alternative therapeutic methods. The most
physiological alternative to insulin injections is the transplantation of the whole pancreas or
portions thereof, namely the pancreatic islet of Langerhans. About one to two percent of
the mass of the pancreas is composed of islets of Langerhans which secrete the hormones
insulin, glucagon, and somatostatin into the portal vein. The beta cells of the islets secrete
insulin in response to increasing blood glucose concentrations. In diabetes, insulin
secretion is either impaired or destroyed entirely. Ideally, transplantation of pancreatic islet
2. CELL CULTURE WAFER FABRICATION
Cell culture wafers have been fabricated in order to evaluate the response of pancreatic islets to biocapsule-simulating environments. Culture wafer microfabrication involves standard bulk processes in order to obtain the several mm-wide square pockets within a (100) wafer. In figure 1, a preferred embodiment is shown wherein the pores of the diffusion membrane are photolithographically defined.
[Fig. 3: microfabrication process flow, steps (a)-(l), including boron (p+) diffusions, PSG deposition and anisotropic etching.]
oxidation eases the removal of the borosilicate glass, generated during p+ diffusion, on the polysilicon surface. Rectangular polysilicon holes (20x4 um2) (Fig. 2) are photolithographically defined on the deposited p+ polysilicon (Fig. 3(d)).
Low temperature oxidation is used to produce a silicon dioxide layer of uniform thickness which will be removed in the final step. The thickness of this layer determines the maximum size of particles that can pass through the filter. Because of the well controlled oxidation environment, the variation of the pore channel oxide layer is less than 10% over a 4 inch wafer. By changing the oxidation time, temperature, and gas composition, pores ranging from several tens of nanometers to several micrometers can be fabricated easily. Several rectangular (12x10 um2) anchor points are defined on top of this thin oxide layer (Fig. 3(f)). A second polysilicon layer (6 um) is then deposited and heavily doped using the previously described recipe (Fig. 3(g)). The second layer is anchored to the first deposited p+ polysilicon layer through the defined anchor points. Similarly, a 30 minute 950 C low temperature oxidation is done and the borosilicate glass is removed from the silicon wafer. Square polysilicon holes (3x3 um2) are patterned photolithographically and plasma etched into the second p+ polysilicon (Fig. 3(h)).
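Because the pore size is set by the sacrificial oxide thickness, its dependence on oxidation time can be sketched with the classical Deal-Grove relation x^2 + A*x = B*(t + tau). The rate constants below are assumed textbook values for wet oxidation near 1000 C, not the actual recipe of this process.

```python
import math

# Sketch of thermal oxide growth via the Deal-Grove relation
#   x^2 + A*x = B*(t + tau)
# with assumed textbook rate constants for wet oxidation near 1000 C.
# This is not the recipe used in the paper, only an illustration of how
# oxidation time controls the sacrificial-layer (and hence pore) thickness.

def oxide_thickness_um(t_hours, A=0.226, B=0.287, tau=0.0):
    """Solve x^2 + A*x = B*(t + tau) for the oxide thickness x (um)."""
    c = B * (t_hours + tau)
    return (-A + math.sqrt(A * A + 4.0 * c)) / 2.0

x = oxide_thickness_um(1.0)
print(round(x, 3))  # um of oxide after 1 h, illustrative
```

With these assumed constants, one hour of wet oxidation gives roughly 0.4 um of oxide, the same order as the 0.38 um layer reported below for the final wet oxidation.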
Finally, the wafer is cleaned and wet oxidized for 1 hour at 1000 C to get a 0.38 um thick silicon dioxide layer. Three micrometers of phosphorous silicate glass (PSG) are deposited on the front side of the silicon wafer to provide further protection against the final silicon anisotropic etching (Fig. 3(i)). Etch windows are opened on the backside of the silicon wafer (Fig. 3(j)). Since the filter membrane structure is connected on the front side, it is not necessary to use a double-side aligner to define the position of the backside etch holes. The wafer is etched in ethylene-diamine-pyrocatechol (EDP) at 100 C for about 10 hours. The anisotropic silicon etching stops automatically at the silicon and silicon dioxide interface (Fig. 3(k)). A buffered silicon dioxide etch is then used to dissolve the silicon dioxide and to generate the pores for the nanofilters (Fig. 3(l)). A final rinse in deionized water removes the residual acid from the filter. The biocapsules and culture wafers employed in this study utilize pore paths obtained via the use of a sacrificial layer, as illustrated above, but differ in their final structural embodiment.
3. CELL VIABILITY AND FUNCTIONALITY RESULTS
The viability and functionality of rat islets of Langerhans within the porous pockets of the
silicon culture wafer were monitored in vitro. Figure 4 is a schematic diagram showing the
microfabricated cell culture wafer with several cells inside the pocket. The bioseparation
membranes used in the experiments described below are similar to the ones presented in
section 2. All silicon culture wafers were autoclaved for twenty minutes prior to use.
[Figs. 5 and 6: insulin secretion of unencapsulated control islets vs. islets in 78 nm half capsules, measured at 24 and 72 hours.]
Preliminary studies were also done on fully encapsulated islets. Full capsules were obtained by joining two half-capsule units containing cell-prefilled pockets. Performance was again monitored by measuring insulin concentration after glucose stimulation of the capsules in cell culture medium. As seen in Figures 7 and 8, fully encapsulated islets remain viable and functional as long as free-floating control islets. In addition, islets in the half capsules exhibited superior viability, in terms of both survival time and insulin production. This latter result is attributed to direct oxygenation of the islets from the free surface.
4. CONCLUSION
In this study, a microfabricated biocapsule for the immunological isolation of cell
transplants is introduced and characterized. The biocapsule is determined to be sufficiently
biocompatible and nondegradable for the intended purposes, and to provide sufficient
diffusion of nutrients, glucose, and insulin for islet cell longevity. Preliminary results
indicate that microfabricated porous silicon environments, providing partial and full
containment of islets of Langerhans, maintain viability and functionality of the islets.
[Figs. 7 and 8: viability and insulin production of fully encapsulated islets (3 um closed capsule) vs. controls, over 20-100 hours.]
5. ACKNOWLEDGEMENTS
Financial sponsorship for this research program was provided by MicroFab BioSystems, which is hereby gratefully acknowledged. Special thanks to the Whittier Institute for Islet Research in San Diego, CA, for the isolation of islets. Our gratitude to the other researchers of the Berkeley Biomedical Microdevice Center for their encouragement, help, suggestions, and support: Derek Hansford, Tony Huen, Lawrence Kulinsky, Debbie Sakaguchi, Jason Sakamoto, and Miquin Zhang.
566
M. Ferrari et al.
REFERENCES
1. KJ Lafferty, "Islet cell transplantation as a therapy of Type I Diabetes Mellitus," Diab. Nutr. Metab. 2:323-332 (1989).
2. P Soon-Shiong et al. "Successful reversal of spontaneous diabetes in dogs by intraperitoneal microencapsulated islets," Transplantation 54:769-774, n. 5, 1992.
3. P Lacy et al. "Maintenance of normoglycemia in diabetic mice by subcutaneous xenograft of encapsulated islets," Science, 254:1728-1784, 1992.
4. "Living Cure" by P.E. Ross, Scientific American, 18-20, June 1993.
5. O Weber et al. "Xenografts of microencapsulated rat, canine, porcine, and human islets," Pancreatic Islet Cell Transplantation, C. Ricordi, ed., pp. 177-189, 1991.
6. R Calafiore et al. "Immunoisolation of porcine islets of Langerhans with alginate/polyaminoacid microcapsules," Horm. Metab. Research 25:209-214, 1990.
7. M Goosen et al. "Optimization of microencapsulation parameters: semipermeable microcapsules as an artificial pancreas," Biotechnology Bioengineering 27:146-150, 1985.
8. CK Colton and ES Avgoustiniatos. "Bioengineering in Development of the Hybrid Artificial Pancreas," Transactions of the ASME, 113:152-170, 1991.
9. ME Sugamori. "Microencapsulation of pancreatic islets in a water insoluble polyacrylate," ASAIO Trans. 35:791-799, 1989.
10. A Gerasimidi-Vazeo et al. "Reversal of Streptozotocin diabetes in nonimmunosuppressed mice with immunoisolated xenogeneic rat islets," Transplantation Proceedings, 24:667-668, 1992.
12. W. Chu and M. Ferrari, "Silicon nanofilter with absolute pore size and high mechanical strength," Microrobotics and Micromechanical Systems 1995, SPIE Proceedings Vol. 2593.
13. M. Ferrari, WH Chu, T Desai, D Hansford, T Huen, G Mazzoni, M Zhang. "Silicon nanotechnology for biofiltration and immunoisolated cell xenografts," MRS Proceedings, Fall 1995 (to appear).
14. R Normann et al. "Micromachined silicon based electrode arrays for electrical stimulation of the cerebral cortex," MEMS '91, Nara, Japan, 1991, pp. 247-252.
15. B Lassen et al. "Some model surfaces made by RF plasma aimed for the study of biocompatibility," Clinical Materials 11:99-103, 1992.
16. YS Lee et al. in Mat. Res. Soc. Symp. Proc. Vol. 110, pp. 17-22, 1989.
17. J. Balint et al. in Mat. Res. Soc. Symp. Proc. Vol. 110, pp. 761-765, 1989.
18. Hellerstrom C, Lewis NJ, Borg H, Johnson R, Freunkel N. "Method for large scale isolation of pancreatic islets by tissue culture of fetal rat pancreas," Diabetes, Vol. 28, pp. 766-769, 1979.
1. INTRODUCTION
At present electro-chemical machining does not have a great area of utilization, because the complex interrelations between its characteristic parameters are only partially known or even unknown. As a result, the electro-chemical processes that lead to anodic dissolution of the piece material are very hard to control and conduct, and the machining accuracy is relatively low [3],[4],[5].
With a view to remedying these deficiencies, the authors have carried out research regarding electro-chemical machining with an externally imposed magnetic field (ECM-MF). For this, the following has been taken into account:
a) The possibility of electric and hydrodynamic parameter control, and favorable conditions for raising the productivity and accuracy of ECM, by carrying out the electro-chemical processes in the presence of a deliberately generated exterior magnetic field.
b) The fact that ECM unfolds in the presence of an interior magnetic field, generated as a result of specific ECM conduction phenomena such as electronic tool and piece conduction, ionic conduction in the electrolyte, and electric field intensity variation.
(1)
(2)
where Ku is a coefficient that takes into account the magnetic behaviour of the mediums crossed by the lines of force when they enter the gap, and Ka is a non-uniformity coefficient of the magnetic field, determined by the form of the active surface of the tool and by the value of the distance a (Fig. 1a, 2a), according to relation (3):
Ka = Ka(x,y,z,t)    (3)
The coefficient Ka has a constant value for gaps limited by plane surfaces, and the value Ka = 1 for the frontal gap.
[Fig. 1: Spectrum of uniform magnetic field lines directed towards the piece at ECM-MF.]
3. DETERMINATION OF MAGNETIC FIELD INFLUENCE ON ELECTRICAL
PARAMETERS
The magnetic flux of the field generated by the coil is variable, because the coil moves together with the tool, and the anodic surface continually modifies its form and dimensions. As a result, induction phenomena are produced in the gap. These cause an induced electric field, of intensity Eim.
The local form of the magnetic induction law, applied for continuity domains in the gap, for the quasi-stationary regime of the magnetic field, allows the determination of the induced electric field intensity Eim (V/m) depending on the gap magnetic field induction BL (T) and the flowing speed of the electrolyte vL (m/s), according to relation (4).
Eim = vL x BL + grad f    (4)
The vector fields Eim and vL x BL are either identical or differ by the gradient of a scalar field f. If this scalar field is considered constant, the gradient becomes null and relation (4) becomes:
Eim = vL x BL    (5)
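Relation (5) can be illustrated numerically with a simple cross product; the electrolyte speed, induction and conductivity values below are hypothetical round numbers, and the induced current density uses the conduction law stated next in the text.

```python
# Numerical illustration of relation (5), Eim = vL x BL, and of the
# induced current density Jim = K * Eim (electric conduction law).
# Flow speed, induction and conductivity values are hypothetical.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

v_L = (8.0, 0.0, 0.0)   # m/s, electrolyte flow along x (assumed)
B_L = (0.0, 0.0, 0.4)   # T, magnetic induction along z (assumed)
K = 12.0                # S/m, specific electrolyte conductivity (assumed)

E_im = cross(v_L, B_L)               # V/m, induced field
J_im = tuple(K * e for e in E_im)    # A/m^2, induced current density
print(E_im, J_im)
```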
The current density Jim of the induced current may be determined from the electric conduction law:
Jim = K Eim    (7)
(8)
where K is the specific electrolyte conductivity.
As a result, the total intensity of the electric field Ea in the gap is obtained by adding the intensity of the electric field produced through anodic polarization, Ep, to the intensity of the induced electric field Eim:
Ea = Ep + Eim    (9)
In ECM-MF, the flowing speed of the electrolyte vL is obtained as the sum of two components: the component vp, determined by the pressure forces of the electrolyte circulation system, and the component vm, due to the electromagnetic forces created by the externally imposed magnetic field with the help of a special coil:
vL = vp + vm    (10)
The speed vp may be determined either experimentally or analytically from the mechanics of viscous liquids. Its direction is perpendicular to the local normal at the anodic surface. As the electrolyte is a magnetic liquid [1], the vm component may be determined by integrating the equation of motion of anodic particles with a mean mass ma and an electric charge Qa [1]:
(11)
(12)
The solution of the vectorial equation (11), using the magnetic field laws, is given by (13),(14),(15):
(13)
(14)
(15)
Relations (13),(14),(15) point out that only when (BL . E) > 0 (the angle between BL and E is smaller than 90 degrees) does the externally imposed magnetic field produce an acceleration of the electrolyte flow. Otherwise the flow slows down, with vortex phenomena that have negative effects on ECM productivity and accuracy. That is why the following research considers only the practical applications where the magnetic field lines are directed towards the tool (Fig. 2).
At ECM-MF with a uniform magnetic field in the frontal gap limited by plane surfaces (Fig. 3a), we emphasise the following:
E = -Ep k    (16)
(17)
(18)
(19)
Substituting relations (16)-(18) into relation (5), it results:
Eim = -vp uL Ku H i = -F1 H i    (20), (21)
where F1 = vp uL Ku > 0    (22)
and Ep is the polarisation electric field intensity in the frontal gap. It may be noticed that Eim is directed along i. Thus at ECM-MF in a uniform magnetic field, growths of the electric field intensity Ea (Ea > Ep) are produced, without modification of the normal component Eimz, which determines the intensity of the electro-chemical processes.
For machining in a non-uniform magnetic field in the same frontal gap (Fig. 3b), it is considered that the coil is cylindrical and has the symmetry axis Oz. The field generated by it has axial symmetry. E and vp may be calculated with relations (16) and (18) respectively. Using the notations in Fig. 3b, it results:
BL = uL K'a H    (23)
In this case the greatest influence on the development of the electro-chemical processes comes from the Oz component Eimz of the Eim vector. This is due to the fact that Eimz has the same direction as the Ep vector. From relations (23)-(26) and (5) it results:
Eimz = -vp uL K'a H sin(theta) = -F2 H (V/m)    (27)
where F2 = vp uL K'a sin(theta) > 0    (28)
[Fig. 3: frontal gap configurations at ECM-MF with (a) uniform and (b) non-uniform magnetic field.]
It may be noticed that at ECM-MF with a non-uniform magnetic field, growths of the normal component of the electric field intensity in the gap, proportional to H, are produced.
From (7) and (27), a similar observation can be formulated about the current density Jimz under the conditions above, as in relation (29):
(29)
The same relations and conclusions were obtained for the normal and lateral gap zones.
It results that for the general case of machining any surfaces in a non-uniform magnetic field, the E vector of the anodic polarisation electric field intensity is oriented along the local normal at the anodic surface, and its sense is towards the tool. Thus the greatest influence on the electric parameters, and also on the development of the electro-chemical processes, is that of the projection of the induced field vector Eim on the local normal, whose versor is n.
By using (5),(23),(27), the following form of the normal component Eim,n of the vector Eim results:
Eim,n = Eim . n = F H (V/m)    (30)
(m3/min)    (32)
where eta is the current efficiency (%) and pp is the mass density of the piece material (kg/m3).
(A)    (33)
(39)
It results that sFM < sF, because KFn H > 0, and this demonstrates that at ECM-MF,
R. Mühlhäusser
Institute for Production Systems and Design Technology
Fraunhofer-Gesellschaft, Germany
H. Müller and G. Seliger
Technical University of Berlin, Germany
1. INTRODUCTION
The ship building market has become increasingly competitive, with Korean shipbuilders increasing market share and the US funding its move from naval to marine production, factors which are threatening European shipyards. To survive and prosper in the
continuously growing global markets, the European ship industry needs a modernisation in shipbuilding and shipping, as has been stated by one of the Task Forces set up by the European Commission. The development of new technologies should improve competitiveness, safety and environmental protection whilst protecting employment. Quality and safety within ship building and operation can be significantly improved using the high potential of information technology for life-cycle oriented production and operation. Approaches towards life-cycle modelling and design mainly focus on product design and development. An extension of life-cycle modelling and design results in life-cycle engineering, which covers manufacturing, maintenance and inspection over the whole product life-cycle. Especially in ship building and operation a high standard of quality and safety is required, whilst barriers of place, time and the ship's one-of-a-kind nature have to be overcome. A higher level of computer integration has to be achieved to perform life-cycle engineering through an effective and efficient exploitation of life-cycle modelling and design, and therefore recognised gaps within the computer integration and production tools in the ship industry have to be filled. Realising a higher level of computer integration, the development aims at
providing real world data by the use of innovative robot tools suited for ship
production and maintenance.
Time-variable product models have to represent ships at every stage of their life-cycle.
These product models can be used for the different engineering tasks over the product's life.
Thereby, the adaptation of product models based on measurement and data acquisition
guarantees an increased model accuracy.
Within the ship industry the assembly, disassembly, inspection or maintenance tasks such as
gas cutting, welding, painting, annealing or ultrasonic testing and leak finding are mostly
performed manually under difficult and demanding working conditions. Because of the
dimensions of the products and their modules, conventional stationary robot types would
have to be of huge proportions, requiring high investment. Consequently, no computerised
tools suitable for integration are used in this area yet. Performing manufacturing, assembly
or inspection tasks, the small climbing robot prototype of the IWF can be integrated as a
tool for life-cycle engineering to realise measurements and data acquisition.
2. STATE-OF-THE-ART
Since ships can be characterised as one-of-a-kind products, the concept for the application of
computer integrated technologies is quite different from that of other industries. This has
resulted in tailored research and development on product models for ships, which have been
stated to be the essential part of computer integrated technologies within shipbuilding [1,2].
Although the now existing STEP-based product model standard is widely used in various
tailored computer aided applications within shipbuilding, there is still a gap concerning the
integration of non-tailored, generally applicable technologies.
Quality control and management have reached strategic importance in nearly all industries
and have led to the development of various computer aided quality management and control
tools (CAQ). An extension of quality oriented product and process development results in
newer approaches to life-cycle modelling and design [3,4]. The objective of life-cycle
modelling is to achieve a high quality standard and environmentally conscious products
with regard to the whole product-life-cycle. Especially in the case of the ship industry, the
high environmental and human risk has to be minimised by a high quality standard over the
whole product-life-cycle. Although Quality Control (QC) is implemented in ship production
and inspection, the application of QC as well as CAQ results in isolated solutions that are
not part of the computer integrated infrastructure. The information received within
inspection processes is not used for the product model or the application of CAQ: neither to
improve the model accuracy, nor to represent quality information of the product, nor to
exploit life-cycle modelling for the different engineering tasks over the product-life-cycle.
The main objective is to improve manufacturing quality through increased model accuracy
concerning the geometric properties along the manufacturing progress, reducing production
time and costs. Although the geometric properties of the ship parts are enormously
important within shipbuilding, especially within the final assembly in the dry dock or in the
berth, which is an extremely expensive bottleneck for most shipyards, the application of
geometric measurement methods for work-in-progress QC is neither implemented nor
integrated in the IT infrastructure.
Research and development on automated manufacturing and inspection techniques
applicable to large-scale steel constructions such as ships have been carried out in recent
years. Most of the approaches focus on welding and material inspection and are therefore
based on climbing robot developments, either attached by vacuum cups, magnetic wheels,
electro-magnetic feet, or even using the propulsive force of a propeller to remain attached
to a surface. Except for the magnetic wheeled design, none of these prototype developments
is suitable for use in a shipbuilding environment: either they are too sensitive or they are not
able to perform continuous movements on the surface. Concerning the information flow, it
can be stated that currently all approaches are island solutions not suitable for integration
in the IT infrastructure. Neither the planning of the processes executed by the climbing
robots nor feedback mechanisms to use inspection data for product models are possible.
Computer aided design and manufacturing (CAD/CAM) is widely introduced in European
shipyards, and an increasing number of robots are installed for shipbuilding. Computer
Aided Robotics (CAR) combines the application of robots and CAD/CAM, allowing off-line
planning and simulation parallel to the running production. Although CAR has already
been successfully applied in other industries such as the automotive or aircraft industries,
today's ship industry cannot realise its advantages. The application of this technology
becomes even more important within the ship industry since ships are one-of-a-kind
products. Indeed, there are many industrially applied, powerful CAR tools including model
libraries for the most common robots, but none of them can be efficiently used for the ship
industry. The major limitation is given by the missing interfaces from the CAD/CAM
systems for design and work preparation in the ship industry to the available CAR systems.
This has again led to some island solutions which can only be used for one special
kinematic-type robot for defined tasks.
3. APPROACH
The outlined technical limitations within the ship industry define the scope of our
approach to the computer integrated planning, execution and analysis of manufacturing,
maintenance and inspection processes over the whole product-life-cycle. The main
innovations within this approach are the following:
- the feedback coupling of product information to adapt the product model, based on
model matching mechanisms regarding material and geometric properties, to increase
model accuracy;
- performing work-in-progress inspection instead of end-of-pipe inspection to increase
manufacturing quality as well as to reduce production time and costs;
- the closed loop of design, planning, manufacturing, inspection and feedback to model
management, allowing improved product quality;
- the application of life-cycle modelling within the ship industry, including the continuous
analysing and updating of product models over the whole product-life-cycle from
production to inspection, to prevent damage by increased information quantity and
quality.
Figure 1. Information flow between CAD product geometry, the product model, geometry
and process information, and the robot's tracking and positioning.
quality. The system does not focus on one single process, but can be specifically programmed
and equipped with different special tools. As ships can be characterised as one-of-a-kind
products tailored to the requirements of each customer, the use of a Computer Aided
Process and Trajectory planning system (CAPT) is absolutely necessary. The tasks to be
performed by the climbing robot have to be planned off-line before the manufacturing date.
Process models are used to find the optimal operation of the robot, combining the planning
of the trajectory and the application processes. In parallel with the running production, the
simulation of production processes can help to improve the quality of the product as well as
of the manufacturing process. The CAPT-system is characterised by an open system
architecture to combine the strengths of different Computer Aided Tools, which
communicate via interfaces to be standardised in future work. Due to the various tasks
which can be performed by the proposed climbing robot within inspection and
manufacturing of the same object, the different models of the system have to be adapted
based on real-world data. The information needed for this model adaptation will be provided
by different sensors installed on the climbing robot. To adapt the different models based on
real-world data, model matching mechanisms have to be devised and integrated into the
CAPT-system. Further methods have to be developed to link the model adaptation to
quality management, considering the inspection and maintenance of ships.
The basis of the robot body development is the magnetically attached prototypes built at the
IWF for automated welding on inclined ship hull surfaces [5]. These small wheeled robots
provide high adhesive forces and robustness. These adhesive forces, achieved through the use
of rare earth magnets, allow the robots to work in any position and orientation on the surface
of a magnetisable steel workpiece (figure 2). The robot motion is generated by the different
speeds of the wheel pairs on the base plate; like a tank, it is able to turn around a fixed point.
In order to ensure continuous grip and constant robot motion, the magnetic wheels are
flexibly suspended in different axes. Besides surmounting small obstacles such as welding
spots, high compliance is required for movements on bent surfaces. Therefore, the wheel
pairs are individually mounted on a rocker. The wheel sets must be as compliant normal to
the surface as the whole construction has to be rigid in other respects, to obtain a smooth
motion and to avoid jerks while tracking a curved path. These prototypes are being improved
and adapted to the requirements of the targeted processes and environment. This concerns,
for example, the smoothness of turning movements, adaptability to bent surfaces, light
weight and ease of handling in an industrial environment. The robot weight must not exceed
25 kg according to worker protection rules. It will have a minimum load capacity of its own
weight under unfavourable surface conditions.
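The tank-like turning described above follows directly from differential-drive kinematics. As an illustrative sketch (the function name and the 0.3 m track width are assumptions, not figures from the paper), the body velocity of such a wheeled robot can be written as:

```python
def body_velocity(v_left, v_right, track_width):
    """Linear and angular velocity of a differential-drive (tank-like) robot.

    v_left, v_right: speeds of the two wheel pairs [m/s];
    track_width: lateral spacing between the wheel pairs [m].
    """
    v = (v_left + v_right) / 2.0              # forward speed of the body centre
    omega = (v_right - v_left) / track_width  # yaw rate [rad/s]
    return v, omega

# Equal and opposite wheel speeds: the robot turns on the spot,
# i.e. around a fixed point, exactly as the tank analogy suggests.
v, omega = body_velocity(-0.1, 0.1, 0.3)
```

With equal speeds of the same sign the yaw rate vanishes and the robot drives straight; any imbalance between the wheel pairs produces the curved paths mentioned in the text.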
4. CONCLUSION
In order to increase productivity and quality in the shipbuilding industry, we have followed
an approach towards the modernisation of manufacturing, assembly and inspection
processes. Life-cycle engineering in shipbuilding and operation requires new computer
integrated tools in production and maintenance. The proposed robot is based on a prototype
construction of the IWF using magnetic wheels. The system design includes the integration
into the company IT, which provides both efficient robot programming and application as
well as model adaptation for production monitoring and product and process improvement.
The programming of the robot within a computer aided process and trajectory planning
system can be done in parallel with the running production. The quality can be verified
within the CAPT-system by the simulation of the production processes before
manufacturing. Future work will be conducted in several directions. The prototypical
integration into a commercially available CAPT-system has to be extended regarding the
requirements of the end-user. To adapt the different models based on real-world data,
newer model matching mechanisms have to be devised and integrated into the CAPT-system.
Methods have to be developed to link the model adaptation to quality management,
considering the inspection and maintenance of ships. Finally, the transfer of the system to
other industries building large products like pressure vessels, off-shore platforms, tanks or
power stations will be considered. This is of major importance for the production and
maintenance of tanks containing dangerous goods such as liquid gas or petrol, and for the
nuclear industries.
REFERENCES
1. Kuo, C., 1992, Recent Advances in Marine Design and Applications, Computer
Applications of Shipyard Operation and Ship Design - VII, Elsevier Science Publishers
B.V. (North-Holland), 1992, 13-24
2. Koyama, T., 1992, The Role of Computer Integrated Manufacturing for Future
Shipbuilding, Computer Applications of Shipyard Operation and Ship Design - VII,
Elsevier Science Publishers B.V. (North-Holland), 1992, 3-12
3. Spur, G., 1996, Life Cycle Modeling as a Management Challenge, Life Cycle Modelling
for Innovative Products and Processes, Chapman & Hall, London, 1996, 3-13
4. Krause, F.-L. & Kind, Chr., 1996, Potentials of information technology for life-cycle-oriented
product and process development, Life Cycle Modelling for Innovative Products
and Processes, Chapman & Hall, London, 1996, 14-27
5. Seliger, G., Müller, H., 1994, Programming and navigation of a mobile welding robot,
Proc. of the 25th Int. Symp. on Industrial Robots - ISIR, 25:749-753
KEY WORDS: synergic MIG welding robotic system; Finite Element analysis; industrial application
ABSTRACT: The paper presents a Finite Element analysis performed to optimise an industrial
welding process for the production of a header used in boilers for conventional thermal power
plants. The production level was in fact below the actual production capability of the robotic MIG
welding system utilised, due to a number of set-up problems.
The approach followed was to build two complete parametric three-dimensional FE models: one of
a section of the header, made to determine the optimal welding condition, and a complete model
of the whole structure, intended for distortion analysis. By means of a validation of the models,
the outputs of the simulation runs were found to be in good agreement with the experimental
evidence. The promising results obtained in forecasting the main control parameters for this
process, even with these simple thermal models, give confidence in the possibility of adopting the
same FE tool as a decision support for the frequent set-up phases due to the company's
job-production type.
1. INTRODUCTION
might have a stronger influence than others, so that the optimisation problem can be stated
as a multivariable, multiobjective, non-linear problem, which can seldom be treated in a
mathematical form. Further, quite often the objectives conflict and/or compete with one
another (for instance, quality and cost). The cost of finding the optimal solution of this
kind of problem (where it is possible!) is so high, with respect to the income of the
operation, that sub-optimal conditions are in most cases accepted as good.
The cost can be thought of as proportional to the number of factors (variables) considered.
In this respect, the easier empirical approach adopted in industrial practice (the trial and
error strategy) usually takes into account no more than one or two variables at a time in the
expensive process of browsing among the several sub-optimal solutions. This approach
becomes even more effective if it is tied to what is called the 'know-how', that is, the
amount of operative experience; the latter allows the order of the problem to be reduced
by taking several variables as fixed in a tight range. The optimisation process thus becomes
a long stepping process spread over several years. The customer's requirements act as an
explicit constraint on this process. The true problem behind this is the cost of the process,
which sometimes may dramatically reduce the profit margins.
The other approaches adopted so far essentially belong to two broad categories: the
mathematical and the experimental approach.
By experimental approach we mean the whole set of scientific methods applied to performing
practical experiments: several techniques for the design of experiments have been developed
so far (see, e.g., [2]). The main limitation of these approaches is the number of experiments
required, which is sometimes not sustainable for practical industrial applications.
Furthermore, there is a limitation of the forecasting equations, which usually hold only in
the strict range of values where the experiments were carried out.
Among the mathematical approaches, the analytical one addresses the problem via
appropriate models of reality: the number of limiting assumptions that have to be made for
the mathematical problem to be tractable makes the solution quite far from practical
applicability. Another mathematical approach is the numerical one, which basically relies on
modelling the process via a discrete set of differential equations: the most common one is
the finite element method, even though the finite difference and, recently, the boundary
element approaches have been explored. Although the approach basically requires some
simplifying assumptions, these are not as far from reality as in the case of the analytical
approach. This turns out to be a significant advantage, because the results are often much
more useful for industrial applications than in the other cases. Furthermore, it allows several
different applications to be analysed with minor changes in the structure of the simulation
models, thus requiring small effort.
In this paper, we adopt the F.E. approach to select an optimal operating welding condition
for the process analysed, once the FE model has been validated. The basic hypothesis behind
this approach is that the scale of the problem, at a macro level with respect to the weld zone
size, allows the errors embedded in the use of a thermal model to be compensated. This
means that, as far as real industrial problems are addressed, this approach, which anyway
requires a final limited experimental validation procedure, may contribute to the economy
of the problem-solving process.
Figure 1. The operation of MIG welding on the header (by permission of TERMOSUD)
The analysis was carried out to solve some problems that arose during the set-up phase,
which lowered the productivity level allowed by the robotic system. In particular, a
permanent bending distortion occurred (in the case presented, its maximum value is about
10[mm] at the centre). This required an elastic positive-contact bracing on a curved surface
(realised using the special jig visible in Figure 1) to mechanically correct the bending
distortion. When the magnitude of the imposed bending was not correct, flame
straightening had to be performed [3]. Another set-up problem was the selection of the
optimal welding parameters, since partial penetration was required by design.
The object of the present analysis is to optimise the production process; thus no attention
has been paid to the engineering choices.
3. THE FINITE ELEMENT ANALYSIS
Two different FE models have been built: a local three-dimensional model of a section of
the header, to analyse in detail the welding process at a micro-level, and a complete 3-D
model of the structure. In both cases no simplification was allowed, given the asymmetry of
the process performed. An uncoupled thermo-mechanical analysis has been carried out for
both models, based on the assumption that dimensional changes during welding are
negligible and that the mechanical work done during welding is insignificant compared to
the thermal energy change.
Finite Element models have been built in parametric form, such that a different diameter,
length, thickness, or number and diameter of tube stumps can be modelled by simply
specifying a variable value. Material properties can also easily be changed, even though no
data are available close to the melting temperature [4]. In Figures 2 and 3, the thermal
properties adopted for the analysis are shown [5]. The convective heat transfer coefficient
has been assumed constant at 15[W/m²°C]. Radiative heat exchanges have been neglected.
As concerns the mechanical analysis, the deformation process is assumed to be strain-rate
independent; a thermo-elastic-linear work-hardening constitutive model is used in the
mechanical analysis for both models: see Figure 4 for the stress-strain curve and Table I
(material constant values are linearly interpolated when values are not given for a
temperature). The Poisson ratio has been taken as constant and equal to 0.3, as well as the
specific weight, taken to be 77000[N/m³].
The results presented in the paper refer to a header consisting of a cylinder of ASTM 106
(B grade) steel, 2565[mm] in length, with an external diameter of 168.3[mm] and a
thickness of 16[mm]. 84 tube stumps made of ASTM 210 (A1 grade) steel are welded in
three rows of 28 tubes each.
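The linear interpolation of material constants between tabulated temperatures, mentioned above for Table I, can be sketched as follows; the table values used here are illustrative placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical material table: yield stress [MPa] vs. temperature [degC]
# (placeholder values for illustration only, not Table I of the paper).
temps = np.array([20.0, 200.0, 400.0, 600.0])
sigma_y = np.array([250.0, 220.0, 170.0, 90.0])

def material_constant(T):
    """Linearly interpolate a material constant at temperature T,
    as done in the analysis when no tabulated value is given."""
    return float(np.interp(T, temps, sigma_y))

# Midway between 200 and 400 degC -> midway between 220 and 170 MPa
value = material_constant(300.0)  # 195.0
```

The same one-liner applies to any of the temperature-dependent constants of the constitutive model; outside the tabulated range `np.interp` clamps to the end values, which is a design choice that should match the FE code's own extrapolation rule.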
Figure 4. Stress-strain curves adopted for the thermo-elastic-linear work-hardening model
(strain range on the abscissa: 0.001-0.004).
3.1 THE LOCAL MODEL
A thermal F.E. model of a section of the header has been built in order to study the effects
of the local surrounding structure on the single-pass weld of the tube stump (see Figure 6).
An equivalent heat-exchange coefficient has been assumed for the boundary surfaces to
take into account the presence of the rest of the pipe.
The weld joint has a geometry that varies continuously around the tube, and so does the
volume to be filled (Fig. 5-b), resulting from the intersection of a cylinder (the pipe)
orthogonal to a truncated cone (the mill that is used to shape the weld groove according to
Fig. 5-a).
The selection of the optimal operating parameters to assure a constant depth of
penetration is not a trivial problem, given the geometrical complexity of the surrounding
structure.
In the synergic type of MIG welding, the independent variables selected for the F.E.
analysis are welding speed and power, considering also the expertise of the company. The
distribution of the heat input has been assumed according to the meshing refinement
adopted (see Fig. 5-a and b); the weld seam has been divided into 16 sectors.
The optimal welding speed has been determined in a few runs by keeping the input power
fixed, such that a constant penetration of 2[mm] was reached. This is represented in Figure
7 in terms of 'delay' time per sector. The depth of penetration has been determined by
tracing the isothermal at the fusion temperature of 1400[°C]. The resulting optimal
processing time for each tube stump is 76[sec], with a power of 5[kW]. Experimental
evidence confirmed the result, with the weldment satisfying the quality requirements.
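The speed search described above (power fixed, speed adjusted over a few runs until the 2[mm] penetration is reached) can be sketched as a one-dimensional root search. The penetration function below is a purely illustrative stand-in for an FE run, not the paper's model, and all names and bounds are assumptions:

```python
def optimal_speed(penetration, target=2.0, lo=1.0, hi=20.0, tol=1e-3):
    """Bisection on welding speed [mm/s], assuming penetration depth
    decreases monotonically as speed increases at fixed input power."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if penetration(mid) > target:
            lo = mid   # weld too deep -> increase speed
        else:
            hi = mid   # weld too shallow -> decrease speed
    return 0.5 * (lo + hi)

def depth(v):
    """Toy stand-in for an FE simulation: depth ~ k / speed."""
    return 12.0 / v

v_opt = optimal_speed(depth)
```

In practice each `penetration(mid)` evaluation would be one FE run with the isothermal traced at the fusion temperature, so the monotonicity assumption is what keeps the number of runs small.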
Figure 7. 'Delay' time [s] per sector (1-16) of the weld seam.
despite the coarser mesh. A careful selection of the constraints was made, which had a
significant influence.
Several welding sequencing strategies (with two simultaneous weldings at a time to reduce
bending) have been analysed without simulating the elastic bracing, in order to verify the
possibility of reducing the bending distortion to a minimum. None of the several runs gave a
significant reduction of the permanent distortion. It has been verified that, for the operating
conditions adopted, even performing only one welding in the least favourable condition at
the centre of the pipe, a permanent bending distortion of about 0.2[mm] results. These facts
confirm the bending shrinkage expected by theory [3], the entire welding process being
equivalent to a single-side longitudinal welding.
Figure 8. FE model of the whole header: pipe plus tube stumps (1846 elements)
4. DISCUSSION AND CONCLUSIONS
The F.E.A. approach adopted is becoming quite standard practice in design analysis in
several fields of welding technology. This practice, however, is still far from being used for
process optimisation purposes as an alternative to the traditional trial & error approach in
common industrial practice. In this paper we present such an application of an F.E. thermal
model, which allowed a significant reduction of the set-up times of a robotic welding
operation. This was possible once the model had been validated by comparing the results in
a few cases with the experimental evidence. The parametric feature of the models also
allows the analysis of the production process to be simply extended to other similar
products with no significant effort.
Some final considerations about the potential economics might give insight into the
advantages of this 'hybrid' approach. From very simple considerations, it has been
calculated that the net potential production savings for the company are of the order of
magnitude of 6,500[US$] per year. This rough estimation has been made based on a
production volume of about 7 boilers per year, considering also the average cost for
software development and the code licence fee, and neglecting the cost of the hardware.
Figure 9. Predicted vs. actual values of deviation from the linear axis (measures in [m])
An extensive use of the approach at the company would give higher confidence in the true
potential of the method and could surely increase the quality of design and manufacturing.
ACKNOWLEDGEMENTS
The Authors wish to thank TERMOSUD S.p.A., in particular Ing. Ernesto
CHIARANTONI, for the availability in supporting all experimental activities. The Authors
are also grateful to Prof. Attilio Alto for his fruitful suggestions and encouragement.
This research was partially funded by the contribution of the Italian Ministry of Research.
REFERENCES
1. Lancaster, J.F.: Metallurgy of Welding, Chapman & Hall, Cambridge, 1993
2. Taguchi, G.: System of Experimental Design, 1-2, UNIPUB/Kraus Int. Pub., U.S.A.,
1987
3. Radaj, D.: Heat Effects of Welding, Springer-Verlag, Berlin, 1992
4. Alto, A. and Dassisti, M.: Role of some approximation assumptions in finite element
modelling of arc welding processes, II Convegno AITEM, Padova, Italy, 18-20
September 1995
5. Touloukian, Y.S.; Ho, C.Y.: Thermophysical Properties of Matter: Specific Heat -
Metallic Elements and Alloys, New York, 1970
6. ANSYS User Manual for Revision 5.0A, Swanson Analysis Systems Inc., 1993
The importance of Artificial Vision for industrial applications is increasing, even for small
and medium enterprises, due to the reduction of hardware costs. It allows performing
closed-loop process control without interfering with the observed system. Unfortunately,
this technology is limited by the low resolution of the sensors; for this reason even a small
increase in precision becomes attractive.
In this article a general algorithm for this purpose is described, which has been extensively
tested at different space resolutions and camera configurations.
The main advantages of this algorithm are:
594
- easy to implement;
- fast;
- versatile.
The idea is to increase precision by multiple acquisitions, without interfering with the
operations and, in particular, without increasing the acquisition time: the Observed Object
(OO) is followed in real time on its trajectory, which precedes a critical point. This implies
that great computing power must be available, which is not a tight constraint when high
accuracy is needed. The information to be exploited in the process concerns the kind of
trajectory of the OO, which is supposed to be known in parametric form. In common
applications the presence of approximately straight lines is usual.
Galilean relativity allows the application of this algorithm both to a moving camera (for
instance in a robot hand-eye configuration [1-2], ego-docking [3], or autonomous robot
or vehicle navigation [4-5]) and to a fixed camera observing a moving object. Some
examples of the latter are:
- robot gripper obstacle avoidance [6] or object localisation and catching;
- in the case of autonomous navigation, the so-called eco-docking.
This algorithm can be applied to trajectories both in two and three dimensions.
Since experimental tests have shown that it is resolution-independent, it can be applied in
many different fields:
- mobile robot docking [7], for the compensation of the errors due to incorrect positioning
in the execution of a robot program relative to the robot base;
- closed-loop control of assembly operations [8-9];
- tracking and control of an AGV position [10];
- robot calibration [11]; in the case of the hand-eye configuration, stereopsis can be
achieved even with just one moving camera, by the observation of a known pattern [2].
For all these cases it is necessary to know the OO trajectory in a parametric form.
2. SYSTEM CALIBRATION AND STEREOPSIS
To reconstruct the position of a point in 3D, more than one view is necessary, or one view
and at least one of the three coordinates.
In this article two cameras have been used, with different configurations. The OO in the
experiments was the sensor of a measuring machine. Since at any instant the OO
coordinates are provided by the measuring machine, an absolute system has been defined
coincident with its main axes. The transformation from the three space coordinates
W = (X Y Z) to the four image coordinates w_1,2 = (x_1 y_1 x_2 y_2) (a couple for each
camera) is defined by the system calibration [12].
This can be achieved by calibrating each camera separately through the minimisation of the
squared errors of known 3D points projected onto the plane of view [13]. It should be
emphasised that the law of projection of a point with the pinhole camera model is
non-linear. Expressing the problem in homogeneous coordinates we get the following
expression
(x_i^c h_i^c   y_i^c h_i^c   h_i^c) = (X_i   Y_i   Z_i   1) | m_11  m_12  m_13 |
                                                            | m_21  m_22  m_23 |
                                                            | m_31  m_32  m_33 |
                                                            | m_41  m_42  m_43 |      (1)
where h_i^c stands for the homogeneous coordinate of the i-th point of the c-th camera, with
(2)
The unknowns in this expression are the m_j,k, the elements of the matrix of projection of a
point from 3D space onto the camera plane of view. To find an exact solution of the system,
at least six control points are needed. To get the maximum performance from a real system
it is advisable to use a higher number of points. Experimental tests have shown that about
30-40 points were enough for the system utilised.
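The separate calibration of each camera by least squares over known 3D control points can be sketched with a standard direct linear transformation (DLT). This is one common way to estimate the m_j,k of equation (1) (here arranged as a conventional 3x4 matrix acting on column vectors), not necessarily the exact procedure of [13]; all function names are illustrative:

```python
import numpy as np

def calibrate_camera(points_3d, points_2d):
    """Estimate the 3x4 pinhole projection matrix M (up to scale) from
    >= 6 non-coplanar control points by direct linear transformation:
    each correspondence gives two homogeneous linear equations in m_jk."""
    A = []
    for (X, Y, Z), (x, y) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    # The null-space direction of A (smallest singular vector) gives m.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(M, W):
    """Project a 3D point with a projection matrix (non-linear in W)."""
    p = M @ np.append(np.asarray(W, dtype=float), 1.0)
    return p[:2] / p[2]

# Example: recover a known matrix from seven non-coplanar control points.
M_true = np.array([[800.0, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 5]])
pts3 = [(0, 0, 1), (1, 0, 2), (0, 1, 3), (1, 1, 1),
        (2, 1, 2), (1, 2, 3), (0.5, 1.5, 2.5)]
pts2 = [project(M_true, p) for p in pts3]
M_est = calibrate_camera(pts3, pts2)
```

With exact data the smallest singular vector recovers M up to scale, and the projection is invariant to that scale; with noisy image points the same SVD gives the least-squared-errors estimate, which is why 30-40 points improve a real system.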
Once the system has been calibrated, given a couple of point projections, it is possible to
estimate the 3D coordinates by calculating the pseudo-inverse of the projection matrix. The
latter is constituted of the calibration parameters taken from a couple of equations like (2)
for each camera c [14]. From equation (2) an expression of the form
A_2c×3 W = b_2c×1
(3)
can be derived. Equation (3) can be solved by the inversion of matrix A in the
least-squared-errors sense in order to find the unknown vector W.
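The recovery of W from a couple of projections by solving equation (3) in the least-squares sense can be sketched as follows; M1 and M2 stand for the two calibrated 3x4 projection matrices, and the matrices and point used in the example are illustrative, not the paper's set-up:

```python
import numpy as np

def triangulate(M1, M2, w1, w2):
    """Recover W = (X, Y, Z) from its two image projections w1, w2 by
    stacking the projection equations of both cameras into A W = b
    (eq. (3)) and solving in the least-squared-errors sense."""
    A, b = [], []
    for M, (x, y) in ((M1, w1), (M2, w2)):
        # x = (m11 X + m12 Y + m13 Z + m14) / (m31 X + ...), rearranged:
        A.append(M[0, :3] - x * M[2, :3]); b.append(x * M[2, 3] - M[0, 3])
        A.append(M[1, :3] - y * M[2, :3]); b.append(y * M[2, 3] - M[1, 3])
    W, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return W

# Two illustrative calibrated cameras sharing intrinsics K, 0.5 apart in X.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
M1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
M2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
```

The stacked system is 4 equations in 3 unknowns (2c equations for c cameras), so `lstsq` plays the role of the pseudo-inverse mentioned in the text.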
It should be noticed that the calibration model explained does consider translation, rotation,
scale and perspective projection of a point in 3D space from/to one or more view planes,
but it does not consider other effects which may introduce even a high degree of
uncertainty, such as optical aberrations or the lack of suitable lighting and focus.
The application of this analytical model does not depend on the relative position between
the camera and the observed system.
To achieve more general results, no further mathematical correction has been applied in the
experimental tests besides the Stereo Vision algorithm. In order to reduce the effect of
optical aberration, the described algorithm has been applied by fragmenting the working
space into smaller volumes and performing a different calibration on each of them.
Different configurations have been tested. The configuration which provides the lowest
error (i.e. the Euclidean distance between the measured and the real point) was achieved
when the three measured coordinates had the same accuracy. The best condition is with the
camera optical axes perpendicular to each other.
In the case of two cameras forming an angle of about 75°, perpendicular to the Z axis and
symmetrical with respect to the XZ plane, the accuracy on the X coordinates was about half
that of the Y coordinates and about one third that of the Z ones.
3. THE ALGORITHM
For the application of this algorithm, the following items are required:
- compensation of errors which can be approximated by a function of higher order than the
calibration model or the trajectory form;
- knowledge about the real trajectory (e.g. whether it is really a straight line or a curve);
otherwise the result of the interpolation is to improve some trajectory parts and to worsen
others.
The algorithm can be summarised in the following steps:
1. Acquisition of the point coordinates in 2D
2. Interpolation for movement compensation
3. Recovery of the 3D point coordinates
4. Calculation of the interpolated trajectory
5. Correction of points
The chosen criterion to correct a measured point W_meas is to project it perpendicularly
onto the interpolated trajectory. The idea behind this criterion is that the most
accurate point belonging to the interpolated line is the closest one.
[Figure: perpendicular projection of the measured point onto the interpolated trajectory]
In order to handle fewer data, the real trajectory is described by its endpoints. Thus just two
exact points are enough to test the algorithm on several measured points; this is very useful
for practical applications.
For the estimation of the algorithm the following exact information is exploited:
- the known trajectory endpoints W_ep;
- for particular straight lines (parallel to the main axes), the two coordinates which remain
constant during the displacement.
For each measured point W_meas, an estimate W_est of the exact point is obtained by
projecting the corresponding corrected point W_corr onto the known real trajectory.
For the trajectory endpoints W_ep,meas, whose corresponding real points W_ep,real are known,
experimental tests have shown that the distance between the estimate W_ep,est and the real
endpoint W_ep,real is lower than 2/10 of the initial error.
Considering the application of the described algorithm to a straight trajectory, the least-squares
straight line in parametric form, for any straight line not parallel to the main axes, is given by

X = t
Y = s t + q        (4)
Z = n t + p
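The fit of equation (4) and the perpendicular-projection correction can be sketched as below: using X as the free parameter, the slopes and intercepts follow from two ordinary least-squares regressions, and the correction is a point-to-line projection. A minimal Python illustration; the function names and the synthetic data are ours, not the authors':

```python
import numpy as np

def fit_line(points):
    """Least-squares line X=t, Y=s*t+q, Z=n*t+p (eq. 4), X as parameter.

    Valid for lines not parallel to the main axes, as in the paper."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    s, q = np.polyfit(x, y, 1)   # Y = s*X + q
    n, p = np.polyfit(x, z, 1)   # Z = n*X + p
    return s, q, n, p

def correct_point(w_meas, s, q, n, p):
    """Perpendicular projection of a measured point onto the fitted line."""
    p0 = np.array([0.0, q, p])          # point on the line (t = 0)
    d = np.array([1.0, s, n])           # direction vector of the line
    t = np.dot(w_meas - p0, d) / np.dot(d, d)
    return p0 + t * d

# Noisy samples of the line Y = 2X + 1, Z = -X + 3:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
pts = np.column_stack([t, 2*t + 1, -t + 3]) + rng.normal(0, 0.05, (50, 3))
s, q, n, p = fit_line(pts)
w_corr = correct_point(np.array([5.1, 11.3, -2.2]), s, q, n, p)
```

For lines nearly parallel to the YZ plane the parametrisation X = t degenerates, which is why the paper restricts (4) to lines not parallel to the main axes.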
4. EXPERIMENTAL DATA
The process performances, viz. the maximum spatial resolution and the maximum OO speed,
are limited respectively by the sensor resolution and by the computing power of the
Artificial Vision system. The system used is based on a high-performance acquisition and
processing card. A complete frame with a resolution of 756x512 is acquired every 40 ms.
One of the cameras sends a synchronisation signal to the other one and to the frame grabber,
which switches between them at the synchronisation frequency. This implies that, to get the
position of a point at instant j, an interpolation of coordinates j-1 and j of the other camera is
necessary. For this reason, after the acquisition of N couples of points, one gets M = 2N-2
measured coordinates (the last one is static and the one before is taken during deceleration).
The interpolation between points j-1 and j depends on the trajectory form, which is supposed
to be known.
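The per-instant interpolation between frames j-1 and j can be sketched as follows. The midpoint rule below assumes locally uniform straight-line motion (the general case would interpolate along the known trajectory form); the function name and the scalar samples are illustrative:

```python
def synchronise(cam_a, cam_b):
    """Movement compensation for two cameras multiplexed on one frame grabber.

    cam_a[j] and cam_b[j] are image coordinates grabbed alternately, so each
    camera B frame lies half a frame period after the A frame with the same
    index.  Assuming locally uniform straight motion, the missing coordinate
    at each instant is the midpoint of the two neighbouring samples of the
    other camera.  For N frame couples this yields M = 2N - 2 synchronised
    pairs, as stated in the text.
    """
    out = []
    for j in range(1, len(cam_a)):
        # camera A instant j: camera B sampled at j-1 and j, take the midpoint
        out.append((cam_a[j], 0.5 * (cam_b[j - 1] + cam_b[j])))
        # camera B instant j-1: lies between camera A instants j-1 and j
        out.append((0.5 * (cam_a[j - 1] + cam_a[j]), cam_b[j - 1]))
    return out

# A coordinate moving at constant speed, sampled alternately by the two cameras:
pairs = synchronise([0.0, 1.0, 2.0, 3.0], [0.5, 1.5, 2.5, 3.5])   # M = 6 pairs
```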
The A/D conversion and grabbing produce a negligible delay and do not affect the
acquisition frequency of 25 non-interlaced frames/s.
The system is able to locate a pre-defined model within a grayscale image of the indicated
width and height in about 210 ms. This time can be reduced to about 1/10 by limiting the
search area. In this specific application, the area reduction can be performed considering
that, in the time between two couples of acquisitions, the OO moves just a few pixels from
the present position along the pre-defined trajectory.
The system is able to locate within the image all the matches above a minimum score without
increasing the search time. The option of following more than one point can be exploited:
- to increase precision;
- to describe the trajectory of complex objects whose central point is not visible or
unknown;
- to retrieve the inclination of an object;
- to track multiple patterns for monocular robot hand-eye configurations [15-16].
In the next step, the coordinates of the OO found in the image are used to calculate the
corresponding 3D point by performing the product of the pseudo-inverse of matrix A and b in
equation (3).
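The 3D recovery step, the product of the pseudo-inverse of A by b, is a one-liner in practice. A sketch with an illustrative consistent system; the actual entries of A and b would come from the calibration constraints of equation (3):

```python
import numpy as np

def recover_3d(A, b):
    """Least-squares 3D point from stacked projection constraints:
    w = pinv(A) @ b, as in the text."""
    return np.linalg.pinv(A) @ b

# Illustrative overdetermined system whose exact solution is (1, 2, 3):
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
b = A @ np.array([1.0, 2.0, 3.0])
w = recover_3d(A, b)
```

`np.linalg.lstsq(A, b, rcond=None)` returns the same least-squares solution and is usually preferred numerically over forming the pseudo-inverse explicitly.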
The Vision system has been programmed to follow the OO after it has entered the
field of view at the beginning of its known trajectory, and to interrupt the acquisition when the
OO stops. At this time the measured 3D trajectory has already been reconstructed and the
corrected one is calculated. The parametric information on the corrected trajectory can now
be used either to correct the end point or the whole sequence of points. The interpolated
trajectory computation and the correction of the measured points are performed in less than
80 ms on a 120 MHz Pentium based PC.
In real-time tracking, the line-jitter phenomenon can occur, viz. different lines in a frame are
acquired with the OO at different positions. The effect of line jitter is to limit the OO speed
relative to a camera, because the OO model matching within the resulting image is affected
by a greater error. Both the movement compensation and the correction through the
described algorithm reduce the error of points belonging to a trajectory in space described
at speeds up to about 40 mm/s. Below this value, the improvement coming from the
application of this algorithm does not depend on the OO speed.
Different trajectories have been followed at different speeds in the range between 1 and 40
mm/s. For the same trajectory this implies acquiring an inversely proportional number of
points, in the range 150 ≥ M ≥ 10.
The estimation of the algorithm has been performed on the known final point of a sample of
26 straight trajectories inside a working space of about 300^3 mm^3.
1. An average reduction of 25% of the initial error, the mean value of which is 1.13 mm
with a standard deviation of 0.47, has been achieved. This figure includes the presence of
less than 4% of negative values, where the error was due to a poor approximation of the
measured trajectory by a straight line; the accuracy of the other endpoint was nevertheless improved.
2. For over 80% of these trajectories the error between the estimated and the real
coordinates W_ep,est and W_ep,real (see figure) was lower than 20% of the initial error. As a
consequence we can state that W_est represents a good estimate of the unknown real
point corresponding to a measured one.
An extension of point 1 to any measured point of the whole sequence is that, by projecting it onto
the interpolated trajectory, we achieve a remarkable correction in most cases. In all tests,
this has been shown by a reduction of the standard deviation of about 50% on the examined
sample.
A consequence of point 2 is that, by projecting a corrected point onto the real trajectory, we can
estimate the coordinates of its corresponding exact point.
Finally we can state that, given a parametric description of a trajectory, we can estimate the
errors on a sequence of measured points (whose corresponding real points are
unknown), their mean value, standard deviation, etc., by computing the distance between
W_corr and W_est.
To get a higher precision, a higher spatial resolution (e.g. a higher camera resolution or a
smaller workspace) is required, together with more sophisticated techniques of optical
aberration compensation and a sub-pixel analysis.
5. EXTENSIONS
In order to reduce the computation time, this algorithm could be applied by directly
finding the interpolated trajectory as it will appear after projection on the camera sensors,
e.g. if the trajectory is a straight line, finding the interpolated line inside the image; if the
trajectory is a circular arc, finding the ellipse arc, etc. This sort of analysis would probably
involve a different numerical approach to Stereo Vision, considering visual maps [6],
epipolar lines and other primitives [14], and the geometric relations between the absolute and
the camera reference systems, in order to optimise the overall process.
Dealing with a parametric description of primitives in space instead of points could allow
a correction to be provided to the robot directly in this form.
Multiple Stereo algorithms [17] are available to optimise the search in the image; a direct use
of features extracted from the image, instead of operating on the coordinates, could represent
a shortcut.
In this article the least-squares approach has been used; the experimental tests have
been performed on a straight trajectory. A different kind of interpolation can be applied
according to a different trajectory and error distribution (e.g. with low weights for less
accurate points). Furthermore, the benefits coming from the use of a Kalman filter, which is
suitable for time-dependent problems, can be investigated.
6. CONCLUSIONS
A simple algorithm to increase accuracy in the case of an object moving on a trajectory
whose parametric description is known has been described. The algorithm has been
extensively tested on straight trajectories with different camera configurations. The
interpolation of points, both for movement compensation and for the trajectory calculation,
allows an increase of accuracy which depends on the initial error distribution.
It has been shown that a simple perpendicular projection onto the interpolated trajectory
gives a suitable correction to most of the points of the whole sequence.
In order not to worsen some parts of the measured trajectory by the application of this
algorithm, the following condition must be satisfied: absence of higher order discrepancies
between
- the real trajectory and its mathematical parametric description;
- the real 3D point coordinates and the stereo reconstruction model.
Since the increased accuracy remains about the same over wide ranges of numbers of measured
points, it has been shown to be independent of the OO speed.
If the OO inclination in the trajectory following the observed one is known, for instance in the
case of the coupling of two parts, the angle between the interpolated trajectory and the
exact one represents the correction to apply before the coupling.
The increase of accuracy can be exploited by increasing the field of view (thus
compensating for the reduction of spatial resolution) to monitor several critical points and
trajectories with just one couple of cameras.
Once the interpolated trajectory is calculated, the absolute OO position can be
reconstructed even after it exits one of the camera fields of view, or if the localisation
reliability of a camera has significantly decreased in that view area.
The method to test the described algorithm can also be used to test the performances of a
general Artificial Vision system by employing just a few exact data (e. g. the trajectory
endpoints).
REFERENCES
1. Tsai, R.Y.; Lenz, R.K.: A New Technique for Fully Autonomous and Efficient 3D
Robotics Hand/Eye Calibration, IEEE Journal of Robotics and Automation, 3 (June 1989)
3, 345-358
2. Ji, Z.; Leu, M.C.; Lilienthal, P.F.: Vision based tool calibration and accuracy
improvements for assembly robots, Precision Engineering, 14 (July 1992) 3, 168-175
3. Victor, J.S.; Sandini, G.: Docking Behaviours via Active Perception, Proceedings of
the 3rd International Symposium on Intelligent Robotic Systems '95, Pisa, Italy, July 10-14
1995, 303-314
4. Matthies, L.; Shafer, S.A.: Error Modeling in Stereo Navigation, IEEE Journal of
Robotics and Automation, RA-3 (June 1987) 3, 239-248
5. Kanatani, K.; Watanabe, K.: Reconstruction of 3-D Road Geometry from Images for
Autonomous Land Vehicles, IEEE Transactions on Robotics and Automation, 6 (February
1990) 1, 127-132
6. Bohrer, S.; LOtgendorf, A.; Mempel, M.: Using Inverse Perspective Mapping as a Basis
for two Concurrent Obstacle Avoidance Schemes, Artificial Neural Networks, Elsevier
Science Publishers, 1991, 1233-1236
7. Mandel, K.; Duffie, N.A.: On-Line Compensation of Mobile Robot Docking Errors,
IEEE Journal of Robotics and Automation, RA-3 (December 1987) 6, 591-598
8. Nakano, K.; Kanno, S.; Watanabe, Y.: Recognition of Assembly Parts Using Geometric
Models, Bulletin of Japan Society of Precision Engineering, 24 (December 1990) 4, 279-284
9. Driels, M.R.; Collins, E.A.: Assembly of Non-Standard Electrical Components Using
Stereoscopic Image Processing Techniques, Annals of the CIRP, 34 (1985) 1, 1-4
10. Petriu, E.M.; McMath, W.S.; Yeung, S.K.; Trif, N.; Biesman, T.: Two-Dimensional
Position Recovery for a Free-Ranging Automated Guided Vehicle, IEEE Transactions
on Instrumentation and Measurement, 42 (June 1993) 3, 701-706
11. Veitschegger, W.K.; Wu, C.-H.: Robot Calibration and Compensation, IEEE Journal of
Robotics and Automation, 4 (December 1988) 6, 643-656
12. Tsai, R.Y.: A Versatile Camera Calibration Technique for High Accuracy 3D Machine
Vision Metrology Using Off-the-Shelf TV Cameras and Lenses, IEEE Journal of Robotics
and Automation, RA-3 (August 1987) 4, 323-344
13. Fu, K.-S.; Gonzales, C.S.; Lee, G.: Robotics, McGraw-Hill, 1989
14. Ayache, N.: Artificial Vision for Mobile Robots, The MIT Press, Cambridge, Massach.,
London, Engl., 1991
15. Fukui, I.: TV Image Processing to Determine the Position of a Robot Vehicle, Pattern
Recognition, Pergamon Press Ltd., 14 (1981) 1-6, 101-109
16. Zhuang, H.; Roth, Z.S.; Xu, X.; Wang, K.: Camera Calibration Issues in Robot
Calibration with Eye-on-Hand Configuration, Robotics & Computer-Integrated
Manufacturing, 10 (1993) 6, 401-412
17. Kim, Y. C.; Aggarwal, J.K.: Positioning Three-Dimensional Objects Using Stereo
Images, IEEE Journal of Robotics and Automation, RA-3 (August 1987) 4, 361-373
L. Carrino et al.
to deposit a composite filament along a path which guarantees the requirements of strength
and stiffness for the part.
This automatic process is characterized by high repeatability and quality, and it has been
successfully used to produce fiber-reinforced parts such as rocket-motor pressure vessels,
launch tubes, storage tanks, pipes and other aeronautic components. Its main disadvantages
are the high cost, justified only for high production volumes, and the limited range of
obtainable shapes.
To reduce costs, to increase flexibility and to use this technology also for non-axisymmetric
parts, some examples of the use of a robot to move a feeding head were presented in [2,3,5].
In all these studies, however, the considered parts can be wound following a convex
geodesic path [4,5]. In particular, the whole surface of the mandrel has to be covered, as in
tube-like components: the main examples are the filament winding of T-shaped and elbow
tubes. The issues associated with concave winding are investigated only at CCM [7], where a
special feeding head and a thermoplastic winding process are considered.
The increase of flexibility and the reduction of cost may also be achieved by developing
adequate CAM software modules able to help the user in defining the part program for the
filament winding machines or robots. Among the few examples of such systems [1,2,4,6,8],
the most relevant results were reached by the work done at CIRA [1,9] and at the
University of Leuven [2].
In the first case, the Arianna software is able to generate collision-free trajectories for the
pay-out eye of a traditional 2- to 5-axis filament winding machine. The filament paths may or
may not be geodesic; they are calculated by solving a set of differential equations, and the
feasibility of these paths is guaranteed by the control of the slippage tendency ratio. The
considered shapes are axisymmetric, but the possibility of deviations from the
axisymmetric shape is considered; however, those deviations do not lead to concave surface
sections. The fiber bridging problem is well treated.
Cawar [2] is a software module to calculate the fiber path trajectory both in a traditional
axisymmetric filament winding process and in a prototype robotized tape winding cell for
asymmetric components. In the first case semi-geodesic fiber paths are considered, while in
the second only geodesic paths are possible. A heuristic collision avoidance method is
implemented, but it does not cover all the possible collisions which may occur: a detailed
detection is done in the final simulation stage.
At the University of Cassino, a robotized filament winding process for asymmetric complex
parts is being studied. In this research, a software prototype to define and simulate the process is
under development. In Section 2 the preliminary implementation of this CAD/CAM software
is presented; in Section 3 some examples are discussed.
2. DESIGN OF THE PROCESS FOR A FILAMENT-WOUND PART
The architecture of the CAD/CAM system under development at the University of Cassino
in collaboration with CIRA is shown in figure 1. It respects the basic principles of
Concurrent Engineering. An adequate interface enables the user to design a filament-wound
part by defining not only its shape and geometry but also the main technical features like the
desired fiber orientation in each part zone, the admitted tolerance of this orientation and the
dimensional and geometric tolerances of the part. Starting from these data, an intelligent
module should be able to generate alternative filament paths which represent different real
filament-wound parts by approximating, if possible, the design requirements. If no feasible
path is able to respect the requirements, the user will be asked to change some parameters
or even the component shape.
Interacting with the system, the designer should be able to perform a complete stress analysis
on the composite workpiece with a given exact fiber orientation and, simultaneously, to
query the system on the manufacturability of the part by running the robot trajectory
generator module. This module will generate, if possible, the collision-free trajectories of the
robot to correctly wind the fiber on the form, and will give an estimate of the process cycle
time and cost. Final modules will perform a complete simulation of the process and will
generate the part program for the robot (in this example the VAL II language for a Puma 562
will be used).
Figure 1 - Architecture of the CAD/CAM system (User Interface, CAD Model, F.E.M., Filament Path, Robot Trajectory, Process Simulation, Robot Part Program).
benchmark the complete prototype. To each part a particular composite filament (roving)
may be associated. Each roving is characterized by the shape and dimensions of its
cross-section, and by its fiber and resin materials, on which the final structural behavior and the
maximum friction force between the filament and the support will depend.
The actual part solid model may be generated as a composition of prismatic and cylindrical
primitives. To each primitive a preferred fiber orientation is assigned. A simple algorithm is
used to define a filament path respecting the requirements, if possible. The possible paths
are geodesic: straight lines for planar surfaces, circular or helix arcs for cylindrical
surfaces. The continuity of the filament tangent is guaranteed by a smooth approximation.
Therefore, the deposition of the filament will be stable.
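A geodesic on a cylindrical primitive is a constant-angle helix, so such an arc is easy to sample. A minimal sketch; the function name, the pitch-angle convention and the numeric values are illustrative, not taken from the prototype:

```python
import numpy as np

def helix_path(radius, pitch_angle_deg, turns, n=200):
    """Sample a geodesic on a cylinder: a helix with constant pitch angle.

    The pitch angle is measured between the fiber and the circumferential
    direction; a constant angle gives dz/d(theta) = radius * tan(angle),
    which is exactly the geodesic condition on a cylinder.
    """
    alpha = np.radians(pitch_angle_deg)
    theta = np.linspace(0.0, 2.0 * np.pi * turns, n)
    z = radius * np.tan(alpha) * theta
    return np.column_stack([radius * np.cos(theta),
                            radius * np.sin(theta),
                            z])

pts = helix_path(1.0, 45.0, 1.0, 5)   # one turn on a unit-radius cylinder
```

A circular arc is the degenerate case with pitch angle 0, and a straight generator line the limit of a 90° angle, which is why these three segment types cover the cylindrical and planar primitives mentioned above.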
By an appropriate visualization of the results, the designer can evaluate the part and
generate alternative solutions.
It must be pointed out that, in the current implementation, a complete filament path is
represented as a union of a set of curve segments (straight lines, circles and helices). The
resulting curve must not present any concave segment: in that situation, in fact, it is not
possible to deposit the filament with a simple pay-out eye without causing the so-called
fiber bridging. Only a special pay-out eye (e.g. with a roller in contact with the support
surface) could be used, and the filament paths and the robot trajectories would have to be
generated ad hoc. Moreover, it must be noticed that up to now no application considering
deposition along concave segments has been presented.
If a via point previously computed is found to be inside the safety volume, the distance d is
increased to place the point on the volume boundary. This is not sufficient to obtain a
collision-free path: in fact, nothing is known about what happens to the pay-out eye between
two subsequent via points. If a linear interpolation between two via points is assumed, it is
possible to verify whether this line segment intersects the safety volume. If it does, a further
set of intermediate via points is introduced on the surface of the safety volume to move the
pay-out eye on a collision-free path (see figure 2).
[Figure 2: intermediate via points on the safety volume boundary]
3. EXAMPLES
The prototype under development is implemented according to the object-oriented paradigm
using the Visual C++ compiler and the ACIS solid modeler. It runs on an Intel Pentium based
personal computer. In the following, some implemented examples are discussed.
Figure 4 - Filament path, safety volume and robot trajectory.
Figure 5 - Filament path and robot trajectory for a more complex object.
4. CONCLUSIONS
At the University of Cassino, in collaboration with CIRA, a robotized filament winding
process for non-axisymmetric parts is being studied. In this research, a software prototype to
define and simulate the process is under development. The current implementation of the
system is limited to a simple part representation and filament path generator, and to a more
complete robot trajectory generator. The major limit of this initial implementation is the
ability to deal only with convex parts, although with complex three-dimensional shapes.
Even if at a preliminary stage, it is possible to conclude that the development of a complete
CAD/CAM system for the robotized filament winding process is feasible. Moreover, the
application of such a system is considered very important in order to increase the
competitiveness and the diffusion of this composite material production technology, which
seems very promising also for small batch production of parts, not only for the aeronautic
and aerospace sectors.
REFERENCES
F. Cosmi
University of Udine, Udine, Italy
V.F. Romano
Federal University of Rio de Janeiro
and
COPPE-PEM, Rio de Janeiro, Brazil
L.F. Bellido
SUFIN / Nuclear Engineering Institute, Rio de Janeiro, Brazil
1. INTRODUCTION
Techniques used to analyze the chemical composition of materials are often based on
destructive processes or require a lot of intermediate steps to prepare the samples for the
tests.
A technique known as Analysis by Nuclear Activation (ANA) can be used on
organic and inorganic materials. This process consists of:
1) transforming stable isotopes of a sample (target) into radioisotopes, through nuclear
reactions with neutrons and/or accelerated particles (protons, deuterons, and so on)
from a cyclotron or reactor;
2) measuring by means of a detector the induced radioactivity of the sample (gamma
rays). Measuring errors in the order of 1% can be obtained.
Minerals like gold, tungsten, uranium and molybdenum are sensitive to this
technique. Sediments, graphite, biological materials and ceramics are also often analyzed.
Some of the advantages of this technique over the conventional ones are mentioned
below:
- it requires samples with very small masses (in the order of milligrams);
Published in: E. Kuljanic (Ed.) Advanced 'Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
Since the half-lives of the radiation originated in these processes are short, the
acquisition of the frequency spectrum of the irradiated samples by the data
acquisition devices must be performed in a small period of time.
The information obtained by the ANA technique describes the elements present
in the sample and their decay curve.
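The timing constraint follows directly from the textbook exponential decay law behind the decay curve; a small sketch (function name and numbers are illustrative only):

```python
import math

def activity(a0, half_life, t):
    """Induced activity at time t: A(t) = A0 * exp(-ln(2) * t / T_half).
    Any consistent time unit may be used."""
    return a0 * math.exp(-math.log(2.0) * t / half_life)

# After one half-life only 50% of the initial activity is left; after five
# half-lives, about 3%: hence the need for a fast, automated sample changer.
remaining = activity(100.0, 10.0, 50.0) / 100.0
```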
2. INTEGRATED AUTOMATIC MEASURING SYSTEM
The automatic measuring facilities, composed of a sample changer and its
auxiliary systems, should be able to carry out the following phases:
- system parametrization;
- sample irradiation;
- sample preparation;
- manipulation;
- data acquisition.
[Figure: layout of the measuring facility - Cartesian arm, detector mechanism, shield, sample recipe]
[Figure: sample-support parts - upper part, intermediary part, lower part - and their connection to the sample recipe or detector mechanism]
Sample-supports are placed in a sample recipe in an n x m matrix form (figure 5). Full
information about the samples and their relative locations in the recipe is necessary to the
control system, so that the Cartesian manipulator can connect a specific sample-support
and move it to a pre-defined detector.
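The mapping from a recipe position (i, j) to a Cartesian target for the manipulator reduces to simple grid arithmetic; a sketch with illustrative origin and pitch values, not taken from the actual facility:

```python
def support_position(i, j, origin=(0.0, 0.0), pitch_x=30.0, pitch_y=30.0):
    """Cartesian target (x, y) of sample-support (i, j), with i the row and
    j the column of the n x m recipe matrix, counted from 1 as in figure 5.
    The origin is position (1, 1); the pitches are centre-to-centre spacings
    of the supports (illustrative values, in mm)."""
    x0, y0 = origin
    return (x0 + (j - 1) * pitch_x, y0 + (i - 1) * pitch_y)
```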
[Figure 5: sample recipe positions arranged as an n x m matrix, from 1.1 to n.m]
Figure 6: Manipulator and gripper motion: (a) approach, (b) rotation, (c) sample
deposition, (d) manipulator back to home position.
[Figure: sample-support drive - linear sensor, sample support, motor, structure]
U. Viaro
University of Udine, Udine, Italy
KEY WORDS: Model Reduction, Feedback, Time Lag, Step and Frequency Response.
ABSTRACT: Transportation lags are often present in manufacturing processes. The design
of the related control system is easier if reference is made to a proper rational approximation of
the transcendental transfer function of the delay elements. To this purpose, both an analytic
method and an empiric procedure are suggested. The first approximates the step response of
a unity-feedback system including the delay element in the direct path, whereas the second
is based on the direct inspection of the Bode phase plot of the delayor. The results compare
favorably with those obtained using the standard Pade technique.
1. INTRODUCTION
In the analysis and synthesis of control systems, the designer has often to deal with
delay phenomena, by which an input acting for t ≥ t0 affects the output only for
t ≥ t0 + T, where T is the so-called dead time or time lag. In terms of Laplace transforms,
the considered time-shift operation is represented by the factor e^{-Ts}, which is therefore
the transfer function of the delay element (delayor).
As is known, many manufacturing systems exhibit transportation lags; this is typically
the case when a piece is successively operated at different working stations located
along a conveyor: an operation executed at a station affects the operation performed
at the next one after a time interval that depends on the distance between the stations
and the speed of the conveyor.
The problem of approximating the transcendental transfer function e^{-Ts} by means of
a rational function has a long history (see, e.g., [1]). In the field of dynamic system
simulation, much work has been done since the Fifties (see, e.g., [2],[3],[4]), when the
main simulation tool was the electronic analog computer, particularly suited to
implementing circuits characterized by rational transferences. Today, simulations are usually
performed with the aid of digital computers, which are not subject to such limitations.
Nevertheless, the considered approximation problem is still of interest [5],[6],[7], with
particular regard to the synthesis of control systems using standard techniques that
are applicable only to finite-dimensional systems, i.e., systems characterized by rational
transfer functions.
Various methods have been suggested to derive a rational approximant of e^{-Ts}. The
most popular is probably the one based on the Pade procedure [7]. It consists in
determining a rational function G_P(s), with numerator of degree m and denominator of
degree n, whose first m + n + 1 MacLaurin series expansion coefficients match the
corresponding coefficients of

e^{-Ts} = Σ_{i=0}^{∞} (-T)^i s^i / i!        (1)
By this method, it is possible to obtain both proper (m ≤ n) and non-proper (m >
n) models with different characteristics. In particular, when m = n the magnitude
|G_P(jω)| of the frequency response of the Pade approximant is identically equal to 1,
like |e^{-Tjω}|, whereas for m = 0 (all-pole approximant) its step response is equal to zero
together with its first n - 1 derivatives at t = 0, like the step response of the ideal
delayor. On the other hand, the Pade procedure does not ensure the stability of the
approximant of a stable original system and, in fact, for certain values of m and n,
G_P(s) may have a non-Hurwitz denominator.
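The Pade construction can be reproduced numerically; a sketch using `scipy.interpolate.pade` for T = 1 and m = n = 2 (the choice of orders is ours), which also checks the unit-magnitude property noted above:

```python
import math
import numpy as np
from scipy.interpolate import pade

T = 1.0
# First m + n + 1 = 5 MacLaurin coefficients of e^{-Ts} = sum_i (-T)^i s^i / i!
taylor = [(-T) ** i / math.factorial(i) for i in range(5)]
p, q = pade(taylor, 2)        # numerator p and denominator q, both of degree 2

w = np.linspace(0.1, 3.0, 30)
G = p(1j * w) / q(1j * w)
mag_error = np.max(np.abs(np.abs(G) - 1.0))   # identically 1 when m = n
```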
The other approximation methods have either an empiric or an analytic basis and
can be classified according to the specific response considered (typically, the frequency
response e^{-Tjω} or the step response δ_{-1}(t - T); cf., e.g., [8]).
In this paper, two simple but effective procedures for approximating the transfer function of the delayor are presented. The first refers to the step response of a unity-feedback
system with the delayor in the forward path and leads to rational approximants that
match well the ideal frequency response (magnitude equal to 1 at all frequencies and
phase interpolating -Tw over a suitable frequency interval). The second is an empiric
procedure based on the inspection of the delayor's phase plot. The results obtained are
finally compared to those obtainable using the Pade technique.
Figure 1: Delay element inserted into a unity-feedback loop.
2. APPROXIMATION VIA FEEDBACK
It is easily seen that the step response w_{-1}(t) of a unity-feedback system with a delay
element in the direct path (Fig. 1) can be decomposed for t > 0 into a step and a
square wave in the case of negative feedback (Fig. 2(a)), or into a step, a ramp and a
saw-tooth wave in the case of positive feedback (Fig. 2(b)), whereas a periodic term
is not present in the step response of the isolated delayor. It is therefore natural to
approximate the feedback system of Fig. 1 by means of a system with (proper) rational
transfer function W(s) whose step response retains the non-periodic components of the
original response together with a suitable number of the first harmonics of the periodic
component. The (proper) rational approximant of e^{-Ts} will then be obtained as the
transfer function G(s) of the direct path of a unity-feedback system whose closed-loop
transfer function is W(s), i.e.,

G(s) = W(s) / (1 - W(s))        (2a)

for negative feedback, or

G(s) = W(s) / (1 + W(s))        (2b)

for positive feedback.
In the case of negative feedback (Fig. 2(a)), the Fourier expansion of the periodic component gives

w_{-1}(t) = (1/2) δ_{-1}(t) - (2/π) Σ_{i=1}^{∞} [1/(2i-1)] sin[(2i-1)πt/T],   t > 0.        (3)
By differentiating (3), transforming term by term, and retaining the first k terms in
the summation, we obtain the following rational approximant of even order 2k for the
closed-loop transfer function:
W_{2k}(s) = 1/2 - (2/T) Σ_{i=1}^{k} s / (s² + [(2i-1)π/T]²),        (4)
Figure 2: Step response w_{-1}(t) for (a) negative feedback and (b) positive feedback.
which may be rewritten as

W_{2k}(s) = 1/2 - N_{2k-1}(s) / D_{2k}(s),        (5)

where

N_{2k-1}(s) = (2/T) Σ_{i=1}^{k} s Π_{j≠i} {s² + [(2j-1)π/T]²}        (6)

and

D_{2k}(s) = Π_{i=1}^{k} {s² + [(2i-1)π/T]²}.        (7)

According to (2a), the even-order approximant of e^{-Ts} is then

G_{2k}(s) = [D_{2k}(s) - 2N_{2k-1}(s)] / [D_{2k}(s) + 2N_{2k-1}(s)],        (8)

which is a stable Blaschke product since: (i) its numerator and denominator have the
same even part D_{2k}(s) and opposite odd parts ±2N_{2k-1}(s), and (ii) the denominator
of (8) is Hurwitz because N_{2k-1}(s)/D_{2k}(s) appearing in (5) is an odd positive-real
function (cf. (4)-(7)) that may assume the value -1/2 only for Re[s] < 0.
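Equations (4)-(8) are easy to check numerically; a sketch (T and k chosen by us) that builds N_{2k-1}, D_{2k} and G_{2k} and verifies the all-pass property of the resulting Blaschke product:

```python
import numpy as np

def even_order_approx(T, k):
    """Numerator and denominator of G_2k(s) = (D - 2N)/(D + 2N), with
    D_2k and N_2k-1 built as in equations (6)-(7)."""
    P = np.polynomial.Polynomial
    s = P([0.0, 1.0])
    factors = [s**2 + ((2 * i - 1) * np.pi / T) ** 2 for i in range(1, k + 1)]
    D = P([1.0])
    for f in factors:
        D = D * f
    N = P([0.0])
    for i in range(k):
        term = P([0.0, 2.0 / T])            # the (2/T) s factor of each term
        for j in range(k):
            if j != i:
                term = term * factors[j]
        N = N + term
    return D - 2 * N, D + 2 * N

num, den = even_order_approx(T=1.0, k=2)
w = np.linspace(0.1, 2.0, 20)
G = num(1j * w) / den(1j * w)   # |G(jw)| = 1 at all frequencies; G(0) = 1
```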
Similarly, by expanding into Fourier series the periodic component of the step response
in Fig. 2(b) (positive feedback), we get

w_{-1}(t) = (1/T) δ_{-2}(t) - (1/2) δ_{-1}(t) + (1/π) Σ_{i=1}^{∞} (1/i) sin[2iπt/T],   t > 0.        (9)
Proceeding as before, the odd-order closed-loop approximant is

W_{2k+1}(s) = -1/2 + 1/(Ts) + N_{2k-1}(s)/D_{2k}(s),        (10)

where now

N_{2k-1}(s) = (2/T) Σ_{i=1}^{k} s Π_{j≠i} {s² + [2jπ/T]²},   D_{2k}(s) = Π_{i=1}^{k} {s² + [2iπ/T]²},        (11)

from which, according to (2b), the following odd-order approximant of e^{-Ts} is derived:

G_{2k+1}(s) = [-1/2 + 1/(Ts) + N_{2k-1}(s)/D_{2k}(s)] / [1/2 + 1/(Ts) + N_{2k-1}(s)/D_{2k}(s)].        (12)

In this case too, G_{2k+1}(s) turns out to be a stable Blaschke product.
3. A HEURISTIC PROCEDURE
As seen in the previous section, both the suggested approximant and the Pade one are
Blaschke products whose poles and zeros are in complex conjugate pairs, except for
one pole (and the corresponding zero) in the odd-order models. This leads to a perfect
reproduction of the magnitude of the delayor's frequency response and to a good fit of
its phase within a given frequency band.
The standard "asymptotic" approximation of the Bode phase diagram of these Blaschke products is formed by a sequence of segments of different slopes connecting points whose phase is 2ℓπ, ℓ = 0, 1, ..., k (even-order models) or 0 and (2ℓ+1)π, ℓ = 0, 1, ..., k (odd-order models). This observation suggests obtaining another Blaschke product approximating e^{-jωT} as follows:
(i) the Bode phase diagram of e^{-jωT} is subdivided into stripes of breadth 2π, except for the first stripe of breadth π in the case of odd-order approximants (Fig. 3);
(ii) the course of -Tω (as a function of log ω) in each stripe is approximated by a segment of suitable position and slope, so that the entire diagram can be considered the sum of a number of subdiagrams, each formed by two horizontal straight half-lines connected by such a slanted segment;
(iii) each subdiagram, except for the first in the case of odd-order approximants, is
then regarded as the standard approximation of the phase plot of a factor of the
type:
G_i(s) = \frac{s^2 - 2\zeta_i \omega_{ni} s + \omega_{ni}^2}{s^2 + 2\zeta_i \omega_{ni} s + \omega_{ni}^2},   (13)
where ω_{ni} is the abscissa of the central point of the slanted segment and ζ_i is chosen according to its slope; the first subdiagram of the odd-order approximants corresponds to a factor of the form

G_1(s) = \frac{1 - T_1 s}{1 + T_1 s}.   (14)

The overall approximant is then the product

G_H(s) = \prod_i G_i(s).   (15)
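The stripe construction can be sketched numerically. Assuming second-order all-pass factors of the form (s² - 2ζω_n s + ω_n²)/(s² + 2ζω_n s + ω_n²) together with one first-order factor (14), the product keeps unit magnitude while its total phase drops by π (first-order factor) plus 2π per second-order factor; the values of T₁, ω_n and ζ below are made up for illustration.

```python
import numpy as np

# One first-order all-pass factor and two assumed second-order all-pass
# factors; (wn, zeta) pairs are illustrative, not taken from the paper.
def GH(s, T1=0.5, factors=((1.0, 0.6), (3.0, 0.8))):
    g = (1 - T1 * s) / (1 + T1 * s)
    for wn, z in factors:
        g *= (s**2 - 2 * z * wn * s + wn**2) / (s**2 + 2 * z * wn * s + wn**2)
    return g

w = np.logspace(-2, 4, 4000)
vals = GH(1j * w)
ph = np.unwrap(np.angle(vals))
assert np.max(np.abs(np.abs(vals) - 1.0)) < 1e-12   # unit magnitude everywhere
# total phase drop: -pi from the first-order factor, -2*pi from each of the two others
assert abs(ph[-1] + 5 * np.pi) < 0.01
```

Each factor thus accounts for exactly one stripe of the phase diagram, which is the idea behind step (iii) above.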
4. CONCLUSIONS
The problem of approximating e-Ts by a rational function has often been considered
in the literature [1]-[7]. Two new solutions have been presented in this paper; they are
characterized by an easy implementation and a good accuracy.
The first method (Section 2) starts from the approximation of the step response of a
unity-feedback system with the delay element in the direct path. Both in the case of
negative and in that of positive feedback, this response contains a periodic component
that can be approximated by truncating its Fourier series expansion. The corresponding approximation of e^{-Ts} is finally obtained according to relation (2a) or (2b), and turns out to be a stable Blaschke product formed by pairs of complex conjugate poles (and zeros), except for one pole (and a zero) in the case of odd-order approximants.
An approximant of the same form has also been obtained in a heuristic way (Section 3)
by fitting a sequence of segments with suitable slopes to the Bode plot of the delayor's
phase. A term of type (13) or (14) is then associated with each segment, thus arriving
at the desired Blaschke product (15).
[Figure 3: subdivision of the delayor's Bode phase diagram into stripes (phase in degrees versus frequency in rad/sec).]
Figure 4: Phase deviations from -Tω for (a) the Pade model, (b) the model of Section 2, and (c) the heuristic model of Section 3.
REFERENCES
1. O. Perron, Die Lehre von den Kettenbrüchen. Stuttgart: Teubner, 1913; 3rd ed. 1957.
2. L. Storch, "Synthesis of constant-time-delay ladder networks using Bessel polynomials," Proc. IRE, vol. 42, no. 11, pp. 1666-1675, 1954.
3. W.J. Cunningham, "Time delay network for an analog computer," IRE Trans. Electronic Computers, vol. EC-3, no. 4, pp. 16-18, 1954.
4. C.H. Single, "An analog for process lag," Control Engineering, vol. 3, Oct. 1956.
5. K. Glover, J. Lam, and J. Partington, "Balanced realizations and Hankel-norm approximation of systems involving delays," in Proc. 25th Conf. on Decision and Control, Athens, Greece, 1986.
6. K. Glover, J. Lam, and J. Partington, "Rational approximation of a class of infinite-dimensional systems," Tech. Rep. CUED/F-INFENG/TR.20, Cambridge University, Engineering Department, 1988.
7. C. Glader, G. Hognas, P. Makila, and H. Toivonen, "Approximation of delay systems: a case study," Int. J. Control, vol. 53, pp. 369-390, 1991.
8. B. Liu, "A time-domain approximation method and its application to lumped delay lines," IRE Trans. Circuit Theory, vol. CT-9, no. 3, pp. 256-261, 1962.
W. Krajewski
Polish Academy of Sciences, Warsaw, Poland
A. Lepschy
University of Padua, Padua, Italy
U. Viaro
University of Udine, Udine, Italy
KEY WORDS: Linear Continuous-Time Multivariable Systems, Reduced-Order Models, Matrix Fraction Descriptions, Equation Error.
ABSTRACT: This paper is concerned with the problem of constructing reduced-order models of a stable continuous-time multivariable system by minimizing the L_2 norm of suitably weighted equation errors. To this purpose, the approximants are represented by either left or right matrix fraction descriptions. According to the adopted weighting and approximant description, the resulting model matches different sets of both first- and second-order information indices of the original system, which allows great flexibility in the choice of the order and characteristics of the approximant.
1. INTRODUCTION
some meaningful parameters of the original system [2] and those based on the minimization of an approximation error norm [3].
In the recent literature, remarkable interest has been devoted to the construction of
reduced-order models that preserve a set of both first- and second-order information
indices of a given original system [4].
The first-order information is usually provided by Markov parameters and/or time moments. In the case of continuous-time systems, the former correspond to the coefficients of the asymptotic series expansion of the transfer function, whereas the latter correspond to the coefficients of its Maclaurin series expansion.
The second-order information is usually provided by entries of the impulse-response
Gramian or of the Gram matrix [5]. The "essential" second-order information is supplied by the energies of the impulse response and its derivatives or the transient parts
of its integrals, which correspond to the diagonal entries of the mentioned matrices. In
fact, all the other entries can be obtained from these and from a corresponding set of
first-order indices.
The last methods are computationally simple and lead to stable reduced models of a
stable original system. The present authors have been concerned with such techniques
[6], [7] using a direct approach according to which the parameters of an input-output
description of the reduced model are computed from the first- and second-order information indices to be retained.
The same results can be obtained by minimizing the L 2 norm of a weighted equation
error, instead of the norm of the output error (which would be more demanding from
the computational point of view). This approach is adopted in the following with reference to multi-input multi-output (MIMO) continuous-time reduced models represented
by either a left or a right matrix fraction description (LMFD or RMFD, respectively).
It is shown that RMFD's are more convenient when the number of inputs is smaller
than the number of outputs. Moreover, a greater flexibility of the reduction procedure
is achieved by considering suitably weighted equation errors.
2. NOTATION AND PROBLEM STATEMENT
The problem considered in this paper is that of determining a reduced model of a
given linear, time-invariant, asymptotically stable, strictly proper original system of
order n (dimension of a minimal realization) with m_i inputs and m_o outputs. Its (transformed) output vector y(s) is related to its (transformed) input vector u(s) by the transfer-function matrix W(s), i.e.,

y(s) = W(s) u(s),   (1)

where W(s) has McMillan degree n.
The corresponding time-domain functions will be denoted by u(t), y(t) and W(t); the last is thus the m_o x m_i impulse-response matrix. A minimal realization of (1) will be denoted by

\dot{x}(t) = F x(t) + G u(t),   (2)

y(t) = H x(t),   (3)

where x(t) is the state vector of dimension n.
The approximant (with the same numbers of inputs and outputs) will be represented by an LMFD as

y_a(s) = A^{-1}(s) B(s) u(s)   (4)

or by an RMFD as

y_a(s) = \bar{B}(s) \bar{A}^{-1}(s) u(s),   (5)

where A(s), B(s), \bar{A}(s) and \bar{B}(s) are the following matrix polynomials:

A(s) = A_q s^q + A_{q-1} s^{q-1} + ... + A_1 s + A_0,   (6)

B(s) = B_{q-1} s^{q-1} + B_{q-2} s^{q-2} + ... + B_1 s + B_0,   (7)

\bar{A}(s) = \bar{A}_q s^q + \bar{A}_{q-1} s^{q-1} + ... + \bar{A}_1 s + \bar{A}_0,   (8)

\bar{B}(s) = \bar{B}_{q-1} s^{q-1} + \bar{B}_{q-2} s^{q-2} + ... + \bar{B}_1 s + \bar{B}_0.   (9)
If the above MFDs are irreducible, then the order of any minimal realization of the reduced model is equal to the degree m_o q of det A(s) or m_i q of det \bar{A}(s), respectively [8]. In order to ensure true simplification, the degree q must therefore satisfy the inequality q < n/m_o or q < n/m_i, respectively. As a consequence, the order of the reduced model is a multiple of m_o in the case of LMFDs and a multiple of m_i in the case of RMFDs. These considerations are in favour of LMFDs when m_o < m_i and of RMFDs when m_i < m_o.
Under the adopted assumptions, W(s) admits both the asymptotic series expansion

W(s) = \sum_{i=1}^{\infty} C_{-i} s^{-i},   (10)

whose coefficients are the so-called Markov parameters, and the Maclaurin series expansion

W(s) = \sum_{i=0}^{\infty} C_i s^i.   (11)
As is known, the Markov parameters C_{-i} are equal to the coefficients of t^{i-1}/(i-1)! in the Maclaurin series expansion of W(t), i.e.,

C_{-i} = W^{[i-1]}(0),   (12)

where the exponent between brackets denotes the order of differentiation. Instead, the coefficients C_i with nonnegative subscripts are related to the system time moments M_i as follows:

C_i = \frac{(-1)^i}{i!} M_i, \qquad M_i = \int_0^{\infty} t^i W(t) \, dt.   (13)
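The relations (12)-(13) can be checked on a minimal realization: for a stable triple {F, G, H}, the Markov parameters are H F^{i-1} G, while the Maclaurin coefficients and moments follow from C_i = -H F^{-(i+1)} G and M_i = i! H (-F)^{-(i+1)} G. The sketch below (an illustration, not from the paper) verifies this for a toy SISO system.

```python
import numpy as np
from scipy.linalg import expm

# Toy SISO system with W(s) = 1/(s^2 + 3s + 2) = H (sI - F)^{-1} G;
# the numbers are illustrative, not taken from the paper.
F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])

# Markov parameters C_{-i} = W^{[i-1]}(0) = H F^{i-1} G
markov = [float(H @ np.linalg.matrix_power(F, i - 1) @ G) for i in (1, 2)]
# Maclaurin coefficient C_0 = -H F^{-1} G and first moment M_1 = 1! H (-F)^{-2} G
C0 = float(-H @ np.linalg.inv(F) @ G)
M1 = float(H @ np.linalg.matrix_power(np.linalg.inv(-F), 2) @ G)

# cross-check M_1 = \int_0^inf t W(t) dt with W(t) = H e^{Ft} G (trapezoidal rule)
t = np.linspace(0.0, 40.0, 8001)
f = np.array([ti * float(H @ expm(F * ti) @ G) for ti in t])
quad = float(np.sum((t[1:] - t[:-1]) * (f[1:] + f[:-1]) / 2.0))

assert markov == [0.0, 1.0] and abs(C0 - 0.5) < 1e-12
assert abs(M1 - 0.75) < 1e-12 and abs(quad - M1) < 1e-4
```

Here W(t) = e^{-t} - e^{-2t}, so the quadrature gives an independent confirmation of the moment formula.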
The reduced model will be derived by minimizing the (squared) L_2 norm of the weighted equation error

E_k(s) = \frac{1}{s^k} [A(s) W(s) - B(s)]   (14)

or

\bar{E}_k(s) = \frac{1}{s^k} [W(s) \bar{A}(s) - \bar{B}(s)].   (15)

Obviously, for k = 0 the standard equation error is obtained. As is known, the expression "equation error" is due to the fact that the reduced transfer matrix W_a(s) satisfies the equations A(s)X(s) - B(s) = 0 and X(s)\bar{A}(s) - \bar{B}(s) = 0, whereas W(s) does not, so that E_0(s) and \bar{E}_0(s) are the deviations from zero of the respective left-hand sides of these equations when X(s) = W(s). The error criteria are thus

J_k = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{tr}[E_k(j\omega) E_k^*(j\omega)] \, d\omega   (16)

or

\bar{J}_k = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{tr}[\bar{E}_k(j\omega) \bar{E}_k^*(j\omega)] \, d\omega.   (17)
3. MAIN RESULTS
Let us form recursively the sequences of functions:
w_,(t)
= j~ w-(t-l)(r)dr,
Wi() = dWt-I(t)
I t
dt
'
z>
l>0
(18)
(19)
Wo(t)
= W(t).
629
(20)
The entries of (18) correspond to the transient part of the system responses to the
canonical inputs (integrals of the impulse), whereas the entries of (19) are the responses
to the derivatives of the impulse.
From the above functions, the following m_o x m_o matrices of second-order indices are obtained:

P_{ij} = \int_0^{\infty} W_i(t) W_j^T(t) \, dt = P_{ji}^T,   (21)
where the block matrix

P_{[-k:q-k]} = [P_{ij}], \qquad i, j = -k, ..., q-k,   (22)

can be computed as

P_{[-k:q-k]} = O_{[-k:q-k]} W_c O_{[-k:q-k]}^T,   (23)

with

O_{[-k:q-k]} = \begin{bmatrix} H F^{-k} \\ \vdots \\ H F^{q-k} \end{bmatrix},   (24)

and W_c is the controllability Gramian associated with the original system (2)-(3), which is the solution of the Lyapunov equation:

F X + X F^T + G G^T = 0.   (25)
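The Gramian route to the second-order indices can be sketched numerically: solving (25) and forming H W_c H^T yields the impulse-response energy P_{00} = \int W(t) W^T(t) dt. The toy system below (illustrative values, not from the paper) has W(t) = e^{-t} - e^{-2t}, whose energy is 1/2 - 2/3 + 1/4 = 1/12.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy system: W(s) = 1/(s^2 + 3s + 2), i.e. W(t) = e^{-t} - e^{-2t}
F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])

# solve F Wc + Wc F^T + G G^T = 0  (equation (25))
Wc = solve_continuous_lyapunov(F, -G @ G.T)
P00 = float(H @ Wc @ H.T)      # impulse-response energy, a diagonal entry of (22)
assert abs(P00 - 1.0 / 12.0) < 1e-10
```

This is the "essential" second-order information mentioned in the Introduction: the energies of the impulse response and of its derivatives/integrals are the diagonal blocks of the matrix in (22).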
In order for index (16) to be finite, it is necessary that:

B_{k-j} = \sum_{i=0}^{k-j} A_i C_{k-i-j}, \qquad j = 1, 2, ..., k,   (26)

B_{k+j} = \sum_{i=k+1+j}^{q} A_i C_{j-i+k}, \qquad j = 0, 1, ..., q-k-1.   (27)
In this case, by denoting the row block vector of the matrix coefficients A_i in (6) by

A = [A_0, A_1, ..., A_q] \in R^{m_o \times m_o(q+1)},   (28)

index (16) can be written as the quadratic form

J_k = \mathrm{tr}\{A P_{[-k:q-k]} A^T\}.   (29)
If (22) is positive definite, quadratic form (29) admits a unique minimum corresponding to the solution of the set of linear equations:

\sum_{i=0}^{q} A_i P_{i-k, j-k} = 0, \qquad j = -k, -k+1, ..., q-k.   (30)

Analogously, for the RMFD case the block matrix of second-order indices is

\bar{P}_{[-k:q-k]} = \begin{bmatrix} \bar{P}_{-k,-k} & \cdots & \bar{P}_{-k,q-k} \\ \vdots & & \vdots \\ \bar{P}_{q-k,-k} & \cdots & \bar{P}_{q-k,q-k} \end{bmatrix},   (31)

\bar{P}_{[-k:q-k]} = \bar{P}_{[-k:q-k]}^T \in R^{m_i(q+1) \times m_i(q+1)},   (32)
which can be computed as

\bar{P}_{[-k:q-k]} = C_{[-k:q-k]}^T W_o C_{[-k:q-k]},   (33)

where

C_{[-k:q-k]} = [F^{-k} G \ \cdots \ F^{q-k} G]   (34)

and W_o is the observability Gramian of (2)-(3), solution of

F^T X + X F + H^T H = 0.   (35)
In order for index (17) to be finite, it is necessary that:

\bar{B}_{k-j} = \sum_{i=0}^{k-j} C_{k-i-j} \bar{A}_i, \qquad j = 1, 2, ..., k,   (36)

\bar{B}_{k+j} = \sum_{i=k+1+j}^{q} C_{j-i+k} \bar{A}_i, \qquad j = 0, 1, ..., q-k-1.   (37)
In this case, by denoting the block vector of the matrix coefficients \bar{A}_i in (8) by

\bar{A}^T = [\bar{A}_0^T, \bar{A}_1^T, ..., \bar{A}_q^T] \in R^{m_i \times m_i(q+1)},   (38)

index (17) can be written as

\bar{J}_k = \mathrm{tr}\{\bar{A}^T \bar{P}_{[-k:q-k]} \bar{A}\}.   (39)
If (32) is positive definite, the unique minimum of (39) is attained for those values of the \bar{A}_i satisfying

\sum_{j=0}^{q} \bar{P}_{i, j-k} \bar{A}_j = 0, \qquad i = -k, -k+1, ..., q-k.   (40)

A reduced model of form (5) is thus obtained by normalizing all coefficients \bar{A}_i to one of them, e.g., setting \bar{A}_q = I, by solving (40) for the remaining \bar{A}_i after discarding the last matrix equation, and by computing the coefficients \bar{B}_i according to (36)-(37).
In this case, the reduced model can be realized by the triple {F_a, G_a, H_a}, where F_a is the m_i q x m_i q block companion matrix whose last block column consists of (from top) -\bar{A}_0, -\bar{A}_1, ..., -\bar{A}_{q-1}, G_a^T = [0 ... I ... 0] with I in the (k+1)-th position, and H_a is the m_o x m_i q row block vector [-C_{k-1}, ..., -C_0, C_{-1}, ..., C_{-q+k}].
Again, if the pair {F_a, H_a} is observable, the approximant is asymptotically stable and matches the relevant first- and second-order information indices of the original system.
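For the scalar case (m_i = 1), the block companion matrix F_a reduces to the ordinary companion form; the sketch below (illustrative, with the normalization \bar{A}_q = 1 as in the text) builds it and checks that its eigenvalues are the roots of A(s).

```python
import numpy as np

# Scalar specialization: for monic A(s) = s^q + A_{q-1} s^{q-1} + ... + A_0,
# the companion matrix carries -A_0, ..., -A_{q-1} in its last column.
def companion(low_to_high):              # [A_0, ..., A_{q-1}], with A_q = 1
    q = len(low_to_high)
    Fa = np.zeros((q, q))
    Fa[1:, :-1] = np.eye(q - 1)          # sub-diagonal identity blocks
    Fa[:, -1] = -np.asarray(low_to_high)
    return Fa

Fa = companion([2.0, 3.0])               # A(s) = s^2 + 3s + 2
eigs = sorted(np.linalg.eigvals(Fa).real)
assert np.allclose(eigs, [-2.0, -1.0])   # poles of the (reduced) model
```

The same layout, with m_i x m_i blocks in place of scalars, gives the F_a described above.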
4. CONCLUSIONS
Some model reduction techniques considered with interest in the recent literature can
be regarded as particular cases of a general procedure, based on the minimization of
the weighted equation error norms (16) and (17), from which new effective variants
can be derived. The adopted approach refers to matrix fraction descriptions of MIMO
continuous-time systems and is characterized by remarkable flexibility, strictly related
to the choice of k in (14) and (15).
References
1. Fortuna L., Nunnari G., and Gallo A.: Model Order Reduction Techniques with
Applications in Electrical Engineering, Springer-Verlag, London, 1992.
2. Bultheel A. and Van Barel M.: Pade techniques for model reduction in linear
system theory: a survey, J. Comp. Appl. Math., vol. 14, 1986, 401-438.
3. Glover K.: All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds, Int. J. Control, vol. 39, no. 6, 1984, 1115-1193.
4. de Villemagne C. and Skelton R.E.: Model reductions using a projection formulation, Int. J. Control, vol. 46, no. 6, 1987, 2141-2169.
5. Sreeram V. and Agathoklis P.: On the computation of the Gram matrix in the
time domain and its application, IEEE Transactions on Automatic Control, vol.
AC-38, no. 9, 1995, 1516-1520.
6. Krajewski W., Lepschy A., and Viaro U.: Reduction of linear continuous-time multivariable systems by matching first- and second-order information, IEEE Transactions on Automatic Control, vol. 39, no. 10, 1994, 2126-2129.
7. Krajewski W., Lepschy A., and Viaro U.: Model reduction by matching Markov
parameters, time moments, and impulse-response energies. IEEE Transactions on
Automatic Control, vol. 40, no. 5, 1995, 949-953.
8. Kailath T.: Linear Systems, Prentice-Hall, Englewood Cliffs, NJ, 1980.
problems of this kind. For a review of these models the reader is referred, among others, to [6], [2] and the references therein. In this paper, uncertainties are modeled in a different way, as was done in [3]. Production and demand are assumed to have a known range of allowed values, but no knowledge is given on which allowed value will actually be taken. These unknown-but-bounded specifications for uncertainties are quite realistic in several situations. In general, upper and lower bounds for production yields and demands can be inferred from historical data or from decision makers' experience much more easily and with much more confidence than empirical probability distributions for the same quantities. In other cases, these bounds are explicitly stipulated in supply contracts. In this paper we report, in an abridged form, the results presented in [4], to which the reader is referred for proofs, details and a complete list of references.
The discrete-time dynamic model that describes our class of systems has the form

x(t+1) = x(t) + B u(t) + E d(t),   (1)
where B and E are assigned matrices, x(t) is the system state whose components
represent the storage levels in the system warehouses, u(t) is the control, representing
controlled resource flows between warehouses, and d(t) is an unknown external signal
representing the demand, or more in general non-controllable flows. We assume that
the following constraints are assigned. Both storage levels and control components
must be nonnegative and upper bounded:
x(t) \in X = \{x \in R^n : 0 \le x \le x^+\},   (2)

u(t) \in U = \{u \in R^q : 0 \le u \le u^+\};   (3)

the external uncontrolled inputs are unknown but each included between known bounds:

d(t) \in D = \{d \in R^p : d^- \le d \le d^+\},   (4)

where x^+, u^+, d^- and d^+ are assigned vectors.
The state variables of the system represent the amounts of certain resources in their warehouses, each one represented by a node. These resources are raw materials, intermediate and finished products, as well as any other resource used in the production processes. Production processes are represented as "flow units". Formally, a flow unit is an activity which, in unit time, takes amounts μ_i, i = 1, ..., k, of resources from k source warehouses (s_1, ..., s_k) and generates amounts ν_j, j = 1, ..., h, of products in destination warehouses. Such a flow unit can be associated with a hyper-arc (a subset of nodes, each associated with a certain coefficient, see Fig. 1). Both source and destination nodes may be non-homogeneous: for instance, in the source we can consider materials, capital, auxiliary goods, tools, work force, while in the destination nodes we can have both final products and production remains.
In the example of Fig. 1, the state x has three components, the control u four and the demand d five, so that B \in R^{3 \times 4} and E \in R^{3 \times 5}.
In this setting, a game between two players P and Q is considered. At each time,
player P decides the flow u(t) and player Q decides the flow d(t). The information
about X, U and D is known to each player. The game is the following. For a certain
initial distribution x(O) of the commodity within the nodes at time t = 0, player P
first chooses a flow distribution u(O) according to (3), and then player Q chooses a
flow distribution d(O) according to (4). These moves produce a new distribution x(1)
of the commodity according to (1). Then the two players choose new flows u(1) and
d(1) in their feasible ranges in order to produce x(2), and so on. The basic problem
considered in this paper is that of finding a feasible initial condition set and a feasible feedback control for player P, that is, a function of the form u(t) = Φ(x(t), t), such that the constraints (2)-(4) are always fulfilled.
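The game can be sketched on a hypothetical one-warehouse instance; none of the numbers below come from the paper. Player P applies a strategy that aims the pre-demand level at a fixed target, and the constraints stay satisfied at every round even for the slowest demand.

```python
# Hypothetical one-warehouse instance of (1): x(t+1) = x(t) + u(t) - d(t),
# with X = [0, 10], U = [0, 3], D = [1, 2]; made-up data for illustration.
x_max, u_max, d_lo, d_hi = 10.0, 3.0, 1.0, 2.0
x_bar = d_hi                       # target pre-demand level

def phi(x):
    # simple strategy: move the pre-demand level toward x_bar, within U
    return min(max(x_bar - x, 0.0), u_max)

x, trace = 8.0, [8.0]              # initial storage level inside [d_hi, x_max]
for t in range(12):
    x = x + phi(x) - d_lo          # d = d_lo is the slowest-converging demand
    assert 0.0 <= x <= x_max       # constraints of type (2)-(3) hold each round
    trace.append(x)
# the storage settles at or below d_hi - d_lo = 1 in a finite number of rounds
assert all(xt <= 1.0 + 1e-12 for xt in trace[7:])
```

This mirrors the structure of the game: at each time P commits u(t) first, then the adversary Q picks d(t) within its bounds.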
Definition 1 Given the constraints (2), (3) and (4), we say that Φ : X x N → U is an admissible control strategy, and that X_0 ⊆ X is an admissible initial condition set, if, for every x(0) ∈ X_0 and every d(t) as in (4), the solutions of (1) with u(t) = Φ(x(t), t) are always feasible, in the sense that u(t) ∈ U and x(t) ∈ X for all t ≥ 0.
It is clear that the permanence of the state x(t) and the control u(t) in their allowable ranges is a basic condition for the system. Our next problem is to choose, among all the admissible controls, the one which minimizes the storage amount, i.e. the work-in-progress (WIP). To this aim, we introduce the following definition.
Definition 2 The vector \bar{x} \le x^+ is a feasible storage level (FSL) if there exists an admissible control Φ such that

x(t) \le \bar{x} \quad for all t \ge 0   (5)

for some initial condition x(0) ∈ X_0 and for all d(t) as in (4).
The cost c^T \bar{x} is introduced for the FSL, where c ∈ R^n is an assigned vector with positive components. We assume now that the largest admissible initial condition set X_0 is given and we consider the problem of finding the optimal FSL, and of driving the state below the optimal FSL.
Problem 1 Find (if it exists) the feasible storage level x^{opt} of minimum cost and find (if it exists) a feedback control Φ such that, for all admissible initial conditions x(0) ∈ X_0 and all d(t) ∈ D (cf. (4)),

\limsup_{t \to \infty} x_i(t) \le x_i^{opt}.   (6)

Clearly, since d(t) is an uncertain sequence, the state evolution x(t) may have no limit; thus we had to introduce the limit superior. The concept above roughly means that, in the worst case of d, the state converges to a level which is below x^{opt}. We will show that actually the condition x(t) \le x^{opt} may be assured in a finite number of steps.
For S ⊆ R^n, define the erosion of X by S as

X_S = \{x \in R^n : x + s \in X \ \forall s \in S\}.   (7)
Theorem 1 [3], [4] There exist an admissible control and an admissible initial condition set if and only if the following two conditions are satisfied:

X_{ED} \ne \emptyset,   (8)

ED \subseteq -BU.   (9)

Moreover, the largest initial condition set (i.e. the set of all the initial conditions) for which the game is favorable to player P is given by

X_0 = X_{ED},   (10)

and any function Φ such that

Φ(x) \in U   (11)

and

x + B Φ(x) \in X_{ED} \quad for all x \in X_0, \ t \ge 0,   (12)

is an admissible control.
Now we are able to provide an expression for the optimal storage level x^{opt}. Then we give necessary and sufficient conditions for the existence of a control Φ that drives the state to the optimal level, and we show that convergence occurs in a finite number of steps. For convenience, for a, b ∈ R^n, we denote the parameterized hyper-box

X(a, b) \doteq \{x \in R^n : a \le x \le b\}.   (13)
Define, for each component i,

\delta_i^- = \min_{d \in D} E_i d \quad and \quad \delta_i^+ = \max_{d \in D} E_i d,   (14)

where E_i denotes the i-th row of E.

Theorem 2 [4] The optimal feasible storage level is

x^{opt} = \delta^+ - \delta^-.   (15)
The Theorem above points out the following property. The set X(0, x^{opt}) is independent of c > 0. Furthermore its erosion reduces to a singleton

\bar{x} \doteq -\delta^-.   (16)

The vector \bar{x} characterizes what we call the central strategy: the control is obtained by pointing to the central state \bar{x}, in the sense that u(x) is selected in such a way that

u = u(x) : \ x + B u(x) = \bar{x}.   (17)

Selecting the control in this way is both a necessary and a sufficient condition to keep (in the worst-case sense) the state below the minimal level, say in X(0, x^{opt}). Let us now consider the problem of the convergence of the storage amounts to the minimal levels.
In order to exclude trivial cases, in the following we assume

Assumption 1 \mathcal{X}^{opt} \doteq X(0, x^{opt}) \ne X.

If \mathcal{X}^{opt} = X then the problem of convergence does not arise and the only feasible strategy is the central one. Under Assumption 1, there exist admissible initial states which are not in X(0, x^{opt}). This also implies that there exists x_0 ∈ X_0 such that x_0 \notin \mathcal{X}^{opt} (see [4]). Define the vector θ as
θ = x^+ - \delta^+ + \delta^- \ge 0.   (18)
Under Assumption 1, the vector θ ≥ 0 has at least one positive component. The next theorem gives a necessary and sufficient condition for the existence of a control law which drives the states inside \mathcal{X}^{opt} = X(0, x^{opt}) from all admissible initial conditions.

Theorem 3 [4] There exists an admissible control strategy such that condition (6) holds for every initial state x(0) ∈ X_0 and every sequence d(t) as in (4), if and only if the conditions of Theorem 1 hold and there exists ε > 0 such that

ED + εZ \subseteq -BU, \quad where \quad Z = X(0, θ) = \{z : 0 \le z \le x^+ - \delta^+ + \delta^-\}.   (19)
To introduce a control law which assures convergence to the least storage level we need to consider the set

Ω(x) = \{u \in U : x + B u \in X_{ED + εZ}\};   (20)

any selection Φ : X → R^q such that

Φ(x) \in Ω(x)   (21)

assures the required convergence. Moreover, denoting by \lceil μ \rceil the smallest integer greater than or equal to μ, if a control law of this form is applied, then for all initial conditions x(0) ∈ X_0 at most

T = 1 + \lceil 1/ε \rceil

steps are needed to drive x(t) into \mathcal{X}^{opt} and keep it inside for t ≥ T. The control implementation requires solving on-line the linear programming (LP) problem (20).
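An on-line selection of this kind is indeed a small LP. The sketch below (made-up two-warehouse data, not the paper's example) picks u ∈ U minimizing the Chebyshev distance of x + Bu from a target level, which is one simple way to realize a selection in the spirit of (20)-(21).

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: two warehouses, two controlled outflows (B = -I)
B = -np.eye(2)
x = np.array([5.0, 3.0])
x_bar = np.array([2.0, 2.0])      # target level
u_max = np.array([3.0, 3.0])

# variables z = (u1, u2, t): min t  s.t.  -t <= x + Bu - x_bar <= t, 0 <= u <= u_max
c = np.array([0.0, 0.0, 1.0])
A_ub = np.block([[B, -np.ones((2, 1))], [-B, -np.ones((2, 1))]])
b_ub = np.concatenate([x_bar - x, x - x_bar])
bounds = [(0.0, u_max[0]), (0.0, u_max[1]), (0.0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

assert res.success and res.fun < 1e-8          # target reached exactly (t = 0)
assert np.allclose(res.x[:2], [3.0, 1.0], atol=1e-6)
```

Problems of this size (a few variables and two-sided constraints) are solved essentially instantaneously, in line with the remark on implementation cost.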
3. EXAMPLE
Let us consider the example in Section 1, with the following data:

x^+ = [130 \ 120 \ 150]^T, \quad u^+ = [170 \ 50 \ 100 \ 70]^T,

d^- = [15 \ 20 \ 60 \ 0 \ 0]^T, \quad d^+ = [25 \ 30 \ 80 \ 20 \ 10]^T.

The vectors \delta^+ and \delta^- are

\delta^+ = [-15 \ -20 \ -30]^T, \quad \delta^- = [-45 \ -40 \ -80]^T,

so that

x^{opt} = \delta^+ - \delta^- = [30 \ 20 \ 50]^T, \quad θ = x^+ - \delta^+ + \delta^- = [100 \ 100 \ 100]^T.
The largest ε such that ED + εZ \subseteq -BU with Z = X(0, θ) is ε = 0.3. Now \lceil 1/ε \rceil = 4, thus at most 5 steps are necessary for convergence. For all x(0) ∈ X_0, the state evolution is bounded by the following inequalities if the strategy in (21) is applied:

0 \le \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix} \le \begin{bmatrix} 130 - 25t \\ 120 - 25t \\ 150 - 25t \end{bmatrix}, \qquad t = 1, 2, 3, 4.   (22)
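The example's figures can be checked with a few lines of arithmetic (all vectors as printed above):

```python
import math

# data of the example, as printed in the paper
x_plus      = [130, 120, 150]
delta_plus  = [-15, -20, -30]
delta_minus = [-45, -40, -80]

x_opt = [p - m for p, m in zip(delta_plus, delta_minus)]                    # eq. (15)
theta = [xp - p + m for xp, p, m in zip(x_plus, delta_plus, delta_minus)]  # eq. (18)
assert x_opt == [30, 20, 50] and theta == [100, 100, 100]

eps = 0.3
T = 1 + math.ceil(1 / eps)     # worst-case number of convergence steps
assert T == 5
# the bound (22) shrinks by theta/4 = 25 per step and meets x_opt exactly at t = 4
assert [xp - 4 * 25 for xp in x_plus] == x_opt
```

Note that the upper bounds in (22) reach x^{opt} at t = 4, consistently with convergence in at most 5 steps.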
Finally, the control law is obtained by solving for each x the problem in (20). Note that this problem is an LP problem in 4 variables with 7 two-sided constraints. Problems of this kind are solved almost immediately, even by a personal computer.
4. CONCLUSIONS
We have considered the problem of driving the storage amount of a production-distribution system below the minimal feasible level in the presence of uncertain demand whose bounds are known. A control strategy which assures convergence in the worst-case sense has been proposed. The results described in this paper can be extended in several directions. For instance, the problem in which failures may occur in the system and the cases in which some resources admit integer values only can be considered. The reader is referred to [4] for more details on these further developments.
Acknowledgment. Work supported by C.N.R. under grant 94.00543.CT11.
References
[1] J. E. ARONSON, A survey of dynamic network flows, Annals of Operations Research, Vol. 20 (1989), pp. 1-66.
[2] D. P. BERTSEKAS, Dynamic Programming and Optimal Control, Athena Scientific, Belmont, Massachusetts, 1995.
[3] F. BLANCHINI, F. RINALDI AND W. UKOVICH, A network design problem for a distribution system with uncertain demands, SIAM J. on Optimization, to appear.
[4] F. BLANCHINI, F. RINALDI AND W. UKOVICH, Least storage control of multi-inventory systems with non-stochastic unknown inputs, submitted.
[5] J. B. ORLIN, Minimum convex cost dynamic network flow, Mathematics of Operations Research, Vol. 9 (1984), pp. 190-207.
[6] E. L. PORTEUS, Stochastic inventory theory, in: D. P. HEYMAN, M. J. SOBEL, Eds., Handbooks in Operations Research and Management Science, Vol. 2: Stochastic Models, North-Holland, 1990.
Kruth [3], by Lart [4], the component proposed by Schmidt [5] and the model by Childs
[6].
However, no analysis has been carried out to qualify the RP-prototype in terms of free-form
surfaces and the geometric feature complexity of real components [5 - 6].
Difficulties in establishing the accuracy of free-form surfaces with coordinate measuring
machines (CMM) can be summarised in:
a) no single benchmark can present all geometric features of "real parts" (thin walls, flat
surfaces, holes, etc.);
b) a measuring strategy shall be defined in order to assure the repeatability of the measuring conditions if a number of real parts, representative of the current production, is selected.
In this paper the authors propose a method, a measuring strategy and a data analysis, as a basis for the design of a comparative benchmark to evaluate the dimensional quality of a prototype without any limitations of form, dimension and complexity. The universality of the principle, based on a single parameter to express geometrical accuracy, and the availability of software and hardware facilities should promote process capability studies of any free-form technology.
2. METHOD
The measurement principle is based on comparing an actual point probed on the surface of
the prototype with the corresponding nominal point belonging to the reference model
surface. Such a measurement should be performed independently from position, extension
and curvature of the surface in order to assure the minimum uncertainty of results and to
compare results of different geometrical features.
The characteristics of the method have been identified as follows:
I) the physical reference model shall be substituted by a numerical model of the surface to
be measured,
II) the direction of probing any surface feature must be normal to the nominal surface,
III) the evaluation of deviation is to be performed by calculating the orthogonal distance
(normal deviation) between actual and nominal points.
Thus no substitute element has to be defined and fitted into the probed points.
Hardware and software facilities based on a Zeiss UMM550 coordinate measuring machine
equipped with the HOLOS software package have made such a method now available [7].
The CAD model is imported via the VDA protocol into a graphic interactive user interface, HOLOS, from where CNC-controlled measurements can be automatically defined. Single-point measurements as well as continuous scanning are performed.
Single-point measurements are defined by grid points P_CAD (see fig. 1) on the surface of the CAD model. Spatial coordinates and normal vectors are then established. Measuring paths are automatically defined to probe the points with respect to the normal vector directions. After probing, the normal deviation (Δ) between the nominal point P_CAD and the estimated point P_M (see figs. 2, 3) is evaluated as:
Δ = (X_M - X_CAD) \cos X + (Y_M - Y_CAD) \cos Y + (Z_M - Z_CAD) \cos Z   (1)
where cos X, cos Y and cos Z are the direction cosines of the normal vector in P_CAD = (X_CAD, Y_CAD, Z_CAD).
A positive value of this parameter has to be interpreted as exceeding material over the nominal surface.
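The evaluation of (1) is a projection of the probe displacement onto the nominal surface normal, which can be sketched in a few lines; the points and normal below are made-up values, not measurement data.

```python
import numpy as np

# Normal deviation: projection of (P_M - P_CAD) onto the nominal normal,
# whose components are the direction cosines (cos X, cos Y, cos Z).
def normal_deviation(p_cad, p_m, normal):
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)         # ensure unit normal (direction cosines)
    return float(np.dot(np.asarray(p_m, float) - np.asarray(p_cad, float), n))

# made-up example: nominal point at origin, normal along Z
d = normal_deviation([0.0, 0.0, 0.0], [0.1, 0.2, 0.05], [0.0, 0.0, 1.0])
assert abs(d - 0.05) < 1e-12       # positive: material exceeding the nominal surface
```

The in-plane components of the probe displacement (0.1 and 0.2 here) do not affect Δ, which is exactly why the deviation is comparable across surfaces of different position and curvature.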
The probed point P_P does not coincide with the target point P_R, corresponding to P_CAD on the actual surface, or with the estimated point P_M (see fig. 3). The unknown error e_Δ can be assumed to be a function of: i) the curvature of the actual surface, ii) the angle between the normal vector in P_R and the direction of probing, iii) the radius of the probe sphere and iv) the friction between the probe sphere and the workpiece surface.
The error e_Δ is of great importance in free-form surface measurement, as factors i), ii) and iv) vary continuously at any point probed, increasing the uncertainty of the result and thus reducing the possibility of comparing results relevant to prototypes of different shape and complexity [8].
Normal probing represents the condition to minimise and control e_Δ, as it limits the variability of factors i), ii) and iv).
3. PROCEDURE
The measurement strategy is shown in figures 4, 5 and 6.
The first phase (see fig.4) is aimed at selecting surfaces on the prototype. Here a surface is
established by one or more surface features as defined in the ISO 5459 [9].
Surfaces are classified into three categories in terms of curvature, spatial alignment, normal deviation and complexity degree: class A, class B and class C. Class A and B surfaces are used to align the CNC coordinate system to the reference CAD system (Σ_CAD) via a 3D best fit of probed points (see fig. 5). Depending on the size of the prototype, the number of probings (single-point) is fixed within the range 500-1000 to limit the required measuring time to 3 hours; the percentage of probings assigned to class A and B surfaces is approximately 50%.
As a result, form and position errors, generally superimposed, are minimised by independently setting rotational and/or translational degrees of freedom during best fitting.
In the second phase all surfaces are measured. Single-point probing is generally preferred when complex free-form surfaces are to be measured, both to assure repeatability of results and to limit processing time. Continuous scanning is applied when the waviness of the surface is investigated to assess its periodicity and amplitude (see fig. 7).
A preliminary analysis of data is carried out by HOLOS via numerical and graphical representation (see figs. 10, 11, 12). The complete processing of data is performed in a PC spreadsheet module.
4. NORMAL DEVIATIONS ANALYSIS
The analysis, based on normal deviations of probed points, is aimed at assessing the statistical accuracy capability of the process. Free curvature of surfaces would extend the knowledge of in-plane (2D) and 3D-regular form accuracy capability [1-5].
Parameters Δ Mean, Δ Outer and Δ Inner, based on the normal deviation (Δ), are graphically
described in figure 8 for a flat surface. For free surfaces the parameters follow
the definitions in [10].
Measurement results are summarised in table 1 and in figure 9.
Table 1. Normal deviation results [mm]

Surface  Name               Type  Δ Mean   Δ Outer  Δ Inner  DEV.st  Nr points
S1       S1-ALTO            B     -0.410   -0.265   -0.628   0.086     117
S2       S2-FLANGIA         A      0.000    0.083   -0.072   0.029     306
S3       S3-FLANGIA-inn     B      0.096    0.177    0.027   0.035     100
S5       S5-BOCCOLA         A     -0.023    0.130   -0.182   0.065     120
S6       S6-RETRO                 -0.839   -0.737   -0.976   0.066      65
S7       S7-MENSOLA-bordo          0.067    0.309   -0.237   0.147     168
S8       S8-FONDO           C     -0.237   -0.140   -0.360   0.051      78
S9       S9-MENSOLA-int     C      0.164    0.253    0.091   0.035      96
S10      S10-MENSOLA-est    C     -0.070    0.094   -0.158   0.050      96
S11      S11-FRONTE         C      0.063    0.190   -0.185   0.129      27
S12      S12-INNESTO        C     -0.037    0.077   -0.129   0.051      32

General (prototype)              -0.078    0.309   -0.976   0.248    1205
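The per-surface parameters of table 1 can be sketched as simple statistics of the signed normal deviations. A minimal illustration, with hypothetical probing values (not the paper's data):

```python
# Sketch: per-surface statistics of signed normal deviations (mm), as in table 1.
import statistics

def surface_stats(deviations):
    """Return (delta_mean, delta_outer, delta_inner, dev_st, n) for one surface."""
    return (
        statistics.mean(deviations),   # Delta Mean: average signed deviation
        max(deviations),               # Delta Outer: largest outward deviation
        min(deviations),               # Delta Inner: largest inward deviation
        statistics.stdev(deviations),  # DEV.st: standard deviation
        len(deviations),               # Nr of probed points
    )

probed = [-0.42, -0.39, -0.45, -0.38, -0.41]  # hypothetical probings on one surface
d_mean, d_out, d_inn, dev_st, n = surface_stats(probed)
print(d_mean, d_out, d_inn, n)
```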
5. DIMENSIONAL ANALYSIS
The dimensional characterisation is aimed at evaluating process capabilities of RP
techniques with respect to the dimensional accuracy.
A scale factor is proposed as dimensional accuracy index, defined as:

SCALE FACTOR = Mean actual dimension / Nominal dimension
This parameter is calculated for each of the three principal directions (X, Y and Z) by
measuring opposite pairs of both regular and free-form surfaces. The mean normal
deviation (Δ) is projected on the axis directions and added to the nominal coordinate of the
grid point centre or of other reference points. The operation is performed on both opposed
surfaces and the actual dimension is then evaluated.

Direction  Surf.     Nominal dim.  Mean act. dim.  Mean error  Outer error  Inner error  Scale fact.
X          S2, S3    35            35.096           0.096       0.261       -0.045       1.003
Y **       S11, S6   86.199        85.452          -0.748      -0.522       -1.126       0.991
Z **       S1, S8    65.542        64.897          -0.645      -0.404       -0.984       0.990
Sx         S9, S10   5             5.094            0.094       0.347       -0.067       1.019

Table 2. Dimensional deviations and scale factors [mm] (** = nominal surfaces are not
parallel). Sx denotes the dimensional value of a thin wall thickness, normal to the X direction.
In this way the scale factors describe the mean percentage variation of dimensions along each
axis, ensuring a repeatable evaluation of the process anisotropy (see fig. 10). Table 2
summarises the results for the scale factors.
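As a quick sketch (not part of the paper's software), the scale factor of the definition above is a simple ratio; checking it against the X and Y rows of table 2:

```python
# Sketch: scale factor along one axis, as defined in section 5 of the paper.
def scale_factor(mean_actual_dim, nominal_dim):
    """Mean actual dimension divided by nominal dimension (dimensionless)."""
    return mean_actual_dim / nominal_dim

# X direction, surfaces S2/S3: nominal 35 mm, mean actual 35.096 mm
sf_x = scale_factor(35.096, 35.0)
# Y direction, surfaces S11/S6: nominal 86.199 mm, mean actual 85.452 mm
sf_y = scale_factor(85.452, 86.199)
print(round(sf_x, 3), round(sf_y, 3))  # 1.003 and 0.991, as in table 2
```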
6. FORM ANALYSIS
The third phase investigates the process capabilities in the reproduction of free and regular
geometric forms. The results of this study are shown in table 3.
Furthermore, a continuous scanning approach is used to evaluate systematic errors (a
waviness effect in fig. 7; a shrinkage effect in fig. 10; a bending effect in fig. 11), which
were previously highlighted by single-point measuring.
Table 3. Form errors [mm]

Surface  Name               Form error
S1       S1-ALTO            0.363
S2       S2-FLANGIA         0.155
S3       S3-FLANGIA-inn     0.150
S5       S5-BOCCOLA         0.312
S6       S6-RETRO           0.238
S7       S7-MENSOLA-bordo   0.546
S8       S8-FONDO           0.219
S9       S9-MENSOLA-int     0.162
S10      S10-MENSOLA-est    0.252
S11      S11-FRONTE         0.375
S12      S12-INNESTO        0.206
Figure 3. Representation of P_CAD, target (P_R), probed (P_T) and measured (P_M) points in
normal-path (left) and workpiece-related (right) probing, relative to the nominal CAD
surface and the actual surface.
Figure 4. Selection of measuring surfaces: class A (constant curvature; normal to an axis of
the reference coordinate system; lowest mean and standard deviation of the normal
deviation; wide area), class B (low varying curvature; approximately normal to an axis of
the reference coordinate system; wide area) and class C (round edges, cantilevers).
Figure 5. Measurement flow: transformation of Σ_CMM to Σ_CAD, axis alignment via
probings, then measurement of class C surfaces.
Figure 6. Structure of the output: table 1 (per surface and per prototype: class, mean normal
deviation Δ, Δ Outer/Inner, standard deviation DEV.st, number of probings) with grid
chromatography, histogram and cumulative charts; dimensional analysis tables for X, Y, Z
and s (nominal dimension, mean actual dimension, outer/inner dimensions and errors, scale
factor); form analysis table 3 (flatness, cylindricity, ..., free-form error) and waviness
parameters from continuously scanned points.
Figures 7, 8 and 9. Continuous scan of the surface waviness; graphical definition of the
normal deviation parameters for a flat surface; summary chart of Δ Inner, Δ Mean and
DEV.st per surface [mm].
A. Sahay
School of Business and Technology, Salt Lake City, UT, U.S.A.
and, in most instances, reduced or limited by controlling the cause. In contrast, random
errors have no assignable cause and therefore would seem to be beyond control.
The major objectives of most research in the area of error analysis are (i) to determine
the machining errors on a given machine that cause significant errors in the parts, and (ii) to
determine the accuracy improvements that can be expected by correcting or minimizing these
machining errors.
Error correction techniques are divided into two categories based on the error identification
schemes used: precalibrated error compensation and active error compensation [1]. In
the first case, the errors are determined or measured before or after the machining process. The
known errors are then compensated for and corrected during subsequent operations. In the
second technique, active error compensation, the determination of errors and their
compensation are done simultaneously with machining. In-process gauging is used for the
detection of errors, and mathematical modeling is then used to establish a relationship
between errors and sources. The active error compensation approach has not been widely
used because of its complexity.
This research uses mathematical models to detect the errors but does not use in-process
gauging. All the current methods that improve accuracy through error compensation
control one or two dominant deterministic parts of the error for a specific type of machine.
2. LITERATURE REVIEW
Researchers have applied a number of different methods and approaches to the error
compensation problem. The basic principles and approaches of some of these studies are the
same. The methods used for error correction include statistical methods [2], methods based on
random process analysis [3], forecasting techniques [1], computer-aided accuracy control [4],
and methods based on mathematical modeling [5]. In recent years, several studies have
addressed the error correction problem in coordinate measurement machines. Some of these
techniques, for example [6], have been used by other researchers to study and analyze the error
problems in machine tools.
3. THE PROPOSED METHOD
The central concept proposed here is that it is possible and desirable to change the machining
specifications so that the parts can be machined within specification and with minimum of
errors. This is done by analyzing the differences between the design specifications and the
corresponding actual measurements and creating a new set of machining specifications.
In this research, the total error introduced in machined parts is determined using mathematical
models. The determination of error consists of several steps: (1) designing the parts
involving curves and surfaces using appropriate curve and surface design techniques, (2)
developing the part programs to machine the designed parts, (3) measuring the part on the
coordinate measurement machine and extracting the measured part dimensions, (4) relating
design points to measured points by establishing the point correspondence between the design
set and the measured set of data and transforming the measured set over the design set, and finally,
(5) determining the error vectors whose components are the differences between the specified
part dimensions and the dimensions obtained after machining.
The errors determined using the approach discussed above will consist of both systematic and
random errors. Once the errors are determined, this research focuses on finding a better
machining specification by fitting an optimal approximation curve which also minimizes the
random errors to some extent. For this, a model is suggested and discussed. Some parts were
machined and the data were used to test the working of the models and the computer
programs.
4. MODELS AND ALGORITHMS
This section provides a discussion on the models and algorithms used to solve the error
detection and correction problem. The models and algorithms include:
(1) Curve and surface design algorithms
(2) Part programs for machining the parts
(3) Error determination model in co-ordinate measurement machine (CMM)
(4) Algorithms to determine the point correspondence between the design points and measured
points
(5) An algorithm to calculate the errors based on the transformation of the measured point set
over the design point set
(6) A hypothesis test procedure to test the null hypothesis that the design points and the
corresponding measured points are the same (no significant error) against the alternative
hypothesis that they differ (significant errors), and
(7) An algorithm to revise the specification using the least squares parametric approximation.
The sequence of the above models and algorithms is shown in Figure 1. A brief explanation of
the models is also presented.
4.1 PART DESIGN
For designing the part, the Bezier curve and surface technique has been used because of its
many advantages in design. The computations involved are also less complex than for the
B-spline curves and surfaces that have wide applications in design and manufacturing. Bezier
curves and surfaces are represented in parametric form [7]. Computer programs were written
to design two-dimensional curves, space curves, and surfaces. The program for the surface also
calculates the offset points that are needed to machine the parts.
4.2 MACHINING, MEASUREMENT AND MEASUREMENT ERROR
Using the design data, part programs are developed to machine the specified shape. In the
procedure recommended here, the parts are then measured using a coordinate measurement
machine (CMM). Measuring the part on the CMM requires that the machine be error free,
that is, that no inherent systematic errors are present in it. The systematic errors in the
measurement machine are determined through a model, under a rigid body assumption for
the components of the CMM. This assumption means that the errors of the
X, Y, and Z carriages depend on their individual positions, and not on the positions of the other
carriages. In any coordinate measuring machine, if the measuring probe is displaced from one
position to another there are translational and rotational errors; thus there are six error terms
per axis (three rotational and three translational). In addition, there are three squareness
errors relating to the XY, YZ, and XZ planes. Altogether there are 21 error terms that need
to be measured and corrected. The position error due to the displacement of the probe tip of
the CMM is related to the above error terms.
Figure 1. Sequence of models and algorithms: determine errors, check for systematic errors,
fit an approximation curve/surface and revise the specification.
When the part is measured on the CMM, the original part axis has a different orientation on
the measurement machine table, and the point correspondence is also lost. When the part is set on the
table of the measurement machine, it is not known which point on the designed part
corresponds to the point on the machined part. In order to calculate the error vectors, the points
on the machined part need to be related to the corresponding design points. This requires a
match between the design points and measured points. Once the match or point
correspondence is determined, the transformation can be performed to map the measured points
over the design space. The problem of finding the point correspondence between the design
and machined shape when both have curved boundaries has two steps: (1) Determine the
feature points or the dominant points for the design and machined shape, and (2) Establish
point correspondence (matching).
There are two widely used approaches to determining the feature points of curved objects:
(1) the polygon approximation method, and (2) feature point detection through curvature
calculations. The first method is used here and is explained below.
Out of the many polygon approximation methods, the split method [8] is claimed to provide
the more accurate polygon representation. The goal here is to obtain a consistent polygon
approximation to both the design and machined curves (where both the design shape and the
machined shape have point representations).
A slight modification to this algorithm can produce very consistent polygons both for the
original digitized boundary (design shape) and for the boundary points extracted from the
machined object. The method by which this is achieved is known as the iterative end point fit.
The end point fit proceeds as follows. Suppose a set of n points is given: {P1, P2, ..., Pn}.
(a) Fit an initial line by connecting the end points of the set (the end points might be the left-
and right-most points in the set, the top- and bottom-most points, or some other pair of
distinguished points)
(b) Calculate the distances from each point to the line (say, AB)
(c) If all the distances are less than some preset threshold, the process is finished
(d) If not, find the point farthest from the line AB (say, C) and break the initial line into two
lines AC and CB
(e) The process is then repeated separately on the two new lines, with different thresholds
(f) The final result is a sequence of connected segments AC, CD, DB, etc.
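The steps above can be sketched as a small recursive routine. This variant (an assumption of ours) reuses a single threshold for all sub-segments, and the sample points are hypothetical:

```python
# Sketch of the iterative end point fit (split method) described above.
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    return num / math.hypot(bx - ax, by - ay)

def end_point_fit(points, threshold):
    """Return indices of the feature (dominant) points of the polyline."""
    a, b = 0, len(points) - 1
    # (b) distances from each interior point to the chord AB
    dists = [point_line_distance(points[i], points[a], points[b])
             for i in range(a + 1, b)]
    if not dists or max(dists) < threshold:          # (c) all points close enough
        return [a, b]
    c = a + 1 + dists.index(max(dists))              # (d) farthest point C splits AB
    left = end_point_fit(points[:c + 1], threshold)  # (e) recurse on AC ...
    right = end_point_fit(points[c:], threshold)     # ... and on CB
    return left[:-1] + [i + c for i in right]        # (f) merge segment corners

pts = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # an L-shaped polyline
print(end_point_fit(pts, 0.1))  # [0, 2, 4]: the two ends and the corner
```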
4.4 TRANSFORMATION
Once the match between the design point set and the measured point set is obtained, a
transformation can be performed to map the measured point set over the design space, using
the homogeneous coordinate transformation. If the design points and the corresponding
measured points are known, the transformation between these two sets of points can be
represented by the following relationship:
[X Y Z 1] | T11 T12 T13 T14 |
          | T21 T22 T23 T24 |  =  [X' Y' Z' 1]  =  W' [L M N 1]     (1)
          | T31 T32 T33 T34 |
          | T41 T42 T43 T44 |
658
A. Sahay
In the above equation, (X, Y, Z) are the design points and (X', Y', Z') are the corresponding
measured points. The matrix elements Tij are the elements of the transformation matrix (the
upper 3x3 matrix generates the net rotation, the 1x3 row matrix handles translation, and the
3x1 column matrix handles projection effects) and W' is the homogeneous term. Multiplying
out the terms in the above expression, three equations per point are obtained, with 15
unknowns Tij: the term T44 is a scale factor and is taken to be 1. Thus, at least five design
points and five corresponding measured points are required to generate the number of
equations needed to determine the terms of the [T] matrix. The system of equations
generated by this procedure has the following form:
[A] [T] = [B]     (2)

Equation (2) shows that the system can be set up for any number of data points, and the
15 unknowns of the [T] matrix can be determined from the following relationship:
[A]^T [A] [T] = [A]^T [B]     (3)
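As an illustration, equation (3) can be solved numerically with a standard least squares routine. The sketch below assumes the affine case (W' = 1, projection terms zero), so only the upper 4x3 block of [T] is estimated; the point data and the pure-translation transformation are hypothetical, not from the paper:

```python
# Sketch: least squares estimate of the transformation matrix from matched
# point pairs, per equation (3), restricted to the affine case (W' = 1).
import numpy as np

design = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
measured = design + np.array([1.0, 2.0, 3.0])  # pretend CMM data: pure translation

A = np.hstack([design, np.ones((len(design), 1))])  # rows [X Y Z 1]
# Normal equations A^T A T = A^T B, solved here via lstsq for numerical stability
T, *_ = np.linalg.lstsq(A, measured, rcond=None)    # T is the 4x3 affine block

mapped = A @ T
print(np.allclose(mapped, measured))  # True: the transform reproduces the points
```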
t = D-bar / (S_D / sqrt(n))     (4)

where n = sample size, t(n-1) = the appropriate t value from the t-table for the selected level
of significance, D-bar = Σ D_i / n (the D_i are the differences between design and
corresponding measured points), and S_D^2 is the variance of the differences. The average
and the standard deviation of the differences are calculated from the data and equation (4)
is used to test the hypothesis.
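A minimal sketch of the test of equation (4), with hypothetical difference data (the critical value 2.571 is the two-sided 5% t value for 5 degrees of freedom):

```python
# Sketch of the hypothesis test of equation (4): a paired t test on the
# differences D_i between design and measured points.
import math

def paired_t_statistic(differences):
    n = len(differences)
    d_bar = sum(differences) / n                                 # D-bar
    var = sum((d - d_bar) ** 2 for d in differences) / (n - 1)   # S_D^2
    return d_bar / math.sqrt(var / n)                            # t, n-1 d.o.f.

diffs = [0.012, -0.005, 0.008, 0.003, -0.001, 0.007]  # mm, illustrative only
t = paired_t_statistic(diffs)
# Compare |t| against t(n-1) from the t-table at the chosen significance level;
# |t| below the critical value means no significant systematic error.
print(abs(t) < 2.571)
```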
4.6 REVISING THE SPECIFICATION
Once it is determined through the hypothesis test that no significant systematic errors are
present, the specifications are revised using the algorithm in this section. The algorithm fits
an optimal approximation curve through the measured set of points, based on a least squares
parametric approximation. Through successive approximation, a revised and better set of
points is obtained. This point set is used to revise the part program created earlier so that
the parts can be machined within the prescribed tolerances and with a minimum of errors.
Suppose corresponding measured points Pi(Xi, Yi) or Pi(Xi, Yi, Zi), i = 0, 1, ..., n, are
obtained. An optimal approximation curve is to be fitted through these points. The
approximation curve can be a Bezier curve of appropriate degree; in this research the
approximation curve is the same as the design curve. Suppose a Bezier curve of degree n is
chosen. This curve can be represented as

Y(u) = Σ (i = 0 to n) b_i f_i(u)     (5)
If the sum of the squared error vectors is minimized using the least squares method, the
objective function can be written as

E = Σ E_i^2 = Σ (i = 0 to n) (P_i - Y(u_i))^2     (6)

Taking the derivative of the above equation and setting it equal to zero, one can solve for
the unknowns.
If the error vectors in equation (6) are defined as the distances from the given points Pi to
corresponding points on the approximation curve Y(ui), these error vectors are not perpendicular
to the approximation curve. The requirement in this case is to make the error vectors
normal to the approximation curve. This can be done by changing the parameter values u. If the
parameter values are changed by some rule, a change in the original value of the point on the curve
Y(ui) will occur, which will cause the error vectors to converge towards the normals of the
approximation curve. A change in the parameter values changes the approximation curve Y(u),
so each time a change in the set of parameter values is initiated, a new approximation curve is
calculated and the angles of the resulting error vectors are computed. When all the error vectors
are normal or approximately normal to the approximation curve, the process is stopped. Some
methods of changing the parameters are suggested by Strauss and Henning, Hoschek, and
Pratt [9,10].
All the algorithms discussed above will lead to the identification of errors and revision of the
specification.
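For a fixed set of parameter values u_i, the least squares fit of equations (5)-(6) reduces to a linear system in the control points b_i. A sketch follows; the degree, parameterization and data are hypothetical, and the parameter-correction iteration described above is omitted:

```python
# Sketch of the least squares Bezier approximation of equation (6): given
# parameter values u_k, the control points b_i minimizing
# sum_k (P_k - Y(u_k))^2 solve a linear system in the Bernstein basis f_i.
import numpy as np
from math import comb

def bernstein(n, i, u):
    """Bernstein basis function f_i of degree n at parameter u."""
    return comb(n, i) * u**i * (1 - u)**(n - i)

def fit_bezier(points, degree):
    """Least squares Bezier control points for measured points (m x dim)."""
    m = len(points)
    u = np.linspace(0.0, 1.0, m)            # uniform; chord length would also do
    F = np.array([[bernstein(degree, i, uk) for i in range(degree + 1)]
                  for uk in u])             # basis matrix entries f_i(u_k)
    ctrl, *_ = np.linalg.lstsq(F, np.asarray(points, float), rcond=None)
    return ctrl

pts = [(0, 0), (1, 1), (2, 1.2), (3, 1), (4, 0)]  # illustrative measured points
ctrl = fit_bezier(pts, 3)                         # cubic approximation curve
print(ctrl.shape)                                 # four 2-D control points
```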
5. SUMMARY
This research has (1) suggested models and algorithms to reduce errors in machined parts by
creating a better basis for the machining specification, (2) provided methods and means to deal
with random errors, (3) provided a method of relating design points to measured points
through point correspondence and transformation, (4) investigated methods of detecting
the feature points of curved objects, which is a major requirement for matching design points
to measured points having curved boundaries, (5) provided a method applicable to curves and
surfaces, which enables the investigation of errors along all three axes of the tool, and
(6) integrated the design, manufacturing, and measurement processes, thus providing an
integrated approach to the error detection and correction problem.
REFERENCES
1. Eman, K. F., "A New Approach to Form Accuracy Control in Machining," International
Journal of Production Research, Vol. 24, No. 4, 1986.
2. Nevelson, M. S., "Factors Affecting Machine Accuracy and Selection of a Control
Algorithm," Production Engineering Research Association, 1973.
3. Raja, J., Whitehouse, D. J., "An Investigation into the Possibility of Using Surface Profiles
for Machine Tool Surveillance," International Journal of Production Research, Vol. 22, No. 3,
pp. 453-466, 1984.
4. Dufour, P., Groppetti, R., "Computer Aided Accuracy Improvement in Large NC Machine
Tools," Proceedings of the 21st International MTDR Conference, Swansea, U.K., 1980, pp.
611-618.
5. Kurtoglu, G. and Sohlenius, "The Accuracy Improvement of Machine Tools," Annals of the
CIRP, Vol. 39/1, 1990.
6. Zhang, G., et al., "Error Compensation of Coordinate Measurement Machines," Annals of
the CIRP, Vol. 34, 1985.
7. Mortenson, M. E., Geometric Modeling, John Wiley and Sons, New York, 1985.
8. Duda, R. O. and Hart, P. E., Pattern Classification and Scene Analysis, John Wiley and
Sons, 1973.
9. Hoschek, J., "Spline Approximation of Offset Curves," Computer Aided Geometric Design,
Elsevier Science Publishers B.V., North Holland, pp. 33-40, 1988.
10. Pratt, M. J., "Smooth Parametric Surface Approximations to Discrete Data," Computer
Aided Geometric Design 2, 1985.
1. INTRODUCTION
As is well known, in a turning operation tool life is limited by chipping or by wear, both
originating from the interaction between tool and workpiece or between tool and chip, and
evaluated as flank wear and crater wear [1]. These wear parameters are effective for
controlling, both qualitatively and quantitatively, tool deterioration and hence product
quality in rough turning. In finish-turning operations, however, their unsuitability for
controlling the dimensional accuracy and the
C. Borsellino et al.
micro-geometry of the machined surface has already been shown; these are the main factors
that determine the quality of the final product [2,3]. Indeed, machine members can work
correctly only if the dimensional accuracy of the different members is matched by a suitable
finish of the surfaces that must work coupled. In a turning operation the surface roughness
can be considered, from the geometrical point of view, as the envelope of the successive
relative positions between tool and workpiece; its theoretical mean value depends on the
tool geometry, on the tool position with respect to the workpiece and on the feed [4].
The value of the actual roughness is greater than the theoretical one [5] because of several
factors, among which one of the most significant is the wear of the minor cutting edge [6].
In sintered carbide tools this kind of wear manifests itself, as is known, as nose wear and
groove wear. The first appears along the tool/workpiece contact during cutting, i.e. it
affects the active cutting edge and part of the tip of the insert, and its only effect is to shift
the tool profile parallel to itself, causing an increase of the workpiece diameter with
increasing cutting time. The second starts just at the beginning of cutting and concerns the
generation and growth of grooves perpendicular to the secondary edge.
The generation of these grooves is due to the contact between the secondary edge and the
zone of the worked material that has been work-hardened during cutting; the
work-hardening is increased by the high deformation and high strain rate and, as is known,
causes a localised increase of hardness and shear resistance, so that the tool undergoes
concentrated wear and a groove appears on its surface whose depth grows with cutting time.
As wear goes on, the groove reaches a depth such that the crest on the surface of the
workpiece protrudes enough to engage, in the subsequent revolution, another point of
the minor cutting edge; in this way a series of equally spaced grooves appears on the minor
cutting edge, with a relative distance equal to the feed. The number of grooves and their
depth heavily influence the actual roughness of the workpiece surface.
The survey of the geometrical characteristics of the workpiece or of the tool wear plays a
very important role in automated machining systems; in fact, the in-process [7] or
on-line [8] measurement techniques employed until now require considerable time and cost
for roughness measurement, owing to the use of complex software or expensive
instruments such as profilometers, tool-room microscopes, SEM [9], etc.
In the present paper a technique for the on-line survey and analysis of the grooves on the
minor cutting edge in finish-turning operations is proposed; it is based on the acquisition
and processing of the minor cutting edge image. The change in the number of grooves has
then been related, for the different cutting parameters adopted, to the roughness of the
machined material.
2. EXPERIMENTAL TESTS
Several tests of continuous cutting under dry conditions were performed on a lathe (model
SAG 12 CNC), machining AISI 1040 steel specimens; the properties of the material
employed are reported in Tab. 1.
Chemical composition: C = 0.43%, Mn = 0.76%, Si = 0.28%, S = 0.027%, P = 0.016%
Tensile strength: R = 700 N/mm2
Hardness: HB(2.5/187.5) = 208

Tab. 1: Characteristics of AISI 1040 steel
Sintered carbide inserts type TPUN 160308 (carbide P10 and P30) were used, mounted on
a commercial tool holder with the following geometry:
- rake angle γ = 6°
- clearance angle α = 5°
- side cutting edge angle ψ = 0°
- inclination angle λ = 0°
The cutting conditions employed for each insert were:
- depth of cut d = 0.5 mm
- feed f1 = 0.05 mm/rev, f2 = 0.1 mm/rev
- cutting speed v1 = 3.3 m/s, v2 = 4.22 m/s
After each cut, six longitudinal profiles at different radial positions were measured by means
of a profilometer (Taylor-Hobson, Series Form Talysurf).
The flank wear was measured on the insert, which was then observed with a scanning
electron microscope (SEM) to follow and evaluate the growth of the grooves on the
secondary edge with respect both to their number and their geometry. The insert was
subsequently positioned on the support of an optical bench that allows the minor cutting
edge image to be taken with a CCD (Charge-Coupled Device) television camera.
fig. 1 - Experimental set-up (tool holder, video camera, secondary monitor, computer)
The system, shown in fig. 1, is completed by an optical fiber light source and by a real-time
video digitizer board (512x480 pixels, 128 grey levels) installed on a personal computer.
Each pixel covers a zone of the tool of about 1.0 μm along the x-axis and 0.7 μm along the
y-axis. This resolution is the best trade-off between the need for high resolution and the
necessity of imaging a zone of the insert that shows part of the insert tip and all the
grooves that can be generated during the cutting operation.
3. GROOVE SURVEY METHODOLOGY
A methodology employing image acquisition and processing techniques has been developed
to survey the grooves precisely and rapidly.
The imaged part of the insert includes both the minor cutting edge and part of the radius
between the edges; the illumination, realized with an optical fiber light source, is chosen to
obtain the best possible contrast.
At the end of every cut, a binarization is performed on the acquired tool image: each grey
tone is assigned the maximum or the minimum value depending on whether or not it
exceeds a suitably chosen threshold level, thus highlighting the grooves generated during
the cutting operation. The image is then enhanced with a median filter.
With appropriate software, the grooves in the resulting image are counted and their number
is stored and compared with that of the previous cut.
For a better understanding of the various steps of the proposed methodology, the images of
the secondary edge, the corresponding binarized images and the step-wise signal generated
by the groove-counting software are reported below for a P30 carbide insert; see
figs. 2, ..., 6.
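The binarization and groove-counting steps can be sketched on a one-dimensional grey-level profile; the threshold and pixel values below are hypothetical, not taken from the experiments:

```python
# Sketch of the groove survey steps: threshold binarization of grey levels,
# then counting groove marks as the rising steps of the resulting signal.
def binarize(row, threshold):
    """Map grey levels (0-127) to 0/1 depending on the threshold."""
    return [1 if g > threshold else 0 for g in row]

def count_grooves(binary_row):
    """Count dark-to-bright transitions, i.e. the steps of the signal."""
    return sum(1 for a, b in zip(binary_row, binary_row[1:]) if a == 0 and b == 1)

row = [10, 12, 90, 95, 11, 9, 88, 92, 13, 85, 80, 12]  # grey levels along the edge
binary = binarize(row, 60)
print(count_grooves(binary))  # 3 grooves in this hypothetical profile
```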
fig. 2 - Image of the secondary edge and the correspondent binarized image.
V=3.33 m/s, f=0.05 mm/rev, d=0.5 mm, cutting time= 5 min.
The number of grooves counted by the system is in perfect agreement with that found by
examining the images obtained with a SEM, as shown in fig. 7.
fig. 3 - Image of the secondary edge and the correspondent binarized image.
V=3.33 m/s, f=0.05 mm/rev, d=0.5 mm, cutting time= 22 min.
fig. 4 - Image of the secondary edge and the correspondent binarized image.
V=3.33 m/s, f=0.05 mm/rev, d=0.5 mm, cutting time= 35 min.
fig. 5 - Image of the secondary edge and the correspondent binarized image.
V=3.33 m/s, f=0.05 mm/rev, d=0.5 mm, cutting time= 45 min.
fig. 6 - Image of the secondary edge and the correspondent binarized image.
V=3.33 m/s, f=0.05 mm/rev, d=0.5 mm, cutting time= 80 min.
grooves increases, and their size also increases until they begin to interfere with each other,
leaving a zone of the minor cutting edge heavily worn, with chipping, which causes a
massive decay of the surface finish.
[Charts of roughness Ra [μm] versus cutting time t [min], up to 120-140 min, for the
different material-tool couples and cutting parameters tested.]
This behaviour, encountered for each material-tool couple and for the different cutting
parameters employed, confirms that the proposed methodology is well suited to an on-line
control system for tool failure in finish-turning operations.
5. CONCLUSIONS
The proposed methodology for on-line control of the tool wear state and of the
machined-surface roughness, through the automatic counting of the number of grooves, has
proved reliable for the different material-tool couples and the cutting parameters employed.
The technique allows an appropriate tool replacement policy to be managed in very short
times, because the information on the tool wear state is obtained in a few seconds, avoiding
long and expensive roughness surveys on the workpiece or measurements of the crater or
flank wear of the inserts.
The time needed for the control is less than that required for unloading and mounting the
next workpiece, so this control technique can be implemented on automated machining
systems. The repeated tests performed have confirmed the effectiveness of the proposed
on-line control technique.
The authors intend to verify the applicability of this methodology also to the cutting of
other materials with ceramic tools.
ACKNOWLEDGEMENTS. This work has been supported by MURST 60% (Italian
Ministry of University and Scientific Research).
REFERENCES
1. Ente Nazionale Italiano di Unificazione, Rugosità delle Superfici, UNI 3963.
2. Galante G., Piacentini M., Ruisi V.F.: Elaborazione dell'immagine dell'utensile per il
controllo della finitura superficiale, X Congresso AIMETA, Pisa, (1990), Vol. II, 501-504.
3. Galante G., Piacentini M., Ruisi V.F.: Surface roughness detection by tool image
processing, Wear, 148 (1991), 211-220.
4. Lonardo P.M.: Relationships between the Process Roughness and the Kinematic
Roughness in Turned Surfaces, Annals of the CIRP, 25 (1976), 455-459.
5. Lonardo P.M., Lo Nostro G.: Un criterio di finibilità dei materiali per le operazioni di
tornitura, Industria Meccanica - Macchine Utensili, Anno IV, n. 4, aprile 1977.
6. Pekelharing A.J., Hovinga H.J.: Wear at the end cutting edge of carbide tools in finish
and rough turning, M.T.D.R., 1 (1967), 643-651.
7. Lonardo P.M., Bruzzone A.A., Melks J.: Tool wear monitoring through the neural
network classification of diffraction images, Atti II Convegno AITEM, Padova (1995),
343-352.
8. Giusti F., Santochi M., Tantussi G.: On-Line Sensing of Flank and Crater Wear of
Cutting Tools, Annals of the CIRP, 36 (1987), 41-44.
9. Yao Y., Fang X.D., Arndt G.: On-Line Estimation of Groove Wear in the Minor Cutting
Edge for Finish Machining, Annals of the CIRP, 40 (1991), 41-44.
Such a test piece is outlined in some of the national machine tool standards
(British, American, Japanese), where the circular profile is produced by the
simultaneous motion of two linear axes.
An alternative approach to the machining test, specified in recent
British and US machine tool standards, is an emulation of the circle test
by instrumentation techniques. One such instrument-type test is the
CONTISURE system developed by Burdekin [1,2].
Although instrumentation techniques generally check the machine in the
no-load condition, they offer certain advantages over cutting tests. In
particular, tools and test specimens are not consumed, and the time spent
measuring the test piece after machining is eliminated.
2. THE CONTISURE HARDWARE AND SOFTWARE SYSTEM
The CONTISURE system is shown schematically in Fig. 1.
[Fig. 1: schematic layout of the CONTISURE system]
The data acquisition and analysis software offers the user complete
flexibility. The number of sampled data points can be selected, up to a
maximum of 12000 per 360 degree scan. An analysis in the form of least
squares best-fit circles can also be performed on data obtained for 360
degree scans as well as for partial arcs. This feature eliminates the need for
precise set-up of the sphere datum with respect to the programmed circle.
It is also essential that the start and end points of the circular contour
be selected so that they do not coincide with the axis reversal
points. The reason is that significant lost motion errors may occur at
these points, and additional transient errors, resulting from the servo
control system, may not be detected. In this respect the software is
completely flexible and enables the start and end points to be freely selected.
A start position of 22 degrees from the X axis was therefore used for all
tests.
The approach to the start point of the circular profile should, if
possible, be representative of that used under practical machining
conditions. A tangential approach to the start and exit points on the profile
is therefore assumed by the software.
3. THE INFLUENCE OF POSITION LOOP GAIN AND MISMATCH OF
POSITION LOOP GAINS FOR DIFFERENT MACHINE AXES ON
OPTIMIZING THE CONTOURING ACCURACY OF A CNC MILLING
MACHINE
e_c = R·[1 − √(1 − (v/(60·R·K_v))²)]·10³ μm                (1)

or

e_c = (1/(2R))·(v/(60·K_v))²·10³ μm                        (2)

where: e_c - maximal contouring error from the nominal radius, μm; R - radius of
the circle, mm; v - feedrate, mm/min; K_v - position loop gain, s⁻¹.
These equations do not take into consideration the influence of
nonlinear phenomena, such as lost motion, stick motion and stick-slip, on
the magnitude of the contouring errors [5]. That is the reason why the real
values of the contouring errors are significantly higher. Real contouring
errors can be obtained only experimentally.
Contouring measurements with the CONTISURE test equipment have been
undertaken on an FGS32 CNC milling machine with a HEIDENHAIN TNC 355
controller, in order to illustrate a methodology which could generally be
applied to any CNC machine. Only two axes have been considered (X
and Y); the same procedure can be repeated for other axes. A relatively
small radius of 150 mm was used for all tests.
In the tests the feedrate was constant, v = 600 mm/min, and the K_v
factor in the controller was changed in the range 4 s⁻¹ to 130 s⁻¹. The
tests were done in two directions, clockwise (CW) and counterclockwise
(anticlockwise) (CCW). The results of the tests are given in Table 1.
Table 1

K_v, s⁻¹        4     6     8    10    20   28.3   30    40    50
e_c, μm (CW)  46.8  39.5  38.3  32.2  20.7  19.6  16.5  14.7  13.6
e_c, μm (CCW) 50.7  45.5  37.5  36.2  25.2  22.2  20.3  17.8  13.3

K_v, s⁻¹       60    70    80    90   100   110   120   130
e_c, μm (CW)  12.1  11.1  10.8  10.5  10.2  10.3  10.4  10.5
e_c, μm (CCW) 12.4  10.8  10.7  10.4  10.2  10.4  10.5  10.6
[CONTISURE screen output: least-squares circle fit, error band 27.4 μm]
Fig. 2 The result of a measured circular test (feedrate = 600 mm/min, radius
of the circle = 150 mm, position loop gain = 28.3 s⁻¹, clockwise direction)
[CONTISURE screen output: least-squares circle fit and error band]
Fig. 3 The result of a measured circular test (feedrate = 600 mm/min, radius
of the circle = 150 mm, position loop gain = 100 s⁻¹, clockwise direction)
In ref. [6] an analytical equation for estimating the position loop gain is given:

K_v = 1 / [4ξ²·(2D/ω + 2D_m/ω_m + T/2)]                    (3)

where: ξ - position loop damping; ω - nominal angular frequency of the feed
drive electrical parts, s⁻¹; D - damping of the feed drive electrical parts; ω_m -
nominal angular frequency of the mechanical transmission elements, s⁻¹; D_m -
damping of the mechanical transmission elements; T - sampling period, s.
A position loop damping of ξ = 0.7 is preferable according to [7]; that is
the value which gives minimal contouring errors. The other numerical values of
the examined system are: ω = 1000 s⁻¹, D = 0.7, ω_m = 663 s⁻¹, D_m = 0.17 and
T = 0.006 s. Substituting into equation (3), the position loop gain
value K_v = 103.85 s⁻¹ is calculated. The experimentally tuned value of the K_v
factor on the examined machine tool axis was K_v = 100 s⁻¹. The difference
between the analytically calculated and experimentally obtained values of the K_v
factor is around 4%, which is completely acceptable.
Another parameter which influences the contouring accuracy is the
mismatch of the position loop gains for different machine axes. This results
in an elliptical contour path with the major axis lying at +/-45 degrees,
depending upon the direction of the scan, and increases the contouring
errors.
The results of the experiments with mismatched position loop gains,
a = (K_vx − K_vy)/K_vx · 100 %, are given in Table 2 (K_vx = 30 s⁻¹ and
v = 600 mm/min are constant).
Table 2

a, %   error band, μm (CW)   error band, μm (CCW)
 0          21.8                  25.5
 1          24.3                  32.4
 2          25.6                  32.8
 3          25.9                  34.9
 4          26.2                  35.9
 5          26.4                  40.7
 6          28.8                  42.6
 7          29.6                  44.9
 8          31.5                  48.9
 9          32.7                  53.2
10          38.9                  57.3
20          77.6                  104
30          137                   170
40          224                   255
50          339                   372
It is obvious that with increasing mismatch of the position loop gains of the
axes, the error band rises. The best case is when the position loop gains
are identical (a = 0). Figs. 4 and 5 show the results of the circular test when the
difference between the position loop gains for the X and Y axes is a = 20%.
[CONTISURE screen output: least-squares circle fit, error band 77.6 μm (−38.9 to +38.7 μm)]
Fig. 4 The results of the measured circular tests with gains mismatched
a = 20% (clockwise direction, feedrate = 600 mm/min, radius of the circle = 150
mm, position loop gains K_vx = 30 s⁻¹ and K_vy = 24 s⁻¹)
[CONTISURE screen output: least-squares circle fit, error band 104.1 μm (−51.2 to +52.9 μm)]
Fig. 5 The results of the measured circular tests with gains mismatched
a = 20% (anticlockwise direction, feedrate = 600 mm/min, radius of the
circle = 150 mm, position loop gains K_vx = 30 s⁻¹ and K_vy = 24 s⁻¹)
4. CONCLUSION
The work has shown that the contouring errors of a CNC machine tool
can be minimized by appropriate selection of the position loop gain in the
controller. The criterion used in establishing the optimum K_v value was the
minimization of the maximal contouring deviation from the nominal radius.
The test methodology with the CONTISURE system, demonstrated on an
FGS32 CNC milling machine with a HEIDENHAIN controller, offers a general
approach to the experimental determination of the position loop gain.
It was shown that the best contouring accuracy is obtained
when the position loop gains for the two axes are identical.
REFERENCES
1. Burdekin M., Park J.: Contisure - a computer aided system for assessing
the contouring accuracy of NC machine tools. Proceedings of the 27th
International MATADOR Conference, April 1988, pp. 197-203.
2. Burdekin M., Jywe W.: Application of Contisure for the Verification of
the Contouring Performance of Precision Machines. Proceedings of the 6th
International Precision Engineering Seminar, Braunschweig, Germany, 1991,
pp. 106-123.
3. Andreev I.G.: The main qualities required of electric feed drives for NC
machine tools. Soviet Engineering Research, Vol. 1, No. 1, 1981, pp. 59-62.
4. Weck M., Ye G.: Sharp Corner Tracking Using the IKF Control Strategy.
Annals of the CIRP, Vol. 39/1/1990, pp. 437-441.
5. Tarng S.Y., Chang E.H.: An investigation of stick-slip friction on the
contouring accuracy of CNC machine tools. Int. J. Mach. Tools & Manufact.,
Vol. 35, No. 4, 1995, pp. 565-576.
The design drawing has a great integrating potential inside the company as it affects every
function involved in product development. If the design is not clear and understood by
everyone, the organization will not work efficiently. The quality of the drawing affects the
product functionality and cost; flawed drawings are likely to create excess waste, or even
lead to the rejection of parts which would otherwise have been functionally acceptable.
Functional dimensioning is a philosophy of dimensioning and tolerancing based on the
prioritization of part functions. Thinking primarily in terms of function, the designer has
the clear objective of minimizing costs without degrading the function itself. Furthermore,
understanding part function in drawings enables better and more timely communication
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
between design and other departments, being a key factor in Concurrent Engineering
practice.
The design language based on a "plus and minus" system of dimensioning and tolerancing is
insufficient to consistently convey design intent. For example, the set of all acceptable
locations of the axis of a round hole that must fit over a smaller round pin is not a
square zone, but a cylindrical one. Moreover, the larger the hole (as long as it is within its
size tolerance), the larger the position tolerance from which it may benefit whilst still
fulfilling its function properly. This is the basic idea underlying the Maximum Material
Condition (MMC) principle.
Geometric Dimensioning and Tolerancing (GD&T) is the new drafting language which
translates the aforementioned conceptual framework through a system of internationally
accepted symbols and notations.
If we consider how a system made of several parts functions, instead of considering each
single part individually, this new dimensioning philosophy prompts us to identify the effects
of each single part tolerance on the overall system; and thus the largest possible tolerance
for the single part dimension can be selected.
Prioritizing the function implies that dimensions should define the part without specifying
manufacturing processes (Process Independence Principle); however, clearer evidence of the
functions helps the process engineer in selecting the most economical methods to process
the parts.
2. INSPECTING GEOMETRIC TOLERANCES
Geometric tolerance specifications do not only translate designer's intent, as they embody
precise information for the definition of consistent assembly and inspection plans.
In this section the assessment of how conformance is driven by tolerance specification will
be dealt with and a spectrum of problems arising when CMMs are used for inspecting parts
will be discussed. Two main consequences on dimensional control of parts derive from the
new tolerancing standard [6,7].
1. Functional dimensioning requires that tolerance independence be violated, meaning that
tolerances may interact with one another (MMC is a typical example); this leads to
additional complexity in the part verification task: measurement sequences must be
carefully planned and the measurement data properly processed.
2. The standard can handle the verification of imperfect-form parts unambiguously. This is a
critical guarantee of the interchangeability of parts, because it ensures that dimensional
control performed on the same part in different laboratories will deliver the same
acceptance/rejection response.
It is possible to distinguish between cases where a reference system is necessary and where
it is not. Form tolerances need no reference system, since the tolerance zone depends on the
feature itself and a scalar quantity only; consequently no datum appears in the control
frame. Position and orientation tolerances are controlled by constructing a local reference
system, named the DRF (Datum Reference Frame), on the basis of the features of the
physical part referenced as datums in the control frame.
Note that the order of datum calls is essential for establishing the right system; each datum
constrains one or more degrees of freedom, but the set of datums of an individual tolerance
may or may not fully constrain the coordinate system for locating and orienting the
tolerance zone.
One last point remains to be questioned: how can a mathematical representation of a datum
be obtained from the actual feature? Here the genuine functional inspiration of geometric
tolerancing is fully realized. The underlying dictum is: when controlling a feature, imagine
its mating counterpart in a hypothetical assembly. This means: if the feature is a hole, think
of the largest perfect-form shaft that can pass through it; if it is a shaft, think of the smallest
perfect-form hole that still fits the shaft; if it is a flat surface, think of a perfect-form plane
which makes stable contact with the surface. Thus the actual mating envelope of the feature is
designated to establish the datum, as if inspection were a simulated assembly (the order of
datum calls reflects the order of an assembly sequence).
In the context presented above, hard gauges appear to be the most natural inspection
technique. Unfortunately, hard gauges are not always feasible (the Least Material Condition
principle is a well known example) and sometimes they have a poor cost justification.
In inspection practice CMMs are so widely used in today's industry that it is worthwhile
examining the specific problems which occur when they have to control geometric
tolerances. CMMs provide a sample of discrete measurements which allows for a
description of the manufactured part geometry. The collected data can be processed by
means of software routines to verify conformance to geometric tolerances. This is the
popular "soft gauges" technique. Sophisticated versions of on-line control are currently
being investigated by directly loading CMM data back into a CAD environment where the
perfect-form model of the part also exists. This might seem a brilliant solution to the
dimensional control problem, unless a hidden issue is questioned: to what extent are CMM
measurements consistent with the new tolerancing principles? Or, equally, is the soft control
of geometric tolerances operated by CMMs reliable enough? In this framework two primary
problems cannot be evaded.
1. Relying on a finite set of data, the reconstructed part geometry, as well as its related soft
gauge, are only approximations of what they are intended to be. A poor set of
measurements is likely to lead to grossly incorrect results; conversely, large sample
sizes are impractical with current CMM technology. Hence the problem of the
sufficiency of the sample set has to be addressed in detail; a substantial statistical
appraisal is foreseen in this subject [2,3]. It was noted elsewhere [1] that if
models of the manufacturing processes involved were available, smaller sample sizes
would be feasible, but the principle of process independence in tolerancing would be
totally disregarded.
2. Even if a sufficient measurement sample is taken, the subsequent treatment of the data
must be consistent with the aforementioned actual mating envelope principle. Most of the
algorithms implemented in CAT (Computer Aided Testing) packages traditionally use a
best-fit approach based on least-squares approximations. This practice clearly defies the
actual mating envelope principle and, ultimately, the underlying functional paradigm.
Computational engineers will thus be prompted to renew their software equipment.
b) by implementing the MMC principle on the fastening holes the tolerance zone nearly doubles
(0.125 instead of 0.070 mm², Tab. 1);
c) the control procedure can now be uniquely defined;
d) the functional need on the circumferential holes has been identified as a minimum distance
to be kept between the edges of two adjacent holes; this has been translated into the adoption
of the LMC principle on each hole.
It can be seen that the new dimensioning system produced two simultaneous advantages: a
smaller angular error and larger tolerance zones. The first result is welcomed by designers,
whilst the second is mostly appreciated by manufacturing engineers.
Tab. 1 (values as recovered from the original layout)

CRANKSHAFT: Threaded holes M16x1.5; Dowel hole Ø8P7 (0.022 mm)
FLYWHEEL: Clearance holes Ø17±0.2; Circumferential holes Ø8H12

Tolerance zone (max), old dimensioning / GD&T dimensioning:
0.07 mm² / 0.570 mm²; 0.135 mm² / 0.160 mm²

Angular error, old dimensioning / GD&T dimensioning:
15' / 6'; 5' / 6'; 20' / 12'
The inspection of the two parts (i.e. crankshaft and flywheel) was performed on a CMM. For
the sake of brevity, only the steps followed for the crankshaft inspection are listed below:
A) First DRF building:
- Contacting the end face of the crankshaft against a flat surface; definition of the planar
datum A (plane zy in the DRF) through the measured co-ordinates of 6 points on the
flat surface.
- Definition of datum B, corresponding to the axis of the smallest cylinder
circumscribed about the centering element on the end face of the crankshaft, by touching
50 points on the centering element.
- Locating the origin of the DRF at the intersection point of datum B with datum A.
- Definition of datum D, corresponding to the axis of the smallest cylinder
circumscribed about crank pin no. 6, by touching 50 points on it.
- Rotation of the DRF around its X axis so that its Z axis intersects datum D.
B) Verification of the dowel hole and second DRF building:
- Definition (in the previously built DRF) of the largest cylinder inscribed in the dowel
hole (datum C) by touching 50 points inside the hole.
- Comparing the position of the previously defined cylinder with the virtual condition
boundaries. The tolerance is respected if the cylinder contains these boundaries.
- Definition of the second DRF by rotating the first one around its X axis such that its Z
axis intersects datum C.
C) Verification of the fastening holes:
- Definition (in the previously built DRF) of the largest cylinders inscribed in the
fastening holes by touching 50 points inside each one.
- Comparing the position of the previously defined cylinders with the virtual condition
boundaries. The tolerance is respected if the cylinders contain these boundaries.
4. CONCLUSIONS
The paper describes the advantages of redimensioning the design of a crankshaft and
the relevant flywheel according to GD&T principles. Furthermore, an inspection
procedure, performed with a CMM, which conforms to GD&T is reported.
A noticeable enlargement of the tolerance zones, as well as an improvement in the functional
capabilities, is demonstrated. In this context the possibility of sharing the derived advantages
between the design and manufacturing departments is likely to generate negotiations between
them, resulting in a spontaneous integration process useful for an eventual application of
Concurrent Engineering.
GD&T allows for a unique definition of the dimensional control procedure. However, when
only discrete measurements are available, some restrictions apply. A great deal of
theoretical and computational work is expected in order to adapt the use of CMMs to
geometric tolerance verification. However, a qualitatively higher enhancement will perhaps be
triggered by technological advances rather than by conceptual efforts. Progress in optical
measuring technology, which promises faster and denser measurements,
seems the most feasible option for the near future.
REFERENCES
[1] Voelcker, H., 1993, "A Current Perspective on Tolerance and Metrology",
Manufacturing Review, Vol. 6, No. 4
[2] Nigam, S. D. and Turner, J. U., 1995, "Review of statistical approaches to tolerance
analysis", Computer Aided Design, Vol. 27, No. 1
[3] Kurfess, T. R. and Banks, D. L., 1995, "Statistical verification of conformance to
geometric tolerance", Computer Aided Design, Vol. 27, No. 5
[4] Chirone, E., Orlando, M. and Tornincasa, S., 1995, "Il disegno funzionale e le
tolleranze geometriche", Proc. of IX ADM Congress, Caserta, Italy
[5] Yan, Z. and Menq, C.-H., 1994, "Evaluation of geometric tolerances using discrete
measurement data", Jour. of Design and Manufacturing, Vol. 4
[6] ASME (The American Society of Mechanical Engineers), 1982, "Dimensioning and
Tolerancing", ANSI Y14.5M-1982, New York, USA
[7] ASME (The American Society of Mechanical Engineers), 1994, "Mathematical Definition
of Dimensioning and Tolerancing Principles", ANSI Y14.5.1M-1994, New York, USA
1. INTRODUCTION
In our research [3], a parametric method was used for designing mechanical elements from
both the design and the manufacturing points of view. Primitives are the basis of the research,
and the connection of primitives is the basic element of the procedure developed for modelling
rotational mechanical parts with non-rotational deviations of shape. Primitives are sets of
geometrical parameters; they are technologically oriented and connected with forms,
carrying all the information from the higher level. The information from the model is the
input to the module for production system design. This module is based on a procedure
for automatic process planning and on a graph representation.
[Flow chart: control of marginal values → modelling of the mechanical part with primitives → control of the connections → geometrical model in projection I or II]
Figure 1: Structure of the software module for modelling the mechanical part
Following the general division of mechanical product parts into rotational and non-rotational,
a modeler for rotational parts with non-rotational shapes was developed.
The modeler [3] was implemented with the Borland C++ v4.0 compiler (Fig. 1). From the
point of view of screen layout, the best solution for icon selection is a matrix of primitive
icons. The icons are grouped in four matrices, one for each group of primitives, among them
basic, internal and external. All the icons together form the user menu (toolbar) of the
interactive graphic editor (Fig. 2).
In the dynamic database, a structure is attached to each primitive. The structure is
formed of the geometric parameters which define the primitive and of its technological
characteristics, such as dimensional tolerance and surface roughness. If the picture of one
mechanical part includes several primitives, then a parameter may be initialised more
than once with different values, which is provided for by the dynamic database.
Two basic activities are creating the data structures and the algorithms which will work with
these structures. There are several ways of structuring the data, such as singly and doubly
linked lists, binary trees, priority queues and other, more complicated, data structures.
Storing and sorting the data for the whole product model, in fact defining the dynamic
database, is done by creating two multi-graphs linked to each other as a double linked
list (DLL).
In a DLL, data are stored in nodes, and the connections between them are provided by edges
(Fig. 4) in both directions. NULL nodes are at the beginning and at the end, and the data
nodes are placed between them. Two pointers set out from every node: the first one is NEXT,
providing the connection with the following node, and the other is PRIOR, providing the
connection with the previous node. At the same time, two pointers arrive from the two
neighbouring nodes; the exceptions are the two NULL nodes, because from the beginning
node only a NEXT pointer sets out, and at the end node only a NEXT pointer arrives.
Figure 4: Doubly linked list with NULL nodes at the beginning and the end
In this research, this kind of DLL is created for the first group of basic primitives, which are
located along the axis of rotation and are implemented by adding them one by one. The
primitives of this group are the nodes of the DLL; in fact, each node contains the
parameter and attribute values of a certain primitive (Fig. 5).
Figure 5: Structure for sorting the data for primitives from the first group
[diagram: the model of the workpiece points to a list of primitives (Primitive 1 ... Primitive n, each with Next, Prior, Position, Identifier and Peripheral connections fields) and to a list of external primitives, each with its position in a local system of coordinates and an identifier]
The second group of primitives is systematised in a directed vector DATA [x1, x2, ...] which
is connected by two pointers, "to" and "from", to the DLL for the first group of primitives.
The DLL created in this research is of a specific type, because two extra pointers are
included: the first sets out from one DLL node (node (1), Fig. 5) and the other arrives at a
DLL node (node (2), Fig. 5). These additional pointers enable the insertion of data for
selected primitives of the second group into the proper place among the sorted data in the
DLL.
A database structured as this specific type of DLL is also formed for the inside contour, where
the nodes of the inside cylinders are connected with the vector carrying the structured data
for possible inserted additional non-rotational parts. The two basic DLLs for the inside and
outside contours are connected to each other through the NEXT and PRIOR pointers of the
node which carries the data for the primitive where the inside contour is perforated.
The model of the product in the base of the modeler can be represented in a few ways,
depending on the contour (Fig. 6):
1. One DLL connected with oriented vectors - external contour.
2. Two DLLs connected with oriented vectors - external and internal contour
(through opening, blind opening).
3. Two DLLs connected with oriented vectors - external and two internal blind
openings.
The searching algorithm determines the method of searching through the DLL; depending on
the direction (forward or backward), the pointer *info points to every primitive in the DLL,
step by step, forwards or backwards.
The subordination algorithm, through the parameter "group", enables the subordination of a
newly entered primitive, according to its type, and its connection to the suitable primitive in
the DLL.
[Drawing: example of a modelled rotational part, Projection I, with labelled primitives]
The topological structure rules for creating the exact structure of a rotational mechanical
part with non-rotational deviations of shape are the basis of the procedure for connecting
primitives. The structural rules for forming the model of the mechanical part relate to the
way in which the previously entered primitives are interconnected [4].
The procedure for the interconnection of primitives contains an algorithm [3] based on a
few functions which contain the rules for the different groups and types of primitives.
void Prva_proekcija(void)   /* "First projection" */
{
    osnovni();         /* basic primitives */
    nadvoresni();      /* external primitives */
    leva_kontura();    /* left contour */
    celni();           /* face primitives */
    desna_kontura();   /* right contour */
}
The model of the mechanical part can also be presented graphically, as a graph, as shown in
Fig. 7.
6. CONCLUSIONS
The research analyses the creation of a model of the mechanical part, with the purpose of
forming a system for automatic process planning for rotational parts. The model is presented
geometrically in two projections; using the product model information, a methodology for
process planning, rules for the logic decisions and rules for using the technological
knowledge base were developed.
REFERENCES:
1. Raeth P.: Expert Systems: a Software Methodology for Modern Applications, IEEE, Los
Alamitos, California, 1991; 325-360.
2. Phillips R.: An integrated intelligent design and process planning system, CAE Journal,
December 1994, 19-24.
3. Gecevska V.: Contribution to the development of an automatic process planning system
for rotational parts, M.Sc. thesis, Faculty of Mechanical Engineering, University Sv. Kiril i
Metodij, Skopje, 1994; 120.
4. Gecevska V., Pavlovski V.: Developing a procedure for modelling rotational mechanical
parts with non-rotational deviations of shape, The 13th ICPR, Jerusalem, Israel, 1995.
5. Chang T.C.: Geometric Reasoning - the key to integrated process planning, 22nd CIRP
Int. Seminar on Manufacturing Systems, 1991, 1-14.
1. INTRODUCTION
By contrast with metals, polymers are endowed with viscoelasticity, and hence the
deformation they undergo when subjected to a load is determined not only by its
magnitude, but also by its rate and duration of application.
Plastic gears, as opposed to steel gears, have both advantages:
- less expensive fabrication
- good self-lubricating capacity
- lower mass for the same volume
- high damping capacity
- high resilience
- corrosion resistance
and drawbacks:
- low strength and modulus of elasticity
- marked influence of temperature on these figures
- sensitivity to humidity.
Wear resistance is assessed in various ways: from the bending resistance of the tooth, or
from wear itself, especially under dry conditions or during the starting transient.
Six conclusions were reached in a detailed study [1] of the behaviour of plastic cogged
wheels:
a) torque values of 20 to 50 Nm can be transmitted by glass-reinforced plastic wheels for
more than 10⁷ cycles;
b) plastic wheels usually break as the result of fatigue, though the effect of wear cannot
be neglected; this depends on the materials in contact and the lubrication conditions;
c) the lubrication procedure (grease or immersion in oil) does not seem to have a significant
influence on fatigue resistance;
d) glass-fibre reinforcement may sometimes result in unacceptable wear;
e) wear is reduced by the addition of MoS₂;
f) even better performance is obtained when carbon fibres are employed.
3. METHODS FOR DETERMINING THE SIZE OF PLASTIC WHEELS
Standards for the design of plastic gearwheels have been laid down in several countries:
VDI in Germany, AGMA in the USA, BSI in Britain. There are also methods devised by
resin manufacturers (BASF, Du Pont, Celanese Plastics, Hoechst) and individual
researchers (Niemann, Chen). One or more of the following parameters are always
considered:
- the bending stress of the tooth
- the contact pressure
- the tooth deformation.
Only the VDI standard and the Niemann and BASF methods, however, take all three
parameters into account. Niemann's method [2] in particular appears to be one of the
most complete and in good agreement with the experimental results.
The following polymer materials are most commonly used:
a) polyamide: excellent wear, impact and fatigue resistance, combined with a low
friction coefficient; when reinforced with fibres, it offers acceptable dimensional stability,
stiffness and strength up to 150 °C;
b) acetal resin: dimensional stability, stiffness, fatigue and abrasion resistance;
In a cogged wheel pump, the volumetric flow rate is proportional to the product of the
number of teeth by the square of the modulus. The use of fewer teeth thus requires a
higher modulus. The drawback of this reduction (which has been taken down to 10
teeth) is that there is interference during engagement, which must be prevented by using
corrected toothing. This, however, means that the head of the tooth must be thinner and
hence less practical in terms of tightness and volumetric efficiency.
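The tradeoff just described can be sketched with a short calculation. In the snippet below the reference design (30 teeth, modulus 3 mm) is an assumed baseline chosen for illustration, not a value taken from this paper; the product z·m² is held constant and the modulus required for a given tooth count is computed.

```python
# Pump displacement is proportional to z * m^2 (tooth count times modulus
# squared), as stated above. Holding that product constant shows how using
# fewer teeth forces a higher modulus.
def modulus_for(z, z_ref=30, m_ref=3.0):
    """Modulus (mm) needed to keep z*m^2 equal to an assumed reference design."""
    k = z_ref * m_ref ** 2        # displacement factor of the reference design
    return (k / z) ** 0.5

for z in (10, 12, 17, 20, 30, 40):
    print(f"z = {z:2d}  ->  m = {modulus_for(z):.2f} mm")
```

With these assumed reference values the required modulus rises from about 2.6 mm at 40 teeth to about 5.2 mm at 10 teeth, the same range spanned by the combinations of Table I below.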
A detailed account of this question can be found in the literature. All that needs to be
said here is that the Merritt and Enims methods for the correction of toothing give cogs
with better wear resistance, whereas the Tuplin method results in teeth that are too
pointed, though it has the advantage of creating a large compartment volume.
The pump specifications were:
flow rate: 40 l/min
motor power: 4 kW
The size of the gearing was determined by applying three basic criteria to give a total of
23 combinations of the geometrical factors (Table I):
a) small number of teeth (10-12) and high modulus (4.5-5.5 mm) with corrected
toothing (Enims and Merritt), in keeping with the present manufacturing trend for steel
teeth;
b) large number of teeth (30-40) and small modulus (2-3 mm) with normal tooth
proportions;
c) in-between tooth and modulus values (17-20 and 3.5 mm respectively), with Enims
and Merritt correction.
4.2 Determination of tooth size
The next step was to calculate, according to Niemann, the strength of a cogged wheel in
PA6 polyamide reinforced with 35% glass fibres and supplemented with MoS2. The
maximum operating temperature was assumed to be 80 °C, which is normal in the tank
of an ordinary hydraulic pump.
TABLE I

CASE     No. OF TEETH   MODULUS (mm)   FACE WIDTH (mm)
1 (E)    12             5              15.72
2 (M)    12             5              15.72
3 (E)    12             4.5            19.44
4 (M)    12             4.5            19.44
5 (E)    11             5              17.16
6 (M)    11             5              17.16
7 (E)    11             4.5            21.14
8 (M)    11             4.5            21.14
9 (E)    10             5.5            15.56
10 (M)   10             5.5            15.56
11 (E)   10             5              20
12 (M)   10             5              20
13 (E)   20             3              26.22
14 (M)   20             3              26.22
15 (E)   17             3.5            22.61
16 (M)   17             3.5            22.61
17 (N)   40             2              30
18 (N)   35             2              34.42
19 (N)   30             2              40.18
20 (N)   30             2.5            25.71
21 (N)   40             3              13.5
22 (N)   35             3              15.3
23 (N)   30             3              17.86

(E): Enims correction; (M): Merritt correction; (N): normal.
The following fatigue resistance values at this temperature were derived from the results
in [1]:
C = 40 Nm for N ≈ 10^7 cycles
C = 24 Nm for N ≈ 10^8 cycles
C = 9 Nm for N ≈ 10^9 cycles.
The last two values were extrapolated, since the tests were run from 5x10^5 to 2x10^7
cycles. The operation is nonetheless regarded as acceptable, because the presence of a
possible fatigue limit not reached in the tests would lead to a situation more favourable
than that extrapolated.
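The extrapolation step can be reproduced with a simple log-log fit. The sketch below fits a power law C = a·N^b through the three torque-life points quoted above; the fitted constants are purely illustrative, since the paper does not detail its own extrapolation procedure.

```python
import math

# Torque-life pairs quoted above: (cycles N, transmissible torque C in Nm)
data = [(1e7, 40.0), (1e8, 24.0), (1e9, 9.0)]

# A power law C = a * N^b is a straight line in log-log coordinates, so an
# ordinary least-squares fit of log C against log N recovers a and b.
n = len(data)
xs = [math.log10(N) for N, _ in data]
ys = [math.log10(C) for _, C in data]
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = 10 ** ((sum(ys) - b * sum(xs)) / n)
print(f"C = {a:.0f} * N^({b:.3f})")   # negative b: torque capacity drops with life
```

As the text notes, such a straight-line extrapolation is conservative: an actual fatigue limit above the fitted line would only improve the situation.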
The results show that the bending stress at the base of a tooth (fig. 1) is never a factor
that limits attainment of its assumed maximum life. The stress induced by contact of the
flanks (fig. 2), on the other hand, prevented it in 21 cases, with the exception of
conditions 17 and 21.
This fact, however, is in conflict with the experimental results, which indicate that the
true critical variable is bending of the teeth. There are two possible explanations of this
contradiction:
- the limit value of the stress due to pressure on the flanks was actually greater than the
calculated value;
- the question of wear may predominate in the case of wheels with few teeth and hence a
high modulus, since they generate higher sliding speed and contact pressure values. The
calculations, in fact, show that the problem raised by this pressure decreases in
importance as the number of teeth increases.
Three conclusions, therefore, can be drawn from these results:
a) all the wheels examined can be used for up to 100 hours without problems;
b) all the wheels can be accepted for up to 1,000 hours, though the wear limits for those
with few teeth are now close;
c) only wheels with a large number of teeth can be adopted for periods of 1,000 to
10,000 hours. Here, too, the limit values for contact pressure and stress at the base of the
tooth are nearly reached.
5. ECONOMIC ASSESSMENTS
Costs were assessed for the two geometrical combinations ensuring attainment of the
maximum service life, assuming an output of 35,000 pieces per year and amortisation
over the course of 4 years. The use of a one- or two-cavity injection mould was also
considered. The first solution ensures the geometrical identity of the moulded wheels
(within the tolerance limits of the manufacturing process); the second results in a lower
per-unit cost (Table II). In order to evaluate the plastic gear cost and to simulate the
injection process, the "Plastics & Computer International" (Milan, Italy) software was
used, especially the MCO (Moulding and Cost Optimization) program. The
manufacturing schedule envisages comoulding of the plastic cogged wheel on a steel
drive shaft. The wheel is held on the shaft by a groove and knurling. The cost estimates
for the two combinations and the two moulding solutions set out in Table II indicate that
this hybrid polyamide/steel drive offers a saving of about 25% compared with the current
cost of about 10,000 per piece for an all-steel equivalent.
6. CONCLUSIONS
Two conclusions can be drawn from the calculations carried out in this study:
- replacement of steel by a polyamide resin reinforced with glass fibres is a feasible
solution within the limits of the requirements imposed, and the tooth life is more than
acceptable;
- this alternative is economically sound, since the higher cost of the equipment needed for
the injection mould is fully offset by a shorter manufacturing schedule.
Fig. 1. Bending stress at the tooth base as a function of gearwheel dimensions (stress in
MPa plotted against the case number).
Fig. 2. Stress due to flank contact as a function of gearwheel dimensions (limiting value:
34.4 MPa for 10^9 cycles).
TABLE II

CONDITIONS                          TOTAL COST (/piece)
                                    7365
                                    7275
                                    7500
                                    7370
REFERENCES
1. Crippa G., Davoli P.: Comparative Fatigue Resistance of Fiber Reinforced Nylon 6
Gears, ASME Sixth International Power Transmission and Gearing Conference, Phoenix,
1992
2. Niemann G., Winter H.: Elementi di macchine, Vol. II, Edizioni di Scienza e Tecnica,
Milano, 1986
3. Favro S.: Progettazione e fabbricazione di particolari meccanici in plastica rinforzata
con fibre, Graduation Thesis in Mechanical Engineering, tutor Augusto De Filippi
4. Crippa G., Davoli P.: Fatigue Resistance of Nylon 6 Gears, ASME Fifth International
Power Transmission and Gearing Conference, Chicago, 1989
5. Du Pont: Design Handbook, 1992.
The Graduation Thesis of dott. ing. S. Favro obtained an award from
ASSOFLUID (the Italian association of pneumatic and hydraulic devices builders).
R. Cebalo
Polytechnic of Turin, Turin, Italy
T. Udiljak
NYLTECH Italia, Milan, Italy
KEY WORDS: Tool Wear, Artificial Tool Wear, Cutting Forces, Surface Roughness
ABSTRACT: This work presents the results of an investigation conducted at the Machine Tool Department
of the Faculty of Mechanical Engineering and Naval Architecture, Zagreb, Croatia. The aim of the
investigation was to examine the influence of ground tool flank wear on cutting forces, surface roughness
and the coefficient of chip deformation (compression). The tests were done on a lathe under ordinary and
orthogonal cutting conditions, applying uncoated carbide inserts with and without ground flank wear.
Comparison between the influence of ground and ordinary tool wear shows similarities when dealing with
cutting forces, while the influence on surface roughness is difficult to compare. The tests were done on
material Ck 15 (DIN 17006).
1. INTRODUCTION
The lack of a clear and generally accepted physical explanation of the tool wear process, the
continuously increasing number of CNC machine tools and FMS, as well as the significant growth
in the use of other types of unmanned manufacturing equipment, emphasize the significance of
investigation of machining processes. The stochastic character of the cutting process and random
deviations of the cutting conditions demand a large number of experiments and the application of
adequate statistical data processing. A higher level of automation, the number of new materials for
production equipment, and new tool materials and tool geometries imply a need for fast, reliable,
and economically feasible procedures for tool life testing and tool monitoring. Such procedures
have their significance as methods for reaching predicted tool life, as well as for validating or
estimating the mathematical models necessary for monitoring of the cutting tool. This article deals
with some aspects of the possibility to replace ordinary wear with ground wear in order to make
testing procedures more economical.
Reported findings on tool condition monitoring (TCM) systems include:
- nearly 80% of TCM systems are used for monitoring of tool wear and tool breakage;
- all industrial applications of TCM systems are based on indirect methods for TCM;
- more than 80% of TCM failures have their cause in wrong selection, operator errors and
interfacing;
- today's TCM systems are moving in the direction of a multi-sensor approach.
This is one more proof that our ability to describe the tool wear process exactly in
mathematical terms is limited. At the same time it is a further impulse for investigation of tool
wear, and of the cutting process generally.
Figure 2. Shares of machining processes: drilling 45%, turning 38%, grinding 3%, others 6%,
the remaining share being milling.
3. EXPERIMENT
Cutting forces are very often used to indicate the condition of the cutting tool. They are easy to
measure, and are good parameters for evaluation of the cutting process generally. Surface roughness
is also a very valuable source of information on the cutting process and, together with workpiece
dimensions, is also very often used. Trying to reduce the experiment expenses and experiment
time, this article deals with the possibility to replace ordinary wear with ground tool flank wear in
order to find the relations between tool wear and cutting forces, and between tool wear and
surface roughness.
Two experiments were conducted, the first one for measuring cutting forces, and the second one
for measuring surface roughness. In both cases a central composite experiment with three
independent variables on two levels, 2^3, was applied. Cutting speed vc, feedrate f, and flank
wear VB were selected as independent variables, while all three components of the cutting force and
some parameters of surface roughness were measured as output variables.
3.1 Experimental conditions
Workpiece:
material: Ck 15
diameter: D = 40 mm (for cutting forces); D = 50 mm (for surface roughness)
Cutting tool:
a) holder: SCLCR 1616H09; insert: CCMW 09T304 (SINTAL, Zagreb)
b) holder: STGCR 1616H11; insert: TCMW 110204 (SINTAL, Zagreb)
basic geometry elements of the cutting edge: α = ?°; γ = 10°; λ = 0°
insert grade: SV25 (SINTAL's mark, corresponding to ISO P20-P30)
Machine tool:
For the applied design of experiment (central composite experiment with three independent
variables on two levels, 2^3), the mathematical model according to equation (1) is used:

R = C · Π(i=1..3) xi^pi                    (1)

The values for the independent variables are presented in Table 1. The minimal value of flank wear is
0 mm (new insert), which cannot be used for the calculations, so the value (1 - VB) is used. The
experimental results for cutting forces are compared with forces calculated according to the
Kienzle equation. The applied depth of cut was 2 mm for cutting forces, and 1.5 mm for surface
roughness. Because previous experiments [11] proved that ground tool flank wear has little or no
influence on the passive force, orthogonal cutting was applied for measuring the cutting forces.
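Because the two-level factorial is orthogonal in log coordinates, each exponent of a multiplicative model of the form (1) can be estimated as a simple high/low contrast of mean log R. The sketch below demonstrates this on synthetic force data; the "true" exponents and the constant 3700 are invented for illustration, and only the factor levels are taken from Table 1.

```python
import math, itertools

# Factor levels from Table 1 (cutting-force experiment); flank wear enters
# as (1 - VB), as explained above.
levels = {"vc": (51.5, 106.8), "f": (0.16, 0.30), "w": (1 - 0.0, 1 - 0.5)}

def fake_force(vc, f, w):
    # Assumed 'true' model, used only to generate illustrative data.
    return 3700.0 * vc ** -0.05 * f ** 0.80 * w ** -0.08

runs = [(combo, fake_force(*combo))
        for combo in itertools.product(*levels.values())]

# In a two-level factorial the log model R = C * x1^p1 * x2^p2 * x3^p3 is
# orthogonal, so each exponent is a contrast of mean log R between levels.
exponents = {}
for i, (name, (a, b)) in enumerate(levels.items()):
    log_b = [math.log(R) for combo, R in runs if combo[i] == b]
    log_a = [math.log(R) for combo, R in runs if combo[i] == a]
    exponents[name] = (sum(log_b) / 4 - sum(log_a) / 4) / math.log(b / a)
    print(f"p_{name} = {exponents[name]:+.3f}")
```

Since the synthetic data follow an exact power law, the contrasts recover the assumed exponents (-0.05, 0.80, -0.08) exactly.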
Table 1. The values for the independent variables

                 Cutting forces                        Surface roughness
                 vc [m/min]   f [mm/r]   VB [mm]       vc [m/min]   f [mm/r]   VB [mm]
Maximal value    106.8        0.3        0.5           165          0.3        0.5
Minimal value    51.5         0.16       0             83           0.16       0
Medium value     75.4         0.22       0.3           118          0.22       0.3
4. RESULTS
4.1 The influence of flank wear on cutting forces
After processing of the experimental data, the parameter values for the applied mathematical
model (2) are presented in Table 2:

F = C · vc^p1 · f^p2 · (1 - VB)^p3                    (2)
Table 2. Parameter values for the mathematical model of cutting forces

        Cutting force Ff     Cutting force Fc
C       3688.08              2953.13
p3      -0.0803              -0.0976
With the data in Table 2, the difference between the feed force for VB = 0 and the feed force for
VB = 0.5 mm is 5.7% for insert CCMW, and 7% for insert TCMW. According to the extended
Kienzle equation, the influence of tool wear on cutting forces goes up to 30%, which has not been
confirmed by this experiment.
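The quoted percentages follow directly from the model: in the ratio of forces at the two wear levels all other factors of (2) cancel, leaving only the (1 - VB)^p3 term with the exponents of Table 2. A quick check:

```python
# Ratio of feed force at VB = 0.5 mm to feed force at VB = 0 (new insert),
# using F ~ (1 - VB)^p3; the two p3 values are those of Table 2.
def wear_ratio(vb, p3):
    return (1.0 - vb) ** p3

for label, p3 in (("p3 = -0.0803", -0.0803), ("p3 = -0.0976", -0.0976)):
    r = wear_ratio(0.5, p3)
    print(f"{label}: ratio = {r:.4f}  (+{(r - 1) * 100:.1f}%)")
```

This reproduces the 5.7% and 7% differences quoted in the text.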
Figure 1. Feed force Ff [N] as a function of cutting speed vc [m/min].
Maximum roughness Rmax as a function of flank wear VB: experimental values (Rmaxe),
mathematical model (Rmaxm) and theoretical model (Rmaxt).

Rmax,m = 106.69 · f^0.9126                        (3)
Rmax,t = f^2 / (8 rε (1 - VB tan α))              (4)
5. CONCLUSION
This article investigate some aspects of possibility to replace ordinary tool flank wear with
grinded tool flank wear for orthogonal turning.
Experiments showed that for cutting processes with dominant flank wear, obtained results
are mostly in good agreement with results obtained with ordinary weared cutting tools, ahhough
the influence of tool wear is less then for non orthogonal cutting. For conducted experiment
grinded flank wear was of no influence for the coefficient of deformation or the chip forms.
Comparison between influence of grinded and ordinary tool wear shows similarities when
dealing with cutting forces, while the influence on surface roughness is difficult to compare. The
flow for R.nax calculated according to mathematical model and theoretical model is very similar,
while the experimental results are different. The replacement of ordinary flank wear with
grinded flank wear could be recommended for cutting processes when subject of
investigation is the influence of flank wear on feed force F1 (and partly cutting force Fe ).
For more reliable application, the experiments should be performed for variety of workpiece
materials, tool geometry, and tool materials.
REFERENCES
1. Byrne, G., Dornfeld, D., Inosaki, I., Kettler, G., König, W., Teti, R.: Tool Condition
Monitoring (TCM) - The Status of Research and Industrial Application, Annals of the
CIRP, 44(1995)2, 541-568
2. Kluft, W.: Tool monitoring systems for turning (in German), doctoral thesis, TH Aachen,
1983.
3. Udiljak, T.: Tool wear - necessity to predict and to monitor, 6th DAAAM Symposium,
Krakow, 1995, 335-336
4. Waschkies, E., Sklarczyk, C., Hepp, K.: Tool Wear Monitoring at Turning, Journal of
Engineering for Industry, Vol. 116, 1994, 521-524
5. Chen, N.N.S., Pun, W.K.: Stresses at the Cutting Tool Wear Land, Int. J. Mach. Tools
Manufact., Vol. 28, No. 2, 1988
6. Cebalo, R.: Identification of materials and automatic determination of cutting data for
turning (in Croatian), Suvremeni trendovi proizvodnoga strojarstva, Zagreb, 1992, 15-20
7. Kuljanic, E.: Effect of Stiffness on Tool Wear and New Tool Life Equation, Journal of
Engineering for Industry, Transact. of the ASME 9, Ser. B, 1975, 939-944
8. Colgan, J., Chin, H., Dana, K., Hayashi, S.R.: On-Line Tool Breakage Detection in
Turning: A Multi-Sensor Method, Journal of Engineering for Industry, 116, 1994, 117-123
9. Ravindra, H.V., Srinivasa, Y.G., Krishnamurthy, R.: Modelling of tool wear based on
cutting forces in turning, Wear, 169, 1993, 25-32
10. Damodarasamy, S., Raman, S.: An inexpensive system for classifying tool wear states
using pattern recognition, Wear, 170, 1993, 149-160
11. Udiljak, T.: On possibilities for investigating the influence of tool wear on cutting
forces, 4th DAAAM Symposium, Brno, 1993, 355-356
KEY WORDS: Tube Bulging, Tube Forming, Upper Bound, Normality Rule
ABSTRACT: Based on fundamental plasticity (the normality rule, the convexity of the
yield function and material incompressibility) it becomes clear that it is possible to
locate a distinctive loading path on the plastic yield function at which bulging of a tube (by
internal fluid pressure along with axial wall compression) can be performed without
thinning of the tube wall. At this loading point the principal stress ratio remains β = -1
(representing pure shear) and consequently the original thickness is forced to remain
unchanged throughout the bulging process. Testing this hypothesis is the aim of this
paper. The experiments were done on aluminum (Al 5052-O) tubes, using a specially
dedicated machine.
1. INTRODUCTION
The bulging of thin-walled tubes can be considered a typical example of a sheet forming
process. Its uniform deformation is limited by the eventual occurrence of plastic instability
(such as localized wall thinning, rupture, etc.). There are various ways by which such
phenomena in sheet forming are analyzed, depending on the criteria used. For instance,
Swift's 'diffuse necking' criterion [1] is based on the point at which the overall stretched
sheet starts to unload. Hill's localized necking of a rigid-plastic solid [2] is based on a
bifurcation stress with a conjugate orientation of an extension-free line in the plane of the
sheet (if it exists). Marciniak and Kuczynski [3,4] (shortly M-K) postulated the
existence of a 'band of imperfection' across the sheet inside which the strain state rotates
to plane strain.
2. DESCRIPTION
The schematic bulging process is shown in Fig. 1.

Fig. 1: The general scheme of bulging before and after the operation (initial and final
stages).
The von Mises yielding is formulated in terms of the two in-plane stress components
(drawn in the above figure), which reads

σx^2 - σx σθ + σθ^2 = σ0^2                    (1)

The point (d) on the associated ellipse represents the desired stress ratio where, by the
normality rule, the two acting strains are equal in magnitude and opposite in sign; namely,
the stress ratio β = σx/σθ = -1 leads to the strain ratio of ρ = -1, which is

εx = -εθ                    (2)

In view of the material incompressibility, εx + εθ + εt = 0, the third strain component εt
(the strain across the tube thickness) remains automatically zero. It means that a loading
path which can be sustained at point (d) (given in Fig. 2) assures that no thinning will
arise and the bulging can proceed, essentially without limits. Such a peculiar path is
feasible if we can actually produce simultaneously a positive hoop stress σθ (by internal
fluid pressure) and a meridional compressive stress σx (by external compressive load) in
a ratio of β = -1.
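The normality argument can be checked numerically: the plastic strain increments are proportional to the gradient of the yield function (1), and incompressibility then fixes the thickness strain. A minimal sketch (the stress magnitude of 100 is arbitrary):

```python
# Gradient of the plane-stress von Mises function f = sx^2 - sx*st + st^2;
# by the normality rule the plastic strain increments are proportional to it.
def strain_increments(sx, st):
    d_ex = 2.0 * sx - st        # df/d(sigma_x), meridional increment
    d_et = 2.0 * st - sx        # df/d(sigma_theta), hoop increment
    d_ez = -d_ex - d_et         # thickness increment from incompressibility
    return d_ex, d_et, d_ez

# Loading point (d): sigma_x = -sigma_theta, i.e. stress ratio beta = -1
ex, et, ez = strain_increments(-100.0, 100.0)
print(ex / et, ez)   # strain ratio rho = -1 and zero thickness strain
```

At any other stress ratio the same function returns a nonzero thickness increment, which is why point (d) is the only path along which the wall neither thins nor thickens.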
Fig. 3: The two admissible deformation models of the tube bulging process: the
continuous and non-continuous models.
Two shape descriptions of the bulged profile were adopted: a continuous profile (3) and a
profile deforming by plastic flow with hinges (4). It turned out, as shown in Figs 4a and 4b
below, that both presentations (3) and (4) agree quite well with the measurements of
bulged specimens (with some superiority to the plastic flow with hinges described by (4)).
Fig. 4: a) Comparison between the simulated configuration and the actual shape (lines:
analysis; points: experiment); b) Comparison between the geometrical changes in the two
deformation models.

Either of them can suit the purpose of devising an admissible velocity field, given as
expression (5).
From the strain rate definition and the plastic incompressibility one gets the other strain
rate components, expression (6).
By using the above velocity field one can reach the upper bound expressions for the
internal fluid pressure p_in and the axial compressive pressure p_ax, denoted here as (7)
and (8); the associated algebra is given in Ref. 8.
The geometrical notation is given in Figs. 1 and 3. The material parameters are defined in
the next paragraph. The physical meaning of the solutions (7, 8), i.e. how the two
pressures should vary simultaneously in order to follow a certain stress ratio, is given in
Fig. 5.
Fig. 5: The theoretical requirement for the magnitude of the two power sources of the
bulging machine in order to provide a bulging operation with a constant principal stress
ratio β (employing the non-continuous model (4)).
4. EXPERIMENTAL STUDY
The prime intention was to examine the technological benefit (if any) which may arise
from working at a negative strain ratio of ε_min/ε_max = -1 (or at least beyond ε_min/ε_max
= -1/2). A series of experiments was conducted on a specially built machine capable of
supplying both internal fluid pressure and external axial load, independently. The
material of the tubes was aluminum (Al 5052-O), with a length of 50 mm, external
diameter of 1" (25.4 mm) and thickness of 0.035" (0.889 mm). The material properties
were: yield stress 90 MPa, ultimate stress 195 MPa, maximum elongation of 25
percent. By using power law hardening, the constitutive equation becomes

σ = K ε^n                    (9)

To facilitate the strain measurements, the tubes were imprinted with small circles and/or
squares by photochemical plating. The lubricant along the sliding interfaces was the
commercial Molykote (a paste of oil with MoS2). Each bulging process followed a pre-
computerized path for the internal pressure and the axial load, to provide a constant stress
ratio. The stress ratios which were tested are: β = -1, -1/2, 0.
β = -1/2.
It seems, though, that thicker tubes could have delayed the occurrence of early buckling,
but the associated limit analysis, given here, would then have become less precise by
deviating from the plane stress condition on which the analysis is based.
2) The aspiration to reach 'infinite strain' was invoked from the normality rule at
point (d) on the von Mises locus shown in Fig. 2. Only at this point, due to
incompressibility, the strain in the thickness direction should stay null. This peculiar
condition has its equivalence in Hutchinson and Neale's bifurcation analysis [6,7], where
the strain which minimizes the bifurcation stress was found to be

ε* = n / (1 + ρ)                    (10)

Therefore, at point (d), where ρ = -1, the predicted strain according to (10) is indeed
infinite.
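Taking (10) in the form ε* = n/(1 + ρ), an assumption consistent with the infinite strain predicted at ρ = -1, the predicted limit strain grows rapidly as the strain ratio approaches -1:

```python
# Limit strain from the bifurcation result (10), assumed here to read
# eps* = n / (1 + rho); it diverges as rho -> -1.
def limit_strain(n, rho):
    if rho <= -1.0:
        return float("inf")
    return n / (1.0 + rho)

n = 0.22  # strain-hardening exponent quoted for the tested Al 5052-O tubes
for rho in (0.0, -0.5, -0.9, -0.99):
    print(f"rho = {rho:+.2f}  ->  eps* = {limit_strain(n, rho):.2f}")
```

At ρ = -1/2 this form gives ε* = 0.44, roughly ten percentage points below the 52% measured in sound products reported in point 3.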
3) The limit strain 'near' the stress path of ρ = -1/2 (which is β = 0) was terminated by
strain localization followed by rupture ('near' means that the unmeasurable frictional
shear factor 'm' was assumed to be m = 0.05). This is shown in Fig. 7 below:

Fig. 7: Strain localization before the onset of rupture at β = 0, and a non-failed product.

The measured strain in sound products had reached the level of ε_max = 52%. It is
about 10 percent above the numerical value calculated by (10) based on the strain
hardening exponent of the tested tubes (n = 0.22 for Al 5052-O).
4) In spite of our experimental effort to reach 'infinite strain', it is seen (from the limited
data collected till now in Fig. 8) that the recommended 'working zone' for bulging of
thin tubes is the zone subtended in the vicinity of the path of ρ = -1/2 (β = 0).

Fig. 8: The recommended 'working zone' (bright white) for bulging processes with Al
5052-O (major strain [%] plotted against the strain path; buckling data marked).
5) The same conclusion was reached by similar work in Japan (as reported by Ken-ichi
Manabe et al. [11]) using aluminum tubes having somewhat different properties (Al
1050-H18, Al 1070-H18) and different sizes. The results of both works are given in
Fig. 9.

Fig. 9: The optimal path for maximizing the forming limit strain happens to be near
β = 0.
6) Apparently, there is no reason (besides the technical risk of premature buckling) why it
is not possible, in principle, to go beyond ρ = -1/2 and to get closer to the ratio of
ρ = -1, at which 'infinite' strain is a theoretical outcome.
REFERENCES
1. Swift, H.W., "Plastic Instability Under Plane Stress", J. Mech. Phys. Solids, vol. 1,
pp. 1-18, 1952.
2. Hill, R., "On Discontinuous Plastic States, with Special Reference to Localized
Necking in Thin Sheets", J. Mech. Phys. Solids, vol. 1, pp. 19-30, 1952.
3. Marciniak, Z., and Kuczynski, K., "Limit Strains in The Processes of Stretch-Forming
Sheet Metal", Int. J. Mech. Sci., vol. 9, pp. 609-620, 1967.
4. Marciniak, Z., Kuczynski, K., and Pokora, T., "Influence of The Plastic Properties
of a Material on The Forming Limit Diagram For Sheet Metal", Int. J. Mech. Sci.,
vol. 15, pp. 789-805, 1973.
5. Marciniak, Z., and Duncan, J., "Mechanics of Sheet Metal Forming", Edward Arnold
Pub., 1992.
6. Hutchinson, J.W., and Neale, K.W., "Sheet Necking: Validity of Plane Stress
Assumption of the Long Wavelength Approximation", in "Mechanics of Sheet Metal
Forming" (Eds. Koistinen, D.P. and Wang, N.M.), Plenum Press, pp. 111-150, 1978.
7. Hutchinson, J.W., and Neale, K.W., "Sheet Necking: Strain Rate Effects", in
"Mechanics of Sheet Metal Forming" (Eds. Koistinen, D.P. and Wang, N.M.),
Plenum Press, pp. 269-283, 1978.
8. Tirosh, J., Neuberger, A., and Shirizly, A., "On Tube Bulging by Internal Fluid
Pressure with Additional Compressive Stress", to appear in Int. J. Mech. Sci., 1996.
B. Smoljan
University of Rijeka, Rijeka, Croatia
One of the most effective methods of prior austenitic grain size refinement is heat
cycling. Grain size refining of steel by heat cycling is based on repeated alpha-gamma
phase transformations [2][3].
To obtain better effects of structure refining and greater accumulation of single cycle effects,
a sample can be rapidly heated without holding at the maximum temperature for austenite
homogenization. This implies extra effects. At fast specimen heating, beyond the phase
transformation, phase and temperature micro-deformation and thermal-diffusional
Temperature ϑ/°C versus time t/s diagrams: a) isothermal annealing; b) cyclic annealing;
c) cyclic heat treatment.
No   Heat treatment          ϑmax/°C   Time at ϑmax, t/min   ϑmin/°C   Heating rate vh/°C min^-1   Cooling rate vc/°C min^-1   Cycles number n
1    Isothermal annealing    830       60                    600
2    Cyclic annealing        780                             680
3    Cyclic heat treatment   800                             650       300                         80
σ = K ε^n
where:
σ - true stress, N mm^-2
ε - true strain
K - strength coefficient, N mm^-2
n - strain hardening exponent
The increase of steel formability obtained by cyclic heat treatment was estimated by
comparison with the increase of steel formability obtained by isothermal annealing. The steel
formability has been estimated on the basis of the true stress-strain curve. The stress-strain
curves were determined by a compression test. The true stress-strain curve has been
determined by using a specimen with circular cross section (Fig. 2). Friction forces were
reduced by filling up the holes with paraffin.
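The strength coefficient K and strain hardening exponent n are obtained by fitting the power law in logarithmic coordinates, where it becomes a straight line. A minimal sketch; the (ε, σ) pairs below are synthetic, generated from assumed values K = 844 N mm^-2 and n = 0.225 rather than from measured data:

```python
import math

# Synthetic true-strain / true-stress pairs following sigma = K * eps^n.
K_true, n_true = 844.0, 0.225
data = [(eps, K_true * eps ** n_true) for eps in (0.1, 0.2, 0.4, 0.6)]

# ln(sigma) = ln(K) + n * ln(eps): an ordinary least-squares line fit
# in log coordinates returns the two hardening parameters.
xs = [math.log(e) for e, _ in data]
ys = [math.log(s) for _, s in data]
m = len(data)
n_fit = (m * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (m * sum(x * x for x in xs) - sum(xs) ** 2)
K_fit = math.exp((sum(ys) - n_fit * sum(xs)) / m)
print(f"K = {K_fit:.0f} N/mm^2, n = {n_fit:.3f}")
```

Because the synthetic data follow the power law exactly, the fit returns the assumed K and n; with real compression-test data the regression coefficient r of the table below measures the quality of this straight-line fit.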
Fig. 2. Specimen
Results for the strength coefficient and strain hardening exponent obtained by the different
heat treatments are as follows:
No   Heat treatment                Strength coefficient K/N mm^-2   Strain hardening exponent, n   Regression coefficient, r
1    Isothermal annealing (IA)     915                              0.18                           0.962
2    Cyclic annealing (CA)         844                              0.225                          0.966
3    Cyclic heat treatment (CHT)   806                              0.19                           0.953
True stress [N mm^-2] versus true strain ε curves for the cyclic heat treatment (CHT),
isothermal annealing (IA) and cyclic annealing (CA) specimens.
4. CONCLUSION
The increase of steel plasticity is a direct function of structure fineness. By application of
the cyclic heat treatment, the plasticity and the strain-hardening exponent increase and the
strength coefficient decreases. The increase of the strain-hardening exponent is smaller than
the decrease of the strength coefficient. Better formability properties of steel were achieved
by the application of cyclic heat treatment than by the conventional procedure of steel
softening.
REFERENCES
[1] Anashkin, A. et al.: Heat Cycling of Carbon Steel Wire, Metal Science and Heat
Treatment of Metals, 1987, vol. 2, pp. 10-14.
[2] Fedyukin, V.: Cyclic Heat Treatment of Steels and Cast Irons, Leningrad State Univ.,
Leningrad, 1984, pp. 51 (in Russian).
[3] Konopleva, E.: Thermal Cycling Treatment of Low-Carbon Steel, Metal Science and
Heat Treatment of Metals, 1989, vol. 8, pp. 617-621.
[4] Smoljan, B.: Contribution on Investigation of the Thermal Cycling Treatment of Steel,
9th International Congress on Heat Treatment and Surface Engineering, Nice, 1994.
[5] Kleemola, H.: On the Strain-Hardening Parameters, Metallurgical Transactions, 5,
1973, pp. 1863-1866.
[6] ..., ASTM E646-84, Tensile strain-hardening exponents of metallic sheet materials.
[7] Rose, A. et al.: Atlas zur Wärmebehandlung der Stähle I, Verlag Stahleisen, Düsseldorf,
1958, pp. 128-134.
[8] Barsom, J. and Rolfe, S.: ASTM STP 466, American Society for Testing and Materials,
1970, pp. 281.
[9] Smoljan, B.: Thermal Cycling Treatment of Steel for Quenching and Tempering,
AMST'93, Udine, 1993, pp. 183-189.
A. Strozzi
University of Modena, Modena, Italy
KEY WORDS: Circular Plates, Deflections, Non Symmetrical Supports, Integral Equations
ABSTRACT: A mechanical analysis is presented for a solid circular plate, deflected by a transverse
central force, and simply supported along two antipodal periphery arcs, the remaining part of the
boundary being free. By exploiting a Green function expressed in analytical form, the original
problem is formulated in terms of a Fredholm integral equation of the first kind, where the kernel is
particularly complex. This initial formulation is then simplified, and two descriptions of this problem
in terms of integral equations are achieved. In the first description, this plate problem is reformulated
as an integral equation encountered in wing theory. Then, the same problem is expressed as a
Fredholm integral equation of the second kind. Preliminary approximate analytical results and
experimental measurements are also reported.
1. INTRODUCTION
The problems of thin circular plates deflected by transverse loads may be classified into four
groups. The first set comprises circular plates axisymmetrically loaded and supported, a
problem for which several closed form solutions are available, which are applicable to a
variety of load distributions and types of constraining, [1]. The second class includes
circular plates axisymmetrically constrained, but non symmetrically loaded. Particularly
studied is the situation of concentrated loads, for which a solution technique of general
applicability, based upon a series expansion of the load, is available, [2-4]. The third group
encompasses circular plates loaded and constrained non symmetrically, a problem examined
sporadically and only for particular situations [1]. Finally, the fourth category includes
circular plates loaded axisymmetrically but non-symmetrically constrained. Only a few
papers dedicated to this problem are traceable in the technical literature, often based on
numerical treatments rather than on analytical approaches, and therefore furnishing results
valid only for particular geometries. A solid circular plate supported along edge arcs has
been studied in [5] with a numerical approach, and in [6] with a series expansion technique.
In [7] an approximate series solution is obtained for a plate sustained by angularly
equispaced supports. Finally, in [8] similar problems are treated from a mathematical
viewpoint.
In this paper a situation falling into the fourth group is examined, namely a solid
circular plate deflected by a transverse central load, and bilaterally simply supported along
two antipodal edge arcs, Figure 1. The Green function for this problem is connected to a
circular plate deflected by a central transverse load and by two antipodal, equal forces
which equilibrate the central load. The corresponding analytical expressions of the boundary
deflections have been obtained in [9]. By exploiting this analytical Green function, the
original plate problem is formulated in terms of a Fredholm integral equation of the first
kind, where the kernel is particularly complex and therefore not suitable for an analytical
solution. By analytical treatment, this original formulation is then simplified, and an integral
equation with Hilbert kernel, similar to an equation encountered in wing theory, is obtained.
Starting from this result, two additional integral equations are derived, the first equation
coinciding with that of wing theory, and the second equation being a Fredholm integral
equation of the second kind. Although no closed form solutions could be obtained so far for
the problem under scrutiny, the relative simplicity of the integral equations here derived
justifies efforts in this direction.
Preliminary approximate analytical results and experimental measurements extracted
from [10] are also reported.
2. THE FORMULATION IN TERMS OF INTEGRAL EQUATION
The edge deflection, w(θ), of a solid circular plate loaded by a central transverse force, P,
and by two antipodal boundary loads P/2 equilibrating the central load is (Dragoni and
Strozzi 1995, ref. [9]):

w(θ) 48πD(3+ν)(1−ν²) / r₀² = …   (1)
(The maximum value for α is π/2, for which the plate becomes simply supported along its
whole edge.) In fact, the distribution of the reaction force along the edge supports can be
interpreted as an infinite series of infinitesimal edge forces. The reaction force distribution is
correct when the edge deflection remains constant along the edge support, which is
assumed as rigid. It is therefore possible to express this problem by superposing the various
effects of the infinitesimal loads, and by imposing the constancy of the plate edge deflections
along the supports. The unknown function is the reaction force distribution. The
superposition of the effects leads to an integral equation.
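The superposition scheme just described can be sketched numerically: discretise the unknown reaction distribution at nodes on the support arc, require an equal (unknown) deflection at the collocation points on the rigid support, and append the load-balance condition on the resultant. The kernel below is a hypothetical logarithmic stand-in, not the actual Green function of this problem, and `solve_reaction`, the node count and the load value are illustrative assumptions:

```python
import math

def solve_linear(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_reaction(a=0.5, P=1.0, n=24):
    """Collocation sketch: int K(theta, omega) F(omega) d omega = w0 (constant),
    plus the balance int F(omega) d omega = P/2. K is a stand-in kernel."""
    h = 2 * a / n
    omega = [-a + (j + 0.5) * h for j in range(n)]     # midpoint nodes on (-a, a)
    # Hypothetical logarithmic kernel, regularised on the diagonal
    K = [[math.log(abs(2 * math.sin((ti - wj) / 2)) + 1e-12) for wj in omega]
         for ti in omega]
    # Unknowns: F_1..F_n and the rigid-support deflection w0
    A = [[K[i][j] * h for j in range(n)] + [-1.0] for i in range(n)]
    A.append([h] * n + [0.0])                          # load balance row
    b = [0.0] * n + [P / 2]
    sol = solve_linear(A, b)
    return omega, sol[:n], sol[n]

omega, F, w0 = solve_reaction()
h = 2 * 0.5 / 24
print(abs(sum(F) * h - 0.5) < 1e-8)                    # resultant equals P/2
```

The appended balance row plays the role of the resultant condition on the support reaction.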
In the following, the reaction force distributed along the edge supports is denoted
by F(θ). As already mentioned, the Green function describes a plate loaded by two
antipodal forces and by an equilibrating central load. The infinitesimal force corresponding
to the angular extent dθ is therefore F(θ) r₀ dθ, which in the Green function model is
associated with an antipodal force of the same intensity and with an equilibrating central load of
double intensity, 2 F(θ) r₀ dθ. A unity edge load is therefore connected to a central force
of intensity 2. In other words, the resultant force of each support equals P/2. As a result,
in the integral description of the title problem, the factor 48 on the left-hand side of equation
(1) must be replaced by 24. The integral equation is:
w(θ) 24πD(3+ν)(1−ν²) / r₀² = ∫_{−α}^{+α} {48(1+ν) [ln(2 sin(θ−ω)) − cos(θ−ω) ln(tan((θ−ω)/2))] + …} F(ω) dω + …   (2)
In order to simplify the integral equation (2), it is useful to compute its derivatives with
respect to θ. The first derivative of the left-hand side, which represents the normalised
deflection of the supported edge with respect to the plate centre, is:

dw(θ)/dθ 24πD(3+ν)(1−ν²) / r₀² = ∫_{−α}^{+α} {…(1+ν)[sin(…)…] − 24(1+ν)² (θ−ω)} F(ω) dω + 12(1+ν)² …   (3)
The second derivative of the normalised deflection of the supported edge with
respect to θ is:

d²w(θ)/dθ² 24πD(3+ν)(1−ν²) / r₀² = ∫_{−α}^{+α} {…} F(ω) dω + …   (4)
The third derivative with respect to θ is:

d³w(θ)/dθ³ 24πD(3+ν)(1−ν²) / r₀² = ∫_{−α}^{+α} {48(1+ν) [−sin(θ−ω) ln(tan((θ−ω)/2)) + 1/tan(θ−ω)]} F(ω) dω + …   (5)
d³w(θ)/dθ³ 24πD(3+ν)(1−ν²) / r₀² = ∫_{−α}^{+α} {48(1+ν) [1/tan(θ−ω)] − 24(1+ν)² (θ−ω)} F(ω) dω + …   (6)
Since on a physical basis the function F(θ) is symmetric, some further
simplifications are possible. In particular:

∫_{−α}^{+α} ω F(ω) dω = 0   (7)

∫_{−α}^{+α} (θ − ω) F(ω) dω = θ ∫_{−α}^{+α} F(ω) dω − ∫_{−α}^{+α} ω F(ω) dω = θ ∫_{−α}^{+α} F(ω) dω   (8)
The resultant of each boundary reaction force F(θ) equals half the central load, P,
supposed known:

∫_{−α}^{+α} F(ω) dω = P/2   (9)
In addition, since the support is assumed rigid with respect to the plate flexibility,
the left-hand side of equation (6) vanishes. After simplifying the various constants, the
simplest form of the integral equation is:

∫_{−α}^{+α} [1/tan(θ − ω)] F(ω) dω = (1+ν) {θP/8 − π F(θ)}   (10)
∫_{−α}^{θ} F(ω) dω = G(θ) ;  ∫_{−α}^{ω} F(τ) dτ = G(ω) ;  dG(ω)/dω = F(ω)   (11)

In terms of the function G, equation (10) becomes:

∫_{−α}^{+α} [1/tan(θ − ω)] (dG(ω)/dω) dω = …   (12)

Since dG(ω)/dω = F(ω) is even in ω, the integral can be folded onto (0, α):

∫_{−α}^{+α} [1/tan(θ − ω)] (dG(ω)/dω) dω = ∫_{0}^{+α} [1/tan(θ − ω) + 1/tan(θ + ω)] (dG(ω)/dω) dω   (13)

Since:

1/tan(θ − ω) + 1/tan(θ + ω) = 2 sin(2θ) / (cos(2ω) − cos(2θ))   (14)

one obtains:

∫_{0}^{+α} [2 sin(2θ) / (cos(2ω) − cos(2θ))] (dG(ω)/dω) dω = (1+ν) {θP/4 − π G(θ)}   (15)

By putting:
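The cotangent-kernel identity (14) used in this folding step can be checked directly from the product-to-sum formulas; a sketch in standard notation:

```latex
\cot(\theta-\omega)+\cot(\theta+\omega)
  =\frac{\sin(\theta+\omega)\cos(\theta-\omega)+\cos(\theta+\omega)\sin(\theta-\omega)}
        {\sin(\theta-\omega)\,\sin(\theta+\omega)}
  =\frac{\sin 2\theta}{\sin(\theta-\omega)\,\sin(\theta+\omega)}
  =\frac{2\sin 2\theta}{\cos 2\omega-\cos 2\theta},
```

where the last step uses sin A sin B = ½[cos(A−B) − cos(A+B)] with A = θ−ω and B = θ+ω.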
cos(2ω) = t ;  cos(2θ) = x ;  dt = −2 sin(2ω) dω   (16)

one obtains:

∫_{cos(2α)}^{1} [1/(t − x)] (dG(t)/dt) dt = (1+ν) {π G(x) − P arccos(x)} / 2   (17)
To obtain a standard form for the integral equation (17), it is necessary that its
integration limits be −1 and +1. To this aim, the new integration variables are employed:
t = [(1 − cos(2α))/2] γ + (1 + cos(2α))/2 ;  x = [(1 − cos(2α))/2] β + (1 + cos(2α))/2   (18)
(1/π) ∫_{−1}^{+1} [1/(γ − β)] (dG(γ)/dγ) dγ = − [P (1+ν)(1 − cos(2α)) / (128π)] arccos[(1 − cos(2α)) β/2 + (1 + cos(2α))/2]   (19)
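Equations of this Cauchy-kernel (wing-theory) form are routinely inverted numerically by the Erdogan–Gupta (Gauss–Chebyshev) collocation scheme: write the unknown as φ(γ)/√(1−γ²), take quadrature points at the zeros of T_N, collocation points at the zeros of U_{N−1}, and close the square system with a resultant side condition (compare equation (9)). A minimal sketch; the right-hand side, the resultant value and `solve_airfoil` itself are illustrative assumptions, not part of the paper:

```python
import math

def solve_linear(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_airfoil(g, S, N=12):
    """Erdogan-Gupta collocation for (1/pi) PV int_-1^1 f(y)/(y-b) dy = g(b),
    with f(y) = phi(y)/sqrt(1-y^2) and the side condition int_-1^1 f(y) dy = S."""
    t = [math.cos((2 * k - 1) * math.pi / (2 * N)) for k in range(1, N + 1)]
    x = [math.cos(j * math.pi / N) for j in range(1, N)]
    A = [[1.0 / (N * (tk - xj)) for tk in t] for xj in x]
    rhs = [g(xj) for xj in x]
    A.append([math.pi / N] * N)        # discrete form of int f dy = S
    rhs.append(S)
    return t, solve_linear(A, rhs)

# Check against a known pair: g = 1 with zero resultant gives phi(t) = t exactly.
t, phi = solve_airfoil(lambda b: 1.0, 0.0)
print(max(abs(p - tk) for p, tk in zip(phi, t)) < 1e-8)
```

The discrete quadrature reproduces the principal-value integral exactly for polynomial φ, which is why the check recovers φ(t) = t to machine precision.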
5. THE REDUCTION OF THE INTEGRAL EQUATION (10) TO A FREDHOLM
EQUATION OF THE SECOND KIND
∫_{−α}^{+α} [1/tan(θ − ω)] L(ω) dω = R(θ)   (20)

the solution is of the form:

L(θ) = k α cos θ / [π √(sin²α − sin²θ)] + [1 / (π √(sin²α − sin²θ))] ∫_{−α}^{+α} […/sin(θ − ω)] R(ω) dω   (21)

where k is a generic constant. By treating the right-hand side in equation (10) as known,
one obtains:

dG(θ)/dθ = k α cos θ / [π √(sin²α − sin²θ)] + (1+ν) P / [2π √(sin²α − sin²θ)] − (1+ν) ∫_{−α}^{+α} […/sin(θ − ω)] … dω   (22)
Fig.l. Circular plate deflected by a central transverse load and simply supported along two
antipodal edge arcs.
Fig. 2. Preliminary analytical results and experimental measurements for the plate central
normalized deflection w(2πD)/(P r₀²) with respect to the arc supports, versus the normalized
angular semiwidth of the supports, 2α/π. (Legend: exact theory; approximate theory;
experimental.)
where the first integral from the left could not be computed analytically so far. By
integrating both members with respect to θ, a Fredholm integral equation of the second
kind is finally obtained.
6. CONCLUSIONS
The problem of a solid circular plate deflected by a transverse central force, and simply
supported along two antipodal periphery arcs, the remaining part of the boundary being
free, has been mechanically analysed. By exploiting a Green function expressed in analytical
form, the original problem has been formulated in terms of a Fredholm integral equation of
the first kind, where the kernel is particularly complex. This initial formulation has been
simplified, and two descriptions of this problem in terms of integral equations have been
achieved. In the first description, this plate problem has been expressed as an integral
equation encountered in wing theory. Then, the same problem has been reformulated as a
Fredholm integral equation of the second kind. Preliminary approximate analytical results
and experimental measurements are also reported.
REFERENCES
1. Timoshenko, S.P., Woinowsky-Krieger, S.: Theory of Plates and Shells. McGraw-Hill,
London, 1959
2. Reissner, H.: Über die unsymmetrische Biegung dünner Kreisringplatten. Ingenieur-Archiv, 1 (1929), 72-83
3. Strozzi, A.: Mechanical analysis of an annular plate subject to a transverse concentrated
load. J. Strain Analysis, 24 (1989), 139-149
4. Ciavatti, V., Dragoni, E., Strozzi, A.: Mechanical analysis of an annular plate transversely
loaded at an arbitrary point by a concentrated force. ASME J. Mech. Design, 114 (1992),
335-342
5. Conway, H.D., Farnham, K.A.: Deflections of uniformly loaded circular plates with
combinations of clamped, simply supported and free boundary conditions. Int. J. Mech. Sci.,
9 (1967), 661-671
6. Samodurov, A.A., Tikhomirov, A.S.: Solution of the bending problem of a circular plate
with a free edge using paired equations. P.M.M., 46 (1983), 794-797
7. De Beer, C.: Over cirkelvormige platen, aan den omtrek in de hoekpunten van een
regelmatigen veelhoek ondersteund en over de oppervlakte rotatorisch-symmetrisch belast.
De Ingenieur, 59 (1947), 9-11
8. Gladwell, G.M.L.: Some mixed boundary value problems in isotropic thin plate theory.
Quart. Journ. Mech. and Applied Math., 11 (1958), 159-171
9. Dragoni, E., Strozzi, A.: Mechanical analysis of a thin solid circular plate deflected by
transverse periphery forces and by a central load. Proc. Instn Mech. Engrs, 209 (1995), 77-86
10. Strozzi, A., Dragoni, E., Ciavatti, V.: Analisi flessionale di piastre circolari semplicemente
appoggiate lungo tratti del contorno. Congresso in onore del prof. E. Funaioli, Bologna,
1996
R.M.D. Mesquita
Instituto Nacional de Engenharia e Tecnologia Industrial, Lisboa, Portugal
P.A.S. Lourenço
KEY WORDS: Machinability, Copper Alloys, Unleaded Brass, Chip Forms, Cutting
Forces
ABSTRACT: Trends in materials development reflect the need for new specifications of
material composition considering the environmentally related behaviour of both the
manufacturing processes and the material/product performance, disposal and recycling during the life
cycle of the product. In particular, cast copper-zinc-lead alloys, used in components for potable
water systems and containing a significant level of lead, are natural candidates for a reengineering
process. The addition of lead is considered to be a key factor in improving the machinability
of the alloys used in components machined in large batches. It has been shown that contamination of
potable water with lead occurs in these systems. The contamination of potable water with lead has
a deleterious effect on the nervous system, mainly during the early stages of human development.
Consequently, new and more stringent standards, governmental and community directives (EEC
directive 80/778), together with the pressure of consumers, are pressing the development of alloys
with a reduced lead content or even lead-free copper alloys. Reducing the content of lead
decreases the machinability of brasses. This paper presents the results of a research project aiming
at the establishment of the minimum lead content of a copper alloy that can be used for watermeter
body manufacturing, together with the machinability reference data for ecological copper alloy
development.
1. INTRODUCTION
Copper alloys, and in particular brasses (high zinc copper alloys), are selected for the
manufacture of water valves and fittings due to the combination of mechanical strength
and corrosion resistance. Their excellent castability and low cost make them the dominant
materials for watermeter manufacturing. Free-cutting copper alloys are known to present an
excellent machinability, mainly due to their lead content (up to 3%). The lead is almost
completely insoluble in the solidified alloys. It appears either as interdendritic islands or as
discrete globules of pure lead surrounded by the corrosion-resistant lead-free copper alloy
matrix [1]. It is generally recognised that there are three mechanisms by which lead
addition can promote the free-cutting properties: brass embrittlement, lubrication at the
tool/chip interface and cutting edge temperature reduction [2]. The interdendritic islands or
the discrete globules of pure lead account for the introduction of heterogeneities in a
ductile matrix, providing conditions for shear instability and shear rupture (ductile fracture)
as a result of the nucleation of voids at second phase particles within the shear band
instability [3]. The same author (P. K. Wright) shows that the lead globules are deposited
on the rake face of the tool. The main function of the lead in the secondary shear zone is
to provide regions of low strength, giving internal lubrication at the tool/chip interface
(reduced forces). Consequently a reduction in the cutting edge temperature is expected.
Together with the improvement of machinability, lead additions improve the castability of
brasses, reducing the porosity levels. However, they can increase the incidence of short
running [4].
Subsurface lead is protected from corrosion, and it is only the lead on the casting surfaces
that can contaminate the water supply. Surface lead is found mainly on machined surfaces
as a smeared layer, but also on the internal surfaces of hollow castings (valve and
watermeter bodies), where metal in contact with hot cores solidifies last. Surface lead
is removed by a leaching process, and it is this leaching that causes potable water contamination.
The U.S. Environmental Protection Agency (EPA), together with state regulatory bodies, and
European Union Directives (80/778) are setting limits on the amount of lead any plumbing
product may contribute to the water. Some regulations place a maximum allowable lead
content of 15 parts per billion (ppb) on US potable waters. Dresher et al. [5] showed that
the leachate lead content produced after a 14-day exposure of a copper-base alloy with
a 1 to 1.5% lead content in water (pH 8) can be as high as 100 ppb. When the lead content
is reduced to 0.5%, the leachate lead content decreases to a 25 ppb level.
Pressures to eliminate lead-containing materials from potable water systems will rise, and
eventually only lead-free or nearly lead-free materials will be allowed. The
development and economic use of these new alloys (ecobrass) can be achieved through the
reengineering of the conventional copper-zinc alloys together with the reengineering of the
part manufacturing process. In this context, an industrial research project is under
development at INETI - Instituto Nacional de Engenharia e Tecnologia Industrial and IST -
Instituto Superior Técnico, on behalf of B. Janz (a watermeter manufacturer). The final scope
of the project is the development of a die casting 60/40 unleaded brass with improved
machinability and reduced dezincification. Preliminary work tried to establish the
minimum lead content of the alloy that could be used for watermeter body manufacturing,
together with the machinability reference data for further alloy composition refining. The
second phase of the research project, as far as machining behaviour is concerned, will
establish the effect of bismuth on the machinability and is targeted towards further
reductions of the lead content.
2. EXPERIMENTAL WORK
Three die cast alloys were produced with 1.997%, 0.877% and residual (0.059%) lead
contents, in the form of cylindrical rods. A wrought 60/40 brass bar with 2.668% lead
content was purchased and used for comparison purposes. All four materials were
characterised by chemical analysis, physical and mechanical testing. The results of this
characterisation is presented in table 1 for all the materials used in the machinability
evaluation process.
Table I - Chemical, physical and mechanical properties of leaded and unleaded copper alloys
For every material, 5 specimens were tested. Specimens of the cast materials were removed
from the outer diameter of the billets, in order to avoid the defects that usually arise in the
central sections of the cast bars. The mechanical properties obtained are within the values
that one can find in the literature, both for the wrought material C35600 (ASTM) - material
M1 - and for the cast material C85800 (ASTM), which is comparable to our M2 material. As
expected, increasing the copper content increases the volume of the α phase, decreasing
the strength of the alloy. However, ductility increases with the copper content together with
the reduction of the lead content. Consequently, from these mechanical properties one can
expect significant differences in the machining behaviour of the copper alloys.
The machinability analysis consisted primarily in the evaluation of the effect of lead
content and, secondarily, of the machining parameters, on the machinability ratings,
considering three machinability criteria - cutting forces, chip forms and surface roughness.
Machining tests were carried out according to the procedures established by ISO 3685
(1993), when applicable, on a Gildemeister CTX 400 CNC turning centre, with 22 kW
power and 5000 rpm maximum spindle speed. Cutting forces were measured with a
Kistler 9121 piezoelectric dynamometer. The measuring chain also included charge
amplifiers (Kistler 5011) and a Metrabyte DAS1601 data acquisition board. Data acquisition
and processing were carried out using the Keithley EASYEST LX software. Chip forms and
chip classification were evaluated using the method proposed by Zhang [6]. The level of
chip breaking is assessed by an index - CPDI (chip packing density index) - which is defined
as the ratio between the chip packing density and the chip material density. Surface roughness was
measured with a Perthometer S4P, a PGK drive and an RFHTB-250 pickup. Table 2
presents the ranges of cutting conditions used during the machining experiments together
with the characterisation of the geometry of the uncoated carbide cutting tool.
Table 2 - Cutting conditions and tool geometry

Cutting Parameters      Material M1          Material M2          Material M3     Material M4
Bar Diameter [mm]       100                  48                   62              62
Cutting Speed [m/min]   150, 300, 400,       150, 300, 400,       150, 300, 600   150, 300, 600
                        500, 600             500, 600
Feed [mm/rev]           0.05, 0.1, 0.15, 0.20, 0.25, 0.30
Depth of Cut [mm]       2.5

Cutting Tool (toolholder and insert, ISO code), angles [°]: 0 / 0 ; 20 (19.68) / 25 (19.65) ; -3 / -3 ; 7 / 7
Fig. 1 - Effect of feed and workpiece material on Chip Packing Density Index (cutting speed 150
m/min)
From the figure, it is clear that for every tested material, increasing the feedrate increases
the chip packing index, as expected, despite the increasing shear angle, as determined
from chip thickness measurements. Lead content also has a strong influence on chip form
and dimension, characteristics that are assessed through the CPDI value. The unleaded
material M4 has a poor machinability rating according to this criterion, being completely
unsuitable at low feeds, as required by the finishing process of the meter bodies.
There is a correlation between the CPDI value of the tested materials and their ductility as
measured through the elongation values presented in table 1. When one considers the chip
form and dimension criteria, together with the empirically established critical value and the
results obtained with material M3 (0.877% Pb), it can be concluded that an additional
reduction in lead content can be considered in the design of the ecological alloy.
Increasing cutting speed decreases the CPDI value. Fig. 2 presents this effect for a constant
feedrate. This behaviour can be explained by the observed increase of shear angle with
cutting speed. According to the theory of metal cutting, when the shear angle increases,
shear strain decreases. Consequently, chip forms can shift from discontinuous and broken
types to continuous and snarled types. However, all the tested materials produced
acceptable chip forms at 0.15 mm/rev feedrate, when cutting speed was increased by a
factor of 4. When we compare the effects of both the feedrate and the cutting speed, we can
conclude that cutting speed is not a significant constraint on the design of the ecological
alloy. The unleaded alloy can withstand further increases of cutting speed giving rise to
acceptable values of the CPDI.
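As a rough illustration of how such an index behaves, a minimal sketch follows; all numbers are hypothetical, and the orientation of the ratio is chosen here so that loosely packing, snarled chips score high, matching the magnitudes and the critical value quoted in Figs. 1 and 2:

```python
# Minimal sketch of a chip packing density index (CPDI) computation.
# All numbers are illustrative assumptions, not measured values from the paper.

CRITICAL_CPDI = 80.0          # empirical acceptance threshold quoted in Fig. 2

def cpdi(chip_mass_g, packed_volume_cm3, material_density_g_cm3):
    """Compare the solid material density against how loosely the chips pack."""
    packing_density = chip_mass_g / packed_volume_cm3
    return material_density_g_cm3 / packing_density

# Hypothetical measurement: 50 g of brass chips (density ~8.4 g/cm3)
# filling a 500 cm3 container after packing.
index = cpdi(50.0, 500.0, 8.4)
print(round(index, 1), index <= CRITICAL_CPDI)
```

A tightly packing, well-broken chip gives a low index; a snarled chip fills the container with mostly air and drives the index far above the critical value.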
Fig. 2 - Effect of cutting speed and workpiece material on Chip Packing Density Index (feed:
0.15 mm/rev; the critical value CPDI = 80 is marked)
Fig. 3 presents the results obtained for the tangential cutting force. It shows the effect of
lead on machinability, as assessed through the cutting force criterion: for every feedrate,
decreasing the lead content increases the cutting forces.
Fig. 3 - Effect of alloy composition and feedrate on tangential cutting force (cutting speed: 150
m/min)
Mechanical and physical properties of the copper alloys presented in table 1 clearly show
that the tensile and yield strength, together with Brinell hardness, decreased with the lead
content. The percentage of the harder beta phase also decreased. However, cutting forces
increased. This fact can be explained by the analysis of fig. 4. The figure presents the
materials' flow stress, as determined from the results of the compression test (refer also to
data in table 1). The flow curve was expressed by the simple power curve relation and the
flow stress was calculated considering the strain determined from chip thickness
measurements at the particular cutting conditions. Plane strain conditions and the von
Mises yield criterion were assumed.
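The chain from chip thickness to flow stress described above (shear angle from the chip-thickness ratio, shear strain on the shear plane, von Mises effective strain, power-law flow curve) can be written out as follows; the constants K and n and the thickness values are illustrative assumptions, not the paper's measured data:

```python
import math

# Sketch: flow stress at the cutting conditions from chip-thickness measurements,
# using a power-law flow curve (sigma = K * eps^n) and the von Mises criterion.

def shear_angle(r, rake_rad):
    """Shear plane angle from the chip thickness ratio r = t_uncut / t_chip."""
    return math.atan(r * math.cos(rake_rad) / (1.0 - r * math.sin(rake_rad)))

def shear_strain(phi, rake_rad):
    """Shear strain on the single shear plane."""
    return math.cos(rake_rad) / (math.sin(phi) * math.cos(phi - rake_rad))

def flow_stress(K, n, gamma_s):
    """Von Mises effective strain from the shear strain, then power-law stress."""
    eps = gamma_s / math.sqrt(3.0)
    return K * eps ** n

rake = math.radians(0.0)        # the rake angle of the tool in table 2 is 0 deg
r = 0.15 / 0.40                 # assumed uncut/chip thickness, feed 0.15 mm/rev
phi = shear_angle(r, rake)
gs = shear_strain(phi, rake)
sigma = flow_stress(560.0, 0.15, gs)   # hypothetical K [N/mm2] and n for brass
print(round(math.degrees(phi), 1), round(gs, 2), round(sigma, 1))
```

A thicker chip (smaller r) lowers the shear angle and raises the shear strain, which is exactly the trend the authors report for the low-lead alloys.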
Fig. 4 - Effect of lead content on flow stress, shear force and shear plane area (feed: 0.15 mm/rev,
cutting speed: 150 m/min)
The experimental results show that material flow stress decreases with the lead content
(from material M1 to M4). However, strain hardening increases (in particular for materials
M3 and M4), inducing a departure from the single shear plane model towards a wider
shear plane area. Chip thickness measurements showed that the average shear plane angle
decreased and the shear plane area increased. Consequently, although the shear stress is
reduced together with the lead content, the required shear forces on the shear plane increased.
Together with the effect of strain hardening on the orientation of the shear plane, the
reduction of lead content contributes to the increase of frictional forces on the rake face
and, according to the Ernst & Merchant theory, to the decrease of the shear angle and the
increase of the cutting forces.
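The Ernst & Merchant argument invoked here can be stated compactly; with γ the rake angle, β the friction angle on the rake face, τ_s the shear stress and A₀ the undeformed chip cross-section, the shear angle minimising the cutting work and the resulting cutting force are, in standard notation:

```latex
\phi=\frac{\pi}{4}-\frac{\beta-\gamma}{2},
\qquad
F_c=\frac{\tau_s\,A_0\,\cos(\beta-\gamma)}{\sin\phi\,\cos(\phi+\beta-\gamma)},
```

so a larger friction angle (lead removed, higher rake-face friction) lowers φ, enlarges the shear plane area A₀/sin φ, and raises the cutting force F_c.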
No significant effect of lead content on the surface roughness of machined parts was found.
Roughness is determined by the feedrate, tool nose radius and the dynamic behaviour of
the system. Consequently, the surface roughness criterion was not used to qualify the
machinability of the alloys.
The steep increase of cutting forces for the M4 alloy, together with the values of the CPDI,
particularly at low feeds, does not allow its use for the manufacture of watermeter bodies.
Alloy M3, although with a lower machinability, can be used as an alternative to the higher
lead content alloys (M1 and M2), provided that a 20% increase in cutting forces and
power, at high feeds, is allowable. However, considering the constraints of the machining
operations required for finishing the watermeter bodies, with the maximum depth of cut
determined by the casting process and the feedrate limited by the forces allowable on the
thin cast walls, the increase of machining power can be even smaller. The analysis of figures
1 and 3 indicates that the lead content can be further reduced.
4. CONCLUSIONS
Cutting forces and chip form related criteria can be used to qualify the machinability of
copper-based alloys. Feed, lead content and cutting speed, in this order, were identified as
the major parameters controlling the types of chips, as measured with the chip packing
density index. Cutting forces increase together with the feed, but it was found that lead
content has a significant effect on cutting forces. They can increase by up to 82% when the
lead content is reduced from 2.668% to a residual value. Cutting speed does not significantly
influence either the cutting forces or the roughness, the latter being almost totally
determined by the selected feed.
It is concluded that cast Cu/Zn alloys with a lead content of 0.877%, although with a
lower machinability, can be used as an alternative to the alloy currently in use (1.997%
lead content), in order to reach a more selective and wider market. The development of
Cu-Zn alloys having an adequate machinability and a lead content of less than 0.877% is
considered possible, if state-of-the-art tooling and suitable control of cutting
conditions are used.
REFERENCES
1. Dresher, W.H., Peters, D.T.: Lead-free and reduced-lead copper-base cast plumbing
alloys - Part 1. Metall, 46 (1992) 11, 1142-1146
2. Stoddart, C., Lea, C., Derch, W., Green, P., Pettit, H.: Relationship between lead content
of Cu-40Zn, machinability and swarf composition determined by Auger electron
spectroscopy. Metals Technology, 6 (1979) 5, 176-184
3. Wolfenden, A., Wright, P.K.: On the role of lead in free-machining brass. In: The
Machinability of Engineering Materials, ASM, 1983
4. Staley, M., Davies, D.: The machinability and structure of standard and low-lead
copper-base alloy plumbing fittings. Proceedings Copper 90, Refining, Fabrication, Markets,
The Institute of Metals, Sweden, 1990, 441-447
5. Dresher, W.H., Peters, D.T.: Lead-free and reduced-lead copper-base cast plumbing
alloys - Part 2. Metall, 47 (1993) 1, 26-33
6. Zhang, X.D., Lee, L.C., Seah, K.: Knowledge base for chip management system. J.
Materials Processing Techn., 48 (1995) 1, 215-221
L. Filice
University of Calabria, Cosenza, Italy
L. Fratini, F. Micari and V.F. Ruisi
1. INTRODUCTION
Cold forming processes are increasingly used in the mechanical industries since they
permit obtaining near-net shape forged parts characterised by shape and dimensions very
close to the final desired ones, thus requiring little or no subsequent machining operations.
The major problem to be faced by cold forming process designers is represented
by the choice of the most suitable operations sequence and, for each operation, by the
selection of the operating parameters. Generally a cold forming sequence comprises
upsetting, forward and backward extrusion processes; moreover, nowadays the geometry of
the forged parts is more and more complicated, since not only axisymmetrical components
are forged, but also parts characterised by a complex, fully three-dimensional shape.
In the past the choice of the operations sequence and of the operating parameters has been
carried out simply following the skill and the experience of the process designer,
subsequently testing the validity of the selected sequence by means of a time consuming
and expensive set of experimental tests. More recently, the availability of powerful
computer aided process planning techniques, based on expert systems or on neural
networks, as well as the capabilities offered by modern numerical codes able to
analyse the process mechanics in detail, have suggested a different and more suitable
approach to the design of a forming sequence. Such an approach is based on the
preliminary application of a knowledge based expert system, founded on a large number of
simple technological rules and aimed at determining the set of technologically feasible forming
sequences among all the possible ones, and on the subsequent finite element analysis of
this reduced set of feasible sequences. The numerical simulations supply all the
information concerning the required loads and powers, the stresses on the dies, the
distributions of the plastic strains and consequently the final mechanical properties of the
forged part, which represent the elements on which the final choice of the best forming
sequence has to be carried out.
The above considerations show the central role of the knowledge based expert system, the
capability of which to define the set of admissible sequences strongly depends on the
number and on the quality of the technological rules which represent the acquired
knowledge.
In particular, the production of fully three-dimensional forged components frequently
requires an upsetting process on a square hollow part of the component, obtained, in a
previous step of the forming sequence, by means of a backward extrusion operation.
Consequently, a set of rules for the upsetting process of square rings has to be searched for,
since, depending on the geometry of the ring and on the frictional conditions at the
punch-workpiece interface, the process could be carried out successfully or buckling problems
could occur, leading to the practical unacceptability of the forged component.
In the literature only a few studies on this subject can be found: Aku et al. [1] carried out a
series of experiments on the compression of prismatic blocks of plasticine and in particular
on the case of square ring compression, observing different deformation modes depending
on the aspect ratio of the workpiece and on the frictional conditions. Andrews et al. [2]
performed an experimental investigation of the axial collapse modes in the compression of
aluminium cylindrical tubes. Park and Oh [3] tested a three-dimensional commercial finite
element code on the upsetting process of an AA1100 square ring with a ratio of the height to
the wall thickness equal to four. The numerical simulation suggested the onset of a
buckling phenomenon in the ring walls. Finally, Tadano and Ishikawa [4] carried out a
numerical simulation of the upsetting process of thick cylinders.
In this paper a systematic study of the square ring compression process is carried out,
taking into account two different materials, namely the AA6062 aluminium alloy and a
commercially pure copper; a shape coefficient able to describe the geometry of the square
ring is proposed, and a set of numerical simulations has allowed the determination of a "safe" and
an "unsafe" zone in the shape coefficient vs. friction factor plane. Furthermore, the validity
of the numerical simulations has been confirmed by several experimental tests, carried out
with different geometries and two lubricating conditions, which have highlighted a
good agreement between the numerical and the experimental results.
2. THE NUMERICAL SIMULATIONS
The complexity of the analysed problem has required the use of advanced and powerful
numerical techniques: reliable algorithms, able to take into account the material
nonlinearity, as well as variable contact and frictional conditions, are in fact required.
In particular in order to best analyse the proposed forming process two different models
have been taken into account, namely an implicit and an explicit one. Following the
former, a rigid plastic formulation has been employed which takes to a nonlinear system of
equations due to the nonlinear material behaviour during plastic deformation. As a
consequence the solution of such a system requires an iterative procedure in order to reach
the convergence; in this case, the CPU time becomes very large and in particular it
increases at increasing the number of nodes with an exponent between two and three.
On the other hand, in the explicit approach a final set of linear equations is obtained,
starting from the dynamic equilibrium equation of the assembled structure of finite
elements and employing a proper time integration scheme. Moreover, if a lumped mass
matrix is used, independent equations are obtained which can be solved one by one [5][6].
In this way a great CPU time reduction is achieved, since no iterative procedure is employed
at each step; in particular, the computational cost grows linearly with the number
of nodes. However, the integration scheme is conditionally stable and requires a time
increment lower than a threshold which depends on the mesh density (i.e. the dimensions
of the single element) and on the modelled material (i.e. the Young modulus and the
density). As a consequence, a very large number of steps would be necessary to follow the
deformation path, losing the computational advantage gained on the single time increment. The
speed of the simulation is therefore increased, for instance, by artificially increasing the die
velocity [6].
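As a rough illustration of this stability constraint, the following sketch estimates the critical time increment and the step count saved by artificially raising the die velocity; the element size and the copper-like material constants are assumed values for illustration, not data from the paper.

```python
import math

def stable_time_increment(L_min, E, rho):
    """Courant-type limit: dt must stay below L_min / c,
    where c = sqrt(E / rho) is the elastic wave speed."""
    return L_min / math.sqrt(E / rho)

# ~1 mm smallest element, copper-like constants (assumed, for illustration only)
dt = stable_time_increment(L_min=1.0e-3, E=120.0e9, rho=8960.0)

# A 10 mm punch stroke at die velocity v needs stroke / (v * dt) explicit steps,
# so raising the die velocity by a factor of 1000 cuts the step count by 1000 [6].
for v_die in (1.0e-3, 1.0):  # m/s
    steps = 10.0e-3 / (v_die * dt)
    print(f"v = {v_die * 1e3:7.1f} mm/s -> about {steps:.1e} steps")
```

The sub-microsecond increment obtained for a millimetre-sized mesh is what makes the artificial die-speed increase necessary in practice.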
Both models have been employed to simulate the upsetting of square rings
with the aim of investigating the process mechanics; some considerations should,
however, be made regarding the application of the models. First of all, as expected, the explicit
approach has proved faster than the implicit rigid-plastic formulation, with a ratio of 7:1;
this is very important since complex 3-D simulations involving high CPU times have been
developed.
On the other hand, the implicit approach has shown a better agreement between the
numerical results and the experimental verifications, since the metal flow has been followed
much more faithfully. In this way the occurrence of shape defects has been highlighted, as
reported in the subsequent sections.
As far as the process mechanics is concerned, it should be observed that if the friction factor
were equal to zero the ring would deform with each element moving radially outwards at a
rate proportional to its distance from the centre; for low m values an outward flow takes
place at a lower rate, while for m larger than a critical threshold it is energetically
favourable for only part of the ring to flow outwards and for the remainder to flow inwards
742
L. Filice et al.
towards the centre. As a consequence, depending on the frictional conditions, the
contact surface between the workpiece and the punch may have a region in which no
relative motion occurs. For this reason the frictional tangential stresses are calculated as
[7]:

    τ = - (2/π) m k arctan(|v_pw| / a) · (v_pw / |v_pw|)

where |v_pw| is the sliding velocity at the punch-workpiece interface, m is the friction factor,
k is the local shear stress of the material and a is a small positive number.
The flow stress laws of the tested materials were:

    σ = 592 ε^0.161 [N/mm²]   (Aluminium alloy)
    σ = 350 ε^0.1   [N/mm²]   (Copper)
743
A set of numerical simulations, varying the shape coefficient and with four assumed values
of the friction factor, has been carried out. Table 1 reports the investigated geometries; the
meaning of the symbols is evident from the figure next to the table.
The simulations have been carried out up to a total punch stroke equal to 50% of the initial
height for the Copper specimens, while they have been stopped at a stroke equal to 30% of
the initial height for the Aluminum ones; the experimental tests, in fact, have indicated that
after this stroke ductile fractures could occur on the outer lateral surface of the ring,
depending on the frictional conditions at the tool-specimen interface.
Table 1 - Investigated geometries

    b [mm]   B [mm]   H [mm]   A [mm²]   p [mm]     SC
      12       20       25       256      6.73     1.52
      10       20       25       300      6.45     1.86
      12       24       25       432      7.75     2.23
      10       24       25       476      7.51     2.54
      12       28       25       640      8.79     2.91
      10       28       25       684      8.58     3.18
Fig. 1 Deformation modes
Fig. 3- Shape coefficient vs. friction factor planes: a) Aluminium alloy; b) Copper
The obtained results are summarized in figs. 3a and 3b, where three different areas,
corresponding to the previously described deformation modes, can be easily distinguished
in the shape coefficient - friction factor plane. The figures show the combined effect that
the geometry of the specimen and the frictional conditions have on the process mechanics
and consequently on the quality of the forged component, affecting the possibility of flow
defects. For this reason they represent an effective knowledge base to be used in the
planning stage of a forming sequence.
The validity of the numerical simulations has been confirmed by means of a set of
experimental tests, carried out on both Aluminium and Copper specimens for the six
analysed SC values and with two frictional conditions, namely dry contact and a film of
teflon at the punch-workpiece interface. The ring test has suggested values of m equal to
0.4 and 0.1 for these frictional conditions respectively. The comparison between
numerical and experimental results is shown in fig. 4, where the sections obtained by cutting
the rings with one of the vertical symmetry planes are reported. The results refer to
the dry friction condition (m = 0.4), both for the Copper (on the left) and the Aluminium (on
the right) specimens. A good agreement between the numerical predictions and the
experimental verifications can be observed.
ACKNOWLEDGMENTS
This research has been carried out using MURST funds.
REFERENCES
1. Aku, S.Y., Slater, R.A.C., Johnson, W., The Use of Plasticine to Simulate the Dynamic
Compression of Prismatic Blocks of Hot Metal, Int. Jnl. Mech. Sci., 1967, 9, 495-525.
2. Andrews, K.R.F., England, G.L., Ghani, E., Classification of the Axial Collapse of
Cylindrical Tubes under Quasi-Static Loading, Int. Jnl. Mech. Sci., 1983, 25, 687-696.
3. Park, J.J., Oh, S.I., Application of Three-Dimensional Finite Element Analysis to Metal
Forming Processes, Trans. of NAMRI/SME, 1987, 296-303.
4. Tadano S., Ishikawa H., Non- Steady State Deformation of Thick Cylinder during Upset
Forging, Advanced Technology of Plasticity, 1990, 1, 149-154.
5. Rebelo N., Nagtegaal J.C., Taylor L.M., Passmann R., Comparison of Implicit and
Explicit Finite Element Methods in the Simulation of Metal Forming Processes, Proc. of
Numiform, 1992, 123-132.
6. Alberti N., Cannizzaro L., Fratini L., Micari F., An explicit model for the analysis of
bulk metal forming processes, Trans. of NAMRI/SME, 1994, 11-16.
7. Chen C. C., Kobayashi S., Rigid Plastic Finite Element Analysis of Ring Compression,
Applications of Numerical Methods to Forming Processes, ASME, 1978, 28, 163-174.
8. Barcellona A., Filice L., Micari F., Riccobono R., Neural Network Techniques for
Defects Prediction in the Ring Upsetting with Different Geometries, accepted for
publication in the proceedings of the 12th International Conference on CAD/CAM and
Factories of the Future.
In this work, monolithic layered ceramics formed by two external layers of pure alumina
alternating with an inner layer of alumina containing Ce-PSZ have been produced. The increase of
the mechanical properties due to the residual stresses, as a function of the inner layer
composition, has been evaluated and discussed.
EXPERIMENTAL PROCEDURE
Ceria partially stabilized zirconia (12Ce-PSZ, Tosoh) and α-Al2O3 (AKP-15, Sumitomo)
were used as starting powders. The pure powders or their blends were first deflocculated
and then milled as reported elsewhere (1,2). In this work a high quantity of binder (4 wt%)
and a longer milling time (2 hrs) were used. After milling, the powders were dried and sieved.
The high amount of binder is necessary to ensure a good plasticity of the green samples and
to build up a sufficiently strong interface between the layers during pressing.
In this step of the production, the amount of alumina required to form the first
layer was introduced in a WC mould whose surfaces were polished down to 6 μm diamond
paste; at this point a very soft load was applied, in order to form the first interface but not
to press the powders; then the second layer of zirconia-added alumina and the third of pure
alumina were formed repeating the same procedure described above; finally the layered
composite was uniaxially pressed at 120 MPa. Particular care must be taken in the
extraction of the sample, which was then isostatically pressed at 200 MPa.
The thickness of the layers was kept constant, while the composition of the
inner layer was changed by adding different amounts of zirconia to alumina. In this communication the
thickness of the inner layer was 1.5 mm while that of the outer ones was 2.7 mm.
All the samples were fired in air for 1 hr at 1550 °C with a heating rate of 10 °C/min and
cooled in the muffle.
The elastic modulus was measured by the resonance method with self-made equipment.
Flexural strength was determined by the four-point procedure, with a crosshead speed of
0.2 mm/min, as the average of 5 determinations. Toughness was evaluated by the ISB
technique and hardness by a Vickers indenter applying a load of 200 N.
The thermal expansion coefficient was measured with an alumina dilatometer up to a
temperature of 1400 °C with a heating rate of 10 °C/min.
RESULTS AND DISCUSSION
Table I reports the flexural strength, toughness, hardness, elastic modulus and
thermal expansion coefficient values measured on the monolithic materials, which are important
to evaluate and compare the mechanical properties of the layered composites.
Table I. Mechanical properties and thermal expansion coefficients of the monolithic materials

    ZrO2 (%vol)   σ (MPa)   KIc (MPa·m^1/2)   Hv (GPa)   E (GPa)   α(20-1400) (10^-6 °C^-1)
         0          238          3.31           16.5       326          6.1353
        20          240          5.01           12.6       273          6.6505
        40          380          6.11           11.8       252          7.3228
        60          500          7.88           11.6       230          8.0510
        80          500         10.24           11.0       220          9.1031
       100          509          9.85           10.5       186         10.2890
It is worth pointing out that strength and toughness increase with the amount of zirconia, but
hardness is at the same time reduced, because the hardness of zirconia is lower than that of
alumina.
Fig. 1. Schematic representation of the samples (Al2O3 / Al2O3+ZrO2 / Al2O3)

A schematic representation of the samples is reported in fig. 1. Recently, for a similar
configuration, Virkar et al.(3) proposed the following equations to determine the residual
stresses in the three layers:
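The equations themselves were lost in reproduction; the following is a reconstructed sketch of the usual form of these relations, consistent with the symbol definitions given just below (an equal biaxial stress state in each layer and overall force balance are assumed):

```latex
\sigma_2 \;=\; \frac{E_2'\,\Delta\varepsilon_0}{1 + \dfrac{d_2\,E_2'}{2\,d_1\,E_1'}}\,,
\qquad
\sigma_1 \;=\; -\,\frac{d_2}{2\,d_1}\,\sigma_2\,,
\qquad
E_i' \;=\; \frac{E_i}{1-\nu_i}
```

With the inner layer expanding more on heating (higher zirconia content), these expressions leave the inner layer in tension and the outer layers in compression on cooling.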
where the symbol 1 refers to the outer and 2 to the inner layer, E is the elastic modulus, d
is the thickness, ν is the Poisson ratio and Δε0 must be calculated using the following
equation:
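This defining relation also did not survive reproduction; in terms of the average expansion coefficients defined by eq. (4) it takes the form (a reconstruction, with T' the stress-free joining temperature and T0 room temperature):

```latex
\Delta\varepsilon_0 \;=\; \left(\bar{\alpha}_1-\bar{\alpha}_2\right)\left(T'-T_0\right)
```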
ᾱ1 and ᾱ2 are the average thermal expansion coefficients of the layers, which can be
expressed as:
    ᾱ = 1/(T' - T0) · ∫[T0→T'] α(T) dT    (4)
Fig. 2 reports the trend of Δε0 as a function of the inner layer composition, while fig.
3 shows the corresponding residual stresses in the outer pure alumina and inner alumina-zirconia layers.
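To make the orders of magnitude concrete, the sketch below evaluates the force-balance relations for the residual stresses using the Table I data for pure alumina (outer layers) and the 40 vol% ZrO2 blend (inner layer); the Poisson ratio and the stress-free temperature interval are assumed values, not taken from the paper.

```python
def biaxial_modulus(E, nu):
    """Biaxial elastic modulus E' = E / (1 - nu)."""
    return E / (1.0 - nu)

def residual_stresses(E1, E2, nu, d1, d2, alpha1, alpha2, dT):
    """Return (sigma_outer, sigma_inner) in Pa for a symmetric three-layer plate:
    two outer layers 1 of thickness d1 each, inner layer 2 of thickness d2."""
    E1p, E2p = biaxial_modulus(E1, nu), biaxial_modulus(E2, nu)
    d_eps0 = (alpha2 - alpha1) * dT                               # strain mismatch
    sigma2 = E2p * d_eps0 / (1.0 + d2 * E2p / (2.0 * d1 * E1p))   # inner: tension
    sigma1 = -d2 * sigma2 / (2.0 * d1)                            # outer: compression
    return sigma1, sigma2

# Table I values; nu = 0.25 and dT = 1525 K are assumptions for illustration
s1, s2 = residual_stresses(E1=326e9, E2=252e9, nu=0.25,
                           d1=2.7e-3, d2=1.5e-3,
                           alpha1=6.1353e-6, alpha2=7.3228e-6, dT=1525.0)
print(f"inner: {s2 / 1e6:.0f} MPa tension, outer: {s1 / 1e6:.0f} MPa compression")
```

The resulting few hundred MPa of tension in the inner layer, balanced by compression in the outer layers, is consistent with the range plotted in fig. 3.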
Fig. 2. Δε0 as a function of the inner layer composition, the thickness of the three layers
being constant.
Fig. 3. Tensile residual stresses in the inner (1) and compressive stresses in the outer (2) layers as
a function of the inner layer composition, their thickness being constant.
Although these values can be considered known, it is nevertheless not possible to derive
from them the mechanical properties of the whole composite, because some surface
and edge phenomena, not yet well understood, compromise the mathematical model that
has been used to develop such equations. Therefore, at present, it is not possible to
predict the strength, toughness and hardness of the layered materials, and it is necessary to
verify their values by the traditional methods.
In the literature, some works report the strength and toughness of layered ceramics having
similar geometry, and increases of about 300% or more with respect to the monolithic
materials have sometimes been measured (4,5).
In our case we have not been able to reach such high values, but significant enhancements
have been noted, as can be seen in fig. 4, where the strength as a function of the inner layer
composition is reported (the strength of pure alumina is used as a comparison value).
Composites containing more than 40 vol% ZrO2 showed diffuse cracks in the inner layer,
because the residual stress exceeds the rupture strength of this material, which breaks on
cooling after sintering; therefore they were not tested in this work.
Fig. 4. Strength (a) and toughness (b) of the composite as a function of the inner layer
composition, the layer thicknesses being constant.
Fig. 4 also shows the toughness as a function of the inner layer composition and, as
above, the value of pure monolithic alumina is used as reference.
It is possible to see that in samples whose inner layer contains 20 vol% ZrO2 the
improvement of the mechanical properties with respect to pure alumina is limited, but it
becomes far from negligible with increasing amounts of zirconia.
Comparing the values reported in figs. 2 and 3 with those reported in fig. 4, it is possible to assume
that large improvements of the mechanical properties in three-layered ceramics can be obtained if
Δε0 is higher than 1.5·10^-3 and σ1 is higher than 400 MPa respectively, because under such
conditions the residual stresses produce a significant improvement of the strength and
toughness of the composite. The nonlinear trend of strength and toughness with the inner
layer composition will be studied further.
As a final remark, it must be noted that this class of composites associates with the better strength
and toughness a high hardness (in our case 17 GPa), which is one of the most important
parameters to be considered when cutting steel.
ACKNOWLEDGEMENTS
The Italian CNR is gratefully acknowledged for the financial support.
REFERENCES
1. M. Burelli, S. Maschio and S. Meriani, submitted to J. Mat. Sci., (1996)
2. M. Burelli, S. Maschio and E. Lucchini, submitted to J. Mat. Sci. Lett., (1996)
3. A. V. Virkar, J. L. Huang and R. A. Cutler, J. Am. Ceram. Soc., 70 (3) 164-70 (1987)
4. D. B. Marshall, J. J. Ratto and F. F. Lange, J. Am. Ceram. Soc., 74 (12) 2979-87 (1991)
5. R. Lakshminarayanan, D. K. Shetty and R. A. Cutler, J. Am. Ceram. Soc., 79 (1) 79-87 (1996)
D. Franchi
T.T. Ferioli & Gianotti, Division Genta-PLATIT, Caselette, Italy
F. Rabezzana
Metec Tecnologie S.n.c., Turin, Italy
H. Curtins
PLATIT AG, Grenchen, Switzerland
R. Menegon
STARK S.p.a., Trivignano Udinese, Italy
ABSTRACT: Today, thin film hard coatings are an indispensable element in the production of
high quality tools, dies and mechanical components for various fields of industry. In the
metalworking industry, they are applied as standard practice to a broad range of cutting tools and
dies. Available hard-coatings options have increased dramatically over the last few years. In the
production of hard coatings for the tool industry, three technologies are primarily used : High
Temperature CVD, PVD and Plasma Enhanced CVD.
In the past decade, PVD (Physical Vapor Deposition) methods have gained great importance,
in particular the most important industrial PVD methods: electron-beam evaporation, magnetron
sputtering and arc evaporation.
Cathodic arc technology has, in the past, suffered from a number of severe problems when
implemented in industrial production environments, despite its indisputable basic physical
advantages for the deposition of functional hard coatings.
Within the PLATIT concept a new type of Arc source has been developed, capable of overcoming
most of these limitations. The PLATIT system has proved to satisfy the requirements of high quality
combined with high productivity for cutting tools as well as the criteria of high density and low
droplet number for mould injection applications.
The aim of the paper is to present some data related to the characterisation of innovative hard PVD
"PLATIT" coatings for cutting tools, punches and dies, and to present the results of machining tests
and forming tests performed with different tools and dies coated with these innovative PVD layers, in
comparison with standard PVD coatings (TiN, TiCN).
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
D. Franchi et al.
I. INTRODUCTION
Today, thin film hard coatings are an indispensable element in the production of high quality
tools, dies and components for various fields of industry. In the metalworking industry, they
are applied as standard practice to a broad range of tools. Available hard-coatings options
have increased dramatically over the last few years.
Moreover, the need for greater productivity and for machining difficult-to-cut workpiece
materials calls for the development of new, higher performance machine tools and cutting
tools capable of longer tool life or more severe cutting conditions, and in recent years considerable
developments have taken place in the area of HSS and WC cutting tools coated with
innovative hard thin films.
In the production of hard thin coatings for the tool industry, three technologies are primarily
used : High Temperature CVD, PVD and Plasma Enhanced CVD.
In the past decade, PVD (Physical Vapor Deposition) methods have gained great
importance, in particular the most important industrial PVD methods: electron-beam
evaporation, magnetron sputtering and arc evaporation.
Cathodic arc technology has, in the past, suffered from a number of severe problems when
implemented in industrial production environments, despite its indisputable basic physical
advantages for the deposition of functional hard coatings.
Within the PLATIT concept a new type of arc source has been developed, capable of
overcoming most of these limitations. The PLATIT system has proved to satisfy the
requirements of high quality combined with high productivity for cutting tools, as well as the
criteria of high density and low droplet number for the more critical applications.
In this paper the performance of different HSS and WC cutting tools, punches and dies
coated with innovative mono- and multi-layer PLATIT cathodic arc PVD films is
presented.
The paper is organised as follows. In section 2 the coating characteristics are described.
The machining test conditions and the performance of standard and innovative coated cutting
tools are discussed in section 3, and finally the performance of coated punching and forming
tools is discussed in section 4.
2. PVD COATINGS CHARACTERISTICS
The innovative coatings used for this study are obtained by the PLATIT cathodic arc PVD
process.
The PLATIT coating system has been designed from the point of view of the user working
under industrial conditions: easy operation, high coating quality combined with high
productivity, complete automatic control and negligible maintenance work and standby
(unloading-loading) times. The door-to-door cycle times are reduced by a fast IR heating
and fast cooling system. The deposition rates have been optimised for different
applications (3-6 microns/h) while retaining a high standard of film quality. The computer-controlled
system permits operation either in manual or in fully automatic mode through a
touch-screen interface.
One of the outstanding characteristics of the PLATIT coating system is the possibility of
mixing a variety of components with different dimensions. In fact, in most coating systems
one is limited in mixing small and large components because of the risk of over-heating and
over-etching. In this system it is possible to coat together tools and parts with
diameters ranging from 10 mm up to 100 mm or higher, if similar coating thicknesses
are allowed. This feature represents a considerable advantage for operating an industrial
coating system economically.
The innovative PLATIT PVD method works with a rectangular large-area source of
dimensions 150 mm x 800 mm, with the goal of obtaining the following advantages and
characteristics [1-2]:
A - Poisoning of targets: this is a problem strongly related to the size of the target, and with
increasing target area it becomes more and more difficult to control, in particular
when reactive gases are used. In the PLATIT method, this problem is solved by
way of a special arc source with a specially developed magnetic-field control system, so
that poisoning cannot get started in the first place; the deposition rate remains constant
within +/- 5% for the duration of a given coating process, and likewise falls
within +/- 5% throughout the service life of the target (200-300 batches).
B - Homogeneous erosion: one of the clear advantages of long rectangular sources is that
a single source is able to deliver a large homogeneous plasma (along one dimension) and
a good thickness uniformity over a large distance (height). An important condition for the
rectangular source is that a continuous arc trace along the surface of the target is provided:
a continuous movement of the arc on the target is a necessary condition, since changes in
direction of the arc spot generally induce a higher emission rate of droplets. It is
important to obtain a horizontal magnetic field distribution of suitable strength for guiding
the arc spot on the target: from the practical point of view, a uniform erosion and good
distribution are highly convenient because they assure the user a long lifetime and high
yield for his target.
C - Magnetic field configuration: the PLATIT method uses a Magnetic Arc Confinement
(MAC) system, whose task is the generation of an adequate magnetic field
configuration which can be varied and adapted to the individual situation (target material,
arc current, process parameters). The desired magnetic field configuration is established
and controlled by a combination of permanent magnets and coils. Coil currents and
monitoring of the source impedance are provided through power supplies and a
microprocessor. An optimal control creates a zone as large as possible on the target
surface with a constant horizontal magnetic field strength.
The MAC control parameters allow, by means of changing the magnetic field strength and
shape, the source impedance to be adjusted, i.e. the voltage for a given condition of arc
current and process parameters. Fig. 1 shows schematically one configuration implemented
in a PLATIT cathode.
The Ti-based hard coatings obtained by this type of cathodic arc source show good
adherence, and their hardness-tenacity behaviour and droplet emission characteristics can
be controlled.
A particularity of the PLATIT process is the low internal stress level of the coatings
produced. This makes it possible to deposit TiN or TiCN PVD films with thicknesses over
the typical 3-4 microns without any risk of them peeling off or cracking. The thickest TiN
coatings deposited so far were 15-20 microns on HSS and WC substrates, and in fact
there is basically no limitation, with the PVD arc technology, to the deposition time,
provided that the target is thick enough.
Fig. 1 Basic layout of the magnetic arc spot guiding system implemented in a PLATIT-type
cathode: (1) permanent magnet, (2) and (3) coils, (5) target, (6) magnetic field lines
generated by (1-3), (7) coil supplies, (8) microprocessor control.
The low internal stress levels of the coatings are achieved mainly by reducing the
substrate-coating interfacial stress through adequate transition layers and by optimising the
deposition parameters and arc source control during deposition. In particular, no inert gases
such as Ar are used: the use of such gases was found to increase the internal stress level
considerably and therefore limit the thickness of the coatings.
Table I shows the characteristics of Ti-based coatings produced with the PLATIT
technology.

Table I. Properties of PLATIT Ti-based coatings

  Coating property                              TiN          TiCN         Ti2N
  Optimal thickness (microns)                   1-20         1-8          1-5
  Hardness (HV 0.01)                            2200-2400    3000-4000    2400-2700
  Critical load on HSS (N)                      60-80        50-70        50-70
  Friction coefficient against 100C6            0.67         0.57         -
  Oxidation resistance (T °C, 1 hour in air)    450-500      450-500      450-500
Cutting conditions:
  aa = 10 mm, ar = 2 mm, vc = 67 m/min, f = 241 mm/min, fz = 0.06 mm/tooth, z = 2

Cutting conditions:
  aa = 12 mm, ar = 3 mm, vc = 55 m/min, f = 419 mm/min, fz = 0.06 mm/tooth, z = 4

Cutting conditions:
  aa = 12 mm, ar = 2 mm, vc = 70 m/min, f = 443 mm/min, fz = 0.05 mm/tooth, z = 4

Fig. 2: Industrial finish and rough milling tests (HSS mill diameter = 10 mm, work material
UNI CK45): comparison between standard TiN and TiCN "market leader" coatings, and TiN
and TiCN PLATIT coatings
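Condition sets like these are tied together by the standard milling kinematics; the short sketch below (textbook relations, not specific to the paper) recovers the table feed of the second condition set from its cutting speed, the 10 mm mill diameter, the feed per tooth and the tooth count.

```python
import math

def spindle_speed(vc, D):
    """n [rev/min] from cutting speed vc [m/min] and mill diameter D [mm]."""
    return 1000.0 * vc / (math.pi * D)

def table_feed(fz, z, n):
    """f [mm/min] from feed per tooth fz [mm], tooth count z and spindle speed n."""
    return fz * z * n

n = spindle_speed(vc=55.0, D=10.0)   # second condition set above
f = table_feed(fz=0.06, z=4, n=n)
print(f"n = {n:.0f} rev/min, f = {f:.0f} mm/min")  # close to the listed 419 mm/min
```

The small differences from the listed feeds come from rounding of the cutting speed and the discrete speed steps of the machine.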
Cutting conditions: saw width = 2.5 mm, feed = 80 mm/min, lubricant: emulsion 5%

Fig. 4 Disk saw laboratory tests (saw material: AISI M2, work material: UNI
39NiCrMo3): comparison between standard surface treatments (black oxide) and TiN PVD
coatings, with TiCN and Ti2N PLATIT coatings
Punching
Fig. 5 APM 23 punching and deep-drawing dies coated with 8 micron Beta PLATIT (TiN)
coatings with different parameters (Beta I and II), compared with standard dies.
For mould injection tools the low roughness and the compactness of the coatings are of key
importance. With the PLATIT system we have adjusted the deposition parameters (low
arc current, high partial pressure and no macro-poisoning) and the coating design so as to
obtain optimum conditions.
The following roughness values before coating, after coating and after weak polishing were
obtained for a 3 micron thick coating on punches for the mould injection of PET bottles:
- roughness of punches before coating: Ra = 0,01-0,02 microns;
- roughness of punches after 3 micron TiN coating: Ra = 0,12-0,18 microns;
- roughness of coated punches after polishing: Ra = 0,02 microns.
It is important to state that for this special type of application of PET mould injection even
very small coating errors have to be avoided, given the fact that the error is magnified
about 50x during the last phase of production of the PET bottles.
REFERENCES
1. H. Curtins, W. Bloesch, "A new industrial approach to cathodic arc coating technology",
Proc. ICMCTF 1995 Conference, San Diego, 1995.
2. D. Franchi, F. Rabezzana, H. Curtins, "New generation PVD PLATIT coatings for cutting
tools, punches and dies", Fourth Euro-Ceramics Conference, Riccione, 1995.
ABSTRACT: The quality of the cutting surface is examined in relation to the percentage of carbon
and the material structure. Examples from the following four groups of materials have been examined:
- carbon steel plates with a low percentage of carbon
- unalloyed carbon steel plates (0,19% C)
- alloyed carbon steel plates (0,48% C)
- nonferrous plates (Al 99,5; EB1-Cu; CuZn 37).
1. INTRODUCTION
Fine blanking of steel plates is a working process that results in a high quality of the
cutting surface. Three steel materials with their corresponding thermal treatments are
examined in this paper. The influence of the material structure before and after the thermal
treatment on the quality of the cutting surface has been examined.
The convenience of treatment by fine blanking is given by the relation

    K = Sg / S

where:
Sg - thickness of the high quality cutting surface, and
S - thickness of the material (Fig. 1)
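The ratio can be evaluated directly; in this sketch the 4 mm sheet thickness is an assumed example value, not one stated in the paper.

```python
def convenience_ratio(sg, s):
    """K = Sg / S: fraction of the sheet thickness showing the high-quality
    (smooth, sheared) cutting surface."""
    return sg / s

# Example: a 4 mm sheet (assumed thickness) with 3.64 mm of smooth surface
# gives K = 0.91, the value reported below for the low-carbon plates.
K = convenience_ratio(sg=3.64, s=4.0)
print(f"K = {K:.2f}")
```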
Fig. 1
Fig. 2
The grain size is 6-8 according to ASTM and the separated carbide is C-2 according to ARMCO. The
figure shows a ferrite structure with a low level of separated pearlite. At the grain
boundaries, separated tertiary cementite can be seen in chain form. Tertiary cementite is
separated by the ageing of the material or by its thermal treatment (annealing).
Tertiary cementite has no influence on the cutting surface quality, because its granules are
very small. That is why thermal treatment is not necessary for this material.
Fig.3
For this particular case, K = 0.91, and the corresponding quality of the cutting surface is
N8. The value of the relation K shows that the quality of the cutting surface depends on
favourable mechanical characteristics as well as on the structure of the material. The presence of
tertiary cementite in carbon steel plates with a low percentage of carbon decreases the
quality of the cutting surface. Therefore, it has to be avoided, especially for more critical
parts.
The second group of the materials are unalloyed carbon steel plates, represented by
R St 42-2 (DIN).
The third group of alloyed carbon steel plates is represented by 50 Mn 7 (DIN).
Their structures contain ferrite and pearlite with laminated cementite (see Fig. 4).
a) R St 42-2
b) 50 Mn 7
Fig.4
A structure like this is not suitable for fine blanking treatment. The cutting edge of the
tool breaks the laminated cementite, which is extremely hard (750 HV). That is why the
cutting surface has high edges (Fig. 5).
b) 50 Mn 7
Fig. 5
Our research shows that for such materials it is necessary to conduct a thermal
treatment of soft annealing for the transformation of laminated cementite into
globular cementite. The soft annealing has been done periodically (Fig. 6).
a) R St 42-2
Fig.6
With this kind of thermal treatment, the steel plates are cooled down very slowly
from a temperature higher than the A1 point (A1 = 723 °C). During this process, cementite
separates into globules on the existing austenite crystals. This process is very fast, so the
appearance of laminated cementite is avoided. The results of this thermal treatment are
given in Fig. 7, where 100% coagulation has been achieved.
b) 50 Mn 7
Fig.7
The cutting surface after the thermal treatment is given in Fig. 8.
a) R St 42-2
b) 50 Mn 7
Fig.8
3. FINE BLANKING OF NONFERROUS PLATES
The fourth group of materials was represented by Al 99,5 (99,5% Al), EB1-Cu
(99,9% Cu) and CuZn 37 (63,4% Cu; 36,4% Zn). These materials are used in
electrical engineering (as bases, contacts, etc.).
The structures of the nonferrous plates have not been examined. Therefore the materials have
been cut by fine blanking only, with the prescribed regime.
The cutting surfaces are given in the photographs (Fig. 9):
a) with conventional cutting, and
b) with fine blanking.
Al99,5
EB1-Cu
CuZn 37
Fig.9
The materials Al 99,5 and EB1-Cu have 100% smooth cutting surfaces, and the
quality of the cutting surfaces is N8. The results show good formability with fine
blanking.
In Fig. 9 a small fracture layer can be noticed on the CuZn 37 specimen. The F-Δl diagram
of this material shows that it has the required plastic characteristics (ek = 28%),
with a higher hardness (141 HV), which significantly influences the quality of the cutting surface.
Besides, the material contains 37% Zn, which is the upper limit for good
formability with fine blanking.
Alloys with a higher percentage of Zn are harder and more brittle, while the material with
50% Zn has an elongation ek = 0. The reason for this phenomenon is the appearance of a new
phase, Cu5Zn8, which is more brittle.
According to the results of the measurements, the relation K = Sg/S = 89%.
4. CONCLUSION
The results of this research show that materials with a higher percentage of carbon
and a laminated cementite structure need a thermal treatment (soft annealing) for the
transformation of laminated cementite into globular cementite. This treatment has given a
cutting surface quality of N8; the relation K for material R St 42-2 is K = 0,98, and for
material 50 Mn 7, K = 1.
Nonferrous materials can be successfully treated with fine blanking as well.
determine the temperature profiles in nodular iron and the depth of the modified layer. The
temperature distributions and temperature gradients are presented for the studied depths. The
mathematically obtained results are critically assessed and compared with the experimental
results. The microstructural changes which can be predicted from the temperature profiles on
heating and cooling are confirmed by microhardness measurements. The newly formed hard,
fine-structured surface significantly increases the corrosion and wear resistance of the material.
1. INTRODUCTION
In heat treatment with a laser beam, rapid heating and cooling rates of the surface layer have to be achieved. These are obtained by rapid heat transfer into the remaining cold part of the material. By a correct choice of power density or energy input it is possible to achieve rapid local heating to the austenitization temperature, or even to the melting temperature of the workpiece surface, which after cooling enables the formation of a modified layer of the desired depth. The technology of laser remelting surface hardening provides the possibility of creating a hardened surface layer of greater depth, which makes the products more suitable for higher loads and raises their wear resistance. The depth of the remelted and hardened zone depends on the parameters of the laser beam, which is defocussed and has, in our case, a Gaussian energy distribution. It also depends on the workpiece travelling speed, and on the material properties defined by heat conductivity, specific density, specific heat, and austenitization or melting temperature. Our aim is to develop a mathematical model for practical purposes, describing the temperature evolution in the material, by which it will be possible to predict the depth and width of the remelted and/or hardened trace. The calculated limiting temperatures, such as the temperatures of melting and austenitization, can be experimentally verified by measuring the size of particular traces, by microhardness measurements and by microstructure analysis.

Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology, CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
2. DETERMINATION OF TEMPERATURE DISTRIBUTION
Equations describing heat transfer in the laser remelting surface hardening process can refer to one, two or three dimensions. Ashby [1] presented a simple equation for the determination of the temperature evolution T(z,t), which describes the thermal conditions in the workpiece material around the laser beam axis. The equation considers the laser beam power P of the Gaussian source, its radius r_b on the workpiece surface, the workpiece or laser beam travelling speed v_b, the absorptivity of the material A, the distance from the workpiece surface z, the time t, the ambient temperature T_0, the thermal diffusivity of the material a, and the thermal conductivity of the material K:
T(z,t) = T_0 + \frac{A P}{2 \pi K v_b \sqrt{t\,(t + t_0)}}\; e^{-\frac{(z + z_0)^2}{4 a t}}    (1)
The variable t_0 represents the time necessary for heat to diffuse over a distance equal to the laser beam radius on the workpiece surface [1]:

t_0 = \frac{r_b^2}{4 a} \quad [\mathrm{s}]    (2)
The variable z_0 measures the distance [1] over which heat can diffuse during the laser beam interaction time t_i = 2 r_b / v_b:

z_0 = \sqrt{\frac{\pi a r_b}{e\, v_b}} \quad [\mathrm{m}]    (3)
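Equations (1)-(3) can be put into a short numerical sketch. The following Python fragment is an illustration only: the beam and material values (power, radius, speed, absorptivity, conductivity, diffusivity) are assumed example numbers, not data taken from the paper.

```python
import math

def surface_temperature(z, t, P, r_b, v_b, A, K, a, T0=293.0):
    """Temperature T(z,t) during heating after the single-source model, Eq. (1).

    P   laser power [W]           r_b  beam radius on the surface [m]
    v_b travelling speed [m/s]    A    absorptivity [-]
    K   thermal conductivity [W/(m K)]  a  thermal diffusivity [m^2/s]
    """
    t0 = r_b ** 2 / (4.0 * a)                              # Eq. (2)
    z0 = math.sqrt(math.pi * a * r_b / (math.e * v_b))     # Eq. (3)
    amp = A * P / (2.0 * math.pi * K * v_b * math.sqrt(t * (t + t0)))
    return T0 + amp * math.exp(-(z + z0) ** 2 / (4.0 * a * t))

# illustrative values: a 1 kW beam, 2 mm radius, 6 mm/s travelling speed
T = surface_temperature(z=0.0, t=0.05, P=1000.0, r_b=2e-3, v_b=6e-3,
                        A=0.6, K=32.0, a=9.56e-6)
```

As expected from the exponential term, the computed temperature rise decays rapidly with the depth z.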
The original model had to be adapted to some extent in order to reach an agreement between the calculated values and the experimental results. We introduced certain physical facts about thermal conduction through the material, reported also by some other authors [2,3,4]. Thus we obtained a relatively simple physico-mathematical model describing the temperature evolution T(z,t) in the material depending on time and position, where we distinguished between the heating cycle and the cooling cycle.
1) The heating cycle conditions in the material can be described by the equation:

T(z,t) = T_0 + \frac{A P}{2 \pi K v_b \sqrt{t\,(t + t_0)}} \left[ e^{-\frac{(z + z_0)^2}{4 a t}} + e^{-\frac{(z - z_0)^2}{4 a t}} \right] \operatorname{erfc}\!\left( \frac{z}{\sqrt{4 a t}} \right)    (4)

for 0 < t < t_i
2) The cooling cycle conditions in the material can be expressed by the equation:

T(z,t) = T_0 + \frac{A P}{2 \pi K v_b \sqrt{t\,(t + t_0)}} \left[ e^{-\frac{(z + z_0)^2}{4 a t}} + e^{-\frac{(z - z_0)^2}{4 a t}} - e^{-\frac{(z - z_0)^2}{4 a (t - t_i)}} \right] \operatorname{erfc}\!\left( \frac{z}{\sqrt{4 a t}} \right)    (5)

for t > t_i
By equations (4) and (5) we can describe, in sufficient detail, the temperature evolution during the heating or cooling cycle of the material surface layer. Once the depths or widths of the characteristic layers have been calculated by means of the limiting temperatures, we can calculate the heating rate and especially the cooling rate in the material according to the equation:

\frac{dT}{dt} = \frac{T_1 - T_0}{t_1 - t_0}    (6)
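The heating and cooling cycles of equations (4)-(6) can be sketched numerically. This is a hedged reconstruction: the erfc argument follows the reconstructed equations above, and all parameter values are illustrative, not the paper's experimental settings.

```python
import math

def temperature(z, t, P, r_b, v_b, A, K, a, T0=293.0):
    """T(z,t) for heating (0 < t < t_i, Eq. 4) and cooling (t > t_i, Eq. 5)."""
    t0 = r_b ** 2 / (4.0 * a)                              # Eq. (2)
    z0 = math.sqrt(math.pi * a * r_b / (math.e * v_b))     # Eq. (3)
    t_i = 2.0 * r_b / v_b                                  # beam interaction time
    amp = A * P / (2.0 * math.pi * K * v_b * math.sqrt(t * (t + t0)))
    g = lambda s, tau: math.exp(-s ** 2 / (4.0 * a * tau))
    core = g(z + z0, t) + g(z - z0, t)
    if t > t_i:                        # cooling: subtract the switched-off source
        core -= g(z - z0, t - t_i)
    return T0 + amp * core * math.erfc(z / math.sqrt(4.0 * a * t))

def mean_rate(z, t1, t2, **p):
    """Average heating/cooling rate between two instants, in the spirit of Eq. (6)."""
    return (temperature(z, t2, **p) - temperature(z, t1, **p)) / (t2 - t1)
```

With the illustrative parameters below (interaction time t_i ≈ 0.67 s), the rate is positive during the beam interaction and negative afterwards, as the model requires.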
3. EXPERIMENTAL RESULTS
3.1. Temperature distribution
The material data, such as the thermal diffusivity a = 9.56·10⁻⁶ m²/s, were taken from the professional literature considering the carbon and silicon content in the iron, and the type of the basic microstructure of the iron [6].
Figure 1.a presents temperature profiles on the surface of nodular iron 400-12 at different workpiece travelling speeds. A decreasing trend of the maximum temperatures can be noted as the workpiece travelling speed increases. If the workpiece travelling speeds increase, i.e. if the interaction times are shorter, the maximum temperatures on the workpiece surface are reached earlier, and after the beam interaction time there follows a phase of rapid cooling of the heated material. Figure 1.b shows the temperature gradients, i.e. the heating and cooling rates, in the given heating conditions. We can see that as the workpiece travelling speeds increase, the maximum surface temperatures achieved are lower, but the temperature gradients on heating and cooling are higher, which is followed by shorter completion times of the martensite transformation.
Fig. 1. a) Temperature profiles on the surface for various workpiece travelling speeds.
b) Surface heating and cooling rates at different workpiece travelling speeds.
c) Maximum temperature drop as a function of depth.
d) Comparison of experimentally measured remelted and hardened zone depths with those calculated with the physico-mathematical model.
Another significant piece of information on the achieved effects of heat treatment is the depth of the remelted and hardened zones. This is defined on the basis of the limiting temperatures on heating and under the assumption that the sufficiently high cooling rates necessary for the formation of a martensite structure are achieved. Figure 1.c shows the maximum temperature drop after the laser beam interaction time in the nodular iron at different workpiece travelling speeds. Knowing the melting and austenitization temperatures of nodular iron, we can successfully predict the depth of the remelted and hardened zone, see Fig. 1.c. Considering the fact that on the basis of the limiting temperatures it is possible to define the depths of particular zones, and that these can be confirmed by microstructure analysis, we can verify the success of the proposed physico-mathematical model for the prediction of remelting surface hardening. From Figure 1.d we can see that there is a good correlation between the calculated depth of the modified layer and the results of the experimental measurements for workpiece travelling speeds vb ≥ 6 mm/s. At smaller workpiece travelling speeds, deviations appear between the calculations according to the physico-mathematical model and the experimental results due to higher heat losses, namely due to stronger radiation of heat into the environment and a stronger influence of the protective gas on workpiece surface cooling.
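The depth prediction from limiting temperatures can be sketched as follows: scan the depth and keep the largest depth whose peak temperature still exceeds the melting or austenitization temperature. The heating model, grid sizes and all numerical values are illustrative assumptions, not the paper's data.

```python
import math

def peak_temperature(z, P, r_b, v_b, A, K, a, T0=293.0, n=400):
    """Maximum of the heating-cycle temperature (Eq. 1) over the interaction time."""
    t0 = r_b ** 2 / (4.0 * a)
    z0 = math.sqrt(math.pi * a * r_b / (math.e * v_b))
    t_i = 2.0 * r_b / v_b
    best = T0
    for k in range(1, n + 1):
        t = t_i * k / n
        T = T0 + A * P / (2.0 * math.pi * K * v_b * math.sqrt(t * (t + t0))) \
               * math.exp(-(z + z0) ** 2 / (4.0 * a * t))
        best = max(best, T)
    return best

def zone_depth(T_limit, dz=10e-6, z_max=2e-3, **p):
    """Largest depth whose peak temperature still exceeds T_limit
    (e.g. the melting or the austenitization temperature)."""
    z, depth = 0.0, 0.0
    while z <= z_max:
        if peak_temperature(z, **p) >= T_limit:
            depth = z
        z += dz
    return depth
```

Because the austenitization temperature is lower than the melting temperature, the predicted hardened-zone depth is always at least as large as the remelted-zone depth, matching the zone ordering described above.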
3.2. Microstructure
Once the optimal heat treatment conditions had been defined, the rest of the laser heat treatment was done so that a 30% overlapping of the remelted zone width was ensured (Fig. 2). The heat treatment was performed on roller specimens with a diameter of 37 mm and a width of 10 mm.
Fig. 2. Overlapped laser traces: melted zone, hardened zone and base material.
From Figure 3 we can see that in the modified trace of the laser machined nodular iron it is possible to distinguish two main zones:
1) The remelted zone, consisting of austenite dendrites, ledeburite, individual coarse martensite needles and undissolved graphite nodules.
2) The hardened zone, in which only solid-state transformations take place. The hardened zone consists of martensite, residual austenite, ferrite and carbon in the form of nodules surrounded by martensitic shells. The pre-conditions for the formation of the martensitic shells are the existence of a ferrite structure around the graphite nodules, a suitably high heating rate beyond the austenitization temperature, and a sufficiently high cooling rate. Since the whole process runs very fast, it is likely that only a smaller part of the austenite structure will become carbon-rich, getting the carbon from the graphite nodules through the diffusion process. Because of the very rapid cooling rates, the carbon-rich austenite shell crystallizes into the martensite structure.
Fig. 3. Microstructure of the laser modified layer: a) modified layer, b) remelted zone, c) hardened zone.
3.3. Microhardness
Microhardness measurements confirming the structural changes in the material have shown
that laser remelting surface hardening is a successful method. The results of the variation of
microhardness in depth for nodular iron 400-12 are presented in Fig. 4. The arrows in the
figure show the visually assessed microstructure transition zones between the remelted zone
and hardened zone and between the hardened zone and the base material. From Fig. 4 we
can see that:
1. The highest hardness (1000 HV100) is achieved in the surface layer to a depth of 100 µm in a single laser trace. It then falls slightly and varies very uniformly with the depth of the modified layer (800 - 900 HV100). At the bottom boundary of the hardened zone, at a depth of around 550 µm, the microhardness falls to a value of around 250 HV100, which represents the hardness of the base material.
2. The microhardness profile is much different in a process with a 30% overlapping of the remelted zone width. On the surface the microhardness is lowered and amounts, in the entire depth of the remelted zone, to about 800 HV100. A further lowering of the microhardness values, down to 550 - 650 HV100, occurs in the hardened zone. The lowering of the microhardness values is effected by microstructure annealing caused by the repeated heating of the already modified trace. Characteristic in this case is a notably continuous transition of the microhardness between the remelted and the hardened zone, as well as between the hardened zone and the matrix.
3. From the data on the microhardness variation it is possible to make a successful assessment of the depth of the remelted and hardened zones when the process is performed with a 30% overlapping of the remelted zone width.
Fig. 4. Results of microhardness measurements in the modified layer for a single trace and for overlapped traces, vb = 12 mm/s.
4. CONCLUSIONS
The mathematical model can be successfully used for the description of the temperature conditions occurring in the material during laser heat treatment. The deviations between the physico-mathematical model and the measured depths of the remelted and hardened zones at lower workpiece travelling speeds can be attributed to increased heat radiation into the environment and to the cooling effect of the protective gas, which is supplied axisymmetrically to the laser beam onto the workpiece surface. At lower workpiece travelling speeds and at a constant flow rate, the protective gas has a greater influence on cooling, i.e. it enables a higher transfer of heat from the workpiece surface into the environment. The experimental results have confirmed that even with a low laser source power it is possible to achieve a sufficient thickness of the modified layer if hardening is done by remelting the surface layer. The increased hardness of the remelted and hardened zones (up to 1000 HV100) largely increases the wear resistance of the surface. On the basis of these results, it can be concluded that laser remelting surface hardening can be regarded as a highly successful method for increasing the hardness and wear resistance of nodular iron.
REFERENCES
1. Ashby M.F., Easterling K.E.: The Transformation Hardening of Steel Surfaces by Laser Beams - I. Hypo-Eutectoid Steels; Acta Metall., Vol. 32, No. 11, 1984, p. 1935-1948
2. Gregson V.G.: Laser Heat Treatment; in: Laser Materials Processing, ed. by Bass M.; Center for Laser Studies, University of Southern California, Los Angeles, California, USA; Materials Processing - Theory and Practices, Vol. 3, Chapter 4, North-Holland Publishing Company, 1983, p. 209-231
3. Breinan E.M., Kear B.H.: Rapid Solidification Laser Processing at High Power Density; in: Laser Materials Processing, ed. by Bass M.; Center for Laser Studies, University of Southern California, Los Angeles, California, USA; Materials Processing - Theory and Practices, Vol. 3, Chapter 5, North-Holland Publishing Company, 1983, p. 236-295
4. Carslaw H.S., Jaeger J.C.: Conduction of Heat in Solids; Second Edition, Clarendon Press, Oxford, 1959
5. Hawkes I.C., Steen W.M., West D.R.F.: Laser Surface Melt Hardening of S.G. Irons; Proceedings of the 1st International Conference on Lasers in Manufacturing, November 1983, Brighton, UK, p. 97-108
6. Smithells C.J. (ed.): Metals Reference Book; 5th Edition, Butterworths, London & Boston, 1976
7. Grum J., Sturm R.: Laser Surface Melt-Hardening of Gray and Nodular Irons; Conference "Laser Material Processing", Opatija, Croatia, 1995, p. 165-172
8. Grum J., Sturm R.: Laser Heat Treatment of Gray and Nodular Irons; Journal of Mechanical Engineering, Ljubljana, Vol. 41, No. 11-12, 1995, p. 371-380
9. Grum J., Sturm R.: Laser Surface Melt-Hardening of Gray and Nodular Iron; Conference MAT-TEC 96, Paris, France, 1996, p. 185-194
10. Grum J., Sturm R.: Mathematical Prediction of Depth of Laser Surface Melt-Hardening of Gray and Nodular Irons; to be published in Applied Surface Science, Proceedings of the E-MRS Spring Meeting, Strasbourg, France, June 1996
P. Monka
TU Kosice with seat in Presov, Presov, Slovakia

KEY WORDS: Cutting Tool, Linear Cutting Edge, Surface Roughness, Geometrical Characteristics

ABSTRACT: The paper contains the results achieved with a cutting tool with a linear cutting edge not parallel with the axis of the workpiece. The results of the measurements show that the investigated cutting tool makes it possible to secure the same values of the surface profile characteristics of bearing steel 14 109.3 according to STN 414109 and of corrosion resisting steel 17 241 according to STN 417241 as a classical cutting tool in finishing, with a significant increase of the feed per revolution. This directly influences the length of the technological operation time, which is shortened several times.
NOTATION:
S    Tool major cutting edge (linear cutting edge not parallel with the axis of the workpiece)
S'   Tool minor cutting edge
Ra   Arithmetical average deviation from a mean line [µm]
λs   Tool major cutting edge inclination [°]
λs'  Tool minor cutting edge inclination [°]
αo   Tool orthogonal clearance of major cutting edge [°]
αo'  Tool orthogonal clearance of minor cutting edge [°]
γo   Tool orthogonal rake of major cutting edge [°]
γo'  Tool orthogonal rake of minor cutting edge [°]
rε   Corner radius [mm]
rn   Rounded major cutting edge radius [mm]
rn'  Rounded minor cutting edge radius [mm]
v    Cutting speed [m·min⁻¹]
f    Feed per revolution [mm]
STN  Slovak Technical Norm
1. INTRODUCTION
Despite the opinion of some experts that manufacturing workpieces by machining has no perspective and that it is necessary to substitute it by chipless methods, this technology is in plenty of cases irreplaceable by other technological methods.
The demanded accuracy and quality of the machined surface cannot be achieved when workpieces are manufactured by mechanical forming or casting, because the skin is adversely affected. That is why machining is inevitable for giving precision to the sizes.
The productivity increase in machining technology during the last years is evident, owing to the automation of production, multitool machining and the intensification of cutting conditions by means of new tool materials. The effectiveness of production will in future be further increased by the development of work organization, with the purpose of lowering energy and material costs and unloading the living environment. [1]
Advancing the machining productivity at a cutting speed fixed from the viewpoint of optimal tool life is possible by using faster feeds. With classical tools this is hindered by limiting conditions given by the required surface roughness.
Fig. 1: The working cutting tool with a linear cutting edge not parallel with the axis of the workpiece.
Designation: A - cutting tip working by the linear cutting edge
B - machined workpiece
The cutting tool with a linear cutting edge not parallel with the axis of the workpiece (Fig. 1) makes it possible to achieve better values of surface roughness at a multiple magnification of the feed. [2]
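The geometric reason can be illustrated with the standard turning-kinematics formula for a round-nose tool, Rz ≈ f²/(8·rε). This formula is not derived in the paper; the feeds and corner radius below are the values that appear later in Table 2 and in the description of the series-produced tip.

```python
def theoretical_rz_um(feed_mm, corner_radius_mm):
    """Ideal peak-to-valley roughness of a round-nose turning tool,
    Rz ~ f^2 / (8 * r_eps); converted from mm to micrometres.
    (Standard turning kinematics, not a formula taken from this paper.)"""
    return feed_mm ** 2 / (8.0 * corner_radius_mm) * 1000.0

# corner radius r_eps = 1,6 mm; the two feeds used for steel 17 241
rz_classic = theoretical_rz_um(0.147, 1.6)   # ~1.7 um
rz_fast    = theoretical_rz_um(1.75, 1.6)    # ~239 um
```

The quadratic dependence on the feed shows why a round-nose tool cannot use the larger feed at all, while a linear cutting edge removes this geometric coupling between feed and roughness.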
The most important problems which have to be solved from the standpoint of machining technology with regard to tool equipment are [1]:
a/ the right selection of the tool types and the number of tools.
The requirements for the tools are much greater in automatized production than in conventional production. The tools, along with the various tool holders, must enable a good position of the cutting edges with regard to the workpiece. Their basic features are [3]:
- it is possible to set them up outside of the working place,
- they are quick-clamping,
- the structural solution warrants their automatic manipulation not only in the technological working place, but in the system of intermediate manipulation, too,
- the tools are usable in various technological working places,
- the tool units are unambiguously identified in the whole manufacturing system.
A tool used in automatized manufacturing systems must be constructed so that the tool unit is statically and dynamically rigid for all combinations of clamping and lengthening parts, and it must guarantee repeated true clamping of the tool unit. [3]
It is typical for the present time that the machining of some kinds of workpieces is realized by means of several tens, even several hundreds, of tools. The number of tools can be lowered by standardization and unification of the working surfaces, which enables better manipulation with the tools and a reduction of the claims on the information system.
b/ automatic following of the cutting tool state during the machining.
Most present machines included in automatized systems are not provided with automatic identification of tool chipping and tool wear. That is why an additional measuring operation is needed, which increases the dependence of the system on the man. Therefore the future trend must be directed to adaptive systems for checking the workpiece sizes during machining and to equipment for the compensation of tool wear.
c/ the optimalization of cutting conditions.
A special problem in machining technology is to specify the technological regimes, which means solving a technical-economical optimizing task. The calculation of the cutting regime includes the determination of the tool angle parameters, cutting speed, feed and depth of cut while keeping the required accuracy, the quality of the machined surface and the reliable operation of the technological system at minimum costs for the machining. The use of the tool with a linear cutting edge makes it possible to increase the machining productivity while keeping a good surface roughness quality.
The linear cutting edge parallel with the workpiece axis is inclined at the tool cutting edge inclination λs, which reduces the vibration evoked at machining with this tool.
2. CHOICE OF PARAMETERS
The tool angle parameters determine the shape of the cutting part of the tool and its position with regard to the workpiece. It is necessary to choose the right geometric parameters, because they strongly affect the productivity and the quality of the machined surface.
The individual parameters must be chosen (in planning the geometry of the tool cutting part) so as to guarantee:
a/ a good strength of the cutting edge - this is mainly important for materials with a lower bending strength, for the machining of high-strength materials and for intermittent cutting. During the experiments with the linear cutting edge this condition was observed in such a way that the cutting edge S' (Fig. 1), which does not meet the strength conditions because of the small cutting angle, does not take part in the machining.
b/ the maximum cutting edge life - the geometric parameters have to be chosen so that the tool life is at its maximum while keeping all the needed characteristics. Therefore in the practical tests the tool angle parameters were chosen according to [4].
c/ at the same time, the minimum expenditure of energy and a suitable ratio of the cutting force components.
d/ the stability of the cutting process - mainly for tools whose rigidity is, because of their construction, deficient in some direction. In the experiments this was achieved
- by the choice of the angle λs according to the results published in [5] (where the vibration is at its minimum),
- by the design of a rigid tool holder,
- by the choice of the more rigid kind of clamping from the two suggested variants.
e/ the accuracy of the workpiece dimensions and the quality of the machined surface. On the basis of the preliminary results published in [2], this kind of tool is very suitable for the achievement of high quality characteristics of machined surfaces.
3. PREPARATION OF EXPERIMENTS
The design of a suitable cutting tip was the first step before the realization of the experiments. The cutting tip for the tool with the linear cutting edge was made from a cutting tip of type SNMN 12 0415 FR.
The tool cutting edge inclination was suggested on the basis of the results published in [5], so as to keep the conditions a/, b/ and d/ of the previous section. The used tool geometry was defined by the following angles: γo = 0°, αo' = 10°.
The cutting tips for the tool with the linear cutting edge were spark-erosion machined and then their skins were ground with a diamond grinding wheel. They were lapped with a diamond lapping compound of grit size M3 and concentration S. After this working, the surface roughness on the tool face and the tool flanks was measured.
The average values of the surface roughness were:
- on the face: Ra = 0,21 µm
- on the major flank: Ra = 0,21 µm
- on the minor flank: Ra = 0,20 µm
The measurements were done
- on the flanks - in the direction perpendicular to the primary cutting motion,
- on the face - in two mutually perpendicular directions: in the direction of the feed and in the direction of the infeed.
The average values of the radiuses on these cutting tips were:
rε = 0,18 µm
rn = 0,21 µm
rn' = 0,08 µm
The design and manufacture of a tool holder which enables good clamping of the cutting tip and rigidity during the machining (according to paragraphs d/ and e/ of the previous section) were the next steps before the realization of the experiments. This tool holder was milled from a tool shank with a section of 25x25 mm. The first suggested variant clamped the cutting tip by means of a taper-head screw, but this fixing did not assure a good rigidity during the machining. Therefore the second suggested variant of the tool holder, with clamping by a two-arm clamp, was used.
The methodology of the experiments was chosen so as to continue the development of the achieved results published in [1].
4. CONDITIONS OF MACHINING AND MEASURING
The chemical composition of the workpiece materials is shown in Table 1. [6, 7]
The practical verification of the relation of Ra to the cutting speed and to the feed per revolution was done under the conditions shown in Table 2.
5. RESULTS OF EXPERIMENTAL VERIFICATION
The following shapes of chips occurred during the machining:
a/ a long helical conical chip when machining with the tool with the linear cutting edge, in the whole extent of the cutting conditions,
b/ a long continuous helical annular chip when machining with the cutting tip WNMG 080416-NG, in the whole extent of the cutting conditions.
The processing of the measured values was done by means of the software STATGRAFIC.
Fig. 2. Graphical relations of the arithmetical average deviation from a mean line Ra to the feed per revolution and to the cutting speed for bearing steel 14 109.3.
Fig. 3. Graphical relations of the arithmetical average deviation from a mean line Ra to the feed per revolution and to the cutting speed for corrosion resisting steel 17 241.
Tab. 1: Chemical composition of the workpiece materials [%]:

Steel      | C           | Mn          | Si          | Cr | Ni
14 109.3   | 0,90 - 1,10 | 0,30 - 0,50 | 0,15 - 0,35 |    |
17 241     | max. 0,12   | max. 2,00   | max. 1,00   |    |

Tab. 2: Conditions of machining and measuring:

Surface roughness measuring equipment: profile meter HOMMEL TESTER T 1000
Depth of cut: 0,5 mm
Cooling: Emulsin H
Cutting material:
- Classical tool: cemented carbide coated with TiN-TiC-TiN, similar to S45 according to STN (similar to material P30 according to ISO or material C6 according to ANSI)
- Tool with linear cutting edge not parallel with the axis of the workpiece: uncoated cemented carbide S30 according to STN (similar to material P30-P50 according to ISO or material C5 according to ANSI)
Constant cutting speed, when the relation of Ra to the feed per revolution was found out:
- steel 14 109.3: v = 28,15 m·min⁻¹; steel 17 241: v = 75 m·min⁻¹
Constant feed per revolution, when the relation of Ra to the cutting speed was found out:
- steel 14 109.3: f = 0,72 mm/rev
- steel 17 241: f = 0,147 mm/rev (classical tool) or f = 1,75 mm/rev (tool with linear cutting edge)

The measurements of the surface roughness Ra on the cutting tips and on the machined surfaces were done by means of the profile meter HOMMEL TESTER T 1000, in the direction parallel with the axis of the workpiece. The total length of the measuring section Lt was 4,8 mm.
The relationship between the experimentally obtained dependences for the tool with the linear cutting edge and for the series-produced cutting tip WNMG 080416-NG (corner radius rε = 1,6 mm) with the coating IC635 follows from the graphical relations of the arithmetical average deviation from a mean line to the feed per revolution or to the cutting speed - Fig. 2 and Fig. 3.
It is evident from these graphical relations that the tool with the linear cutting edge achieves values of the arithmetical average deviation from a mean line Ra several times lower than a tool with a corner radius of 1,6 mm under the same conditions. The tool with the linear cutting edge unites the roughing and finishing cut, which makes it possible to reduce the number of manufacturing operations and the direct manufacturing time.
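The shortening of the operation time follows directly from the main-time formula of longitudinal turning, t = L/(n·f). The pass length and spindle speed below are hypothetical example numbers; only the two feeds are taken from Table 2.

```python
def machining_time_min(length_mm, rpm, feed_mm_per_rev):
    """Main (cutting) time of one longitudinal turning pass: t = L / (n * f)."""
    return length_mm / (rpm * feed_mm_per_rev)

# hypothetical pass: 100 mm length at 600 rev/min, feeds for steel 17 241
t_classic = machining_time_min(100.0, 600.0, 0.147)   # min
t_linear  = machining_time_min(100.0, 600.0, 1.75)    # min
speedup = t_classic / t_linear                        # ~11.9x shorter
```

At a fixed spindle speed the time ratio equals the feed ratio, which is what "shortened several times" in the abstract quantifies.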
REFERENCES
5. Vasilko, K.: Nove geometricke vzt'ahy medzi reznym nastrojom a obrobkom a ich vplyv na mikrogeometriu obrobeneho povrchu. Preprinty vedeckych prac, No. 1/92, SjF TU Kosice, 1992
6. STN 414109
7. STN 417241
1. INTRODUCTION
The importance of realizing global quality management systems in industrial production leads to the demand for a quantifiable control of the quality characteristics in addition to the established methods. Therefore, the requirements for an effective strategy to control the quality in a closed-loop structure are formulated and the special conditions in a common cutting manufacturing system are discussed. The general procedure to realize a model-based quality control is shown for the example of manufacturing shafts. At first, a detailed analysis of the manufacturing process (turning operations) is necessary in order to evaluate the
relevant process errors, aiming at a relationship between process errors and geometrical deviations. In a further step, the geometrical deviations and the investigated process behaviour lead to a process model based on the well-known fundamentals of manufacturing technology. This model connects the geometrical deviations with internal process parameters. A control strategy is developed which influences the geometry of the (subsequently produced) workpieces by extracting internal process parameters out of the measured geometrical deviations of the (already produced) workpieces. Using the estimated model parameters, the actual process inputs which guarantee the claimed tolerances can be determined. Finally, the results of experimental tests are presented to demonstrate the effect of the closed-loop quality control.
2. QUALITY CONTROL STRUCTURE IN CUTTING MANUFACTURING SYSTEMS
Up to now, the most common method to observe a given quality characteristic in cutting manufacturing systems is the SPC method. But in fact, SPC is based on a statistical test related to the describing parameters of the assumed probability distribution, e.g. the expected mean value or the standard deviation, which depend on the investigated quality features. The reason for the deviation of a quality characteristic cannot be determined this way [1]. In order to complete the quality control system, a quantitative conclusion backwards from the measured deviations to the optimal values of the process parameters is necessary. For this, the trend analysis of the SPC results is not sufficient, as the reaction to process disturbances appears too late.
A possible alternative with quite reasonable results in practice is the analysis by a human expert. But the results of this strategy depend on the subjectivity of the expert, leading to a lower efficiency, poor reproducibility and difficulties in documentation.
Further, the main objectives consist of a theoretical design of the system structure for the turning process, using the cutting process as the "plant", the common measuring devices (in separate air-conditioned measuring rooms) as the "measuring unit", and a model-based control strategy (Fig. 1). The inputs of the control strategy are the measurement results and the claimed tolerances. As the output, the controller generates the process inputs optimized in terms of the set tolerances and related to the corresponding NC program. The control structure evaluates the update of the process inputs following the step of the produced workpieces or samples.
Figure 1: Control structure for the model-based quality control. (Block diagram: the claimed tolerances enter the process model; the model-based control and optimization strategy generates the process inputs, i.e. the NC control of the cutting process; the measurement device returns the geometrical deviations from the nominal geometry, from which the estimated process parameters are fed back.)
The control structure meets the two main purposes of a quality control. Due to the process model it is possible to generate starting values aiming at the given tolerances, assuming ideal process conditions. The second purpose is to keep the quality characteristics of a currently produced object (and therefore the corresponding running production process) within the defined tolerances by updating the process inputs, identical to the output of the controller.
The model itself describes the manufacturing process statically, depending on the geometry of the workpiece. The control strategy is based on the estimation of process parameters by solving a nonlinear optimization problem. The dynamical effect of the control results in i) the calculation of the process inputs using the estimated process parameters of the last production step and ii) the application of these corrected process inputs for the next production step. In this sense, the method deals with an adaptive control structure based on the parameter scheduling idea [2]. To integrate the quality control in established manufacturing systems, some additional aspects have to be considered. At first, the method must be independent of the machine used. The deviations of the geometry should be determined by the common measuring equipment, i.e. no additional sensors should be necessary. Finally, the common NC-controlled cutting machines are used. The quality control unit is directly connected to the NC control of the machine without any additional actuators.
3. PROCESS ANALYSIS AND MODELLING OF THE TURNING PROCESS
As an example for the development of a model-based control strategy, the longitudinal outside turning of solid shafts was chosen, controlling dimensional quality characteristics. To construct the process model, the main errors of turning operations and their effect on the process itself were considered (table 1).
Table 1: Process errors of turning operations and their effect on the process:
- control errors
- spindle errors
- elasticity of the machine
- chattering
- eccentricity of clamping
- wear of tool
- temperature
- cutting forces
- tool material
- workpiece material
- cooling
- changing geometry of the cutting edge
Wear of the tool, deviations of workpiece and tool materials, changes of the cutting-tool geometry and different cooling conditions all result in changed cutting forces. Thus, these errors can all be summarized in the flexion deviation. The constant offset deviation covers the errors due to the wear of the cutting edge, path errors of the NC contouring control system and wrong settings of the tool reference points. The next step is the modelling of these deviations with the equations of manufacturing technology.
For modelling the flexion, the passive cutting force F_p is the relevant cutting force; it points nearly in the normal direction of the workpiece surface (fig. 2). To describe the machining operation of turning, the cutting-force law of Kienzle [3] and Meyer [4] is used, with some necessary modifications. An extension of the cutting-force law leads to the following equations:
F_p = (1/(2−x)) · k_a1.1 · R · f^(1−x) · ((a/R)·(2 − a/R))^(2−x)

F_p = k_a1.1 · f^(1−x) · ((1/(2−x)) · R · (sin κ)^(2−x) + …)        (1)
k_a1.1 is the cutting-force coefficient, R the radius of the cutting edge, κ the cutting edge angle, f the feed rate, a the depth of cut and 1−x the logarithmic slope. Experiments showed a good agreement between eq. (1) and measured data.
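For illustration, the basic Kienzle relation underlying eq. (1) — without the nose-radius extension — can be evaluated numerically. This is a minimal sketch only; the coefficient values below are placeholder examples, not values from the paper:

```python
import math

def passive_force(f, a, k_a11=1500.0, x=0.3, kappa=math.radians(93)):
    """Basic Kienzle-type law F_p = k_a11 * a * f**(1-x) * sin(kappa)**(-x)
    (f and a in mm). Placeholder coefficients; the paper's eq. (1)
    additionally accounts for the cutting-edge radius R."""
    return k_a11 * a * f ** (1 - x) * math.sin(kappa) ** (-x)
```

The force grows with both the depth of cut a and the feed f, which is what the controller exploits when trading the two parameters against the allowed passive force.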
Figure 2: Flexion of the workpiece in the clamping.
The flexion can be described by the following differential equation, which depends on the cutting force as well as on the workpiece geometry and the clamping:

d²Δx_flex/dz² = F_p · (z_force − z) / (E · J(z))        (2)
Δx_flex is the flexional deviation of the workpiece axis, z_force the z-coordinate of the effective interaction point of the cutting force, E the elastic modulus and J(z) the geometrical moment of inertia, which depends on the workpiece radius r(z). The analytical solution of eq. (2) is only possible for very simple geometries like shafts with constant diameter. For shafts with cone or spherical contours, eq. (2) is solved numerically. The flexion model was validated in experiments. The modelling of the flexion connects the flexional deviation with the passive cutting force and, furthermore, the passive cutting force with the process parameters depth of cut a and feed f. The offset error is directly compensated by changing the start and end points of the path control. Based on these relationships between errors, deviations, and process parameters, a control strategy will be implemented.
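The numerical solution of eq. (2) can be sketched by simple double integration. The sketch below assumes a one-sided clamping at z = 0, a constant cross-section (constant J) and clamped-end conditions x(0) = x'(0) = 0; all names and values are illustrative:

```python
def flexion_curve(F_p, z_force, E, J, n=2000):
    """Integrate eq. (2), d^2x/dz^2 = F_p*(z_force - z)/(E*J), twice
    (explicit Euler) from the clamping at z=0 to the force application
    point. Returns the deflection at z_force (SI units)."""
    dz = z_force / n
    slope = 0.0   # x'(z), zero at the clamping
    x = 0.0       # x(z), zero at the clamping
    for i in range(n):
        z = i * dz
        curvature = F_p * (z_force - z) / (E * J)
        slope += curvature * dz
        x += slope * dz
    return x
```

For a constant cross-section this reproduces the closed-form cantilever deflection F_p·z_force³/(3·E·J); for cone or spherical contours J(z) would simply vary inside the loop.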
4. DESIGN OF A CONTROL FOR MEASURES AND FORM OF SHAFTS
In this paper, only outside machining operations with a one-sided clamping are considered. The workpiece geometry is defined in sections of basic geometry elements (cylinders, cones and spherical surfaces). Each section can be toleranced differently.
The shafts are toleranced with the geometric characteristics in the different sections of the workpiece, i.e. the size feature "diameter" and the form feature "cylindricity" (in the case of cones and spherical surfaces including the profile form deviation).
In a first step, the workpiece is rough-turned. The control only considers the last finishing cut.
The expected geometry of the workpiece is overlapped by a flexion and an offset deviation (see section 3). The measured radius r_meas at the z-coordinate z_k is then

r_meas(z_k) = r_nom(z_k) + Δx_flex(z_k) + Δx_offset        (3)

r_nom is the nominal value of the radius at the coordinate z_k, Δx_flex the flexion deviation and Δx_offset the offset deviation. Since a passive cutting force always exists, an allowed flexion deviation is introduced relating to the set cylindricity (or profile form deviation). In addition, the allowed flexion deviation is compensated by cutting along the inverse (interpolated) line of the flexion curve (fig. 3). The resulting contour is shown in fig. 3 and is defined in the following as the ideal geometry of the workpiece section.
Figure 3: Example for the compensation of the flexion with an interpolated line and the resulting ideal geometry in a cylindrical workpiece section.
Based on the allowed flexion deviation and depending on the workpiece geometry, the numerical solution of the flexion curve eq. (2) yields section-wise an allowed passive cutting force F_p,all. With eq. (1) the starting values of the process parameters a_0 and f_0 are determined by a Newton gradient method using the secondary condition of equal differential coefficients in the operating point. The cutting-force coefficients are set to their theoretical values.
Using the starting values, the first workpiece is produced. After manufacturing it is measured on a conventional CMM (coordinate measuring machine). The results are the actual radii of the measured circles at several z-coordinates. After the inverse of the flexion line is compensated and the nominal value of the radius is subtracted, a residual deviation Δx_meas from the expected value is left. Regarding the error model, this deviation consists of a flexion and an offset contribution. The residual flexion error Δx_flex itself depends on the passive cutting force F_p, which again depends on the process parameters a and f.
This leads to an optimization problem used to estimate the coefficients of the cutting-force law k_a1.1 and 1−x and the offset error Δx_offset. They are calculated by finding the minimum of the objective function

Q = Σ_{k=1..n} (Δx_meas(z_k) − (Δx_flex(z_k) + Δx_offset))²        (4)

Δx_flex itself depends on the optimization parameters k_a1.1 and 1−x; n is the number of measured circles in the investigated workpiece section. Equation (4) is linear in the parameters Δx_offset and k_a1.1 and nonlinear in 1−x.
For the solution of this nonlinear optimization problem a sequential quadratic programming (SQP) algorithm is used [5]. It is a numerical, superlinearly convergent method, which approximates the objective function locally by a quadratic function [6]. To decrease the computing time the problem is solved in two steps, where the nonlinear problem relies on the direct solution of the linear subproblem. In addition, the objective function (4) is overparameterized; that means k_a1.1 and 1−x cannot be estimated in the same step. Additional measurements of other production steps with changed process parameters a and f are necessary to estimate both cutting-force parameters. For i production steps eq. (4) becomes
Q = Σ_{j=1..i} Σ_{k=1..n} (Δx_meas,j(z_k) − (k_a1.1 · g(z_k, 1−x) + Δx_offset,j))²        (5)

Figure 4: Structure of the quality control for production of shafts with defined measures and form.
g is a nonlinear function depending on the logarithmic slope 1−x, the calculated (allowed) flexion curve, the allowed passive cutting force, and the actual process parameters a_i and f_i in the production step i.
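The two-step estimation (linear in k_a1.1 and the offset for a fixed slope, nonlinear only in 1−x) can be sketched with a simple grid search standing in for the paper's SQP algorithm. The shape function g below is user-supplied and hypothetical, as are all parameter values:

```python
def estimate_parameters(z, dx_meas, g, slope_grid):
    """Two-step estimation sketch: for each candidate slope (the paper's
    1-x), the model dx = k * g(z, slope) + offset is linear in (k, offset)
    and solved in closed form; the best slope minimizes the residual Q."""
    best = None
    for s in slope_grid:
        gz = [g(zk, s) for zk in z]
        n = len(z)
        sg, sd = sum(gz), sum(dx_meas)
        sgg = sum(v * v for v in gz)
        sgd = sum(v * d for v, d in zip(gz, dx_meas))
        det = n * sgg - sg * sg
        k = (n * sgd - sg * sd) / det          # closed-form least squares
        off = (sd - k * sg) / n
        q = sum((d - (k * v + off)) ** 2 for d, v in zip(dx_meas, gz))
        if best is None or q < best[0]:
            best = (q, s, k, off)
    return best  # (Q, slope, k_a11, offset)
```

With synthetic data generated from known parameters the routine recovers them, mirroring how one additional measurement per production step disambiguates k_a1.1 from 1−x.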
With the estimated cutting-force coefficients and the actual process parameters, the difference between the allowed and the actual passive force can be evaluated. Thus, the new process parameters for the next production step can be determined. They are calculated like the starting values using the cutting-force law, but now the estimated force coefficients are inserted. The process parameters are returned to the NC control for the next production step. The estimated offset deviation is compensated directly by changing the start and end points of the cutting path. Figure 4 explains the structure of the quality control.
5. EXPERIMENTAL RESULTS
Fig. 5 shows an exemplary workpiece, a shaft with two different diameters and an intermediate cone section. Figures 6 to 8 demonstrate the effect of the model-based control; they are restricted to the most difficult cone section of the workpiece. The tolerance leads to an allowed flexion deviation of 27.5 μm (allowed profile form deviation, see fig. 5) and to an allowed passive cutting force of 200 N. Inserting the theoretical quantities into the cutting-force law, the starting values of the process parameters are calculated as a = 0.35 mm and f = 0.36 mm (fig. 8).
Figure 5: Workpiece for the subsequent experiments.
Figure 6: Form deviation of the cone section in the first and second production step.
In the first production step, the cone section is overlapped by an extreme offset deviation due to contouring errors of the NC control (fig. 6). In the example, the flexion deviation is too large as well. For the next production step the controller compensates the offset deviation, and the flexion deviation meets the predicted amount (fig. 6) due to changed values of a and f (fig. 7). The remaining deviations from the ideal geometry mainly result from the measuring uncertainty of the CMM (fig. 6). The good control characteristic is expressed by the difference between the calculated and the allowed passive force (fig. 7). Fig. 7 also shows a similar behaviour of the process parameters a and f. In the third production step, all changes are nearly zero: the process parameters tend to stable values.
After the second or third workpiece, the process runs for the next 50-60 workpieces with this set of process parameters until the tool wear affects the production result. The next workpieces should be measured and the results transferred to the controller to readjust the process parameters. In particular, this procedure allows an easy adaptation of SPC methods.
Figure 8: Estimated passive-force deviation and process parameters depth of cut a and forward feed f over the production steps.
6. SUMMARY
The quality control system shown is able to calculate the optimal process parameters in relation to the set quality level within a few production steps, resulting in the predicted ideal geometry of the workpiece. The initial process parameters can be determined from theoretical values to start the process. Process deviations and errors can be compensated during the running manufacturing process. The model for the control of the geometric quality characteristics (measures and form) in the turning process can be extended to control the roughness as well. With this method, rotationally symmetrical workpieces of any regular contour, including inside or outside machining operations, can be treated in the case of a one-sided clamping; the two-sided clamping needs only a few modifications. Basically, the modelling procedure is also applicable to other cutting manufacturing processes. Finally, SPC methods can be integrated. With only a few modifications the advantages of a closed quality control loop can be used without any changes to the manufacturing hardware.
REFERENCES
[1] Rinne H., Mittag H.J.: Statistische Methoden der Qualitätssicherung, Hanser Verlag, München, 1995
[2] Unbehauen H.: Review and Future of Adaptive Control Systems; in: Popovic D. (Ed.): Analysis and Control of Industrial Processes, Vieweg Verlag, Braunschweig, 1991, 3-22
[3] Tönshoff H.-K.: Spanen, Springer Verlag, Berlin, 1995
[4] Meyer K.F.: Der Einfluß der Werkzeuggeometrie und des Werkstoffes auf die Vorschub- und Rückkräfte des Drehens, Industrie-Anzeiger, Essen, 86 (1964), 835-844
[5] Sachs E.W.: Control Applications of Reduced SQP Methods; in: Bulirsch R., Kraft D. (Ed.): Computational Control, Birkhäuser Verlag, Basel, 1994, 89-104
[6] Großmann Ch., Terno J.: Numerik der Optimierung, Teubner Verlag, Stuttgart, 1993
ABSTRACT: In today's manufacturing scenario the trend towards ever tighter tolerances and continuously decreasing defective rates seems to be irreversible. The evaluation of system performance is therefore essential to determine whether the process is able to meet the given tolerances. With the classical approach this evaluation is realised by considering the process in the in-control state and evaluating the capability indices and, consequently, the defective rate. In this paper the evolution of the process is considered in terms of the mean out-of-control time, and the real defective rate is evaluated by proposing a new capability index.
1. INTRODUCTION
The modern quality methodology considers prevention as a fundamental aspect because it greatly reduces quality costs. In manufacturing this means establishing a priori whether the production process is able to manufacture the product, i.e. whether the nonconformity rate is
acceptably low.
In other words two important aspects must be considered:
the process variability;
the product tolerance range.
The output of any process is not deterministic but is affected by various causes of variability. At the same time any product has a nominal dimension and a tolerance range. The C_p index, the ratio between tolerance range and natural variability, expresses the capability of the process to respect the tolerance range. This approach is equivalent to computing the minimum number of nonconformities the process can produce; the index can be calculated only if the process is in an in-control state.
The process variability has two different aspects [1]:
the natural or inherent variability at a specified time, that is the "instantaneous variability";
the variability over time.
In fact any process is dynamic in nature because the parameters of the process tend to change for different reasons such as raw materials, human errors, environmental conditions, etc.
The C_p index takes into account only the first cause of process variability. The aim of this paper is to propose a capability index that takes both causes into account.
When the out-of-control state is reached, a greater rate of nonconformities is produced. In other words, the C_p index is meaningful only in the hypothetical case of an infinite mean time to out-of-control; in the real case, a more complete analysis has to be realised.
In the present paper, starting with the calculation of the real nonconformity rate during a production cycle, a new index C_pd is calculated. This new index is always less than C_p and depends on the mean time to out-of-control (Tm) and on the speed of detecting the out-of-control state.
The results are presented for different values of Tm, of the control chart parameters and of the shift value.
2. THE CAPABILITY INDICES
In the literature it is possible to find various indices to quantify the capability of a process. The most used index is C_p, equal to

C_p = (US − LS) / (6σ)

where σ is the standard deviation of the process. A process with a higher C_p has better characteristics because it produces a lower defective rate. But the knowledge of C_p is not enough to consider the process acceptable: with C_p we have only verified that the instantaneous variability of the process is not too large with respect to the tolerance range. A further analysis must be conducted to verify whether the process mean is different from the product mean and, then, whether it is useful to recenter the process.
This is realised using the C_pk index, defined as

C_pk = min{ (x̄ − LS) / (3σ), (US − x̄) / (3σ) }
where LS and US are the lower and the upper specification values.
The correspondence between C_p values and the defective rate is explained in the following table. The process is retained adequate if C_p > 1.33.
Characteristic | Excellent / Adequate (C_p = C_pk) | Adequate with reserve | Not adequate
Critical       | 0.0063 %                          | C_p ≥ 1.00 (0.27 %)   | C_p < 1.00 (> 0.27 %)
Important      | 0.07 %                            | C_p ≥ 0.94 (0.5 %)    | C_p < 0.94 (> 0.5 %)
Secondary      | 0.27 %                            | C_p ≥ 0.71 (3 %)      | C_p < 0.71 (> 3 %)
Before calculating capability, an analysis of the values obtained from the process must be realised. The various steps are described below.
First of all, the in-control state is verified by using control charts. The first chart verifies that the mean of the process has not changed over time; the second chart verifies that the variability of the process has not changed.
A second analysis concerns the normality of the data, which can be verified, for example, using a normal probability plot.
To calculate the real defective rate it is useful to define the quality cycle, i.e. the time between two successive in-control periods.
When the process goes out of control, it cannot return to an in-control state without intervention.
The control chart methodology consists of sampling from a process over time and charting some process measurement. If the sample measurement is beyond a specified value, the process is supposed to be out of control.
The cycle time is the sum of the following: (a) the time until the assignable cause occurs, (b) the time until the next sample is taken, (c) the time to analyse the sample and chart the result, (d) the time until the chart gives an out-of-control signal and (e) the time to discover the assignable cause and repair the process.
For simplicity, b, c and e are supposed equal to zero.
The readiness to single out the out-of-control state depends on the control chart parameters, which are: the sample size n, the sample period h and the control limit L. In particular, the ARL is defined as the average number of samples needed to point out the out-of-control state for a given shift value and chart configuration. The product of the ARL and the sample period is the ATS (average time to signal), i.e. the mean time to find the out-of-control state.
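For a Shewhart x̄ chart with ±Lσ limits, the ARL follows from the per-sample signal probability. This is the standard textbook formula, not something specific to this paper, and all values are illustrative:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def arl_xbar(shift, n, L=3.0):
    """Average run length of an x-bar chart: 1 / P(point beyond the
    limits) for a mean shift of `shift` sigma and sample size n."""
    p = 1 - phi(L - shift * math.sqrt(n)) + phi(-L - shift * math.sqrt(n))
    return 1.0 / p

def avg_time_to_signal(shift, n, h, L=3.0):
    """Average time to signal: ARL times the sampling period h."""
    return arl_xbar(shift, n, L) * h
```

In control (shift = 0) the classical 3σ chart gives an ARL of about 370 samples; a 1σ shift with n = 4 is detected after about 6.3 samples on average.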
The first step is to calculate the defectives rate during a cycle.
A product is considered defective if the measurement is beyond the tolerance range. During the in-control state the defective rate of a centred process is 2·P(z > 3C_p), where z is the standard normal variable; after the shift it is dominated by P(z > 3C_pk). With the times (b), (c) and (e) set equal to zero, the defective rate during the whole cycle is

d = (2·P(z > 3C_p) + P(z > 3C_pk) · ARL·h/Tm) / (1 + ARL·h/Tm)

This last value depends on: the capability of the process in the in-control state (C_p); the shift value (C_pk); the control chart parameters (h and ARL) and the mean time to out-of-control (Tm).
3.
The total defective rate can be used to propose a new capability index. Starting from the total defective rate the C_pd is defined as

C_pd = invnorm(1 − d/2) / 3

where invnorm is the inverse standard normal distribution function and d is the total defective rate. This index is always less than or equal to C_p. In fact, in the hypothetical case when the mean time to out-of-control tends to infinity, C_pd tends to C_p, and if Tm tends to zero, C_pd tends to C_pk.
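Combining the cycle defective rate with the definition above gives a direct computation. A sketch using the standard library's NormalDist (Python 3.8+), assuming a centred in-control process; all parameter values are illustrative:

```python
from statistics import NormalDist

_N = NormalDist()

def c_pd(cp, cpk, arl, h, tm):
    """Dynamic capability index: total defective rate d over a quality
    cycle (in-control time tm plus mean detection time arl*h), then
    C_pd = invnorm(1 - d/2) / 3."""
    p_in = 2 * (1 - _N.cdf(3 * cp))    # in-control defective rate
    p_out = 1 - _N.cdf(3 * cpk)        # defective rate after the shift
    w = arl * h / tm                   # detection time relative to tm
    d = (p_in + p_out * w) / (1 + w)
    return _N.inv_cdf(1 - d / 2) / 3
```

As Tm grows, C_pd approaches C_p; for short Tm it drops towards C_pk, reproducing the qualitative behaviour of figure 3.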
The values obtained for different combinations of Tm, C_p and Δ/σ are reported in the following figures, from which it can be pointed out, as is obvious, that C_pd increases with C_p (figure 1).
Figure 2, instead, confirms that it is not possible to select the control chart parameters n and h without an a priori knowledge of the Δ/σ values. In fact, the figure shows that the optimal values of n and h are different when Δ/σ changes.
Finally, figure 3 shows that C_pd is an increasing function of Tm: starting from C_pk for Tm equal to zero, it tends to C_p as Tm tends to infinity.
Even if C_pd values can be easily calculated knowing the control chart parameters and the system evolution, an expression able to give a good approximation is proposed. Considering the trend of the curves in the figures, approximating expressions for C_pd were fitted with a statistical software for the various chart configurations (e.g. n = 2 and h = 0.5).
Figures 4-6 report the error curves of the approximation as a function of C_p, Tm and Δ/σ for various combinations of the control chart parameters.
The approximation can be considered good enough because the error is always less than 3%, so the proposed functions can be used to estimate the real capability of a production process as C_p, Tm, Δ/σ and the control chart parameters change.
The approximation does not hold if Tm assumes extremely low or extremely high values.
In the developed example the cost of sampling has been kept constant. The shape of the equations remains valid if this parameter changes; in fact, simulation analysis has shown that the errors are always less than 3%.
Considering the aim of the paper, no further analysis was considered necessary to improve the precision of the interpolation curves.
4. CONCLUSIONS
The modern quality methodology considers prevention as a fundamental aspect. In such a condition it is necessary to evaluate a priori whether a production process is capable, that is, whether the defective rate is low enough. The commonly used indices can only give a partial answer to this question because the real dynamic behaviour of the process is not taken into account.
In this paper a new capability index has been proposed and its values have been investigated for various parameter combinations. The aim is to propose a simple approximation for estimating real capability, supporting safer decisions, rather than a precise mathematical formula.
REFERENCES
1. Montgomery D.C.: Introduction to Statistical Quality Control
2. Lorenzen T.J., Vance L.C.: The Economic Design of Control Charts: A Unified Approach, Technometrics, 1986, Vol. 28, No. 1
ABSTRACT: The cost aspect of heat treatment quality management is presented. Quality management cannot be successful without quality cost management. The activities of the heat treatment quality system are defined as cost centers, and the heat treatment quality costs are classified into categories.
1. INTRODUCTION
Quality management includes all activities of overall management function that determine
the quality policy, objectives and responsibilities, and implement them by means such as
quality planning, quality control, quality assurance, and quality improvement within the
quality system. Total quality management is management approach of an organization
centered on quality, based on the participation of all its members and aiming at long-term
success trough customer satisfaction, and benefits to all members of the organization and
to society [1].
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology, CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.

Quality management cannot be successful without quality cost management. The costs of quality are substantial and a source of significant savings, as most firms spend about 20 percent of annual sales on costs of quality. The 2.5 percent benchmark is the amount quality experts estimate should be spent at the optimal quality level [2].
The first step in any quality improvement program is a realistic cost/benefit analysis in terms of the effort needed or justified to obtain the desired levels or assurance of quality. This analysis can be successfully done on the basis of: the true factory man-hour rate in terms of overhead and salary, quality/quality control activity costs both in man-hours and money, and the quality-related costs of any case study item. The quality cost figures are quite straightforward in respect of prevention costs, appraisal costs, internal failure costs and external failure costs in any cost center [3].
Companies use quality cost variables in different ways in their performance measures of managers. For this purpose managers have to set explicit goals in their performance measures for reducing the number of products rejected for quality reasons and for savings in quality costs.
Figure: Flow chart of the heat treatment quality system (quality policy, design, material selection, process design, acceptance of work, receiving inspection, pre-treatment, heat treatment, post-treatment, inspection, nonconformity disposition, storage, quality records, feedback).

Two types of quality can be distinguished:
-quality of design;
-quality of conformance.
Quality of design is a function of product specifications. Quality of conformance is a measure of how well a product meets its requirements or specifications. If the product meets all of the designed specifications, it is fit for use. Of the two types of quality, quality of conformance should receive the most emphasis.
4. COSTS OF HEAT TREATMENT QUALITY
The costs of heat treatment quality are the costs that exist because poor heat treatment
quality may or does exist. Quality costs are the costs associated with the creation,
identification, repair, and prevention of defects. These costs can be classified into four
categories: prevention costs, appraisal costs, internal failure costs, and external failure
costs. The sum of all these costs has to be minimal.
Prevention costs are incurred to prevent defects in the products or services being
produced. Prevention costs are incurred in order to decrease the number of
nonconforming units. These costs can be recognized in any part of quality system in heat
treatment: quality design, machine design, material selection and process design.
Appraisal costs are incurred to determine whether products are conforming to their
requirements. These costs include acceptance of works, receiving inspection and testing
materials, supervising appraisal activities, process acceptance, supplier verification and field
testing. Process acceptance, very important in heat treatment, involves sampling post
treatment material while in process to see if the process is in control and producing
nondefective material.
Internal failure costs are incurred because nonconforming products are detected before
being shipped to outside parties. These are the failures detected by appraisal activities.
Internal failure costs are scrap, rework, downtime, reinspection and retesting.
External failure costs are incurred after products are delivered to customers. These costs include lost sales, returns, allowances, warranties, repair, and complaint adjustment. External failure costs concern the whole company and are not recorded at heat treatment cost centers.
Nonfinancial quality costs in heat treatment are: percentage of products passing quality
tests first time, outgoing quality level for each product line, percentage of shipments
returned from next production phase because of poor quality, and percentage of shipment
made on the scheduled delivery date or percentage of delay. In some circumstances average delay costs have to be calculated, but in heat treatment this is complicated and some assumptions have to be made. It comprises the loss of use of productive manufacturing resources because a product is taking longer to manufacture than planned, i.e., loss of potential revenue.
The report of quality costs has to be used to examine interdependencies across the four categories of quality costs. On the basis of a quality report one can examine both how investment in heat treatment prevention is associated with reductions in appraisal, internal failure, or external failure costs related to heat treatment, and how increased expenditure on product design, one of the prevention costs, is associated with decreased expenditure on customer service warranty costs, an external failure cost category.
5. CONCLUSION
Quality management cannot be successful without quality cost management. The quality
cost information is quite straightforward in respect of: prevention costs, appraisal costs,
internal failure costs, external failure costs in any cost center.
Quality management of heat treatment consists of quality policy and quality planning, quality control, quality assurance and quality improvement. Every activity contributes to the cost sum. Heat treatment quality costs can be classified into four categories: prevention costs, appraisal costs, internal failure costs, and external failure costs. The sum of all these costs has to be minimal.
Nonfinancial quality costs in heat treatment are important, and they have to be monitored too.
REFERENCES
[1] ..., ISO 8402: 1994.
[2] Hansen, D. and Mowen, M.: Management Accounting, South Western Publishing Co, Cincinnati, 1992.
[3] Horngren, C.: Cost Accounting, Prentice Hall, London, 1991.
[4] Kanetake, N.: Total Quality Management is the Key Word in Heat Treatment, 5th World Seminar on Heat Treatment and Surface Engineering IFHT'95, Isfahan, 1995.
[5] Rooney, E.: Measuring Quality Related Costs, CIMA, London, 1992.
[6] Mrša, J.: Reporting and Using Quality Cost Information, 30th Symposium HZRFR'95, Zagreb, 1995. (In Croatian)
G. Meden
University of Rijeka, Rijeka, Croatia
1. INTRODUCTION
products, even with vastly different complexity, must be designed to create ultimate satisfaction when used for the intended purpose. The only practical information available to a producer is usually the customer's requirement.
Production processes and operations that are predetermined by the design influence, to a large extent, the actual quality of products and services. Quality of design permeates the entire production process and extends into external supplies of material, production equipment, labor, technological knowledge, and so on. Therefore designing and assuring quality must extend as a managerial activity into manufacturing and other spheres too. Modern quality assurance management implies that the product or service is designed for attainment of the required quality.
A regimented program of control, inspection, and certification, further enhanced by timed and random audits, helps meet the specification and assures the customer that a superior quality product or service has been purchased. Besides, the regulatory environment may require an internationally standardized and certified quality system to be implemented.
Meanwhile, the most powerful force is an intense new global competition in quality. This competition has produced a major shift in world economic priorities: while the twentieth century has been the Century of Productivity, the 21st century will be the Century of Quality [1].
2. QUALITY APPRAISAL
Quality is a very popular subject of discussion in today's world. Indeed, many of us think of quality only when we purchase something, and we expect top value for our money.
Written quality specifications are as old as recorded history. In the absence of measurement standards, quality characteristics were described in words. The rule was "caveat emptor", i.e. let the buyer beware [2].
What and how quality is measured depends upon how quality is defined. Hopefully, these definitions are suitable for quantification and are broad enough to encompass the concepts of quality being "a degree of customer satisfaction" and "a degree of fitness for use" [3].
Competition is going to force everybody to give the customer a positive experience relative to the product and service quality he expects. Price has no meaning without a measure of the quality being purchased and, consequently, without adequate measures of quality, business drifts to the lowest bidder, with low quality and high cost the inevitable result.
One way to control and guide operations in a way that guarantees the right quality is to create and introduce a quality assurance system. Introducing a quality assurance system is one of the most important and most profitable investments, because right-first-time quality avoids waste and saves time and money.
Industry is striving to improve quality in all of its products, and the dynamic nature of
certification and accreditation schemes throughout the world is a reason for concern as well.
In many industries customers and regulatory bodies demand formal quality assurance, which
can mean a kind of quality evaluation and promotion of standardized quality assurance
system certification [4].
[Figure: the ISO 9000 series of standards, including ISO 9002, ISO 9003, and ISO 9004, and the purposes for which each is used.]
The objective of the ISO series of standards is to allow companies to create proper quality
systems that will enable them to consistently deliver products or services at a desired level of
quality. Using ISO 9000 to implement a basic quality system for the purpose of internal
processes improvement can benefit a company in many ways.
Improving the quality of operations, thereby increasing their efficiency as well as increasing control over operations, can serve to decrease costs. The pay-off will come in terms of less scrap, rework, delays, complaints, etc. An improvement in quality should result in an improvement in relative market share, which should eventually result in increased return on investment and, subsequently, increased profits.
ISO 9000 may also be used as a measure for supplier control. The benefit of having suppliers with a certified quality system is that, usually, the level of receiving inspection and vendor qualification effort can be reduced, resulting in increased savings.
The basis of an ISO 9000 quality system implementation is the commitment of company management to emphasizing the quality process throughout the company. If management elects to pursue ISO 9000 quality system implementation and subsequent certification, and truly commits to all processes and operations as shown in Figure 1, it should see the benefits mentioned under both internal improvement and market positioning.
Conformity assessment requirements address how a company proves that it complies with
the essential requirements of the directive. Conformity assessment procedure includes type
testing of the design and, possibly, periodic surveillance inspections or the quality system
certification.
It should be emphasized that quality system certification is only one of the possibilities.
Quality Management Systems are being developed to comply with the international
directives, standards, and laws.
4. CERTIFICATION PROCESS
The certification process involves an application, documentation review, possibly a pre-assessment, and a final assessment, followed by certification and ongoing surveillance.
Because of the ongoing surveillance, it is important for the company to select a certifier with
whom they can maintain a long-term relationship. A schematized outline of the certification
hierarchy is shown in Figure 3.
In some cases the pre-assessment is an optional step which a company wishing certification may elect to undergo or to by-pass. The actual certification process can vary from company to company, depending on practical circumstances and specific particulars, but usually the main steps are [6]:
- management commitment
- steering team installation
- gap analysis, i.e. a study of the existing quality system
- training in the ISO 9000 quality system, documentation preparation, and auditing techniques
- documentation writing
- new procedures implementation
- certifying body selection
- examination of the quality system documentation by the certifying body
- certification audit of the company quality system, and
- certification.
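The sequence of steps above lends itself to a simple ordered-checklist model. The sketch below is purely illustrative; the `CertificationProject` class and the short step names are our own shorthand, not part of ISO 9000 or of any certifying body's procedure.

```python
# Illustrative model of the certification steps as an ordered checklist.
# Step names follow the list above; the class itself is hypothetical.

STEPS = [
    "management commitment",
    "steering team installation",
    "gap analysis",
    "training",
    "documentation writing",
    "new procedures implementation",
    "certifying body selection",
    "documentation review by certifier",
    "certification audit",
    "certification",
]

class CertificationProject:
    def __init__(self):
        self.completed = []

    def complete(self, step):
        # Steps must be completed in order: each one builds on the previous.
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected '{expected}', got '{step}'")
        self.completed.append(step)

    def is_certified(self):
        return len(self.completed) == len(STEPS)

project = CertificationProject()
for step in STEPS:
    project.complete(step)
print(project.is_certified())  # True
```

The point of the ordering check is the one made in the text: later steps (the audit, certification itself) presuppose that the earlier organisational steps have actually been carried out.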
The auditors will hold an introductory meeting with the company management, and during the assessment they will interview all levels of personnel to determine whether the quality system, as documented in the quality manual and supporting procedures, has been fully implemented within the company.
At first sight, the process of obtaining certification may seem exhausting. Once the company is certified, a certificate is issued and the company is listed in a register or directory which is published by the certifying body. It is important for a company pursuing certification to understand the duration or validity of its certification. Some certifying bodies issue certificates that remain valid pending continuing successful surveillance visits. Others issue certificates which are valid for a limited period. Those whose certificates expire conduct either a complete re-assessment at the end of the certification period or an assessment that is somewhere between a surveillance visit and a complete re-assessment.
5. OBTAINING CERTIFICATION
The company must have a positive attitude toward the quality system requirements, because the cooperation of every department and everyone within each department is paramount. The foundation of successful certification is a team effort.
Moreover, this is necessary for maintaining the quality system, because the ISO standard requires an audit every six months. Each of these audits is completely random and takes just enough time to sample the company's procedures and processes.
For example, the auditor may ask an operator to show the operating process and procedures, and may look for logs to establish traceability, signatures, responsibilities, and closing of the operating loop.
In a welding operation, for instance, welding rods must be stored at a certain temperature and in a dry space to obtain quality welds. The auditor may ask a welder where and at what temperature the welding rods are stored, or what the welding procedure is, and will usually ask to see his log to check when he was last certified.
The auditor may also ask for a log of training, lists of employees sent for training, lists of courses taken, and copies of the training policy.
Thus, obtaining certification should not be taken lightly. In a practical case, a list of activities for obtaining ISO certification could be as follows [7]:
- serialization, calibration, and traceability of all measuring equipment
- revision level control and signature logs for all documentation
- operator certification and procedure qualification
- route sheets and work instructions
- retrieval of quality records
- handling and disposition of nonconforming material and products
- preparation of the quality assurance manual and formal procedures
- implementation of the quality system
- performing internal audits and coaching each department about possible questions and answers
- scheduling pre-assessment and/or final assessment audits.
By obtaining and maintaining the ISO certification, a company can improve its quality
control system, cut waste, and motivate its employees. In so doing, it improves its
competitive position on both international and local markets.
Perhaps the most important feature of the ISO 9000 quality system standards is that, unlike
many previous quality control standards, it goes far beyond the attempt to ensure product
quality through an inspection of finished end products rolling off a production line. Instead,
the ISO standards attempt to build in quality through an examination of the entire design,
development, and manufacturing process together with shipping and after-sales service.
In practical terms, the implementation of the quality system in a process includes all operations from sales and marketing, design and engineering, customer order entry, receipt of raw materials, all areas of manufacturing, assembly, and calibration, to final inspection and testing, shipping, and extended product service support.
6. BOTTOM LINE
The bottom line with regard to whether a company really needs a certified ISO 9000 quality system revolves around four basic points [5].
First, it must take the time to know its products' or services' market requirements.
Second, it must also know its competition. If competitors elect to pursue certification, they may be perceived in the marketplace as having a higher level of quality, which may adversely impact the company's operations.
Third, it must know its certifier. Certification is a long-term relationship, and the company must be sure to select someone who has the resources to support its needs.
Finally, above all, management must talk to their customers and meet their requirements.
Companies that have attained world class quality have begun requiring their suppliers to
move toward world class quality as well. In this way, quality criteria spread gradually within
the entire supplier chain. A company desiring to upgrade the quality of its products or
services must consider the contribution of its suppliers and all personnel to the quality of
each product or service.
The manufacturing industry, first in Japan, then in the USA, and now widely in Europe, has notably benefited from the deployment of the quality function and the development of quality management, although quality management is, in practice, nothing more than systematically applied common sense in pursuit of quality.
Various programs are being studied and implemented, but the most successful implementations appear to be at those companies that require each individual to be fully responsible for his own work and quality approach.
REFERENCES
1. Juran, J. M.:
2. Juran, J. M., ed.: Quality Control Handbook, McGraw-Hill, New York, 1962.
3. Meden, G.:
4. Meden, G.:
5. Potts, L.:
6. Purcell, D.:
7. Vermeer, F. J.: ISO certification pays off in quality improvement, Oil & Gas Journal, 90 (1992) 15, Tulsa, Oklahoma, 1992, 47-52.
D. Iwanczyk
University of Bochum, Bochum, Germany
interlinking of the production stages for combined hot forming production processes. On the one hand, using these quality control loops allows preventative measures to be taken to ensure that quality is achieved, thus permitting the results of the previous stages to be individually tailored to the ensuing stages of production. On the other hand, it becomes possible to control the quality of the production stages and their effects on the quality of the ensuing stages. The entire production line can be covered by using horizontal control loops of varying range, and, from an economic point of view, optimized quality control of the whole production can take place.
1. INTRODUCTION
The success of a company is mainly dependent on its powers of innovation, its
productivity, and the quality of its goods. In particular, quality has become extremely
important [1]. When ensuring quality, it is no longer enough to carry out measurements and
tests after the completion of the product. The quality of the product should be planned,
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
guided, controlled and manufactured in all the production stages, keeping in mind the
demand that
"quality should not be controlled but produced" [2].
Apart from the legal aspects resulting from increased safety regulations (product liability laws) [3], interests in economic and competitive viability primarily force today's forming technology companies to pay particular attention to ensuring product quality.
tendency to manufacture products as near-net-shaped as possible has an influence on the
shaping of future product quality and the corresponding quality assurance measures that
must be undertaken [4]. Near-net-shape production is the manufacturing of workpieces
using forming technology whose functional surfaces require either no or very little
precision work at a later date. This places far greater demands on the dimensional, shape
and locational tolerances which have to be maintained, as well as requiring that all the
quality features are consistently maintained. These greater demands on the quality of the
forming workpieces can only be attained if the correspondingly high requirements are
fulfilled by all the production stages in the process. Quality orientated management is
necessary throughout the entire process.
2. AIMS
As a rule, production processes in hot forming technology consist of several successive stages. Combining parts of the processes along the material flow in line production is a typical feature of hot forming processes. The workpiece is worked on in various stages where certain changes to its features are dealt with in turn. In this kind of combined production the quality at each step of the process is crucial for the quality of the end product. Therefore, if quality orientated production is to be achieved, not only the influence of each stage on the result of the production process as a whole must be recognised and taken into consideration, but also the reciprocal influence of the individual stages [5].
[Figure: a combined production process consisting of successive process stages 1, 2, and 3 along the material flow.]
quality control measures are optimally suited. Apart from sound knowledge of the technology, the basis of all quality controlling activities is a set of suitable communication structures for exchanging information on quality. One should operate at production level and the other should transcend all the levels. A joint central quality databank used by all the participants is essential for integrating all the quality control measures.
[Figure: quality control loops across the process stages, showing the flow of material, the flow of information, disturbances acting on each stage, and a high-level quality assurance authority.]
the degree of deviation in the features of intermediate products only becomes obvious after the intermediate products have reached the next stage of production. This means that quality defects which occurred at an earlier stage and have not been discovered become evident later, when the process does not run as planned. Further quality control loops which take these facts into consideration are realised by feeding back information from previous stages of the process. When deviations in quality occur whose cause can be traced back to previous stages, the parties responsible for the possible cause are informed. The existing defect is then dealt with in the affected stages by applying quality control measures.
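The backward feed of quality information described above can be sketched roughly as follows. The stage and feature names are invented for illustration and do not come from the paper; the point is only the tracing of a deviation back to the stage responsible for the affected feature.

```python
# Sketch of a backward-feeding quality control loop: a deviation detected
# at a later stage is traced back to the stage responsible for the feature.
# Stage and feature names are illustrative only.

RESPONSIBLE_STAGE = {          # which stage produces which feature
    "diameter": "forging",
    "hardness": "heat treatment",
    "surface":  "forging",
}

def trace_back(deviations):
    """Return, per responsible stage, the deviating features to correct."""
    report = {}
    for feature in deviations:
        stage = RESPONSIBLE_STAGE[feature]
        report.setdefault(stage, []).append(feature)
    return report

# A downstream inspection finds two deviations:
print(trace_back(["diameter", "hardness"]))
# {'forging': ['diameter'], 'heat treatment': ['hardness']}
```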
[Figure: extended horizontal quality control loops; the quality data of process stages 1 and 2 are fed forward so that each subsequent stage can adapt its process to the current quality data, under a common co-ordination.]
discrepancy between the existing quality of the initial data and the participants' expectations. For this reason, fast vertical control loops were designed, allowing influence which meets the demands of the stages in combined production processes. The basic prerequisite for this is that there are direct communication channels between the higher levels and the production level.
[Figure: a vertical quality control loop; when a process stage is disturbed, it activates a higher-level authority, which demands a modified process control program; quality data of previous process steps are loaded, reaction rules are applied, and quality data for the following process steps are passed on.]
5. QUALITY CONTROLLERS
Similar to the classic control loops, quality control loops can also have controlled systems
and controlling means. Quality control functions are also required on the production level
to effect the extended horizontal quality control loops. The tasks to be accomplished by
these controllers are determined by the intended behaviour of the control loops. As an
example of this, the quality controller is seen as a horizontal quality control loop, with a
preventative effect. The preventative pattern of behaviour found in extended quality control
loops assumes that there is a quality controller module in each stage of the process, which
has suitable strategies in store to vary the production programme within the stage of the
process so that it will react to changed initial qualities.
A first set of tasks is characterized by its ability to ascertain and react to changed initial
qualities. For this purpose the detailed quality data and information of the stored process
stages are analysed and categorized. The size and effects of the deviations between the
quality actually attained and what had been predicted for the intermediary products must be
evaluated. In the case of a large degree of deviation, the control programme of the
production stage will be modified on-line so that the desired end product quality, or at least
a minimisation of the negative influences, will be reached, despite a change in the basic
conditions. The decisions as to the necessary quality assurance reactions will be made
based on knowledge of the rules and controls. This results in direct intervention in the
control programme of the production stage or, if it proves necessary, demands are made to
higher levels to overcome the present quality problems with short term measures.
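The decision logic of such a quality controller might be sketched like this. The deviation thresholds and the action names are invented assumptions for illustration, not values taken from the concept described above.

```python
def react(attained, predicted, tolerance=0.05, limit=0.15):
    """Decide the quality assurance reaction for one intermediate product.

    attained and predicted are values of a quality feature; the
    tolerance and limit thresholds are illustrative assumptions.
    """
    deviation = abs(attained - predicted) / predicted
    if deviation <= tolerance:
        return "continue"                     # deviation negligible
    if deviation <= limit:
        return "adapt control programme"      # modify the stage on-line
    return "activate higher-level authority"  # short-term measures needed

print(react(10.2, 10.0))   # continue (2 % deviation)
print(react(11.0, 10.0))   # adapt control programme (10 %)
print(react(12.0, 10.0))   # activate higher-level authority (20 %)
```

The three branches mirror the text: small deviations are absorbed, large ones modify the control programme of the stage on-line, and only the remaining cases are escalated to the higher levels.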
[Figure: the quality control module of a process stage; its inputs are the quality data of previous process stages and reference values for the input quality, reaction rules of the form "if Xi then Yi" select quality assurance activities and control data, a higher-level authority can be activated, and the output is the quality data of the processed workpiece.]
On the process control level, approaches based on the knowledge acquired are particularly
suitable for effecting the reaction controls in the quality control function. The tasks to be
dealt with by the reaction controls are generally:
- to comprehend and evaluate the present production situation
- to introduce suitable measures during a production stage
- to activate other components of the quality control
These reaction controls have to be individually tailored to each application (subprocess,
type of control). By doing this, progressive methods based on the knowledge acquired from
analyses and decisions can be used, such as expert systems and fuzzy logic.
6. COMMUNICATION STRUCTURES AND ELEMENTS
The most decisive precondition enabling quality control loops to function is the existence of a well-functioning communication system for exchanging data and information relevant to quality. Taking into consideration the different demands which arise in the flow of quality data to be realized on the process level, two different approaches to exchanging information were followed.
[Figure: exchange of product quality data between the process stages under a common co-ordination.]
8. ECONOMIC ASPECTS
As a rule it is very difficult to quantify the economic advantages gained by quality control measures. In terms of quality, the advantages of the aforementioned quality control concept are:
- higher production quality
- higher productivity
- controlled process chains due to extended quality control loops
- extended product ranges from hot forming processes
- error reduction
The special advantages of this concept can also be seen in the error tolerant behaviour
engendered by using control loops which are forward-looking in character. This fact can
also be taken into consideration when setting up the individual stages. Individual stages no
longer have to submit to strict tolerances if it is possible to individually compensate for
certain fluctuations in the features of the intermediary products in the course of production.
This leads to simple and cost-effective production processes.
REFERENCES
1. Rinne, H.; Mittag, H.-J.: Statistische Methoden der Qualitätssicherung, Carl Hanser Verlag, München 1989
2. Warnecke, H.-J.: "CIM - Die Unternehmen vernetzen sich", Märkte im Wandel, Bd. 14: CIM, Spiegel-Verlag, Hamburg 1990
3. Bauer, O.: "Vorbeugen ist besser - Qualitätssicherung im Schmiedebetrieb erfordert viele Maßnahmen im gesamten Fertigungsablauf", MM Maschinenmarkt, Vogel Verlag und Druck KG, Würzburg 96 (1990) 43
4. König, W.: "Fertigungsverfahren Band 4 - Massivumformen", 3. Aufl., VDI-Verlag GmbH, Düsseldorf 1993
5. Maßberg, W.; Iwanczyk, D.: "Ausschuß vermeiden - Rechnergestütztes Qualitätssicherungssystem für das Warmumformen", MM Maschinenmarkt, Vogel Verlag und Druck KG, Würzburg 99 (1993) 46
6. Iwanczyk, D.: Präventive Qualitätssicherung mittels informationstechnischer Verkettung von Teilprozessen in der Umformtechnik, Dissertation, Ruhr-Universität Bochum, Bochum 1994
7. Kühn, R.: Architektur von hierarchischen Betriebssystemen für die Prozeßautomatisierung in der Umformtechnik, Dissertation, Ruhr-Universität Bochum, Bochum 1992
SN = -10 log10( (1/n) Σ yi² )
where yi is the response variable and there are n observations in a run. A high value of the S/N ratio indicates that the corresponding combination of controllable factors is robust.
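Assuming the standard smaller-the-better form of the Taguchi S/N ratio (appropriate here, since the response is a cost to be minimised), the computation can be sketched as follows; the sample costs are invented.

```python
import math

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a smaller-the-better response:
    SN = -10 * log10( (1/n) * sum(y_i**2) )."""
    n = len(y)
    return -10.0 * math.log10(sum(v * v for v in y) / n)

# Unit costs observed for one run under four noise conditions (invented):
costs = [1.43, 1.45, 1.37, 1.45]
print(round(sn_smaller_the_better(costs), 2))  # -3.08
```

Because the costs enter squared, the ratio penalises both a high mean cost and a high spread across the noise conditions, which is exactly what makes a high S/N value a robustness measure.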
4. STRATEGIC DESIGN AND ANALYSIS OF MANUFACTURING SYSTEM
With the development of modern simulation languages, simulation has established its place in the design of manufacturing systems [7]. Simulation permits modelling of manufacturing systems at any desired level of detail. Controlled experiments can be carried out in a simulated environment at a low cost.
We suggest the use of this environment for the selection of manufacturing strategies following the Taguchi method. The effect of these strategies on the quality levels can be modelled and experimented upon. For example, the continuous improvement aspect of JIT can be modelled by using a learning curve model. The Taguchi method permits us to select robust strategies. An application of this simulation methodology is presented next.
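One common way to model such a continuous-improvement effect is Wright's power-law learning curve, in which each doubling of cumulative output reduces unit cost by a fixed percentage. The parameters below are invented for illustration; the paper does not specify which learning curve it used.

```python
import math

def learning_curve(first_unit_cost, rate, unit):
    """Wright's power-law learning curve: cost of the n-th unit.

    rate is the learning rate, e.g. 0.9 means that each doubling of
    cumulative output reduces unit cost to 90 % of its previous value.
    """
    b = math.log(rate) / math.log(2.0)
    return first_unit_cost * unit ** b

print(round(learning_curve(100.0, 0.9, 1), 1))  # 100.0
print(round(learning_curve(100.0, 0.9, 2), 1))  # 90.0
print(round(learning_curve(100.0, 0.9, 4), 1))  # 81.0
```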
4.1 Selecting the performance measure and defining the target
The goal of the experiment was to identify the levels of the manufacturing strategies which were most robust to quality problems. We used cost per unit of good finished product as the surrogate for the cost-effective quality level achieved by the manufacturing system. Consequently, this cost was the response variable. The analysis determined at which levels the production cost was at a minimum while the selected management strategies (factors) were the most insensitive to the noise in the given manufacturing system.
[Table: the experimental layout and results. An L4 inner array of noise factors (Nominal Cp, Process shift, Complexity; level 1 = Low, 2 = High) is crossed with an L8 outer array of the strategies SPC, JIT, and Automation. For each of the eight runs the unit costs under the noise conditions, their average, and the S/N ratio are listed; the S/N ratios range from -3.50 to -4.91.]
[Figure: average S/N ratio for the strategies SPC, SPC+JIT, and Automation, each at its Off and On levels.]
5. CONCLUSIONS
We have presented a methodology for the selection of manufacturing strategies. This
methodology is based on the Taguchi's method of parameter design and takes into account
the current process capability, possible process shifts, the complexity of the existing
processes, and the costs and benefits of the proposed strategies. The methodology was demonstrated on a hypothetical example manufacturing system.
The Taguchi method is normally applied to the tactical design of the production system.
Our methodology presents an application of the method to the strategic design of
manufacturing systems. This methodology is extendable to include other manufacturing
strategies such as total quality control and supplier certification.
ACKNOWLEDGEMENTS
The third author is grateful for the Academic Research Visitors Grant that he received from the School Research Committee of the University of Waikato, which made it possible for him to carry out the work.
REFERENCES
1. Mezgar, I.: Parallel quality control in manufacturing system design, Proc. of the Int. Conf. on Industrial Engineering and Production Management, April 4-7, 1995, Marrakech, Morocco, Eds. IEPM-FUCAM, pp. 116-125.
2. Foulds, L.R., Berka, P.: The achievement of World Class manufacturing in Central Europe, Proc. of the IFORS SPC-2 Conference on "Transition to Advanced Market Economies", June 22-25, 1992, Warsaw, Eds.: Owsinski, J.W., Stefanski, J., Strasyak, A., pp. 139-145.
3. Hyde, A., Basnet, C., Foulds, L.R.: Achievement of World Class Manufacturing in New Zealand: Current Status and Future Prospects, Proc. of the Conference on "NZ Strategic Management Educators", Hamilton, New Zealand, NZ Strategic Management Society Inc., pp. 168-175.
4. Taguchi, G., Elsayed, E., Hsiang, T.C.: Quality Engineering in Production Systems, McGraw-Hill Book Company, New York, 1989.
5. Ross, P.J.: Taguchi Techniques for Quality Engineering, McGraw-Hill Book Comp., New York-Auckland, 1988.
6. Drucker, P.F.: The Emerging Theory of Manufacturing, Harvard Business Review, May-June, 1990, 94-102.
7. Law, A.M., Haider, S.W.: Selecting Simulation Software for Manufacturing Applications, Industrial Engineering, 31, 1989, 33-46.
8. VAX/VMS SIMSCRIPT II.5 User's Manual, Release 5.1, CACI Products Company, February 1989.
P. Cosic
University of Zagreb, Zagreb, Croatia
1. INTRODUCTION
"As a rule, the greater a system's generality the lower its efficiency." [1]
The majority of complex and interesting problems do not have clear algorithmic solutions. Thus, many important tasks are realised in a complex environment which interferes with precise description and rigorous analysis of a particular problem. Traditional algorithmic methods are not suitable enough for successfully connecting design and manufacturing processes due to their complex nature (creativity, intuition, heuristics) [2].
Therefore, in this paper the use of expert systems for the purpose of detection and
elimination of faults in the technology of deep drawing is considered. This technology is
selected because of its wide use in production, long-time application in practice, and
relatively suitable validation of selected variables by simulation or design of experiments.
2. SETTING PROBLEM SCOPE
Generally speaking, deep drawing is a process of forming sheet metal between an edge-opposing punch and a die (draw ring) to produce a cup, cone, box or shell-like part. The flat blank is prepared from a strip or sheet. In the first stage of the expert system development, the problem field is restrained by the following features:
- steel sheet
- axi-symmetrical product shape
- no change of blank thickness
- workpieces with or without flanges
- multi-phase production of workpieces.
own interpretation of how certainty factors should be used. Frames (often called classes) are the basic building blocks of an object system, representing the generic classifications of things that make up the observed domain. A frame consists of a set of attributes, often called "slots". A good inheritance mechanism (often referred to as class hierarchies) allows multiple layers of parent/child relationships. Truth maintenance provides facilities that store the links between asserted values and the rules that made those assertions. A forward chaining control rule is defined using IF-THEN syntax that logically connects one or more antecedent clauses (or premises) with one or more consequents (or conclusions). Backward chaining is used as a control strategy if we begin with a conclusion (or hypothesis) and want to know all possible pieces of information that led to that conclusion. The inference engine (Figure 1) fires one rule and asserts a new fact (the rule's consequent), and the new fact matches the antecedent of another rule.
As a consequence of the inference engine linking together chains of rules, the expert system has the ability to explain the chain of reasoning that led to the final conclusion. This explanatory capability is the feature most useful for building trust in the advice the system is providing, because it removes the black-box mystery from the process of converting raw facts into expert advice. This feature also makes expert systems excellent training tools, because the novice can examine step by step an expert's thought process. Expert systems have to allow the developer to stop the inference engine temporarily at predefined breakpoints. Breakpoints are useful for seeing whether a rule is fired at all, and whether the rules are fired in the anticipated order.
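A minimal forward-chaining loop of the kind just described can be sketched as follows. The rules and facts are invented examples in the spirit of the deep-drawing domain, not entries from the actual knowledge base.

```python
# Minimal forward-chaining inference: fire any rule whose antecedents are
# all known facts, assert its consequent, and repeat until nothing new fires.
# Rules and facts are illustrative, not from the real knowledge base.

rules = [
    ({"fracture at bottom", "small punch radius"}, "cause: punch radius too small"),
    ({"cause: punch radius too small"}, "action: increase punch radius"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)   # the new fact may trigger other rules
                changed = True
    return facts

derived = forward_chain({"fracture at bottom", "small punch radius"}, rules)
print("action: increase punch radius" in derived)  # True
```

Note how the consequent of the first rule is exactly what triggers the second, which is the chaining behaviour the explanation facility can later replay for the user.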
To build the required robust domain model, we use structures that cluster and organise our facts concerning the observed process, structures called frames. The replacement of our simple assertions with a representation that uses frames provides:
- children, the direct descendants, inheriting by default all the attributes of their parents
- the possibility for a child to have more than one parent
- the related ability to constrain the allowable values that an attribute can take on
- modularity of information
- a mechanism that allows us to restrict the scope of facts considered during forward or backward chaining
- access to a mechanism that supports the inheritance of information down a class hierarchy.
In the considered model each object type will be implemented as a frame. These frames will be organised into a hierarchy that contains all components. In fact, the parent/child relationships of frames in a hierarchy are often referred to as class/subclass.
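The frame idea (slots, multiple parents, and default inheritance down the hierarchy) can be sketched in a few lines. The frame names and slots below are invented for illustration, not taken from the deep-drawing model.

```python
# Sketch of frames with parent inheritance of slot defaults; names invented.

class Frame:
    def __init__(self, name, parents=(), slots=None):
        self.name = name
        self.parents = list(parents)   # a frame may have several parents
        self.slots = dict(slots or {})

    def get(self, slot):
        # Look up a slot locally, then in the parents (depth-first),
        # giving the default-inheritance behaviour described above.
        if slot in self.slots:
            return self.slots[slot]
        for parent in self.parents:
            try:
                return parent.get(slot)
            except KeyError:
                continue
        raise KeyError(slot)

tool    = Frame("tool", slots={"material": "tool steel"})
drawing = Frame("drawing tool", parents=[tool], slots={"lubricated": True})
punch   = Frame("punch", parents=[drawing])

print(punch.get("material"))    # inherited from 'tool': tool steel
print(punch.get("lubricated"))  # inherited from 'drawing tool': True
```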
Validation techniques [6, 7] are used to assure the correctness of functionality and internal
logic, to check if the correct data or information is passed between the internal components
of an expert system. The validation set contains the following criteria:
completeness
consistency
robustness
system aspects
user aspects
expert aspects.
Each of these criteria should be observed and implemented during the building of the expert
system.
The developed expert system [8] can be described as a diagnosis system (Figure 1). Diagnosis systems relate observed behavioural irregularities to underlying causes. The area of fault cause elimination includes elements of design systems and debugging systems. Design systems construct descriptions of objects in various relationships with each other, and verify that these configurations conform to stated constraints. Debugging systems rely on planning, design, and prediction capabilities to create specifications or recommendations for correcting a diagnosed problem. By using simulation and design of experiments, the developed expert system also includes elements of prediction systems.
4. IMPLEMENTATION OF EXPERT SYSTEM IN DEEP DRAWING
The developed expert system works in the off-line mode as an aid for the elimination of faults observed during the technological process. It is used in such a way that, for a selected sketch or photograph of an observed fault (42 faults up to now), it suggests the possible causes of the fault (66 causes so far) and the possible ways of eliminating it (59 ways so far).
The evaluation function estimates the possible causes of the faults and suggests actions
taking into account the following criteria : price of the product, time of use, level of
disturbing technological process, and standardisation of tool parts and technological
parameters.
The expert system can eliminate the observed faults in three groups of actions. The actions
can be related to workpiece material, tool, technological parameters, and to their
interactions (Figure 2).
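The fault-to-cause-to-action chaining described above can be sketched as a simple lookup; the fault names, causes and actions below are illustrative assumptions, not the system's actual 42/66/59 entries.

```python
# Hypothetical fragment of the fault -> cause -> action knowledge base.
CAUSES = {
    "bottom fracture": ["blankholder pressure too high", "punch radius too small"],
    "wrinkling": ["blankholder pressure too low"],
}
ACTIONS = {
    "blankholder pressure too high": "reduce blankholder pressure",
    "punch radius too small": "increase punch radius",
    "blankholder pressure too low": "increase blankholder pressure",
}

def diagnose(observed_fault):
    """Return (cause, action) pairs suggested for an observed fault."""
    return [(c, ACTIONS[c]) for c in CAUSES.get(observed_fault, [])]

for cause, action in diagnose("bottom fracture"):
    print(cause, "->", action)
```

An evaluation function such as the one described in the text would then rank these (cause, action) pairs by cost, time of use and disturbance of the process before presenting them to the user.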
The first selected example illustrates how the kinds of faults on workpieces are observed and how the actions for possible corrections are classified. Analysing the observed faults (Figure 3), fractures at the vessel's bottom can be noticed. The possible causes of the faults can be multiple. They might include: sheet thickness tolerance, ultimate strength, grain orientation and size, surface faults, roughness level of die and blankholder, punch-to-die clearance, too high blankholder pressure, too large draw ratios, too small punch radius, inadequate lubrication, too thin sheet, too large ratio of ultimate strength to conventional yield limit, the deep drawing tool behaving as a blanking tool, etc. The possible actions for fault elimination are classified with respect to the complexity of criteria and the cost of actions. For the implementation of measures it is necessary to group the steps related to workpiece material, technological parameters, process of annealing, and tool constructional properties. Analysing the selected photograph or sketch (Figure 3), three types of faults and the possible causes of the faults are offered to us when the forward chaining rules are used.
Figure 4. Types of faults, possible causes of faults and potential actions for fault elimination
CONCLUSION
The developed expert system can be a help in the process of eliminating faults observed on the workpiece during the technological process. The observed difficulties in the visual recognition of faults can influence the adequacy of the specified actions for the elimination of fault causes. Therefore, the expert system will be supplemented with more input data related to material properties, chosen manufacturing technology and constructional tool properties.
The suggested measures are ranked according to the degree of probability. The analytical
evaluation of the proposed measures helps in the process of the most efficient measure
selection.
The development will be continued through further systematisation of the data from
literature, and through discussions and cooperation with experts of different educational
profiles. Also, simulations will be performed in the cases where there is a possibility to
lessen the subjectivity of action estimations in order to obtain the best response for the
observed problem. The validation of the simulations will be done by the selected design of
experiments.
REFERENCES
[1] Payne, E. C.: Developing Expert Systems, John Wiley & Sons, Inc., New York/Chichester/Brisbane/Toronto/Singapore, 1990, pp. 401.
[2] Cser, L.: Stand der Anwendung von Expertensystemen in der Umformtechnik, 25 (1991) 4, pp. 77-83; 26 (1992) 1, pp. 51-60.
[3] -: Metals Handbook, "Forming", ASM Handbook Committee, 8th Edition, Vol. 4, American Society for Metals, Metals Park, Ohio, 44073, 1970, pp. 528.
[4] Oehler, Kaiser: "Schnitt-, Stanz- und Ziehwerkzeuge", Springer-Verlag, 7. Auflage, Berlin/Heidelberg, 1993, pp. 719.
[5] Romanovskij, V. P.: "Spravocnik po holodnoj stampovke", Masinostroenie, Leningrad, 1979, pp. 520.
[6] Smith, P., Ng, S., Steward, A., Roper, M.: "Criteria for the Validation of Expert Systems", Proc. of the World Congress on Expert Systems, Orlando, Florida, December 16-19, 1991, Editor J. Liebowitz, Pergamon Press, New York/Oxford/Seoul/Tokyo, 1991, pp. 980-988.
[7] Greb, A.: "The LOOKER: Using an Expert System to Test an Expert System", Proc. of the World Congress on Expert Systems, Orlando, Florida, December 16-19, 1991, Editor J. Liebowitz, Pergamon Press, New York/Oxford/Seoul/Tokyo, 1991, pp. 1005-1012.
[8] Cervantes, J. A.: "CELLOS, an Expert System for Quality Defects Diagnosis on Cellophane Film Production", Proc. of the World Congress on Expert Systems, Orlando, Florida, December 16-19, 1991, Editor J. Liebowitz, Pergamon Press, New York/Oxford/Seoul/Tokyo, 1991, pp. 476-483.
[9] Nilsson, N. J.: Principles of Artificial Intelligence, Tioga Publishing Company, Palo Alto, California, 1980, pp. 476.
[10] Hayes-Roth, F., Waterman, D. A., Lenat, D. B.: Building Expert Systems, Addison-Wesley Publishing Company, Inc., 1983, pp. 444.
F. Galetto
Polytechnic of Turin, Turin, Italy
Too many of them think that Quality is growing and linked with Certification (based on ISO 9001-2-3 standards). Only short-sighted managers have been relating Quality to high
cost; moreover products coming from Certified suppliers are often not better than from
other suppliers; Companies can get Quality from qualified suppliers meeting their needs,
not from paper-certified ones.
If managers meditate upon these facts, they must acknowledge that "low Quality can be a
very costly luxury for a company" and that "Quality has always been a competitive
advantage". Unfortunately top managers often think, wrongly, that "all the problems of
disquality are originated by the workers, either in the manufacturing or in other areas".
They do not understand the important idea (shared by Deming, Juran, ..., and myself) that "more than 90% of times poor quality depends on the managers".
"Anybody can make a commitment to Quality at the boardroom table" (L. Iacocca).
Unfortunately management commitment to Quality is not enough; managers must
understand and learn Quality ideas.
Too many companies are well behind the desired level of Quality management practices.
Quality is a serious and difficult business; it has to become an integral part of management.
The paper is addressed to managers because they are decision makers: "managers have
the responsibility of major decisions in a company and the soundness of their decisions
affects the Quality of the products and the customer satisfaction". In order to make sound
decisions managers have to be aware of the consequences of their decisions; in relation to
Quality matters, managers have to commit themselves to assure that the concepts and
disciplines associated with Quality will be introduced into the developments programs of
the company.
Looking at the decisions of many companies, managers (if they are intellectually honest)
have to admit that, in many western countries, there is a general "lack of credible executive
action giving people permission to do things right and the help that such permission
requires".
Many times managers know little about Statistics and Probability Theory; nevertheless
they have to make decisions based on few data analysed with statistical methods
(devised by Statistics experts).
Managers do not like to ask themselves whether a method is good or bad especially
when a method provides them with results that are appealing; so-called experts do the
same several times.
Quality has always been a competitive advantage. The Japanese recognised that and made the
right decision: to learn. They called American Gurus to teach them Quality ideas and
methods. So they broke the "Disquality Vicious Circle".
Recently western nations have recognised that education and training are essential, but in
some way they are not making Quality decisions: they use blindly methods imported from
Japan, just because they are Japanese methods (e.g. Taguchi Methods).
The paper shows that Logic and the Scientific Approach are able to provide the right route
toward the good methods for Quality.
We show some methods, in order to invite managers to break the "Vicious Circle" (IGNORANCE - PRESUMPTUOUSNESS - PRESUMPTUOUSNESS - IGNORANCE) that prevents Companies from getting the Quality their customers need.
Managers and scientists who will understand the core of the following ideas will help their
nations to reduce the Quality gap, and therefore the disquality costs.
Quality achievement is not a matter of statistics, but of sound engineering practice.
If a manufacturer is able to produce Quality items at minimum cost, he can sell them at lower prices than his competitors; then he is certainly bound to win the competitiveness fight and increase his market share.
This certain route led the Japanese to their present dominance.
2. THREE ACTUAL CASES
We show three cases published by Taguchi Methods experts; the scientific analysis of data
provides different conclusions. Since the scientific analysis is correct, it follows that a huge
amount of money was wasted.
Managers, at every level, have to meditate upon these facts, decide to learn, and to climb
the ladder of knowledge: from
ignorance -> awareness -> simple knowledge -> know-how -> full understanding.
Quality is a competitiveness factor that must be integrated timely in all the company
activities in order to prevent failures; the only way is to give due importance to Quality in
each phase of the product development cycle.
That does not happen overnight, and needs a management metamorphosis, from "weather-managers" to Rational Managers.
Starting from the seed of any knowledge, "I know that I don't know", intellectually honest people break the "Disquality Vicious Circle" and climb the ladder of knowledge.
Managers are decision makers and therefore they need the tools for thinking in decision-making in order to be rational managers: recognise problems, collect information, set
priorities accurately, find causes, consider all factors and other people's views, consider
alternative courses of actions, consider consequences and sequel to troubleshoot the future,
consider risks to any choice, .....
To make full use of the thinking ability of people there is a basic approach, the ITE approach to decisions: every time a manager has to make a step in the decision process, he must ask himself:
If I do this, Then I'll have this consequence, Else I'll have this other consequence.
ITE is an integrated, holistic approach that releases intellectual resources that have been
hidden, unused or underused, opening channels of communication among people.
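The ITE question can be rendered as the most elementary of programs; this is an illustrative sketch, not the author's tool, and the consequences named are assumptions.

```python
# A minimal rendering of "If I do this Then ... Else ..." at one decision step.
def ite(condition, then_consequence, else_consequence):
    """If I do this, Then this consequence follows, Else the other one does."""
    return then_consequence if condition else else_consequence

# e.g. "If I invest in prevention Then failures are avoided Else disquality costs follow."
print(ite(True, "failures avoided", "disquality costs"))
```

The point of the approach is not the mechanics but the discipline: every step of the decision process is forced to make both branches, and their consequences, explicit.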
The first step toward this serious learning is: Intellectual Honesty.
There are many Quality techniques useful during the development phase; only two are
mentioned here, FMECA and DOE.
FMECA is to be used in order to identify potential failures and take preventive actions;
unfortunately managers either do not know the technique or they use the silly rule of
making decisions based on a priority index which is the product of 3 or 4 indexes; so doing
they do not base their behaviour on a rational approach and they do not make full use of the
thinking ability of people. There is no space here to pursue further the matter.
DOE helps a lot in preventing problems. The only way an engineer can "communicate"
with a phenomenon (failures, defects, yield, ROI, ... ) is through "data" (measurements on a
treatment   n. data   mean       st. dev.   S/N
1           4         -8,99786   0,288428   29,882039
2           4         -9,79303   0,389932   27,998568
3           4         -8,80422   0,250446   30,919533
4           4         -7,94556   0,162311   33,795531
5           4         -9,12618   0,055851   44,265161
6           4         -9,21131   0,029337   49,938118
7           4         -9,07666   0,027821   50,271063
8           4         -7,77125   0,230403   30,560055
9           4         -10,3254   0,165593   35,897299
10          4         -10,1674   0,06934    43,324507
11          4         -8,79386   0,377056   27,355476
12          4         -8,05024   0,467217   24,725800
13          4         -9,31471   0,162271   35,178572
14          4         -8,97796   0,222795   32,105445
15          4         -9,10306   0,143904   36,022291
16          4         -7,57433   0,088606   38,637624
4 factors at 2 levels:
S : size of capacitors V: voltage
T : temperature
R : radiation flow
The response is "current intensity" (in microamperes).
For each treatment the table shows: the mean of the 4 replications, the standard deviation (the square root of the variance), and the S/N, Signal-to-Noise ratio.
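The per-treatment statistics can be recomputed as follows; the sketch assumes the "nominal-the-best" formula S/N = 10*log10(mean^2/variance), which reproduces the tabulated S/N values, and the four replication values below are illustrative, not the published data.

```python
import math

# Per-treatment summary: mean, standard deviation and Signal-to-Noise ratio.
def treatment_stats(replications):
    n = len(replications)
    mean = sum(replications) / n
    var = sum((y - mean) ** 2 for y in replications) / (n - 1)  # sample variance
    sn = 10 * math.log10(mean ** 2 / var)                        # S/N ratio
    return mean, math.sqrt(var), sn

mean, s, sn = treatment_stats([-8.7, -9.1, -9.3, -8.9])
print(round(mean, 2), round(s, 3), round(sn, 2))
```

For instance, the first tabulated treatment (mean -8,99786, s 0,288428) gives 10*log10((8.99786/0.288428)^2), which is the listed 29,88.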
The analysis of S/N is:
Source   df   SS         MS         F       prob. F
T        1    4,73692    4,73692    0,067   0.8063
R        1    43,2383    43,2383    0,610   0.4700
S        1    248,6896   248,6896   3,510   0.1199
V        1    37,15833   37,15833   0,525   0.5014
T*R      1    39,04037   39,04037   0,551   0.4913
T*S      1    25,71987   25,71987   0,363   0.5731
T*V      1    18,89046   18,89046   0,267   0.6276
R*S      1    12,79817   12,79817   0,181   0.6885
R*V      1    10,93448   10,93448   0,154   0.7106
S*V      1    109,1914   109,1914   1,541   0.2695
Error    5    354,2195   70,8439
total    16   21273,41

S is significant
The observed values at the four levels B1-B4 are:

B1: 20, 22, 25, 28, 25, 26, 17, 23, 20
B2: 8, 12, 8, 10, 9, 12, 6, 8, 4
B3: 0, -2, 0, 3, 0
B4: -9, -12, -14, -13, -8, -6, -3, -20, -18, -22
Taguchi carried out the analysis of the means and considered the linear effect B1 of the temperature and two contrasts:
L1(A), between the non-Japanese suppliers and the 2 Japanese ones,
L2(A), between the suppliers Japanese1 and Japanese2,
and the interaction of the supplier contrasts with the linear effect as well.
The book marks with ** the effects significant at 1%.
Source      df   SS        MS
L1(A)       1    9.50      9.50
L2(A)       1    62.16     62.16     **
B1          1    2056.86   2056.86   **
B1 L1(A)    1    36.63     36.63     **
B1 L2(A)    1    1.26      1.26
Error       21   48.97     2.332
total       26   2234.28
Analysing the data scientifically with the G-Method, without being blinded by the means, it is found, with a significance level of 1%, that
L1(A) is significant
L2(A) is significant
run   A   B    C   D   E   F    test Wafer 1       test Wafer 2       test Wafer 3
1    -1  -1   -1  -1  -1  -1    2029 1975 1961     1975 1934 1907     1952 1941 1949
2    -1   0    0   0   0   0    5375 5191 5242     5201 5254 5309     5323 5307 5091
3    -1   1    1   1   1   1    5989 5894 5874     6152 5910 5886     6077 5943 5962
4     0  -1   -1   0   0   1    2118 2109 2099     2140 2125 2108     2149 2130 2111
5     0   0    0   ?   1  -1    4102 4152 4174     4556 4504 4560     5031 5040 5032
6     0   1    1  -1  -1   0    3022 2932 2913     2833 2837 2828     2934 2875 2841
7     1  -1    0  -1   1   1    3030 3042 3028     3486 3333 3389     3709 3671 3687
8     1   0    1   0  -1  -1    4707 4472 4336     4407 4156 4094     5073 4898 45?9
9     1   1   -1   1   0   0    3859 3822 3850     3871 3922 3904     4110 4067 4110
10   -1  -1    1   1   0  -1    3227 3205 3242     3468 3450 3420     3599 3591 3535
11   -1   0   -1  -1   1   0    2521 2499 2499     2576 2537 2512     2551 2552 2570
12   -1   1    0   0  -1   1    5921 5766 5844     5780 5695 5814     5691 5777 5743
13    0  -1    0   1  -1   0    2792 2752 2716     2684 2635 2606     2765 2786 2773
14    0   0    1  -1   0   1    2863 2835 2859     2829 2864 2839     2891 2844 2841
15    0   1   -1   0   1  -1    3218 3149 3124     3261 3205 3223     3241 3189 3197
16    1  -1    1   0   1   0    3020 3008 3016     3072 3151 3139     3235 3162 3140
17    1   0   -1   ?  -1   1    4277 4150 3992     3888 3681 3572     4593 4298 4219
18    1   1    0  -1   0  -1    3125 3119 3127     3567 3573 3520     4120 4088 4138

Source              df   SS     MS     F
A                   2    440    220    16
B (pooled)          2    7      3.5
C                   2    134    67     5.0
D                   2    128    64     4.8
E (pooled)          2    18     9.0
F                   2    181    90.5   6.8
Error (pooled)      9    121    13.4
total               17   1004
3. SIMULATION STUDIES
From the previous cases it is evident that a lot of disquality was generated and a huge amount of money was wasted. Were they unfortunate? Absolutely not: they were a-scientific.
In order to analyse the stupid statement "Taguchi Methods work well" a simulation study was carried out: a known model with 4 factors, A, B, C, D, was chosen and we analysed how many times the Taguchi Method and the G-Method "found the known truth".
In the model the "true values" for the significant effects of the factors A, B, C, D, and for the interactions AB, AD, BD, CD, were stated. 1000 simulation runs were carried out for each of the following situations, by dividing the values of the significant effects and by increasing the
variability (four values of the standard deviation s); a complete plan was generated and a fractional plan (8 runs) was used (confounding pattern I+ABCD):
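A simulation study of this kind can be sketched as follows: generate data from a known two-level model and count how often a simple contrast test "finds the known truth". The model constants and the crude 2-sigma test are illustrative assumptions, not the G-Method or the Taguchi analysis themselves.

```python
import random

# Monte Carlo sketch: detection rate of a known effect as its size shrinks.
def simulate(effect_scale=1.0, sigma=1.0, n_sim=200):
    random.seed(1)
    true_A, true_B = 4.0 * effect_scale, 3.0 * effect_scale
    hits = 0
    for _ in range(n_sim):
        data = []
        for a in (-1, 1):
            for b in (-1, 1):
                for _ in range(2):                       # 2 replications per cell
                    y = true_A * a + true_B * b + random.gauss(0, sigma)
                    data.append((a, y))
        est_A = sum(a * y for a, y in data) / len(data)   # contrast estimate of A
        if abs(est_A) > 2 * sigma / len(data) ** 0.5:     # rough 2-sigma check
            hits += 1
    return hits / n_sim

print(simulate(effect_scale=1.0))    # a strong effect is almost always found
print(simulate(effect_scale=0.01))   # a tiny effect is rarely declared significant
```

A sound method should behave exactly this way: the detection probability must fall toward the test's significance level as the true effect is divided down, which is the pattern the starred table below records.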
effect divided by    1    2    5    10   50   100   1000
s=1                  *    *    *    *    *    *     *
s=2                  *    *    *    *    *    *     *
s=5                  *    *    *    *
s=10                 *    *    *    *
When the importance of a factor or interaction is reduced, a sound method has to find that, according to statistical theory.
[Figures: "G-Method (classic) versus Taguchi: probability of finding significance of ALIASES, when stated data are divided by 1 and standard deviation is s=5", and companion plots. The curves compare the Classic analysis with the three S/N formulae (SN1, SN2, SN3) over the aliases A+BCD, B+ACD, AB+CD, C+ABD, AC+BD, BC+AD, ABC+D, for s = 2, 5, 10.]
The Taguchi Method did not find that, in spite of the fact that the value used as "error" was the pooling of the "truly not significant effects". The three most popular formulae for the Signal/Noise ratio, devised by Taguchi, were used for the analysis.
Some graphs are presented to show the probability of "finding the truth".
Does the Taguchi Method work??? It is really robust in FAILURE!!!
"Signal/Noise ratios" used in connection with the so-called Robust Design are nonsense from a scientific point of view: these ratios are multifunctional transformations of the data, and the transformed data must at the end be normally distributed if, logically, the F ratio resulting from the ANOVA shown in the "Quality Engineering using Robust Design" books is to have any statistical sense.
4. CONCLUSION
It is left to the Intellectually Honest reader to conclude that "Quality of methods for Quality is important" and that some methods are misleading (e.g. Taguchi Methods, Bayes Methods, ...).
R. Guseo
University of Udine, Udine, Italy
1. INTRODUCTION
The replicated split-plot design within experimental blocks is well known; it was introduced as a special version of the factorial design for situations in which a level of a main factor (whole) is simultaneously applied to a set of experimental units. This particular framework requires a special dependence claim, i.e. the equicorrelation among the assumed multinormal experimental errors.
This paper is devoted to the general case where the dependence among errors is arbitrary. In paragraph 2 the standard two-factor split-plot design with random blocks is examined. Paragraphs 3 and 4 generalize the error structure under which an exact analysis of a block effect and of a main effect of the whole factor A is proved. In paragraph 5, under an arbitrary multinormal assumption, the ratios between the MS related to the effects B and A x B are proved not to be F-distributed as in the uncorrelated case.
Via simple approximations, it is shown, in paragraph 6, that the classical standard tests (split-plot) for the main effect B and for the interaction A x B may be controlled, under the null hypothesis, with a semi-parametric argument that allows one to recover robustly the well-known decisional procedure.
2. SPLIT-PLOT DESIGN WITH TWO FACTORS WITHIN COMPLETE BLOCKS
Let us consider the simplest situation with two main factors and a block factor. Unlike
the usual assumptions of a factorial design, a single and randomized application of a
design treatment is not easily performed. Economical and physical reasons suggest the
application of a factor A to a class of experimental units (whole-plots). For a fixed
level aj of the main factor A, the levels of a second factor B may be randomly assigned
to the elements (sub-plots) of the whole-plots.
For instance, in ceramics firing, the main factors are the oven temperature A and the
clay mixture B. With a block factor it is possible to control different geometrical
shapes of each piece. In a factorial design each piece would be treated separately and,
in this case, at fixed temperature A for a particular clay mixture B. If the design is a
split-plot, a batch (whole-plot) is defined with different clay mixture pieces (sub-plots)
of the same shape (block) and is simultaneously treated at a fixed temperature in the
oven. Following the terminology of the example, the independence assumption of the
errors related to the same batch for fixed shape and temperature of the oven has low
credibility. It seems more appropriate to suppose them correlated. The standard split-plot design assumes an equicorrelation.
The reference model is only apparently analogous to the corresponding one used under the factorial assumptions. To be more precise, the model is

y_ijk = μ + τ_i + α_j + β_k + (αβ)_jk + ε_ijk     (1)

where i = 1, 2, ..., r is the block index; j = 1, 2, ..., a is the level index of factor A; k = 1, 2, ..., b is the level index of factor B; y_ijk is the observed response. The explanatory components of (1) represent: μ, the grand mean response; τ_i, the random effect of the i-th block, τ_i ~ N(0, σ_a²); α_j, the fixed effect of the j-th level of A, Σ_{j=1..a} α_j = 0; β_k, the fixed effect of the k-th level of B, Σ_{k=1..b} β_k = 0; (αβ)_jk, the fixed effect of the interaction A x B at levels j for A and k for B, Σ_{j=1..a} (αβ)_jk = Σ_{k=1..b} (αβ)_jk = 0.
The stochastic term, ε_ijk, represents the experimental error and is assumed to have null mean and normal distribution, ε_ijk ~ N(0, σ²). Independence between τ_i and ε_ijk is assumed, τ_i ⊥ ε_ijk; nevertheless, unlike the canonical factorial model, the correlation among errors is

E(ε_ijk ε_i'j'k') = σ²,     if i = i', j = j', k = k';
                    ρσ²,    if i = i', j = j', k ≠ k';
                    0,      elsewhere.
3. A GENERALIZATION
The equicorrelation condition characterizing split-plot designs is not acceptable in
many applied contexts. In order to overcome this restriction by modelling the dependence locally, new designing techniques, such as the nearest neighbour (NN) technique
have been proposed.
In [1], for instance, the experimental error is assumed to be such that E(ε_ijk ε_ijk') = ρ^|k-k'| σ² if |k - k'| = 1, E(ε_ijk²) = σ², and zero elsewhere.
In [2] a first order NN balanced design is introduced in order to model a particular spatial autocorrelation: E(ε_ijk ε_ijk') = ρ^|k-k'| σ², E(ε_ijk²) = σ², and zero elsewhere. More recently, in [3] optimal tests are proposed for nested factorial designs under a special circular dependence structure.
In this paper the dependence assumption of paragraph 2 is extended to a more general form, E(ε_ijk ε_ijk') = σ² q_kk', where Q = (q_kk') is an unknown full rank correlation matrix. This very weak assumption is of interest because a specifically patterned matrix Q may be a priori unjustified. The other assumptions remain unchanged.
Let us consider the following vectorization of the model (1), y* = (y_111, y_112, ..., y_11b, y_121, y_122, ..., y_12b, ..., y_ra1, y_ra2, ..., y_rab)', based upon the Kronecker product
SSR = Σ_i y_i··²/(ab) − y···²/(rab) = Σ_i (τ_i − τ·/r + s_i··/(ab) − s···/(rab))²     (3)

The vector whose squared norm is equal to SSR is [(I_r − U_r) ⊗ U_a ⊗ U_b](ε* + t*).
Let us define e* = ε* + t* and R = [(I_r − U_r) ⊗ U_a ⊗ U_b]; then SSR = e*'Re* and, by the independence between ε* and t*, e* ~ N(0, σ²V + σ_a² ab W).
Let us consider, now, R~ = R/ψ0, with ψ0 = σ² s/b + σ_a² ab and s = 1_b'Q1_b, and, in particular,

SSR/ψ0 = e*'R~e*.     (4)

Let us define Z0 = U_b(σ²Q + σ_a² ab I_b)/ψ0; then Z0² = Z0, because U_b(σ²Q + σ_a² ab I_b)U_b = (σ² s/b + σ_a² ab)U_b = ψ0 U_b. The matrix R~Var(e*) is idempotent,

(R~Var(e*))² = [(I_r − U_r) ⊗ U_a ⊗ Z0²] = R~Var(e*).     (5)

By a theorem about normal quadratic forms, see e.g. [4] p. 57, and because the rank of the corresponding matrix in (4) is r(R) = r − 1, the ratio SSR/ψ0 is centrally chi-squared distributed,

SSR/ψ0 = e*'R~e* ~ χ²[r−1; 0],     (6)

and E(MSR) = σ² s/b + σ_a² ab.
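The idempotency claim above can be checked numerically for an arbitrary full-rank correlation Q; the sizes and variance components below are illustrative assumptions.

```python
import numpy as np

# Check that (R/psi) Var(e*) is idempotent for an arbitrary correlation Q,
# with psi = sigma^2*s/b + sigma_a^2*a*b and s = 1'Q1.
rng = np.random.default_rng(0)
r, a, b = 3, 2, 4
sigma2, sigma2_a = 1.3, 0.7
A = rng.standard_normal((b, b))
Q = A @ A.T
d = np.sqrt(np.diag(Q)); Q = Q / np.outer(d, d)   # arbitrary full-rank correlation

U = lambda n: np.ones((n, n)) / n                 # averaging matrix J_n / n
I = np.eye
V = np.kron(I(r * a), Q)                          # Var(eps*) / sigma^2
W = np.kron(I(r), np.kron(U(a), U(b)))
Var = sigma2 * V + sigma2_a * a * b * W           # Var(e*)
R = np.kron(I(r) - U(r), np.kron(U(a), U(b)))
s = np.ones(b) @ Q @ np.ones(b)
psi = sigma2 * s / b + sigma2_a * a * b
M = (R / psi) @ Var
print(np.allclose(M @ M, M))                      # idempotent
```

This confirms why the block sum of squares keeps its exact chi-squared distribution even under the generalized dependence: the averaging over the sub-plot index collapses Q to the scalar s/b.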
The fixed effect of the factor A is evaluated by the quadratic form SSA,

SSA = Σ_j y·j·²/(rb) − y···²/(rab) = Σ_j (α_j + s·j·/(rb) − s···/(rab))²     (7)

Let us define ψ1 = σ² s/b, where s = 1_b'Q1_b; then E(MSA) = σ² s/b + rb Σ_{j=1..a} α_j²/(a − 1).
The corresponding error term is

SSEA = Σ_ij y_ij·²/b − Σ_i y_i··²/(ab) − Σ_j y·j·²/(rb) + y···²/(rab),     (9)

and

SSEA/ψ1 ~ χ²[(r−1)(a−1); 0].     (10)
The fixed effect of the factor B is evaluated by the quadratic form

SSB = Σ_k y··k²/(ra) − y···²/(rab) = Σ_k (β_k + s··k/(ra) − s···/(rab))²     (11)

Let us define b* = 1_r ⊗ 1_a ⊗ β, where β = (β_1, ..., β_b)'.     (12)
Let us define, now, B = [U_r ⊗ U_a ⊗ (I_b − U_b)] and, for simplicity, redefine e* = b* + ε*, so that SSB = e*'Be*, with e* ~ N(b*, σ²V).
Let SSB/γ = e*'B~e* be a suitable quadratic form, where B~ = B/γ and γ is a constant to be defined later. It is easy to prove the following identity

(B~Var(e*))² = (σ²/γ)² [U_r ⊗ U_a ⊗ ((I_b − U_b)Q)²].     (13)

Therefore, the matrix B~Var(e*) is idempotent if and only if

(σ²/γ)(I_b − U_b)Q(I_b − U_b)Q = (I_b − U_b)Q.     (14)

Such an equation has usually no solution for all matrices Q. A case which satisfies (14) is the following one. If Q = (1 − ρ)I_b + ρJ_b (standard split-plot) then (I_b − U_b)[(1 − ρ)I_b + ρbU_b](I_b − U_b) = (1 − ρ)(I_b − U_b) and, therefore, (14) is true with γ = σ²(1 − ρ). Excluding rare situations, SSB/γ is not a chi-squared random variable.
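The standard split-plot case of (14) is easy to verify numerically; b and rho below are illustrative.

```python
import numpy as np

# For Q = (1-rho)I + rho*J (equicorrelation), the centering matrix I - U
# annihilates the J part: (I-U) Q (I-U) = (1-rho)(I-U).
b, rho = 5, 0.4
Ib, Jb = np.eye(b), np.ones((b, b))
Ub = Jb / b
Q = (1 - rho) * Ib + rho * Jb
left = (Ib - Ub) @ Q @ (Ib - Ub)
print(np.allclose(left, (1 - rho) * (Ib - Ub)))   # True: (14) holds
```

The key step is that (I_b - U_b)J_b = 0, so the equicorrelated part of Q disappears and only the (1 - rho) multiple of the projector survives, giving gamma = sigma^2 (1 - rho).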
It is convenient, in the sequel, to determine the mean value and the variance of SSB by exploiting the known theorem on moments and cumulants of normal quadratic forms. See, e.g., theorem 1 in [4] p. 55. Its adapted version gives rise to

E(SSB) = σ²(tr Q − s/b) + ra Σ_{k=1..b} β_k²,     (15)
f) + b":::
z::t=I
f3r
(16)
SSEAB
=
=
(17)
The vector, whose squared norm is equivalent to SSEAB, is [(Ir- Ur) 0 Ia 0 (IbUb)]E*, so that, if EAB = [(Ir- Ur) 0 Ia 0 (Ib- Ub)], ther1 SSEAB = E*'EABE*.
As previously stated, define similarly EAB = EAB/8, with 8 a real constant to be
determined. It is easily proved that ss~AB = E*'EABE*, is not chi squared distributed.
The mean value of SSEAB is
~)
{18)
{19)
The fixed effect due to interaction between factors A and B may be detected via the
quadratic fo~~ SSAB,
SSAB
jk
L....t ((a/3)ik
ijk
(20)
Under the null hypothesis, E(MSB) = E(MSEAB).     (21)

Let Z = Y/X; then

μ_Z = E(Z) ≈ μ_Y/μ_X ≈ 1 by (21),     (22)

and

Var(Z) ≈ E(Y − μ_Z X)²/E(X²) = [Var(Y) + μ_Z² Var(X) + (μ_Y − μ_X μ_Z)²]/[Var(X) + (E(X))²].     (23)

In particular, for the ratio F_B,

Var(F_B) ≈ (2/T1)(1 + 1/(a(r − 1))).     (24)

Let us examine, now, the ratio between the first order approximations of T1 and T2 in a neighborhood of Q = (1 − ρ)I_b + ρJ_b; the ratio F_AB is treated analogously.
        0.5    1      1.5    2      2.5
a=2     5.2    4      3.45   3.12   2.90   2.73
a=3     4      3.12   2.73   2.50   2.34   2.22
a=4     3.45   2.73   2.41   2.22   2.10   2.0
As a concluding remark, we have attained again the well-known classic criterion according to which an F-type ratio is near to the significance threshold if its value belongs to the interval (2-4). Nevertheless, the proposed tables allow a much more flexible choice. A motivated access to the second part of the split-plot analysis of variance table, Tab. 1, may be allowed by exploiting an approximation even if the matrix Q has an unknown covariance pattern, different from the standard one, (1 − ρ)I_b + ρJ_b.
Tab. 1

Source          MS                               E(MS)                                                        Ratio
BLOCK           MSR   = SSR/(r−1)                σ² s/b + σ_a² ab
FACT. A         MSA   = SSA/(a−1)                σ² s/b + rb Σ_j α_j²/(a−1)                                   FA
ERROR (A)       MSEA  = SSEA/((a−1)(r−1))        σ² s/b
FACT. B         MSB   = SSB/(b−1)                σ²(tr Q − s/b)/(b−1) + ra Σ_k β_k²/(b−1)                     FB
INT. A x B      MSAB  = SSAB/((a−1)(b−1))        σ²(tr Q − s/b)/(b−1) + r Σ_jk (αβ)_jk²/((a−1)(b−1))          FAB
ERROR (A x B)   MSEAB = SSEAB/(a(b−1)(r−1))      σ²(tr Q − s/b)/(b−1)
REFERENCES
1. Kiefer, J. and Wynn, H.P.: Optimum balanced block and Latin square designs for correlated observations, The Annals of Statistics, 9 (1981), 737-757
2. Cressie, N.A.C.: Statistics for Spatial Data, Wiley, New York, 1991
3. Khattree, R. and Naik, D.N.: Optimal tests for nested designs with circular stationary dependence, J. of Statistical Planning and Inference, 41 (1994), 231-240
4. Searle, S.R.: Linear Models, Wiley, New York, 1971
5. Rao, C.R. and Mitra, S.K.: Generalized Inverse of Matrices and its Applications, Wiley, New York, 1971
C. Mortarino
University of Padua, Padua, Italy
1. INTRODUCTION
Off-line process optimization and quality assurance usually refer to multiresponse situations, and efficient statistical methods should preserve this multiresponse feature. Between-responses correlations could be exploited to improve the quality of the analysis, although it is often difficult to make them explicit or to give sensible approximate values for them.
In the first steps of a study, the actual behaviour of the system to be analyzed is
usually approximated by a simple model linear in its parameters. In order to provide
estimates for those parameters, many experimental designs are available. Among
Published in: E. Kuljanic (Ed.) Advanced Manufacturing Systems and Technology,
CISM Courses and Lectures No. 372, Springer Verlag, Wien New York, 1996.
them, because of their simple use and good properties, two-level factorial designs are
extensively used. With a general factorial design, many explanatory variables, factors,
each having two or more levels, can be simultaneously handled. These experiments
provide the opportunity to estimate not only the individual effects of each factor, the main effects, but also possible interactions.
Full factorial designs provide an experimental trial, run, corresponding to each possible combination of the factors' levels. This, however, quickly leads to a large experiment, often in contrast with expenditure limits. Some plans reduced from those designs, called fractions, may thus represent a useful compromise.
For two-level designs with n factors, fractions are formed by t·2^(n−m) runs, where m and t are indexes of reduction of the plan, which results in a reduction of the effects (typically higher order interactions) which can be estimated. Notice that t can be any integer in {1, 2, ..., 2^m}: if t equals a power of 2, the resulting plan is called a regular fraction; otherwise the plan is an irregular one.
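A regular fraction can be sketched by keeping the runs that satisfy a defining relation; the four-factor half-fraction below (t = 1, n = 4, m = 1, defining relation I = ABCD) is an illustrative assumption, not an example from the paper.

```python
from itertools import product

# A regular 2^(4-1) half-fraction: keep the runs with ABCD = +1.
def half_fraction(n=4):
    return [r for r in product((-1, 1), repeat=n)
            if r[0] * r[1] * r[2] * r[3] == 1]   # product of coded levels = +1

runs = half_fraction()
print(len(runs))   # 8 runs instead of the full 2^4 = 16
```

Choosing t not equal to a power of 2 (say keeping 12 of the 16 runs) would give an irregular fraction, for which the clean aliasing of effects provided by a defining relation is lost.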
Section 2 will introduce the model here used with a description of symbols and
notations. A simple multiresponse example will be also presented in order to give
an immediate idea of a situation fitting the context here described and to explain
in a more intuitive way some of the techniques used. In section 3, the main question will be discussed: usual assumption of independence among runs and between
responses measured at the same run is often unacceptable; a new one much weaker
than the previous one will be described and a new sufficient condition in order to
verify it will be presented. Results proved with this assumption in previous papers [1]
and [2] will be also recalled, proving that a standard estimation method, i.e. ordinary
least squares method, gives minimum variance linear unbiased parameters' estimators.
Through the example introduced in previous section a possible source of dependence
will be finally examined. Section 4 is devoted to the examination of the behaviour of
above-mentioned estimation method when the true dependence pattern only lies in a
neighbourhood of the assumption presented in section 3.
2. CANONICAL MODEL
Let

Y_iu = z_iu' β_i + ε_iu,    i = 1, 2, ..., h,   u = 1, 2, ..., N,     (1)

where z_iu' contains the factors' levels used for the u-th run, levels expressed in coded form, i.e. as elements of the subset {-1, +1}; β_i is here a k-dimensional parameters' vector whose estimation is required; finally, ε_iu are stochastic terms which are assumed to have zero expectation and, at least in this canonical model, common variance and independent distribution.
Model (1) can be alternatively represented in matrix form

Y*_(Nh x 1) = X*_(Nh x kh) β*_(kh x 1) + ε*_(Nh x 1).     (2)
ξ1   polysulfide index     {6, 7}
ξ2   reflux rate           {150, 170}
ξ3   moles polysulfide     {1.8, 2.4}
ξ4   time (in minutes)     {24, 36}
ξ5   solvent (cm³)         {30, 42}
ξ6   temperature (°C)      {120, 130}
Assignment of coded level ( -1) to the lowest level of each factor and ( + 1) to the
higher level allows description of a 6-factor 2-level full factorial design (2^6) through
the following table:
runs     ξ1   ξ2   ξ3   ξ4   ξ5   ξ6
1        +1   +1   +1   +1   +1   +1
...
64       -1   -1   -1   -1   -1   -1
Matrix X is obtained from this table as follows: columns corresponding to main effects are equal to the columns of this table; columns corresponding to required interactions can be obtained as products of columns of this table. The model matrix X* is finally calculated from X* = X ⊗ I_3.
860
C. Mortarino
Conversely, if we knew that not all factors are expected to influence all response components, we could use a smaller plan: supposing that each response component can be influenced only by a subset of the factors, we could construct a plan with as many factors as the cardinality of the largest of those subsets, and that could be done without violating the assumption of a common model matrix. If, for example, there are reliable indications that
a) ξ1 and ξ6 are not expected to influence Y1,
b) ξ3 and ξ4 are not expected to influence Y2,
c) ξ4 and ξ5 are not expected to influence Y3,
then in this simple case each response component is influenced by only four factors, and we could use a 2⁴ design with only 16 runs, described by
                 coded levels (a)
runs      ξ1    ξ2    ξ3    ξ4    ξ5    ξ6
  1       +1    +1    +1    +1    +1    +1
  ⋮
 16       −1    −1    −1    −1    −1    −1

(a) a fraction of the 2⁶ design chosen so that, for each response component, the columns of the four factors assumed to influence it form a full 2⁴ design.
From the previous table it is easy to see that, for each response component, the columns corresponding to the factors assumed to influence it form the basis of a 2⁴ design: the columns corresponding to (ξ2, ξ3, ξ4, ξ5) are the main-effects columns of a 2⁴ design, used for the description of Y1, just as the columns corresponding to (ξ1, ξ2, ξ5, ξ6) and to (ξ1, ξ2, ξ3, ξ6) do for the description of Y2 and Y3, respectively. □
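The point of the construction can be checked mechanically: one and the same 16-run orthogonal array serves all three responses, with its columns merely relabelled per response. A sketch under the factor subsets listed above (`roles` is our bookkeeping notation, not the paper's):

```python
import numpy as np
from itertools import product

# The common 16-run plan: a full 2^4 factorial in coded (-1/+1) units.
base = np.array(list(product([1.0, -1.0], repeat=4)))   # 16 x 4

# Which factor each of the four columns represents for each response
# component (from the example in the text).
roles = {"Y1": ("xi2", "xi3", "xi4", "xi5"),
         "Y2": ("xi1", "xi2", "xi5", "xi6"),
         "Y3": ("xi1", "xi2", "xi3", "xi6")}

for resp, labels in roles.items():
    D = base    # main-effects columns of a full 2^4 design for this response
    # Orthogonality check: D'D = 16 I, the basis property quoted in the text.
    assert np.allclose(D.T @ D, 16 * np.eye(4))
```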
Observe that the subject of the previous example could have been chosen from many other fields, such as metallurgy or chemistry, since, as already anticipated, multiresponse problems arise in very different situations. This example was not intended to exhaust the range of possible applications; it was only proposed to illustrate in a quick way the techniques referred to here.
Once model (2) is assumed, it is easy to calculate the minimum variance linear unbiased estimates for β* from

β̂* = (X*′X*)⁻¹ X*′ y* .   (3)
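Numerically, (2)–(3) amount to building X* = X ⊗ I_h and running an ordinary least-squares fit. A sketch with simulated responses (the noise level, seed and true β* are arbitrary choices of ours):

```python
import numpy as np
from itertools import product

# Coded 2^6 full factorial: 64 runs x 6 main-effect columns.
X = np.array(list(product([1.0, -1.0], repeat=6)))
h = 3                                    # response components
N, k = X.shape

# Model matrix of (2): X* = X (Kronecker product) I_h, shape (N h, k h).
X_star = np.kron(X, np.eye(h))

rng = np.random.default_rng(0)
beta_true = rng.normal(size=k * h)
y_star = X_star @ beta_true + 0.1 * rng.normal(size=N * h)

# OLS estimate (3): (X*'X*)^{-1} X*' y*, computed stably via lstsq.
beta_hat, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
```

Since the ±1 columns of X are orthogonal, X*′X* = N·I, so the estimator decouples coordinate by coordinate.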
3. DEPENDENT EXPERIMENTS
The previous model (2) relied on the strong assumption of uncorrelation between responses measured on different experimental units and between different response components measured on the same experimental unit. This second kind of uncorrelation, in particular, is almost always contradicted by real situations. Uncorrelated runs also prove quite difficult to realize in experimentation: precise rules given to an operator in order to perform an experiment correctly may not be fully understood and, consequently, may be partially violated. In this case the analyst should not proceed along the standard way; he/she, however, may not be able to know how the experiment was actually performed and to analyze the data accordingly. That is why it is important to explore which situations can be treated through the canonical methods described in section 2.
Consider model (2),

y* = X* β* + ε* ,   with   Cov(ε*) = Σ* .   (4)
In other words, nothing is assumed: we leave the dependence pattern completely free. We now partially restrict this broad range of patterns by introducing for Σ* the assumption of the extended V-robustness property with respect to the model matrix X*, a property proposed for the uniresponse case in [4] and extended to the multiresponse case in [1].
Definition. A covariance matrix Σ* is said to be extended V-robust with respect to X* if there exist a link function ℓ and functions g̃_i, i = 1, 2, ..., h, such that

Σ*_uw = ℓ[ g̃_1(x_u ⊙ x_w), g̃_2(x_u ⊙ x_w), ..., g̃_h(x_u ⊙ x_w) ] ,   (5)

where x_u denotes the u-th row of the coded matrix X and ⊙ the element-wise product.

A definition of this kind links the covariance between the responses measured on two experiments to the difference of set-up between those two experiments. The most important point is that nothing is assumed about the link function ℓ: this leaves a sufficiently wide range for dependence among different runs and a totally unconstrained range for dependence among response components.
A useful result can be obtained by turning expression (5), which is in terms of X, the coded matrix, into an equivalent expression in terms of the T(i) matrices: T(i) is, for i = 1, 2, ..., h, the matrix of a 2^(n−m) factorial design whose N = 2^(n−m) rows represent the experimental runs through the original co-ordinates of the levels of the factors supposed to influence the i-th response component.
For every γ′ = [γ1, γ2, ..., γk] and δ′ = [δ1, δ2, ..., δk], define T(γ, δ) ⊂ ℝ^k as the subset

T(γ, δ) = { x ∈ ℝ^k : x_j ∈ {γ_j, δ_j}, j = 1, 2, ..., k } ,   (6)

of cardinality 2^k. Within the class F of real-valued functions on ℝ^k × ℝ^k, consider

F̃ = { f ∈ F : ∃ g_f(·) such that ∀(γ, δ), ∀ x, y ∈ T(γ, δ), f(x, y) = g_f(|x − y|) } ,   (7)

where |x − y| denotes the vector whose j-th element is |x_j − y_j|, j = 1, 2, ..., k.
We are now able to prove the sufficient condition previously mentioned, which is an extension of the one proved in [1] for the case T(i) = T, ∀ i = 1, 2, ..., h: if each m_i belongs to F̃, a covariance matrix with entries

Σ*_uw = ℓ[ m_1(t_u^(1), t_w^(1)), m_2(t_u^(2), t_w^(2)), ..., m_h(t_u^(h), t_w^(h)) ]   (8)

is extended V-robust with respect to X*. Indeed, since m_i ∈ F̃,

Σ*_uw = ℓ[ g_1(|t_u^(1) − t_w^(1)|), g_2(|t_u^(2) − t_w^(2)|), ..., g_h(|t_u^(h) − t_w^(h)|) ] ,

where t_u^(i) denotes the u-th row of T(i).
Since each of the arguments of g_i(·) can assume only two values, denoting by I_A the indicator function of the set A, there exists a function g̃_i such that

g̃_i( x_u1 x_w1, x_u2 x_w2, ..., x_u(k+1) x_w(k+1) ) = g_i(|t_u^(i) − t_w^(i)|) .

It follows that

Σ*_uw = ℓ[ g̃_1(x_u ⊙ x_w), g̃_2(x_u ⊙ x_w), ..., g̃_h(x_u ⊙ x_w) ] ,

as claimed. □
We want to emphasize that very different functions belong to F̃: regardless of how their components are "added" (giving the maximum freedom to the pattern of covariance within responses), each component separately can be some general distance function (or "similarity" index) among the design points expressed through the factors' original co-ordinates.
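As one concrete member of this family (our own illustrative choice, not taken from the paper): a covariance that decays geometrically with the number of factors set to different levels in two runs depends on the runs only through |x_u − x_w|, and yields a valid covariance matrix:

```python
import numpy as np
from itertools import product

# Coded runs of a 2^3 design.
X = np.array(list(product([1, -1], repeat=3)))
N = X.shape[0]

def g(xu, xw):
    # Depends on the two runs only through |x_u - x_w|: geometric decay
    # in the number of coordinates where the runs differ.
    return 0.8 ** np.sum(xu != xw)

Sigma = np.array([[g(X[u], X[w]) for w in range(N)] for u in range(N)])

# Sigma is symmetric and positive definite, hence a valid covariance.
assert np.allclose(Sigma, Sigma.T)
assert np.all(np.linalg.eigvalsh(Sigma) > 0)
```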
The emphasis given to V-robust covariance patterns is motivated by the strong results proved through them. In [1] it has been proved that, if in (2) the model matrix X* is associated with a regular fraction of a factorial design and Σ* is extended V-robust w.r.t. X*, optimal estimates for β* can be obtained with the same method used in section 2, i.e. with the method used when responses were completely uncorrelated, because

β̂*_WLS = β̂*_OLS ;

here β̂*_WLS = (X*′Σ*⁻¹X*)⁻¹X*′Σ*⁻¹y* is the weighted least squares estimator, which however could not be calculated in practice, since Σ* is, except for trivial cases, an unknown matrix.
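The equality β̂*_WLS = β̂*_OLS can be illustrated numerically. Reproducing a member of the paper's extended V-robust class would require the full definition, so the sketch below uses a simpler stand-in: a covariance whose action leaves the column space of X* invariant, the classical condition under which ordinary least squares is already best linear unbiased (all numerical choices are ours):

```python
import numpy as np
from itertools import product

X = np.array(list(product([1.0, -1.0], repeat=3)))   # 2^3 design, 8 runs
h = 2
Xs = np.kron(X, np.eye(h))                           # model matrix X*

# Sigma maps col(X*) into itself: Sigma Xs = Xs (2 I + 0.5 Xs' Xs).
Sigma = 2.0 * np.eye(Xs.shape[0]) + 0.5 * Xs @ Xs.T

rng = np.random.default_rng(1)
y = rng.normal(size=Xs.shape[0])

Si = np.linalg.inv(Sigma)
b_wls = np.linalg.solve(Xs.T @ Si @ Xs, Xs.T @ Si @ y)   # weighted LS
b_ols = np.linalg.solve(Xs.T @ Xs, Xs.T @ y)             # ordinary LS

assert np.allclose(b_ols, b_wls)     # the two estimators coincide
```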
For irregular fractions a further step is necessary: these designs are, by construction, formed by several smaller (regular) subfractions. In this case it has been proved in [2] that the same result described above can be obtained only if we also assume independence among runs belonging to different subfractions; this makes that result particularly suitable for sequential experiments.
Example (continued). In this situation a 2⁶ or a smaller plan was performed. The runs order should have been randomized but, in practice, can we really be sure? Maybe we should consider the possibility of correlated runs: for example, a greater similarity between the experimental set-ups of two runs could entail a greater similarity between the factors that cannot be controlled and, hence, a greater covariance between the corresponding responses than between responses measured at runs with a more
Theorem 2. Let Cβ̂ and Cβ̃ be the estimators of Cβ obtained for model (2) through the ordinary least squares and the weighted least squares method, respectively. Let Σ* = Cov(ε*) = W + ρΓ, where W is an extended V-robust matrix with respect to X* and Γ is a generic symmetric matrix such that Σ* is still a covariance matrix. Then

[Cov(Cβ̂) − Cov(Cβ̃)]_ij = O(ρ²) ,   ∀(i, j);

in other words, the difference between those covariance matrices, i.e. the variance loss due to the use of the ordinary least squares estimator in place of the weighted least squares one, is of order ρ².
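The order-ρ² statement can be probed numerically: halving ρ should shrink the OLS-vs-WLS variance gap by roughly a factor of four. The uniresponse sketch below uses, in place of an extended V-robust W, a covariance for which the two estimators coincide exactly at ρ = 0; W, Γ and the values of ρ are all our own arbitrary choices:

```python
import numpy as np
from itertools import product

X = np.array(list(product([1.0, -1.0], repeat=3)))   # 2^3 design, 8 runs
n = X.shape[0]
W = 2.0 * np.eye(n) + 0.5 * X @ X.T     # OLS = WLS exactly when Sigma = W
rng = np.random.default_rng(2)
A = rng.normal(size=(n, n))
Gamma = (A + A.T) / 2                   # an arbitrary symmetric perturbation

def variance_gap(rho):
    """Largest entry of |Cov(OLS) - Cov(WLS)| under Sigma = W + rho*Gamma."""
    Sigma = W + rho * Gamma
    XtX_inv = np.linalg.inv(X.T @ X)
    cov_ols = XtX_inv @ X.T @ Sigma @ X @ XtX_inv    # sandwich formula
    cov_wls = np.linalg.inv(X.T @ np.linalg.inv(Sigma) @ X)
    return np.abs(cov_ols - cov_wls).max()

g1, g2 = variance_gap(0.01), variance_gap(0.005)
ratio = g1 / g2          # close to 4 if the loss is O(rho^2)
```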
REFERENCES
1. Guseo, R. and Mortarino, C.: Multiresponse dependent experiments: robustness of 2^(k−p) fractional factorial designs, 1995 (submitted to J. Statist. Plann. Infer.)
2. Mortarino, C.: Multiresponse irregular fractions of two-level factorial designs with dependent experimental runs, Proceedings XI International Workshop on Statistical Modelling, Poster Session, Orvieto, July 15-19, 1996
3. Box, G.E.P. and Draper, N.R.: Empirical Model Building and Response Surfaces, Wiley, New York, 1987
4. Krouse, D.P.: Patterned matrices and the misspecification robustness of 2^(k−p) designs, Comm. Statist. Theory Meth., 23 (1994), 3285-3301
5. Guseo, R. and Mortarino, C.: V-robustezza approssimata, Atti della XXXVIII Riunione Scientifica della Società Italiana di Statistica, Rimini, 9-13 aprile 1996, vol. 2, 219-226
AUTHORS INDEX
Alberti N.  47
Albrecht R.  273
Alvarez M.L.  443
Antonelli D.  63
Armarego E.J.A.  97
Arsenovic M.  377
Bahns O.  273
Baldazo R.  443
Baptista R.M.S.O.  329
Bariani P.F.  311, 345, 361
Basnet C.  821
Beck S.  273
Beghi A.  615
Bellido L.F.  607
Beltrame M.  121
Benic D.  281
Berti G.A.  311, 345, 361
Bianchini F.  633
Bode C.  77
Borsellino C.  659
Braglia M.  427
Bray A.  63
Bruno M.  143
Burelli M.  745
Burgos A.  443
Buscaglia F.  517
Caloska J.  759
Capello E.  541
Carrino L.  599
Caruso A.  583
Cebalo R.  501, 699
Ceretti E.  135, 227
Chemyr I.A.  509
Chu W.H.  557
Cosic P.  829
Cosmi F.  607
Crippa G.  691
Cukor G.  219
Curtins H.  751
Custodio P.M.C.  329
D'Angelo L.  311, 345, 361
D'Errico G.E.  143, 167
Dassisti M.  583
De Bona F.  485
De Chiffre L.  643
De Filippi A.  691
De Toni A.  435
Dersha A.  243
Desai T.A.  557
Di Vita G.  599
Dini G.  461
Dolinsek S.  127
Dudeski Lj.  477, 667
Dukovski V.  477, 667
Eversheim W.  235
Failli F.  461
Favro S.  691
Ferrari M.  557
Ferreira P.S.  211
Filice L.  737
Fioretti M.  121
Forcellese A.  319
Foulds L.  821
Franceschini F.  63
Francesconi L.  353
Franchi D.  751
Fratini L.  737
Gabrielli F.  319
Galantucci L.M.  243, 265, 583
Galetto F.  837
Gecevska V.  683
Gerschwiler K.  7
Giardini C.  135, 227
Goch G.  783
Grottesi A.  353
Grum J.  493, 767
Guggia R.  311, 361
Guglielmi E.  167
Guseo R.  847
Henriques E.  211
Iwanczyk D.  813
Jakobuss M.  191
Jiang P.Y.  235
Junkar M.  525
Jurkovic M.  385
Kampus Z.  337
Karpuschewski B.  183
Kitakubo S.  411
Klocke F.  7
Konig W.  7
Kopac J.  151
Koziarski A.  159
Krajewski W.  623
Kruszynski B.W.  159, 199
Kuljanic E.  23, 121, 219
Kuzman K.  337
La Pierre B.  273
Landolfi M.  599
Lanzetta M.  591
Lazarev J.  759
Lazarevic D.B.  369, 549
Lenz E.  393
Lepschy A.  615, 623
Levi R.  63
Lo Casto S.  115
Lo Nigro G.  257
Lo Nostro G.M.  143
Lo Valvo E.  115, 659
Lombardo A.  257
Lourenço P.A.S.  729
Lucchini E.  745
Lucertini M.  303
Lujic R.  469
Lungu M.  567
Maccarini G.  135, 227
Majdandzic N.  469
Mancini A.  319
Masan G.  77
Maschio S.  745
Mathew P.  107
Matteucci M.  485
Meden G.  805
Meneghello R.  345, 643
Meneghetti A.  435
Menegon R.  751
Merchant M.E.  1
Mertins K.  273
Mescheriakov G.N.  509
Mesquita R.  211
Mesquita R.M.D.  729
Mezgar I.  821
Miani F.  121
Micari F.  47, 737
Micheletti G.F.  85
Midera S.  199
Mijanovic K.  151
Mikac T.  289
Mohr J.  485
Monka P.  775
Monno M.  541
Montanari R.  353
Moroni G.  599
Mortarino C.  855
Motta A.  517
Mrsa J.  799
Mühlhausser R.  575
Müller H.  575
Narimani R.  107
Neuberger A.  707
Nicolich M.  403
Nicolo F.  303
Odanaka T.  411
Opran C.  567
Ostafiev D.  207
Ostafiev V.  207
Pandilov Z.  667
Pantenburg F.J.  485
Pascolo C.  451
Pascolo P.  451
Passannanti A.  791
Pavlovski V.  683
Pepelnjak T.  337
Persi P.  403
Pesenti R.  403
Piacentini M.  115, 659
Pinto P.  211
Plaia A.  257
Plaza S.  443
Poli M.  517
Rabezzana F.  751
Radovanovic M.R.  369
Radovanovic M.R.  549
Rasch P.O.  175
Regent C.  183
Rinaldi F.  633
Romano D.  63, 675
Romano V.F.  607
Røsby E.  419
Rotberg J.  393
Ruisi V.F.  115, 659, 737
Sacilotto A.  643
Sahay A.  651
Sakic N.  251, 297
Sawodny O.  783
Schulz H.  37
Sebastijanovic S.  469
Seliger G.  575
Semeraro Q.  541
Sergi V.  353
Settineri L.  675
Shapochka O.V.  509
Shirizly A.  707
Shohdohji T.  411
Simunovic G.  469
Smoljan B.  715, 799
Sokovic M.  151
Spina R.  265
Stefanic N.  251, 297
Stoic A.  501
Stoiljkovic Lj.  377
Stoiljkovic V.  369, 377
Stojanovic N.  377
Strozzi A.  721
Sturm R.  767
Tantussi G.  591
Tirosh J.  707
Tomac N.  175
Tønnessen K.  175, 419
Tonshoff H.K.  77, 183
Tornincasa S.  675
Trajkovski S.  533
Tricarico L.  243, 265
Tu J.  557
Udiljak T.  699
Ukovich W.  303, 403, 633
Valenti P.  791
Viaro U.  615, 623
Villa A.  303
Vrtanoski G.  477
Wang J.  97
Webster J.A.  191
Zelenika S.  485
Znidarsic M.  525
Zompi A.  63
Zuljan D.  493