A SERVICE PUBLICATION OF
LOCKHEED MARTIN AIR MOBILITY SUPPORT
www.lockheedmartin.com/ams
Volume 30, No. 1 2005

CONTENTS
From the Editor’s Desk .......... 2
Aviation Safety Management .......... 3
SFAR 88—Fuel Tank System Fault Tolerance Evaluation .......... 5
C-130 Wing Service Life Update .......... 10
Certified Parts—An Update .......... 11
Operator’s Corner .......... 18
Air Mobility Support 2006 Calendar — Call for Photos .......... 19
Improving Information Delivery .......... 19

FROM THE EDITOR’S DESK

Welcome to 2005 and our first Service News issue of the year. Numerous requests have been received over the last few months for information about SFAR 88. The article, SFAR 88 — Fuel Tank System — Fault Tolerance Evaluation, provides a synopsis of the background, analysis of the issues, and the resulting Lockheed Martin recommendations.

Safety, in general, is something that must be a constant part of our operating routine. The Hercules aircraft we operate, and in many cases the environment we operate in, can be quite hazardous to us and/or to our equipment. Constant vigilance must be maintained as we go about our daily routines. Aviation Safety Management — Pay Now or Pay Later is an excerpt from a well-received presentation on Safety and is both thought provoking and entertaining.

In light of the recent USAF press release and given the significant interest and concern over the C-130 Wing Service Life, a copy of our recent memorandum is included in this publication. More details will follow in upcoming issues.

This publication is produced for you . . . the Hercules community. Your suggestions for topics or ways to improve the relevance or content of the Service News are always encouraged. We try to include interesting photographs of the Hercules at work. So, if you have comments, suggestions or photographs to share with others in the Hercules community, please forward them to us.
Address all comments or questions
pertaining to the Service News to the
Service News Editor at
[email protected]
RESTRICTION NOTICE
This publication is intended for information
only. Its content neither replaces nor
revises any material in official manuals or
publications. Copyright © 2005 Lockheed
Martin Corporation. All rights reserved.
Permission to reprint articles or
photographs must be requested in writing
from the editor.
Image Courtesy of the US Air National Guard
AVIATION SAFETY MANAGEMENT — PAY NOW OR PAY LATER

The dictionary defines safe as being “free from harm or risk”, certainly one of the
principal goals of any endeavor, particularly this business of building and flying
airplanes. But how many of you really fly, drive, operate, maintain, or otherwise
use anything that really fits this definition? If any of you raise your hand, be prepared to
have your bubble burst. Let me instead suggest to you that this principle of “safe” is not a
measurable absolute, and also tell you that it is virtually impossible to attain. It is instead a
relative abstraction, meaningful only in comparison to something else. You want safe?
Compared to what? Well, if we can’t guarantee freedom from harm or risk, how do we
operate? We survive and succeed by changing our focus to what we’re really after—an
acceptable degree of risk, thus ending up with something we can manage, for it’s really not
safety we’re looking for, except in the abstract, but rather a level of risk we can tolerate and
attempt to control. The “operational risk management” (ORM) discipline is an outgrowth of
this way of thinking and has become a formal part of many aviation programs over the last
decade in recognition of the distinction. ORM attempts to analyze an operation
beforehand to make it relatively, and I emphasize relatively, safer than it would otherwise
be—not to make it safe, but rather make the risks of the operation acceptable. ORM
doesn’t replace safety; it merely refines the concept by analyzing the relationship between
a hazardous condition or operation and its probability of occurring. In short, you don’t
manage safety but you can and do manage risk. Keep this in mind as we talk about how
safe something is and always be prepared to ask, “compared to what?”
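The paragraph above reduces ORM to weighing a hazard’s severity against its probability of occurrence. Here is a minimal illustrative sketch of that idea in Python; the category scales, labels, and acceptance threshold are hypothetical placeholders, not drawn from any particular ORM standard or from this article.

# Hedged sketch of an ORM-style risk assessment: combine hazard severity with
# probability of occurrence and accept only risks below a chosen threshold.
# The scales and threshold below are illustrative placeholders.
SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4}

def risk_index(severity: str, probability: str) -> int:
    """Simple risk index: higher numbers mean higher risk."""
    return SEVERITY[severity] * PROBABILITY[probability]

def acceptable(severity: str, probability: str, threshold: int = 6) -> bool:
    """A hazard is tolerated only if its risk index stays below the threshold."""
    return risk_index(severity, probability) < threshold

# A catastrophic outcome is tolerable only at the lowest probability.
print(acceptable("catastrophic", "improbable"))  # True  (index 4)
print(acceptable("catastrophic", "remote"))      # False (index 8)

The point of such a matrix is exactly the one made above: nothing comes out labeled “safe,” only “acceptable” or “not acceptable” relative to a threshold someone has to choose.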
It’s fairly common to hear someone use “99.9%” to signify high quality or confidence.
Bearing that in mind, what if things in your lives were 99.9% good or right; you’d be happy
as a clam, correct? Let’s see. A few years back someone calculated that in the US,
99.9% right would mean:
Not so good, huh? Okay, let’s re-engineer things to be an order of magnitude better;
surely 99.99% would be acceptable, don’t you think? If you do, you would have to be
willing to accept:
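Whatever the specific examples, the arithmetic behind them is simple. For a purely illustrative volume of one million events per year (an assumed figure, not one from the article), a success rate p applied to N events still leaves (1 - p) × N failures:

\[
(1 - 0.999) \times 10^{6} = 1{,}000 \ \text{failures per year}, \qquad
(1 - 0.9999) \times 10^{6} = 100 \ \text{failures per year}.
\]

Each extra nine buys a factor of ten, yet even at 99.99 percent something still goes wrong a hundred times.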
Satisfied? Didn’t think so. So how do we decide what is safe enough? One determinant is
cost, which then begs the question: just how much safety can you afford? So let’s talk about . . .
Here’s a little test. What is the almost universal first response of program management when
the safety engineer proposes a safety improvement? “How much will it cost?” Not whether it’s
moral or ethical or smart—it’s first and foremost about how much money. Some managers are
smart enough to ask instead, “What will it cost if we don’t do this?” Marginally more rational
reasoning, but money is still the driving force.
In his book, Aviation Safety Programs — A Management Handbook, Professor Wood explains
that business costs of risk are divided into two categories: insured and uninsured.
The cost of risk is never less than your minimum insurance premium, one of the fixed costs of
doing business. Have an accident, however, and watch what happens. The lost time,
inconvenience, and aggravation that follow any accident have value; so do the higher insurance
premiums that usually result. These and other uninsured costs mount up, generally to two or
three times insured costs. In 1995, the average insurance payout for a major commercial
aircraft accident ranged from $120 to $200 million for the hull loss, with an average liability
payment of $2.8 million per passenger. One operator suffered a $35 million hull loss and a total liability of
$375 million, after which its premiums rose to twice the industry average. That’s a powerful
incentive if ever there was one.
Ground accidents can be prohibitively expensive too. The best estimate in the US aviation
industry is that these cost nearly a billion dollars per year, even more incentive if you need one.
When we report accident costs, we typically account for the cost of insurance and worker’s
compensation if injuries occur, but we rarely calculate or report uninsured costs. Some typical
ones are:
• Insurance deductibles.
• Lost time and overtime.
• Investigation/corrective action costs.
• Loss of spares and specialized equipment.
• Higher costs of operating remaining equipment.
• Cost of hiring and training replacement workers.
• Loss of productivity of injured workers.
• Costs of cleanup and restoration of order.
• Loss of equipment use.
• Costs to rent/lease replacement equipment.
• Fines, citations, legal fees.
• Increased insurance premiums and excess liability claims.
• Costs of lost business and damage to reputation.
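A quick, hedged way to see how these items dwarf the insured payout is to tally them for a single hypothetical event; every dollar figure below is a placeholder invented for illustration, not data from Professor Wood’s book or any actual accident.

# Hypothetical tally of uninsured accident costs against an insured payout.
# All figures are placeholders; the point is the ratio, which the article says
# commonly runs to two or three times the insured costs.
insured_payout = 1_000_000  # assumed insured cost for the event

uninsured = {
    "insurance deductible": 100_000,
    "lost time and overtime": 350_000,
    "investigation and corrective action": 250_000,
    "loss of spares and specialized equipment": 400_000,
    "hiring and training replacement workers": 150_000,
    "cleanup and restoration": 200_000,
    "equipment rental during downtime": 300_000,
    "increased premiums and excess liability": 500_000,
    "lost business and damage to reputation": 450_000,
}

total_uninsured = sum(uninsured.values())
print(f"Uninsured total: ${total_uninsured:,}")
print(f"Ratio to insured costs: {total_uninsured / insured_payout:.1f}x")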
(Continued on page 12)
SFAR 88 — FUEL TANK SYSTEM FAULT TOLERANCE EVALUATION

On July 17, 1996, a 25-year-old Boeing Model 747-100 series airplane was involved
in an in-flight breakup after takeoff from Kennedy International Airport in New
York, resulting in 230 fatalities. The accident investigation conducted by the
National Transportation Safety Board (NTSB) indicated the center wing fuel tank exploded
due to an unknown ignition source. The NTSB issued recommendations intended to:
• Reduce heating of the fuel in the center wing fuel tanks on the existing fleet of
transport airplanes.
• Reduce or eliminate operation with flammable vapors in the fuel tanks of new type-
certificated airplanes.
• Re-evaluate the fuel system design and maintenance practices on the fleet of
transport airplanes.
The accident investigation focused on mechanical failure as providing the energy source
that ignited the fuel vapors inside the tank.
The NTSB announced their official findings of the TWA 800 accident at a public meeting
held August 22 through 23, 2000, in Washington, D.C. The NTSB determined the
probable cause of the explosion was ignition of the flammable fuel/air mixture in the
center wing fuel tank. Although the ignition source could not be determined with certainty,
the NTSB determined the most likely source was a short circuit outside of the center wing
tank allowing excessive voltage to enter the tank through electrical wiring associated with
the fuel quantity indication system (FQIS). Opening remarks at the hearing also indicated:
”. . . This investigation and several others have brought to light some broader issues
regarding aircraft certification. For example, there are questions about the adequacy
of the risk analyses that are used as the basis for demonstrating compliance with
many certification requirements.”
This accident prompted the FAA to examine the underlying safety issues surrounding fuel
tank explosions, the adequacy of the existing regulations, the service history of airplanes
certificated to these regulations, and existing maintenance practices relative to the fuel
tank system. The FAA, NTSB, and Boeing undertook a program to examine aircraft in
permanent storage to determine the condition of the fuel system components.
The accompanying photographs document some of the items found on these stored
aircraft.
On October 26, 1999, the FAA issued Notice of Proposed Rulemaking (NPRM) 99-18,
which was published in the Federal Register on October 29, 1999 (64 FR 58644). Three
separate requirements were proposed in that notice:
The Lockheed Martin Aeronautics Company 382/L-100 aircraft were required to meet requirements
of SFAR 88 Transport Airplane Fuel Tank System Design Review, Flammability Reduction, and
Maintenance Requirements. To ensure the design principles applied to each system are sufficient to
achieve the required level of safety, and consequently comply with the requirement, extensive
analysis was conducted on the entire fuel system.
The Plan
Lockheed Martin Aeronautics Company determined that to document the System Safety
Assessment (SSA) which met the requirements of SFAR 88, 14 Code of Federal Regulations (CFR)
25.981 and Advisory Circular (AC) 25.981-1C (draft), the following would have to be developed:
(Continued on page 7)
Scope
The scope of the analysis was to document those design features that preclude a fire and explosion
in the 382B, C, E, F, and G model aircraft fuel tanks. In conducting the safety analysis, the fuel
tanks, the fuel system up to the nacelle, and the surrounding dry bay areas that might be subject to
fuel that has leaked from the tanks or fuel fumes, were investigated. The intent of the System
Safety Assessment was to demonstrate that ignition sources are non-existent; exist at an acceptable
level; or are unacceptable but can be made acceptable if recommended design changes and
continued airworthiness requirements are implemented.
The components included in this analysis were the fuel pumps (boost and dump), fuel level control
valves, fuel quantity probes, plumbing tubes, equipment wire harnesses, conduits, filler caps, the
fuel tank construction, equipment in surrounding dry bays, venting systems, fuel dump systems, and
drain systems. Effects of adjacent systems (e.g., bleed air, other wires in the same wire bundles,
etc.) on these components were considered. Lightning, HIRF, EMI and their effects on the fuel tank
system were considered. All potential latent failures were considered to be possible and the effects
of another failure combined with the latent failure were analyzed. Heating from environmental
conditions, equipment failures, and equipment malfunctions was considered in relation to the auto-ignition
temperature of the fuel. Cascading failures were analyzed, and those that did not meet
extremely improbable levels were addressed.
The major components in the fuel tanks were subjected to Failure Modes and Effects Analysis
(FMEA) and quantitative Fault Tree Analysis (FTA). These were the Fuel Quantity Indicating
System (FQIS), the Refuel System (including the Single Point Refueling (SPR) System and Fuel
Level Control Valves), and the Fuel Transfer System (Fuel Boost Pumps, Dump/Transfer Pumps,
and Dump Valves).
The SSA identified potential failures and addressed what events must occur for the potential failure
condition to materialize, as well as discussions concerning their likelihood.
• A single point failure inside the Tank-in-Unit (TIU) in combination with EMI could potentially
cause the units to introduce 115 VAC on the Tank Unit wiring. This could result in fuel vapor
ignition if there is FOD or a short circuit between the Tank Units or the associated wiring
inside the tank.
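For readers unfamiliar with quantitative FTA, the sketch below shows the basic gate arithmetic applied to an invented ignition scenario. The event structure and failure rates are illustrative assumptions, not values from the Lockheed Martin SSA.

# Illustrative quantitative fault tree arithmetic, not the actual SSA model.
# AND gate: all inputs must fail, so probabilities multiply (assuming independence).
# OR gate: any one input failing suffices, so P = 1 - product(1 - p_i).
from math import prod

def and_gate(*probs: float) -> float:
    return prod(probs)

def or_gate(*probs: float) -> float:
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical per-flight-hour rates for a tank ignition scenario (assumed values):
latent_short_in_tank = 1.0e-5      # latent FOD/short between tank units
excess_voltage_on_wiring = 2.0e-6  # failure driving excessive voltage onto FQIS wiring
flammable_vapor_present = 0.3      # fraction of time the ullage is flammable

top_event = and_gate(latent_short_in_tank, excess_voltage_on_wiring, flammable_vapor_present)
print(f"Top event probability: {top_event:.1e} per flight hour")

# 1e-9 per flight hour is the order of magnitude commonly associated with
# "extremely improbable" catastrophic failure conditions.
print("Meets extremely improbable target:", top_event <= 1.0e-9)

The combination of a latent failure with a second, active failure is exactly the kind of sequence the bullet above describes, which is why latent failures were assumed present and paired with further failures in the analysis.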
(Continued on page 8)
Lightning Analysis
The assessment addressed the lightning induced fuel tank ignition threat on the 382/L-100. Several
areas of investigation were identified. Additional testing was carried out to verify effects regarding
skin thickness and the effectiveness of the suggested mitigations. That testing took place in July
2004 at Lightning Technologies Incorporated Laboratory in Pittsfield, Massachusetts. The testing
validated a fuel tank skin thickness of 0.080 inch, with three applications of paint, as the minimum
requirement. External tank nose and tail cones and any area over the main tanks less than 0.080 inch
thick will be addressed by a Service Bulletin (SB).
SSA Conclusions
The final task was a detailed SSA of each SFHA failure condition, performed using all
appropriate FAA requirements and guidance information, Lockheed Martin drawings and analyses,
and vendor drawings and analyses. The goal of this assessment was to identify those failure
conditions that required no further action, those that required an inspection, and those that required
a design enhancement to meet SFAR 88 requirements.
(Continued on page 9)
Recommendations
In order to maintain the Lockheed Martin Hercules Model 382 aircraft in compliance with the SFAR
88 guidance, the recommended actions were as follows:
• Provide inspection requirements to assure the aircraft in the field adhere to existing
appropriate design features and maintain continued airworthiness. These inspection
requirements are provided in the form of Service Bulletins. (SB 382-28-19 (82-770) Fuel –
SFAR 88 – Dry Bay Zonal Inspection and Inspection/Repair of Static Ground Terminal of Fuel
System Plumbing)
• Provide design improvements and issue to the operators as Service Bulletins distributed as
ADs so as to maintain continued airworthiness.
• Add Transient Suppression Devices (TSDs). (SB 382-28-20 (82-772) Fuel – SFAR 88 –
Installation of Ground Fault Interrupter (GFI), Transient Suppression Device (TSD), and
Flame Arrestor for Protection of Fuel System)
• Add Additional Bonding. (SB 382-28-21 (82-773) Fuel – SFAR 88 – Lightning Bonding
Jumper Installation)
• Add Ground Fault Interrupters (GFIs). (SB 382-28-20 (82-772) Fuel – SFAR 88 –
Installation of Ground Fault Interrupter (GFI), Transient Suppression Device (TSD), and
Flame Arrestor for Protection of Fuel System)
• Define remedial action for fuel tank skin panels based on lightning strike test results.
(Service Bulletin pending)
• Add Flame Arrestors for Fuel Vents in Lightning Zone 2A. (SB 382-28-20 (82-772) Fuel
– SFAR 88 – Installation of Ground Fault Interrupter (GFI), Transient Suppression Device
(TSD), and Flame Arrestor for Protection of Fuel System)
C-130 WING SERVICE LIFE UPDATE

Lockheed Martin recently distributed the memorandum shown below. This
memorandum was distributed due to the extensive interest in C-130 Wing Service
Life, following the USAF aircraft groundings and flight restrictions. As stated in the
memorandum, Lockheed Martin will issue a Service Bulletin addressing wing fatigue
cracking and service life in the near future. This SB will affect International Military and
Commercial C-130/L-100 operators. More detail will also be included in the next issue of the
Service News.
Lockheed Martin Aeronautics Company
86 South Cobb Drive, Marietta, GA 30063

14 February 2005

C-130/L-100 WING FATIGUE CRACKING AND SERVICE LIFE CONCERNS

Many C-130 operators are aware the United States Air Force issued a Press Release on Friday, February 11, 2005, regarding C-130 aircraft groundings and flight restrictions due to center wing fatigue cracking concerns. Over the past two years, Lockheed Martin Aeronautics has been providing analytical support and technical expertise to the USAF to assist them in evaluating the center wing service life. Lockheed Martin will continue to provide support to the USAF, and other operators, as requested in the months ahead.

In parallel with the USAF support activities, Lockheed Martin has also been assessing C-130/L-100 service life issues, related to both the center wing and outer wing, for International and Commercial operators. Wing fatigue cracking, wing service life, and aircraft operational usage have been specifically addressed during the 2002, 2003, and 2004 Hercules Operators Conferences (HOCs).

Lockheed Martin intends to issue a Service Bulletin (SB) addressing wing fatigue cracking and service life in the near future. This SB will affect International Military and Commercial C-130/L-100 operators. All analysis efforts regarding wing service life will be evaluated in terms of Equivalent Baseline Hours (EBH). EBH is not the same as aircraft flight hours. Both the actual wing flight hours, which may be different than the aircraft flight hours if the center wing has previously been replaced, and the past aircraft mission usage severity must be considered in order to determine EBH.

The SB will necessarily take a phased approach, where the first phase is determining the EBH of an individual operator’s C-130 fleet. The need for an operator usage evaluation was emphasized in Lockheed Martin presentations made during the 2003 and 2004 HOCs. Once an operator’s usage is evaluated, their EBH will be used to determine if further actions are required. Lockheed Martin can assist operators in evaluating their aircraft usage and in determining the EBH for their fleet.

The second phase of the SB will define structural inspection requirements beyond the normal inspection program, and the urgency of these inspections, tailored to varying EBH limits. Some aircraft with high EBH levels could require flight restrictions or aircraft grounding until structural inspections can be implemented to determine what remedial action is appropriate.

Lockheed Martin is committed to supporting all of our C-130/L-100 customers in assessing the wing service life issue and recommending prudent and necessary remedial actions to ensure continued safe operations of C-130/L-100 aircraft while minimizing the potential impact on the operator’s fleet. Lockheed Martin looks forward to working together with operators and their Service Centers as usage evaluations are conducted and structural inspection requirements are implemented.
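The memorandum makes clear that EBH folds both actual wing flight hours and mission usage severity into a single comparison metric, without publishing the formula. The sketch below only illustrates that idea; the mission categories and severity factors are hypothetical and are not Lockheed Martin’s actual EBH methodology.

# Illustrative sketch only: EBH depends on actual wing flight hours and past
# mission usage severity, per the memorandum, but the real methodology is not
# published here. Categories and factors below are invented placeholders.
def equivalent_baseline_hours(wing_hours_by_mission, severity_factors):
    """Weight each mission type's wing flight hours by a usage-severity factor."""
    return sum(hours * severity_factors[mission]
               for mission, hours in wing_hours_by_mission.items())

# Example: an aircraft whose center wing has flown 9,000 hours, split across
# hypothetical mission profiles.
hours = {"logistics": 6000, "low_level_tactical": 2500, "training": 500}
factors = {"logistics": 1.0, "low_level_tactical": 1.8, "training": 1.2}  # assumed
print(f"EBH = {equivalent_baseline_hours(hours, factors):,.0f} hours")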
CERTIFIED PARTS — AN UPDATE

In the world of C-130 spares it is often assumed that a part is high quality if it looks
good, fits, and has a Certificate of Conformance. The fact of the matter is that this is
often not true. Some of the most critical qualities of a part may be overlooked in a
general inspection because they cannot be seen or measured. The most significant of
these are special processes and materials.
Blumer continues, “One of our biggest concerns is that without knowing it an operator may
install a substandard and potentially dangerous part on their aircraft. We make Certified
Parts widely available in the C-130 B-H marketplace to prevent this from happening. All of
the part resellers who support the industry have access to high quality spares. By simply
requesting Certified Parts, operators assure themselves that the parts they receive are
made correctly.”
It is advised that even when a request for Certified Parts is made, the parts should be
checked upon delivery. Look for the Hologram (holographic sticker) on the part or part
packaging to assure you have received the correct parts. Well into its second year, the
Certified Parts program has issued more than 120,000 holograms. Many of the large parts
providers in the industry are carrying substantial inventories of Certified Parts. For more
information on Certified Parts, as well as other “Hologram” programs, please visit
www.LMHologram.com.
AVIATION SAFETY MANAGEMENT — PAY NOW OR PAY LATER (continued)

Management today is largely an exercise in cost containment, with two general types of costs —
fixed and variable — to control. About the first, we can do little or nothing, like that minimum
insurance premium from before, or fuel, spare parts, and the like. Control of the variable
costs—things like personnel, training, safety, marketing, advertisement, PR—in addition to
excess premiums and other uninsured costs, is where managers get to manage.
Typically, the ratio between fixed and variable costs is about 80/20. In aviation, excess
insurance premiums and uninsured costs are about 5 percent of total costs, a seemingly
manageable and reasonable amount. But that 5 percent of total is as much as 25 percent of
variable costs, a significant amount and one that should and will get the attention of smart
managers. You, as the Safety engineer, can help management control or reduce some of their
costs with effective accident prevention, lessons learned, and other similar aviation safety
programs, helping save a substantial portion of the variables over which managers have actual
control. As a result, if for no other reason than economics, you can be effective. In short, make
it about money and they will listen.
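The jump from 5 percent of total costs to 25 percent of variable costs follows directly from the 80/20 split quoted above:

\[
\frac{0.05 \times C_{\text{total}}}{0.20 \times C_{\text{total}}} = 0.25,
\]

so what looks like a rounding error against the whole budget is a full quarter of the costs a manager can actually influence.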
All too often these days, contributing to team success does not include raising objections to
team activities and decisions or recommending different courses of action from what the
majority thinks. Team leaders will at times accuse a dissident member of “not being a team
player” if they disagree. Since when did dissent become synonymous with disloyalty? What if it
is precisely a person’s job to enforce rules, standards, regulatory requirements, and processes?
What often happens to these people? Human nature being what it is, those whose role it is to
oversee, critique, judge, and evaluate the work of others often are not very popular. You who
are flight examiners or quality inspectors know what I’m talking about. Standardization,
evaluation, quality assurance, safety, audit, inspection, enforcement are all disciplines whose
practitioners may not be the most welcome members of a group. But as General Colin Powell
said: “Being responsible sometimes means pissing people off. It’s inevitable if you’re
honorable.” Many people resent being told what they can and cannot do, or that what they are
doing isn’t right or doesn’t meet requirements, or worse that they’ve failed at something. This
goes for managers as well as workers. I’ve heard both accuse a Safety engineer of not being a
team player when the safety guy pointed out an error, discrepancy, or failure to satisfy a
requirement, and then recommended a fix; but I would counter that the accused was functioning
as a true team member by raising the objection, for that is his role. Managers sometimes have
asked that these type people be removed from their programs or have threatened to delete their
budgets if they did not follow orders. People like this who fall in love with their projects and
programs become defensive when their success is at risk and often blame whoever points out a
problem instead of finding out who caused it. It’s one thing to question a recommendation and
ask for substantiation; that’s a supervisor’s responsibility. It’s an entirely different matter to
shoot the messenger just because he won’t change the message.
Safety used to consist largely of the “fly-fix-fly” method of hazard and risk control, the “Oops,
back to the drawing board” approach. Designers gave little thought to trying to postulate
beforehand what might go wrong—design, build, test, and fix was the accepted process. If you
were smart and lucky enough to get it right the first time, all was well from both a safety and
management point of view. But how many times are we that lucky? Few things, particularly we
humans, ever approach perfection, and nothing is risk-free; why then do we persist in trying to
prove otherwise? “Endlessly repeating the same process, hoping for a different result” is a
definition of insanity sometimes attributed to Albert Einstein; whether he actually said it or not,
it’s true. It wasn’t until the 1950s when we first began to embrace a systems approach to safety
on the ballistic missile program, wherein requirements were established beforehand to preclude,
prevent, or mitigate hazards and their consequences. Since the outcome of an unanticipated
serious failure of a missile was usually catastrophic, it made sense to prevent it from happening
from both an economic and human standpoint. The top-down, “fly-fix-fly” design approach was
simply too costly. As we’ll see, though, the systems approach wasn’t universally adopted back
then, despite the efforts of aviation safety pioneers like C. O. Miller and Roger Lockwood who
founded the System Safety Society in 1962. In fact, it still encounters resistance in some
respects even today. Here’s an example you may recognize.
The late Dr. Richard Feynman, one of the world’s leading physicists, in his role as a member of
the President’s Commission that investigated the Challenger accident, found fault with NASA’s
top-down design and testing methods. Speaking of the shuttle’s main engine, he said it “was
designed and put together all at once with relatively little detailed preliminary study of the
material and components. Then when troubles are found…, it is more expensive and difficult to
discover the causes and make changes.…[A] simple fix…may be impossible to implement
without a redesign of the entire engine.” It would have been much wiser and certainly
significantly less expensive to have used a bottom-up approach, or what he called the
“component system” for main engine design, wherein the properties of the materials,
components, subsystems, and systems are investigated and understood and then subjected to
rigorous tests to validate preliminary design decisions. He wrote “as deficiencies and design
errors are noted they are corrected and verified with further testing. Since one tests only parts
at a time, these tests and modifications are not overly expensive….Failures are easily isolated…
and there is a very good chance that the modifications to the engine to get around the final
difficulties are not very hard to make, for most of the serious problems have already been
discovered and dealt with in the earlier, less expensive, stages of the process.” In a nutshell,
and in far more eloquent words than I could ever muster, he was describing system safety.
One of the many other findings in the Commission’s report was that “Organizational structures…
placed safety, reliability, and quality assurance offices under the supervision of the very
organizations and activities whose efforts they are to check.” This wasn’t meant as a
compliment. There was no safety representative on the management team that made key
decisions concerning the launch. The “extensive and redundant safety program” that existed on
the earlier Apollo program was rendered ineffective in the face of unrelenting schedule
pressures. Sounds familiar, even today. Those who advised no launch or recommended
caution and further analysis were told to show reason why they considered the launch
potentially dangerous. Thus, NASA management violated the cardinal rule of system safety: it
is emphatically not the safety engineer’s job to prove something is unsafe; it is the designer’s, or
management’s, or the operator’s job to prove that it is safe.
The US military long ago recognized the need for autonomy for these types of oversight
functions and positioned them to reduce or eliminate control by the organizations and people
under scrutiny. Safety in particular almost always reports directly to the commander as a staff
function outside the control of any line unit or supervisor. Otherwise, the chilling effect of
burying the Safety people deep within a group they are charged to oversee and evaluate
ultimately would render them ineffectual, but even more worrisome, would send a powerful
message to others that Safety isn’t important. That’s not the message any organization wants
to send.
Culture of Safety
Among the several definitions of culture, the one that applies for our purposes is this: “the set of
shared attitudes, values, goals, and practices that characterizes a company or corporation.”
There can indeed be a culture of safety, one that operates to everyone’s benefit. It consists of
four principal elements—leadership, trust, attitude, and integrity—each very important, and all of
which, properly employed, will enhance your chances of success. The successful leaders are
those who inspire rather than intimidate, leading by example instead of by demand. Just as
important is an atmosphere of assured reliance, or trust, between and among superiors, peers,
and subordinates, one that demonstrates confidence, dependability, and faith in their abilities.
Your attitude itself can promote both safety and success, especially if you are always attuned to
the needs and desires of your own people and those of your customers. Let me suggest,
however, that the remaining element—integrity—may be the most important of the lot, both
because it incorporates the attributes of the others and is the one without which the others
would have little if any meaning.
Though related in some ways, integrity is different from honesty. Very simply, honesty is
adherence to the facts, a refusal to lie, steal, or deceive in any way. Integrity is somewhat more
complex, a firm adherence to a code of values, being trustworthy to a degree that you are
incapable of being false to a trust, responsibility, or a pledge. The former is more rule oriented;
the latter value based. Doing right because it’s the rule is honesty; doing right because it’s the
right thing to do is integrity. This is similar to the distinction the columnist Leonard Pitts recently
made between reputation and character. He wrote, “Reputation, it has been said, is about who
you are when people are watching. Character is about who you are when there’s nobody in the
room but you. Both matter, but of the two, character is far and away the most important. The
former can induce others to think well of you. But only the latter allows you to think well of
yourself.” Or as my mama told me when I was young, integrity is stopping at a stop sign in the
middle of the desert when there’s no one else around. It’s what you do and how you behave
when nobody’s watching that really matters.
Business ethics has been the topic du jour of late with all the revelations of financial
improprieties at many corporations. The people whose faces and behaviors have been in the
news might have benefited from paying close attention to what Norm Augustine, former
Lockheed Martin president and CEO, once presented as his personal checklist for helping
decide what is the ethically correct thing to do:
• Is it legal?
• If someone else did this to you, would you think it was fair?
• Would you be content if this were to appear on the front page of your hometown
newspaper?
• Would you like your mother to see you do this?
If you could answer “yes” to all four questions, then whatever you’re about to do is probably
ethical.
Another version of this is “Does it pass the smell test?” Occasionally, some course of action
appears to or actually does follow all the technical rules but still leaves you with a feeling of
discomfort; it just doesn’t “smell” right. If so, ask yourself these four questions and see how it
smells then. Chances are it may be a little rank!
What does all this have to do with safety? If your first response to a safety requirement is “How
can I get around that?” instead of “How can I satisfy that?,” you may be headed for trouble. If
your attitude is that your situation, your project or task, you and your people are somehow
“special” and thus have a different set of rules from everyone else, you’re likely to have
problems soon. If you more often than not sacrifice the “value” choice on the altar of
expediency or schedule, you will eventually regret that decision. When you’ve cut the last
corner and still must compromise to meet your deadline and you don’t call a time out, you’re
only one decision from potential disaster. The “How did we end up in this mess?” post-mortem
after many an accident contains some or all of these as significant steps in the sequence of
events that culminated in a mishap. If this is your culture, the values you embrace and promote,
failure is almost inevitable. It’s a matter only of when, not if.
So What Do We Do?
In a September 2002 message to Company supervision, Dain Hancock, LM Aero President,
reiterated the management behaviors he and his leadership team said were necessary to
achieve the Company’s goals for the year. Among several were three that sound pretty
familiar. One was that “Schedule will NOT take priority over Quality and Safety.” Another
was that “Healthy debate, discussion, and trust will form the basis of our interactions.”
Finally, he challenged supervisors to be “accountable for performance—getting the job
done, safely, and with the highest quality – the first time.” These are meant as every day
behaviors, not crisis actions, and they are key to meeting our goals.
In closing, let me suggest to you, especially those who manage and lead, something that has
fallen from favor in recent years, an approach the truly successful leaders all use. I call it
Managing Intangibles for lack of a better term. Everybody, especially those “bottom line”
manager types about whom I spoke earlier, wants to be able to measure success in some
quantifiable way, to be able to substantiate a dollar value to prove their worth. Many things,
however, don’t lend themselves to these type metrics; safety success is one of them. How
does one count the number of accidents prevented or serious incidents avoided? Who can say
whether this or any other safety briefing kept someone out of harm’s way? No one can, of
course.
But somewhere inside I know I’ve been successful in some way. Can’t prove it, but I know it
when I see the light come on in a young crewmember’s eyes or notice a change in behavior on
the part of a maintenance troop after we’ve had a safety briefing. I see it in a design engineer’s
learning about safety design requirements and remembering them on the next program and a
flight test engineer who initiates contact with his flight safety counterpart before a test program
starts instead of after a problem occurs. And I see it when an operating unit asks for Safety to
visit because they think they need some help getting the accident prevention message across.
I’ll admit to a chronic case of aging idealism as I arise each morning and don my rose-colored
glasses, and I refuse to believe caring and commitment to one’s profession are quaint,
eccentric, or out of style. Someone once proposed that, in trying to decide on a course of
action, once you reach 40 percent certainty that you’re right on an unknown, go with your gut
rather than waiting for 70 or 80 or 100 percent. Well, I’ve been beyond that 40 percent mark
for a long time, I accept that I’ll never get near 100, and I know in my gut that we’re
succeeding, however slowly, so I’ll continue to forge ahead.
Maybe all of us could benefit from being a bit more daring in our decision making, from placing
value ahead of cost as a decider, from doing something simply because it’s the right thing to
do. Get away from the bottom line as your only concern and dare to be innovative. Accept that
success can be measured in many ways other than with a dollar sign. That one life is worth
all the effort!
Figure 1—SB82-779/382-32-53
Revision 2 is the first published issue of this Service Bulletin to the operator. The basic
issue and Revision 1 were for FAA review only and were never published to the operator.
This Service Bulletin outlines enhancements to the flame arresting system.
82-776/382-32-52, Dated November 9, 2004, LANDING GEAR – RELOCATION OF NOSE AND MAIN
LANDING GEAR DOWN LOCK INDICATION
Basic Issue - The purpose of this modification is to relocate the nose and main landing gear
down lock indication grounds inside the nose and main landing gear wheel wells to
locations where they are less likely to fail. Failure of the landing gear ground circuit results
in loss of landing gear down lock indication.
Basic Issue - During troubleshooting of the DF301E UHF/VHF Direction Finder System, it
was discovered that the 317970-1/-5 DF Antenna Systems have not been electrically
bonded properly. Inspect aircraft for proper operation of the DF Antenna.
82-782, Dated October 20, 2004, ELECTRICAL POWER — INSTALLATION OF BATTERY CHARGER
Basic Issue - This modification is an optional product improvement. The battery charger
modification provides a controlled battery charging current, thereby preventing any
overheating while safely maintaining the charge on the batteries. This prevents excessive
battery replacement on the modified aircraft.
82-771/382-57-82, Dated December 7, 2004, WINGS – INSPECTION OF CENTER WING UPPER AND LOWER
RAINBOW FITTING FOR CRACKS
Basic Issue - Inspect center wing upper and lower rainbow fitting for cracks in accordance with
Hercules Airfreighter C-130/L-382 Series Progressive Inspection Procedures work cards SP-176
(upper fitting) and SP-257 (lower fitting).
82-784/382-71-23, Dated January 6, 2005, POWER PLANT - INSPECTION OF ENGINE TRUSS MOUNT FOR
CRACKING OF THE DIAGONAL BRACE LUGS WITH THE PRESS FIT STEEL BUSHING INSTALLED
Basic Issue - Engine truss mount cracking of the diagonal brace lugs, with the press fit steel bushing
installed, has been experienced by some C-130 operators. This inspection applies to aircraft serial
number 4801 and up, and to aircraft prior to 4801 with diagonal lugs modified by Service Bulletin
82-410/382-71-13 and/or repaired by SMP 583 to install the press fit bushing, or replaced with
current production configuration truss mounts.
82-780/382-25-09, Dated January 6, 2005, EQUIPMENT/FURNISHINGS ─ REWORK OF LIFE RAFT RELEASE CABLE
PULLEY BRACKET LOCATED AT FS 612.75 LH
Basic Issue - Field reports indicate that the life raft release cable has been rubbing against the top of
the pulley bracket located at FS 612.75 LH. Locate the pulley bracket at FS 612.75 LH and
reverse the rub block in the bracket assembly, such that the slot in the block is oriented downward.
82-778/382-71-22, Dated January 23, 2005, POWER PLANT — TEFLON HOSES AS AN ALTERNATE FOR
ELASTOMERIC (RUBBER) HOSES ON POWER PLANT, NACELLE, AND QEC
Basic Issue - This Service Bulletin is issued to make Teflon hoses available to operators using the
engines and associated QEC, should operators choose to use Teflon hoses in lieu of the current
standard elastomeric hoses. Teflon hoses offer many advantages over the older elastomeric types.
Hose assemblies made of Teflon have practically unlimited shelf life and greatly enhanced service
life. They also expand much less under pressure. This is a product improvement, and the Teflon
hoses are provided as an alternate for the elastomeric hoses.
OPERATOR’S CORNER

IT’S TIME TO SEND IN YOUR HERCULES PHOTOGRAPHS!!

Although we are barely into 2005, it’s time to begin the development of Air Mobility Support’s 2006 Calendar for the C-130 Hercules B-H models. This calendar features a selection of beautiful photographs of Hercules aircraft submitted by commercial and military operators around the world.

It takes almost the entire year to collect these images, determine the layout, and have the Calendar printed. So we need to start now. If you have digital (.gif or .jpg) photographs that you would like to be considered for the AMS 2006 Calendar, please send them to:

Lockheed Martin AMS, Attn: 2006 Calendar, 86 South Cobb Drive, Marietta, Georgia 30063-0589
IMPROVING INFORMATION DELIVERY

Lockheed Martin Air Mobility Support has been working hard to improve its ability to quickly disseminate information and deliver data to the global population of Hercules operators and Service Centers.
AMS has well over 500 users on this system today. If you are not one of them and think
that online access to Lockheed Martin documents, Service News magazines, facilities for
secure electronic collaboration with the Engineering Services organization, or the Technical
Support Center will help you do your job, simply go to www.lockheedmartin.com/ams and
click the link to the Document / Data Library to download the Access Request form and
follow the instructions provided to submit your request.