Senate Hearing, 108TH Congress - Climate History and The Science Underlying Fate, Transport, and Health Effects of Mercury Emissions
S. Hrg. 108-359
HEARING
BEFORE THE
COMMITTEE ON
ENVIRONMENT AND PUBLIC WORKS
UNITED STATES SENATE
ONE HUNDRED EIGHTH CONGRESS
FIRST SESSION
Printed for the use of the Committee on Environment and Public Works
C O N T E N T S
Page
OPENING STATEMENTS
Allard, Hon. Wayne, U.S. Senator from the State of Colorado, prepared state-
ment ...................................................................................................................... 11
Cornyn, Hon. John, U.S. Senator from the State of Texas, prepared statement . 58
Inhofe, Hon. James M., U.S. Senator from the State of Oklahoma .................... 1
Jeffords, Hon. James M., U.S. Senator from the State of Vermont, prepared
statement .............................................................................................................. 7
Voinovich, Hon. George V., U.S. Senator from the State of Ohio ........................ 3
WITNESSES
Legates, David R., director, Center for Climatic Research, University of Dela-
ware ....................................................................................................................... 12
Prepared statement .......................................................................................... 209
Levin, Leonard, program manager, Electric Power Research Institute .............. 40
Prepared statement .......................................................................................... 211
Mann, Michael E., assistant professor, University of Virginia, Department
of Environmental Sciences .................................................................................. 9
Prepared statement .......................................................................................... 173
Responses to additional questions from:
Senator Inhofe ........................................................................................... 178
Senator Jeffords ......................................................................................... 194
Myers, Gary, professor of Neurology and Pediatrics, Department of Neurology,
University of Rochester Medical Center ............................................................ 44
Prepared statement .......................................................................................... 299
Rice, Deborah C., toxicologist, Bureau of Remediation and Waste Manage-
ment, Maine Department of Environmental Protection ................................... 42
Prepared statement .......................................................................................... 283
Responses to additional questions from Senator Jeffords ............................. 284
Soon, Willie, astrophysicist, Harvard-Smithsonian Center for Astrophysics ..... 6
Prepared statement .......................................................................................... 58
Responses to additional questions from Senator Jeffords ............................. 155
ADDITIONAL MATERIAL
Articles:
Climate Research, Vol. 23: 89-110, 2003, Proxy Climatic and Environ-
mental Changes of the Past 1000 years ...................................................... 127-148
Energy & Environment Vol. 14, Nos. 2 and 3, 2003, Reconstructing
Climatic and Environmental Changes of the Past 1000 Years: A Re-
appraisal....................................................................................................... 60-126
Geophysical Research Letters, Vol. 31, Estimation and Representation
of Long-term (>40 year) Trends of Northern-Hemisphere-gridded Sur-
face Temperature: A Note of Caution ...................................................... 149-154
Original Contributions, Effects of Prenatal and Postnatal Methylmercury
Exposure From Fish Consumption on Neurodevelopment..................... 302-308
Personal Health, Tip the Scale in Favor of Fish: The Healthful Benefits
Await .............................................................................................................. 321
Risk Analysis, Vol. 23, No. 1, 2003, Methods and Rationale for Derivation
of a Reference Dose for Methylmercury by the U.S. EPA ......................... 290-298
The Atlanta Journal-Constitution, June 6, 2003, Clear Skies Mercury
Curb Put in Doubt ........................................................................................ 317
The Lancet, Prenatal Methylmercury Exposure from Ocean Fish Con-
sumption in the Seychelles Child Development Study........................... 309-316
The New York Times, July 29, 2003, Does Mercury Matter? Experts
Debate the Big Fish Question ...................................................................... 319
The Philadelphia Inquirer, March 7, 2003, Mercury Rising ......................... 318
Chart, National Mean Mercury Concentration in Tissues of Selected Fish
Species (all sample types) .................................................................................... 289
Letter, to Senator Inhofe, from John Christy ........................................................ 323
Report, EPRI, May 2003, A Framework for Assessing the Cost-Effectiveness
of Electric Power Sector Mercury Control Policies ........................................ 217-282
CLIMATE HISTORY AND THE SCIENCE UNDERLYING FATE, TRANSPORT, AND HEALTH EFFECTS OF MERCURY EMISSIONS
TUESDAY, JULY 29, 2003

U.S. SENATE,
COMMITTEE ON ENVIRONMENT AND PUBLIC WORKS,
Washington, DC.
The committee met, pursuant to notice, at 9 o'clock a.m. in room
406, Senate Dirksen Building, Hon. James M. Inhofe (chairman of
the committee) presiding.
Present: Senators Inhofe, Allard, Carper, Clinton, Cornyn, Jef-
fords, Thomas and Voinovich.
OPENING STATEMENT OF HON. JAMES M. INHOFE,
U.S. SENATOR FROM THE STATE OF OKLAHOMA
Senator INHOFE. The meeting will come to order.
We have a policy that we announced when I became chairman
of the committee that we will start on time, whether anyone is here
or not here, members, witnesses or others. So I appreciate all of
you being punctual in spite of the fact that the Senators are not.
One of my primary objectives as chairman of the committee is to
improve the way in which science is used. I think that when I be-
came chairman of this committee, I announced three very out-
rageous things that we were going to do in this committee that
have not been done before. No. 1, we are going to try to base our
decisions, things that we do, on sound science. No. 2, we are going
to be looking at the costs of some of these regulations, some of
these policies that we have, and determine what they are going to
be. And No. 3, we are going to try to reprogram the attitudes of
the bureaucracy so that they are here not to rule, but to serve.
Good public policy decisions depend on what is real or probable,
not simply on what serves our respective political agendas. When
science is debated openly and honestly, public policy can be debated
on firmer grounds. Scientific inquiry cannot be censored. Scientific
debate must be open. It must be unbiased. It must stress facts
rather than political agendas.
Before us today, we have two researchers who have published
what I consider to be a credible, well-documented, and scientifically
defensible study examining the history of climate change. Further-
more, these are top fields of inquiry in the Nation's energy-environ-
ment debate and really the entire world's energy-environment de-
bate. We can all agree that the implications of this science are
global, not only in terms of the environmental impacts, but also en-
ergy impacts, global trade impacts, and quite frankly, no less than
global governance impacts.
We could also all agree that as a result of the import and impact
of these issues, it is absolutely crucial that we get this science
right. False or incomplete or misconstrued data are simply not an
acceptable basis for policymaking decisions in which the Congress
of the United States is involved. Such data would violate the Data
Quality Act, which we passed on a bipartisan basis here in the
Senate and which both parties have embraced. If we need more
data to satisfy our standards, then so be it.
This Administration is prepared to do so in an aggressive strat-
egy that the climate change strategic plan outlines. The 1000-year
climate study that the Harvard-Smithsonian Center for Astro-
physics has compiled is a powerful new work of science. It has re-
ceived much attention, and rightfully so. I would add at this time,
it did not receive much attention from some of the liberal media
who just did not want to believe that any of the facts that were
disclosed were accurate.
I think the same can be said of other work that has recently
received attention, the hockey stick study. In many important
ways, the Harvard-Smithsonian Centers work shifts the paradigm
away from the previous hockey stick study. The powerful new find-
ings of this most comprehensive study shiver the timbers of the
adrift Chicken Little crowd.
I look forward to determining whose data is most comprehensive,
uses the most proxies, maintains the regional effects, avoids losing
specificity through averaging statistics, considers more studies, and
most accurately reflects the realities of the Little Ice Age, reflects
the realities of the Medieval Warming Period, and more.
Mercury presents a different set of issues. That would be our sec-
ond panel. It is well-established that high levels of exposure to
methyl-mercury before birth can lead to neuro-development prob-
lems. But what about mercury consumed through fish, the most
common form of prenatal exposure? Mercury makes its way into
fish in various ways, but primarily through deposition from
air emissions, with 80 percent of emissions deposited either region-
ally or globally, not locally. Global mercury emissions are about
5,000 tons a year. About half of those are man-made emissions.
In the United States, a little more than 100 tons are emitted
from non-power plant sources. Industry is making great strides in
reducing these emissions. I would like to submit for the record this
EPA document available on their Web site which indicates that
when rules now on the books are fully implemented at non-power
plants, nationwide emissions will be cut by nearly 50 percent. Power
plants emit about 50 tons of mercury annually, about 1 percent of
the worldwide emissions.
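As a rough check on the arithmetic just described, the figures stated in this
statement can be combined in a few lines of Python (a sketch for illustration
only; the variable names are invented here):

    # Mercury emission figures as stated in the testimony (tons per year).
    global_emissions = 5000              # total worldwide emissions
    man_made = global_emissions / 2      # "about half ... man-made"
    us_non_power_plant = 100             # "a little more than 100 tons"
    us_power_plants = 50                 # "about 50 tons ... annually"

    # Share of worldwide emissions from U.S. power plants ("about 1 percent").
    share = us_power_plants / global_emissions
    print(f"U.S. power plants: {share:.1%} of worldwide mercury emissions")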
In setting policy, key questions need to be answered, such as how
would controls change this deposition; what portion of mercury ex-
posure cannot be controlled; and what are the health impacts of
prenatal exposure. We will hear testimony today that indicates any
changes to mercury exposure in fish would be minimal under even
the most stringent proposal to regulate mercury. Today, we will
also hear testimony that the most recent and comprehensive study
each other, as well as with the model simulations, all of which are
shown, within the estimated uncertainties. That is the gray-shaded
region.
The proxy reconstructions, taking into account these uncertain-
ties, indicate that the warming of the northern hemisphere during
the late 20th century, that is the northern hemisphere, not the
globe, as I have sometimes heard my study incorrectly referred to,
the northern hemisphere during the late 20th century, that is the
end of the red curve, is unprecedented over at least the past mil-
lennium and it now appears based on peer-reviewed research, prob-
ably the past two millennia.
The model simulations demonstrate that it is not possible to ex-
plain the anomalous late-20th century warmth without the con-
tribution from anthropogenic influences. These are the consensus
conclusions of the legitimate community of climate and paleo-
climate researchers investigating such issues.
Astronomers Soon and Baliunas have attempted to challenge the
scientific consensus based on two recent papers, henceforth collec-
tively referred to as SB, that completely misrepresent the past
work of other legitimate climate researchers and are deeply flawed
for the following reasons. No. 1, SB make the fundamental error
of citing evidence of either wet or dry conditions as being in sup-
port of an exceptional Medieval Warm Period. Such an ill-defined
criterion could be used to define any period of climate as either
warm or cold. It is pure nonsense.
Experienced paleoclimate researchers know that they must first
establish the existence of a temperature signal in a proxy record
before using it to try to reconstruct past temperature patterns. If
I can have exhibit two, this exhibit shows a map of the locations
of a set of records over the globe that have been rigorously ana-
lyzed by my colleagues and me for their reliability as long-term tem-
perature indicators. I will refer back to that graphic shortly.
No. 2, it is essential to distinguish between regional temperature
changes and truly hemispheric or global changes. Average global or
hemispheric temperature variations tend to be far smaller in their
magnitude than those for particular regions. This is due to a tend-
ency for the cancellation of simultaneous warm and cold conditions
in different regions, something that anybody who follows the
weather is familiar with, in fact.
As shown by exhibit three, if I can have that up here as well
now, thank you, this exhibit plots the estimated temperature for
various locations shown in the previously displayed map. As you
can see, the specific periods of relative cold and warm, blue and
red, differ greatly from region to region. Climatologists, of course,
know this. What makes the late 20th century unique is the simul-
taneous warmth indicated by nearly all the long-term records. It is
this simultaneous warmth that leads to the anomalous late-20th
century warmth evident for northern hemisphere average tempera-
tures.
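The cancellation effect described here can be illustrated with a toy
simulation; this is a sketch with invented numbers, not Dr. Mann's actual
procedure:

    import numpy as np

    rng = np.random.default_rng(0)
    n_regions, n_years = 20, 1000

    # Independent regional temperature anomalies (degrees C); warm and cold
    # excursions in different regions tend to cancel in the average.
    regional = rng.normal(0.0, 0.5, size=(n_regions, n_years))
    hemispheric = regional.mean(axis=0)

    print(f"typical regional std deviation: {regional.std(axis=1).mean():.2f} C")
    print(f"hemispheric-mean std deviation: {hemispheric.std():.2f} C")
    # The mean varies roughly 1/sqrt(n_regions) as much as a single region;
    # only simultaneous warmth across regions, as in the late 20th century,
    # can push the hemispheric average outside its usual range.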
The approach taken by SB does not take into account whether
warming or cooling in different regions is actually coincident, de-
spite what they might try to tell you here today.
No. 3, as it is only the past few decades during which northern
hemisphere temperatures have exceeded the bounds of natural var-
It is generally agreed that during the early periods of the last millennium,
air temperatures were warmer and that temperatures became cool-
er toward the middle of the millennium. This gave rise to the terms
the Medieval Warm Period and the Little Ice Age, respectively.
However, as these periods were not always consistently warm or
cold, nor were the extremes geographically commensurate in time,
such terms must be used with care.
In a change from its earlier reports, however, the Third Assess-
ment Report of the Intergovernmental Panel on Climate Change,
and now the U.S. National Assessment of Climate Change, both in-
dicate that hemispheric and global air temperatures followed a
curve developed by Dr. Mann and his colleagues in 1999. This
curve exhibits two notable features, and I will point back to Dr.
Mann's exhibit one that he showed a moment ago. First is a rel-
atively flat and somewhat decreasing trend in air temperature that
extends from 1000 A.D. to about 1900 A.D. This feature is an
outlier that is in contravention to thousands of authors in the peer-
reviewed literature.
This is followed by an abrupt rise in the air temperature during
the 1900s that culminates in 1998 with the highest temperature
on the graph. Virtually no uncertainty is assigned to the instru-
mental record of the last century. The conclusion reached by the
IPCC and the National Assessment is that the 1990s was the
warmest decade, with 1998 being the warmest year of the last mil-
lennium.
Despite the large uncertainty, the surprising lack of significant
temperature variations in the record gives the impression that cli-
mate remained relatively unchanged throughout most of the last
millennium, at least until human influences began to cause an ab-
rupt increase in temperatures during the last century. Such char-
acterization is a scientific outlier. Interestingly, Mann et al. replace
the proxy data for the 1900s with the instrumental record and
present it with no uncertainty characterization. This, too, yields the
false impression that the instrumental record is consistent with the
proxy data and that it is error-free. It is neither.
The instrumental record contains numerous uncertainties, result-
ing from measurement errors, a lack of coverage over the world's
oceans, and underrepresentation of mountainous and polar regions,
as well as undeveloped nations and the presence of urbanization ef-
fects resulting from the growth of cities. As I stated before, the
proxy records only in part reflect temperature. Therefore, a simul-
taneous presentation of the proxy and instrumental record is the
scientific equivalent to calling apples and oranges the same fruit.
Even if a modest uncertainty of plus or minus one-tenth of a de-
gree Celsius were imposed on the instrumental record, the claim of
the 1990s being the warmest decade would immediately become
questionable, as the uncertainty window would overlap with the
uncertainty associated with earlier time periods. Note, too, that if
the satellite temperature record, where little warming has been ob-
served over the last 20 years, had been inserted instead of the in-
strumental record, it would be impossible to argue that the 1990s
was the warmest decade. Such a cavalier treatment of scientific
data can create scientific outliers, such as the Mann et al. curve.
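The overlap argument in this passage reduces to an interval comparison. A
minimal sketch (apart from the plus-or-minus one-tenth degree figure posited
above, the numbers are illustrative assumptions, not values from any
particular reconstruction):

    # Decadal mean anomalies and half-widths of their uncertainty windows (C).
    recent_mean, recent_unc = 0.4, 0.1      # 1990s instrumental, +/-0.1 posited
    earlier_mean, earlier_unc = 0.2, 0.3    # illustrative proxy-era estimate

    # A "warmest decade" claim is unambiguous only if the windows don't overlap.
    overlap = (recent_mean - recent_unc) <= (earlier_mean + earlier_unc)
    print("uncertainty windows overlap:", overlap)   # True for these values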
climate research community is the one that I have given you ear-
lier.
Now, as far as the issue of data, how much data was used, there
are a number of misstatements that have been made about our
study. One of them is with regard to how much data we used. We
used literally hundreds of proxy records. We often represented
those proxy records, as statistical climatologists often do, in what
we call a state space. We represented them in terms of a smaller
number of variables to capture the leading patterns of variability
in the data. But we used hundreds of proxy indicators, more in fact
than Dr. Soon referred to. In fact, we actually analyzed climate
proxy records. Dr. Soon did not.
Senator JEFFORDS. Dr. Soon, in a 2001 article in Capitalism mag-
azine, you said that because of the pattern of frequent and rapid
changes in climate throughout the Holocene period, we should not
view the warming of the last 100 years as a unique event or as an
indication of manmade emissions' effect on the climate.
But according to NOAA's Web site, upon close examination of
these warm periods, including all the ones that you cited in your
past and most recent articles,
It became apparent that these periods are not similar to the 20th century
warming for two specific reasons. One, the periods of hypothesized past warm-
ing do not appear to be global in extent or, two, the period of warmth can be
explained by known natural climate forcing conditions that are uniquely dif-
ferent than those of the past 100 years.
Why didn't either of your articles make an impact on the state
of the science or NOAA's position?
Dr. SOON. Thank you for your question, Senator. As you may be
aware, my paper just got published this year, January 2003 and
April 2003, so it is all fairly recent. I have just written up this
paper very recently, so I do not know what impact it will have on
any general community, but I do know all my work was done con-
sulting the works of all major paleoclimatologists in the field, includ-
ing Dr. Mann and his esteemed colleagues.
As to the comments about the Capitalism magazine, I am not
aware of that particular magazine. I do not know whether I sub-
mitted anything to this journal or this magazine. I do stand by the
statement that it is important to look at the local and regional
change before one takes global averages because climate tends to
vary in very large swings in different parts of the world. That real-
ly is the essence of climate change and one ought to be really look-
ing very carefully at the local and regional change first, and also
one should not look strictly at only the temperature parameter, as
Dr. Mann has claimed to have done. That I think is very important
to take into account.
Senator JEFFORDS. Dr. Mann, could you comment?
Dr. MANN. Yes. Both of those statements are completely incor-
rect. If Dr. Soon had actually read any of the papers that we have
published over the past 5 years or so, he would be aware of the fact
that we use statistical techniques to reconstruct global patterns of
surface temperature. We average those spatial patterns to estimate
a northern hemisphere mean temperature, just as scientists today
seek to estimate the northern hemisphere average temperature
from a global network of thermometer measurements. We use pre-
as anthropogenic factors. That includes the role of the sun, the role
of human land use changes, and the role of human greenhouse gas
increases. The model estimates are typically consistent with what
we have seen in the observations earlier.
As far as the next 1,000 years, that is not a particular area of
expertise of mine, but I am familiar with what the mainstream cli-
mate research community has to say about that. The latest model-
based projections indicate a mean global temperature increase of
anywhere between 0.6 and 2.2 degrees Centigrade, that is, one de-
gree to four degrees Fahrenheit, relative to 1990 levels by the mid-
21st century under most scenarios of future anthropogenic changes.
While these estimates are uncertain, even the lower value would
take us well beyond any previous levels of warmth seen over at
least the past couple of millennia. The magnitude of warmth, but
perhaps more importantly the unprecedented rate of warming, is
cause for concern.
Senator ALLARD. Dr. Legates.
Dr. LEGATES. Yes. I agree, too, that attribution is one of our im-
portant concerns. As a climatologist, I am very much interested in
trying to figure out what drives climate. We know that a variety
of factors exist. These include solar forcing functions; these include
carbon dioxide in the atmosphere; these include biases associated
with observational methods; these also include such things as land
use changes. For example, if we change the albedo, or amount of
reflected solar radiation, that too will change the surface tempera-
ture.
So it is really a difficult condition to try to balance all of these
possible combinations and to try to take a very short instrumental
record and discern to what extent that record is being driven by a
variety of different combinations.
My conclusion in this case, to directly answer your ques-
tion, is that the temperature likely would rise slightly, again due
to carbon dioxide, but it would be much more responsive to solar
output. If the sun should quiet down, for example, I would expect
we would go into a cooling period.
Senator ALLARD. I guess the question that I would have, now,
you know you have increased CO2. So how is the environment in
the Earth going to respond to increased CO2? Have any of you
talked to a botanist or anything to give you some idea of what hap-
pens when CO2 increases in the atmosphere? Plants take in CO2
and give off oxygen. We inhale oxygen and exhale CO2. Will plants be
more prosperous with more CO2? How does that impact the plant
life? Can that then come back on the cycle and some century later
mean more O2 and less CO2?
So I am wondering if any of you have reviewed some of these cy-
cles with botanists and see if they have any scientific data on how
plants respond to CO2 when that is the sole factor. I am not sure
I have ever seen a study. There is moisture and other things that
affect plant growth, but just CO2 by itself. Have any of you seen
any scientific studies in that regard?
Dr. SOON. I have seen that. In fact, I have written a small paper
that has a small section regarding that.
Senator ALLARD. And what was their conclusion?
Thank you.
Senator INHOFE. Thank you, Senator Thomas.
Senator Carper.
Senator CARPER. Thank you, Mr. Chairman. I want to welcome
our witnesses this morning. Dr. Legates, it is great to have a fight-
ing Blue Hen here from the University of Delaware. We are de-
lighted that you are here. Dr. Mann, thanks for coming up, and Dr.
Soon, welcome. We thank you for your time and your interest and
your expertise on these issues, and your willingness to help us on
some tough public policy issues that we face.
Dr. Mann, I would start off if I could and direct a question to
you. I understand we have had thermometers for less than 200
years, and yet we are trying to evaluate changes in temperature
today in this century and the last century with those that occurred
500 or 1,000 or 2,000 years ago. I understand that we use proxies
for thermometers, if you will, and for those kinds of changes in
temperature.
I wonder if you could help me and maybe the committee better
understand how we compare today's temperature measurements to
the proxies of the past. Are there potential risks with relying on
some of those proxies?
Dr. MANN. Absolutely. We have to use them carefully when we
try to reconstruct the past temperature history. So when I say we
have to use them carefully, it means some of the things that I dis-
cussed in my testimony earlier, that we need to actually verify that
if we are using a proxy record to reconstruct past temperature pat-
terns, that proxy record is indeed reflective of temperature
changes. That is something that typically paleoclimate scientists
first check to make sure that the data they are using are appro-
priate for the task at hand. Of course, we have done that in our
work. I did not see evidence that Soon and colleagues have done
that.
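In sketch form, the verification step Dr. Mann describes amounts to
calibrating a proxy against the instrumental record and checking its skill on
withheld data. A minimal synthetic illustration (a generic
calibration-verification exercise, not the specific statistical method of his
papers):

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1850, 2000)
    instrumental = 0.005 * (years - 1850) + rng.normal(0, 0.1, years.size)

    # A synthetic proxy that partly records temperature plus non-climatic noise.
    proxy = 2.0 * instrumental + rng.normal(0, 0.3, years.size)

    # Calibrate on 1900-1999, then verify on the withheld 1850-1899 interval.
    cal, ver = years >= 1900, years < 1900
    slope, intercept = np.polyfit(proxy[cal], instrumental[cal], 1)
    reconstructed = slope * proxy[ver] + intercept

    r = np.corrcoef(reconstructed, instrumental[ver])[0, 1]
    print(f"verification correlation: {r:.2f}")
    # A record with no temperature signal would show no verification skill
    # and would be excluded before any hemispheric averaging.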
Next, we have to synthesize the information. There
have been some misleading statements made here earlier on the
part of the other testifiers with regard to local versus regional or
global climate changes. Of course, we have to assimilate the infor-
mation from the local scale to the larger scales, just as we do with
any global estimate of quantity. So we take the regional informa-
tion; we piece together what the regional patterns of change have
been, which may amount to warming in certain areas and cooling
in other areas. Only when we have reconstructed the true global
or hemispheric regional patterns of change can we actually esti-
mate the northern hemisphere average, for example.
A number of techniques have been developed in the climate re-
search community for performing this kind of estimate. My col-
leagues and I have described various statistical approaches in the
detailed climate literature. Some of the estimates are based on fair-
ly sophisticated techniques. Some of them are based on fairly ele-
mentary techniques. Yet all of the results that have been published
in the mainstream climate research community using different
techniques and different assortments of proxy data have given, as
I showed earlier in my graph, the same basic result within the un-
certainties. That has not changed. An article that appeared last
month in the American Geophysical Union, which is actually the
warm at night, but also reflect energy in the daylight. So you have
these odd feedbacks into the climate system which make it very
difficult to say that if I hold everything else constant and change
one variable, what will happen. Well, in reality, it is impossible to
hold everything constant because it is such an intricate and inter-
woven system that one change does have feedbacks across the en-
tire spectrum.
Senator CARPER. Thanks. I think my time has expired, Mr.
Chairman. Is that correct?
Senator INHOFE. Yes. Thank you, Senator Carper.
Senator CARPER. Thank you.
Senator INHOFE. We will have another round here. In fact, I will
start off with another round. Let's start with Dr. Legates. Dr.
Legates, was the temperature warmer 4,000 to 7,000 years ago
than it is today?
Dr. LEGATES. My understanding is that about 4,000 to 7,000
years ago, in a period referred to as the climatic optimum, there
was enhanced agriculture and the development of civilization;
generally the idea is that warmer temperatures lead to enhanced
human activity, while colder temperatures tend to inhibit it.
Again, as we get back 4,000 to 7,000 years ago, the error bars
get wide as well. But the general consensus
is that temperatures were a bit warmer during that time period.
Senator INHOFE. OK. Senator Thomas had alluded to 1940.
Yesterday when I was giving my talk and doing the research for
it, my understanding was that the amount of CO2 emitted since
the 1940s increased by about 80 percent. Yet that was followed by
a cooling period from about 1940 to 1975. Is that correct?
Dr. LEGATES. That is correct. It is sort of a perplexing issue in
the time series record that from 1940 to 1970 approximately, while
carbon dioxide was in fact increasing, global temperatures appear
to be decreasing.
Senator INHOFE. Dr. Mann, you have I might say impugned the
integrity of your colleagues and a few other people during your
presentation today. The Wharton Econometric Forecasting Associ-
ates did a study as to the effect of regulating CO2 and what would
happen. American consumers would face higher food, medical and
housing costs; for food, an increase of 11 percent; medicine, an in-
crease of 14 percent; and housing, an increase of 7 percent. At the
same time, the average household of four would see its real income
drop by $2,700 in 2010.
Under Kyoto, the energy and electricity prices would nearly dou-
ble and gasoline prices would go up an additional 65 cents a gallon.
I guess I would ask at this point, what is your opinion of the Whar-
ton study?
Dr. MANN. OK. First, I would respectfully take issue with your
statement that I have impugned the integrity of the other two tes-
tifiers here. I have questioned their, and I think rightfully, their
qualifications to state the conclusions that they have stated. I pro-
vided some evidence of that.
Senator INHOFE. Well, "illegitimate," "inexperienced," "nonsense"
Dr. MANN. Those are words that I used. Correct.
Senator INHOFE [continuing]. That is a matter of interpretation.
Go ahead.
Dr. MANN. I would furthermore point out that the very models
that I have referred to track the actual instrumental warming and
the slight cooling in the northern hemisphere. There was no cooling
of the globe from 1940 to 1970, the northern hemisphere
Senator INHOFE. OK. The question I am asking you is about
WEFA.
Dr. MANN. I am not a specialist in public policy and I do not be-
lieve it would be useful for me to testify on that.
Senator INHOFE. Dr. Legates, have you looked at the report that
Wharton came out with concerning the possible effects, economic
results of this?
Dr. LEGATES. Again, I am not a public policy expert either, and
so the economic impacts are not something which I would be quali-
fied to testify on.
Senator INHOFE. OK, Dr. Legates, do you think you have more
data than Dr. Mann?
Dr. LEGATES. I think we have looked at a large variety of time
series. We have looked at essentially a large body of literature that
existed both prior to Dr. Mann's analysis and since Dr. Mann's
analysis, in attempting to figure out why his curve does not reflect
the individual observations. One issue when you put together data
sets is to make sure that the composite resembles the individual
components.
Senator INHOFE. OK. The timeline, Dr. Mann, is something I
have been concerned with, and those of us up here are listening to
you and listening to all three of you and trying to analyze perhaps
some of the data that you use and the conclusions you came to,
having been done 4 or 5 years back, compared to the Harvard-
Smithsonian study, the 1,000-year study that was
just completed, or at least given to us in March of this year. I
would like to have each of you look at the chart up here and just
give us a response as to what you feel in terms of the data that
both sides are using today.
Dr. MANN. I guess you referred to me first?
Senator INHOFE. That is fine. Yes.
Dr. MANN. OK. Well, I think we have pretty much demonstrated
that just about everything there is incorrect. In a peer-reviewed
publication that was again published in the journal Eos of the
American Geophysical Union about a month ago, that article was
cosigned by 12 of the leading United States and British climatolo-
gists and paleoclimatologists. We are already on record as pretty
much pointing out that there is very little that is valid in any of
the statements in that table. So I think I will just leave it at that.
Senator INHOFE. Do the other two of you agree with that?
Dr. LEGATES. If I may add, the Eos piece was actually not a ref-
ereed article. It is an Eos Forum piece, which by definition is an
opinion piece by scientists for publication in Eos. That is what is
contained on the AGU Web site for Eos Forum.
Senator INHOFE. All right. Let me ask one last question here. Dr.
James Hansen of NASA, considered the father of global warming
theory, said that the Kyoto Protocol will have little effect on global
temperatures in the 21st century. In a rather stunning followup,
Hansen said it would take 30 Kyotos, let me repeat that, 30 Kyotos
do rightly point out the incredible natural cycle, but we are now
so influencing that natural cycle, I do not know if we have the time
to contemplate the balance once again regaining itself in our won-
derfully regenerating Earth.
Senator INHOFE. Thank you, Senator Clinton.
Senator Carper.
Senator CARPER. Thanks, Mr. Chairman. I just want to follow up.
Senator Clinton was kind in her comments on the legislation, first
the bill that Senator Jeffords has introduced and second the legislation
I have introduced along with Senators Judd Gregg, Lincoln Chafee
and Lamar Alexander.
Are any of you familiar with that legislation? Would you like to
become familiar over the next 5 minutes?
[Laughter.]
Dr. SOON. No, we will stick to science. Politics is too complicated.
Senator CARPER. All right. That may be the best approach.
We are trying to figure out if there is a reasonable middle ground
on this issue. I am part of a group that Buddy MacKay, a former
colleague of mine from Florida, calls the "flaming moderates" or
"flaming centrists." We can spend a whole lot of time discussing the
impact of Kyoto caps, or we can focus on what steps we actually
need to take.
The approach that Senators Gregg and Chafee and Alexander
and myself have taken, at least with respect to four pollutants, we
say, unlike the President's proposal, where he only addresses sulfur
dioxide and nitrogen oxide and mercury, and does not address CO2,
as you know, because he thinks we need to study it a bit more. Our
approach says that there ought to be caps on CO2; that they should
be phased in; that we should use a cap and trade system; we
should give utilities the opportunity to buy credit for levels of CO2
emissions that they maintain at high levels; and they should be
able to contract with, among others, farmers and those who would
be forced out of lands to change their planting patterns or change
their animal feedlot operations in order to be able to sequester
some of the CO2 that occurs in our planet.
We have something called new source review. The President
would eliminate it entirely. I think in Senator Jeffords' approach,
it is pretty much left alone. There is a good argument that says
that utilities under current law, if they make some kind of minor
adjustment and minor investment in their plant, that they have to
make a huge investment with respect to the environmental con-
trols. As a result, it keeps them from making even common sense
kinds of investments in their plants, sort of the law of unin-
tended consequences. That is sort of the approach that we have
taken.
Now that you know all about it, if you were in our shoes, what
kind of an approach would you take? Let me just start with our
University of Delaware colleague here, Dr. Legates.
Dr. LEGATES. Generally, I favor no regrets policies, where they
have other applications as well. But again, getting into the politics
and the non-science aspects of what to do is out of my area of ex-
pertise. I may have my own beliefs, but they are no more important
or less important than the average person's. I would rather not tes-
tify to those here.
called bolus dose, what we call a bolus dose. So they have a low
level of methyl-mercury intake which may be occasionally punc-
tuated with a higher intake level. The source of methyl-mercury
does not matter, whether it is through fish or through whale. So
the fact that it is whale meat per se is not really relevant.
None of the panels, including the National Research Council
panel, could come to any kind of conclusion about the importance
of the pattern of intake, because the data just are not available.
There just are not scientific data that speak directly to that. But
what the Faroe Islands investigators have done, because this was
raised as a concern and because they had hair
from their population that was stored: they were able to go back
and do segmental analysis, so that you cut the hair up into tiny
little pieces and look at mercury levels across the length of the
hair.
What they did was they eliminated the mothers that had the
most variable hair levels that might suggest that there was this
bolus exposure of these particular women and these particular
fetuses. What they found was that the effect was actually stronger
when they eliminated these women, which makes a certain amount
of sense because you are decreasing variability when you do that.
Senator INHOFE. Thank you, Dr. Rice.
Senator Jeffords.
Senator JEFFORDS. Thank you all for your testimony on this very
important and timely topic.
Some of you have seen this morning's New York Times full-page
article on mercury and its health effects. This helps to set a context
for our discussion.
Dr. Rice, what exactly is a reference dose level and what does it
mean in terms of the so-called safe levels of fish consumption? Does
EPA's reference dose level include a built-in tenfold safety factor?
Dr. RICE. The reference dose is designed to be a daily intake
level that a person could consume over the course of their lifetime
without deleterious effects. So it is designed to be the amount of
mercury you could eat every day in your life and not harm yourself.
Now, when EPA did its calculation, it is important to understand
that the National Academy of Sciences modeled a number of
endpoints for each of the studies, and those were the Faroe Islands
study, the New Zealand study, both of which found effects, as well
as the Seychelles study which did not. They identified not a no-ef-
fect level. They identified a very specific effect level. That effect
level is associated with a doubling of the number of children that
would perform in the abnormal range, in other words, the lowest
5 percent of the population. So this is in no way a no-effect level.
To that, the EPA applied a tenfold so-called uncertainty factor.
The point of that was to take into account things that we did not
know, data that we did not have, as well as the pharmacodynamic
and the pharmacokinetic variability. Now, there were actually data,
again modeled by the NAS and reviewed by the NAS, showing
that the pharmacokinetic variability, in other words the wom-
an's ability to get rid of methyl-mercury from her body, differs by
a factor of three. So that already takes up half of the uncertainty
factor.
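The derivation Dr. Rice describes reduces to dividing a benchmark intake by
the uncertainty factor. A sketch of that arithmetic (the benchmark value below
is an assumed round number chosen so the result matches the 0.1 figure
discussed later in the testimony):

    # Reference-dose (RfD) arithmetic as described in the testimony.
    benchmark_intake = 1.0       # ug methylmercury/kg body weight/day (assumed)
    uncertainty_factor = 10.0    # the tenfold factor applied by EPA

    rfd = benchmark_intake / uncertainty_factor
    print(f"reference dose: {rfd} ug/kg/day")    # 0.1 ug/kg/day

    # Dr. Rice notes that measured pharmacokinetic variability alone spans
    # about a factor of 3; on a log scale that is roughly half of the tenfold
    # factor, since 10 ** 0.5 is about 3.16.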
Dr. RICE. I agree with Dr. Myers. These studies are very com-
plex. I think that that is even more reason not to rely on one study
while eliminating other studies for consideration.
Again, these studies have been peer-reviewed numerous times.
The Seychelles Islands study and the Faroe Islands study have
been reviewed now by several panels. They are both thought to be
very high quality, very well-designed and well-executed studies.
The NAS, as well as the previous panel, talked at great length
about what might account for the differences between these stud-
ies. We really do not know what accounts for the differences be-
tween these studies. The NAS modeled three studies. The New
Zealand study was also a positive study.
The National Academy of Sciences concluded, and the EPA agreed,
that it was not scientifically justifiable for protection of the
health of the American public to rely on the negative study and ex-
clude the two positive studies. I said at least a couple of times in
my testimony that what the NAS did to try to address that was
to do an integrative analysis that included all three studies, includ-
ing the Seychelles Islands study, and modeled it statistically.
When EPA then took those analyses, what we did was derive
a series of reference doses, kind of sample reference doses,
based on a number of endpoints from both the
New Zealand study and the Faroe study, as well as the integrative
analysis of all three studies. The integrative analysis of all three
studies also yields a reference dose of 0.1. So that made me person-
ally very comfortable that we were doing the right thing scientif-
ically in our derivation of the reference dose.
Senator INHOFE. These are supposed to be 5-minute rounds and
it has been 8 minutes, so we will recognize Senator Allard.
Senator ALLARD. Dr. Rice and Dr. Myers, you have in your com-
ments talked about methyl-mercury as being the toxic compound as
far as human health is concerned. Are there other mercurial com-
pounds that are toxic to humans?
Dr. RICE. Yes. All forms of mercury are toxic to humans.
Senator ALLARD. Including the elemental form?
Dr. RICE. Yes.
Senator ALLARD. OK.
Dr. RICE. But in terms of environmental exposure, it is really the
methyl-mercury form that we are worried about because that is the
form that gets into the food chain and is concentrated and accumu-
lated up the food chain. That is what people actually end up being
exposed to.
Senator ALLARD. OK. Thanks for clarifying that. I appreciate
that. So this gets into the environment and consequently in the fish
or food chain or whatever. Is the starting point always bacteria op-
erating on the elemental form of mercury? Or is it these various
compounds that bacteria operate on and then end up being assimi-
lated into the food chain? How does that happen?
Dr. RICE. In most circumstances, it is the inorganic form, not the
elemental mercury, but the inorganic form that is available to be
taken up by various microorganisms.
Senator ALLARD. How do we get to that organic form, the methyl-
mercury? How do we get there?
that they need in the Individuals with Disabilities Education Act, and I think
similarly on the scientific side with respect to better research and
better analysis. But it is troubling to me that we are looking at a
problem where the preponderance of the evidence I think is clear,
where we know that there is a transmission, whether it is 60,000,
150,000, 300,000-plus children, and it needs some more effective re-
sponse.
I wanted to ask you, Dr. Rice, now that you are in Maine, from
the State perspective, how closely do you work with the State
health department on environmental health issues? Do you ex-
change information with the State health department and even
with the State education department about some of the work that
you are doing?
Dr. RICE. I actually knew the State toxicologist for Maine quite
well before I went up there, so I do interact with the health depart-
ment. The methyl-mercury issue is very important to Maine. Maine
has a very good program for trying to get rid of mercury
from dental amalgams, from thermometers, from the kinds of
things that can be controlled, and to not put mercury in landfills, be-
cause Maine understands that we are at the end of the pipeline for
mercury deposition. Maine has a terrible problem with fish
advisories. There are a lot of places where fish cannot be eaten in
Maine because of the deposition of mercury.
So I do work closely with the folks over there, and in fact my way
here was paid by the air office, the Maine air office because the
State of Maine is so very concerned about this issue. Maine is rural
and it is poor, and it cannot really absorb the consequences of these
kinds of additional exposures on the health of the people of Maine.
Senator CLINTON. Similarly, new science is demonstrating that
we need lower standards for lead, based on what we are now deter-
mining. A lot of that groundbreaking work was done at the Univer-
sity of Rochester about lead exposures and the impacts of lead ex-
posure. We can take each of these chemicals or compounds piece
by piece, but I think that certainly when it comes to mercury and
lead and their impacts on children's development, it is not some-
thing I feel comfortable studying and waiting too much longer on,
particularly because there are so many indirect costs. I know that
Dr. Levin's work looked at some of the risks and cost-benefits, but
people do not seem to factor in this special education population
that has been growing.
Dr. RICE. If I may make a comment, I think your analogy is an
apt one, and I think it is a very informative one. In 1985, there was
a report to Congress on the cost-benefits of lead, of keeping lead
out of gasoline, in fact. The benefits based on not only special edu-
cation and things like lower birth weight with respect to lead, but
also just the economic consequences of lowering the IQ of workers
amounted to billions and billions of dollars a year in 1985 dollars
or 1994 dollars. So as this effort goes forward in terms of figuring
out how much it is going to cost to reduce mercury emissions, this
other side of the equation, how much it is going to cost not to,
needs to be kept very, very well in mind.
Senator CLINTON. Thank you, Dr. Rice.
Senator INHOFE. Thank you, Senator Clinton.
I thank the panel very much for their testimony.
1 Which was awarded the 1989 nation-wide IEEE Nuclear and Plasma Sciences Society Grad-
uate Scholastic Award and the 1991 Rockwell Dennis Hunt Scholastic Award for the most rep-
resentative PhD thesis work at the University of Southern California.
up-dated [NOTE: this statement by Bradley et al. (2003) referred primarily to the
tree-ring data base from the International Tree-Ring Data Bank.], so a direct proxy-
based comparison of the 1990s with earlier periods is not yet possible. [p. 116 of
Bradley et al., 2003, In: Alverson, K., R.S. Bradley and T.F. Pedersen (eds.)
Paleoclimate, Global Change and the Future. Springer Verlag, Berlin, 105-149]
Agreeing with discussion on p. 260-261 of Soon et al. (2003), Bradley et al. (2003)
cautioned that in the case of tree rings from some areas in high latitudes, the
decadal time-scale climatic relationships prevalent for most of this century appear
to have changed in recent decades, possibly because increasing aridity and/or
snowcover changes at high latitudes may have already altered the ecological re-
sponses of trees to climate (cf. Jacoby and D'Arrigo 1995; Briffa et al. 1998). For
example, near the northern tree limit in Siberia, this changing relationship can be
accounted for by a century-long trend to greater winter snowfall. This has led to de-
layed snowmelt and thawing of the active layer in this region of extensive perma-
frost, resulting in later onset of the growing season (Vaganov et al. 1999). It is not
yet known how widely this explanation might apply to the other regions where par-
tial decoupling has been observed, but regardless of the cause, it raises the question
as to whether there might have been periods in the past when the tree ring-climate
response changes, and what impact such changes might have on paleotemperature
reconstructions based largely on tree ring data. (p. 116-117).
Bradley et al. (2003) also worried that Paleoclimate research has had a strong
northern hemisphere, extra-tropical focus (but even there the record is poorly known
in many areas before the 17th century). There are very few high resolution
paleoclimatic records from the tropics, or from the extra-tropical southern hemi-
sphere, which leaves many questions (such as the nature of climate in Medieval
times) unanswered. (p. 141). Bradley et al. continued All large-scale paleotemp-
erature reconstructions suffer from a lack of data at low latitudes. In fact, most
northern hemisphere reconstructions do not include data from the southern half
of the region (i.e., areas south of 30°N). Furthermore, there are so
few data sets from the southern hemisphere that it is not yet possible to reconstruct
a meaningful global record of temperature variability beyond the period of instru-
mental records. For the northern hemisphere records, it must be recognized that the
errors estimated for the reconstructions of Mann et al. (1999) and Briffa et al. (2001)
are minimum estimates, based on the statistical uncertainties inherent in the meth-
ods used. These can be reduced by the use of additional data (with better spatial
representation) that incorporate stronger temperature signals. However, there will
always be additional uncertainties that relate to issues such as the constancy of the
proxy-climate function over time, and the extent to which modern climate modes
(i.e., those that occurred during the calibration interval) represent the full range of
climate variability in the past [i.e., similar unresolved research questions had been
raised in p. 239-242 and p. 258-264 of Soon et al. 2003]. There is evidence that in
recent decades some high latitude trees no longer capture low frequency variability
as well as in earlier decades of the 20th century (as discussed below in Section 6.8)
which leads to concerns over the extent to which this may have also been true in
the more distant past. If this was a problem (and currently we are not certain of
that) it could result in an inaccurate representation of low frequency temperature
changes in the past. Similarly, if former climates were characterized by modes of
variability not seen in the calibration period, it is unlikely that the methods now
in use would reconstruct those intervals accurately. It may be possible to constrain
these uncertainties through a range of regional studies (for example, to examine
modes of past variability) and by calibration over different time intervals, but not
all uncertainty can be eliminated and so current margins of error must be consid-
ered as minimum estimates [meaning the actual range of error is larger than shown
in Mann et al. 1999 or the IPCC TAR's charts]. (p. 114-115).
It is also very important to heed warnings and cautions from other serious re-
searchers about not overstating the true confidence of a reconstructed climatic re-
sult based on indirect proxies. Esper et al. (2003, Climate Dynamics, vol. 21, 699-
706) modestly apprised of the current situation in reconstructing long-term climatic
information from tree rings: Although these long-term trends agree well with ECS
[i.e., Esper, Cook, Schweingruber in 2002, Science, vol. 295, 2250-2253], the ampli-
tude of the multi-centennial scale variations is, however, not understood. This is be-
cause (1) no single multi-centennial scale chronology could be built that is not sys-
tematically biased in the low frequency domain, and (2) no evidence exists that
would support an estimation of the biases either in the LTM [Long-term mean
standardization] nor in the RCS [Regional curve standardization] multi-centennial
chronologies. Consequently, we also avoided providing formal climate calibration
and verification statistics of the chronologies. Note also that the climate signal of
the chronologies' low frequency component could not be statistically verified anyway.
This is because the high autocorrelations, when comparing lower frequency trends,
significantly reduce the degrees of freedom valid for correlation analyses. We believe
that a formal calibration/verification/transfer function approach would leave the im-
pression that the long-term climate history for the Tien Shan [i.e., the location of
Esper and five colleagues study] is entirely understood, which is not the case. Fur-
ther research is needed to estimate the amplitude of temperature variation in the
Alai Range [south of Kirghizia] over the last millennium. (p. 705)
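The degrees-of-freedom point in the passage above can be made concrete with a
standard effective-sample-size adjustment for autocorrelated series (one common
form of the correction; a sketch, not the calculation Esper et al. themselves
performed):

    import numpy as np

    def lag1_autocorr(x):
        x = np.asarray(x, float) - np.mean(x)
        return (x[:-1] @ x[1:]) / (x @ x)

    def effective_n(x, y):
        # Common correction for serial correlation:
        # N_eff = N * (1 - r1*r2) / (1 + r1*r2)
        r1, r2 = lag1_autocorr(x), lag1_autocorr(y)
        return len(x) * (1 - r1 * r2) / (1 + r1 * r2)

    rng = np.random.default_rng(2)
    # Heavy 40-point smoothing makes two unrelated series strongly
    # autocorrelated, collapsing the sample size available for significance
    # tests on their correlation.
    a = np.convolve(rng.normal(size=300), np.ones(40) / 40, mode="valid")
    b = np.convolve(rng.normal(size=300), np.ones(40) / 40, mode="valid")
    print(f"nominal N = {len(a)}, effective N = {effective_n(a, b):.0f}")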
Question 4. Do you claim that the Mann study does not reconstruct regional pat-
terns of temperature change in past centuries?
Response. In Soon et al. (2003, Energy & Environment, vol. 14, 233-296), I and
my colleagues cautioned that the regional temperature patterns resulting from Mann
and colleagues' methodology are too severely restricted by the calibration procedure.
In particular, we are concerned that the regional (and hence larger spatial-scale
averages) variability of temperature on multidecadal and centennial time scales
deduced from such a method will be underestimated.
Recently, the methodology of Mann et al. (1998) has been seriously challenged by
McIntyre and McKitrick (2003, Energy & Environment, vol. 14, 751-771), in that
poor data handling, obsolete data and incorrect calculation of principal compo-
nents were shown as the errors and defects of Mann et al.'s paper. The exchange
between Mann and colleagues and McIntyre and McKitrick is ongoing, but the use
of obsolete data is a clear case of misrepresentation of the regional basis of change in
Mann et al.'s work. Further problems in Mann et al. (1998) are outlined under Ques-
tion No. 13 below. Additional documentation (including responses by Prof. Mann
and his colleagues) and updates can be found in http://www.uoguelph.ca/rmckitri/
research/trc.html.
Question 5. Do you maintain that the Mann study extrapolated global tempera-
ture estimates from the northern hemisphere?
Response. I have not seen any global temperature curves presented in the two
earlier studies by Mann et al. (1998 and 1999). But please consider the deep con-
cerns about the lack of proxy data especially over the tropics (30°N to 30°S) and the
southern hemisphere raised by Soon et al. (2003) and even in the independent paper
by Professor Manns close colleagues and co-authors (Bradley and Hughes), i.e., in
Bradley et al. (2003), discussed under Question No. 3 above.
Global temperature estimates, based on indirect climate proxies, from 200-1980
were shown in Mann and Jones (2003, Geophysical Research Letters, vol. 30 (15),
1820) as Figure 2c. But I am unsure if the temperature series presented by Mann
and Jones (2003) could adequately represent the variability over the whole globe
since it was openly admitted that the proxies used covered only 8 distinct regions
in the Northern Hemisphere and 5 for the Southern Hemisphere (see the coverage
of proxies shown in Figure 1 of Mann and Jones, 2003).
More importantly, Soon et al. (2004, Geophysical Research Letters, vol. 31,
L03209) showed that the 40-year smoothed instrumental temperature trend for the
Northern Hemisphere shown as Figure 2a of Mann and Jones (2003) has a phys-
ically implausible high value at year 2000 (see more discussion in Question No. 6
below). We caution that the extremely rapid rate of warming trend of 1 to 2.5 °C
per decade implied by the published results by Mann and his colleagues over the
last 1 to 2 years [comparing Mann and Jones (2003) with both Mann (2002,
Science, vol. 297, 1481-1482) and Mann et al. (2003, Eos, 84(27), 256-257)] is most
likely due to the artifacts of methodology and their procedure of trend smoothing.
I am submitting the pdf file (SLB-GRL04-NHtempTrend.pdf) of Soon et al. (2004)
for the record of the committee.
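The endpoint behavior criticized above can be illustrated with a toy smoother:
the value a 40-year smooth assigns to the final year depends heavily on how the
series is padded past its end, a methodological choice rather than data. A
sketch (not the specific filter used by Mann and Jones):

    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1900, 2001)
    series = 0.01 * (years - 1900) + rng.normal(0, 0.15, years.size)

    def smoothed_endpoint(x, window=40, pad="last"):
        # Pad beyond the final year, then apply a centered moving average.
        fill = x[-1] if pad == "last" else x.mean()
        padded = np.concatenate([x, np.full(window // 2, fill)])
        sm = np.convolve(padded, np.ones(window) / window, mode="same")
        return sm[len(x) - 1]            # smoothed value at the last real year

    for rule in ("last", "mean"):
        print(f"padding={rule}: endpoint = {smoothed_endpoint(series, pad=rule):+.3f}")
    # Equally defensible padding rules give different endpoint values, which
    # is why trends at the very end of a smoothed record deserve caution.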
Question 6. Do you maintain that historical and instrumental temperature records
that are available indicate colder northern hemisphere temperature conditions than
the Mann et al northern hemisphere temperature reconstruction in the past cen-
turies?
Response. I am not sure about the meaning of this question. But when contrasted
with borehole-based reconstruction, the Northern Hemisphere terrestrial tempera-
tures produced by Mann et al. (1998, 1999) over the last 500 years may have been
too warm by about 0.4 °C during the 17th-18th century (see Huang et al. 2000, Na-
ture, vol. 403, 756-758). Recent attempts by Mann et al. (2003, Journal of Geo-
physical Research, vol. 108 (D7), 4203) and Mann and Schmidt (2003, Geophysical
Research Letters, vol. 30 (12), 1607) to rejustify and defend the Mann et al. (1998,
1999) results have been shown to be either flawed or invalid by Chapman et al.
(2004, Geophysical Research Letters, vol. 31, L07205) and by Pollack and Smerdon
(2003, Geophysical Research Abstract of EGS, vol. 6, 06345). The eventual fact will
no doubt emerge with increased understanding, but Chapman et al. (2004) warned
158
that A second misleading analysis made by Mann and Schmidt [2003] concerns use
of end-points in reaching a numerical conclusion. . . . It is based on using end
points in computing changes in an oscillating time series, and is just bad science.
With regard to instrumental thermometer data of the past 100–150 years, it is
important to note that Soon et al. (2004) recently showed that the 40-year
smoothed Northern Hemisphere temperature trend shown in Mann and Jones
(2003) has a physically implausible high value at the year-2000 endpoint, especially
when studied in the context of previously published results by Mann et al. (2003, Eos,
vol. 84 (27), 256–257) and Mann (2002, Science, vol. 297, 1481–1482). This important
updated information, admittedly with the benefit of hindsight, together with
the works by Chapman et al. (2004) and McIntyre and McKitrick (2003), shows
clearly that the Northern Hemisphere temperature trends, whether proxy-based or
instrumental, derived by Mann et al. (1998, 1999) and Mann and Jones (2003) are not
reliable.
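To illustrate the kind of endpoint sensitivity at issue, consider the following minimal sketch (synthetic data and generic boundary treatments only; this is not the actual procedure of Mann and Jones (2003) or of Soon et al. (2004)). It shows that the value a 40-year smoother assigns to the final year of a series depends strongly on how the series boundary is padded:

import numpy as np

# Schematic only: the "year 2000" value of a 40-year running mean
# depends on the boundary treatment, not just on the data.
rng = np.random.default_rng(0)
years = np.arange(1856, 2001)
# Synthetic "temperature" series: mild trend plus interannual noise.
series = 0.005 * (years - years[0]) + 0.2 * rng.standard_normal(years.size)

window = 40

def smoothed_endpoint(x, mode):
    """Return the smoothed value at the final year under a given padding."""
    half = window // 2
    if mode == "reflect":        # mirror the series about its endpoint
        padded = np.concatenate([x, x[-2:-half - 2:-1]])
    elif mode == "constant":     # extend with the final value (zero slope)
        padded = np.concatenate([x, np.full(half, x[-1])])
    else:                        # "trailing": just average the last 40 points
        return x[-window:].mean()
    return np.convolve(padded, np.ones(window) / window, mode="valid")[-1]

for mode in ("reflect", "constant", "trailing"):
    print(mode, round(float(smoothed_endpoint(series, mode)), 3))
# The three endpoint values differ even though the underlying data are
# identical, so an extreme value at the very end of a smoothed trend can
# be an artifact of the smoothing procedure.

Because the underlying data are identical in all three cases, any difference at the endpoint measures the smoothing convention rather than the climate; that is the general nature of the caution raised above.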
Question 7. Is it your understanding that during the mid-Holocene optimum period
(the period from 4000–7000 B.C.) annual mean global temperatures were
more than a degree Celsius warmer than the present day?
Response. Again, I am not sure if there are sufficient proxy data to allow
a meaningful quantitative estimate of annual mean global temperatures six to
nine thousand years back. But in a new paper for Quaternary Science Reviews, Darrell
Kaufman and 29 co-authors (2004, Quaternary Science Reviews, vol. 23, 529–560)
found that there is indeed clear evidence for warmer-than-present conditions
during the Holocene at 120 of the 140 sites they compiled across the Western Hemisphere
of the Arctic. Kaufman et al. (2004) estimated that, at the 16 terrestrial sites
where quantitative data are available, the local Holocene Thermal Maximum summer
temperatures were about 1.6 ± 0.8 °C higher than the average of the 20th century.
The coarse temperature map sketched on NOAA's Paleoclimatology web
site (http://www.ngdc.noaa.gov/paleo/globalwarming/images/polarbigb.gif) suggests
that summer temperatures 6000 years ago may have been 2 to 4 °C warmer
than present in the other sector (the Eastern Hemisphere) of the Arctic.
Question 8. As a climatologist, can you explain what kind of quantitative analysis
it takes to determine whether or not the last 50 years has been unusually warm
compared to the last 1000 years?
Response. The theoretical requirement is fairly simple: (a) find local and regional
proxies that are sensitive to variations of temperature on timescales of a decade, several
decades, and a century; and (b) obtain sufficient spatial coverage of these local and
regional proxies. Then one would be able to compare the last 50 years of the 1000-
year record with the previous 950 years.
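In schematic form, the comparison might look like the following minimal sketch, which uses a synthetic 1000-year series in place of real, calibrated proxy data (the series and its values are entirely hypothetical):

import numpy as np

# Schematic comparison of the last 50 years of a 1000-year record
# against all earlier 50-year periods; synthetic data only.
rng = np.random.default_rng(1)
years = np.arange(1001, 2001)
proxy = 0.3 * np.sin(2 * np.pi * (years - 1001) / 250) \
        + 0.15 * rng.standard_normal(years.size)

window = 50
last50_mean = proxy[-window:].mean()

# Running 50-year means over the previous 950 years.
running = np.convolve(proxy[:-window], np.ones(window) / window, mode="valid")
fraction = (running >= last50_mean).mean()

print(f"Mean of last 50 years: {last50_mean:.3f}")
print(f"Fraction of earlier 50-year periods at least as warm: {fraction:.2%}")

With real data, of course, such a comparison is meaningful only if requirement (a) holds, i.e., the proxy is genuinely temperature-sensitive at these timescales, and requirement (b) holds, so that many such local records can be composited spatially.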
Soon et al. (2003) had indeed initiated an independent effort in this direction and
concluded that a truly global or hemispherically averaged temperature record for the
past 1000 years is not yet forthcoming: the indirect local and regional proxies respond
to temperature in such a large and disparate range of ways that a robust ability of
different proxies to capture all the necessary scales of variability cannot yet be
confirmed. The main problem I foresee in having any definitive answer for now is
that the statistical association of each proxy with climatic variables like temperature
can itself be variable, changing with location and time interval. And I am not sure
that a sole focus on temperature as the measure of climate is sensible; it may be
unnecessarily narrow.
In Soon et al. (2003), we consider climate to be more than just temperature, so
we did not narrowly restrict ourselves to temperature-sensitive proxies only. For example,
in addition to temperature, we are equally concerned about expansion and
reduction of forested and desert-prone areas, tree-line growth limits, sea ice changes,
balances of ice accumulation and ablation in mountain glaciers, and so on. When
studying the ice balance of a glacier, it is important to insist that although glaciers
are very important indicators of climate change over rather long time-scales, they
are not simply thermometers, as is often assumed in heated discussions that point to
them as evidence for global warming by carbon dioxide (see additional discussion on
the factors, especially atmospheric carbon dioxide, determining Earth's climate and
its change under Questions No. 19, 20, 25, 30 and 35 below). Examples include the
statement by Will Steffen, director of the International Geosphere-Biosphere Program,
that "Tropical glaciers are a bellwether of human influence on the Earth system"
(quoted in the article "The melting snows of Kilimanjaro" by Irion, 2001, Science,
vol. 291, 1690–1691), or by Professor Lonnie Thompson, Ohio State University,
"We have long predicted that the first signs of changes caused by global
warming would appear at the few fragile, high-altitude ice caps and glaciers
within the tropics . . . [t]hese findings confirm those predictions. We need to
take the first steps to reduce carbon dioxide emissions. We are currently doing
nothing. In fact, as a result of the energy crisis in California (and probably in the
rest of the country by this summer) we will be investing even more in fuel-burning
power plants. That will put more power in the grid but, at the same
time, it will add carbon dioxide to the atmosphere, amplifying the problem"
(quoted in Ohio State University's press release, http://www.acs.ohio-state.edu/
units/research/archive/glacgone.htm).
A clarification of the physical understanding of modern glacier retreats and
climate change, especially those on Kilimanjaro, is necessary and has been forthcoming
with important research progress. First, Mölg et al. (2003, Journal of Geophysical
Research, vol. 108 (D23), 4731) recently concluded that their study:
"highlights that modern glacier retreat on Kilimanjaro is much more complex
than simply attributable to global warming only, a finding that conforms with
the general character of glacier retreat in the global tropics [Kaser, 1999]: a process
driven by a complex combination of changes in several different climatic parameters
. . . with humidity-related variables dominating this combination."
In another new paper, in the International Journal of Climatology, Kaser et al.
(2004, "Modern glacier retreat on Kilimanjaro as evidence of climate change:
Observations and facts," International Journal of Climatology, vol. 24, 329–339; available
from http://geowww.uibk.ac.at/glacio/LITERATUR/index.html) provided clear answers,
finding that neither added longwave radiation from a direct addition of atmospheric
CO2 nor atmospheric temperature was the key variable for the observed changes,
as revealed in this long but highly informative passage:
"Since the scientific exploration of Kilimanjaro began in 1887, when Hans
Meyer first ascended the mountain (not to the top at this time, but to the crater
rim), a central theme of published research has been the drastic recession of
Kilimanjaro's glaciers (e.g., Meyer, 1891, 1900; Klute, 1920; Gilman, 1923;
Jäger, 1931; Geilinger, 1936; Hunt, 1947; Spink, 1949; Humphries, 1959;
Downie and Wilkinson, 1972; Hastenrath, 1984; Osmaston, 1989; Hastenrath
and Greischar, 1997). Early reports describe the formation of notches, splitting
up and disconnection of ice bodies, and measurements of glacier snout retreat
on single glaciers, while later books and papers advance to reconstructing glacier
surface areas. . . . Today, as in the past, Kilimanjaro's glaciers are markedly
characterized by features such as penitentes, cliffs (Figure 3a/b) [not reproduced
here], and sharp edges, all resulting from strong differential ablation.
These features illustrate the absolute predominance [emphasis added] of incoming
shortwave radiation and latent heat flux in providing the energy for ablation
(Kraus, 1972). A positive heat flux from either longwave radiation or sensible
heat flux, if available, would round off and destroy the observed features
within a very short time ranging from hours to days. On the other hand, if destroyed,
the features could only be sculptured again under very particular circumstances
and over a long time. Thus, the existence of these features indicates
that the present summit glaciers are not experiencing ablation due to sensible
heat (i.e., from positive air temperature). Additional support for this is provided
by the Northern Icefield air temperature recorded from February 2000 to July
2002, which never exceeded −1.6 °C, and by the presence of permafrost at 4,700
m below Arrow Glacier on the western slope . . ."
Kaser et al. (2004) continue with this synopsis of interpretations and facts:
"A synopsis of (i) proxy data indicating changes in East African climate since
ca. 1850, (ii) 20th century instrumental data (temperature and precipitation),
and (iii) the observations and interpretations made during two periods of
fieldwork (June 2001 and July 2002) strongly support the following scenario.
Retreat from a maximum extent of Kilimanjaro's glaciers started shortly before
Hans Meyer and Ludwig Purtscheller visited the summit for the first time in
1889, caused by an abrupt climate change to markedly drier conditions around
1880. Intensified dry seasons accelerated ablation on the respectively illuminated
vertical walls left in the hole on top by Reusch Crater as a result of volcanic
activity [emphasis added]. The development of vertical features may also
have started on the outer margins of the plateau glaciers before 1900, primarily
as the formation of notches, as explicitly reported following field research in
1898 and 1912 (Meyer, 1900; Klute, 1920). A current example of such a notch
development is the hole in the Northern Icefield (see Figure 2). Once started,
the lateral retreat was unstoppable, maintained by solar radiation despite less
negative mass balance conditions on horizontal glacier surfaces, and will come
to an end only when the glaciers on the summit plateau have disappeared. This
is most probable within the next decades, if the trend revealed in Figure 1 continues.
Positive air temperatures have not contributed to the recession process
on the summit so far. The rather independent slope glaciers have retreated far
above the elevation of their thermal readiness, responding to dry conditions. If
the present precipitation regime persists, these glaciers will most probably survive
in positions and extents not much different from today. This is supported by the
area determinations in Thompson et al.'s (2002) map, which indicate that slope
glaciers retreated more from 1912 to 1952 than since then. From a hydrological
point of view, melt water from Kibo's glaciers has been of little importance to
the lowland in modern times. Most glacier ablation is due to sublimation, and
where ice does melt it immediately evaporates into the atmosphere. Absolutely
no signs of runoff can be found on the summit plateau, and only very small rivers
discharge from the slope glaciers. Rainfall reaches a maximum amount at
about 2,500 m a.s.l. [above sea level] (Coutts, 1969), which primarily feeds the
springs at low elevation on the mountain; one estimate attributes 95 percent
of such water to a forest origin (Lambrechts et al., 2002). The scenario presented
offers a concept that implies climatological processes other than increased
air temperature [emphasis added] govern glacier retreat on Kilimanjaro
in a direct manner. However, it does not rule out that these processes may be
linked to temperature variations in other tropical regions, e.g., in the Indian
Ocean (Latif et al., 1999; Black et al., 2003)."
Lindzen (2002, Geophysical Research Letters, vol. 29, paper 2001GL014360) further
added that "Recent papers show that deep ocean temperatures have increased
somewhat since 1950, and that the increase is compatible with predictions from coupled
GCMs [General Circulation Models]. The inference presented is that this degree
of compatibility constitutes a significant test of the models. . . . [But] it would
appear from the present simple model (which is similar to what the IPCC uses to
evaluate scenarios) that the ocean temperature change largely reflects only the fact
that surface temperature change is made to correspond to observations, and says
almost nothing about model climate sensitivity. . . . It must be added that we are
dealing with observed surface warming that has been going on for over a century.
The oceanic temperature change [at a depth of 475 m or so] over the period reflects
earlier temperature change at the surface. How early depends on the rate at which
surface signals penetrate the ocean." In other words, the recently noted warming
of the deeper ocean is not proof of global surface and atmospheric warming by
increasing CO2 in the air, because the parameters of climate sensitivity and the rate of
ocean heat uptake are not sufficiently well quantified. In addition, if the earlier oceanic
surface temperature warming mentioned by Lindzen was indeed initiated
substantially long ago, then there would be no association of that change
with man-made CO2 forcing.
Question 9. The IPCC has found that the late 20th century is the warmest period
in the last 1000 years for average temperature in the northern hemisphere. Does
your paper provide a quantitative analysis of average temperatures for the northern
hemisphere for this specific time period, that is, for the latter half of the 20th century?
Response. It should be understood that (1) the conclusion of the IPCC Working
Group I's Third Assessment Report (2001; TAR), (2) the evidence shown in Figure
1b of the Summary for Policymakers, (3) Figure 5 of the Technical Summary, and
(4) Figure 2.20 in Chapter 2 of TAR were all derived directly from the conclusion
of Mann et al. (1999) and Figure 3a of Mann et al. (1999). Therefore all comments
and criticisms presented in this Q&A about Mann et al. (1999) apply to the IPCC
TAR's conclusion. In addition, Soon et al. (2004) recently cautioned that the 40-year
smoothed northern hemisphere temperature trend shown in Figure 2.21 of TAR
(2001) cannot be replicated according to the methodology described in the caption
of Figure 2.21. The failure in replication introduces a significant worry about the
actual quality of the scientific efforts behind the production of Figure 2.21 in TAR
(2001).
The answer to the second part of your direct question is no. Here are the related
reasons why a confident estimate of the averaged northern hemisphere temperature
for the full 1000 years (including the full 20th century) is not yet possible, despite
what had been claimed by Mann et al. (1999). First, several authors, including those
detailed in section 5.1 of Soon et al. (2003) and those pointed out under Question No.
6, have shown that the 1000-year series of mathematical temperature derived by
Mann et al. (1999) significantly underestimated the multidecadal- and centennial-scale
changes. Second, the focus of Soon et al. (2003) is to derive understanding of
climatic change on local and regional spatial scales, instead of over the whole northern
hemisphere per se, because those are, in a practical sense, the most relevant measures
of change. In addition, we provided a first-order attempt to collect all available
climate proxies relevant to local and regional climatic changes, not restricted
to temperature alone. But more pertinent to your question is the fact, discussed
in Soon et al. (2003), that different proxies respond with differing sensitivities
to different climatic variables, seasons, and spatial and temporal scales, so that a
convenient derivation of a self-consistent northern hemisphere averaged annual
mean temperature for the full 1000 years, desirable as the result may be, is not yet
possible.
Question 10. Does your paper provide any quantitative analysis of temperature
records specifically for the last 50 years of the 20th century?
Response. Soon et al. (2003) considered all available proxy records with no particular
prejudice. If an individual proxy record covers the last 50 years of the
20th century, then quantitative comparisons are performed, mostly according to the
statements of the original authors. Please consider some of the detailed quantitative
discussion in section 4 of Soon et al. (2003) and the qualitative results compiled
in Table 1 of that paper.
Question 11. In an article in the Atlanta Journal-Constitution (June 1, 2003), you
were quoted as acknowledging during a question period at a previous Senate luncheon
that your research does not provide a comprehensive picture of the Earth's temperature
record, that you questioned whether that is even possible, and that you
did not ". . . see how Mann and the others could calibrate the various proxy
records for comparison." How then does your analysis provide a comprehensive picture
of Earth's temperature record or have any bearing on the finding by the IPCC
that the late 20th century is the warmest in the last 1000 years?
Response. Thank you for referencing the article. I must first state for the record
that, contrary to the claim in the Atlanta Journal-Constitution (June 1, 2003) article
(http://www.ajc.com/business/content/business/0603/01warming.html), the writer
never, as claimed, conducted a telephone interview with me. No such conversation
took place, and I am rather shocked by this false claim. This fact has gone uncorrected
until now.
The strengths and weaknesses of my research work are fully discussed in Soon
et al. (2003). The paper documented detailed local and regional changes in several
climatic variables to try to obtain a broader understanding of climate variability. We
concluded that:
"Because the nature of the various proxy climate indicators are so different,
the results cannot be combined into a simple hemispheric or global quantitative
composite. However, considered as an ensemble of individual observations, an
assemblage of the local representations of climate establishes the reality of both
the Little Ice Age and the Medieval Warm Period as climatic anomalies with
worldwide imprints, extending earlier results by Bryson et al. (1963), Lamb
(1965), and numerous other research efforts. Furthermore, these individual
proxies are used to determine whether the 20th century is the warmest century
of the 2nd Millennium at a variety of globally dispersed locations. Many records
reveal that the 20th century is likely not the warmest nor a uniquely extreme
climatic period of the last millennium, although it is clear that human activity
has significantly impacted some local environments."
The difficult problem of calibrating proxies of differing types and
sensitivities to climatic variables is discussed in Soon et al. (2003), and some criticisms
of the weaknesses of the reconstruction by Mann et al. (1999) and of the related
IPCC TAR conclusion are listed especially under Questions No. 6 and 9.
Question 12. Do you believe that appropriate statistical methods do not exist for
calibrating statistical predictors, including climate proxy records, against a target
variable, such as the modern instrumental temperature record?
Response. True progress in the field of paleoclimatology will certainly involve
better and more robust means of interpreting and quantifying the variations and
changes seen in each high-resolution proxy record. The issue is not merely a problem
awaiting solution through appropriate statistical methods like the EOF (empirical
orthogonal function) methodology adopted by Mann et al. (1998, 1999). On pp. 241–242
of Soon et al. (2003), we briefly outlined our straightforward approach and contrasted
it with the one used by Mann and colleagues, which does not necessarily lead to
results with physical meaning and reality.
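For readers unfamiliar with the term, the following minimal sketch shows what an EOF (principal component) decomposition of a proxy network involves; the data are synthetic, and the centering step is highlighted because the McIntyre and McKitrick (2003) critique of Mann et al. (1998) concerned, in part, how the principal components were calculated:

import numpy as np

# Minimal EOF/principal-component sketch on a synthetic proxies-by-years
# matrix (12 proxies, 580 years); not the Mann et al. (1998) procedure.
rng = np.random.default_rng(3)
data = rng.standard_normal((12, 580))

# Center each proxy series over the full period; the choice of centering
# interval is one of the methodological points in dispute.
data -= data.mean(axis=1, keepdims=True)

u, s, vt = np.linalg.svd(data, full_matrices=False)
pc1 = vt[0]                          # leading temporal pattern
explained = s[0]**2 / (s**2).sum()
print(f"PC1 (length {pc1.size} years) explains {explained:.1%} of the variance")

Whether such a leading pattern carries physical meaning, rather than being a statistical artifact of the network and the centering convention, is exactly the kind of question raised in the text above.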
Question 13. In determining whether the temperature of the Medieval Warm Pe-
riod was warmer than the 20th century, does your study analyze whether a 50-
year period is either warmer or wetter or drier than the 20th century? If so, why
is it appropriate to use indicators of drought and precipitation directly to draw infer-
ences of past temperatures? Please list peer-reviewed works that specifically support
the use of these indicators for inferring past temperature.
Response. The detailed discussion behind our usage of the terms "Medieval Warm
Period" and "Little Ice Age" was given in Soon et al. (2003). We are mindful that
the two terms should definitely include physical criteria and evidence from the thermal
field. But we emphasize that great bias would result if those thermal anomalies
were dissociated from hydrological, cryospheric, chemical, and biological factors of
change. So indeed our description of a Medieval Climatic Anomaly (see a similar
sentiment later reported by Bradley et al. 2003, Science, vol. 302, 404–405) in Soon
et al. (2003) includes a warmer time that contains either drought or flooding conditions,
depending on the location.
With regard to the last part of your question, I would answer by detailing only
one example: Mann et al. (1998). This influential study used both direct precipitation
measurements and precipitation proxies as temperature indicators, and it
was indeed applied by the IPCC TAR (2001). These include historical precipitation
measurements in 11 grid cells, two coral proxies (reported in Mann et al. [1998] as
precipitation proxies; see http://www.ngdc.noaa.gov/paleo/ei/datasupp.html for
this and the following references), two ice core proxies, three reconstructions of spring
precipitation in the southeastern United States by Stahle and Cleaveland from tree ring data,
12 principal component series for tree rings in the southwestern United States and
Mexico reported as precipitation proxies by Stahle and Cleaveland (and Mann et al.
1998), and one tree ring series in Java, making a total of 31 precipitation series
used as proxies in the temperature reconstruction by Mann et al. (1998). In this peer-reviewed
article, for the precipitation data in a grid cell in New England, the researchers
apparently used historical data from Paris, France (please see Figure 2
of McIntyre and McKitrick, 2003 and their discussion on pp. 758–759). For a grid
cell near Washington, DC, the researchers used historical data from Toulouse,
France. For a grid cell in Spain, the researchers used precipitation data from Marseilles,
France. Of the 11 precipitation series used in Mann et al. (1998), only one
series (Madras, India) is correctly located. The precipitation data used by these authors
cannot be identified in the source cited in Mann et al. (1998). While precipitation
data and precipitation-related proxies can be instructive in providing information
on past distributions of moisture and circulation patterns (and thus temperature),
it is important to correctly identify the series used and important not to
use data from the wrong continent for historical reconstructions.
Question 14. Do you maintain that any two 50-year periods that occur within a
multi-century interval can be considered coincident from a climatic point of view?
Response. The question raised here, about whether any two 50-year periods
in any two regions can be considered related from a climatic point of view, is both important
and interesting. But the answer will depend strongly on the nature of the forcings
and feedbacks involved. If longer-term cryospheric or oceanic processes are involved,
then the answer would be yes.
Question 15. Do your two recent studies employ an analysis (that is, a statistical
or analytical operation performed upon numerical data) of a single proxy climate
record?
Response. The meaning of this question is not entirely clear to me. But I would
say yes in the context of what is being asked.
Question 16. Has your study produced a quantitative reconstruction of past tem-
perature patterns? Do you have a measure of uncertainty or verification in your de-
scription of past temperatures?
Response. The results and conclusion of Soon et al. (2003) are best judged from the
paper itself. Quantitative assessments of local and regional changes through the climatic
proxies are discussed in section 4 of that paper, and a qualitative
picture is described in Figures 1, 2 and 3 of that paper. Again, Soon et al. (2003) did
not try to distill all the collected proxies down to a strict temperature-only
result, since we are interested in a broader understanding of climate variability.
Parts of the answers given under Questions No. 9 and 11 help elaborate what
was done by Soon et al. (2003). I would also like to direct your attention to the two
warnings listed under Question No. 3 by Bradley et al. (2003) and Esper et al.
(2003) concerning any undue overconfidence in promoting quantitative certainties
in the reconstruction of past temperatures through highly imprecise black boxes of
indirect proxies.
Question 17. Your study indicates that you have compiled the results of hundreds
of previous paleo-climate studies. Have you verified your interpretation of the hun-
dreds of studies with any of the authors/scientists involved in those studies? If so,
how many?
Response. Specific authors and scientists who provided help in our work were listed
in the acknowledgment section (p. 272) of Soon et al. (2003). We have also received
generous help and comments from several scientists who are certainly highly
qualified in terms of paleoclimatic studies. But the ultimate quality and soundness
of our research shall always be our own responsibility.
In the September 5, 2003 Chronicle of Higher Education article (by Richard
Monastersky), there were indeed two very serious accusations suggesting that
Soon et al. (2003) had misrepresented or abused the conclusions of two original authors
whose work we had cited. Our corrections and explanations of these unfortunately
false claims can be studied in the documentation listed at
http://cfa-www.harvard.edu/wsoon/ChronicleHigherEducation03-d (read especially
Sep12-lettoCHE3.doc and Sep12-lettoCHE4.doc).
Question 18. What was earth's climate like the last time that atmospheric concentrations
of carbon dioxide were at today's levels of about 370 parts per million
(ppm), and what were conditions like when concentrations were at 500 ppm, which
will occur around 2060 or so?
Response. A co-answer to this question is given under Question No. 19 below.
Question 19. Please describe any known geologic precedent for large increases of
atmospheric CO2 without simultaneous changes in other components of the carbon
cycle and the climate system.
Response. My July 29, 2003 testimony was about the climate history of the past
1000 years detailed in Soon et al. (2003) rather than any potential (causal or otherwise)
relationship between atmospheric carbon dioxide and climate. The fact remains
that the inner workings of the global carbon cycle and the course of future
energy use are not sufficiently understood or known to warrant any confident prediction
of the atmospheric CO2 concentration in the year 2060. Please also consider the co-answer
to this question under Question No. 25 below.
However, it is abundantly obvious that atmospheric CO2 is not necessarily an important
driver of climate change. It is indeed a puzzle that, despite the relatively low
level of atmospheric CO2 of no more than 300 ppm in the past 320–420 thousand
years (Kawamura et al., 2003, Tellus, vol. 55B, 126–137) compared to the high levels
of 330–370 ppm since the 1960s, there is the clear suggestion of significantly warmer
temperatures at both Vostok and Dome Fuji, East Antarctica, during the
interglacials at stage 9.3 (about 330 thousand years before present; warmer by
about 6 °C) and stage 5.5 (about 135 thousand years before present; warmer by
about 4.5 °C) than during the most recent 1000 years (see Watanabe et al., 2003, Nature,
vol. 422, 509–512; further detailed discussion of environmental changes in
Antarctica over the past 1000 years or so, including the most recent 50 years, can
be found in section 4.3.4, pp. 256–257, of Soon et al. 2003).
But there are important concerns about the retrieval of information on atmospheric
CO2 levels from ice cores. Jaworowski and colleagues (1992, The Science of
the Total Environment, vol. 114, 227–284) explained that:
"Ice is not a rigid material suitable for preserving the original chemical and
isotopic composition of atmospheric gas inclusion. Carbon dioxide in ice is
trapped mechanically and by dissolution in liquid water. A host of physico-chemical
processes redistribute CO2 and other air gases between gaseous, liquid
and solid phases, in the ice sheets in situ, and during drilling, transport and
storage of the ice cores. This leads to changes in the isotopic and molecular composition
of trapped air. The presence of liquid water in ice at low temperatures
[even below −70 °C] is probably the most important factor in the physico-chemical
changes. The permeable ice sheet with its capillary liquid network acts as
a giant sieve which redistributes elements, isotopes and micro-particles. Carbon
dioxide in glaciers is contained: (1) in interstitial air in firn; (2) in air bubbles
in ice; (3) in clathrates; (4) as a solid solution in ice crystals; (5) dissolved in
intercrystalline veins and films of liquid brine; and (6) in dissolved and particulate
carbonates. Most of the CO2 is contained in ice crystals and liquids, and
less in air bubbles. In the ice cores it is also present in the secondary gas cavities,
cracks, and in the traces of drilling fluids.
The concentration of CO2 in air recovered from the whole ice is usually much
higher than that in atmospheric air. This is due to the higher solubility of this
gas in cold water, which is 73.5- and 35-times higher than that of nitrogen and
oxygen, respectively. The composition of other atmospheric gases (N2, O2, Ar)
is also different in ice and in air inclusions than in the atmosphere. Argon-39
and 85Kr data indicate that 36–100 percent of air recovered from deep Antarctic
ice cores is contaminated by recent atmospheric air during field and laboratory
processing. Until about 1985, CO2 concentrations in gas recovered from
primary air bubbles and from secondary gas cavities in pre-industrial and ancient
ice were often reported to be much higher than in the present atmosphere.
After 1985, only concentrations below the current atmospheric level were published.
Our conclusion is that both these high and low CO2 values do not represent
real atmospheric content of CO2.
Recently reported concentrations of CO2 in primary and secondary gas inclusions
from deep cores, covering about the last 160,000 years, are much below
the current atmospheric level, although several times during this period the
surface temperature was 2–4.5 °C higher than now. If these low concentrations
of CO2 represented real atmospheric levels, this would mean (1) that CO2 had
not influenced past climatic changes, and (2) that climatic changes did not influence
atmospheric CO2 levels." (pp. 272–273)
Additional historical evidence reveals that large, abrupt climatic changes are not
uncommon and have occurred without any known causal ties to large radiative
forcing changes. Phase differences between atmospheric CO2 and proxy temperature
in historical records are often not fully resolved, but atmospheric CO2 has shown
a tendency to follow, rather than lead, temperature and biosphere changes (see, e.g.,
Dettinger and Ghil, 1998, Tellus, vol. 50B, 1–24; Fischer et al., 1999, Science,
vol. 283, 1712–1714; Indermühle et al., 1999, Nature, vol. 398, 121–126).
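As a schematic illustration of how such a lead-lag relationship can be diagnosed, here is a minimal sketch using synthetic series (the lag, noise level, and series length are invented for illustration; this is not an analysis of the actual ice-core records):

import numpy as np

# Build a red-noise "temperature" series and a "CO2" series that lags it,
# then locate the lag that maximizes the cross-correlation.
rng = np.random.default_rng(2)
n, true_lag = 2000, 15
temp = np.cumsum(rng.standard_normal(n))     # red-noise "temperature"
co2 = np.empty(n)
co2[true_lag:] = temp[:-true_lag]            # CO2 follows temperature...
co2[:true_lag] = temp[0]
co2 += 0.5 * rng.standard_normal(n)          # ...plus measurement noise

def xcorr_at(a, b, lag):
    """Correlation of a(t) with b(t + lag)."""
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    elif lag < 0:
        a, b = a[-lag:], b[:lag]
    return np.corrcoef(a, b)[0, 1]

best = max(range(-50, 51), key=lambda k: xcorr_at(temp, co2, k))
print(f"Cross-correlation peaks at lag {best} (positive = CO2 lags temperature)")

In real proxy records the diagnosis is far harder, because dating uncertainties and differing temporal resolutions blur exactly the phase information such a calculation needs.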
In addition, there have been geological times of global cooling with rising CO2
(during the middle Miocene, about 12.5–14 million years before present [Myr BP],
for example, with a rapid expansion of the East Antarctic Ice Sheet and with a reduction
in chemical weathering rates), while there have been times of global warming
with low levels of atmospheric CO2 (such as during the Miocene Climate Optimum,
about 14.5–17 Myr BP, as noted by Pagani et al., 1999, Paleoceanography, vol.
14, 273–292). A new study of atmospheric carbon dioxide over the last 500 million
years (Rothman, 2002, Proceedings of the (US) National Academy of Sciences, vol.
99, 4167–4171) concluded that "CO2 levels have mostly decreased for the last 175
Myr. Prior to that point [CO2 levels] appear to have fluctuated from about two to
four times modern levels with a dominant period of about 100 Myr. . . . The resulting
signal exhibits no systematic correspondence with the geologic record of climatic
variations at tectonic time scales."
Question 20. According to a study published in Science magazine [B. D. Santer,
M. F. Wehner, T. M. L. Wigley, R. Sausen, G. A. Meehl, K. E. Taylor, C. Amman,
W. M. Washington, J. S. Boyle, and W. Bruggemann, Science, 2003 July 25; 301:
479–483], manmade emissions are partly to blame for pushing outward the boundary
between the lower atmosphere and the upper atmosphere. How does that fit
with the long-term climate history, and what are the implications?
Response. It should first be noted that Pielke and Chase (2004, Science, vol. 303,
1771b; see also p. 1771c by Santer et al. and the additional counter-reply by Pielke and
Chase, with input from John Christy and Anthony Reale, available as paper 278b
at http://blue.atmos.colostate.edu/publications/reviewedpublications.shtml)
criticized and challenged Santer et al.'s claim and conclusion that
"[o]ur results are relevant to the issue of whether the real-world troposphere
has warmed during the satellite era. . . . The direct evidence is that in the ALL
experiment [i.e., climate model results that included changes in well-mixed
greenhouse gases, direct scattering effects of sulfate aerosols, tropospheric and
stratospheric ozone, solar total irradiance, and volcanic aerosols; see more discussion
below], the troposphere warms by 0.07 °C/decade over 1979–1999. This
warming is predominantly due to increases in well-mixed greenhouse gases.
. . . Over 1979–1999, roughly 30 percent of the increase in tropopause height
in ALL is explained by greenhouse gas-induced warming of the troposphere.
Anthropogenically driven tropospheric warming is therefore an important factor
in explaining modeled changes in tropopause height."
In contrast, Pielke and Chase (2004) offered the observed evidence and concluded
that
"[g]lobally averaged tropospheric temperature trends are statistically indistinguishable
from zero. Thus, the elevation of the globally averaged tropopause reported
in [Santer et al., 2003] cannot be attributed to any detectable tropospheric
warming over this period. In addition, the climate system is much more complex
than defined by tropospheric temperature and tropopause changes. Linear
trend analysis [in Santer et al., 2003] is of limited significance. Changes in global
heat storage provide a more appropriate metric to monitor global warming
than temperature alone."
Soon and Baliunas (2003, Progress in Physical Geography, vol. 27, 448–455) had
also previously outlined the incorrect fingerprint of CO2 forcing observed in even the
best and most sophisticated versions of climate models thus far. A more general and
comprehensive discussion of the fundamental difficulties in modeling the effects of
carbon dioxide using the current generation of climate models is given in Soon et al.
(2001, Climate Research, vol. 18, 259–271). Thus, the new paper by Santer et al.
(2003) does not supersede or overcome the difficulties with respect to General Circulation
Climate Models raised in Soon and Baliunas (2003).
Both the meaning and the strength of the model-dependent results shown in Santer
et al. (2003) remain doubtful and weak for several additional reasons.
First, Figure 2 of Santer et al. (2003) itself confirmed that the modeled changes
in tropopause height are caused mainly by large stratospheric cooling related to
changes in stratospheric ozone (they admitted so, even though their note No. 35 indicates
that their numerical experiments did not separate tropospheric and stratospheric
ozone changes) rather than by the well-mixed greenhouse gases that are
supposed to be the subject of concern. Second, the model experiments of Santer et
al. (2003) did not include changes in stratospheric water vapor, which is known to
be a significant factor in the observed stratospheric cooling (see, e.g., Forster and
Shine, 1999, Geophysical Research Letters, vol. 26, 3309–3312). Third, the failure to
account for stratospheric water vapor contradicts the documented significant increases
of stratospheric water vapor over the past half-century from a variety of instrumentation
(e.g., Smith et al., 2000, Geophysical Research Letters, vol. 27, 1687–1690;
Rosenlof et al., 2001, Geophysical Research Letters, vol. 28, 1195–1198; though
Randel et al. [2004, Journal of the Atmospheric Sciences, submitted] recently noted
that unusually low water vapor has been observed in the lower stratosphere for
2001–2003). Fourth, the model experiments by Santer et al. (2003) clearly neglected
(see note No. 18 of that paper) the role of the Sun's ultraviolet radiation, which
is not only known to be variable (e.g., Fontenla et al. 1999, The Astrophysical Journal,
vol. 518, 480–499; White et al., 2000, Space Science Reviews, vol. 94, 67–74)
but also known to exert important influence on both the chemistry and the thermal
properties of the stratosphere and troposphere (e.g., Larkin et al., 2000, Space
Science Reviews, vol. 94, 199–214).
Finally, the physical representation of aerosol forcing (which should not be restricted
to sulfate alone) in Santer et al. (2003) is clearly not comprehensive and is
at best highly selective. Early on, Russell et al. (2000, Journal of Geophysical Research,
vol. 105, 14891–14898) cautioned that
"[o]ne danger of adding aerosols of unknown strength and location is that they
can be tuned to give more accurate comparisons with current observations but
cover up model deficiencies."
Anderson et al. (2003, Science, vol. 300, 1103–1104; see also the exchanges in
Crutzen et al., 2003, vol. 303, 1679–1681) recently cautioned that:
"we argue that the magnitude and uncertainty of aerosol forcing may affect the
magnitude and uncertainty of total forcing [i.e., the global mean sum of all industrial-era
forcings] to a degree that has not been adequately considered in
climate studies to date. Inferences about the causes of surface warming over the
industrial period and about climate sensitivity may therefore be in error. . . .
Unfortunately, virtually all climate model studies that have included anthropogenic
aerosol forcing as a driver of climate change (diagnosis, attribution, and
projection studies; denoted "applications" in the figure) have used only aerosol
forcing values that are consistent with the inverse approach. If such studies
were conducted with the larger range of aerosol forcings determined from the
forward calculations, the results would differ greatly. The forward calculations
raise the possibility that total forcing from preindustrial times to the present
. . . has been small or even negative. If this is correct, it would imply that climate
sensitivity and/or natural variability (that is, variability not forced by anthropogenic
emissions) is much larger than climate models currently indicate.
. . . In addressing the critical question of how the climate system will respond
to this [anthropogenic greenhouse gases] positive forcing, researchers must seek
to resolve the present disparity between forward and inverse calculations. Until
this is achieved, the possibility that most of the warming to date is due to natural
variability, as well as the possibility of high climate sensitivity, must be
kept open." [emphasis added]
To further understand the complexity of calculating aerosol forcing, Jacobson
(2001, Journal of Geophysical Research, vol. 106, 1551–1568) had to account for a
total of 47 species containing natural and/or anthropogenic sulfate, nitrate, chloride,
carbonate, ammonium, sodium, calcium, magnesium, potassium, black carbon,
organic matter, silica, ferrous oxide, and aluminum oxide in his recent estimate
of only the global direct radiative forcing by aerosols. (Jacobson [2001] found that
the global direct radiative forcing by anthropogenic aerosols is only −0.12 W/m²,
while the forcing by combined natural and anthropogenic sources is −1.4 W/m².)
There are also the indirect aerosol effects. Temperature or temperature change is
clearly not the only practical measure of the effects of aerosols. Haywood and Boucher
(2000, Reviews of Geophysics, vol. 38, 513–543) stressed the fact that the indirect
radiative forcing effect of the modification of cloud albedo by aerosols could range
from −0.3 to −1.8 W/m², while the additional aerosol influences on cloud liquid
water content (hence, precipitation efficiency), cloud thickness, and cloud lifetime are
still highly uncertain and difficult to quantify (see, e.g., Rotstayn and Liu, 2003,
Journal of Climate, vol. 16, 3476–3481). This is why one can easily appreciate the
difficulties faced by Santer et al. (2003): climate forcing by aerosols is not
only known only within a wide range of uncertainty but is also, to a large degree,
unknown.
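The arithmetic consequence of such wide aerosol uncertainty for the total industrial-era forcing can be sketched as follows (all numbers here are illustrative placeholders, not the actual values of Anderson et al. (2003) or Jacobson (2001)):

# Illustrative only: how an uncertain aerosol term propagates into the
# total industrial-era forcing (W/m^2); the ranges below are invented.
ghg_forcing = 2.4                    # well-mixed greenhouse gases (illustrative)
aerosol_ranges = {
    "inverse-style": (-1.4, -0.6),   # narrower range (illustrative)
    "forward-style": (-3.0, -0.5),   # wider range (illustrative)
}

for label, (lo, hi) in aerosol_ranges.items():
    print(f"{label}: total forcing between {ghg_forcing + lo:+.1f} "
          f"and {ghg_forcing + hi:+.1f} W/m^2")
# With the wider forward-style range, the lower bound of the total can
# approach zero or go negative, which is the possibility raised above.

This is only an accounting exercise, but it makes concrete why Anderson et al. argue that inferences about climate sensitivity depend on which aerosol range is adopted.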
Therefore, I conclude that in addition to the fundamental issues related to climate
model representation of physical processes, papers like Santer et al. (2003) have also
failed the basic requirement of internal consistency in accounting for potentially
relevant climatic forcing factors and feedbacks. This is why I cannot comment
on the implication of this particular study or on its meaning for long-term
climate history.
Question 21. In your testimony, you discussed there being "warming" and "cooling"
for different periods. If you did not construct an integral across the hemisphere
or a real timeline, don't your findings really just say there were some warm periods
and cool periods, and therefore cannot speak to the issue of the rate of warming
or cooling?
Response. I am not sure about the meaning of this question and the quotes. My
oral remark was merely referring to making an accurate forecast that includes all
potential human-made warming and cooling effects. The detailed discussion about
the climatic and environmental changes for the past 1000 years as deduced from
the collection of proxies I had studied was given in Soon et al. (2003). I can certainly
speak to the rate of warming or cooling at any given location or region when the
available proxy, with sufficient temporal resolution, is known or proven to be tem-
perature sensitive.
Question 22. Is there any indication that regional climate variations are any larg-
er or smaller at present than over the last 1000 years (with 2003, for example, per-
haps being a case with large regional variations from the normal)?
Response. I would not recommend taking the pattern of change from a single
year, i.e., 2003, and calling it a climate change. But the fact is that in Soon et al.
(2003) we carefully studied individual proxy records from various locations and
regions. As an example, the 2000-year bottom-sediment record from Moon Lake,
North Dakota, shows that there is perhaps a distinct shift in the mode of hydrologic
variability in the Northern Great Plains region starting around 1200 AD, with the more
recent period being more variable than the earlier one. But, as indicated in the chart
below [not reproduced here], the author of this paper also noted that the severe droughts
of the 1890s and 1930s around this area are eclipsed by more extensive droughts before
the beginning of the instrumental period.
Question 23. In your oral presentation, you talked about "[h]aving computer simulation."
Could you please explain what you [as in your original] computer simulation
or modeling to which you are referring, and, (a) Has this model gone through
the appropriate set of model intercomparison studies like the various other global
models? (b) What forcings have been used to drive it? (c) How does it develop regional
climate variations, and are these comparable to observations? and, (d) How
does it perform over the 20th century, for example?
Response. I apologize for any potential confusion.
In my oral remark, I said:
"The entirety of climate proxies over the last 1,000 years shows that over
many areas of the world there has been, and continues to be, large climate
changes. Those changes provide challenges for the computer simulations of climate.
The full models, which explore the Earth region by region, can be tested
against the natural patterns of change over the last 1,000 years that are detailed
by the climate proxies. Having computer simulations reproduce past patterns
of climate, which has been influenced predominantly by natural factors,
is key to making an accurate forecast that includes all potential human-made
warming and cooling factors."
So in the context of what I said, this question is clearly misdirected by someone
who did not understand my remark. I was speaking of the potential application of
works like Soon et al. (2003) for improving our ability to calculate with confidence
the potential effects of man-made factors, by first and foremost having a climate
model that can at least reproduce some of the observed local and regional changes
of the past.
Personally, I am also conducting my research with the help of several climate
models (both simple and complex types) appropriate for my interests, and I would
certainly apply what I found in Soon et al. (2003) to my own future studies using
climate models. Any additional comments will be beyond the simple context of my
oral testimony. But it may be useful to take note of the comments by Green (2002,
Weather, vol. 57, 431–439):
"It has always worried me that simple models of climate do not seem to work
very well. Experts on numerical models say that this is because the atmosphere
is very complicated, and that large numerical models and computers are needed
to understand it. I worry because I do not know what they have hidden in those
models and the programs they use. I wonder what I can compare their models
with. Not with each other, because they belong to a sort of club, where to have
a model that disagrees with everyone else's puts you outside. That is not a bad
system, unless of course they are all wrong. Another curiosity of complicated
models is that their findings are rarely used to improve the model that preceded
them. I would have expected that the more complex model would show where
the simpler one had got it wrong, and allow it to be corrected for that misrepresentation."
Question 24. Based on the various comments of your scientific colleagues regard-
ing your paper, including the methodological flaws pointed out in that paper by the
former editor-in-chief of Climate Research, are you planning any reworking of your
study or any further studies in the paleoclimatic area?
Response. The use of a phrase like "methodological flaws" is a very convenient attempt
to dismiss the weight of scientific evidence presented in Soon et al. (2003), but
unfortunately one without any clear or confirmable basis. Thus far, the only formal
criticism of Soon et al. (2003) was by Mann et al. (2003, Eos, vol. 84(27), 256–257),
and we provided our response to that criticism in Soon et al. (2003b, Eos, vol.
84(44), 473–476). My research interest and work to fully discern and quantitatively
describe the local and regional patterns of climate variability over the past 1000
years or so will certainly continue despite this mischaracterization.
It should, however, not be left unnoticed that several very serious problems in
Mann et al. (1998, 1999), Mann and Schmidt (2003), and Mann and Jones (2003) have
been found recently. Those unresolved anomalies are outlined in my answers to your
Questions No. 3, 4, 5, 6, 9 and 13. A careful reworking, with fully open access to
all data and fully disclosed transparency of the actual methodologies and their
detailed application, will be the next important step for paleoclimate reconstruction
research.
Question 25. You indicated that there would likely be a relatively small climatic response
to even substantial increases in the CO2 concentration. Do you disagree with
the radiation calculations that have been done and the trapped energy that they calculate,
as per the peer-reviewed literature? If so, please explain.
Response. First, please consider the above discussion of climate forcing factors
and climate response sensitivities under Question No. 20 as part of the answer to
this question.
Second, I do not believe that I made any strong claim, one way or another,
about the CO2 forcing and the potential response in any specific quantitative terms during
my testimony (since factually no one can). I do want to comment, as in my response
under Question No. 19, that CO2, as a minor greenhouse gas, is not a determinant
of Earth's climate and therefore is not an entirely obvious driver of its change.
Most calculations in the peer-reviewed literature (or not) that focus on the CO2 factor
would indeed have us believe that CO2, especially under the realm of radiative
forcing, is the predominant factor driving anomalous climate responses,
while the unavoidable and very difficult core subject of the actual dynamical
state of Earth's mean climate is ignored.
Third, some 10 years ago, Lindzen (1994, Annual Review of Fluid Mechanics, vol.
26, 353–378) pointed out a rather serious internal inconsistency regarding the role
of water vapor and clouds in the way the physics of the greenhouse effect is normally
evaluated, even among expert scientists and expert sources of information. (See, e.g., the
comment "without [the greenhouse effect], the planet would be 65 degrees colder"
by Jerry Mahlman in the February 2004 issue of Crisis Magazine,
http://www.crisismagazine.com/february2004/feature1.htm, and the description of the
Greenhouse Effect on the EPA's global warming for kids webpage: http://www.epa.gov/
globalwarming/kids/greenhouse.html.) Lindzen notes the artificial inevitability of
the predominance of CO2 radiative forcing as a climatic factor in the following passage:
"In most popular depictions of the greenhouse effect, it is noted that in the
absence of greenhouse gases, the Earth's mean temperature would be 255 K
[about 0 °F], and that the presence of infrared absorbing gases elevates this to
288 K [59 °F]. In order to illustrate this, only radiative heat transfer is included
in the schematic illustrations of the effect (Houghton et al. 1990, 1992) [IPCC
reports]; this lends an artificial inevitability to the picture. Several points
should be made concerning this picture: 1. The most important greenhouse gas
is water vapor, and the next most important greenhouse substance consists in
clouds; CO2 is a distant third (Goody & Yung 1989). 2. In considering an atmosphere
without greenhouse substances (in order to get 255 K), clouds are retained
for their visible reflectivity while ignored for their infrared properties.
More logically, one might assume that the elimination of water would also lead
to the absence of clouds, leading to a temperature of about 274 K [or 278 K depending
on what value of the solar irradiation factor is used] rather than 255
K. 3. Pure radiative heat transfer leads to a surface temperature of about 350
K rather than 288 K. The latter temperature is only achieved by including a
convective adjustment that consists simply in adjusting the vertical temperature
gradient so as to avoid convective instability while maintaining a consistent radiative
heat flux. . . ." (pp. 359–361)2
2 A more pedagogical discussion of the greenhouse effect is given by Lindzen and Emanuel
(2002) in Encyclopedia of Global Change: Environmental Change and Human Society, Volume
1, Andrew S. Goudie, editor in chief, pp. 562–566, Oxford University Press, New York, 710 pp.
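For reference, the 255 K figure in the passage above follows from the standard zero-dimensional energy-balance estimate (a textbook calculation, not specific to Lindzen's paper), which equates absorbed solar radiation with blackbody emission:

\[
T_e = \left[\frac{S\,(1-\alpha)}{4\sigma}\right]^{1/4}
    = \left[\frac{1368 \times (1 - 0.30)}{4 \times 5.67 \times 10^{-8}}\right]^{1/4}
    \approx 255\ \mathrm{K},
\]

where S ≈ 1368 W/m² is the solar constant, α ≈ 0.30 is the planetary albedo, and σ is the Stefan-Boltzmann constant. Lindzen's point 2 amounts to noting that a hypothetical water-free atmosphere would also lose its cloud albedo; repeating the calculation with α near 0.0–0.1 raises the estimate into the 271–278 K range he cites.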
Hu et al. (2000, Geophysical Research Letters, vol. 27, 3513–3516) added that as
the sophistication of the parameterization of atmospheric convection increases, there is
a tendency for the climate model's sensitivity to variation in atmospheric CO2 concentration
to decrease considerably. In Hu et al.'s (2000) study, the change is a decrease
in the averaged tropical warming from 3.3 to 1.6 °C for a doubling of CO2,
primarily associated with the corresponding decrease in the calculated total atmospheric
column increase in water vapor from 29 percent to 14 percent.
Question 26. If you accept those radiation calculations as valid, please explain
why you seem to believe that the energy trapped by the greenhouse gases will have
a small effect whereas you seem to believe that small changes in solar energy will
have very large climatic effects?
Response. In addition to my answers under Questions No. 19, 20 and 25 above,
I would like to point out that the Sun's radiation is not only variable but varies
in the ultraviolet part of the electromagnetic spectrum often by factors of 10 or
more. The question about the relative effects of anthropogenic greenhouse gases and
the Sun's radiation in terms of radiative forcing is certainly of interest, but it does
not add much to my current research quest to understand the Earth's mean climatic
state and its nonlinear manifestations.
Question 27. Please explain why you think the physically based climate models
seem to quite satisfactorily represent the seasonal cycles of the climate at various
latitudes based on the varying distributions of solar and infrared energy, but then
would be so far off in calculating the climatic response for much smaller perturba-
tions to solar radiation and greenhouse gases?
Response. As indicated below, the premise of this question, that computer climate
models satisfactorily represent the seasonal cycles of climate, is not an
assured statement of fact. This is why the follow-up question cannot be logically
answered.
For example, E. K. Schneider (2002, Journal of Climate, vol. 15, 449–469) noted
that:
"[a]t this writing, physically consistent and even flux-corrected coupled atmosphere-ocean
general circulation models (CGCMs) have difficulty in producing a
realistic simulation of the equatorial Pacific SST [sea surface temperature], including
annual mean, annual cycle, and interannual variability. Not only do the
CGCM simulations have significant errors, but also there is little agreement
among models."
In a systematic comparison of the performance of 23 dynamical ocean-atmosphere
models, Davey et al. (2002, Climate Dynamics, vol. 18, 403–420) found that "no single
model is consistent with the observed behavior in the tropical ocean regions,"
as the model biases are large and gross errors are readily apparent. Without flux
adjustment, most models produced an annual mean equatorial sea surface temperature
in the central Pacific that is too cold by 2–3 °C. All GCMs except one simulated
the wrong sign of the east-west SST gradient in the equatorial Atlantic. The GCMs
also incorrectly simulated the seasonal climatology in all ocean sections and its
interannual variability in the Pacific Ocean.
Question 28. In regard to your answers to the previous questions, to what extent
is your indication of a larger climate sensitivity for solar than greenhouse gases due
to quantitative analysis of the physics and to what extent due to your analysis of
statistical correlations? Is this greater responsiveness for solar evident in the base-
line climate system, or just for perturbations, and could you please explain?
Response. Please see my answers to Questions No. 26 above and 30 below.
Question 29. Please explain why you seem to accept that solar variations, volcanic
eruptions, land cover change, and perhaps other forcings can have a significant cli-
matic influence, but changes in CO2 do not or cannot have a comparable influence?
Response. Please see my answers to Question No. 30.
Question 30. Could you please clarify why it is that you think the best way to
get an indication of how much the climate will change due to global-scale changes
in greenhouse gases or in solar radiation is to look at the regional level rather than
the global scale? How would you propose to distinguish a natural variation from a
climate change at the local to regional level?
Response. Questions No. 28, 29 and 30 seem to be based on the unreasonable pre-
sumption that some special insight about the effects of solar irradiation or land
cover changes or even volcanic eruptions must be invoked or answered in order to
challenge the role of carbon dioxide forcing in the climate system. That presumption
is illogical. My basic view and research interest regarding carbon dioxide and the ongo-
ing search for the right tool for modeling aspects of the Earth's climate system can
be briefly summarized by my answers to Questions No. 19, 25, 26, 27 and perhaps
20.
As to your specific question on distinguishing a natural variation (either inter-
nally generated or externally introduced by solar variation or volcanic eruption)
from a climate change caused by anthropogenic factors like land cover changes or carbon
dioxide at the local to regional level, there is possibly a somewhat surprising an-
swer. If one wishes to single out the potential effects of man-made carbon dioxide
against other natural and anthropogenic factors, as hinted by your question, then
the answer is clear: the CO2 effect is expected to be small, in the sense that its po-
tential signals will likely be overwhelmed by the expected effects
of other factors. It is a scientific fact that the signal of CO2 on the climate may
be expected only over a very long time baseline and over a rather large areal extent.
For example, Zhao and Dirmeyer (2003, COLA Technical Report No. 150; available
at http://grads.iges.org/pubs/tech.html), in their modeling experiments that at-
tempt to account for the realistic effects of land cover changes, sea surface tempera-
ture changes and for the role of added atmospheric CO2, found that
[w]hen observed CO2 concentrations are specified in the model across the 18-
year period, . . . we do not find a substantially larger warming trend than in
CTL [with no change in CO2 concentration], although some small increase is
found. The weak impact of atmospheric CO2 changes may be due to the small
changes in specified CO2 during the model simulation compared to the doubling
CO2 simulation, or the short length of the integrations. It is clear that the rel-
atively strong SST [sea surface temperature] influence in this climate model is
the driver of the [observed] warming.
Please also consider the point made by Lindzen (2002) under Question No. 8 above
concerning the difficulties in linking the observed warming trend of the deep ocean
(without challenging the quality and error of those deep ocean temperature data)
to anthropogenic CO2 forcing. Finally, I wish to note that Mickley et al. (2004, Jour-
nal of Geophysical Research, vol. 109, D05106) managed to use climate model sim-
ulation results to demonstrate the limitations in the use of radiative forcing as
a measure of relative importance of greenhouse gases to climate change. . . . While
on a global scale CO2 appears to be a more effective global warmer than tropo-
spheric ozone per unit forcing, regional sensitivities to increased ozone may lead to
strong climate responses on a regional scale.
Question 31. How does your recent article relate to your assignments at the Har-
vard Smithsonian Observatory? Is paleoclimate part of the task of this observatory?
Response. The publications of Soon et al. (2003) or Soon et al. (2004) are possible
because of research grants that I and my collaborators obtained through competitive
proposals to several research funding sources. I am a trust-fund employee at the
Harvard-Smithsonian Center for Astrophysics and the support of my position and
research work here is mainly through my own research initiative and proposal ap-
plication. The scientific learning about paleoclimatic reconstruction presented in
Soon et al. (2003) is related to my research interest in the mechanisms of sun-cli-
mate relation, especially for relevant physical pathways and processes on
multidecadal and centennial time scales. Additional fruit of my independent re-
search and labor in the area of sun-climate physics, funded or unfunded, is exempli-
fied by the March 2004 book The Maunder Minimum and The Variable Sun-Earth
Connection (see http://www.wspc.com/books/physics/5199.html) by W. Soon and
S. Yaskell (published by World Scientific Publishing Company). It might also be in-
structive to note that paleoclimate researchers have been speculating about long-
term variability of the sun as the cause of centennial- to millennial-scale variability
seen in their proxy records.
Question 32. In your testimony, you said that climate change is part of nature.
Please describe what you meant, since obviously, climate changes have occurred due,
in part, to changes in various forcings, such as solar, continental drift, atmospheric
composition, asteroid impacts, etc., rather than being just completely random events.
Could you provide estimates of how large you consider future forcings might be and
how big the climate change they might cause could be?
Response. On this occasion, I am referring to the fact that change or varia-
bility in climate is most likely the rule, rather than the exception, of the climate sys-
tem. But I was not speaking about or trying to imply the factors of change, either
naturally produced or man-made. I apologize for any potential confusion. It is cer-
tainly reasonable to suggest that those climatic changes may arise from forcings,
but it would be unwise to rule out internally generated manifestations of climatic
variables that could be purely stochastic in origin. I would strongly recommend the
pedagogical discussion by Professor Carl Wunsch of MIT in Wunsch (1992, Oceanog-
raphy, vol. 5, 99-106) and Wunsch (2004, Quantitative estimate of the
Milankovitch-forced contribution to observed Quaternary climate change, working
manuscript downloadable from http://puddle.mit.edu/cwunsch/).
I cannot speculate on future climate forcings and resultant climatic changes be-
cause I found no basis for doing so.
Question 33. Please provide a comparable estimate, with some supporting exam-
ples from the past, of how big you think the decadal (or 50-year if you prefer)
change in the hemispheric/global climate could be due to natural variability? If you
prefer to focus on the regional scale change, could you provide an indication of any
expected change in the degree of regional variability about the hemispheric and
global values, and what the mechanism for this might be?
Response. This question seems related, aiming at a quantitative
comparison of how large natural climate variability on regional or hemispheric scales
can be against the expected future changes. Again, with no intention to
devalue this interesting question, I do not have sufficient knowledge nor ability to
venture such an estimate. In fact, I would go so far as to say that if the estimates of
variability for both the past and the future were known within a reasonable range of un-
certainty, then the scientific research program to address questions about
the role of added carbon dioxide would no longer require further funding or execution, since
we would have obtained all the relevant answers. But you may have judged from my an-
swers given throughout this Q&A that much remains to be quantified and under-
stood, and that hard scientific research must continue.
Question 34. Please explain the scientific basis for your testimony that one
should expect the CO2 greenhouse effect to work its way downward toward the sur-
face.
Response. A combined answer to this question is given under Question No. 35.
Question 35. Do you believe that there is greater greenhouse trapping of energy
in the troposphere than at the surface and that the atmosphere has a low heat ca-
pacity? If so, how big is this temperature difference?
Response. It is broadly agreed and assumed that carbon dioxide, when released
into the air, tends to mix quickly and so is distributed widely
throughout the whole column of the atmosphere. The air near the surface is already
dense and moist, so the addition of more carbon dioxide will introduce very little imbal-
ance in the radiation energy budget there. In contrast, adding more carbon dioxide to
the thinner and drier air of the troposphere will cause a chain of noticeable effects.
First, the presence of more carbon dioxide in the uppermost part of the atmosphere
will cause more infrared radiation to escape into space because there are more car-
bon dioxide molecules to channel this infrared radiation upward and outward
unhindered. Part of that infrared radiation is also emitted downward to the
lower parts of the atmosphere and the surface, where it is reabsorbed by carbon di-
oxide and the thicker air there. The layer of air in the lower and middle tropo-
sphere, being more directly in contact with this down-welling radiation, is expected
to heat more than air near the surface. Thus, adding more carbon dioxide to the
atmosphere should cause more warming of the air around the height of two to seven
kilometers. (Please consider, for example, the discussion by Kaser et al. (2004) under
Question No. 8 about the ineffectiveness of added longwave radiation from a di-
rect addition of atmospheric CO2, or of atmospheric temperature change, in explaining
the modern retreat of glaciers at Kilimanjaro.) In other words, the clearest impact
of the carbon dioxide greenhouse effect should manifest itself in the lower and mid-
troposphere rather than near the Earth's surface. Here, I am mostly speaking on the
basis of expectation from pure radiative forcing considerations.
Such a qualitative description is not complete, even though that is roughly what
was modeled in the most sophisticated general circulation models (see e.g., Chase
et al., 2004, Climate Research, vol. 25, 185-190), because it misses the key roles of
atmospheric convection and waves as well as all the important hydrologic processes
(please see e.g., Neelin et al., 2003, Geophysical Research Letters, vol. 30 (no. 24),
2275, and consider additional remarks about water vapor and atmospheric convec-
tion under Question No. 25 as well as the discussion on climate forcing factors and cli-
mate response sensitivities under Question No. 20). Some theoretical proposals ex-
pect a warming of the surface relative to the low- and mid-troposphere because of
nonlinear climate dynamics (Corti et al., 1999, Nature, 398, 799-802). That expecta-
tion arises from the differential surface response associated with the pattern of Cold Ocean
and Warm Land (COWL), which becomes increasingly unimportant with distance
away from the surface (rather than just the difference in heat capacity mentioned
in your question) [see Soon et al., 2001 for additional discussion]. Nevertheless, no
GCM has yet incorporated such an idea into an operationally robust simulation of
the climate system response to greenhouse effects from added CO2. In the latest
global warming work, Neelin et al. (2003), for example, still distinctly differentiate
between mechanisms for tropical precipitation that are initiated through CO2 warm-
ing of the troposphere and through El Nino warming rooted in oceanic surface tem-
perature and subsurface thermocline dynamics. (Further note that their model ex-
periments [see Figures 2b+2c and 10b+10c of Chou and Neelin, 2004, Mechanisms
of global warming impacts on regional tropical precipitation, in preparation for
Journal of Climate; available at http://www.atmos.ucla.edu/?csi/REF/] also clearly
show that the troposphere warmed significantly more than the surface with the dou-
bling of atmospheric CO2, as discussed by Chase et al. 2004 below.)
But it is worth noticing that current global observations show that, at least
over the 1979-2003 interval, the lower tropospheric temperatures are not warming
as fast as the surface temperatures (see Christy et al. 2003, Journal of Atmospheric
and Oceanic Technology, vol. 20, 613-629; for additional confidence in the results
derived by the University of Alabama-Huntsville group, please see Christy and Nor-
ris, 2004, Geophysical Research Letters, vol. 31, L06211). This observed fact is in
contradiction to the accelerated warming of the mid and upper troposphere relative
to the surface simulated in current models (Chase et al. 2004). Chase et al. (2004) ar-
rive at the following conclusions, upon examining results from 4 climate models in
both unforced scenarios and scenarios forced with increased atmospheric greenhouse
gases and the direct aerosol effect3:
3 Such a study should also be consistently challenged by the discussion under Question No.
20 about the adequacy of studying responses from a combination of incomplete forcings,
though my primary purpose here is to illustrate the theoretical expectation of CO2 forcing de-
riving from state-of-the-art climate models.
Model simulations of the period representative of the greenhouse-gas and
aerosol forcing for 1979-2000 generally show a greatly accelerated and detect-
able warming at 500 mb relative to the surface (a 0.06°C per decade increase).
Considering all possible simulated 22-yr trends under anthropogenic forc-
ing, a strong surface warming was highly likely to be accompanied by acceler-
ated warming at 500 mb [i.e., 987 out of 1018 periods, or 97 percent of the cases,
had a larger warming at 500 mb than at the surface] with no change in likeli-
hood as forcings increased over time.
In simulated periods where the surface warmed more quickly than 500 mb,
there was never a case [emphasis added] in which the 500 mb temperature did
not also warm at a large fraction of the surface warming. A 30 percent accelera-
tion at the surface was the maximum simulated, as compared with an observed
acceleration factor of at least 400 percent of the mid-troposphere trend.
In cases where there was a strong surface warming and the surface
warmed more quickly than at 500 mb in the forced experiments, there was
never a case in which the 500 mb-level temperatures did not register a statis-
tically significant (p < 0.1) trend (i.e., a trend detectable with a simple linear re-
gression model). The minimum p value of approximately 0.08 occurred in the
single case in which the significance was not greater than 99 percent.
It was more likely that the surface warmed relative to the mid-troposphere
under control simulations than under forced simulations.
At no time, in any model realization, forced or unforced, did any model sim-
ulate the presently observed situation of a large and highly significant surface
warming accompanied with no warming whatsoever aloft. (p. 189)
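To illustrate the detection criterion quoted above (a trend "detectable with a simple linear regression model" at p < 0.1), the following short Python sketch applies such a test to a synthetic 22-year series; the series and its noise level are invented for illustration only and do not reproduce the Chase et al. (2004) results.

    # Illustration of the quoted detection criterion: fit a simple linear
    # regression to an annual series and ask whether the trend has p < 0.1.
    # The series here is synthetic; it does not reproduce Chase et al. (2004).
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(0)
    years = np.arange(1979, 2001)                # a 22-yr window, as in the study
    temps = 0.006 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

    fit = linregress(years, temps)
    print(f"trend = {fit.slope * 10:.3f} C/decade, p = {fit.pvalue:.3f}")
    print("detectable at p < 0.1" if fit.pvalue < 0.1 else "not detectable")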
Question 36. The grants that are described as supporting your analysis seem to
have much more to do with the sun or unrelated pattern recognition than with cli-
mate history (Air Force Office of Scientific Research-Grant AF496200210194;
American Petroleum Institute-Grants 0100004579 and 2002100413; NASA-Grant
NAG7635; and NOAA-Grant NA96GP0448). Could you please describe how much
funding you received and used in support of this study, all of the sources and the
duration of that funding, and the relevance of those grant topics to the article?
Response. All sources of funding for my and my colleagues' research efforts that
resulted in the publication of Soon and Baliunas (2003) and Soon et al. (2003) were
openly acknowledged. In other words, all sources of funding were disclosed in the
manuscripts when they were submitted for publication; all sources of funding were
also disclosed to readers in the printed journal articles. I am not the principal inves-
tigator for some of the grants we received (e.g., the NOAA grant was awarded to
Professor David Legates), so I am not in the privileged position to provide exact
quantitative numbers. But throughout the 2001-2003 research interval in which our
work was carried out, the funding we received from the American Petroleum Insti-
tute was a small fraction of the funding we received from governmental research
grants.
The primary theme of my research interest is on physical mechanisms of the sun-
climate relationship. This is why researching into the detailed patterns of local and
regional climate variability as published in Soon et al. (2003) is directly relevant to
that goal. Please also consider my research position listed under Question No. 31
above.
Question 37. Have you been hired by or employed by or received grants from orga-
nizations that have taken advocacy positions with respect to the Kyoto Protocol, the
U.N. Framework Convention on Climate Change, or legislation before the U.S. Con-
gress that would affect greenhouse gas emissions? If so, please identify those organi-
zations.
Response. I have not knowingly been hired by, nor employed by, nor received
grants from any such organizations described in this question.
Question 38. Please describe the peer review process that took place with respect
to your nearly identical articles published both in Climate Research and in Energy
and Environment, including the number of reviewers and the general content of the
reviewers' suggested edits, criticisms or improvements.
Response. The Climate Research paper (Soon and Baliunas, 2003, Climate Re-
search, vol. 23, 89-110) was submitted for publication and went through a routine
peer-review process and was eventually approved for publication. The main content
of the review was to propose: (a) reorganization of materials, including elimination of
discussions on ENSO and GCMs; (b) removing tone problems by eliminating criti-
cisms of previous EOF and superposition analyses; (c) reducing quotes, especially
those by Hubert Horace Lamb, to improve readability; and (d) reviewing changes in
each region with the same thoroughness. The July 3, 2003, email (Attachment I
below) from the director of Inter-Research, Otto Kinne, who publishes Climate Re-
search, is enclosed below to confirm that the review process was fairly rigorous and
that all parties involved had carried out their roles and duties in this time-honored sys-
tem properly.
The extended and more complete paper by Soon et al. (2003, Energy & Environ-
ment, vol. 14, 233-296) was submitted to Energy & Environment for consideration
together with the accepted Climate Research manuscript. Energy & Environment's
editorial decision was to send our manuscript for review and, after acceptance, to in-
clude in its editorial in Energy & Environment, volume 14, issues 2&3, a footnote
referring to the Climate Research paper.
Finally, we wish to correct the false impression, introduced by Professor
Mann both during the testimony and in the public media, that his attack on the papers
by Soon and Baliunas (2003) and Soon et al. (2003), in a FORUM article in the
American Geophysical Union's Eos newspaper (Mann et al., 2003, Eos, vol. 84, 256-
258), was either rigorously peer-reviewed or represented the widespread view of the
community. Contrary to Professor Mann's public statements, a FORUM article in
Eos is said to be only stating a personal point of view (http://www.agu.org/pubs/
eosguidelines.html#authors). Whatever peer review was done did not in-
clude soliciting comments from the authors of the papers being criticized. We first
learned of this FORUM article from the AGU's press release No. 0319, Leading
Climate Scientists Reaffirm View that Late 20th Century Warming Was Unusual
and Resulted From Human Activity (http://www.agu.org/scisoc/prrl/
prrl0319.html). See Soon et al. (2003b, Eos, vol. 84 (44), 473-476) for our own re-
sponse to the Mann et al. FORUM article.
Two deeply flawed (and nearly identical) recent papers by astronomers Soon and
Baliunas (one of them with some additional co-authors; both henceforth referred to
as SB) have been used to challenge the scientific consensus. I outline the 3 most
basic problems with their papers here:
Much research has described both the written and oral histories of the climate
as well as the proxy climate records (e.g., ice cores, tree rings, and sedimentations)
that have been derived for the last millennium. It is recognized that such records
are not without their biases; for example, historical accounts often are tainted with
the preconceived beliefs and limited experiences of explorers and historians while
trees and vegetation respond not just to air temperature fluctuations, but to the en-
tire hydrologic cycle of water supply (precipitation) and demand (which is, in part,
driven by air temperature). Nevertheless, such accounts indicate that the climate
of the last millennium has been characterized by considerable variability and that
extended periods of cold and warmth existed. It has been generally agreed that dur-
ing the early periods of the last millennium, air temperatures were warmer and
that temperatures became cooler toward the middle of the millennium. This gave
rise to the terms the "Medieval Warm Period" and the "Little Ice Age," respectively.
However, as these periods were not always consistently warm or cold nor were the
extremes geographically commensurate in time, such terms must be used with care.
A BIASED RECORD PRESENTED BY THE IPCC AND NATIONAL ASSESSMENT
In a change from its earlier report, however, the Third Assessment Report of the
Intergovernmental Panel on Climate Change (IPCC), and now the U.S. National As-
sessment of Climate Change, both indicate that hemispheric or global air tempera-
tures followed a curve developed by Dr. Mann and his colleagues in 1999. This curve
exhibits two notable features. First is a relatively flat and somewhat decreasing
trend in air temperature that extends from 1000 AD to about 1900 AD and is associ-
ated with a relatively high degree of uncertainty. This is followed by an abrupt rise
in air temperature during the 1900s that culminates in 1998 with the highest tem-
perature on the graph. Virtually no uncertainty is shown for the data of the last
century. The conclusion reached by the IPCC and the National Assessment is that
the 1990s are the warmest decade with 1998 being the warmest year of the last
millennium.
Despite the large uncertainty, the surprising lack of variability in the record gives
the impression that climate remained relatively unchanged through most of the last
millennium, at least until human influences began to cause an abrupt increase in
temperatures during the last century. Interestingly, Mann et al. replace the proxy
data for the 1900s with the instrumental record, and no uncertainty characterization
is provided. This too yields a false impression: that the instrumental record is con-
sistent with the proxy data and that it is error free. It is neither. The instrumental
record contains numerous uncertainties, resulting from a lack of coverage over the
world's oceans, an under-representation of mountainous and polar regions as well
as under-developed nations, and the presence of urbanization effects resulting from
the growth of cities. Even if a modest uncertainty of 0.1°C were imposed on the
instrumental record, the claim of the 1990s being the warmest decade would imme-
diately become questionable, as the uncertainty window would overlap with the un-
certainty associated with earlier time periods. Note that if the satellite temperature
record (where little warming has been observed over the last 20 years) had been
inserted instead of the instrumental record, it would be impossible to argue that the
1990s are the warmest decade.
RATIONALE FOR THE SOON ET AL. INVESTIGATION
So we were left to question why the Mann et al. curve seems to be at variance
with the previous historical characterization of climatic variability. Investigating
more than several hundred studies that have developed proxy records, we came to
the conclusion that nearly all of these records show considerable fluctuations in air
temperature over the last millennium. Please note that we did not reanalyze the
proxy data; the original analysis from the various researchers was left intact. Most
records show that the coldest period is commensurate with at least a portion of what is
termed the "Little Ice Age" and the warmest conditions are concomitant with at
least a portion of what is termed the "Medieval Warm Period."
But our conclusion is entirely consistent with conclusions reached by Drs. Bradley
and Jones that not all locations on the globe experienced cold or warm conditions
simultaneously. Moreover, we chose not to append the instrumental record, but to
compare apples with apples and determine if the proxy records themselves indeed
confirm the claim of the 1990s being the warmest decade of the last millennium.
That claim is not borne out by the individual proxy records.
However, the IPCC report, in the chapter with Dr. Mann as a lead author and
his colleagues as contributing authors, also concludes that research by Drs. Mann,
Jones, and their colleagues supports the idea that the 15th to 19th centuries were
the coldest of the millennium over the Northern Hemisphere overall. Moreover, the
IPCC report also concludes that the Mann and Jones research show[s] tempera-
tures from the 11th to 14th centuries to be about 0.2°C warmer than those from
the 15th to 19th centuries. This again is entirely consistent with our findings.
Where we differ with Dr. Mann and his colleagues is in their construction of the
hemispheric averaged time-series, their assertion that the 1990s are the warmest
decade of the last millennium, and that human influences appear to be the only sig-
nificant factor on globally averaged air temperature. Reasons why the Mann et al.
curve fails to retain the fidelity of the individual proxy records are detailed statis-
tical issues into which I will not delve. But our real difference of opinion focuses
solely on the Mann et al. curve and how we have concluded it misrepresents the
individual proxy records. In a very real sense, this is an important issue that sci-
entists must address before the Mann et al. curve is taken as fact.
Our work has been met with much consternation from a variety of sources and
we welcome healthy scientific debate. After all, it is disagreements among scientists
that often lead to new theories and discoveries. However, I am aware that the edi-
tors of the two journals that published the Soon et al. articles have been vilified and
the discussion has even gone so far as to suggest that Drs. Soon and Baliunas be
barred from publishing in the journal Climate Research. Such tactics have no place
in scientific debate and they inhibit the free exchange of ideas that is the hallmark
of scientific inquiry.
CLIMATE IS MORE THAN MEAN GLOBAL AIR TEMPERATURE
In closing, let me state that climate is more than simply annually averaged global
air temperature. Too much focus has been placed on divining air temperature time
series, and such emphasis obscures the true issue in understanding climate change
and variability. If we are truly to understand climate and its impacts and driving
forces, we must push beyond the tendency to distill it to a single annual number.
Proxy records, which provide our only possible link to the past, are incomplete at
best. But when these records are carefully and individually examined, one reaches
the conclusion that climate variability has been a natural occurrence, and especially
so over the last millennium. And given the uncertainties in the proxy and instru-
mental records, an assertion of any decade as being the warmest in the last millen-
nium is premature.
I'm sorry that a discussion that is best conducted among scientists has made its
way to a U.S. Senate committee. But hopefully a healthy scientific debate will not
be compromised and we can push on toward a better understanding of climate
change.
I again thank you for the privilege of speaking before you today.
Mercury is clearly a global issue. Recent estimates are that, in 1998, some 2,340
tons of mercury were emitted globally through industrial activity; of these, more
than half, or 1,230 tons, came from Asian countries, primarily China.1 These find-
ings are similar to those of other global inventories.2 In addition, it is estimated
that another 1,300 tons of mercury emanates from land-based natural sources glob-
ally, including abandoned mining sites and exposed geological formations. Another
1,100 tons or so issues from the world's oceans, representing both new mercury emit-
ted by undersea vents and volcanoes, and mercury cycled through the ocean from
the atmosphere previously. Recent findings from the large United States-Canadian
METAALICUS field study in Ontario, Canada showed that a fairly small amount
of deposited mercury, no more than 20 percent or so, re-emits to the atmosphere,
even over a 2-year period. The implications of this are profound: mercury may be
less mobile in the environment than we previously thought; once it is removed from
the atmosphere, it may play less of a role in the so-called "grasshopper effect"3
where persistent global pollutants are believed to successively deposit and re-emit
for many years and over thousands of miles.
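To put the inventory figures just cited side by side, the following minimal Python sketch simply tallies the quoted 1998 source estimates; the tonnages are those given above, not an independent inventory.

    # Tally of the 1998 global mercury source estimates quoted above (tons/yr).
    sources = {
        "industrial (global)": 2340,  # of which ~1230 tons from Asian countries
        "land-based natural":  1300,  # mining sites, exposed geological formations
        "oceanic":             1100,  # undersea vents/volcanoes plus recycled deposition
    }
    total = sum(sources.values())
    for name, tons in sources.items():
        print(f"{name:22s} {tons:5d} tons/yr ({tons / total:5.1%})")
    print(f"{'total':22s} {total:5d} tons/yr")
    print(f"Asian share of industrial emissions: {1230 / 2340:.1%}")  # 'more than half'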
For much of the twentieth century, mercury was an essential part of industrial
products, such as batteries and switches, or a key ingredient in such other products
as house paints. These industrial uses of the element declined significantly in the
latter half of the century, and are now less than 10 percent of their use of fifty years
ago.4 Professor Francois Morel of Princeton University and colleagues recently ana-
lyzed newly caught Pacific tuna for mercury,5 and compared those results to the
mercury content of similar tuna caught in the 1970s. Despite changes in mercury
emissions to the atmosphere in those thirty years,6 and a matching increase in the
mercury depositing from the atmosphere to rivers and oceans below, Professor Morel
found that mercury levels in tuna have not changed over that time. One conclusion
is that the mercury taken up by such marine fish as tuna is not coming from
sources on land, such as utility power plants, but from natural submarine sources
of mercury, including deep sea volcanoes and ocean floor vents. The implications are
that changes in mercury sources on the continents will not affect the mercury levels
found in open ocean foodfish like tuna.
An estimate in 2001 by scientists of the Geological Survey of Canada and others7
indicated that geological emissions of mercury, as well as emissions from inactive
industrial sites on land, are five to seven times as large as had been estimated ear-
lier. Recent measurements in the stratosphere by EPRI researchers show a rapid
removal of mercury in the upper atmosphere, allowing for additional sources at the
surface while still maintaining the measured rates of deposition and removal needed
for a global balance of sources and sinks. As a result, it is now possible to attribute
a greater fraction of the mercury entering U.S. waters to background natural
sources rather than industrial emissions from the U.S. or elsewhere globally.
HOW COULD POTENTIAL MERCURY REDUCTIONS CHANGE MERCURY DEPOSITION?
4 Engstrom, D.R., E.B. Swain, Recent Declines in Atmospheric Mercury Deposition in the
Upper Midwest, Environ. Sci. Technol. 1997, 31, 960-967.
5 Kraepiel, A.M.L., K. Keller, H.B. Chin, E.G. Malcolm, F.M.M. Morel, Sources and Variations
of Mercury in Tuna, Meeting of American Society for Limnology and Oceanography, Salt Lake
City, Utah: January 2003.
6 Slemr, F., E-G. Brunke, R. Ebinghaus, C. Temme, J. Munthe, I. Wangberg, W. Schroeder,
A. Steffen, T. Berg, Worldwide trend of atmospheric mercury since 1977, Geophys. Res. Ltrs.,
30 (10), 23-1 to 23-4.
7 Richardson G. M., R. Garrett, I. Mitchell, M. Mah-Paulson, T. Hackbarth, Critical Review
On Natural Global And Regional Emissions Of Six Trace Metals To The Atmosphere, Inter-
national Lead Zinc Research Organization, International Copper Association, Nickel Producers
Environmental Research Association.
8 EPRI Technical Report 1005224, A Framework for Assessing the Cost-Effectiveness of Elec-
tric Power Sector Mercury Control Policies, EPRI, Palo Alto, CA, May 2003.
Current U.S. utility emissions of mercury are about 46 tons per year. At the same
time, a total of about 179 tons of mercury deposit each year in the U.S., from all
sources global and domestic. One proposed management scenario examined cutting
these utility emissions by 47 percent, to 24 tons per year. The analysis showed that
this cut results in an average 3 percent drop in mercury deposition into the U.S.
Some isolated areas totaling about 1 percent of U.S. land area experience drops of
up to 30 percent in mercury deposited. The cost model used in association with
these calculations showed utility costs to reach these emission control levels would
amount to between $2 billion and $5 billion per year over 12 years. This dem-
onstrated that U.S. mercury deposition patterns are relatively insensitive to the
effects of this single category of sources.
In addition, most of the fish consumed in the U.S. comes from ocean sources,
which would be only marginally affected by a global reduction of 24 tons of mercury
per year due solely to U.S. controls. Wild fresh water fish in the U.S. would be ex-
pected to show a greater reduction in mercury content, but are a relatively small
part of the U.S. diet compared to ocean or farmed fish. When these changes were
translated into how much less mercury enters the U.S. diet, we found that 0.064
percent fewer children would be born at risk due to their mothers' taking in less
mercury from consumed fish. These results were based on the Federal dietary fish
consumption data. So, a drop of nearly half in utility mercury emissions results in
a drop of 3 percent (on average) in mercury depositing to the ground, and a drop
of less than one-tenth of a percent in the number of children at risk.
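The proportions in this scenario can be checked with simple arithmetic; the sketch below, in Python, uses only the figures quoted in this testimony.

    # Back-of-the-envelope check of the utility control scenario described above.
    utility_now        = 46.0   # current U.S. utility mercury emissions, tons/yr
    utility_controlled = 24.0   # emissions under the proposed scenario, tons/yr
    us_deposition      = 179.0  # total mercury deposited in the U.S., tons/yr

    print(f"Emission cut: {1 - utility_controlled / utility_now:.0%}")  # ~48%, quoted as 47%

    # Even if every ton removed had previously deposited within the U.S., the
    # ceiling on the deposition change would be about 12 percent; the modeled
    # average drop of 3 percent sits well below that ceiling because much of
    # the U.S. utility plume disperses globally rather than depositing at home.
    print(f"Upper bound on deposition change: "
          f"{(utility_now - utility_controlled) / us_deposition:.0%}")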
DECISIONMAKING UNDER UNCERTAINTY
These recent findings on mercury sources, dynamics, and management are a small
part of the massive international research effort to understand mercury and its im-
pacts on the human environment. EPRI and others, including the U.S. Environ-
mental Protection Agency and the U.S. Department of Energy, are racing to clarify
the complex interactions of mercury with geochemical and biological systems, vital
to understanding mercury's route to human exposure and potential health effects.
With this improved understanding, informed decisions can be made on the best
ways to manage mercury.
Thank you for the opportunity to deliver these remarks to the Committee.
STATEMENT OF DEBORAH C. RICE, PH.D., MAINE DEPARTMENT OF
ENVIRONMENTAL PROTECTION, AUGUSTA, MAINE
I would like to thank the Committee for this opportunity to present information
on the adverse health consequences of exposure to methylmercury in the United
States. Until 3 months ago, I was a senior toxicologist in the National Center for
Environmental Assessment in the Office of Research and Development at the Envi-
ronmental Protection Agency. I am a co-author of the document that reviewed the
scientific evidence on the health effects of methylmercury for EPA, and which in-
cluded the derivation of the acceptable daily intake level for methylmercury.
I would like to focus my presentation on four points that are key to understanding
the health-related consequences of environmental mercury exposure. One: there is
unequivocal evidence that methylmercury harms the developing human brain. Two:
the Environmental Protection Agency used analyses of three large studies in its der-
ivation of an acceptable daily intake level, including the study in the Seychelles Is-
lands which found no adverse effects. Three: 8 percent of women of child-bearing
age in the United States have levels of methylmercury in their bodies above this
acceptable level. And four: cardiovascular disease in men related to low levels of
methylmercury has been documented, suggesting that a potentially large segment
of the population may be at risk for adverse health effects.
The adverse health consequences to the nervous system of methylmercury expo-
sure in humans were recognized in the 1950s with the tragic episode of poisoning
in Minamata Bay in Japan, in which it also became clear that the fetus was more
sensitive to the neurotoxic effects of methylmercury than was the adult. A similar
pattern of damage was apparent in subsequent episodes of poisoning in Japan and
Iraq. These observations focused the research community on the question of whether
exposure to concentrations of methylmercury present in the environment might be
producing neurotoxic effects that were not clinically apparent.
As a result, over half a dozen studies have been performed around the world to
explore the effects of environmental methylmercury intake on the development of
the child. Studies in the Philippines (Ramirez et al., 2003), the Canadian Arctic
(McKeown-Eyssen et al., 1983), Ecuador (Counter et al., 1998), Brazil (Grandjean et
al., 1999), French Guiana (Cordier et al., 1999) and Madeira (Murata et al., 1999)
all found adverse effects related to the methylmercury levels in the children's bod-
ies. These included auditory and visual effects, memory deficits, deficits in
visuospatial ability, and changes in motor function.
In addition to the above studies, there have been three major longitudinal studies
on the effects of exposure to the mother on the neuropsychological function of the
child: in the Faroe Islands in the North Atlantic (Grandjean et al., 1997), in the
Seychelles Islands in the Indian Ocean (Myers et al., 1995), and in New Zealand
(Kjellstrom et al., 1989). Two of these studies identified adverse effects associated
with methylmercury exposure, whereas the Seychelles Islands study did not. Im-
pairment included decreased IQ and deficits in memory, language processing, atten-
tion, and fine motor coordination. A National Research Council (NRC) National
Academy of Sciences panel evaluated all three studies in their expert review, con-
cluding that all three studies were well designed and executed (NRC, 2000). They
modeled the relationship between the amount of methylmercury in the mother's
body and the performance of the child on a number of neuropsychological tests.
From this analysis, they calculated a defined adverse effect level from several types
of behavior in each of the three studies. These adverse effect levels represent a dou-
bling of the number of children that would perform in the abnormally low range of
function. The National Academy of Sciences panel also calculated an overall adverse
effect level of methylmercury in the mother's body for all three of the studies com-
bined, including the negative Seychelles study. Thus the results of all three studies
were included in a quantitative manner by the NRC.
The Environmental Protection Agency (EPA) used the analyses of the NRC in the
derivation of the reference dose, or RfD, for methylmercury. The RfD is a daily in-
take level designed to be without deleterious effects over a lifetime. The EPA di-
vided the defined deleterious effect levels calculated by the NRC by a factor of 10
in its RfD derivation. There are two points that need to be made in this regard.
First, the factor of 10 does not represent a safety factor of 10, since the starting
point was a level that doubled the number of low-performing children. Second, the
EPA performed the relevant calculations for a number of measurements for each of
the two studies that found deleterious effects as well as the integrative analysis that
included all three studies modeled by the NRC, including the negative Seychelles
study. The RfD is 0.1 µg/kg/day based on the Faroe Islands study alone or the inte-
grative analysis of all three studies. The RfD would be lower than 0.1 µg/kg/day
if only the New Zealand study had been considered. Only if the negative Seychelles
Islands study were used exclusively for the derivation of the RfD, while ignoring the
values calculated for the Faroe Islands and New Zealand studies, would the RfD be
higher than the current value of 0.1 µg/kg/day. EPA believes that to do so would
be scientifically unsound, and would provide insufficient protection to the U.S. popu-
lation.
A substantial portion of U.S. women of reproductive age have methylmercury in
their bodies that is above the level that corresponds to the EPA's RfD. Data col-
lected over the last 2 years as part of the National Health and Nutritional Examina-
tion Survey (NHANES 99+) designed to represent the U.S. population (CDC, Web)
revealed that about 8 percent of women of child-bearing age had blood levels of
methylmercury above the level that the U.S. EPA believes is safe (Schober et al.,
2003). This translates into over 300,000 newborns per year potentially at risk for
adverse effects on intelligence and memory, ability to pay attention, ability to use
language, and other skills that are important for success in our highly technological
society.
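The order of magnitude of that figure can be reconstructed with one multiplication; the sketch below (Python) assumes roughly 4 million U.S. births per year, a round number that the testimony itself does not state.

    # Rough reconstruction of the "over 300,000 newborns per year" estimate.
    # ASSUMPTION: about 4 million U.S. live births per year (not in the testimony).
    births_per_year = 4_000_000
    fraction_above_rfd = 0.08   # NHANES: ~8% of women of child-bearing age

    print(f"Newborns potentially at risk: {births_per_year * fraction_above_rfd:,.0f}")
    # -> 320,000, consistent with "over 300,000"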
I would like to further comment here on the use of a factor of 10 by EPA to derive
the allowable daily intake level (RfD) for methylmercury from the defined effect lev-
els calculated by the National Research Council. The RfD corresponds to roughly 1
part per million (ppm) of methylmercury in maternal hair, from the defined effect
level of about 11 ppm calculated by the NRC. But we know that there is no evidence
of a threshold below which there are no adverse effects down to about 2-3 ppm in
hair, the lowest levels in the Faroe Islands study. In fact, there is evidence from
both the Faroe Islands (Budtz-Jorgensen et al., 2000) and New Zealand (Louise
Ryan, Harvard University, personal communication) studies that the change in ad-
verse effect in the child as a function of maternal methylmercury level may be
greater at lower maternal methylmercury levels than at higher ones. Therefore, the
so-called safety factor almost certainly is less than 10, and may be closer to non-
existent. Babies born to women above the RfD may be at actual risk, and not ex-
posed to a level 10 times below a risk level.
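In hair-concentration terms, the relationship just described is a single division; a minimal sketch (Python), using the figures quoted above:

    # The RfD-equivalent hair level is the NRC defined-effect level divided by
    # the uncertainty factor of 10 (figures as quoted in the testimony).
    defined_effect_hair_ppm = 11.0
    uncertainty_factor = 10.0
    print(f"RfD-equivalent hair level: "
          f"~{defined_effect_hair_ppm / uncertainty_factor:.0f} ppm")

    # Effects have been reported down to about 2-3 ppm hair in the Faroes cohort,
    # so the effective margin below ~1 ppm is far smaller than a factor of 10.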
There is an additional concern regarding the potential for adverse health con-
sequences as a result of environmental exposure to methylmercury. Several years
ago, a study in Finnish men who ate fish found an association between increased
methylmercury levels in hair and atherosclerosis, heart attacks, and death (Salonen
et al., 1995, 2000). Two new studies in the U.S. and Europe found similar associa-
tions between increased methylmercury levels in the bodies of men and cardio-
vascular disease (Guallar et al., 2002; Yoshizawa et al., 2002). Effects have been
identified at hair mercury levels below 3 ppm. It is not known whether there is a
level of methylmercury exposure that will not cause adverse effects. It is important
to understand that the cardiovascular effects associated with methylmercury may
put an additional, very large proportion of the population at risk for adverse health
consequences as a result of exposure to methylmercury from environmental sources.
In summary, there are four points that I would like the Committee to keep in
mind. First, at least eight studies have found an association between methylmercury
levels and impaired neuropsychological performance in the child. The Seychelles Is-
lands study is anomalous in not finding associations between methylmercury expo-
sure and adverse effects. Second, both the National Research Council and the Envi-
ronmental Protection Agency included the Seychelles Islands study in their anal-
yses. The only way the acceptable level of methylmercury could be higher would be
to ignore the two major positive studies that were modeled by the NRC, as well as
six smaller studies, and rely solely on the single study showing no negative effects
of methylmercury. Third, there is a substantial percentage of women of reproductive
age in the United States with levels of methylmercury in their bodies above what
EPA considers a safe level. As a result of this, over 300,000 newborns each year
are exposed to methylmercury above levels U.S. EPA believes to be safe. Fourth,
increased exposure to methylmercury may result in atherosclerosis, heart attack,
and even death from heart attack in men, suggesting that an additional large seg-
ment of the population may be at risk as a result of environmental methylmercury
exposure.
Thank you for your time and attention.
(c) EPA used a total uncertainty factor (UF) of 10 to derive the RfD, which is de-
signed to provide a margin of safety against adverse effects. EPA typically applies
a UF of 10 for inter-individual variability if the starting point is a no-observed-
adverse-effect level (NOAEL). If the starting point is the lowest level that has been
demonstrated to produce an effect, with a NOAEL not identified, the EPA applies
an additional UF, usually 10. In the case of methylmercury, even though the start-
ing point was a level associated with an effect, only a total factor of 10 was applied,
rather than the more typical 100. In addition, the UF of 10 for inter-individual vari-
ability is presumed to account for differences in both metabolism and response of
the target organ (sensitivity) between individuals. The variability in metabolism of
methylmercury between women has been demonstrated to be about 3. The variation
in cord-maternal blood levels between women may also be about 3. Multiplied
together, these give a factor of about 9, essentially the entire UF of 10. That leaves
no room for any variation in the response of the fetal brain to methylmercury, which
is undoubtedly not the case. Therefore a total UF of 10 is almost certainly inadequate
to protect the most sensitive portion of the population.
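The decomposition of that factor of 10 can be written out explicitly; a minimal sketch (Python), using the approximate factors of 3 cited above:

    # Decomposition of the inter-individual uncertainty factor of 10.
    metabolism_variability = 3.0      # woman-to-woman methylmercury kinetics
    cord_maternal_ratio_spread = 3.0  # spread in cord:maternal blood ratios

    toxicokinetics = metabolism_variability * cord_maternal_ratio_spread
    print(f"Toxicokinetic variability alone: ~{toxicokinetics:.0f}")  # ~9 of the 10

    # With ~9 of the 10 consumed by kinetics, essentially nothing is left to
    # cover variation in the sensitivity of the fetal brain itself.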
The issue of whether the reference dose should be lowered, and if so, the appro-
priate value, requires thorough evaluation by a group of expert risk assessors and
other scientists. Any new evaluation of the RfD should also include evaluation of
the levels of methylmercury that produce adverse cardiovascular effects documented
in several studies of adult males. It is currently unknown whether these effects
occur at lower or higher levels than those that produce developmental neurotoxicity.
Question 2. What is a reasonable estimate of the approximate average mercury
concentrations in non-commercial fish in the U.S.?
Response. EPA keeps an extensive data base of fish tissue contaminant levels
from inland water bodies compiled by individual states (http://www.epa.gov/ost/fish/
mercurydata.html). Data for average levels of mercury for 1987-2000 are in the at-
tached figure. Average tissue levels vary significantly depending on species, such
that deriving an average for all species is not particularly informative. Averages
for different species range from 0.1 ppm for herring and whitefish to 0.9 ppm for
bowfin. As can be seen from the figure, the average level for many species is below
the 0.3 ppm level recommended by EPA (Water Quality Criterion for the Protection
of Human Health: Methylmercury, OST, Office of Water, 2001, EPA-823-R-01-001).
Approximately one-third of species have average concentrations above this. Even for
species with averages below 0.3 ppm, some samples will exceed this level. For spe-
cies with averages of about 0.5 ppm, more than half the samples will exceed the EPA
recommended limit, and half the samples will exceed the 0.5 ppm action limit
set by many European countries and Canada. Ocean fish and sharks can have levels
that are considerably higher. For example, blue marlin average 3.08 ppm, with the
highest level for an individual at 6.8 ppm (Florida Marine Research Institute Tech-
nical Reports, Mercury Levels in Marine and Estuarine Fishes of Florida, 1989-
2001: FMRI Technical Report TR9, Second Edition, Revised, 2003). Sharks such as
the white shark averaged over 5 ppm, with the highest value for a shark at 10 ppm
(ibid.). These are non-commercial, sport-caught species.
Question 3. You indicated that the NHANES data does not adequately capture the
individuals or subpopulations that are likely to be the most exposed to non-commer-
cial fish mercury concentrations above the reference dose. Are you aware of any
work underway to collect this kind of data and hopefully protect these people from
overexposure?
Response. There have been a number of relatively small studies focusing on fish
intake by groups that consume large amounts of fish, specifically sports fishers and
subsistence fishing communities. Most of these efforts have been by individual
states or tribes. EPA is developing a data base of these studies, most of which are
unpublished and not in the public domain, a project which I managed before leaving
the agency. The data base currently includes about 70 studies (contact project officer
Cheryl Itkin, EPA/ORD/National Center for Environmental Assessment, Wash-
ington, D.C. at [email protected]).
There are also several published studies: Bellanger, T.M., Caesar, E.M.,
Trachtman, L. 2000. Blood mercury levels and fish consumption. J. La. Med. Soc.
152:64-73; Burge, P., Evans, S. 1994. Mercury contamination in Arkansas gamefish:
A public health perspective. J. Ark. Med. Soc. 90:542-544; Hightower, J.M., Moore,
D. 2003. Mercury levels in high-end consumers of fish. Environ. Health Perspect.
111:604-608; and Knobeloch, L.M., Ziarnik, M., Anderson, H.A., Dodson, V.N. 1995.
Imported seabass as a source of mercury exposure: A Wisconsin case study. Environ.
Health Perspect. 103:604-606.
Protecting individuals who may be at greater risk from over-exposure to
methylmercury presents significant challenges. Forty states have fish advisories for
inland waters, based largely on levels of mercury in fish. Some states have levels
that are specific to particular water bodies, others have statewide advisories for all
water bodies. Advisories typically are set with regard to species of fish, designating
them as, e.g., "no restriction," "eat no more than once a week," or "eat no more than
once a month." If a person eats a fish from one restricted category, they are meant
not to eat fish from another restricted category in that month. Signs are posted by
some states at specific water bodies, and most if not all states distribute literature
related to fish advisories with fishing licenses. Some tribes have also performed sig-
nificant outreach related to issues of contaminants in wild foods. Immigrant commu-
nities are often the most difficult to inform, as a result of language and cultural
barriers. Minnesota, for example, has made a substantial effort to work with immi-
grant communities, publishing appropriate information in relevant languages, as
well as performing extensive outreach activities. A few other states have made ef-
forts in this regard as well. Some communities rely on fish as a significant protein
source for both cultural and economic reasons. It is unfortunate indeed that these
communities are risking adverse health outcomes by consuming what should be a
very healthful food.
Question 4. Please describe the purposes and intended uses of the various Federal
agencies exposure limits for methyl mercury.
Response. EPA, FDA, and ATSDR have set exposure limits for methylmercury.
The reference dose (RfD) set by EPA is designed to represent an estimate of a daily
exposure to the human population (including sensitive subgroups) that is likely to
be without appreciable risk of deleterious [non-cancer] effects during a lifetime
(http://www.epa.gov/iris/index.html).
The minimal risk level (MRL) of ATSDR is an estimate of the daily human expo-
sure to a hazardous substance that is likely to be without appreciable risk of ad-
verse noncancer health effects over a specified duration of exposure. MRLs may be
derived for acute (114 days), intermediate (15364 days) or chronic durations (over
364 days). ATSDR states that [t]hese substance-specific estimates, which are in-
tended to serve as screening levels, are used by ATSDR health assessors and other
responders to identify health effects that may be of concern at hazardous waste
sites. It is important to note that MRLs are not intended to define clean-up or action
levels for ATSDR or other Agencies. [bold original] (http://www.atsdr.cdc.gov/
mrls.html) It is critical to understand that ATSDR is involved in clean-up activities.
The MRLs are designed to identify chemicals that are important for clean-up deci-
sions. They are not intended as health-protective levels for the general population,
or for a lifetime.
The FDA acceptable daily intake (ADI) is the amount of a substance that can
be consumed daily over a long period of time without appreciable risk (http://
www.fda.gov; http://www.cfsan.fda.gov/-acrobat/hgstud16.pdf). For contaminants in
food, FDA uses the ADI to derive an Action Level, which defines the maximum al-
lowable concentration of the contaminant in commercial food. In other words, the
Action Level is supposed to be health-based.
The RfD and the ADI are designed to protect the general population from adverse
effects from contaminants in food over a lifetime of exposure, including protection
of sensitive populations. In contrast, the MRL is designed for a different purpose:
identifying contaminants that may be important in making decisions regarding
clean-up of contaminated sites.
The exposure limits from U.S. agencies are as follows:
EPA RfD: 0.1 µg/kg/day
ATSDR MRL: 0.3 µg/kg/day
FDA ADI: 0.4 µg/kg/day
Question 5. What is the preferred measurement methodology for most reliably de-
termining and predicting the effect on childrens developmental health of methyl
mercury exposure?
Response. There has been considerable discussion within the academic and regu-
latory communities regarding what might be a best test or test battery for deter-
mining adverse neuropsychological function in children exposed to methylmercury.
There are two basic strategies that have been used to assess methylmercury
neurotoxicity. The first is the use of standard clinical instruments such as measures
of IQ. These have the advantage of being standardized for the population, as well
as assessing a wide range of functional domains. However, because they may be
measuring a number of functions that are not affected in addition to those that are,
the results can be diluted, and therefore these tests may be less sensitive than
a more focused approach. The second approach is to choose domain-specific tests
based on the known effects of higher levels of the toxic chemical, if such effects are
known. This strategy has the advantage of being potentially more sensitive than
using broad-based clinical instruments. On the other hand, using domain-specific
tasks runs the risk of looking at the wrong functions.
The investigators of the Faroe Islands study used a number of domain-specific
tasks, based on the effects of high-level methylmercury exposure as well as the
pathological changes in specific brain areas produced by methylmercury. The Faroe
Island study found deficits in these tasks. The investigators of the Seychelles study
used standard clinical instruments that assessed a little bit of a lot of functions,
which were standardized for a U.S. population rather than the Seychellois popu-
lation. They found no effect of methylmercury. In contrast, the investigators of the
New Zealand study, also using standard clinical instruments, did identify mercury-
related deficits.
The consensus of the research community seems to be that a combination of both
approaches should be used. The standard clinical instruments (e.g. full-scale IQ) are
comprised of subscales (e.g. verbal, visuospatial) that can be used to explore more
specific functional domains. Researchers should also use what is known about the
behavioral and neuropathological effects of methylmercury to design domain-specific
tests, with the hope that these will be maximally sensitive. To date, deficits in mem-
ory, language processing, visuospatial ability, motor function, and attention have
been identified to be adversely affected by in utero methylmercury exposure. Hear-
ing may also be adversely affected. New studies, or continued testing of current co-
horts, should build on this knowledge to home in even further on specific behavioral
functions.
Question 6. In 1974, the FDA established a mercury action limit of 0.5 parts per
million in fish. This was changed in 1979 to 1 part per million. What was the basis
for this change?
Response. FDA set an action level of 0.5 ppm for mercury in fish in 1969, in re-
sponse to the recognition of the devastating consequences of fetal exposure to
methylmercury in the poisoning episodes in Minamata and Niigata, Japan. This
level was reaffirmed in 1974, citing concerns about damage to the fetus at lower ex-
posures than are harmful to the adult. The level was changed in 1979 as a result
of a lawsuit by the fishing industry that resulted in a court ruling based on socio-
economic impacts presented by the National Marine Fisheries Service (NMFS). They
argued that raising the action level would expand the number of fisheries available
for exploitation and expand the profits of the fishing industry (Fed. Reg. 3990, 3992,
1979). The notice was a withdrawal of the proposed rulemaking and terminated a
rulemaking procedure to codify the (then) existing action level limiting the amount
of unavoidable mercury residues permitted in fish and shellfish of 0.5 ppm. The FR
notice also indicates that [t]he Food and Drug Administration will continue to mon-
itor mercury levels in fish so that if there is any change in mercury residue levels
as a result of raising the action level, or if there is any other change in the informa-
tion regarding mercury in fish, the action level can be revised accordingly. Thus,
the action limit is not health-based, but was established for economic considerations.
Question 7. What, if anything, should consumers of fish in the Great Lakes region
and other areas that are downwind of major mercury emission sources such as coal-
fired power plants, chlor-alkali manufacturing facilities, and waste incinerators, be
advised to do with respect to limiting their methyl mercury exposure?
Response. Unfortunately, the majority of inland lakes and rivers are contaminated
with mercury. Methylmercury is created from mercury by microorganisms in the
water. Methylmercury is bioconcentrated as it is passed up the food chain, with
older and larger fish at the top of the food chain containing more methylmercury
than smaller fish or fish that are lower on the food chain. Methylmercury exposure
in humans is exclusively from eating contaminated fish. Forty states have explicit fish-consumption advisories as a result of mercury contamination, based on species, size, and in some states specific water bodies. There were 2,242 advisories in 2000, up 8 percent from 1999 and up 149 percent from 1993. By far
the greatest number of fish advisories for mercury are around the Great Lakes and
in the Northeastern states. Consumers are advised to carefully follow State fishing
advisories for inland fish. There is an increasing recognition that commercial and/
or ocean fish may represent a significant source of methylmercury exposure. Cur-
rently, FDA advises pregnant women, nursing mothers and young children against
eating any shark, swordfish, tilefish, or king mackerel. Recent data indicate that
canned white (albacore) tuna may have substantial levels of methylmercury, and so
should seldom be consumed, especially by children. Other species such as fresh tuna and halibut may also have significant levels of methylmercury. Individuals should count purchased fish that are potentially high in methylmercury toward their total fish intake when judging what is safe over a given time period. In other words, consumers need detailed information on fish species from both commercial and non-commercial sources to keep track of their potential methylmercury intake.
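The bioconcentration described in this response can be sketched numerically. The starting concentration and the magnification factor per trophic step below are assumed round numbers, used only to show why older, larger predators at the top of the food chain carry the highest burdens.

```python
# Illustrative sketch of biomagnification up a food chain; the factor of
# 5 per trophic level and the plankton concentration are assumed values.
STEP_FACTOR = 5.0      # assumed magnification per trophic level
conc_ppm = 0.0005      # assumed methylmercury in plankton, ppm

for organism in ("plankton", "small fish", "medium fish", "large predator"):
    print(f"{organism:>14}: {conc_ppm:.4f} ppm")
    conc_ppm *= STEP_FACTOR
```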
This is an unsatisfactory solution, since fish is otherwise a very healthful food.
Moreover, sport fishing is an important economic resource in many areas, and some
individuals rely on fishing for a substantial portion of their protein, particularly in
certain immigrant communities. The ultimate solution is of course to decrease envi-
ronmental deposition of mercury.
STATEMENT OF DR. GARY MYERS, PEDIATRIC NEUROLOGIST
AND PROFESSOR, UNIVERSITY OF ROCHESTER
Thank you for the opportunity to present the views of our research group on the
health effects of methylmercury (MeHg) exposure. My name is Gary Myers. I am
a pediatric neurologist and professor at the University of Rochester in Rochester,
New York and one member of a large team that has been studying the human
health effects of MeHg for nearly 30 years. For nearly 20 years our group has spe-
cifically studied the effects of prenatal MeHg exposure from fish consumption on
child development.
MERCURY POISONINGS
In the 1950s, more than two decades of massive industrial pollution in Japan culminated in high levels of MeHg in ocean fish. Several thousand cases of human poisoning
from consuming the contaminated fish were reported. The precise level of human
exposure causing these health problems was never determined, but was thought to
be high. During that epidemic pregnant women who themselves had minimal or no
clinical symptoms of MeHg poisoning delivered babies with severe brain damage
manifested by cerebral palsy, seizures and severe mental retardation. This sug-
gested that MeHg crosses the placenta from the mother to the fetus and that the
developing nervous system is especially sensitive to its toxic effects.
In 1971–1972 there was an epidemic of MeHg poisoning in Iraq. Unlike the Japa-
nese poisonings, the source of exposure in Iraq was maternal consumption of seed
grain coated with a MeHg fungicide. Our research team studied the children of
about 80 women who were pregnant during this outbreak. We measured mercury
exposure to the fetus using maternal hair, the biomarker that best corresponds to
MeHg brain level, and examined the children. We concluded that there was a possi-
bility that exposure as low as 10 ppm in maternal hair might be associated with
adverse effects on the fetus, although there was considerable uncertainty in this es-
timate. This value is over 10 times the average in the United States, but individuals
consuming large quantities of fish can achieve this level.
MERCURY FOUND NATURALLY IN THE ENVIRONMENT
We do not believe that there is presently good scientific evidence that moderate
fish consumption is harmful to the fetus. However, fish is an important source of
protein in many countries and large numbers of mothers around the world rely on
fish for proper nutrition. Good maternal nutrition is essential to the baby's health.
Additionally, there is increasing evidence that the nutrients in fish are important
for brain development and perhaps for cardiac and brain function in older individ-
uals.
The Seychelles Child Development Study (SCDS) is ongoing and we will continue to report our results. Presently we are examining a new cohort to determine specific nutrients that might influence the effects of MeHg.
Appendix: Not read before the committee, but included in the handout.
Because of the public health importance of the question being studied by the
SCDS, the potential exists for differing interpretations of scientific findings to become
highly politicized. The SCDS has received only one published criticism (JAMA,
280:737, 1998), but other points have been raised at conferences. These questions
are addressed here individually.
Why did the SCDS measure mercury in the hair rather than in the cord blood?
Hair mercury was used because it is the standard measure used in nearly all other
studies of this question. Mercury is thought to enter the hair and brain in a similar
fashion. Hair was also chosen because hair has been shown to follow blood con-
centrations longitudinally, and samples of hair can recapitulate the entire period of
exposure, in this case the period of gestation. As part of our research we have shown
that hair levels reflect levels in the target tissue, the brain. Measuring mercury in blood
requires correction for the red blood cell volume (hematocrit), since the mercury is carried primarily in red blood cells, and a blood measurement reflects only very recent exposure. Blood levels can also vary if meals with high mercury content have recently been consumed.
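The hematocrit correction mentioned above can be shown with a short calculation. The blood level, the hematocrit values, and the simplifying assumption that essentially all of the mercury resides in the red cells are illustrative figures, not data from the study.

```python
# Illustrative hematocrit correction for a whole-blood mercury reading.
# Simplifying assumption: essentially all methylmercury is in red cells.
whole_blood_hg = 5.0   # assumed measured whole-blood mercury, ug/L

for hematocrit in (0.36, 0.45):   # assumed red-cell volume fractions
    red_cell_hg = whole_blood_hg / hematocrit
    print(f"Hematocrit {hematocrit:.2f}: red-cell mercury {red_cell_hg:.1f} ug/L")
```

Two subjects with the same whole-blood reading but hematocrits of 0.36 and 0.45 differ by roughly 25 percent in red-cell concentration, which is why the correction matters when comparing individuals.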
Did the SCDS use subjects whose mercury values were too low to detect an as-
sociation? No, the study's goal was to see if the children of women who consume
fish regularly were at risk for adverse developmental effects from MeHg. Women in
Seychelles eat fish daily and represent a sentinel population with MeHg levels 10
times higher than U.S. women. Because of higher levels of exposure, their children
should be more likely to show adverse effects if they are present. These children
show no adverse effects through 9 years of age suggesting that eating ocean fish,
when there is no local pollution, is safe. However, we cannot rule out an adverse
effect above 12–15 ppm, since we had too few cases to substantiate a statistical asso-
ciation if one really existed.
Did the SCDS use the best tests available to detect developmental problems?
Yes, the SCDS used many of the same neurodevelopmental and neuropsychological
tests used in other developmental studies. These tests are deemed to be excellent
measures for determining development at the ages studied. The tests examined spe-
cific domains of children's learning and were increasingly sophisticated as the children became older.
Did the SCDS find expected associations between development and birth
weight, socioeconomic factors, and other covariates? Yes, expected relationships with
many covariates such as maternal IQ, family socioeconomic status and the home en-
vironment were found, indicating that our tests were sensitive to developmental dif-
ferences.
Did the removal of statistical outliers in the analysis bias the study? No. It is
standard practice among statisticians to remove statistical outliers. Outliers are val-
ues that are inconsistent with the statistical model employed to analyze the data.
Every statistical analysis depends on a model, and every statistical model makes as-
sumptions about the statistical (distributional) properties of the data that must be
satisfied if the results of the analysis are to be interpreted correctly. Sound statis-
tical practice requires that the necessary assumptions be checked as part of the sta-
tistical analysis. Examination of outliers constitutes one of these checks. Statistical
outliers are defined by the difference between the actual test score for a child and
the value predicted by the statistical model. Small numbers of such outliers oc-
curred in test scores for children with widely varying MeHg exposures. The results
of all analyses were examined both before and after the removal of outliers.
For analyses in the main study the removal of statistical outliers did not change
the conclusions.
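The screening procedure described here is conventional regression diagnostics. The sketch below, using synthetic data and a common cutoff of three standardized residuals, is a generic illustration only; the SCDS analyses used their own models and criteria, which are not fully specified in this response.

```python
# Generic sketch of residual-based outlier screening in a regression.
# Synthetic data and the |z| > 3 cutoff are illustrative assumptions;
# they are not the SCDS models or criteria.
import numpy as np

rng = np.random.default_rng(0)
exposure = rng.uniform(0.0, 15.0, 200)                       # e.g., hair Hg, ppm
score = 100.0 - 0.1 * exposure + rng.normal(0.0, 5.0, 200)   # synthetic test scores

slope, intercept = np.polyfit(exposure, score, 1)
residuals = score - (intercept + slope * exposure)
z = residuals / residuals.std(ddof=2)

keep = np.abs(z) < 3.0                          # flag statistical outliers
slope_clean, _ = np.polyfit(exposure[keep], score[keep], 1)

print(f"Slope, all points:       {slope:.4f}")
print(f"Slope, outliers removed: {slope_clean:.4f} ({(~keep).sum()} removed)")
```

As the testimony states, the point of such a check is that the conclusions should not hinge on a handful of observations that violate the model's distributional assumptions.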
What about the Faroe Islands study where prenatal MeHg exposure was re-
ported to adversely affect developmental outcomes? There are substantial dif-
ferences between the Faroe Islands and Seychelles studies. The exposure in the
Faroe Islands is from consuming whale meat and there is also concomitant exposure
to PCBs and other neurotoxins. There are also differences in the measurement of
exposure and the approach to statistical analysis. The Faroe Islands study reported
associations between cord blood mercury levels and several tests. After statistical
analysis they attributed the associations to prenatal MeHg exposure. Scientific stud-
ies are frequently open to different interpretations and some scientists do not agree
with the researchers' interpretation. We believe the Seychelles study of individuals
consuming fish more closely approximates the U.S. situation.
Are the children in Seychelles too developmentally robust to find the effects of
MeHg if they are present? No, the children in Seychelles tested similar to U.S. chil-
dren on nearly all measures apart from motor skills where they were more ad-
vanced. There is no reason to think that they are too robust to show the effects of
prenatal MeHg exposure if any are present.
Are children in Seychelles exposed to PCBs or other food-borne toxins that might
have confounded the results? No, sea mammals are not consumed in Seychelles and
measured PCBs in the childrens blood were low.
Should data from the Seychelles be considered interim? Maybe. Among develop-
mental studies, a 9-year followup is considered very long and should be adequate
to identify associations with most toxic exposures. However, very subtle effects can
be more readily tested in older individuals and there is evidence from experimental
animals that some effects of early mercury exposure may not appear until the ani-
mal ages.
August 18, 2003.
Hon. JAMES M. INHOFE, Chairman,
Committee on Environment and Public Works,
U.S. Senate,
Washington, DC.
DEAR MR. CHAIRMAN: Thank you for offering me the opportunity to respond to
certain comments that were made in the EPW committee hearing on Tuesday, July
29, of this year. I hope I can clear up any confusion that might have been caused
by incomplete, misleading or erroneous testimony that day.
The testimony in question by Dr. Michael Mann stated:
It's unfortunate to hear comments about the supposed inconsistencies of the
satellite record voiced here years after that has been pretty much debunked in
the peer-reviewed literature in Nature and Science. Both journals have, in re-
cent years, published . . . articles indicating that in fact, the original statement
that the satellite record showed cooling was flawed because . . . the original au-
thor, John Christy, did not take into account a drift in the orbit of that satellite,
which actually leads to a bias in the temperatures . . . Christy and colleagues
have claimed to have gone back and fixed that problem. But just about every
scientist who has looked at it says that this fix isn't correct and that if you fix
it correctly then the satellite record actually agrees with the surface record, in-
dicating fairly dramatic rates of warming in the past two decades.
Virtually all of this testimony is misleading or incorrect. I will touch on the major
problems, point-by-point, and I will try to be brief.
1. Certainly no one has debunked the accuracy of the global climate dataset that
we built at The University of Alabama in Huntsville (UAH) using readings taken
by microwave sensors aboard NOAA satellites. This dataset has been thoroughly
and rigorously evaluated, and has been published in a series of peer-reviewed pa-
pers beginning in Science (March 1990). The most recent version of the dataset was
published in May 2003 in the Journal of Atmospheric and Oceanic Technology after
undergoing a strenuous peer review process.
2. We, and others, are constantly scrutinizing our techniques to find ways to bet-
ter analyze the data. In every case except one we discovered needed improvements
ourselves, developed a method for correcting the error, and published both the error
and the correction in peer-reviewed journals. When Wentz, et al. (1998) published
their research on the effects of orbital decay (the one exception) they explained an
effect we immediately recognized, but which was partially counterbalanced by other
factors we ourselves discovered. Since that time we have applied the corrections for
both orbital decay and other factors, and have published the corrected data in peer-
reviewed journals.
3. The UAH satellite record does not show cooling in the lower troposphere and hasn't shown a long-term cooling trend since the period ending in January 1998. I cannot say where this chronic "cooling" misconception originated. Our long-term data show a relatively modest warming in the troposphere at the rate of 0.133° Fahrenheit per decade (or 1.33° Fahrenheit per century) for the period of November 1978 to July 2003.
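For readers unfamiliar with how a "per decade" figure of this kind is obtained: it is the slope of an ordinary least-squares line fitted to the monthly anomaly series, scaled to ten years. The sketch below uses a synthetic series; the actual UAH dataset involves extensive satellite merging and correction steps not shown here.

```python
# Generic least-squares trend of the kind behind a quoted "degrees per
# decade" figure. The anomaly series is synthetic, not UAH data.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(12 * 25)              # 25 years of monthly samples
trend_per_decade = 0.133                 # deg F/decade, demo value only
anomalies = trend_per_decade * months / 120.0 + rng.normal(0.0, 0.3, months.size)

slope_per_month, _ = np.polyfit(months, anomalies, 1)
print(f"Fitted trend: {slope_per_month * 120:.3f} deg F per decade")
print(f"            = {slope_per_month * 1200:.2f} deg F per century")
```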
4. There is no credible version of the satellite dataset that actually agrees with the surface temperature record for the past 25 years, nor one that shows "fairly dramatic" rates of warming. The as-yet-unexplained differences between the surface
and satellite data are at the heart of the controversy over the accuracy of the sat-
ellite data.
While much of the surface data remains uncalibrated and uncorroborated, we
have evaluated our UAH satellite data against independent, globally-distributed at-
mospheric data from the U.S. and the U.K. (Hadley Centre) as shown in the figure
(enclosure 1). We published the results of those comparisons in numerous peer-re-
viewed studies (enclosure 2). In each case we found excellent consistency between
the satellite data and the atmospheric data. One should note that such independent
corroboration has not been performed on the other satellite temperature datasets al-
luded to in the quoted testimony.
This consistency between two independent datasets gathered using very different
techniques gives us a high level of confidence that the UAH satellite dataset pro-
vides a reliable measure of global atmospheric temperatures over more than 90 per-
cent of the globe. (By comparison, one of the most often quoted surface temperature
datasets achieves partial-global coverage only by claiming that certain isolated ther-
mometer sites provide representative temperatures for an area roughly equaling
two-thirds of the contiguous 48 states, an area that would reach from about Browns-
ville, Texas, to Grand Forks, North Dakota.)
5. A final point relates to numerous comments elsewhere in the testimony in which an appeal to a nebulous "mainstream climate community" was made to support what was stated. First, the notion that thousands of climate scientists agreed on the IPCC 2001 text is an illusion. I was a lead author of IPCC 2001, as was Dr. Mann. There were 841 lead authors and contributors, the majority of whom were not climatologists and who provided input, in the area in which they have expertise, only to their tiny portion of the 800+ page document. These 841 were not asked to approve, nor were they given the opportunity to give a stamp of approval on, what was finally published.
Although I might be outside the "mainstream," according to Dr. Mann's perspective, I have never thought a scientist's goal was to achieve membership in the "mainstream." My goal is to produce the most reliable climate datasets for use in scientific research. Whether they show warming or cooling is less important to me than their reliability and accuracy. That these datasets have been published in numerous peer-reviewed venues is testimony to accomplishing this goal and, by inference, would place me inside the "mainstream" climate community. In addition to being an IPCC lead author, significant achievement awards from NASA and the American Meteorological Society, along with my recent election as a Fellow of the AMS, are evidence of my impact on the community of scientists.
I hope this clears up any confusion you or your committee members might have
had about the UAH global temperature data. If you or any of your committee mem-
bers have any questions, I will be delighted to answer them to the best of my abil-
ity.
Thank you again for offering me this opportunity. I remain,
Sincerely,
JOHN CHRISTY, PH.D.