Next Generation Space Telescope
Editors:
Garth D. Illingworth
University of California, Santa Cruz
Barbara Eller
Foreword
In Space Science in the Twenty-First Century, the Space Science Board of the Na-
tional Research Council identified high-resolution interferometry and high-throughput
instruments as the imperative new initiatives for NASA in astronomy for the two decades
spanning 1995 to 2015. In the optical range, the study recommended an 8 to 16-meter
space telescope, destined to be the successor of the Hubble Space Telescope (HST), and to
complement the ground-based 8 to 10-meter-class telescopes presently under construction.
It might seem too early to start planning for a successor to HST. In fact, we are late.
The lead time for such major missions is typically 25 years, and HST has been in the
making even longer with its inception dating back to the early 1960s. The maturity of
space technology and a more substantial technological base may lead to a shorter time
scale for the development of the Next Generation Space Telescope (NGST). Optimistically,
one could therefore anticipate that NGST could be flown as early as 2010. On the other hand,
the planned lifetime of HST is 15 years. So, even under the best circumstances, there will
be a five-year gap between the end of HST and the start of NGST.
The purpose of this first workshop dedicated to NGST was to survey its scientific
potential and technical challenges. The three-day meeting brought together 130 astronomers
and engineers from government, industry and universities. Participants explored the tech-
nologies needed for building and operating the observatory, reviewed the current status
and future prospects for astronomical instrumentation, and discussed the launch and
space support capabilities likely to be available in the next decade. To focus discussion,
the invited speakers were asked to base their presentations on two nominal concepts, a
10-meter telescope in space in high earth orbit, and a 16-meter telescope on the moon.
Artist's views of these two concepts are shown in Figures 1 and 2, and their specifications
are summarized in Table 1.
The workshop closed with a panel discussion focused mainly on the scientific case,
siting, and the programmatic approach needed to bring NGST into being. The essential
points of this panel discussion have been incorporated into a series of recommendations
that represent the conclusions of the workshop.
Speakers were asked to provide manuscripts of their presentations. Those received
were reproduced here with only minor editorial changes. The few missing papers have
been replaced by the presentation viewgraphs. The discussion that follows each speaker's
paper was derived from the question and answer sheets or, if unavailable, from the tapes
of the meeting. In the latter case, the editors have made every effort to faithfully represent
the discussion.
We are most thankful to all the speakers for their very thoughtful and valuable
contributions. Their vast experience in science and engineering will be essential for the
successful completion of a project of this scale. Thanks are due to Roger Angel, Jack
Burns, Don Hall, Duccio Macchetto, Joe Miller, Jean Olivier, Peter Stockman, Dominick
Tenerelli, and Rodger Thompson for chairing the various sessions or for their participation
in the panel. We would also like to thank John Bahcall, who introduced the workshop
by sharing some of his experiences with the HST project. His pertinent remarks about
the dedication of those involved in the development of HST emphasized the deep and
widespread commitment needed to bring about its successor.
We would particularly like to thank Riccardo Giacconi for his keen interest and support
of the workshop. He had urged us for some time to think of the long-term needs of the
astrophysics community and to explore the scientific potential and technical challenges of
a successor to HST. We also greatly appreciate the support given by Peter Stockman. He
contributed invaluable advice and assistance throughout, ensured that the appropriate
Institute resources were available, and gave an excellent summary of the meeting.
We would also like to extend our gratitude to Charles Pellerin for personally supporting
the meeting and providing NASA funding for the publication of these proceedings.
This workshop is but an early step on the long journey to the completion of NGST.
As evidenced by these proceedings, however, the spectacular views of the heavens to be
provided by this telescope and a deeper understanding of our universe and its origin are
a worthy destination for this complex and challenging journey.
The Editors
Table 1. Nominal Next Generation Space Telescope (NGST)
Figure 1. a 10 meter telescope in high earth orbit. The telescope is
Artist's concept of
relaxation of sun, moon
very compact thanks to a fast primary and the short baffle that a
bright earth avoidance angles in high orbit permits. Solar panels are fixed on the rear
and
of the spacecraft to minimize mechanical disturbances.
mirror is supported by a hexapod mount, the legs of which are extendable for pointing and
tracking. A coude-like arrangement is used to feed the scientific instruments which are
located underground for radiation and meteorite protection. The rails are for a hangar-type
shield which is rolled over the telescope during lunar day.
CONCLUSIONS OF THE WORKSHOP
Formal conclusions were not drawn explicitly before adjournment of the Workshop.
However, a consensus clearly emerged, especially during the panel discussion. The follow-
ing statements and recommendations which were developed after the workshop, based on
tapes of the panel discussion, are believed to reflect the spirit of this collective opinion.
1. Scientific objectives:
There will be a definitive need to continue and extend the observational capability
offered by HST beyond its predicted lifetime. A gap of more than 5 years would be a
blow to the vitality of forefront astronomical research.
The scientific potential of an HST follow-up mission with enhanced flux collecting
power and spatial resolution, and with spectral coverage extended through the near-
infrared is enormous. It is viewed as complementary to large-baseline space interferometry
missions which emphasize high spatial resolution imagery. An observatory providing
high sensitivity and high-throughput spectroscopic capability at diffraction-limited spatial
resolution from the UV to beyond 10 microns is vital for the study of the most fundamental
questions of astrophysics. These include the formation and evolution of galaxies, stars
and planets, and the nature of the young universe.
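The payoff of diffraction-limited imaging at these apertures can be made concrete with the standard Rayleigh criterion. The sketch below is illustrative only; the wavelengths are representative values chosen here, not figures from the workshop:

```python
import math

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: theta = 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# The two nominal NGST apertures at representative UV, visible and mid-IR wavelengths
for aperture in (10.0, 16.0):
    for wavelength in (0.12e-6, 0.5e-6, 10.0e-6):
        res = diffraction_limit_arcsec(wavelength, aperture)
        print(f"D = {aperture:4.1f} m, lambda = {wavelength * 1e6:5.2f} um: {res:.4f} arcsec")
```

At 0.5 micron a 10-m aperture resolves about 0.013 arcsec, roughly four times sharper than HST's 2.4-m mirror at the same wavelength.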
2. Technological readiness:
3. Siting:
Both the moon and high earth orbits are suitable sites for a next generation space
telescope. Low earth orbits are undesirable because of high disturbance levels, insufficient
passive cooling and low observing efficiency. Compared to a high earth orbit, which would
likely not be serviceable, a lunar site would permit maintenance and upgrading and thus
longer amortization for any major scientific and financial investment. In addition, it would
provide a very stable platform for the demanding pointing and tracking requirements, as
well as having advantages for the shielding of detectors. Although the moon appears very
attractive in view of the current Lunar Outpost initiative, an immediate commitment to
that program or other NASA infrastructures is not required at this time. Space-based
and lunar-based designs should be pursued in parallel for the next few years to clarify the
observational, technical, space logistical and cost tradeoffs.
4. Programmatic approach:
A 10 m (space-based) to 16 m (lunar-based) aperture is considered a realistic goal.
Future workshops should concentrate on further definition of the scientific objectives,
review of preliminary studies and the identification of critical technologies. Strawman
designs should be prepared to refine the various concepts and ideas and focus discussion.
A "Telescope Design Group" should be formed to guide and oversee the preliminary
design work. This group, comprised of astronomers and technologists, should address the
scientific goals and technical requirements together, paying particular attention to system
engineering. In projects of this complexity, efficient design is the result of many compromises
that can only be developed by successive iterations and by system-level analyses.
The importance of this iterative process involving astronomers, physicists and engineers
in the science-engineering tradeoffs and in defining the requirements was emphasized by
many participants. The involvement of these different groups needs to occur during all
phases of the project, from concept development, through technology development and
fabrication, and finally during system-level testing.
Once clearly identified by the preliminary design process, the development of the key
enabling technologies should be integrated with the appropriate long-term program of
the national and international Space Agencies (e.g., NASA's Technology 21). This will
ensure both that the technological requirements are addressed at the proper level, and
that benefits are obtained from interaction with other programs.
5. International cooperation:
Like HST, the next generation space telescope project should be carried out coop-
eratively as an international program. Cost sharing renders such major missions more
affordable for each participating country, and international collaboration often enhances
quality and performance. Complex and pioneering space missions also benefit from the
exchange of ideas and variety of approaches afforded by multicultural collaborations.
SAGE ADVICE
"It's not often that we have a chance to participate in history". Danielson (as quoted by
Bahcall)
"In comparison to ground-based uses, adaptive optics in space should be easy". Angel
"Don't underestimate the difficulty of achieving the required surface error". Swanson
"We should not simply accept a traditional R-C design for a 10 meter telescope". Korsch
"The more money that we put up front, the cheaper the next telescope will be". Breckinridge
"One of the things which should come from this workshop is a better understanding of
the merits of filled apertures and interferometers". Breckinridge
"I recommend that you think beyond Shuttle C for the next ST". Furnas
George Field
Center for Astrophysics
1. Introduction
In 1984, NASA requested the Space Science Board to undertake a study to determine
the principal scientific issues that the disciplines of space science would face during the
period from about 1995 to 2015 [1]. Six Task Groups were established, including one on
Astronomy and Astrophysics, chaired by Bernie Burke. As Bernie could not be here, and
I had served on both the Astronomy and Astrophysics Task Group and the Steering Group
of the Study, I agreed to report on its activities in his absence here today. Roger Angel,
a participant in this Workshop, was also a member of the Astronomy and Astrophysics
Task Group.
2. The Space Science Board Report: Space Science in the Twenty-First Century
"A large range of scientific problems could be undertaken only by a telescope of this
type. The combination of light-gathering power and resolution offered by such a telescope,
equipped with advanced spectrographs and detectors, would lead to a quantum leap in
our understanding of some of the most fundamental questions in astronomy."
There follow six pages of discussion of scientific objectives for an 8- to 16-m telescope.
On p. 55 of the same Report [2] there is a section on technological developments,
including a discussion of methods for fabricating the telescope. In particular, it is stated
that,
"Two possible avenues are available for orbiting the large telescope. It seems likely
that in the time frame under consideration large vehicles could launch a prefabricated
telescope up to 8 m in diameter. Alternatively, for a 16-m diameter telescope, construction
in orbit is probably the best route. Mirror segments would be polished and tested on the
ground and assembled onto a frame structure built in space."
"Large telescopes designed to operate in a zero-g environment, but which do not
have to withstand launch, are an exciting challenge to designers and engineers. Given
a well-directed technology development program, the task group anticipates that an 8-
to 16-meter telescope will be within closer reach than a simple extrapolation from HST
would suggest."
Thus, the assessment of an 8- to 16-m space telescope by the Task Group on Astronomy
and Astrophysics is relatively optimistic. It should be noted that, as explained
above, the large telescope is only one of the recommended elements of the program for
astronomy and astrophysics in the 1995-2015 time frame. There are four other large-area
telescopes for submillimeter, X-ray, and gamma-ray astronomy, and three facilities for
imaging interferometry at optical and radio wavelengths, including one, the Large Space
Telescope Array of nine 1.5-meter optical telescopes mounted on a tetrahedral structure
100 m or so in diameter, aimed for milliarcsecond resolution with about 20 m² collecting
area, about 1/4 that of the large telescope.
The Task Group briefly addressed the question of cost, and assumed (p. 68) that for
each wavelength regime only one observatory class facility will operate at a time, that
new facilities will be developed soon enough to prevent the occurrence of substantial gaps
in observational capability, and that new facilities will cost about twice as much as the
ones they replace. On this basis, the Task Group concluded that the base program (Great
Observatories and their replacements) can be carried out if the annual real-dollar NASA
astrophysics budget increases over FY 85 at 2.3 % per year. If the large telescope is taken
to be the replacement for HST, one infers that the Task Group believes that it can be
accomplished for about $4B in 1986 dollars. I will return to the question of cost in the
last section, below.
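The 2.3% growth figure is consistent with simple compounding: if only one facility per wavelength regime operates at a time and each new facility costs twice its predecessor, the budget must double in real dollars over one replacement cycle. A minimal sketch, assuming a roughly 30-year cycle (the cycle length is an assumption here, not a number from the Report):

```python
# Budget must double in real dollars over one facility replacement cycle
# if each new facility costs twice the one it replaces.
cycle_years = 30  # assumed replacement cycle, not stated in the Report
annual_growth = 2.0 ** (1.0 / cycle_years) - 1.0
print(f"required annual real-dollar growth: {annual_growth * 100:.2f}%")
```

The result, about 2.34% per year, is close to the 2.3% figure the Task Group quotes.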
The Astronomy and Astrophysics Task Group wrote an Appendix entitled, "Astro-
physics at a Lunar Base," which, however, does not appear as part of its Report.
In its Overview volume [1], the Steering Group accepted the main outlines of the Task
Group Report, including the 8-m to 16-m telescope. In chapter 9 of the Overview there
is a discussion of Human Presence in Space in which it is argued that if human presence
extends further into the solar system, brief forays of human beings to remote places are
not very useful for science. On the other hand, much useful science can be done by humans
located at various stations (earth orbit, Moon, Mars) for long periods. It concludes (p.
77) that
"space science experiments, tended in space by human beings, may provide the most
important rationale for the staging, assembly, maintenance, repair, and operation of ma-
jor space facilities (e.g., space astronomical telescopes, earth science experiment pay-
loads/platforms, launch vehicles for planetary missions)."
The Space Science Board Study was concluded in June, 1986, and the Report was
published in 1988.
3. The Report of the National Commission on Space: Pioneering the Space Frontier
The recommendations of the Space Science Board 1995-2015 Study were transmuted
(p. 52) into one for
"a large space telescope array composed of 25-foot (8-m) diameter telescopes [to]
operate in the ultraviolet, visible, and infrared. The combination of large diameter tele-
scopes with a large number of telescopes would make this instrument 100 times more
sensitive than Hubble Space Telescope. Because the image would be three times sharper,
the limiting faintness for long exposures would increase more than 100 times."
This recommendation isbroadly consistent with the Space Science Board recommen-
dations for both the 8- to 10-m optical space telescope and the large space telescope
array.
NASA Administrator Dr. James Fletcher formed a task group to define potential U.S.
space initiatives, and to evaluate them in the light of the current space program and the
nation's desire to regain and retain space leadership. Its report, Leadership and America's
Future in Space [4], published in August 1987, can be viewed as a NASA response to the
Report of the Rogers Commission to investigate the Challenger accident and the Paine
Commission on the future of the space program. It narrowed the field of new initiatives
beyond the current base program to four options:
(1) Mission to Planet Earth
(2) Exploration of the Solar System
(3) Outpost on the Moon
(4) Humans to Mars.
astronauts on three round trips to land on the surface of Mars, leading to the eventual
establishment of a permanent base. In view of the fact that astronomy is not mentioned
in the Ride Report under options (1) or (2), one may surmise that the space astronomy
program would not be highlighted if they are adopted. Astronomy is mentioned under
Outpost on the Moon (p. 30), where it is stated that, "The Moon's unique environ-
ment provides the opportunity for significant scientific advances; the prospects for gains
in lunar and planetary science is abundantly clear. Additionally, since the Moon is seis-
mically stable and has no atmosphere, and since its far side is shielded from radio noise
from Earth, it is a very attractive spot for experiments and observations in astrophysics,
gravity-wave physics, and neutrino physics, to name a few." Astronomy is not mentioned
under Humans to Mars. In concluding chapters, the Ride Report recommends that NASA
"should embrace the Mission to Planet Earth," that "although not necessarily at the pace
suggested in this initiative, planetary exploration must be solidly supported." The Ride
Report is not enthusiastic about an early effort to land humans on Mars, but is favorable
toward the timely establishment of an Outpost on the Moon.
Apparently the National Space Council is moving toward a decision as to which, if
any, of the Ride Report initiatives to recommend to the President. This would have
profound implications for the 10- to 16-m telescope being discussed in this Workshop. If
the choice is to return to the Moon, one may anticipate substantial increases in the NASA
budget. Although Figures 14 and 15 on p. 46 of the Ride Report are not labelled in dollar
amounts, one can surmise that starting in the mid 90's, NASA's budget would have to
triple for either the Outpost on the Moon or the Humans to Mars programs to be carried
out. On the other hand, Mission to Planet Earth or Exploration of the Solar System
would apparently require only about a 30% increase in the NASA budget. I conclude
that in the latter case, funds available for the 8-m to 16-m telescope will be little more
than for the telescope it replaces, the HST. What might a telescope in Earth orbit actually
cost? In connection with a study of the cost of space-based laser ballistic missile defense I
carried out with David Spergel [5], I had occasion to look into the cost one might estimate
for diffraction-limited optical systems in low earth orbit, including the associated systems
for target acquisition and tracking (but not launch or maintenance). Our best estimate
for the cost of a system of aperture D (meters) was $3.6 (D/10)^1.7 billion in 1984 dollars,
or $3.8 (D/10)^1.7 billion in 1986 dollars, consistent with the $4B implied by the Space
Science Board Study (see above). Hence I agree that a 10-meter in low earth orbit seems
reasonable on fiscal grounds.
For a 16-m on the Moon the scaling law indicated above would suggest an increase
by a factor of 2.2, up to $8.4B in 1986 dollars, which might be acceptable under a much
expanded (about three-fold) NASA program driven by an Outpost on the Moon. However,
there will be major cost differentials between basing in earth orbit and on the Moon.
Clearly the question of cost will be vital to address as planning proceeds.
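Field's scaling argument can be checked directly. A minimal sketch of the quoted Field & Spergel law, $3.8B × (D/10 m)^1.7 in 1986 dollars:

```python
def cost_billions_1986(aperture_m: float) -> float:
    """Field & Spergel cost scaling: $3.8B * (D / 10 m)^1.7 in 1986 dollars."""
    return 3.8 * (aperture_m / 10.0) ** 1.7

for d in (10, 16):
    print(f"{d} m aperture: ${cost_billions_1986(d):.1f}B (1986 dollars)")
```

A 16-m aperture comes out at (16/10)^1.7 ≈ 2.2 times the 10-m cost, about $8.4B, matching the factor and total quoted in the text.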
References
1. Space Science in the Twenty-First Century: Imperatives for the Decades 1995-2015.
In seven volumes; Volume I, Overview. Report of the Study Steering Group, Space
Science Board, Commission on Physical Sciences, Mathematics, and Resources, National
Research Council (Washington, D.C.: National Academy Press), 1988.
2. Ibid., Volume II, Astronomy and Astrophysics. Report of the Task Group on Astronomy
and Astrophysics, Space Science Board, Commission on Physical Sciences,
Mathematics, and Resources, National Research Council (Washington, D.C.: National
Academy Press), 1988.
3. Pioneering the Space Frontier: The Report of the National Commission on Space
(New York: Bantam Books), 1986.
4. Leadership and America's Future in Space, by Sally K. Ride (Report to the Adminis-
trator of NASA, August 1987).
5. "Cost of Space-Based Laser Ballistic Missile Defense," by George Field and David
Spergel, Science, 231, 1387-1393 (1986).
DISCUSSION
Pilcher: There was another NASA report following the Ride Report entitled Beyond Earth's Boundaries,
the first annual report of the Office of Exploration (OEXP). OEXP was formed in 1987 at Sally
Ride's recommendation to provide an institutional home within NASA for human exploration of the
solar system. In Beyond Earth's Boundaries, OEXP reported on a group of case studies including one
on a lunar observatory. That case study was based in part on papers presented at a series of workshops
including Future Astronomical Observatories on the Moon (held in 1986 at JSC, proceedings published
as orange covered NASA Conference Proceedings); Lunar Bases and Space Activities of the 21st Cen-
tury (held in 1984, proceedings published in 1985 by the Lunar and Planetary Institute, Houston); and
A Lunar Farside Very Low Frequency Array (held at Univ. New Mexico, Albuquerque; Proceedings
in Press).
Status and Future of NASA Astrophysics
Edward J. Weiler
NASA Headquarters
PART 1
SPRING 1990
GAMMA RAY OBSERVATORY
(GRO)
EGRET • OSSE
COMPTEL • BATSE
ADVANCED X-RAY ASTROPHYSICS FACILITY
(AXAF)
SPACE INFRARED TELESCOPE FACILITY
(SIRTF)
ASTRO-1, 2
• 200-300 INDEPENDENT OBSERVATIONS DURING TYPICAL 9-10 DAY MISSION
EXTREME ULTRAVIOLET EXPLORER
(EUVE)
• WILL PRODUCE AN ALL SKY SURVEY COVERING 80 TO 900 A IN 4 BANDS; SPATIAL
RESOLUTION OF 30 ARC SECONDS
• DEEP SURVEY WILL SCAN REGION TWO DEGREES WIDE BY 180 DEGREES LONG ALONG
THE ECLIPTIC; COVERS 100 TO 500 A RANGE IN 2 BANDS
• MISSION DURATION AT LEAST 3 YEARS; EXCHANGED WITH XTE PAYLOAD VIA ON-ORBIT
SERVICING FROM SHUTTLE
ORBITING AND RETRIEVABLE FAR AND
EXTREME ULTRAVIOLET SPECTROMETER
(ORFEUS)
HUBBLE SPACE TELESCOPE 2ND
GENERATION INSTRUMENTS
(HST 2ND GENERATION)
THREE NEW INSTRUMENTS
• NEAR INFRARED CAMERA AND MULTI-OBJECT SPECTROMETER (NICMOS)
• 0.8 TO 2.5 MICRON IMAGING AND SPECTROSCOPY
• ADVANCED DETECTORS; FULLY DIFFRACTION LIMITED OPERATION
PART 2
TECHNOLOGY 21
METHOD OF IDENTIFYING
CRITICAL TECHNOLOGIES
THE "LDR" METHOD HAS BEEN ADOPTED BY THE ASTROPHYSICS DIVISION AS THE
BEST WAY TO IDENTIFY CRITICAL TECHNOLOGIES. THIS INVOLVES:
PART 3
LUNAR PLAN
• SURVEY ADJACENT STRIPS (OR SAME STRIP) OF SKY AT 1 MONTH INTERVALS
• A WIDE-FIELD ZENITH-POINTED TELESCOPE OF ≈2 M APERTURE
• FOCAL PLANE READ OUT AT APPARENT SIDEREAL RATE
SCIENTIFIC RATIONALE:
• 100 TIMES MORE SENSITIVITY (V>25) AND 20 TIMES MORE RESOLUTION (≈0.1 ARC
SECONDS AT 0.5 µ) THAN PALOMAR SURVEY
• 1000 SQUARE DEGREE FIELD OF VIEW, SAMPLED ANNUALLY
• NO POINTING REQUIRED
PROGRAMMATIC RATIONALE:
• TECHNOLOGY WELL DEVELOPED
• LOW COST; EASY TO INSTALL AND OPERATE
16 METER UV/VIS/IR TELESCOPE
DESCRIPTION:
SCIENTIFIC RATIONALE:
• IN-SITU ASSEMBLY
PROGRAMMATIC RATIONALE:
LUNAR-BASED SYNTHETIC APERTURE
INTERFEROMETER
DESCRIPTION:
• OPTICAL EQUIVALENT TO RADIO VLBI
• TOTAL EVENTUAL COLLECTING AREA 50-100 SQUARE METERS
SCIENTIFIC RATIONALE:
• IMAGE ACCRETION DISKS AROUND STELLAR OBJECTS, NEUTRON STARS, BLACK HOLES
• LOW NIGHTTIME TEMPERATURE (PASSIVE COOLING OF OPTICS)
PROGRAMMATIC RATIONALE:
• A 16 METER TELESCOPE CAN BE A POWERFUL INTERFEROMETER ELEMENT
DISCUSSION
Illingworth: Can you comment on how the science goals in NASA's response to the President's July
20 initiative will be iterated with the science community following the submission of the report to the
NSC?
Pilcher: NASA's 90 day study (in response to the President's July 20 initiative) must be broad, but
its scope and brevity limit its depth. The process of defining the science content of human exploration
missions must be a continuing one. Mike Duke has been detailed to the OSSA front office to work
with all the science Divisions and the external communities on defining science for the 90 day report.
That process must continue beyond the 90 day study to provide scientific input of increasing maturity.
Weiler: We have agreed within Astrophysics that we will iterate and that we will use the advisory
structure (the MOWG's), but we will also get help from the Astronomy and Astrophysics Survey
Committee - from a broader community.
Breckinridge: Is the "crater" telescope considered to be a "suitcase" telescope mission to the moon?
A 1 meter telescope is more like a large steamer trunk than a "suitcase."
Weiler: Yes, "suitcase" means requiring little or no assembly from the astronaut. Deployment and
pointing (rough) are acceptable. Larger than a suitcase is ok.
ESA Long Term Plans and Status
Duccio Macchetto
European Space Agency
Introduction
The European Space Research Organization, the predecessor of ESA, was established
some 25 years ago. The main goal then was to coordinate the work of European as-
tronomers and scientists to design and build a telescope to be placed in Earth's orbit. As
it turned out, the first satellite that was built was not an astronomical satellite but a
space plasma physics satellite. Since those early days, ESA has conducted a very active,
although budget limited, research program in astronomy. The first astronomical satellite
that was launched was TD-1A, which carried an ultraviolet telescope. It was followed by
COS-B, which conducted an all sky survey of gamma-ray sources. That was followed by
a joint mission with NASA, the International Ultraviolet Explorer satellite (IUE), well
known to all of you, and then lately EXOSAT, which explored the X-ray region.
The program is built around what are called the four "corner stone" missions. These
typically cost upwards of 500 million dollars, while ESA's budget for scientific research is
of the order of 300 million dollars per year. What that means is that ESA can build only
one of these satellites at a time. While one mission is winding down, a start will be made
on the next. The first cornerstone to be built is the SOHO/CLUSTER mission, again a
collaboration with NASA. It will be followed later by the X-ray mission XMM.
The smaller boxes in Figure 1 are missions of the 200 million dollar type. Such
missions include satellites such as Hipparcos and the ESA's contribution to the Hubble
Space Telescope. There are also a few empty boxes, and these are opportunities for
future missions which I will briefly describe a little later on. The circular region in
the center represents low-cost missions, on the order of 100 million dollars or less.
These are either throw away type missions, assorted missions, or joint missions with
NASA or other agencies where ESA's contribution would be less than 100 million dollars.
The outer rectangular areas indicate missions which will be ready at a time well beyond
Horizon 2000. In addition, the plan identifies a number of missions that have considerable
scientific interest in the European community but are of a complexity, or cost or require
technological advances of such a magnitude that it is impossible to carry them out in this
century.
ISO/XMM
If we turn to astronomy, the mission that is currently being built is the Infrared Space Observatory (ISO).
[Figure 1. ESA long-term plan diagram showing, among others, a Solar Probe, solar/plasma and heliospheric missions (S.T.P.), ULYSSES (ISPM), HIPPARCOS, and a High Throughput X-Ray Spectroscopy Mission.]
The Dewar of the telescope keeps the temperature and the detectors at about 4 degrees
Kelvin. It will carry about 2,000 liters of helium. The total mass is about 5,000 kilos and
it is expected to be launched in about 1996.
Following ISO, other approved programs, which are not being built yet but are in a
detailed definition phase, are: Cassini, a joint mission with NASA, that will explore the
atmosphere and the surface of Titan and is scheduled for launch in 1996; XMM, which
is a high-throughput X-ray spectroscopy mission to be launched in 1998 with the Ariane launcher; and
the Columbus polar platform mission, which is mainly devoted to the study of solar terrestrial
physics.
Of particular relevance to this conference, XMM consists of a series of nested tele-
scopes and a number of spectrometers and imagers. The imaging quality is only about
30 arc seconds, the emphasis being to build a light bucket to carry out intermediate
resolution spectroscopy.
FIRST
The next cornerstone astronomy mission is the Far Infrared Space Telescope (FIRST).
It is a heterodyne spectroscopy mission. It will cover the wavelength range between a
hundred microns and one millimeter. This is an important range because of the availability
of both continuum radiation and a number of atomic and molecular lines. The baseline
is to have a passively cooled antenna about 8 meters in diameter, able to carry out high
spatial resolution observations. The FIRST antenna can be either a deployable antenna or
a segmented antenna, and that will depend on what type of technology is the best suited
at the time of launch. The Dewar at the focal plane is of the same type of technology
that has been developed for ISO and will keep the experiments at about 4 degrees Kelvin.
The launcher should be an Ariane 5, and the orbit is a 24 hour type orbit. The pointing
accuracy will be one arc second with a stability of about half an arcsecond. The design of
the antenna is a Cassegrain with a prime focal ratio of F/1.35. It will not have a chopping
secondary, because of the great added complexity that this would require. The material of
the antenna will be carbon fiber and it will have a very high surface accuracy of about
6 microns. It will have an error of about 100 microns, and a random error of about 20
microns. The model payload includes detectors for heterodyne spectroscopy with SIS mixers,
spectrophotometry with a grating combination, a far infrared spectrometer to observe
the wavelength range between 100 and 250 microns, and a far infrared photometer. The
FIRST mission is currently planned for the year 2010.
Interferometry
ESA has established a space interferometry study team to discuss aperture synthesis
in space. The terms of reference of that team were to define the main science goals, to
establish a strategy for ESA involvement in the project, to identify and discuss differ-
ent mission concepts, to identify science trade-offs and advise on the technical research
program that is needed to carry out this long-term program. This team met a number
of times and has recently issued a report. It compares a number of different concepts,
from rigid structures with a number of free-flying elements to floppy structures
of the VLA type. The final two concepts that have emerged are Oasis and Float. Oasis
has a large coherent field of view, while Float is a fiber-linked optical array. The problem
here is of course to develop fiber-optics which have good transmission in the ultraviolet.
The team studied two arrangements for beam recombination: one of Michelson type,
the other of Fizeau type. In the Michelson case the structure is floppy; in the Fizeau
case it is semi-rigid with active optics. The array shape in the Michelson case would be
of VLA type - a wide array with non-redundant spacing - while the Fizeau array would be
a non-redundantly spaced ring of about 12 different telescopes. In one case the structure
has the optical elements connected through rigid tubes, in the other it is an inflatable
structure. These two types of structures are being developed by industry in Europe for
other applications at this time, so there is some hope that they will be useful for this
project. In the Michelson case the ultraviolet is clearly excluded from the arrangement,
whereas in the Fizeau case there is a possibility to go down to below 0.1 micron.
The recommendations and conclusions of this team were to build a 30 meter optical interferometer with as many as 12 different telescopes in a single structure in a stationary orbit. It is important to reach visual magnitudes below 14, and a 30 meter array will actually go down to about magnitude 23. It was felt important to have a large coherent field of view, to include reference stars within the same field as the object for stabilization and calibration purposes. It was essential to cover a large optical range, and particularly the ultraviolet domain. The study team indicated that they felt the infrared was better studied from the ground, at least in this kind of configuration. It was important to have imaging capability and high spectral resolution. An additional study will be carried out starting this year with the aim of defining technologies and beginning a development program in those that are the most demanding.
Some of these studies have been carried out, and more will be, in collaboration with NASA. I do hope that discussions continue across the Atlantic on the mission that you are now studying. There is large interest in the astronomical community in Europe in HST, and also in a future HST. It is therefore important to include the European astronomical community in a possible joint venture on a 10 to 16 meter next generation space telescope.
The Next Generation UV-Visible-IR Space Telescope
Garth Illingworth
Synopsis
Context
With the launch of HST we will be entering a new era in astronomy. Even with the major discoveries made in space over the last two decades, we have only touched upon the true potential of astronomical observations from space. The broad-ranging capabilities, the sensitivity, and the long lives of the Great Observatories will demonstrate the importance of space telescopes for addressing a wide range of fundamental issues in all areas of astrophysics.
I believe that the coming decade will come to be seen as the dawn of the golden age of astronomy. While the theoretical and observational base has matured to the point where significant inroads are possible into many fundamental problems, four developments will play a critical role in the coming decades. These key elements are:
- The Great Observatories - the new generation of long-lived, versatile, and highly-capable space observatories;
- Large ground-based telescopes - the light gathering power of this new generation of telescopes will provide essential complementary capability for the Great Observatories (primarily in spectroscopy), as well as bringing unique capabilities in their own right;
- Detectors - the promise of CCDs is about to become reality with the likely near-term
Rationale
With these tremendous gains, one can ask what sort of facilities and capabilities are
needed for the decades beyond the coming one. Do we need substantial gains in capability,
or will we be in a consolidation phase, using existing facilities to carry out long-term
programs? The answer is "both of the above". Some problems will best be tackled with
long-term effort with existing capability, while others will need substantial improvements
beyond HST and the other Great Observatories. Furthermore, such long-term programs
can only continue for the life of the Great Observatories, and these are projected to be in
the range 5-15 years. Their limited lifetime alone forces the astrophysical community to
look beyond, to the missions that will succeed the Great Observatories.
The long lead times for such major missions give a sense of urgency to this process.
The direction in which we need to move will be governed by the scientific goals that will
develop as a result of using the facilities and capabilities listed above. While it would
be presumptuous, and essentially impossible, to generate a comprehensive list of those
problems that will be at the forefront, it is clear already that substantial gains in capability will be needed to tackle some of the questions that lie at the heart of astronomy.
For example, how stars form, and how galaxies form and evolve are problems that must
be tackled on a broad observational and theoretical front. They involve very complex
processes, and demand telescopes with large light-gathering power, high resolution, broad
spectral coverage, the lowest possible background, and efficient spectroscopic capability. They need versatile telescopes, because the path to understanding will depend upon previous results. The questions to be answered are still quite uncertain. These fundamental issues will not be solved by "physics-like" experiments.
This is, however, an approach that is still questioned. Great concern about the cost of major facilities has led many scientists to ask whether it wouldn't be cheaper, and even more effective for the community, to press for more specialized projects. These may have the collecting area of HST, or even of the class of telescopes being discussed here, but be instrumented more simply, less capably, and be limited in wavelength coverage. I remain skeptical about the real cost savings to be made with this approach. For example, by asking for a diffraction-limited large optical system, surely a minimum goal for a space observatory, one has already set a substantial floor to the cost of the program.
Furthermore, I think such an approach is ill-advised for observatory-class missions. By
their very nature they will, alone and synergistically with other space observatories and
large ground-based telescopes, make substantial discoveries that will change the direction
of science programs, even if the overall goal remains the same. There is no doubt that
the formation of stars and the formation of galaxies will remain high priority programs
through the decades to come. Yet I defy anyone to map out a strategy for solving these
problems that they are confident will survive the coming decade or two of observations.
These problems are not amenable to defining an "experiment" that will lead to their
solution. Broad ranging capabiUty is needed that will allow us to build upon a growing
base of understanding - to branch out to follow the leads shown by observations.
Interestingly, I think that we have a consensus by example within the astrophysical
community that is consistent with this view. There are large numbers of universities
around the country that are planning telescopes with their hard-won private money. Al-
most invariably, when the funds allow, they choose large versatile telescopes, and are the
envy of their colleagues for so doing. Astronomers are "voting with their feet" for large,
versatile telescopes.
I think that this view will be strengthened as the community experiences the Great
Observatories and the new 8-10+ m large ground-based telescopes.
This is not to say that such telescopes are all that are needed. Problems such as surveys in wavebands inaccessible from the ground, the structure on the smallest scales in AGNs and quasars, defining the fundamental distance and reference systems, high energy events, etc. are going to require astrometric systems, interferometers, survey telescopes, specialized X-ray and γ-ray telescopes, and so on. COBE is a current example of a very impressive and valuable experiment. These facilities provide an essential complementary element to the large versatile telescopes. The large telescopes can be thought of as providing the heart of an organic program which is supported by a wide variety of more directed, more focused capabilities.
Over the next 15 years or so of HST's lifetime, it is clear that the astrophysical
community will become "addicted" to HST's capabilities - namely, its:
- resolving power,
- UV coverage,
- near-IR capability,
- and the predictability of its performance.
It will not only have a major impact on what we do, but also on how we do it.
Yet, for all the wonderful new capabilities offered by HST, it is clear that there will be a component of great frustration. It is simply too small to do spectroscopy at the level demanded by many of the outstanding problems, and almost certainly too small for followup of many of its discoveries with its powerful imaging systems. The new, large ground-based telescopes will be able to provide the necessary spectroscopic followup in many cases, but their poor imaging capabilities by comparison with those on HST, their lack of UV coverage, and their high IR background will be a limiting factor, and a substantial one.
While these general considerations provide a broad rationale for a next generation
large UV-Visible-IR telescope, it is the scientific case itself which leads one to great
enthusiasm for such an observatory. As can be seen in the subsequent science papers,
a particularly compelling scientific case can be made for a 10-16 m class space telescope
with the proposed performance capabilities noted in the introduction to the Workshop.
Words such as "awesome" and "astonishing" have been used by more than one scientist
upon thinking about or being shown the capabilities of such a telescope.
The goal is for a 16 m class passively-cooled, diffraction-limited, wide-band telescope. Its instrument complement would be sensitive from the UV at ~0.1 μm to beyond 10 μm. With passive cooling of the structure and optics to less than 100 K, the background would be lowered in the 3-4 μm zodiacal "window" and at longer wavelengths to less than 10^- of that from the ground. The diffraction-limited images would be 7 milliarcsec at 0.5 μm. It could reach 32 mag in the visible at 10:1 S/N in less than 10^ s. State-of-the-art mosaics of detectors would give diffraction-limited imaging and spectroscopy over a field of > 2 arcmin from 0.3 μm to beyond 10 μm, and to nearly an arcmin in the UV.
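As a quick check on the quoted image size, the Rayleigh diffraction limit for a 16 m aperture at 0.5 μm can be computed in a few lines. This is an illustrative sketch, not from the text; only the 16 m and 0.5 μm figures come from the source.

```python
import math

MAS_PER_RAD = 180 / math.pi * 3600 * 1000  # milliarcseconds per radian

def diffraction_limit_mas(wavelength_m, aperture_m):
    """Rayleigh diffraction limit, theta = 1.22 * lambda / D, in mas."""
    return 1.22 * wavelength_m / aperture_m * MAS_PER_RAD

theta = diffraction_limit_mas(0.5e-6, 16.0)
print(f"{theta:.1f} mas")  # ~7.9 mas
```

The simpler lambda/D convention gives about 6.4 mas, so the quoted 7 milliarcsec sits between the two conventions.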
The combination of high resolution, low background, wide bandwidth, and wide field are what give this telescope its unique capabilities. For example, the low background across the wavelength region covered by this telescope, ~0.1 - 20 μm, allows for measurements to be made of much, if not most, of the baryonic matter in the universe, assuming
the dark matter to be non-baryonic. Furthermore, the resolution provides an excellent
match to some natural length scales in the universe. For example, the telescope is partic-
ularly well-matched to direct observations of the structure in galaxies and proto-galaxies
at high redshift. Structures in galaxies (e.g., star forming regions, spiral arms, disk/bulge
length scales, merger "arms and tails") have characteristic scales of 100 pc - 1 kpc. With
10 mas resolution we can resolve structures in galaxies at any redshift with the resolution
that we now study the nearest cluster of galaxies, the Virgo cluster, from the ground.
This is an astonishing capability and is shown very explicitly in the paper by Jim Gunn
where he simulates images at a redshift z = 1 of a spiral galaxy with HST and with a 15
m telescope.
The question of the detection of earth-like planets in stellar systems within 10 pc of
the sun and the subsequent spectroscopic observations of those planets with the goal of
detecting ozone and other molecules indicative of life is addressed by Roger Angel in this
volume. It is an extremely challenging observational program, and one which requires a
16 m telescope with all the capabilities summarised here. Yet it is an immensely exciting
goal and one which captures the imagination not only of astronomers and life scientists
Timescales
HST has a nominal life of 15 years. It is now approaching 20 years since the start
of Phase B for HST. It is clear that the pre-Phase A conceptual and technology development needs to start very soon to minimize the gap between HST and its successor. The
maturity of astronomy, and the resultant difficulty and complexity of the scientific issues
that are now at the forefront of the field, requires long-term observational capability. A
substantial gap, greater than 5 years, for example, would have a very deleterious effect on
the productivity in the field and would be a waste of the scientific talent and resources
that will build up around HST and the Great Observatories.
An added concern, of course, is the tough environment in which spacecraft operate,
especially low earth orbit spacecraft with the attendant thermal cycling. If HST, an
immensely complex instrument, degrades faster than expected in a way which cannot
be accommodated by the M&R (Maintenance and Refurbishment) program, we could
potentially be facing a large gap before the next mission. This would also impact the
productivity of AXAF and SIRTF, because the multiwaveband synergy would be lost,
undercutting one of the pillars of the Great Observatory program. A gap in capability in
the central UV-Visible-IR wavelength region of 10 years would be far too long. The goal
should be for a gap of less than 5 years between HST and its successor.
Cost
One scaling "law" that has been used in the past for telescopes has cost rising as the 2.7 power of the diameter, i.e., (D1/D2)^2.7. Applying such a factor to a 10-16 m class telescope based on HST's cost leaves one gasping. However, such an approach is inappropriate given changes in technology and the gains that can be accomplished by experience and attention to detail. Recent ground-based large telescopes have broken the cost-curve for 1950-70's telescopes by a factor of four, and further gains are in the pipeline. The German science community is seriously looking at a structure, the Hexapod structure like that shown in Figure 2 in the Introduction, that is based on experience with flight simulators. This offers large weight savings, and hence cost savings, even beyond the space frame designs currently being used for the Keck Telescope. For a 12 m diameter telescope, they project total weights comparable to previous-generation 4 m telescopes.
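To see why the scaling law "leaves one gasping," consider a minimal sketch; the 2.7 exponent is from the text, while the 2.4 m HST aperture and the nominal base cost of 1 unit are illustrative assumptions.

```python
def scaled_cost(base_cost, base_d, new_d, exponent=2.7):
    """Naive telescope cost scaling: cost proportional to D**2.7."""
    return base_cost * (new_d / base_d) ** exponent

# Hypothetical: take HST (2.4 m aperture) at a nominal cost of 1 unit.
ratio = scaled_cost(1.0, 2.4, 16.0)
print(f"cost ratio ~ {ratio:.0f}x")  # roughly 170x for a 16 m telescope
```

A factor of order 170 over HST's cost is exactly why the author argues the naive scaling must be broken by new technology and lighter structures.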
Another area where major gains in performance with attendent weight savings, and
hence cost savings, can be made is in the area of lightweight optics. As we can see from
the discussions at this meeting, this is an area where major improvements in fabrication
and polishing technology are occuring. The combination of improved performance and
lower weight for the optical segments will directly and dramatically affect the final cost
of the NGST.
We are clearly very early on the learning curve for observatory-class missions, and
should have every expectation for substantial gains in lowering the cost of such missions.
HST can be an extremely valuable experience base for such gains. I hope that as we
pass launch, as HST becomes a powerful, productive observatory, that we can revisit the
construction of HST in a very objective, non-accusatory way with the goal of improving
our ability to do such missions faster and cheaper. The computer industry and the
Japanese automobile industry have made remarkable strides by learning from experience,
by attention to details, and by maximizing the product per dollar. There is no reason
why we cannot do the same.
Turning to the revenue side of the equation, I think that we are ripe for a renewed
and expanded commitment to space, and to space science. With the remarkable political
changes that are taking place in the world, I see a renewed emphasis on space as the focus
of our high-technology effort. Even modest real (after inflation) annual increases can very
substantially increase the total budget over a multi-year period. For example, just by
matching the GNP growth, say 3% per year, plus adding another 5% above inflation,
for a period of fifteen years through to 2005, the annual budget for space activities as
carried out through NASA could grow from $13 billion to $41 billion in 1990 dollars. The
space science program that could be carried out with the usual fraction of such a budget,
combined with the increased capabilities that could be obtained per dollar through the
technology and potential fabrication improvements noted above, is exciting, to say the
least.
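The compounding arithmetic behind the $13 billion to $41 billion figure can be verified directly; all numbers here come from the text (3% GNP growth plus 5% above inflation, over fifteen years).

```python
budget_1990 = 13.0   # $ billion, 1990 dollars (figure from the text)
real_growth = 0.08   # 3% GNP growth + 5% above inflation
years = 15           # through to 2005

budget_2005 = budget_1990 * (1 + real_growth) ** years
print(f"${budget_2005:.0f} billion")  # ~$41 billion, matching the text
```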
and of the Institute of Medicine. The study group was tasked with developing a program
for Space Science in the 21st century. The reports of the study groups and an overview
volume have recently been published (1988) under the title Space Science in the Twenty-
First Century: Imperatives for the Decades 1995 to 2015.
The report contains a recommendation for an 8-16 m, passively-cooled, UV-Visible-IR telescope. The report's concluding remark (from the section discussing an 8-16 m telescope in space) notes that: Given a well-directed technology development program, the task group anticipates that an 8 to 16 m telescope will prove to be within closer reach than a simple extrapolation from HST would suggest.
While this report recognizes the scientific value of such a telescope, and I think realistically appraises the situation vis-à-vis the needed technology developments, it is but a start to a wide-ranging program leading to the development of a 16 m class telescope within a broad space science program. This workshop takes the next critical step.
Session 2
Scientific Potential
NGST and Distant Galaxies
James Gunn
Princeton University
the morphologies of quite distant galaxies, distant enough that the look-back time is a
substantial fraction of the age of the universe. It appears that with substantial expenditure
of observing resources, crude morphological information can be obtained out to redshifts
approaching unity, with quite good images at, say, z=0.5.
Most theories of galaxy formation, and indeed the growing body of ground-based
spectroscopic evidence, put the epoch of galaxy formation earlier, though I think the
weight of modern evidence is that it is not much earlier, and the redshift range 2-3
promises to be of enormous interest. HST, by reason of its small collecting area and,
to some extent, limited resolution, cannot address this problem directly. The awesome
capability of a fifteen-meter diffraction-limited telescope in high orbit, however, would
allow us to see what is going on at these epochs directly and with considerable ease, if still requiring a great deal of observing time.
It is perhaps worth a few words to say why one expects to see something interesting,
and to discuss what one might see. On the theoretical front, the popular Cold Dark
Matter scenario requires that galaxy formation be late, and furthermore in general quite
chaotic. The model of Baron and White and the newer models of Katz(1989), the latter
incorporating real hydrodynamics and an attempt at a physically reasonable star formation description, produce objects which at the present epoch look very much like the galaxies we see (the former an elliptical galaxy, the latter both ellipticals and convincing
disk spirals). The gross morphology of these systems changes little past z=l, but between
z=2 and z=3 the objects look very different from galaxies today, and in fact in most
surveys, if they could be seen, would be counted as multiple objects, there being several
more-or-less isolated islands of star formation in still merging, mostly gaseous, blobs. If
one takes the predicted brightnesses of these objects seriously (which one does, of course,
at one's peril: the star formation rates are, well, very uncertain), they should be visible
in the very deep ground-based surveys of Tyson and others, and if so, could well explain
the mystery of the counts of galaxies. The counts are several times too high fainter than
about 25th magnitude to be explained by conventional models, and extreme evolutionary
models which do fit them fail spectacularly to fit the observed redshift distributions at
brighter levels. If, in fact, one were observing not galaxies at very faint levels, but pieces
of galaxies putting themselves together to make galaxies, the problems with the counts
would be much less severe. It is interesting (but again, regard with caution) that the
visual brightness of these objects is predicted to change little between z=2 and z=l, the
integrated brightness corresponding to about V=23-24 for the progenitor of a big (Mv =
-21) spiral at the present epoch. But that brightness is not lumped in a single concentration until about z=1.5, and prior to that is quite complex, with several separated regions
of star formation contributing comparably to the total luminosity.
What manner of instrumentation would it require to investigate this birth process?
First of all, the lumps in which stars first form are very small, so one needs high resolution,
ideally about 100 pc at z=l-2, or about 15 milliarcseconds. Secondly, one needs high
sensitivity; in the 'turn-on' phase when the object is quite chaotic, the total brightness
may correspond to V=26 but might consist of several lumps spread over a few arcseconds.
It becomes clear on a little reflection that a space-based telescope several times the
aperture of HST is needed, and furthermore that such an instrument would do very nicely
indeed. I will end this discussion by presenting some simple simulations of observations
done with an instrument with the following properties:
- 16-meter aperture
- f/20 modified Ritchey optical system; the scale is 1.45 microns/mas
- 5 arcmin diffraction-limited field; this can be easily achieved with a single-element refractive astigmatism corrector
- FWHM = 10 mas or about 15 microns at V
- Camera with 3 reflections and 1 transmissive element (astig. corr.)
- 8192x8192 CCD with 7.5 micron pixels; the focal plane might be paved with 25 of these devices; each has a field 40" square
- Read noise of 2.5 electrons RMS for the CCD.
- For the near IR, a 1024x1024 HgCdTe device with 40 micron pixels would give a field
of 30 arcseconds with 25 mas pixels; it seems likely that a read noise of 10 electrons
will be attainable.
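The quoted plate scale and CCD field are easy to reproduce. A minimal sketch, noting one wrinkle: 1.45 microns/mas follows from a 15 m f/20 system (the 15-meter primary Gunn refers to later), not the 16 m in the list above.

```python
import math

MAS_PER_RAD = 180 / math.pi * 3600 * 1000  # milliarcseconds per radian

# Plate scale of an f/20 beam on a 15 m primary.
focal_length_m = 15.0 * 20                       # 300 m
scale_um_per_mas = focal_length_m / MAS_PER_RAD * 1e6
print(f"{scale_um_per_mas:.2f} um/mas")          # ~1.45, as quoted

# Field of one 8192x8192 CCD with 7.5 micron pixels:
field_arcsec = 8192 * 7.5 / scale_um_per_mas / 1000
print(f"{field_arcsec:.0f} arcsec")              # ~42, roughly the quoted 40"
```

At this scale the 10 mas FWHM corresponds to about 14.5 microns, consistent with the "15 microns at V" in the list.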
A comment about the assumed technology is perhaps in order... the tallest order in the list is almost certainly the diffraction-limited 15-meter primary, which others in this conference will address with much more expertise than I possess. The optical layout is straightforward, and the detector properties probably a much-too-conservative extrapolation of the current art. I have assumed a high-earth-orbit environment, which is operationally very much simpler than the low-orbit environment of HST. It will allow passive cooling of the optical detectors (though the IR imagers and other IR instrumentation which will doubtless be aboard will certainly require cryogens or other active cooling.)
The radiation environment is relatively benign. With moderate shielding, such as is used in the Galileo detectors, the proton rates are of order 3 per square centimeter per second and the electron rates much lower. The proton flux can rise by four orders of magnitude during a very energetic solar flare event, but the dose and duty cycle of such events is very much lower than the South Atlantic anomaly which one has to deal with in low orbit. The typical rates are higher than the WF/PC rates, for instance, but not by much. With the 7.5 micron pixels assumed for the optical sensors, the mean time for a given pixel to be hit is about 6 x 10^5 seconds, while the detected sky photon rate is of the
order of .015/second in the visible and near IR with broad-band filters. Thus exposures of
the order of 1000 seconds will have sky levels of order 15 electrons and will be well into the
shot-noise limited regime while of order 1 pixel in 600 will have suffered a cosmic-ray hit;
it seems straightforward that a three-exposure sequence might be standard, with a simple
voting scheme to eliminate hits. The situation in the IR is much trickier; the assumed
40-micron pixels are 28 times larger than the assumed optical ones, the read noise worse,
and the background fluxes lower. Dividing the exposure into roughly 200-500 second
segments will still put one close to the shot-noise limit and have only one or two percent
of the pixels hit during any one sub-exposure, but one would be much more comfortable
with a lower noise floor (which may well be possible by the time this instrument is built.)
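The hit-rate arithmetic in the paragraph above follows directly from the quoted numbers; a small sketch reproduces it (all inputs are from the text).

```python
proton_rate = 3.0        # protons / cm^2 / s with moderate shielding
pixel_size_cm = 7.5e-4   # 7.5 micron optical pixels
sky_rate = 0.015         # detected sky photons / pixel / s, broad band

pixel_area = pixel_size_cm ** 2                  # ~5.6e-7 cm^2
mean_time_between_hits = 1 / (proton_rate * pixel_area)
print(f"{mean_time_between_hits:.1e} s")         # ~6e5 s, as quoted

exposure = 1000.0
frac_hit = exposure / mean_time_between_hits     # ~1 pixel in 600
sky_electrons = sky_rate * exposure              # 15 e-, well above 2.5 e- read noise
print(f"1 in {1/frac_hit:.0f} pixels hit; {sky_electrons:.0f} sky electrons")
```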
for an object with an energy distribution like an F0 star. The sun could be detected with 10:1 S/N at 3.5 Mpc, and a 10-day Cepheid at 200 Mpc. A z=2, Mv=-21.5 normal galaxy containing an 18th magnitude QSO could be detected at 5:1 S/N. (All the above for 2-hour exposures).
(Figure: Keck after 3 hrs; Keck after 30 hrs)
Similar spectacular gains are to be had with spectroscopy. It is easily possible to do
dynamics on this galaxy at z=l with a spectrograph optimized for this sort of problem,
which is an instrument much more like low-resolution spectrographs on large ground-based
telescopes than the instruments typically used in space; one would hope that the dynamics
of distant galaxies is an important enough problem that such an instrument might be part
of its focal-plane complement. A spectrograph with a 0.1 arcsecond-wide slit critically sampled onto a CCD of the same sort as the imagers (i.e., with an f/2 camera) would have
a resolving power of 2000. A 24-hour exposure would result in an exposure level in the sky
of about 3500 counts per pixel, with the galaxy at 10 kpc radius contributing a signal of
300 counts. If one binned 4 pixels along the slit for an effective spatial resolution of 2 kpc,
the resulting S/N per resolution element is 12:1; farther in, at 2.5 kpc radius, the S/N in
a single pixel row is 25:1 per resolution element, either sufficient to measure velocities to
a few tens of km/sec at this resolution (and, of course, to explore the stellar population and HII region chemistry in some detail). The galaxy uses some 40 pixels (4 arcseconds)
along the slit, so if, for instance, the spectrograph were fed by fiber ribbons, one could
look at of order 50 galaxies at once if the field of the spectrograph could be made big
enough, or, using a fiber hydra, one could completely map a 4-arcsecond square region, essentially the whole galaxy. The latter possibility I find quite incredibly exciting; the dynamics of protogalaxies is certain to be complex, and 2-dimensional spectral mapping at relatively high spectral resolution would be a wonderful tool to study them, as well as a vast suite of other problems in astrophysics.
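A simple shot-noise estimate recovers the quoted S/N to within the ambiguity of the binning convention, which the text does not spell out. Here one resolution element is assumed to be 2 pixels (critical sampling) by 4 binned spatial rows; the count levels are from the text.

```python
import math

sky_per_pixel = 3500.0      # sky counts per pixel in the 24-hour exposure
galaxy_per_pixel = 300.0    # galaxy counts per pixel at 10 kpc radius
n_pixels = 2 * 4            # assumed resolution element: 2 spectral x 4 spatial

signal = galaxy_per_pixel * n_pixels
noise = math.sqrt((sky_per_pixel + galaxy_per_pixel) * n_pixels)  # shot noise only
print(f"S/N ~ {signal / noise:.0f}")  # ~14, the same range as the quoted 12:1
```

Read noise of 2.5 electrons per pixel is negligible against a 3500-count sky, which is why the text describes these exposures as well into the shot-noise limited regime.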
DISCUSSION
Gunn: The cosmic ray environment is a problem for HST because of the high readout noise of the detectors; with the much lower noise expected for future detectors, repeated exposures can be made and cosmic rays identified and rejected.
Bahcall: I am surprised at the conservatism of these discussions regarding detectors and the field of view.
Gunn: We are looking at mosaics with a large number of pixels to cover the 5 arcmin field, so it is a large gain.
Jackson: Can we support the data rates from these very large pixel arrays?
Gunn: Computer power increases so swiftly that I do not see this being an issue in the future.
My first topic is the exploration of the ... The ... is not well described in terms of objects; it is a ... Voyager ... in the deep atmosphere by looking through the clouds. These observations, combining visible imagery and infrared spectroscopy, would benefit greatly from the small infrared diffraction-limited footprint, which would be about 400 kilometers at 4 AU distance.
Spectrometers with higher resolving power would be needed to perform critical studies of planetary atmospheres using a large space telescope. In the infrared, where the molecular constituents are spectroscopically active, the resolving power should be greater than about 10,000, a value somewhat higher than that discussed by Garth Illingworth in his introduction to this workshop. In general, planetary studies require optimized instrumentation to derive full value from advances in the basic telescope itself.
Comet Nuclei. We believe that comets consist of interstellar grains and condensed gases conglomerated in the protosolar nebula. While the comet nucleus is thus an archaeological record, the coma and tail are not, because the gases released from the nucleus are quickly modified by chemical reactions. Therefore, gathering the information locked in the nucleus requires addressing it or its immediate environment directly.
The acuity of a large space telescope would allow studies of freshly produced parent molecules
near a comet nucleus. For a 10-meter aperture, for example, one Nyquist-sampling resolution
element at 0.6 micrometers would correspond to the apparent size of a typical 10 kilometer nucleus
at a distance of 1 AU, and many comets come much closer to the Earth. Imagery obtained at this
spatial resolution could discover how the morphology and rotational characteristics of the nucleus
are related to its activity. One could, for example, place a spectrograph slit at various distances
from the nucleus, measure parent molecules, and document their evolution in distance, which, in
free flight, is time.
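The match between one resolution element and a comet nucleus can be checked numerically; the 10 m aperture, 0.6 micron wavelength, 10 km nucleus, and 1 AU distance all come from the text.

```python
import math

AU_M = 1.496e11                              # meters per astronomical unit
MAS_PER_RAD = 180 / math.pi * 3600 * 1000    # milliarcseconds per radian

# One lambda/D resolution element of a 10 m aperture at 0.6 micron,
# versus the apparent size of a 10 km nucleus at 1 AU.
resel_mas = 0.6e-6 / 10.0 * MAS_PER_RAD      # ~12 mas
nucleus_mas = 10e3 / AU_M * MAS_PER_RAD      # ~14 mas
print(f"resolution element {resel_mas:.0f} mas, nucleus {nucleus_mas:.0f} mas")
```

The two angles are indeed comparable, which is the point of the claim: the nucleus just fills one resolution element at 1 AU, and resolves further as the comet approaches.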
A critical requirement for observing comet nuclei, one shared by most planetary studies, is accurate target acquisition and tracking. The apparent motion of the target is complex, because it is a superposition of three components. One component is due to heliocentric orbiting, another is parallax caused by the telescope orbiting the Earth, and a third is possible planetocentric motions of the target (rotation or satellite orbiting, for example). The capabilities, first, to put the footprint of the telescope down on a definite place on the changing field, and, second, to hold it there, are prerequisites to interpreting the observational data with confidence. HST has taught us how detailed and challenging are the special pointing requirements for moving targets.
Mars Habitability. The President has declared human exploration of Mars to be a national
objective, and that will require a deeper understanding of the geology, climate, and weather of
Mars. One can envision major research programs on each of those topics.
A 10-meter space telescope would have approximately 6 kilometer resolution on the face of
Mars at visible wavelengths. Using that, one could, for example, map geological units in higher detail than has been possible up to now. This would establish a basis to extend horizontally what robotic missions learn from in situ samples. Another application would be studies of Martian dust storms, which regularly veil the entire planet. They originate in very small regions and become global in a matter of days. How do dust storms arise and grow? Can we understand them well enough to be confident about human habitation of Mars? A large space telescope could study Mars dust storms in an early growth phase with about 6 kilometer resolution, a powerful improvement in documenting their evolution.
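The diffraction footprints quoted for a 10-meter telescope are easy to reproduce. A sketch, with one caveat: the observing distances (Mars near 0.7 AU, Jupiter near 4.2 AU) and wavelengths are illustrative assumptions, not stated in the text.

```python
import math

AU_M = 1.496e11  # meters per astronomical unit

def footprint_km(wavelength_m, aperture_m, distance_au):
    """Linear size of the Rayleigh diffraction limit at a given distance."""
    theta = 1.22 * wavelength_m / aperture_m       # radians
    return theta * distance_au * AU_M / 1000.0

# 10 m aperture; assumed distances and wavelengths are hypothetical.
print(f"Mars (~0.7 AU, 0.5 um): {footprint_km(0.5e-6, 10, 0.7):.1f} km")
print(f"Io at Jupiter (~4.2 AU, 0.6 um): {footprint_km(0.6e-6, 10, 4.2):.0f} km")
```

Under these assumptions the footprints come out near 6 km for Mars and 46 km at Jupiter, matching the 6 km and 45 km figures used in this talk.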
Dust storm studies would require very rapid rescheduling performance by the operational
system for a large telescope. The onset of a dust storm could be learned from groundbased obser-
vations. In a matter of hours, the large space telescope would need to acquire the planet and
follow the storm's development continuously for a few days. This ability to respond rapidly to
ephemeral events would benefit a wide range of planetary studies, and it could be designed into
the planning and scheduling system of the large space telescope.
Io & Jupiter's Magnetosphere. My next example uses the largest structure in the solar system, namely the magnetosphere of Jupiter. We have learned from four spacecraft flybys and from groundbased observations that the jovian magnetosphere is dominated by heavy ions that originate in the volcanoes of the innermost major satellite, Io. We know many details about Io and its
45
environment, but our understanding of its interactions are minimal. The phenomenology is ex-
—
tremely complex and variable as in any great geophysical system. If we better understood the
processes at lo, we would better understand the entire Jupiter magnetosphere.
A 10-meter telescope could grasp Io and its phenomena in detail. For example, with approxi-
mately 45 kilometer resolution at visible wavelengths, it could resolve the volcanoes and track
their evolution. One could address such questions as: How exactly do the volcanoes develop? How
does the volcanic material escape into the magnetosphere?
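As an illustrative check of the ~6 kilometer (Mars) and ~45 kilometer (Io) figures quoted above, the diffraction limit can be projected onto each target. The 0.55 µm wavelength and the opposition distances used here are my assumptions, not values stated in the text:

```python
AU_KM = 1.496e8   # kilometres per astronomical unit
D = 10.0          # assumed aperture, metres
LAM = 0.55e-6     # assumed visible wavelength, metres

def surface_resolution_km(distance_au):
    """Rayleigh diffraction limit (1.22 lambda/D) projected onto a target."""
    theta = 1.22 * LAM / D            # radians
    return theta * distance_au * AU_KM

mars = surface_resolution_km(0.52)    # Mars near a favourable opposition
io = surface_resolution_km(4.2)       # Io at Jupiter's opposition distance
print(f"Mars ~{mars:.0f} km, Io ~{io:.0f} km")
```

Both values agree with the resolutions quoted in the text to within the rounding of the assumed distances.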
Complex, dynamic objects like Io call for complicated and flexible observing programs. It
makes no sense to study just one facet of a complex problem, then to study some other aspect at a
different time. An integrated campaign can study various aspects of the problem in one time frame.
For Io, a campaign could follow the appearance of its surface, take spectra of its atmosphere, and
coordinate with thermal infrared and visible observations from the ground. This coordinated mode of
research is used by the planetary community on planetary missions. Synergistic investigations
give a more complete picture of a complex problem. The #1 recommendation of the planetary
panel of STAC, the Space Telescope Advisory Committee, was to develop this ability to
integrate multiple, separately-justified observations in one observing suite. It requires additional
integrative capability in the scheduling system, which HST does not currently possess. With
forward planning, it could be designed into the operation of a future large telescope in space.
Extra-Solar Planetary Science. My final topic relates to a current intellectual revolution in
planetary science prompted by the new ability to study other planetary systems, forming and
already formed, around other stars. This new research area is based on generalizing the tradi-
tional questions about how the solar system formed. Now we can ask: What is the preva-
lence of systems around other stars? Is the Kant-Laplace paradigm of planetary system formation
valid? The answers are increasingly accessible in the astronomical record. For example, much of
the Kant-Laplace evolutionary sequence of planetary systems has been exemplified by observa-
tions of collapsing molecular clouds, protostellar nebulae in Keplerian rotation, and disk structures
around young stellar objects. Furthermore, there is every expectation that groundbased telescopes
and HST will find self-luminous planets, young Jupiters and "superplanets", around other stars.
We will soon know how common and how various other planetary systems are.
A large space telescope could contribute two key things to extra-solar planetary studies. One,
it could view the structures around stars that relate to planet formation. Two, it could detect mature
planets like those in the solar system. I want to discuss those two topics separately.
Viewed from the nearest star-forming regions, about 150 pc distant, the Nyquist pixel at 0.6
micrometers for a 10-meter telescope subtends approximately 1 AU. This resolution would be
powerful for studying, as examples, the evolution of protoplanetary disks and wind-disk interac-
tions in T Tauri stars, and the nature of mature disks like β Pictoris. In the near infrared, this
telescope system could study hot, young planets.
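The ~1 AU pixel scale quoted above is easy to verify with the small-angle formula; a sketch assuming the stated 10-meter aperture and 0.6 µm wavelength:

```python
RAD_TO_ARCSEC = 206265.0

pixel_rad = 0.6e-6 / (2 * 10.0)        # Nyquist pixel = lambda / (2 D)
pixel_mas = pixel_rad * RAD_TO_ARCSEC * 1000.0
au_mas = (1.0 / 150.0) * 1000.0        # 1 AU at 150 pc subtends (1/d_pc) arcsec
print(f"pixel ~{pixel_mas:.1f} mas, 1 AU at 150 pc ~{au_mas:.1f} mas")
```

The two angles agree to within about 10 percent, so one Nyquist pixel indeed subtends roughly 1 AU at 150 pc.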
Based on what we know about HST, could a large space telescope discover extra-solar planets
using only reflected starlight? The graph shows HST's performance for an optimistic test case:
the Sun-Jupiter system as viewed from 5 parsecs. At its greatest elongation of about 1", Jupiter
would be 26th visual magnitude, and the Jupiter/Sun flux ratio would be about 10^-9. The central
surface brightness of Jupiter is about 4 orders of magnitude lower than the wing of the Sun image,
which would be the dominant background. Simply from the standpoint of information theory, to
detect Jupiter under these conditions with a signal-to-noise ratio of five would require a 58 day ex-
posure time. That is longer than the expected stability time of HST and its instruments.
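The elongation and magnitude quoted for this test case follow directly from the distance modulus and the flux ratio; a sketch using the Sun's absolute visual magnitude (M_V = 4.83, my assumed value):

```python
import math

d_pc = 5.0
m_sun = 4.83 + 5 * math.log10(d_pc / 10.0)   # Sun's apparent V magnitude at 5 pc
m_jup = m_sun - 2.5 * math.log10(1e-9)       # flux ratio 1e-9 -> planet 22.5 mag fainter
elongation = 5.2 / d_pc                      # 5.2 AU orbit seen from 5 pc, in arcsec
print(f"Jupiter ~{m_jup:.1f} mag at elongation ~{elongation:.2f} arcsec")
```

The result reproduces the quoted 26th visual magnitude at roughly 1" elongation.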
For HST at visible wavelengths, the wing will consist of approximately equal contributions
from aperture diffraction and micro-roughness scattering. This means that apodization cannot
improve the contrast between the planet and star. (The diffracted component can be lowered, but
the light scattered by surface errors cannot be removed.)
Using apodization, a future large space telescope can be made to operate with roughness scat-
tering dominating the background. In that case, the contrast ratio of the planet to the stellar wing
improves with telescope aperture in the stellar zone around the star being searched for planets.
Therefore, the contrast ratio improves approximately a factor of 16 going from HST to a 10-meter
telescope. Further, if the large space telescope has 25 times smoother optics, the contrast ratio
would improve to unity, very favorable for searching a variety of nearby stars for planets.
[Figure: modeled point-spread-function wing, flux per square arcsecond versus angular offset
θ (arc-seconds, 0.5-1.0), showing the Airy diffraction and roughness scattering components.]
At the time the HST mirrors were polished, the roughness power spectrum was not explicitly
constrained by formal requirements. This was because no analysis existed of the scientific benefits
associated with ultra-low scattering. With a future large space telescope, this could be corrected.
The results, better light management in the telescope and lower scattering, would help many types
of planetary observations. Indeed, lowering the veil of unwanted light around bright objects
(quasars, stars, and planets) will open up one of the last unexplored regions of astronomical
discovery.
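The numbers quoted above are mutually consistent; a sketch assuming the planet sits 4 dex below the stellar wing for HST, taking the quoted aperture gain of 16, and assuming (my assumption, standard for smooth-surface scattering) that scattered intensity scales as the square of the surface roughness:

```python
planet_to_wing_hst = 1e-4              # planet ~4 dex below the stellar wing (text)
aperture_gain = 16                     # quoted improvement from HST (2.4 m) to 10 m
roughness_factor = 25                  # "25 times smoother optics" (text)
scatter_gain = roughness_factor ** 2   # scattered light ~ roughness squared (assumption)

contrast = planet_to_wing_hst * aperture_gain * scatter_gain
print(round(contrast, 6))              # ~1: planet comparable to the residual wing
```

Under these assumptions the contrast ratio indeed reaches unity, as the text states.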
Acknowledgements. Thanks to Hal Weaver, Marc Buie, and Ben Zellner for their contributions
to this paper. Support for this work was provided by NASA under Contract NAS5-26555 through
the Space Telescope Science Institute, which is operated by AURA, Inc.
Summary
Solar System Inventory ............. Immediate access to all deep images
Jupiter Atmosphere ................. Special instrumentation
Comet Nuclei ....................... Accurate pointing and tracking
Mars Habitability .................. Rapid response to ephemeral events
Io/Jupiter Magnetosphere ........... Integrated observing campaigns
Extra-Solar Planetary Science ...... Low-scattering optics
DISCUSSION
Danly: Would you like to comment on the tradeoffs between separate telescopes dedicated to particular
problems versus a very versatile telescope?
Brown: I don't believe we could justify both a special purpose planetary 10 m telescope and a special
purpose "deep space" 10 meter telescope. The special requirements of various communities could be
incorporated into the telescope at the beginning with little extra cost. It is expensive to add new
requirements in the middle of a development project.
Diner: An exciting use of a 10 meter telescope would be the high resolution study of Venus at 2
microns, which would have implications for the greenhouse effect, as well as the study at 10 microns
of high latitude clouds for dynamical processes.
Brown: There is certainly no end to the planetary problems awaiting investigation. Discussing the
inner planets also raises another requirement, and that is the need to point the telescope closer than
40-50 degrees to the sun, which has implications for baffling.
Star Formation Studies with a Large Space Telescope
Leo Blitz
Astronomy Program
University of Maryland
College Park, MD 20742
Introduction
Star formation is not a single topic. It encompasses virtually all of astronomy and
virtually all interesting astronomical scales. This short paper can therefore be only the
most cursory overview of the science that can be done with the next generation successor
to the Hubble Space Telescope. Nevertheless, the broad range of science that can be
addressed with this proposed instrument (which I will refer to as the NGST) in attacking
questions of star formation is absolutely overwhelming. Furthermore, it is clear from
other chapters in this report that many of the most interesting questions really devolve
into questions of how stars form on various scales.
The first point that must be emphasized is that the actual process of star formation is
hidden from view at optical wavelengths. The dust that veils the evolution from protostars
into stars makes it impossible for any telescope working at optical wavelengths to probe
into the placental environment to view the actual workings of the birth of a star. It is
for this reason that much of the most important work that has been done in the last
twenty years has occurred at millimeter and infrared wavelengths. It is therefore largely
as an infrared telescope that the NGST will obtain its most important observations for
teaching us how stars form. There are nevertheless important questions that can be
addressed only at optical wavelengths. The primary emphasis, then, will be on the infrared
and secondarily the optical capabilities of the NGST.
In what follows, I first discuss the general areas that make up the topic of star forma-
tion, and then address specific questions that astronomers are currently struggling with.
The last part will be a discussion of the detailed capabilities of the NGST in addressing
some of these questions.
The basic questions that are generally asked on the topic of star formation can be
outlined as follows:
This is the basic question of star formation. What are the detailed physical processes
that control the formation of stars? What controls the formation of duplicity and multi-
plicity in stars? Why do some stars form in clusters? What controls the low end of the
initial mass function? What, in fact, does the low end of the initial mass function look like?
Why has it been so hard to detect the collapse of the interstellar medium into protostars?
How common are planetary systems around other stars? Do planetary systems form
at the same rate as do stars themselves, or are they much less common? What governs
the formation of planets around new stars? Is there hope of detecting other planetary
systems through direct imaging?
All stars currently form from molecular clouds; most form from Giant Molecular
Clouds (GMCs). How do GMCs form? What controls the formation of stars within a
GMC? What determines whether some stars are born in bound clusters and others in
unbound associations? Does the initial mass function form all at once within a given
GMC or is it the product of different modes of star formation (i.e. is star formation
bimodal?)? Why are there apparently no globular clusters forming at the present epoch
in the Milky Way, while at the same time there appear to be many forming in the Large
Magellanic Cloud, one of the two satellite galaxies of the Milky Way? How do globular
clusters form in the first place?
What role do the spiral arms play in the formation of stars in a galaxy? What
determines the efficiency of star formation as a function of radius in the disk of a spiral
galaxy? How does the process of star formation differ in the galaxies that have well defined
spiral arms and those that have flocculent spiral structure? Similarly, what controls the
star formation in S0 and Sa galaxies?
A related question regarding star formation in normal galaxies is, for example,
whether star formation is episodic in dwarf or other galaxies. Does star formation con-
tinue at a reduced rate in elliptical galaxies, especially now that we know that there are
molecular clouds in such galaxies? If so, how does the star formation proceed in ellipticals?
Stars are known to form with particular ferocity in starburst nuclei, and some active
galactic nuclei such as Seyfert galaxies are also known to be starbursts. How do the
conditions differ in the centers of starbursts from those in ordinary galaxies? Do the stars
form differently in the nuclei of starburst galaxies? Is there an evolutionary connection
between Seyfert nuclei and starbursts? What feeds the starburst? Are the ultraluminous
galaxies simply this same phenomenon at a higher rate? Is the formation of quasars
related to the starburst and Seyfert phenomenon?
do we need to revise early cosmology to account for the first generation of stars?)? Was
the initial mass function of the first generation of stars different from the present initial
mass function?
The above questions by no means exhaust the set of important areas of investigation
in the general area of star formation, but simply outlines some of the more obvious
questions as I see them. To answer some of these questions we need to have specific
methods of attack, particular observations that will answer as many of the questions as
possible, and others that will illuminate the correct answers if a direct approach is not
feasible. Let us consider some of the observations that can be made with the NGST to
address a few of the questions outlined above. I shall indicate whether these appear to
be best accomplished in the infrared or optical portion of the spectrum.
lar clouds. Work on infrared arrays, especially the NICMOS system, promises improvements
of orders of magnitude over what is currently possible, so that imaging of embedded stel-
lar populations can be carried out even in other galaxies with unprecedented sensitivity,
resolution, and spatial coverage.
4) Work with these arrays, as well as the unprecedented sensitivity and resolution in
the optical portion of the spectrum, should make it possible to determine how the initial
mass function behaves even below the hydrogen burning limit in both optically visible
which are not currently possible at any wavelength, are the following:
1) Resolution of H2 shock fronts in protostellar regions. This is not currently possi-
ble in the millimeter region, and furthermore, because the H2 can be observed directly,
information on the primary molecular constituent is directly obtainable with the NGST.
2) Observation of the proper motions of protostellar and neostellar jets in 1-2 years.
3) Determination of the rotation curves of protoplanetary nebulae.
4) Direct imaging of rings, voids, and clearing of disks similar to the Kirkwood gaps
in protoplanetary disks.
5) The near infrared spectroscopy of cocoon stars will be possible to distances 10^
times farther than is currently attainable.
7) Determination of the initial mass function of clusters down to A stars in galaxies
as distant as M51.
8) Observations of CO lines at 2.3 µm for stellar typing. It will then be possible to
obtain the velocity dispersions of embedded star clusters.
9) Observations of the star cluster at the galactic center. It will be possible to resolve
essentially all of the stars, determine the velocity dispersion of the stars as a function of
galactic radius and confirm or refute the presence of a black hole at the galactic center.
Having set the stage for what astronomers might like to observe with the NGST, we
now turn our attention to the capabilities of the instrument. The science and the projects
outlined above are necessarily an extrapolation from the interests and activities taking
place at present. By the time the NGST is actually in operation, the science will no
doubt have undergone a considerable change, and some of the questions posed above will
have been answered. However, the representative capabilities discussed below should be
expected to change very little, as long as the design parameters of the instrument do not
change significantly.
In what follows, the design parameters outlined in the technical sections of this report
are used.
100,000 stars! The major technical problem in such an observational undertaking would
be to separate the image of a Jupiter-like planet from the central star. This problem is
discussed in some detail in the contribution from Roger Angel.
Protoplanetary Disks
anywhere in the Milky Way! For the more nearby objects, one could detect structure in
the disks, and possibly see the early stages of clearing in Kirkwood gaps caused by a large
Jupiter-like planet. Furthermore, at distances within one kpc or so, it will be possible
to obtain the detailed temperature structure of protoplanetary disks, which would in turn
provide important data to aid in understanding the chemical and physical evolution of
the early solar system.
Protostellar jets have been the subject of much recent study because they seem in-
evitably to be a by-product of the star formation process and because understanding the
physics of these objects may help to clarify the nature of extragalactic jets. These ob-
jects would primarily be observed in the optical portion of the spectrum. A typical size
scale is 1 pc, which subtends 10 milliarcsec at a distance of 20 Mpc. Optical protostellar
jets could thus be observed in the star forming regions in all of the galaxies in the local
supercluster!
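The 10 milliarcsecond figure quoted for a 1 pc jet at 20 Mpc follows from the small-angle formula; a quick sketch (my own consistency check, not from the paper):

```python
RAD_TO_ARCSEC = 206265.0

# A 1 pc structure at 20 Mpc: angle (radians) = size / distance, same length units.
theta_mas = (1.0 / 20e6) * RAD_TO_ARCSEC * 1000.0
print(f"1 pc at 20 Mpc subtends ~{theta_mas:.1f} mas")
```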
stellar centroids to 0.1 milliarcsec. Thus one could detect the motion of the centroid of a
star to ~ 50 pc. For more massive planets, or for those at larger distances from the central
star than Jupiter is from the Sun, the limiting distances are correspondingly larger. If
it were possible simultaneously to image the planet for which the stellar wobble was
detected, good planetary mass estimates would be possible for the nearer objects, as would
inference of the presence of additional planets too faint to be imaged but nonetheless
massive enough to induce a wobble of the stellar centroid.
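The ~50 pc limit quoted above can be checked from the Sun's reflex motion due to Jupiter; a sketch in which the mass ratio and orbital radius are my assumed values:

```python
mass_ratio = 9.54e-4          # Jupiter/Sun mass ratio (assumed value)
a_jupiter_au = 5.2            # Jupiter's orbital radius, AU (assumed value)
reflex_au = a_jupiter_au * mass_ratio    # Sun's reflex orbit, ~5e-3 AU
precision_arcsec = 1e-4                  # 0.1 milliarcsec centroiding (from the text)

# Wobble amplitude in arcsec is reflex_au / d_pc; solve for the limiting distance.
d_max_pc = reflex_au / precision_arcsec
print(f"Jupiter-induced wobble detectable out to ~{d_max_pc:.0f} pc")
```

This reproduces the ~50 pc horizon stated in the text for a Sun-Jupiter analogue.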
An interesting range over which to determine the low end of the stellar initial mass function
consists of stars with absolute magnitudes in the range M_v = 15-20 mag. With the
NGST, one could have a complete volume limited sample of stars to distances of 100
pc, a volume containing ~ 10^ stars. Thus the initial mass function could be completely
determined to M_v = 20 mag to this distance. Furthermore, since a number of well known
clusters lie within or near 100 pc, it will be possible to determine whether the initial mass
function for the field stars is the same as that for clusters at the lowest stellar masses.
DISCUSSION
Bender : What can one hope to say about the earliest star formation from 10 meter telescope data?
Blitz: There are a number of observational approaches to the problem. The most direct is to try
to determine the metal content of galaxies at the highest z. No instrument will be better than the
proposed telescope for this purpose. Other approaches are to look for stars in what appear to be
otherwise primordial gas clouds, and imaging of starbursts at the highest redshifts attainable.
QUASI-STELLAR OBJECTS AND ACTIVE GALACTIC NUCLEI:
PROSPECTS FOR A 10 METER SPACE TELESCOPE
Joseph S. Miller
Lick Observatory, Board of Studies in Astronomy and Astrophysics
University of California, Santa Cruz, California 95064
I. Introduction
It is sobering to realize that, in spite of the fact that nearly 30 years have passed
since the discovery of quasars, we still know very little about them. In consideration of
what could be learned about them with a very large space telescope (VLST), I will start
with an examination of the major issues associated with QSOs and active galactic nuclei
(AGNs) as they appear at present. This will be followed by a short listing of the assumed
performance of the VLST. I will conclude with an attempt to outline the major areas in
which I would expect the VLST to make a large contribution.
We still do not know what the central source of energy in QSOs and AGNs is or
how it works. It is not even clear that they all have the same kinds of central objects,
luminosity. Accretion onto a massive black hole is the generally accepted hypothesis for
the energy source, but direct observational evidence for this picture is slim at best; it
We still do not understand the mechanisms that produce the continuum from x-rays
to the infrared, even at the level of deciding between thermal and non-thermal processes.
The only exceptions to this are the blazars, where the observations strongly support a
synchrotron emission process. We do not understand the origin of the emission-line clouds
or their motions. In fact, the only things that appear to be well established are that these
clouds are photoionized by a flat spectrum source, and that the clouds come in both high
density and low density versions (or perhaps a continuous range of densities).
The radio sources associated with quasars are also poorly understood. What produces
them, and how is energy supplied? Why are most QSOs radio-quiet, while a small
B. Evolution
While AGNs are clearly located in the centers of galaxies and there is a fair amount
of direct evidence that QSOs are too, the relationship between the active nucleus and
the surrounding galaxy is unclear. Are QSOs and AGNs a natural result of galactic
evolution? That is, if we clearly understood the formation and evolution of galaxies,
would we conclude that QSOs arise naturally and expectedly out of galactic processes,
or are they pathological accidents resulting from some other influence, e.g., interactions
Turning the question around, what is the influence of an active nucleus on the
surrounding galaxy? Does it affect galactic structure or evolution? We don't know.
Since QSOs can have very large luminosities and can produce large amounts of
ultraviolet radiation, they may be important in affecting the ionization level of the
intergalactic medium, but our information about this is rather limited. It is even possible
that QSOs have played a role in galaxy formation. In any event, we can use distant
QSOs as probes of the intergalactic medium and intervening galaxies, a field of research
that continues to be very active and rewarding, even if we learn nothing about QSOs
themselves from these studies.
Finally, as very distant objects with obviously extreme physical conditions, QSOs have
the possibility of revealing new physics. While I don't consider this a likely possibility,
it is still something that should be acknowledged.
In the end we are not even clear about whether or not QSOs and AGNs are important.
By important I mean that they play some vital role in the universe, such as, for example,
important and necessary components in galaxy evolution or formation. Alternatively, they
may be fascinating objects with bizarre properties whose real impact on the universe
beyond them is of little consequence.
Anticipating the discussion in the last section, I expect that a VLST would have
its major impact on issue A above, although some contribution to the remaining issues
would also be expected.
III. Performance
For the discussion in Section IV, I will adopt the following performance characteristics
for the VLST:
λ         Resolution
5000 Å    14 mas
3 µm      75 mas
10 µm     0".25
100 µm    2".5
vastly simpler. In addition, for a number of studies as will be indicated, the full light
gathering power of a 10 meter aperture is extremely important.
IV. Observational Prospects
Even in the nearest AGN the expected size of the black hole and inner luminous
accretion disk (if they exist) would be unresolved (sizes ~ 10~^ — 10~^ pc). Therefore
we could not expect to see resolved the source of the central continuum, unless of course
our current picture of what is going on is wrong. However, the broadline region (BLR)
could be resolved in the nearest AGNs such as NGC 1068. Current models suggest that
the BLR extends over a region roughly covering from 10~^ to of order a few parsecs,
depending on the object. For NGC 1068, 4 mas corresponds to about 0.3 pc, and the VLST
should at least partially resolve the BLR. Information about the geometrical organization
Though the BLR would be resolved only partially in the nearest objects, the VLST
would be expected to resolve fully the narrow line region (NLR) out to redshifts of about
0.1. Even for 3C 273 at z = 0.16 one would expect to resolve the NLR partially. This could
be of great potential value. For example, information on the NLR ionization conditions as a
function of position could give direct information on the properties of the central ionizing
continuum source. Velocity structure within the NLR could give direct information on the origin
of the clouds and the nature of the central gravitational potential.
Our studies of NGC 1068 at Lick with a spectropolarimeter have revealed the presence
of an obscured Seyfert type 1 region in this type 2 Seyfert galaxy. We have interpreted
these results as indicating that the nucleus of NGC 1068 contains an obscuring torus
that blocks our direct view of the type 1 region. Scattering of light from this region by
electrons located outside of the torus allows us to see the hidden region, the scattered
light being highly polarized. The VLST would resolve the scattering region and perhaps
parts of the obscuring torus, giving us direct, detailed information about the structure of
this object if spatially-resolved polarimetric observations were made. For example, if the
ionizing radiation emerging from the central region is anisotropic in flux and hardness, as
some models predict, this could be studied directly. Observations of the blue bump as seen
reflected in a resolved scattering region could provide information on the origin of this
feature, but this would be a difficult observation even with a VLST. It is possible that the
IR emission region could be resolved at some wavelengths, which would provide us with
an understanding of the mechanisms which produce the IR radiation.
B. The central regions of normal galaxies and those with AGNs
Recently evidence has been accumulating that some nearby galaxies (e.g., M31) contain
massive compact objects in their centers, raising the possibility that black holes may be
common features in the nuclei of galaxies. This could also be interpreted as support for the
view that many or even most massive galaxies go through an AGN phase, and that in these
nearby objects we are seeing the relic black holes of the dead QSOs. Two kinds of data are
nearby objects we are seeing the relic black holes of the dead QSOs. Two kinds of data are
important for the investigation of this problem: high spatial resolution velocity profiles of
the nuclear stars and accurate high-resolution light distributions. Clearly the Hubble Space
Telescope (HST) will make very important contributions to this study, but principally only
for relatively nearby objects. It is also important to carry out similar studies of AGNs and
QSOs. For these the higher spatial resolution of a VLST is crucial, as one must overcome
a large brightness contrast between the AGN and the stellar light.
As part of this type of study, there should also be a search for evidence for mergers.
While it is well-established that quite a few active galaxies are involved in interactions
or mergers with nearby galaxies, it is also the case that many active galaxies appear to
be quite isolated. Thus it remains unclear how important interactions are in producing
AGNs. This question could be addressed at some level with VLST observations of the
nuclear stars in AGNs and normal galaxies.
Recently work by Sargent, Filippenko, and others has shown that the AGN phenomenon
extends to very low luminosity active nuclei, nuclei that would be thought to be "normal"
or non-active without a careful study. How much can this result be extended? Do most
or perhaps all galaxies contain a mini or micro-mini AGN? Once again HST will make
important contributions to this question, but mainly for nearby galaxies. To reach out to
more distant objects, important for evolutionary studies, will require the VLST.
For nearby AGNs such as Seyfert galaxies, it is fairly easy to study the general
properties of the surrounding galaxies. But for more distant, luminous QSOs the nature
of the associated galaxies remains unclear. This is an area of research where the HST
will obviously make important new contributions, but again mainly for the nearer objects.
Current evidence suggests that AGN activity increases with redshift, but nothing is
known about the galaxies associated with distant luminous quasars and radio-quiet QSOs.
From the limited studies by Boronson, Oke, Stockton, and others that are possible with
ground-based telescopes, we have learned that the galaxies associated with QSOs often
show distinct peculiarities. A good example is 3C 48, with its very high luminosity
from early-type stars. I believe we will find that the galaxies associated with luminous
quasars are in fact often abnormal; that is, if the AGNs were removed, we would still
Currently there is available some evidence from the studies of van Breugel, Miley,
and others that radio jets can interact in important ways with the interstellar media of the
galaxies in which they are embedded. Visible manifestations of this are emission regions,
perhaps excited by shocks, and regions of enhanced star formation. The VLST could make
unique contributions to the study of radio-source-galaxy interactions. Out to a redshift
of ~ 0.1 it would be able to resolve the regions studied with VLBI; even for 3C 273
some resolution of the inner jet region could be achieved. This range of distance includes
several superluminal sources such as 3C 84. At present we know nothing about the optical
characteristics of these galaxies at similar scales to those observed with VLBI, and HST
will not have enough resolution to do much on this problem.
Further out in the galaxies (d ~ 100-100,000 pc), the jets studied with VLA resolution
could be studied with HST, but the limited light-gathering power of that telescope would
be a limitation. Similarly, searches for optical radiation from radio lobes require large
apertures more for light gathering than for resolution, and the brightness of the sky is a
major handicap for ground-based telescopes.
There is currently a rapidly developing area of study concerning distant luminous radio
galaxies. Much can still be done with ground-based telescopes, and the HST will also make
important contributions in this area. I am sure a VLST would be needed to push this study
to very distant objects, where available observations indicate strange things are going on.
Several QSOs with z > 4 have already been identified. In the context of current
thinking about galaxy formation, these QSOs must be associated with quite young galaxies
or galaxies still in formation. This prospect makes these objects all the more interesting.
Careful studies with a VLST operating at the limit of its performance in light-gathering
power and resolution could provide unique and extremely valuable data on the earliest
stages of galaxy evolution, data which would be unavailable by any other means.
V. Concluding Remarks
I feel I have hit only a few high points in this discussion of the impact of a VLST on
AGN and QSO research. As I pointed out, for many studies the large light gathering power
compared to the HST is as important as or more important than the resolution. I believe
the greatest impact would be on studies of the structure of the central objects themselves,
about which we know embarrassingly little when one considers the effort put into their
study. However, as is generally the case, the greatest discoveries will likely come in
DISCUSSION
Stockman : You mentioned the need for a full aperture - can you expand upon that and can you
comment on the dynamic range that you need?
Miller: The dynamic range will be very large, virtually unlimited. It is hard to be quantitative, but
dynamic ranges of 10*-10^, maybe 10', will be very common. For the other part, the filled aperture,
the structure in AGNs will be rather complicated and it is the details that you care about, not the
overall structure. For example, whether it is elongated, or two blobs instead of one blob, would be
interesting, but the details could be even more interesting than a simple model. I see an interferometer and a filled
Stockman : Is there a specific dynamic range that is required by a VLST to study the surrounding
regions of QSOs?
Miller: I would expect that a dynamic range of about 10' would be very useful if one wanted to study
regions within 0.1" (or even less) of an active nucleus, but I have not done any detailed calculations,
so this is a rough guess.
Thompson: Could you comment on the relative roles of polarization and infrared imaging in studying
embedded sources?
Miller : We really do not know enough about the potentials of infrared imaging. I expect that
polarization and infrared imaging data could be quite complementary, as they have been for star
formation regions.
Illingworth: Can one see directly into the nucleus in the infrared? That is, at a wavelength of order
10 microns, is the optical depth low enough?
STELLAR POPULATIONS IN GALAXIES: THE SCIENTIFIC POTENTIAL
FOR A 10-16 m SPACE TELESCOPE
J. S. GALLAGHER
AURA, Inc. and Space Telescope Science Institute
I. INTRODUCTION
Some examples of the types of questions that we would like to answer are:
* When and how did galaxies form? Was this process relatively
coeval, as outlined in older models, or do galaxies form over a
wide range of times?
Having set the stage on which extragalactic astronomy currently must play,
let me turn to the importance of studies of stellar populations in galaxies
for further progress on fundamental astrophysical issues. As stars form,
they produce an analogy to a geological record by trapping some of the
conditions at the time of their formation, such as elemental abundances or
sometimes the state of the galactic gravitational component. The
evolutionary histories of galaxies, and therefore of the universe, can in
principle be recovered by making appropriate observations of stars within
galaxies (see Norman, Renzini, and Tosi 1986 for an overview on stellar
populations).
1. Overview
The current limitations impose severe constraints. For example, the only
precise ages for very old stellar populations are for Milky Way globular
star clusters. We must apply a kind of generalized cosmological principle
when we think of star clusters (or abundances of radioactive elements and
their decay products) as giving an "age" of the universe. Other galaxies
could well be older than the Milky Way if galaxy formation is not coeval,
which would increase the difficulty in reconciling ages from nuclear
stellar clocks with those found from the expansion rate of the universe. An
irreconcilable difference in these two time scales would obviously have
profound intellectual consequences.
Looking beyond the HST, higher angular resolution and more collecting area
are desirable to push such investigations much more deeply into the stellar
strata of Local Group galaxies, as well as to reach rich stellar populations well
beyond the Local Group. In justifying such a major effort it is useful to
recall the basic advantages gained by observing the individual members of
stellar populations:
Stellar remnants (e.g. white dwarfs; Iben and Laughlin 1989), long-
lived stars, and elemental abundances give us a clue to the past
history of the star formation process. Comparisons between the
evolutionary histories of nearby galaxies and those deduced from in
situ observations of younger galaxies at high redshifts will
provide a key check on our still rudimentary theoretical models
for galaxy evolution. As these models improve, we will make
detailed tests of our ideas about galaxy formation and evolution
by establishing that they are in agreement with observable
characteristics of galaxies at the present epoch.
* Test Particles. Stars are superb approximations to point
masses. They also have the advantage of sharp spectral-line
features which allow velocities to be determined at the km/s level
even for apparently very faint stars. Furthermore, since stars
have tiny angular sizes, they can be used to define precise
coordinate frames that are useful for proper motion studies, which
in combination with radial velocities yield information on the
full three-dimensional dynamical structures of astrophysical
systems.
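The leverage described above — combining proper motions with radial velocities into full space motions — rests on one conversion: a proper motion of 1 arcsec/yr at 1 pc corresponds to 4.74 km/s of transverse velocity. A minimal sketch; the example numbers are illustrative, not from the text:

```python
def transverse_velocity(mu_arcsec_yr: float, distance_pc: float) -> float:
    """Transverse velocity in km/s; 4.74 km/s corresponds to 1 AU/yr."""
    return 4.74 * mu_arcsec_yr * distance_pc

def space_velocity(mu_arcsec_yr: float, distance_pc: float,
                   v_radial_km_s: float) -> float:
    """Total space motion from the transverse and radial components."""
    v_t = transverse_velocity(mu_arcsec_yr, distance_pc)
    return (v_t**2 + v_radial_km_s**2) ** 0.5

# Illustrative: 10 microarcsec/yr proper motion at an LMC-like distance (~50 kpc)
print(f"{transverse_velocity(10e-6, 50_000):.2f} km/s")  # 2.37 km/s
```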
TABLE 1
Stellar Ages
The color magnitude diagrams for star clusters where more than about 5 Gyr
have elapsed since stars last formed have well defined properties. The main
features on such observers' Hertzsprung-Russell (H-R) diagrams are as
follows:
* Red giant branch. After stars have left the main sequence
evolutionary phase, they relatively rapidly change structure to
have compact cores and distended convective envelopes. Stars in
the red giant phase of their lives are primary optical luminosity
sources in older stellar populations. The location of the red
giant branch on color magnitude diagrams is a function of metal
and helium abundances but, for masses below those of the sun, is
less sensitive to stellar mass. Observations of the structure of
the giant branch are therefore particularly useful as probes of
element abundances in galaxies with old stellar populations.
H-R diagrams for young stellar populations are in some ways less clean than
those for very old stellar populations. While the core hydrogen-burning
main sequence is well defined, stars may cross regions of lower surface
temperature more than once during their lifetimes in response to the effects
of mass loss. Because main sequence stars radiate most of their flux in the
ultraviolet, young stellar complexes are dominated at optical wavelengths by
the cooler and intrinsically more luminous blue supergiants that evolve
from massive main sequence stars. Yellow and red supergiants are rare among
very massive stars, and can be mixed up with asymptotic giant branch stars
at all but the highest red supergiant luminosities. Variables are common
among evolved massive stars, and include the classical Cepheid variables
that are primary standard candles in determining extragalactic distances.
There is, however, a key assumption: that the distribution function for
initial stellar masses (the initial mass function), which sets the relative
mass distribution of newly formed stars, is universal.
Thus, to understand the current rate at which galaxies are turning their gas
into stars, we must check the validity of the assumptions about the initial
mass function. This will be done to rather faint magnitude limits in the
Magellanic Clouds (M_V > 9) with the HST, but stars whose main sequence
lifetimes exceed galactic ages cannot be reached even elsewhere in the Local
Group of galaxies. Similarly, in the nearest starburst galaxies the HST
angular resolution corresponds to 1-2 pc, which is insufficient to overcome
crowding effects and allow us to directly study stellar content within
active starbursts.
A 10-16 m space telescope has several attributes which make it well suited
to studies of stellar populations: high angular resolution, large collecting
area, and multi-wavelength coverage. In considering sample programs for
such a facility, I chose to emphasize photometric observations (where the
gains of such a telescope over ground-based facilities are huge) that could
be made in a survey mode. Other powerful capabilities for stellar
population studies include UV/IR spectroscopy, especially in multi-object
modes (for measurements of stellar kinematics, chemical abundances, mass
loss rates, etc.). These areas will all be pioneered by the HST. To be
conservative, I have based the projects on a 10 m aperture; even so the
results would be incredible!
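The gains claimed here are easy to quantify: collecting area scales as the square of the diameter, and diffraction-limited resolution as wavelength over diameter. A rough sketch, with HST's 2.4 m aperture as the reference and an illustrative visible wavelength:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion, 1.22 lambda/d, converted to arcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

def area_gain(d_m: float, d_ref_m: float) -> float:
    """Collecting-area ratio between two apertures (obscuration ignored)."""
    return (d_m / d_ref_m) ** 2

print(f"area gain, 10 m over 2.4 m: {area_gain(10.0, 2.4):.0f}x")
print(f"550 nm limit, 10 m: {diffraction_limit_arcsec(550e-9, 10.0):.4f} arcsec")
```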
accessible to hundreds of galaxies in the nearby universe, thereby testing the
assumption that the Milky Way is a typical galaxy. The methods that now can
be used only to probe galaxies in the local universe will similarly be
extended to high redshifts and thus to younger galaxies. A few programs are
listed below to whet the observer's appetite:
67
Table 2
10 m Telescope Resolution Footprints in Galaxies
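The entries of Table 2 did not survive reproduction, but the quantity it tabulates — the linear footprint of a resolution element at the distance of a target galaxy — follows from the small-angle relation s = θd. A sketch with illustrative distances, not the table's own entries:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

def footprint_pc(theta_arcsec: float, distance_mpc: float) -> float:
    """Linear size in pc subtended by an angle at a given distance."""
    return theta_arcsec / ARCSEC_PER_RAD * distance_mpc * 1e6

# ~0.014 arcsec is roughly the 550 nm diffraction limit of a 10 m aperture
for name, d_mpc in [("LMC", 0.05), ("M31", 0.77), ("Virgo", 18.0)]:
    print(f"{name}: {footprint_pc(0.014, d_mpc):.4f} pc")
```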
I would like to thank the Flagstaff members and associates on the Hubble
Space Telescope WF/PC team, Bill Baum, Hugh Harris, Deidre Hunter, and Dave
Monet, for many stimulating discussions about observations of galactic
stellar populations with high angular resolution from space observatories.
REFERENCES
Hunter, D. A., Gallagher, J. S., and Rautenkranz, D. 1982, Ap.J. Suppl., 49,
53.
Rich, R. M., Mould, J. R., Picard, A., Frogel, J. A., and Davies, R. 1989,
Ap.J. Lett., 341, L51.
van den Bergh, S. and de Boer, K., eds. 1984, IAU Symposium No. 108,
Structure and Evolution of the Magellanic Clouds (Reidel, Dordrecht).
DISCUSSION
Thompson: Could you clarify your comment about the angular resolution of ground-based telescopes
at 10 microns?
Gallagher: Even though the seeing on the ground at 10 microns is good enough that a large telescope
can be diffraction-limited, the high background will limit the problems that can be studied compared
to a cooled large space telescope.
Quasar Absorption-Line Studies with HST Successor
Richard F. Green
The study of quasar absorption lines has occupied substantial amounts of time on
large-aperture telescopes in the last 20 years. The importance of the topic is reflected
in its being an accepted Key Project for HST and a major scientific driver in proposals
sical questions: What is the history of chemical enrichment in various galaxy environ-
ments? What is the major ionizing energy input to the clouds? Competing sources are
the ultraviolet flux and shock energy from stellar winds and supernovae in local regions
of active star formation, as contrasted to the diffuse UV background from quasars and
proto-galaxies. How are the clouds distributed with respect to the large-scale structure
traced out by luminous material? The spatial distribution of the absorbers at higher red-
shifts may hold the clue to determining the evolution of large structures with cosmic
time.
The current state of knowledge has absorption systems classified on the basis of
the visibility of metal-line features and apparent velocity with respect to the quasar sys-
temic redshift. Those systems with velocities within ~10,000 km/s of the quasar red-
shift are the associated systems. The Broad Absorption Line systems are interpreted as
arising in energetic outflows in the vicinity of the quasar, typically manifested in broad
tems could be attributable to halos of galaxies in clusters associated with the quasars,
consistent with the finding of Yee and Green (1987) of an increasing tendency at higher
that the interaction of the radio source with the intracluster medium produces a number
of ionized clouds distributed in velocity along the line of sight.
Absorption systems with redshifts less than those of the quasars by velocities
exceeding 10,000 km/s are generally considered to be intervening. For typical abun-
dances and ionizations, metal lines become visible when the optical depth at the
hydrogen Lyman limit approaches unity. The detected systems are about evenly split
among those with C II stronger than C IV (found only among the systems with the
highest H I columns), those with C IV stronger than C II, and those with the two transi-
tions about equal. Strong line features detected at moderate spectral dispersion are
ionization varying strongly among components. These complex systems are observed
too frequently to be attributed to lines of sight through the centers of rich galaxy clus-
ters, intercepting multiple galaxy halos. In a photoionization model, the diffuse cosmic
potentials lower than that of hydrogen; a local source of additional energy is required,
such as hot, young stars. As shown by several statistical studies (e.g., Lanzetta et al.
1987), the mean free path to absorption in both C IV and Mg II implies a very high
Bechtold et al. (1984), and later confirmed by Sargent et al. (1988a), the distribution in
redshift of the Lyman limit absorption systems is consistent with a uniform space den-
sity (in comoving coordinates) with a constant cross-section. All these lines of evidence
are consistent with associating the optically thick absorbers with galaxies.
If that association is indeed true, then the absorption line systems contain valuable
information about the evolution of physical conditions in the gas phase of galaxies with
cosmic time. Those systems selected in surveys for the C IV doublet were shown by
Sargent et al. (1988b) and Khare, York and Green (1989) to increase in numbers more
rapidly than a uniform distribution back to z = 2, then decrease toward earlier times.
Although the evidence is not yet conclusive, Sargent et al. argue that the early epoch
represents the build-up of the total C abundance. Trace elements in the most optically
thick systems such as chromium and zinc give a measure of the depletion onto grains
and net heavy element abundance ([Fe/H]) in the range of 0.1 to 0.01 solar. The dust to
gas ratio inferred from spectral reddening of quasars by Fall et al. (1989) is about 10%
of the Galactic value, while Chaffee et al. find molecular to atomic gas ratios below
A paradigm for interpreting some of the metal-line systems was offered by York et
al. (1986), in comparing the complex velocity and ionization structure to a line of sight
through a region of active star formation (see Walborn et al. 1984). Turbulence and
ionizing energy input comes from supernova shocks and strong winds from massive
stars, which also provide an intense near-ultraviolet radiation field. The large cross-
dwarf galaxies, in the gravitational potential of what will become a more condensed
system through dissipation and collapse.
The plethora of absorption lines observed at wavelengths less than that of the
Lyman a emission-line of the quasar cannot all be associated with metal lines from the
few strong systems, and are attributed to Ly a. The connection with the metal-line sys-
tems is not clear: at the high column density end, about 20% show weak C III and Si
the difficulties of component blending and profile fitting. There is a strong observed
evolution in the number density of absorbers with redshift, and no evidence for a break
in that trend to the highest redshifts observed (z=4.7 by Schneider et al. 1989). There is
no strong signal for velocity clustering in the Ly a only systems as there is for the
metal-line systems. Study of absorption in quasar pairs with small angular separation
shows some evidence for the existence of voids (Crotts 1989), although the results for
effect", in which the density of Ly a clouds drops near the quasar redshift. This effect
is interpreted in analogy to a Stromgren sphere, in which the quasar ionizes the sur-
rounding intergalactic clouds; the radius (inferred from redshift) out to which the quasar
has measurable influence allows a determination of the diffuse intergalactic radiation
field.
Several theoretical models for the Ly a forest clouds are consistent with the exist-
ing data. It is possible that the clouds are composed of primordial material and are not
associated with galaxies. They could be in pressure equilibrium with a hot, diffuse
intergalactic medium. That equilibrium could depend on size and shape; some systems
could be collapsing under self-gravity, while others could be expanding and "evaporating."
What will HST (and FUSE) teach us about these different absorbing clouds? The
general answer is that the space observations will allow direct comparison of physical
conditions in low-redshift absorbers with those in high redshift absorbers as inferred
tems, which can be interpreted in terms of a changing extragalactic radiation field and
changing galaxy cross-section. HST spectra can be searched to build statistics on the
low-redshift Ly a forest systems. Their distribution can then be compared with the
galaxy distribution in the same volume to see if those clouds populate voids, or trace
the galaxy density. The proximity effect can be derived statistically to determine a
low-z value of the diffuse extragalactic background radiation density. From observing
lines of sight through the Milky Way halo and local group galaxies, the depletion pat-
tern can be determined as a function of metallicity. Because the low-z absorbers can be
associated directly with host galaxies, the problem of current gas-phase metallicity as it
Ultraviolet spectroscopy with HST (and FUSE) will have great value because of
the large number of line diagnostics found at wavelengths shortward of 1200 A that
yield densities and effective temperatures. The problem with working in the same spec-
tral region redshifted to optical wavelengths is the extreme difficulty in de-blending and
making unambiguous identifications, because the line density of metal lines and Ly a
forest lines is so high. Key physical diagnostics, such as C III λλ977, 1175, can be
nearly impossible to disentangle from the large number of Lyman series and other lines
from lower redshift systems. With the smaller number of systems, and sharply
decreased number of Ly a forest lines at low redshift, HST has a distinct advantage.
The key to relating quasar absorption lines to galaxies lies in the redshift range in
which both can be well studied (z < 1). HST cannot adequately address this region
As a benchmark, take a 16th mag, low-redshift quasar. Then the flux in the UV is
about 4 × 10⁻⁴ photons cm⁻² s⁻¹ Å⁻¹. The second generation spectrograph, STIS, will have an
effective area of about 300 cm² for a spectral resolution of 10,000, yielding a count rate
mag, there are ~200 quasars with |b| > 30°; the median z = 0.15!
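The count-rate arithmetic being set up here is simply flux × effective area × resolution-element width, with the element width λ/R. The benchmark flux below (~4 × 10⁻⁴ photons cm⁻² s⁻¹ Å⁻¹) is my own estimate for a 16th-magnitude source, since the exponent did not reproduce cleanly, so treat the result as illustrative:

```python
def counts_per_resel(flux_ph_cm2_s_A: float, area_cm2: float,
                     wavelength_A: float, resolving_power: float) -> float:
    """Photon count rate per spectral resolution element (width = lambda/R)."""
    return flux_ph_cm2_s_A * area_cm2 * (wavelength_A / resolving_power)

# Assumed: ~4e-4 ph/cm^2/s/A at 1500 A for V ~ 16; 300 cm^2 effective area;
# R = 10,000 as quoted for STIS
rate = counts_per_resel(4e-4, 300.0, 1500.0, 1.0e4)
print(f"{rate:.3f} counts/s per resolution element")  # 0.018
```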
probe redshifts beyond 0.15, probably to z > 0.5. Yee and Green (1987) have argued
that by z = 0.6, there has been a significant brightening in the 4000 A radiation of the
typical galaxy, which would certainly correspond to a change in the local component of
ionizing flux above the Lyman limit. That redshift also marks a change in the dynami-
cal properties of clusters that host radio galaxies and quasars.
To derive physical conditions and make detailed velocity comparisons with inter-
vening galaxies requires high-dispersion spectroscopy, which then limits HST to
objects around 16th mag. However, the expectation is that only 1 quasar in 65 will
have a line of sight intercepting an absorber with z < 0.15 (implying 3 systems for our
sample) and only 1 in 15 will catch an absorber with 0.15 < z < 0.75, or some 5 sys-
tems amenable to detailed study. The steepness of the quasar surface density as a
function of magnitude gives a strong advantage to larger aperture for these programs.
For throughput comparable to that of HST, a 10-m telescope would allow high-
dispersion spectroscopy of 19th mag objects, with 2 / square degree; a 16-m would
Several gains will be realized with larger aperture. The physical conditions, such
determined in direct relation to the abundance gradients and local radiation fields
inferred from composite spectra and H II regions of the host galaxies. Increased aper-
ture will allow access to other probes; for example, unreddened late B stars will be
observable beyond the distance of M 33. The interstellar medium of local group galax-
ies can then be examined for velocity structures in star-forming regions, for metallicity
as a function of star formation rate and history, and for depletion as a function of local
metallicity. High quality telescope optics that yield high spatial resolution would pro-
vide an added bonus. With the capability of resolving "spectroscopic" binaries, we
would have continuous spatial probing of adjacent sight lines as the orbital separation
changes. It would be possible to separate turbulent, systematic and thermal motions
along Galactic and Local Group sight lines at resolutions up to 100,000. The specified
images can be used to probe the interstellar medium in the lensing object. At z=0.2, 20
mas separation corresponds to 100 pc, sufficient to probe individual clouds and com-
plexes.
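The 20 mas to 100 pc conversion above can be checked with the low-redshift approximation D ≈ cz/H₀; the Hubble constant is my assumption, since values from 50 to 100 km/s/Mpc were all in use at the time:

```python
import math

C_KM_S = 2.998e5
ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

def linear_scale_pc(theta_mas: float, z: float, h0_km_s_mpc: float) -> float:
    """Linear scale subtended by an angle at redshift z (low-z approximation)."""
    distance_pc = C_KM_S * z / h0_km_s_mpc * 1e6   # D ~ cz/H0
    theta_rad = theta_mas / 1000.0 / ARCSEC_PER_RAD
    return theta_rad * distance_pc

print(f"{linear_scale_pc(20.0, 0.2, 60.0):.0f} pc")  # ~97 pc for H0 = 60
```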
absorption line studies to the level of determining the chemical and dynamical evolution
Synthetic quasar spectrum from Jenkins and Green showing the complex of spec-
tral features anticipated near the O VI line (1032–1046 Å) for the Galaxy and a possible low-redshift
absorption complex, including Ly β (z = 0.007). The resolution is 30,000.
An example showing the necessity for high dispersion (wavelength range 3200–4400 Å). The upper plot is from Chaffee
et al., while the high-dispersion (R=20,000) observation from Green and collaborators
shows the complex velocity structure and blending of the Si IV line.
References
Bechtold, J., Green, R.F., Weymann, R.J., Schmidt, M., Estabrook, F.B., Sherman,
R.D., Wahlquist, H.D. and Heckman, T.M. 1984, ApJ., 281, 76.
Fall, S.M., Pei, Y.C. and McMahon, R.G. 1989, Ap.J. (Lett.), 341, L5.
Foltz, C.B., Weymann, R.J., Peterson, B.M., Sun, L., Malkan, M.A. and Chaffee, Jr.,
Khare, P., York, D.G. and Green, R.F. 1989, ApJ., 347, 627.
Lanzetta, K.M., Turnshek, D.A. and Wolfe, A.M. 1987, ApJ., 322, 739.
Sargent, W.L.W., Boksenberg, A. and Steidel, C.C. 1988a, Ap.J. Suppl., 68, 639.
Sargent, W.L.W., Steidel, C.C. and Boksenberg, A. 1988b, ApJ., 334, 22.
Schneider, D., Schmidt, M., and Gunn, J.E. 1990, ApJ.(Lett.), in press.
Walborn, N.R., Heckathorn, J.N. and Hesser, J.E. 1984, Ap.J., 276, 524.
York, D.G., Dopita, M., Green, R.F. and Bechtold, J. 1986, ApJ., 311, 610.
DISCUSSION
Miller : What is the advantage over the ground of a large space telescope for absorption line studies?
Is it working in the UV on low redshift objects?
Green : Yes, it is the ability to use the diagnostics we know (CIV, SilV, OVI) in the redshift range
where we can also study the galaxy (z = 0.25-0.75). Then we need access to the UV with the same
light-gathering power.
Gunn : The calculations I described about galaxies around quasars indicate that if the absorbers are
galaxies, and dwarf galaxies, they can be seen to redshifts of about 2 with the NGST.
Green : These observations will be key to closing the loop in establishing the relationship between the
absorbers and distant galaxies. I would still emphasize the lower redshift systems, because we would
probably be able to study the galaxies in more detail spectroscopically.
Stockman : Couldn't you use HST or NGST to study the actual clouds seen at redshifts of z ~ 2-3?
This would be the complement of your desire to study clouds at redshifts where the galaxies/clouds
can be observed from the ground.
Green : The high redshift clouds can be related to NGST galaxy imaging, as pointed out by Jim Gunn.
In order to understand the metallicity and velocity structure of a cloud in the context of its location
and local history of star formation, we require detailed spectroscopic study of the associated galaxy;
lower redshift galaxies will be much more amenable to spectroscopy with high spatial resolution.
USE OF A 16 M TELESCOPE TO DETECT EARTHLIKE PLANETS
Roger Angel
Steward Observatory
University of Arizona
Tucson, AZ
Abstract
If there are other planets similar to the Earth, in orbit around nearby stars
like the sun, they could be detected through their thermal emission, and their
atmospheric compositions measured. A large telescope in Earth orbit is necessary,
cooled to around 80 K. In order to overcome the limitations set by diffraction and
zodiacal background emission, a primary mirror area of some 200 m² is required,
and an overall dimension of some 20 m. The 16 m telescope under consideration
at this conference is ideally suited to the task. It could be configured as a filled
round aperture, and the required very high contrast ratio could be obtained with
focal plane instruments that block or interfere sections of the wavefront.
Alternatively, the primary surface could be made from four slightly separated 8 m
monoliths. The latter configuration yields the highest resolution and contrast when
operated as an interferometer, with little negative impact on performance for
direct imaging. A properly optimized telescope should yield spectra of good
quality (λ/Δλ = 100, signal/noise ratio of 50) of earthlike planets as far away as 10
pc, given integrations of a week or so.
Introduction
A popular T-shirt bears the now familiar image of the Earth from space and the
caption, "Good planets are hard to find". It is the sad fact that the exploration of the
solar system has found the Earth to be unique. The differences are perhaps nowhere more
clearly evident than in the beautiful spectra of thermal emission obtained by Hanel
and his colleagues and collected together in figure 1. Mars' atmosphere shows only
carbon dioxide; Venus also shows absorption by sulfuric acid. Only in the Earth's
spectrum are found the features of water and of oxygen in the form of ozone. The
strong features of methane and the steeply rising spectra of the outer planets reflect their
very cold and inhospitable temperatures.
Figure 1. Thermal emission spectra of Venus, Earth, Mars, and Saturn over 5-30 microns wavelength.
consider the far more difficult task of searching for planets similar to the Earth in size
and temperature, and orbiting solar-type stars. Such earthlike planets, with mass one
millionth that of the sun, will be completely undetectable by their gravitational
influence, and can only be found by direct imaging. The difficulty arises because of
their extreme faintness and their very close proximity to their bright host star. The
nearest stars that are like the sun and not in binary systems lie at a distance of about
4 pc, and a survey to include a dozen or so excellent candidates would need to extend
to 8 pc. It follows that the angular separation of planets with orbital radius of one a.u. will
be 1/8 to 1/4 of an arc second.
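The 1/8 to 1/4 arc second figure follows directly from the definition of the parsec: 1 AU subtends 1 arcsec at 1 pc, so the separation in arcseconds is simply the orbital radius in AU over the distance in pc:

```python
def separation_arcsec(orbit_au: float, distance_pc: float) -> float:
    """Angular star-planet separation; 1 AU at 1 pc subtends 1 arcsec."""
    return orbit_au / distance_pc

print(separation_arcsec(1.0, 4.0))  # 0.25  (nearest solar-type stars)
print(separation_arcsec(1.0, 8.0))  # 0.125 (edge of the survey volume)
```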
Figure 2. Fluxes of the sun and Earth as seen from 4 pc distance. From Angel,
Cheng and Woolf, 1986.
A large baseline is essential to resolve the planet's thermal emission from the star.
Even given a cold 16 m space telescope, it will still be necessary to pay very
careful attention to telescope design. The natural diffraction limited resolution for
a circular aperture is 1.2 λ/d, where d is the diameter. The limit is 0.15 arc seconds
for a circular 16 m aperture at 10 microns wavelength; the angular separations of
interest are right at this limit. However the planet will be typically ten million times
(17.5 magnitudes) fainter than the star. The optical design must include a trick to
strongly suppress the diffracted light from the star, if there is to be any hope of
finding a planet. It is also necessary to pay very careful attention to the accuracy of
the mirror surfaces, for tiny figure errors will produce weak features in the diffraction
pattern that masquerade as planetary images. In the following sections we consider
in more detail these technical aspects.
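The two numbers driving the design — the 0.15 arc second resolution and the ten-million-to-one contrast — can be reproduced in a few lines, using the text's coefficient of 1.2 rather than the usual 1.22:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited resolution ~ 1.2 lambda/d, as quoted in the text."""
    return 1.2 * wavelength_m / aperture_m * ARCSEC_PER_RAD

def delta_mag(flux_ratio: float) -> float:
    """Magnitude difference corresponding to a given flux ratio."""
    return 2.5 * math.log10(flux_ratio)

print(f"{resolution_arcsec(10e-6, 16.0):.2f} arcsec")  # 0.15
print(f"{delta_mag(1.0e7):.1f} mag")                   # 17.5
```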
Focal plane instruments are needed to alter the normal Airy pattern of a 16
m telescope. Two options are available. The transmission or phase across the pupil
can be modified to produce very dark regions in the diffraction pattern (apodization).
Alternatively an interferometric beam-splitter can be arranged to cancel the starlight
at one of a pair of foci.
a search of a complete annular region in one image. However, it will not allow
detection of planets closer in angular separation than about 1.6 λ/d, or 0.2 arc seconds
at 10 microns wavelength. Its capability to find planets with 1 a.u. orbits will be limited
to only a few solar-type stars. It is possible, at least in principle, to obtain nearly as
strong star suppression still closer in, by using interferometric instrumentation at the
focal plane of the 16 m telescope. Suppose that the full aperture is masked into two
elliptical apertures, as shown in figure 4a. The diffraction pattern then has a central
narrow bright fringe, with the first dark fringe lying only 0.9 λ/d from the central peak
(0.11 arc seconds). The resolution is higher, but the dark fringe is too narrow to be
useful. In this case, as in the classic Michelson stellar interferometer, the
interference fringes between the two apertures are formed in the focal plane.
Much more efficient suppression of the stellar flux is obtained, without loss of
resolution, if the two wavefronts are exactly superposed with the aid of a semi-
transparent beam splitter. In this case the fringes are projected onto the plane of the
sky, and a point stellar source centered on a dark fringe gives no light at all at the
focus where the interference is destructive. All the energy appears from constructive
interference in the other beam. The situation is reversed for a planet located at angle
0.9 λ/d from the star. This is the principle of the interferometer proposed by
Bracewell and MacPhie (1979).
Figure 3a and 3b. Laboratory images of two sources of very different intensity, obtained with a
mirror apodized as described in the text. The weaker source is marked "planet".
Figure 4a. Apertures in the pupil of a 16 m telescope that define the two elements of an
interferometer.
The theoretical limit of starlight suppression for this type of two-element
interferometer of spacing s is set by the finite angular size of the parent star. The
response at angle θ off the null axis varies as sin²(πθs/λ), where s is the spacing
between centers of the interferometer elements. Suppose the planet is located at a
maximum, and the center of the star is on the null axis. If the star has the same
diameter as the sun, and the planet is at the furthest separation of a 1 a.u. radius
orbit, then the stellar light will be reduced by a factor 1.3 × 10⁻⁵ compared to the in-
phase signal. This background is about 4 times worse than in the apodization case.
Realization of the interferometric null in practice imposes a severe constraint on
pointing. If the telescope points off by one stellar diameter, about 1 milliarcsecond,
the uncancelled image will brighten by an order of magnitude, and be about 1000
times brighter than the planet. In practice it will not be possible to point the 16 m
telescope to the required accuracy, and it will be necessary to servo-control the null
via optical starlight, adjusting the interferometer path lengths to the required accuracy
of a few nanometers.
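The null described above can be sketched numerically. The two-element response goes as sin²(πθs/λ); with the spacing chosen to put a planet (1 AU seen from 4 pc) on the first constructive maximum, the leakage at the limb of a sun-sized star comes out at a few ×10⁻⁵, and averaging over the stellar disc reduces that by roughly a factor of four:

```python
import math

def null_response(theta_rad: float, spacing_m: float, wavelength_m: float) -> float:
    """Two-element nulling response sin^2(pi * theta * s / lambda):
    zero on the null axis, unity at the first constructive maximum."""
    return math.sin(math.pi * theta_rad * spacing_m / wavelength_m) ** 2

wavelength = 10e-6                           # 10 microns
theta_planet = 0.25 / 206265.0               # 1 AU seen from 4 pc, in radians
spacing = wavelength / (2.0 * theta_planet)  # puts the planet on a maximum

# The star's limb sits at R_sun / 1 AU ~ 4.65e-3 of the planet's offset
leak = null_response(4.65e-3 * theta_planet, spacing, wavelength)
print(f"planet response: {null_response(theta_planet, spacing, wavelength):.3f}")
print(f"stellar limb leakage: {leak:.2e}")
```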
Some care is needed to devise a two beam interferometer for the present case,
in which the individual diffraction patterns are not much larger than the fringe
spacing. In the scheme of Bracewell and MacPhie, for example, the individual images
of an off-axis object do not overlap, but separate on either side of the residual star
image. This is of little consequence for widely spaced interferometer elements. In
the present configuration, it prevents full constructive interference of the planet's flux,
and increases the image area and hence sky background noise. Not only is the signal
to noise ratio degraded, but it may be impossible in practice to distinguish a planet
from the much brighter zodiacal light local to the star, which will be symmetrically
distributed on either side of the star. The same type of problem arises in the rotating
shearing interferometer, as described by Diner (1989) at this conference.
from earthlike planets beyond this distance would lead to extremely long and difficult
observations, even if the interferometer gave adequate starlight suppression. For the
nearer systems, there is a distinct disadvantage in using two-element interferometer
baselines longer than are needed to give half a fringe spacing between star
and planet. If there were one and a half fringes, for example, then the star disc gives
nine times more uncancelled background emission.
of a single paraboloidal surface. A single paraboloidal secondary would be used to
make an afocal telescope, which would directly feed an interferometric beam
combiner for planetary work. This arrangement is preferred over an MMT style
telescope, because there is no obscuration by individual secondaries and the field of
view is larger. The
wavefronts from the two pairs of diagonal elements would be superposed in phase
with interferometers like that shown in figure 4b, and the resulting wavefronts then
combined out of phase with a third interferometer. The effect is to superpose on the
sky two sets of sine-squared fringes at oblique angles.
If the mirror centers are at distances r₁ and r₂ from the telescope axis, then
the intensity of the image of a point source at angle θ to the axis and position angle φ
is given by the product of the two sine-squared fringe patterns.
This reduces near the axis to a null whose depth increases as the fourth power of the off-axis angle.
The telescope can be oriented so that the planet lies close to an interferometric maximum
when the star lies on the null axis. We find that with r₁ = 8 m and r₂ = 4.85 m,
points in the pattern with I > 0.6 can be found for all angular radii ≥ 0.1 arc second
(for λ = 10 μm), with I being typically > 0.9. (Complete constructive interference of
all four wavefronts corresponds to I = 1).
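The printed equation did not survive reproduction. Under the beam combination described above (diagonal pairs superposed in phase, then the two results combined out of phase), the normalized intensity works out to I(θ,φ) = sin²[πθ(r₁cosφ + r₂sinφ)/λ] · sin²[πθ(r₁cosφ − r₂sinφ)/λ]. Treat this as a reconstruction, not the paper's own formula; the sketch below checks it against the quoted numbers:

```python
import math

def diamond_intensity(theta, phi, r1=8.0, r2=4.85, lam=10e-6):
    """Reconstructed (assumed) normalized fringe intensity for the
    four-element diamond array: diagonal pairs at +/-r1 and +/-r2 are
    combined in phase, then the two beams combined out of phase, giving
    a product of two sine-squared fringe sets.  theta is the off-axis
    angle in radians, phi the position angle."""
    k = math.pi * theta / lam
    a = math.sin(k * (r1 * math.cos(phi) + r2 * math.sin(phi))) ** 2
    b = math.sin(k * (r1 * math.cos(phi) - r2 * math.sin(phi))) ** 2
    return a * b

ARCSEC = math.pi / (180 * 3600)

# Best intensity available at 0.1 arcsec radius, scanning position angle.
best = max(diamond_intensity(0.1 * ARCSEC, i * math.pi / 360)
           for i in range(720))
```

With this form the on-axis star sits in a null of quartic depth, and at 0.1 arcsec some position angle gives I well above the quoted 0.6 threshold.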
Performance Estimates
The achievable sensitivity depends on many sources of error which cannot be estimated without detailed system modelling. We can estimate here
the errors due to photon noise in the thermal background, which are quite
predictable, and set an absolute lower limit to the integration times that will be
needed to get reliable data.
into the new one and out again. The zodiacal emission in our own solar system is
well represented by a grey body of temperature 275 K and optical depth 7 × 10⁻⁸.
Under these assumptions, the flux incident from the total zodiacal background
on a 16 m telescope in a beam large enough to capture the central bright spot of the
Airy pattern is 23,000 photons/sec in a 1 μm bandwidth at 10 μm wavelength. The photon
flux from a planet the same as the Earth incident on the telescope is about 500
photons/sec if it is at 4 pc distance, 80 photons/sec if it is at 10 pc.
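These rates can be cross-checked to order of magnitude with a simple thermal model. The sketch below assumes an Earth-sized grey body near 290 K with unit emissivity (temperature and emissivity are assumptions, not from the text), and agrees with the quoted figures to within a factor of about two:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
PC = 3.086e16                               # meters per parsec

def planet_photon_rate(d_pc, lam=10e-6, dlam=1e-6,
                       t_planet=290.0, r_planet=6.37e6, d_tel=16.0):
    """Photons/s from an Earth-like thermal source collected by a
    d_tel-meter aperture in a dlam bandwidth at wavelength lam.
    Assumes a unit-emissivity blackbody at t_planet (an assumption)."""
    x = H * C / (lam * KB * t_planet)
    # Planck spectral radiance in photons / s / m^2 / sr / m
    b_photon = (2 * C / lam**4) / math.expm1(x)
    area_tel = math.pi * (d_tel / 2) ** 2
    d = d_pc * PC
    return b_photon * dlam * math.pi * r_planet**2 * area_tel / d**2
```

The 4 pc and 10 pc rates scale exactly as the inverse square of distance, reproducing the quoted ratio 500/80 = 6.25.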
other observations. To get the nearly perfect diffraction limited performance at 10 μm
wavelength, the surface quality (on large spatial scales) is such as to realize normal
diffraction limited imaging at 0.1 μm (ACW).
For normal imaging and spectroscopy, the afocal beam from the telescope
would be brought to a direct focus with an auxiliary telescope in place of the
interferometric beam combiners. The diffraction pattern of the diamond array has
higher resolution than that of a 16 m
circular aperture, but has significant side lobes.
A comparison is given in figure 6. The fraction of the total energy in the central
maximum is 84% for the Airy pattern compared to 60% for the diamond array. Four
of the six prominent side lobes each contain 5% of the total energy, with 2% in each
of the remaining two. For imaging and background limited spectroscopy, the
advantage of higher resolution offsets the disadvantage of lower concentration. Only
a modest degree of image cleaning would be required to remove the sidelobe patterns.
The full width at half maximum power in visible light (5000 Å) is 7.7 mas for the 16
m filled aperture, and 5.2 mas for the diamond array. Interferometric methods would
allow a doubling of this resolution, as would halving the wavelength of observation.
New Technology
smaller segments, but needs to be demonstrated explicitly in the off-axis case. Glass
honeycomb substrates with zero thermal expansion at 70-80K need to be developed.
Borosilicate glass of suitably chosen composition has this property. The facilities for
fabrication into 8 m substrates are now being developed (Angel 1989) and could be
upgraded to handle the more refractory compositions needed for this application.
Given the present state of technology, none of these tasks is beyond our reach.
The search for a planet like our own is a project of immense appeal, and could be the
central goal of an international space telescope project, as envisaged by Bahcall et al.
(1989).
[Figure 6: grey-scale character maps of the two diffraction patterns, panels (a) and (b).]
Figure 6. (a) is the Airy pattern for a filled 16 m aperture; (b) is the pattern with the same angular
scale for a diamond array of 8 m telescopes with r₁ = 8 m and r₂ = 4.8 m.
Acknowledgments
References
Angel, J.R.P., 1988, in "Bioastronomy -- The Next Steps," ed. G. Marx, Kluwer,
153.
Angel, J.R.P., Cheng, A.Y.S. and Woolf, N.J., 1986, Nature, 322, 341.
Bahcall, J.N., Giacconi, R., Rees, M., Sagdeev, R. and Sunyaev, R., 1989, Nature,
339, 574.
Conrath, B.J., Hanel, R.A., Kunde, V.G. and Prabhakara, C., 1970, J. Geophys.
Res. 75, 5831.
Lippincott, E.R., Eck, R.V., Dayhoff, M.O. and Sagan, C., 1966, Ap. J., 147, 753.
Martin, H.M., Angel, J.R.P. and Cheng, A.Y.S., 1988, Proceedings of the ESO
Conference on Very Large Telescopes and Their Instrumentation, ed. M.H.
Ulrich, Garching, Federal Republic of Germany, 353.
Space Science Board, 1988, "Space Science in the 21st Century: Imperatives for
the Decades 1995-2015," National Academy Press, Washington, D.C.
DISCUSSION
Bahcall: How do you know that ozone is only the result of life?
Angel: The oxygen in our own atmosphere is now taken to be due to the presence of life. However,
it would be good to look more closely at this approach.
Pilcher : It might also be good to look at other molecules (e.g., methane) that appear to be in
chemical disequilibrium in our oxidizing atmosphere. These molecules in disequilibrium may also be
useful diagnostics of life. Methane, for example, is in disequilibrium by 7 orders of magnitude while
for oxygen it is about a factor of 20.
Nelson : You stated that the gaps are not important. Can you expand on that?
Angel: The energy from the gaps spreads out over a large area - and so with narrow gaps (say 2%
of the area) the level of the scattered light at the 0.25" ring should be small, even compared to the
Unidentified : Doesn't working in the visible where the Airy disk is so much smaller offset the contrast
gain in the infrared? That is, doesn't the 400× gain in the Airy disk area offset the 600× gain for the
IR in contrast?
Angel: The contrast is not only better in the IR, but the photon flux is much higher, and this
observation is photon-flux limited. The 10 micron O3 line is also much stronger than the visible O2
line.
Illingworth: One would also be concerned about the need for a surface that is significantly better
than that required for the 10 micron observations - and that is already tough.
Fall: Could you do better by looking at brighter (earlier-type) stars at larger distances from the stars?
Angel: The problem is that the number of available stars is extremely small, and that it gets very
Rehfield : Has scatter from the structure been considered in the telescope design?
Angel: Most of the complex structure is outside of the aperture and field of view. Only simple
secondary supports contribute scatter.
Korsch: You need to consider the scattering properties of the additional optics to make the pupil
plane for the apodizer.
Angel: It is really quite straightforward because a pupil plane can be obtained very simply almost
anywhere with such a small field of view (0.25").
Session 3
The Keck Telescope Project
Jerry Nelson
50-351 Lawrence Berkeley Laboratory
Berkeley, CA 94720
Introduction
The Keck Observatory is a project to build a 10-m optical and infrared telescope for
ground based astronomy. The primary is composed of 36 hexagonal segments that
together form an f/1.75 hyperboloid. The segment positions are actively controlled to remove
the adverse effects of gravitational and thermal deformations on the supporting steel
structure.
Details of the project are described in a number of internal reports and memos and
periodic status reports are given in recent SPIE proceedings. A review of the characteristics of the
Project with emphasis on the segment fabrication and the active control system was
presented at this conference. Since most of this information is available in the recent SPIE
proceedings, information of a general nature is not repeated here. We only report here some
interesting aspects of segment fabrication that may be of interest for large high accuracy
telescopes in space.
Segment Fabrication
Our mirrors are being polished at Itek Optical Systems in Lexington, MA. They are
polished as circular mirrors using stressed mirror polishing (SMP). With this technique,
spheres are polished into an elastically deformed mirror so that when the externally applied
deforming forces are removed, the sphere relaxes into the desired off-axis section of a
hyperboloid. Our hyperboloids vary up to about 100 μm from a sphere.
After SMP the mirrors are cut into hexagons. Our experience is that the mirrors
warp a significant amount from this cutting (0.5 μm) and this error along with the
imperfections in the polishing itself makes the mirror unacceptable optically. Our
specification is that the mirrors should concentrate 80% of the light into a 0.24 arcsecond
diameter. After cutting the mirrors are typically about five times worse than this.
Originally our plans were to test these hexagons and make final correction of the
surface using a technique called Computer Controlled Optical Surfacing (CCOS) where a
small (10cm) tool is driven over the surface with a dwell time controlled by computer to
remove the high spots. We tried this technique with two mirrors and found the technique
to be wanting for our application.
There were several problems with CCOS. First, the method only converged slowly
to the desired surface, so many polish-test cycles (~10) were necessary to produce the final
mirror. This of course makes the process relatively expensive. Second, the polishing
process generally roughened the edges of the mirror up rather severely. Edges here mean a
region about the diameter of the tool (10cm) wide going around the entire segment
perimeter. This is about 25% of the segment surface area, so it is important. The edge
surfaces had high spatial frequency errors with amplitudes in the 50nm rms range. This led
to substantial broadening of the image size. Finally, even in the interior there were
apparent grooves or lines caused by the tool motion pattern. These could be easily seen in
interferograms of the surface. Their amplitude was relatively small (~10 nm) so they were
not a concern for observations in the red end of the spectrum, but in the blue-ultraviolet, it
became a concern.
As a result of these difficulties we have chosen to make improvements to our mirror
surface by deforming them gently with a set of corrective forces applied to the back of the
mirror through our passive support system. This approach has some merit because the
largest part of the surface error is in the form of very low spatial frequency errors that can
be removed with these corrective forces. We call this set of springs that applies the
corrective forces "warping harnesses". Our experience is that warping harnesses are very
inexpensive and stable and they correct most of the errors of interest. At the moment,
having made six mirrors, we still are about a factor of two away from our final polishing
goals. We hope that as our fabrication experience grows our image quality will improve to
the desired level.
DISCUSSION
Johnson: What is the reliability and performance life of the actuators on the segments of the Keck
Telescope?
Nelson: The most significant problem noted is the possibility of leaking hydraulic fluid. Initially,
many leaks were noted. Tightening up eliminated the leaks for now. In operation, it may be necessary
to replace some actuators periodically. This replacement (and leakage) would be a problem in space,
much more than on Earth.
Nelson: We can safely apply forces up to about 300 N per support point. Typically we warp our
mirrors with forces a factor of 5 less than this.
Manhart : Do you warp each mirror as it is being tested in its own harness, then mount it in the
telescope and hope that it doesn't change? How often do you check its figure?
Nelson : Each mirror is tested after cutting at Itek. Our experience with warping harnesses is that their
performance is predictable to within measurement errors. We can (and will) also test each segment in
the telescope using a Hartmann-Shack camera with 240 points/segment.
Swanson: Is the Hartmann test sufficient to phase the primary mirror in an absolute sense? You are
measuring relative edge displacements rather than absolute piston errors.
Nelson: Yes, it is adequate. The Hartmann test uses four colors determined by filters to eliminate
fringe ambiguity.
Korsch: Are the segments cut slightly different according to their position on the surface, as regular
hexagons do not ideally fill a parabolic surface?
Nelson: The segments appear as regular hexagons as seen by a star. Viewed normal to the optical
surface, one sees the segments are actually non-regular by typically a few millimeters.
Kahan : Can you provide ball park segment production costs for generating/polishing and coating
boules?
Nelson: The boules would go for about $100k each, fabrication would be about $50k, and coating
(aluminum) would be minor. All this presumes successful continued use of the warping harness.
Stressed-lap Polishing
Abstract
We present a method for polishing fast aspheric mirrors to high accuracy. It is based
on an actively stressed lap whose shape is changed continuously to make it fit the mirror
surface as it moves across that surface and rotates. A stressed lap 600 mm in diameter
has been built to polish a 1.8-m f/1 paraboloid. It consists of a rigid aluminum disk which
changes shape continuously under the influence of 12 moment-generating actuators. These
actuators are programmed to produce the shape changes necessary to give the correct off-
axis parabolic surface at all times. In this paper we describe the principles and design of
the lap, and progress to date in polishing the 1.8-m mirror.
1. Introduction
At the meeting Angel gave a discussion of stressed lap polishing as part of a more
general description of the Columbus project. The Columbus telescope is the largest planned
by the US community, to be located at 3200 m elevation on Mt. Graham in Arizona. It will
have two 8 m mirrors on a common mount, each having a focal ratio of 1.2, and an image
quality of 0.125 arcsec FWHM. First light (for the first mirror) is projected for 1994. This
report focuses only on the mirror polishing method to be used for the Columbus telescope,
which is also of direct relevance to future space telescopes. A recent description of the
Columbus and other ground-based telescopes is given by Angel et al. (1989).
Looking beyond the Hubble telescope, it is clear that telescopes in space will need
mirrors that are not only larger, but also smoother and more accurate. The natural
short wavelength limit for normal-incidence telescopes is at 91 nm. Diffraction-limited
performance at this wavelength, with a Strehl ratio of 90%, requires a surface accuracy
of 2 nm rms. Very much higher Strehl ratios, > 1 − 10⁻⁵, are needed to avoid scattered
light for the detection of Earthlike planets at 10 μm wavelength. In this case a surface
accuracy of 2.5 nm rms must be maintained over the largest spatial scales. To avoid
diffraction spikes, the mirror surfaces should ideally be off-axis segments of an aspheric
parent surface. For compact telescopes with such optics, the mirror surface will also be
strongly aspheric.
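The quoted surface accuracies can be checked with the Maréchal approximation, taking the wavefront error as twice the surface error on reflection (a standard relation, assumed here rather than stated in the text):

```python
import math

def strehl(sigma_surface, lam):
    """Marechal approximation: S ~ exp(-(phase rms)^2), where the
    reflected-wavefront error is twice the surface error (assumption)."""
    phase_rms = 2 * math.pi * (2 * sigma_surface) / lam
    return math.exp(-phase_rms**2)

# 2 nm rms surface at the 91 nm normal-incidence limit -> ~90% Strehl.
s_uv = strehl(2e-9, 91e-9)

# 2.5 nm rms surface for planet detection at 10 microns -> Strehl
# within about 1e-5 of unity, matching the scattered-light requirement.
s_ir = strehl(2.5e-9, 10e-6)
```

Both quoted tolerances fall out directly: the same few-nanometer surface that is merely "diffraction limited" at 91 nm is extraordinarily smooth in Strehl terms at 10 μm.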
These combined requirements of size, smoothness, accuracy, asphericity and lack of axisymmetry are well beyond the
present state of the art in polishing, and will remain impossible with almost all existing
polishing techniques.
The method of stressed-lap polishing holds the promise of solving all these problems. It
is currently being tested for the first time, in the manufacture at the Steward Observatory
Mirror Laboratory of the primary mirror for the Lennon Telescope being built by the
Vatican Observatory and the University of Arizona. This mirror is 1.83 m (72 in) in
diameter with a vertex radius of curvature equal to 3.66 m (144 in) and conic constant
equal to −0.996. Thus the mirror is slightly ellipsoidal, for use in an aplanatic Gregorian
telescope, but is virtually an f/1 paraboloid. The surface has a peak-to-valley deviation
from the best fitting sphere of 445 μm. The Vatican mirror forms a stringent test for the
Columbus (and Magellan) telescope mirrors, which at 8 m and f/1.2 have not much larger
absolute asphericity (1.1 mm peak to valley deviation from the best fitting sphere). For
comparison, the fastest 4-m class mirror polished to date has about 5 times less asphericity
than the Vatican mirror.
2. Mirror specifications
Telescopes on the ground are limited in performance by distortion caused by thermally
induced variations of the refractive index of air. It might at first sight seem that the
specification for ground-based telescopes would be much more relaxed than for space optics,
and that the technologies for polishing would be quite different. In fact, the atmosphere
under the best conditions causes little degradation on small spatial scales, up to about 0.5
m. It follows that ground-based mirrors must be very smooth in this domain if they are
not to degrade performance. Smoothness on these scales is the most demanding task for
mirror polishing, and thus the new ground-based and space-based mirrors drive the same
critical new technology development.
The specification for the figure of the asphere for the Vatican mirror is given in terms
of the structure function, or mean square phase difference between point pairs in the
aperture as a function of their separation (Brown 1983). We choose a structure function of
the same form as the atmospherically induced phase distortions according to the standard
Kolmogoroff model of turbulence (Tatarski 1961), with a scale factor corresponding to
an image FWHM of 0.125 arcsec at 633 nm wavelength. Since the atmospheric wavefront
distortion approaches zero toward small spatial scales, the relevant requirement for the mirror
is that little light is lost due to diffraction by small-scale irregularities, as discussed by Angel
(1987). The specification chosen allows a loss of 20% of light at 350 nm wavelength due to
small-scale irregularities (6% loss at 633 nm), with virtually no effect on the image size.
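The scaling behind this specification can be sketched with the standard Kolmogorov phase structure function and the usual seeing relation FWHM ≈ 0.98λ/r₀ (both standard formulae, assumed here rather than quoted from the text):

```python
import math

ARCSEC = math.pi / (180 * 3600)

def fried_r0(fwhm_arcsec, lam):
    """Fried parameter implied by a seeing-limited FWHM ~ 0.98*lam/r0."""
    return 0.98 * lam / (fwhm_arcsec * ARCSEC)

def surface_structure_rms(sep, r0, lam):
    """rms *surface* height difference (m) between points separated by
    sep, from the Kolmogorov phase structure function
    D = 6.88*(sep/r0)**(5/3); wavefront error is twice the surface
    error on reflection (assumption)."""
    d_phase = 6.88 * (sep / r0) ** (5.0 / 3.0)          # rad^2
    wavefront = math.sqrt(d_phase) * lam / (2 * math.pi)
    return wavefront / 2.0

# Scale factor for an image FWHM of 0.125 arcsec at 633 nm:
r0 = fried_r0(0.125, 633e-9)     # roughly one meter
```

With r₀ near 1 m, the allowed surface error between points 0.5 m apart is only tens of nanometers rms, which is why smoothness on sub-meter scales drives the polishing.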
There are no definite plans yet for use of very aspheric mirrors in space astronomy,
but it is worth noting that the mirror needed for the SOFIA airborne telescope is very
similar in size and figure to the Vatican mirror. Working at infrared wavelengths, it does
not need to have such an accurate surface, and could be finished easily by the stressed lap
method.
3. Rationale for the stressed lap
A number of innovative polishing techniques have been proposed for the 8-m-class
mirrors and used on smaller and slower mirrors. The general problem that any viable
polishing technique must overcome involves conflicting requirements for the polishing tool.
The tool should be large and rigid, and must fit the mirror surface to an accuracy of
order several microns. While a small rigid tool can maintain a good fit to the mirror
surface even as it translates and rotates, it has to be so small (15 cm in diameter to fit the
8-m f/1.2 mirror to within 3 μm) that polishing becomes unacceptably time-consuming.
Furthermore, the tool's natural smoothing action works only on scales smaller than its
diameter, so one has to actively control the figure over a large range of scales. The opposite
extreme, a large flexible tool, must be sufficiently flexible that it rides over roughly 1 mm
of asphericity. Such a tool will also ride over large errors in the mirror figure without
exerting a significant corrective action.
We have developed a way to combine the advantages of large tools and rigid tools,
by building a large rigid lap that is bent actively as it translates and rotates, so that it
continuously matches the changing aspheric surface with which it is in contact. Actuators
mounted on the lap apply edge moments to change its shape, based on commands from
a computer that monitors the lap's position and orientation with respect to the mirror
and evaluates the required forces. In this way the stressed lap removes the difficulties
associated with asphericity from the polishing operation, in much the same way that a
null corrector removes them from testing. The asphericity is, in a sense, transparent to
the optician, who polishes the mirror as though he were polishing a sphere with a passive
lap.
The stressed lap consists of a metal disk with actuators attached to the upper face,
and coated on the lower face with the traditional layer of pitch. The lap is pressed against
the mirror prior to polishing, allowing the pitch surface to take on the correct absolute
shape for one particular position and orientation of the lap. When the polishing run starts,
the actuators must induce the correct changes in shape as the lap moves relative to the
mirror.
One of the key aspects of making the stressed lap work was to find a reasonably simple
arrangement of actuators that would accurately induce the required shape changes. These
shape changes can be expressed as a sum of Zernike polynomials. For the 1/3-diameter
lap that we use for the f/1 mirror, they are given to an accuracy of 0.3% by the first three
polynomials that involve bending, namely defocus, astigmatism and coma. In their work
on stressed-mirror polishing, Lubliner and Nelson (1980) showed that these bending modes
can be produced in a thin plate by a distribution of bending moments and shear forces
acting on the edge of the plate. We use the same principle for the stressed lap.
The actuators that bend the lap work independently of the polishing machine, which
drives the lap in translation and rotation. Since these actuators apply no net force or
moment to the lap, they can be mounted and connected internally so that both applied
forces and reaction forces are taken by the lap. As Lubliner and Nelson pointed out, the
shear forces can be replaced by an equivalent distribution of twisting moments, which are
simpler to apply with the internal system. Such bending and twisting moments are applied
by lever arms attached to the perimeter of the disk and extending out of the plane of the
disk. The ends of these lever arms are connected in groups by elements in tension, and the
actuators vary the tension of these connecting elements. Angel (1984) and Martin, Angel
and Cheng (1988) describe the general system, and an early version of the actuator design.
The actuators have been modified in the light of further analysis and experiment, and the
current design being used for the 1.8-m mirror is shown in figure 1.
Figure 1. Side view of a stressed lap actuator. The actuator column serves as a lever arm
which transmits bending and twisting moments to the lap plate. The actuator consists
of a servomotor mounted inside the lever arm, a lead screw, and a linkage that converts
the vertical force of the lead screw to a horizontal tension on a double band of steel. The
applied force is measured and controlled through the deflection of a steel leaf spring to
which the steel band is attached at its opposite end.
The lap is stiff enough not to conform to errors in the mirror figure that are outside the specification. The worst
situation occurs when the lap overhangs the edge of the mirror by at most 1/4 its diameter,
in which case one can show that the lap distorts to give a maximum slope error of about
0.1 arcsec.
The stressed lap control system provides real-time, closed-loop control of the actuator
forces. These forces are related to the lap deformations through an off-line calibration
procedure that is performed as often as necessary to maintain accuracy, typically once
per month. During operation, the control computer issues command signals which pass
through a slipring onto the rotating lap. These signals update the command positions for
analog servocontrol circuits controlling the actuators. The update rate is 1.8 kHz for the
entire set of twelve actuators, so the force variations are smooth and the instantaneous
forces are within 0.1% of the required values.
Our intention is to polish the mirror from start to finish with a single stressed lap, as
opposed to several laps of different diameters. In order to develop techniques that would
allow us to figure the mirror with this subdiameter lap, we initially polished the mirror as
a sphere using a passive lap of the same size and stiffness as the stressed lap. This exercise
also gave us the opportunity to solve the problems of mirror support and thermal control
without the added complication of debugging the stressed lap.
The mirror was not generated prior to being finished as a sphere, but was lapped
with loose abrasives from the spin-cast paraboloid, which had local surface irregularities
of roughly 1 mm and overall astigmatism of about 50 μm. The 1/3-diameter lap has little
automatic tendency to remove large-scale errors such as astigmatism; these errors were
removed explicitly by varying the dwell time and rotation rate of the lap as a function
of position on the mirror. Astigmatism was removed by varying the mirror rotation rate
through each rotation so that dwell time increased in proportion to desired removal.
The final figure achieved with the mirror as a sphere meets the specification described
in Section 2, as shown in figure 2.
[Figure 2: log-log plot of surface error versus point separation, 10 to 100 cm.]
Figure 2. Surface error as a function of spatial scale on the 1.8-m mirror as a sphere. The
straight line shows the surface error corresponding to the atmospheric wavefront distortion
for 1/8-arcsec seeing, and the curve includes an allowance for small-scale errors that scatter
a small fraction of the light.
The theory of plate bending establishes that we can produce very nearly the correct
shape changes in the stressed lap with the distribution of edge moments that we apply.
This was confirmed independently through finite-element analysis performed by Walter
Siegmund. The theory can be used to obtain the forces required to bend the lap into the
proper shape as a function of its position and orientation, but the accuracy of this predic-
tion would be limited by our knowledge of the plate properties, tolerances in machining
and assembly, and accuracy of the force measurements. Instead, we use plate bending
theory only to design the lap, and determine force values empirically. Our only measure
of actuator force is the displacement of the leaf spring shown in figure 1, which is the
feedback signal for the actuator's servocontroller. We have made no attempt to calibrate
this in terms of absolute force. We simply determine the feedback signals required to give
the correct lap deformations.
The required lap deformations are computed as a function of the offset r from mirror center to lap center, and orientation θ
with respect to the direction to the mirror center. We choose a reference set of actuator
feedback signals to correspond to the symmetric shape required for r = 0. These values
are fairly arbitrary as the pitch will take on the correct absolute shape for one position and
orientation. The 32 displacements are read for these reference signals; all shape changes
and all other signals are measured with respect to these reference values.
For a number of positions (r, θ), the actuator signals are varied to minimize the sum
over the 32 sensors of the squared difference between measured and ideal displacement
(Angel 1984). We achieve an rms error of 3 μm or better over the entire range of r out
to 750 mm, the maximum lap excursion. We need resolution of about 0.5 mm in r and
4 arcmin in θ to determine actuator signals to the required accuracy. The signals vary
smoothly with r and θ, however, so we store them as smaller arrays of coarsely gridded
values, and compute derivatives as finite differences between adjacent values. The control
computer uses these arrays, with resolution of 25 mm in r and 1.4° in θ, along with the
encoder readings for r and θ, to interpolate actuator signals which are sent to the lap.
Since this computation is done in real time, only a simple linear interpolation is possible.
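A minimal sketch of such a table lookup, with a hypothetical calibration table on the coarse grid described in the text (the placeholder signal values, grid handling and function names are illustrative, not the real control code):

```python
import numpy as np

# Hypothetical calibration table for one actuator: feedback signal on
# the coarse (r, theta) grid of 25 mm in r and 1.4 degrees in theta.
r_grid = np.arange(0.0, 0.775, 0.025)            # m, out to 750 mm
t_grid = np.radians(np.arange(0.0, 360.0, 1.4))  # rad
signal = np.sin(3 * t_grid)[None, :] * (r_grid**2)[:, None]  # placeholder

def interp_signal(r, theta):
    """Bilinear interpolation of the calibration table at (r, theta),
    as the real-time controller must do between grid points."""
    i = min(int(r / 0.025), len(r_grid) - 2)
    j = min(int(theta / np.radians(1.4)), len(t_grid) - 2)
    fr = (r - r_grid[i]) / 0.025
    ft = (theta - t_grid[j]) / np.radians(1.4)
    return ((1 - fr) * (1 - ft) * signal[i, j]
            + fr * (1 - ft) * signal[i + 1, j]
            + (1 - fr) * ft * signal[i, j + 1]
            + fr * ft * signal[i + 1, j + 1])
```

Because the stored signals vary smoothly, this simple scheme recovers grid values exactly and midpoints as averages, which is all the 1.8 kHz update loop needs.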
We believe that the accuracy of 3 μm rms achieved during calibration is adequate
to polish the mirror to its final figure. Currently the accuracy achieved during operation
is limited to a value several times worse because of mechanical hysteresis in the system.
Friction in the bearing located at the connection between the tension band and the leaf
spring produces a bending moment that is not measured by the leaf spring's deflection and
which exhibits hysteresis through the cycle of forces. This problem does not seem to have
limited our ability to polish the mirror up to the present time, but we expect that it would
eventually limit the accuracy of the mirror figure. It will be largely eliminated in a future
implementation which replaces this bearing with a flexure pivot.
Following completion of the sphere, the mirror was generated as a paraboloid using the
Large Optical Generator (Shannon and Parks 1984) at the University of Arizona Optical
Sciences Center. The generated surface was accurate to 4 μm rms and 20 μm peak-to-valley,
with subsurface damage extending only 30 μm deep. It was therefore possible
to start polishing with pitch directly, without going through the usual process of loose-
abrasive grinding with a sequence of grit sizes. The lap has removed material from the high
zones left by the generator, as expected. As we write this paper the surface has not been
lowered enough to meet the lowest zones left by the generator; about 80% of the surface
now has a polished shine. The polished surface is extremely smooth, indicating that the
lap is fitting the aspheric surface adequately. As long as the lap produces a surface that is
smooth on small scales, we can control the overall mirror figure by varying the polishing
strokes.
We are currently building a 1.2-m diameter stressed lap that will be used to polish a
3.5-m f/1.5 paraboloid starting in mid-1990. Laps of 2-2.5 m will be built for the larger
primaries of the MMT upgrade project (6.5 m f/1.25) and the Columbus and Magellan
projects. These will use the same concept as the stressed lap for the 1.8-m mirror, with
modifications to the actuators and force-measuring devices based on our experience with
the first lap.
The stressed lap and polishing machine currently being used on the 1.8-m mirror
will eventually be used to polish the large wide-field secondary mirrors for the 6.5- and
8-m primaries. These secondaries are convex hyperboloids roughly 2 m in diameter, with
asphericity comparable to that of the 1.8-m primary. The stressed lap does not care
whether the surface is concave or convex, or precisely what shape changes are required,
as long as they can be represented accurately as a sum of the three lowest-order bending
modes. For the conic sections of revolution used for all classical telescope designs, a
relatively large stressed lap can be used.
For off-axis conic sections and general aspherics, one can always represent any suffi-
ciently small sub-aperture in terms of the low-order distortions that the stressed lap can
produce. For a severe aspheric, as one increases the lap size, the higher-order distortions
eventually become important and the lap will not fit the mirror adequately. This limiting
lap size, however, will be many times greater than that of a passive lap which cannot
accommodate any change of shape beyond the tolerance in fit.
9. Acknowledgements
The development of the stressed lap has been possible only through the combined
effort of a number of scientists, engineers and programmers. We particularly acknowledge
the contributions of Dave Anderson, Bob Nagel, Dick Young, and the Steward Observatory
technical division.
References
Angel, J. R. P. 1984, Proc. IAU Colloquium 79: Very Large Telescopes, Their Instrumentation and Programs, ed. M.-H. Ulrich and K. Kjar, 11.
Angel, J. R. P., Hill, J. M., Martin, H. M. and Strittmatter, P. A. 1989, Astrophysics and Space Science, 160, 55.
Martin, H. M., Angel, J. R. P. and Cheng, A. Y. S. 1988, Proc. ESO Conference on Very Large Telescopes and their Instrumentation (Garching: ESO), 353.
DISCUSSION
Macchetto: Wouldn't you end up with different micro-roughness in different places with the stressed lap technique?
Angel: We expect that with large laps matched to the radius of curvature, micro-roughness will be very small. This technique may be the best hope for generating optics with smooth surfaces.
Large Telescope O-IR Astronomy from the ground
N. Woolf
Introduction
What you would like me to say is that large telescope diffraction limited observations can't
be done from the ground. Failing that, you would like me to say it CAN be done from the
ground. I bring you the worst message of all. We don't know whether it can be done from
the ground. Or rather, we suspect that some things can be done from the ground, and
some cannot, and it will take a great deal of observational and experimental work to find
where the boundary lies.
There are of course some things that will never be done from the ground, because the
atmosphere absorbs too much. Thus the region from 3000 to 912 Å is totally reserved for
space observations. Similarly, telescopes on the ground will always have some surfaces with
noticeable emissivity at ambient temperature. There will thus be an IR background that
limits the sensitivity of equipment at wavelengths beyond 3 or 5 microns. And at some
wavelengths airglow will make space the preferred location because of its reduced sky
brightness. But in between lies the bulk of the region where astronomy has cut its teeth,
and where we know most. That is the obvious region to advance with diffraction limited
imaging by large telescopes.
Adaptive optics
The process for correcting images to the diffraction limit in real time is called adaptive optics.
There is the use of some kind of flexible surface to correct for wavefront errors put in by
the atmosphere, and there is the sensing and measuring of those errors to know how much
to correct.
Because the refractive index of air is fairly constant across the visible and IR region, the
wavefront error information sensed from one wavelength region can be used to correct at
some other region. Because the accuracy of correction needed is reduced when the
wavelength is longer, it is much easier to correct an infrared image in real time than an
optical image, and the image will stay corrected for a longer time and be corrected over a
wider field. The time and field diameter increase somewhat faster than the wavelength, so
that there are somewhat more diffraction limited pixels in the correctable field at longer
wavelengths.
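These scalings can be sketched numerically. A minimal sketch assuming the standard Kolmogorov-turbulence exponent of 6/5 for the coherence length, coherence time, and isoplanatic angle; both that exponent and the reference seeing values below are assumptions of mine, not numbers from the talk:

```python
# Sketch: under standard Kolmogorov-turbulence scaling (an assumption; the
# talk only says "somewhat faster than the wavelength"), the Fried parameter
# r0, coherence time t0, and isoplanatic angle theta0 all scale as
# lambda**(6/5), while the diffraction limit scales as lambda, so the number
# of diffraction-limited pixels per axis in the corrected field grows slowly,
# as lambda**(1/5).

def scale(value_at_ref, lam, lam_ref, exponent):
    """Scale a seeing parameter from its value at a reference wavelength."""
    return value_at_ref * (lam / lam_ref) ** exponent

# Illustrative reference values at 0.5 um for a good site (assumed numbers):
lam_ref = 0.5e-6      # m
r0_ref = 0.15         # m, Fried parameter
t0_ref = 0.005        # s, coherence time
theta0_ref = 2.0      # arcsec, isoplanatic angle

for lam in (0.5e-6, 2.2e-6, 10e-6):
    r0 = scale(r0_ref, lam, lam_ref, 6 / 5)
    t0 = scale(t0_ref, lam, lam_ref, 6 / 5)
    th = scale(theta0_ref, lam, lam_ref, 6 / 5)
    print(f'{lam * 1e6:5.1f} um: r0={r0:6.2f} m  t0={t0 * 1e3:7.1f} ms  theta0={th:6.1f}"')
```

The output illustrates the point in the text: at 10 microns the coherence time and corrected patch are both tens of times larger than in the visible.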
The length of time for which the image stays corrected is the length of time for which the
rms wavefront error stays within our diffraction limited tolerance of 1/10 to 1/20 wave.
Because this time is very short in the visible, the correction must be carried out on a star
with a substantial photon flux, somewhere between 8th and 11th magnitude. However,
there are experiments underway, using the telescope to project an artificial star by reflecting
sodium D lines off the upper atmosphere, and correcting on the image of that. If the
process is successful, diffraction limited imaging will be available anywhere in the sky.
The next problem is that the wavefront errors arise at various layers throughout the
atmosphere, while the correction is currently applied at a single surface, which can at most
be conjugate with a single layer of the atmosphere. As a result, the correction is only
appropriate for a small area, the isoplanatic patch, around the light source, and in
consequence only a limited area of sky is corrected to the diffraction limit. There are,
however, possibilities of having more than one corrective surface, and so widening the
region of sky. The expected final result is that visual images will be corrected across a
patch of sky some 2" to at most 20" in the visible, while at 10 microns the corrected area
will be some minutes of arc in size. The correction at longer wavelengths will be better
and much easier, giving a longer coherence time, a larger isoplanatic patch, and being
possible on a fainter star if the initial seeing is superior. Adaptive optics is a way of
improving the best sites; it is relatively poor at coping with the inferior ones.
Because a patch of light reflected from sodium in the upper atmosphere will show a parallax,
the size of telescope is limited for having the source in the same isoplanatic patch for all
parts of a large primary mirror. Conquering this problem will need a complex solution, but
it is not in principle impossible.
of space.
If we had such a giant telescope, with such spectacular angular resolution, what would we
do with it? There are three obvious areas of research. The first is studies of intrinsically
bright objects, such as the surfaces of stars; the second is a study of the regions close to
stars for objects related to those stars. The third is to say that the universe is similar in all
directions, and so the regions close to the direction to stars are just like any other region,
and one can study cosmology in them, and very well too, IF the telescope reaches sky
background in the area we are observing. Thus the dark sky point source faintness limit
of a 100-m diffraction limited telescope is under those circumstances 7 magnitudes fainter
than the limit of a 4-m telescope, and 5 magnitudes fainter than the limit of a 10-m telescope.
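The quoted gains follow from how the background-limited, diffraction-limited point-source limit scales with aperture; a quick check:

```python
from math import log10

def magnitude_gain(d_big, d_small):
    # For a diffraction-limited telescope working against a uniform sky
    # background, point-source S/N scales as D**2: the collected signal grows
    # as D**2 while the background under the shrinking PSF stays constant.
    # The limiting flux therefore scales as D**-2, and the magnitude gain is
    # 2.5 * log10((D_big / D_small)**2) = 5 * log10(D_big / D_small).
    return 5 * log10(d_big / d_small)

print(round(magnitude_gain(100, 4), 1))   # ~7.0 mag, as quoted vs. a 4-m
print(round(magnitude_gain(100, 10), 1))  # 5.0 mag vs. a 10-m
```

The D**2 scaling is the standard background-limited case; it is why the text stresses reaching the sky background as the precondition.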
The background limit of telescopes with adaptive optics
If we observe near a star, using the star's light for adaptive optics, there is a concern that
the scattered light from the star will exceed sky background. On the other hand if there
is an artificial star made with sodium light, the sodium emission is easily filtered away. But
we currently better understand how to use the real star as a source for adaptive optics.
Thus it is appropriate to consider the scattered background around a star. For problems
of planetary searches, from any platform, the problem of scattered light is a crucial one.
Ivan King (1971) published the profile of a star image, and it was later shown by two
different groups that the profile consists of two parts: an inner region which on the
ground is dominated by seeing, and an outer region which is called the aureole.
Following this talk, and at this meeting, a consensus developed that the aureole is the result
of the residual polishing imperfections on the mirror. On the small scale, these are referred
to as "orange peel" and "dog biscuit". The larger the scale of the structure, the closer in is
the scattered light; thus orange peel and dog biscuit affect the outer parts of the aureole
from about 10" out, while ripple due to the use of small tools affects the area further in.
There are possibilities of substantially reducing the ripple by (1) polishing aspheric surfaces
with a stressed lap, (2) avoiding segmentation and controlling mirror shape at the support
points, and (3) avoiding any obscuration in the beam by use of off-axis optics. The
reduction of orange peel and dog biscuit seems likely to require some form of super
polishing.
Some improvement in both large and small scale correction will be needed if we are to
study cosmology using the areas of the sky close to stars. With current mirrors, the aureole
at about 4500 Å is fainter than the star by about 10 magnitudes per square arcsecond at a
radial distance of 3", and at 9" it is about 12.5 magnitudes fainter. The light at every part
of the aureole is likely to be reduced by the square of the wavelength. You can see,
therefore, that at least in the visual there will be a very delicate question whether stars
faint enough can be used for adaptive optics that the sky background light will
dominate at the outer part of the isoplanatic patch. The critical question is whether the
large ground telescope concept is boxed in between wavelengths sufficiently short that the
zodiacal light dominates airglow, and yet at the same time that the night sky brightness
dominates inside the isoplanatic patch.
potential for highest angular resolution, and the single aperture is, for currently imaginable
costs and associated aperture sizes, the leader in point source sensitivity.
The difference between a radio and an optical interferometer arises because the noise at
radio wavelengths is dominated by wave-noise, or if you will by the statistics of the bunching
of bosons, whereas in the O-IR region the detection events are far fewer and are
dominated by individual particle statistics. In consequence, amplifiers are able to be
relatively noise free at radio wavelengths and are very noisy at optical wavelengths, with
even a perfect amplifier producing one noise event per unit time per unit bandwidth. (I am
indebted to Dr. B. Burke for directing me to this conclusion.) Thus the optical
interferometer is forced to detect at the place where the light is brought together, while at
radio wavelengths it is possible to amplify a signal, split it up, and get independent
interference between beams. The problem shows up in the mapping mode, but need not
show up in the spectroscopic observation of a single point source once its position has been
determined.
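The radio/optical contrast can be checked with the Planck occupation number per mode; a short sketch, with the wavelengths and the 300 K temperature chosen by me for illustration:

```python
# Numerical check of the radio/optical contrast: the mean photon occupation
# number per mode, n = 1/(exp(h*nu/(k*T)) - 1), is >> 1 at radio wavelengths
# (wave noise dominates, so a quantum-limited amplifier adds comparatively
# little) and << 1 in the optical, where individual photon statistics rule.

from math import expm1

H = 6.626e-34   # Planck constant, J s
K = 1.381e-23   # Boltzmann constant, J/K
C = 3e8         # speed of light, m/s

def occupation(wavelength_m, temperature_k):
    """Mean thermal photon occupation number per mode."""
    nu = C / wavelength_m
    return 1.0 / expm1(H * nu / (K * temperature_k))

print(f"21 cm at 300 K:  n = {occupation(0.21, 300):.0f}")     # thousands
print(f"500 nm at 300 K: n = {occupation(500e-9, 300):.1e}")   # vanishingly small
```

The huge disparity in n is the quantitative content of Burke's point: the one-photon-per-mode amplifier noise floor is negligible when n is in the thousands and fatal when n is essentially zero.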
by variable delay lines (i.e. one sits on top of the bright fringe), and if perfect beamsplitters
are assumed the device will reject the incoherent light of the background and so allow the
same signal/noise ratio that would be obtained for the same area in the form of a single
dish. (This was pointed out to me by Dr. Angel.) Unfortunately, for this configuration the
light from many other areas of the sky is suppressed by interference, because the
interferometer impresses fringes onto the plane of the sky.
If however one instead attempts to map an area of sky, then one finds the light spread
angularly into a pattern where a substantial fraction of the energy appears in low-surface-
brightness side-lobes (see for example the patterns shown by Meinel, Meinel and Woolf
1983). There is then a loss of signal/noise ratio if there is a sky background that is
appreciable compared with the signal. Because of the background, the light in these low
level fringes can be detected with at most an extremely low signal/noise ratio, and in
consequence it hardly helps the total observation. The problem appears to be reduced
by (1) using fewer, larger interferometer elements and moving them around to synthesize
an aperture, with three elements being a probable optimum, or (2) having the
interferometer spacings redundant, and in the extreme variant of this, having all elements
co-linear. Either of these solutions places considerable emphasis on aperture rotation
synthesis.
This last seems to me to be worth far more consideration for large space and lunar
telescopes as well as for giant telescopes on the ground. It has the full point source
sensitivity of the aperture for spectroscopic observations, without any special efforts beyond
those normal for adaptive optics on the ground, or alignment and phasing in space.
Further, it is possible to map in various ways that allow the data to be analyzed either for
optimum point source sensitivity or for maximum angular resolution.
Conclusions
There are no very simple conclusions. The balance of opportunities and costs of doing
astronomy from the ground and from space is a most difficult topic, even without the
uncertainties that started this paper. I believe it is entirely appropriate to put substantial
effort into concept development, and particularly into cost reduction, for all schemes for
giant astronomical facilities. Every dollar spent in this phase can pay off a hundredfold in
our eventual decision. It is in this phase that the normal committee type of compromise
is least helpful. We need to discover the optimum, regardless of individual feelings.
Equally, I feel that we cannot afford to overbalance our efforts either towards large
telescope structures or towards interferometry. There will indeed be a time when we have
to set a priority, and all pull together for a single device, and maybe even for a single
platform: Earth, orbit, or Moon. That time is not yet here. And equally, we are all
responsible that EVERY concept is the very best one of its kind possible. We are here to
get the best for astronomy, not to push anyone's pet scheme.
References
Angel, J. R. P. and Woolf, N. J. 1986, Ap. J., 301, 478.
Angel, J. R. P., Woolf, N. J. and Epps, H. W. 1982, Proc. S.P.I.E., 332, 134.
King, I. 1971, Pub. Astr. Soc. Pacific, 83, 199.
Meinel, A., Meinel, M. and Woolf, N. 1983, Appl. Optics & Opt. Eng., 9, 150.
Woolf, N. J., Angel, J. R. P. and McCarthy, D. W. Jr. 1983, Proc. S.P.I.E., 444, 78.
Session 4
Considerations for a
Next Generation UV/Optical
Space Telescope
During the past 25 years, a remarkable scientific revolution has occurred in astrophysics as a result of convergence on two advancing fronts. First, instruments and telescopes have been developed to make sensitive measurements throughout the entire electromagnetic spectrum. Secondly, access to space has permitted observations above the obscuring and distorting "dirty window" of our atmosphere. Beginning around the middle of the next decade, a third major path, the availability of the permanently manned Space Station Freedom, will join with the earlier two capabilities, to not only continue this revolution, but to accelerate the quest for answers about the universe that have puzzled mankind for centuries. Beyond Earth orbit, NASA is actively studying the possibility of a return to the Moon, which would provide a valuable platform for astrophysics observations during the next century.

The promise offered by space observatories will be dramatically illustrated when four observatories, the Hubble Space Telescope (HST), Gamma Ray Observatory (GRO), Advanced X-Ray Astrophysics Facility (AXAF), and Space
Infrared Telescope Facility (SIRTF), which form NASA's "Great Observatories" program, become operational (Fig. 1). These observatories will form the core of NASA's Astrophysics Program through the end of this century and early into the next.

By the mid-to-late 1990's, S.S. Freedom, associated co-orbiting platforms, space transportation systems, and supporting facilities will be able to support the Great Observatories, offer expanded capabilities for other astrophysics missions, and be available to enhance, and in some cases make possible, the successors to the Great Observatories. With the utilization of S.S. Freedom for repair and servicing and as a platform for observing facilities, such as the Advanced Solar Observatory (ASO), and to provide a base for assembly of large facilities, S.S. Freedom will be an integral part of astrophysics research. In fact, some of the proposed large observatories of the 21st century may not be feasible without extensive assembly operations in orbit.

To maintain scientific continuity and momentum, to ensure that the Nation's leadership in ...

... have attracted the most attention for future UV/optical space telescopes. Current technology, as demonstrated by the Multimirror Telescope, indicates that it is possible to break up a normally circular telescope aperture and separate the parts in order to effectively increase the aperture diameter without requiring increased collecting areas. Since the cost of telescopes has been estimated to vary approximately as the cube of the diameter, this is a very important consideration. Similar ideas have long been used in radio astronomy with remarkable success. Approaches such as these are now being examined in greater detail for advanced UV/optical telescopes in space, as well as future ground-based telescopes such as the New Technology Telescope (NTT), in order to reduce the weight and size of the subapertures. This would then allow transportation and assembly in space to produce a complete telescope. As studies have shown, there are many approaches to increasing the aperture and thus angular resolution and sensitivity. Figure 2 shows several representative concepts.
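The cube-law cost argument for dividing up the aperture can be illustrated with a toy comparison; the segment sizes below are my own illustration, not configurations from the paper:

```python
# Illustration of the quoted cost ~ D**3 scaling: one filled aperture versus
# several smaller subapertures with the same total collecting area (the
# sizes here are illustrative, not from the paper).

def relative_cost(diameter):
    """Relative cost under the quoted cube-of-diameter scaling."""
    return diameter ** 3

one_16m = relative_cost(16)       # single 16-m monolith
four_8m = 4 * relative_cost(8)    # four 8-m mirrors: same collecting area
print(one_16m / four_8m)          # 2.0: the monolith costs twice as much
```

The gain grows with the number of segments, which is why the cube law makes aperture division so attractive despite the optical complications discussed next.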
In order to "thin" the aperture, several major considerations must be addressed to ensure that the optical performance remains acceptable. The modulation transfer function, as a measure of the optical performance, must not become zero anywhere except at the highest spatial frequency (corresponding to the diffraction limit of the circumscribed (full) aperture diameter). Otherwise, information at some spatial frequencies will be lost. The practical consequences of this are reflected in careful selection requirements for subaperture spacing and location. Experimental results, reported elsewhere, seem to indicate that advantageous placement of the subapertures can be accomplished.

Although optical performance considerations play a major role in the selection of a telescope concept, other important aspects also become major trade criteria in the design of advanced space telescopes. Some of these criteria are listed in Figure 3, where various telescope concepts are compared. It is obvious that the traditional contiguous-filled circular aperture concept has excellent optical performance, allows testing of the complete optical train on the ground (although necessarily under degraded conditions because of the g-loading), avoids the complexities of orbital assembly, and does not require rotation of the telescope to construct an image. Unfortunately, the extreme weight and volume associated with this approach could exceed the transportation-to-orbit capability of projected future launch vehicles, unless extreme attention is paid to superlight-weighting of the mirror and telescope structure and the spacecraft components. The latter, however, normally has a high price tag associated with it. For example, a 10-m ...

[Figure 3 headings (partially recovered): Advantages; Disadvantages; Advanced Technology Development; Auxiliary Systems.]
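The subaperture-placement requirement, that no spatial frequency below the cutoff go uncovered, can be illustrated with a toy one-dimensional nonredundant array; the positions used are a textbook example of mine, not a configuration from the paper:

```python
# Sketch of the spacing criterion: the optical transfer function is nonzero
# at a spatial frequency only if some pair of subapertures is separated by
# the corresponding baseline. The positions {0, 1, 4, 6} form a classic
# nonredundant 1-D array (an illustrative example, not from the paper).

from itertools import combinations

def covered_baselines(positions):
    """Set of pairwise separations realized by a 1-D subaperture layout."""
    return {abs(a - b) for a, b in combinations(positions, 2)}

pos = [0, 1, 4, 6]
print(sorted(covered_baselines(pos)))  # every integer baseline 1..6 present
```

Four subapertures here cover all six baselines without duplication, which is the same compactness property sought in two dimensions by the Golay-type arrays discussed later in the paper.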
[Figure 2 (partially recovered): representative aperture concepts, grouped into monolithic, segmented, and interferometer approaches: HST (2.4-m); Monolith (10-m); VLSI (8-m); COSMIC-1 (14 x 2-m); LDR (20-m); COSMIC-4 (28 x 2-m); TRIOS; SAMSI.]
[Chart (partially recovered): relative performance of HST, LDR (Ref), VLSI, COSMIC-4, COSMIC-1, and a 4-meter ground telescope (Ref), measured against a diffraction-limited full aperture at 6300 Å.]
Table I. Performance Comparison of UV/Optical Telescopes
and robotics, initial alignment, and long-term maintenance have only recently received attention. Early analysis shows that transportation to an orbital altitude of 300 nmi with subsequent assembly requires launch capabilities of the order of 50 tons for a thinned primary mirror-type telescope with a 10-fold increase in resolution over the HST. This capability may be available early in the 21st century. However, a high Earth orbit, especially a geosynchronous orbit (GEO), would be better for the observatory, since both gravity gradient and aerodynamic torques are minimized. For the large telescopes under consideration, these torques are the major environmental disturbances and are the primary drivers of momentum management and control system design in low Earth orbit.

Additional considerations for operating large observatories in high Earth orbit, or preferably GEO, are shown in Table II. Thus it becomes obvious that orbit-to-orbit transportation will be one of the major considerations in the development of future large observatories, affecting every aspect of the design. Since there are currently no indications in NASA's future mission models of an Orbital Transfer Vehicle (OTV) with the capability to transport 50 tons of payload to GEO, telescope designers must consider assembly of telescope modules at high-altitude orbit locations either through robotics, by astronauts, or both. Alternatively, telescopes must be designed which can be accommodated by projected launch vehicles, even at the expense of some scientific capabilities. To operate these observatories at more accessible orbits will entail the problems highlighted in Figure 6. It is clear that assembly by robotics and/or man in GEO without the presence of a large operational base, such as a temporarily or permanently manned GEO station, will further complicate the design of these telescopes.

In order to better understand the key technology issues which will have to be resolved before serious design approaches can be advanced, several different generic telescope system concepts are now being analyzed by NASA. As
[Table II (partially recovered): orbital requirements compared for LEO and higher orbits: dimensional stability; transportation to orbit; assembly; initial alignment and maintenance of alignment; orbit-to-orbit transfer; operations; maintenance and repair.]
pointed out by the Space Science Board's Task Group on Astronomy and Astrophysics in its study of major directions for space sciences during the 1995-2015 time period, astronomers will probably pursue two different approaches: (1) high resolution through interferometry, which means a large baseline between sensors, and (2) high "throughput" for imaging faint objects. This requires large 8- to 16-m diameter mirrors.

In concert with these recommendations, MSFC has conducted studies of two representative advanced telescope configurations to further assess a range of key technical problems inherent in both interferometric-type telescopes and large contiguous primary mirror telescopes.

The interferometric telescope concept selected for technology investigations is a type representative of the one-dimensional Coherent Optical System of Modular Imaging Collectors (COSMIC) that was previously studied. The particular arrangement selected is known as a "Golay 9" array. This concept is optimum from the standpoint of providing an autocorrelation function of maximum compactness. The two-dimensional array of nine afocal telescopes arranged in a nonredundant pattern is shown in Figure 7. This concept has the advantages of being relatively compact, not requiring rotation to form an image, and allowing later expansion of the baseline by the addition of more telescopes that would feed the same beam combiner. However, there
Figure 7. Two-Dimensional Coherent Array Telescope
are also disadvantages associated with these types of systems. They suffer from low UV throughput due to the eight or more reflections needed to bring the collected light to focus. In addition, the useful field of view (FOV) is very narrow, only a few tens of arcseconds (Fig. 8). This may not be too critical, however, since even a 10-microrad FOV will require a detector array with 1 million resolution elements (for 10-nanorad telescope resolution). Other design benefits must also be considered, such as construction and checkout of the individual telescopes on the ground and transportation to and assembly in orbit. Furthermore, considerably less cross-sectional area is exposed to the micrometeoroid and space debris flux compared to large, contiguous aperture systems. The probability of micrometeoroid impact with the mirror surfaces is therefore reduced, allowing smaller light shields made from rigid material (a micrometeoroid bumper). This enhances the ability to control straylight increases during the telescope lifetime.

In order to investigate the technology requirements for a partially filled primary mirror telescope, the two-dimensional configuration shown in Figure 9 was selected. In this concept, the primary mirror consists of 18 off-axis segments of a single parent primary mirror feeding light to a common secondary mirror and thence to a focal plane behind the primary in a Cassegrain or Ritchey-Chretien arrangement. The nonredundant, two-dimensional pattern provides the widest two-dimensional aperture separation that avoids zeros in the optical transfer function. The initial dilute aperture configuration requires some degree of image processing to obtain diffraction-limited imagery. This concept, an unfilled aperture with a common secondary mirror, has a large FOV (recognizing that detector arrays may not be available to fill the focal plane). It allows a later increase of the sensitivity by the addition of segments to the primary and has a minimum number of reflections, thus a high UV throughput. The major disadvantage of this approach is
that a very large outer structure is required to place the secondary mirror at the proper distance from the primary. The large dimensions pose problems, not only in control, but also in launch and assembly of the telescope modules. The system will only function as a complete telescope for the first time when the modules are assembled in space. Because of the large dimensions of the lightweight support structure, complete assembly and checkout on Earth is probably impossible or meaningless, at least from the standpoint of verifying optical performance.
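The detector-count estimate given earlier for the 10-microrad field is simple to verify:

```python
# Worked check of the detector estimate in the text: a 10-microradian field
# of view sampled at the 10-nanoradian telescope resolution needs
# (fov / res) elements per axis, squared for a two-dimensional array.

fov = 10e-6   # rad, field of view
res = 10e-9   # rad, telescope resolution

elements_per_axis = fov / res
total = elements_per_axis ** 2
print(round(elements_per_axis), round(total))  # 1000 per axis, 1,000,000 total
```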
[Figure 9 callouts (partially recovered): compression struts; tension wire assembly; mirror integrating structure; mirror segment with chordal rib shown in final position; radial arm; brace strut; spacecraft attitude stabilization module incl. OTV module; orbiter cargo bay diameter.]
[Figure 10 (partially recovered): technology readiness and criticality assessment. Criticality key: potential showstopper; functional alternatives; deferable enhancement. Technology areas include imaging optics; a non-jitter-inducing (nuclear?) power source; extensive "smart" fault diagnostics; a "Son-of-Igloo" onboard computer; on-board image (re)construction; an on-board/down-link handoff trade; observer/machine interaction; and vehicle systems.]
These technology assessments are summarized in Figure 10.

Although the successors to NASA's Great Observatories will probably not be operational until early in the 21st century, history has shown it takes at least 15 to 25 years from concept to flight of a large spaceborne astronomical telescope. Figure 11 indicates that the HST development cycle, spanning a period of 23 years, is not an anomaly. If one includes the precursor technology development projections for the AXAF, even at this early stage in the program, the forecast is a development time of about 25 years.

From the time observational astronomy began almost 400 years ago with the small telescope of Galileo, to the sophisticated instruments of today, this discipline is driven by technology. The technology available for the next generation of space telescopes cannot be predicted based solely on this discipline's demands. Rather, it depends to a large degree on the requirements of the other disciplines that use the capabilities of space. Therefore, it must be recognized that a technology development period of at least 5-10 years must be allocated in advance of any serious design efforts. Thus, it is appropriate and necessary that concept studies be conducted now to identify the key enabling technologies, not only for the eventual development of these advanced telescopes, but also for transportation and assembly requirements (Fig. 12). These requirements will have a major influence on telescope
[Figure 12 content: sensitivity; resolution; optical configuration; fill factor; field of view; MTF zeros; reflectivity/optical coatings; transportation/assembly; modularization; optical/mechanical stability; system weight; program cost.]
Figure 12. Generic Design and Technology Concerns for Advanced Telescopes
designs. As a matter of fact, the very configuration of the telescopes and aperture arrangement ...

[Sidebar (partially recovered), national space goals: expand human presence beyond the Earth into the solar system; strengthen aeronautics research and develop technology toward promoting U.S. leadership in civil and military aviation.]
[Figure 13 content (partially recovered): increases in resolution and sensitivity if cost effective; plan for evolutionary development of the lunar facilities.]
Figure 13. Next Century Astronomy Objectives

Consistent with the first and second goals, and now specifically directed by the President's "New Initiative for Space Exploration," is a return to the Moon (Fig. 13). This was, in fact, a ...
observations (Fig. 14). A stable and low gravity environment would provide a platform for observations, such as interferometric measurements, both at radio and visible wavelengths. For example, the pattern of the Very Large Array (VLA), used to make long baseline interferometry measurements in radio astronomy, could be followed to construct a "Lunar Optical VLA" (Fig. 15) as proposed by Professor Bernard F. Burke of the Massachusetts Institute of Technology. A set of 1-m class telescopes would be deployed along a "Y" shape, each arm 6 km long, with a maximum baseline of 10 km. An angular resolution of 10 micro-arcseconds at 5000 Å, a four order of magnitude improvement over HST, could be possible with this system. The placement of other large telescopes, such as a 16- to 20-m diameter UV/optical/IR system (Fig. 16), and other interferometric telescopes on the near or far side of the Moon offer interesting trade options, and certainly will affect the design of future space observatories, not only from the viewpoint of transportation requirements and assembly on the Moon by man and robots, but also from the aspects of modularity and commonality with orbiting space telescopes.

Exciting prospects await astrophysics research during the next 25 years. The Great Observatories, observations from S.S. Freedom, and specialized space missions will evolve to the development of space observatories with even greater capabilities. The demands for improved performance require aggressive long-term technology development. Moreover, these advanced observatories will no longer be guided by historical approaches to development but must seek innovative responses to many diversified requirements. An accepted balance must be developed between the science requirements and program cost.

The studies discussed in this paper indicate that the technology requirements associated with the transportation to orbit and the assembly of these telescopes in orbit are major driving forces in the selection of generic design concepts. Ultimately, optical advances which are now becoming available through advanced manufacturing must be matched by technology advances in orbital operations, system modularization, and assembly by man and machine.
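The Lunar Optical VLA resolution quoted above can be checked against the usual lambda-over-baseline rule of thumb; the ~10-km maximum baseline implied by the 6-km arms and the order-unity resolution convention are taken as assumptions here:

```python
# Check of the quoted angular resolution, taking lambda/B as the
# interferometer resolution measure (a common convention; the prefactor is
# of order unity and is an assumption of this sketch).

RAD_TO_ARCSEC = 206264.8

def resolution_arcsec(wavelength_m, baseline_m):
    """Angular resolution lambda/B, converted to arcseconds."""
    return (wavelength_m / baseline_m) * RAD_TO_ARCSEC

lova = resolution_arcsec(5000e-10, 10e3)        # Lunar Optical VLA, ~10-km baseline
hst = 1.22 * resolution_arcsec(5000e-10, 2.4)   # HST diffraction limit, 2.4-m mirror
print(f"LOVA: {lova * 1e6:.0f} micro-arcsec")   # ~10 micro-arcsec, as quoted
print(f"improvement over HST: {hst / lova:.0f}x")
```

The computed gain over HST comes out at several thousand, consistent with the roughly four-orders-of-magnitude figure claimed in the text.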
Marshall Space Flight Center, August 30, 1989
DISCUSSION
Bender : How bad is the debris problem at 1500 km altitude, where you can get a fully illuminated
sun synchronous orbit?
Nein: As shown in the graph below, the modelled surface-area flux climbs above 700 km; there is a
valley at 1300 km and a drop-off past 1500 km, but the second peak occurs at 1500 km itself. This
would not be a good location, from a debris point of view and also because of the radiation
environment.
[Graph: modelled debris flux versus altitude (km).]
Eigenhardt: Large impacts (greater than a few mm) are due to micro-meteoroids and will be a problem
in any orbit. 1500 km is a bad altitude.
Illingworth: What is the projected capability of OTV for weight/size from LEO to HEO?
Nein : The payload capability of an OTV can range from 13-30 klb depending on assumptions made
for launch vehicle and engine. The volume available for the payload is about 15 ft diameter by 20 ft
long.
Nein: Assembly time for the Galaxy 9 was estimated at about 800 EVA hours by MSFC. This is characteristic of all 10 m or larger observatories if restricted to launch by the shuttle (15' diameter payload bay). The assembly time via EVA is prohibitive and we conclude that for some time robots or automation are required. However, most 10 m size telescopes could be launched without on-orbit assembly by the Shuttle-C Block II (10 m × 30 m payload bay and 150 klb to LEO), or possibly the ALS.
IBIS: AN INTERFEROMETER-BASED IMAGING SYSTEM FOR DETECTING
EXTRASOLAR PLANETS WITH A NEXT GENERATION SPACE TELESCOPE
David J. Diner
Jet Propulsion Laboratory, California Institute of Technology
Mail stop 169-237
4800 Oak Grove Drive
Pasadena, CA 91109
(818)354-6319
1. INTRODUCTION
The direct detection of extrasolar planetary systems is a challenging observational objec-
tive. As shown in Fig. 1, the observing system must be able to detect faint planetary signals
against the background of diffracted and scattered starlight, zodiacal light, and in the IR, mirror
thermal radiation. As part of a JPL study, we concluded that the best long-term approach is a 10-
20 m filled-aperture telescope operating in the thermal IR (10-15 μm) (Diner et al., 1988a,b). At these wavelengths, the star/planet flux ratio is on the order of 10^6-10^7 (see Fig. 2). Our study supports the work of Angel et al. (1986), who proposed a cooled 16-m IR telescope and a special apodization mask to suppress the stellar light within a limited angular region around the star. Our scheme differs in that it is capable of stellar suppression over a much broader field-of-view, enabling more efficient planet searches. To do this, certain key optical signal-processing
components are needed, including a coronagraph to apodize the stellar diffraction pattern, an in-
frared interferometer to provide further starlight suppression, a complementary visible-wave-
length interferometer to sense figure errors in the telescope optics, and a deformable mirror to
adaptively compensate for these errors. Because of the central role of interferometry we have
designated this concept the Interferometer-Based Imaging System (IBIS). IBIS incorporates
techniques originally suggested by KenKnight (1977) for extrasolar planet detection at visible
wavelengths. The type of telescope discussed at this workshop is well suited to implementation
of the IBIS concept.
IBIS is designed to detect terrestrial as well as gas-giant type planets. The optimal spectral range for this purpose is 10-15 μm. Although the star/planet flux ratio improves at longer wavelengths (Fig. 2), for terrestrial-type planets the zodiacal light/planet ratio increases by more than an order of magnitude between 10 and 50 μm (Fig. 3). Furthermore, the telescope cooling requirements become considerably more difficult as the wavelength increases (Fig. 4). Finally, the aperture size needed to achieve sufficient angular resolution will exceed 20 m if wavelengths longer than 15 μm are considered.
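The preference for the thermal IR can be illustrated with a simple blackbody estimate. This sketch is ours, not the paper's: star and planet are treated as blackbodies at assumed temperatures, and reflected light is ignored.

```python
import math

def planck(lam_m: float, t_k: float) -> float:
    """Spectral radiance B_lambda(T) in W m^-3 sr^-1 (Planck's law)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / lam_m**5) / math.expm1(h * c / (lam_m * k * t_k))

def star_planet_ratio(lam_m, t_star, r_star, t_planet, r_planet):
    """Thermal-emission flux ratio for two blackbodies at the same distance."""
    return (planck(lam_m, t_star) * r_star**2) / (planck(lam_m, t_planet) * r_planet**2)

# Sun versus an Earth-like planet (255 K effective temperature) at 12 um
ratio = star_planet_ratio(12e-6, 5770.0, 6.957e8, 255.0, 6.371e6)
print(f"star/planet flux ratio at 12 um: {ratio:.2e}")
```

For these assumed values the ratio comes out at a few times 10^6, compared with the roughly 10^10 reflected-light contrast of a Sun/Earth pair in the visible.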
IBIS uses several means to suppress the light from the parent star. The Lyot coronagraph is an effective means of suppressing diffracted light, with little penalty in the planetary signal. At the wavelengths envisioned for IBIS, the occulting disk will mask only a few Airy nulls, making it necessary to supplement the coronagraph with an additional starlight suppression method. As described below, this can be achieved with a rotational shearing interferometer (RSI). We require the RSI to reject 99% of the incident starlight.
2.2 Optical Schematic
Figure 5 is a conceptual optical layout of the IBIS system. The primary and secondary image
the star and planetary system onto an opening in the center of the quaternary. A tertiary mirror
accepts the light diverging from the first image and forms an image of the primary on the active
quaternary. The stellar image is then relayed to a point behind the primary. This optical layout is
3. TECHNICAL BACKGROUND
3.1 Interferometer Modelling
A rotational shearing interferometer produces superimposed images, one rotated by an angle 2θ about the center of the optical axis relative to the other. The quantity 2θ is the shearing angle, and results from a difference of θ in the angular orientations of the two fold mirrors in the interferometer arms. The general RSI concept is diagrammed schematically in Fig. 6a. Figure 6b shows the specific case discussed in this paper, in which the two roof mirrors are oriented 90° with respect to each other, providing 180° of shear.
For a point source illuminating the RSI, the pattern observed in the final focal plane is an im-
age of the source and a ghost image at a position diametrically opposed to the source. With
equal path lengths in each arm and a near-perfect conductor coating the roof mirrors, the interfer-
ometer will provide destructive interference at the center of the field. Thus, for a point source on
the optical axis (the central star), the two images of the source null one another. On the other
hand, for an off-axis source (the planet), the image and ghost image do not overlap and there is no interference in the focal plane.
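The nulling behavior of the 180° shear can be sketched numerically. In this toy model of ours (not from the paper), one output port of the RSI is the focal-plane field minus its copy rotated 180° about the optical axis; an on-axis point source cancels exactly, while an off-axis source yields an image and a diametrically opposed ghost.

```python
import numpy as np

N = 65  # odd grid size, so the central pixel is the rotation axis

def point_source_field(ix: int, iy: int) -> np.ndarray:
    """Focal-plane amplitude of an unresolved source offset (ix, iy) pixels
    from the optical axis."""
    e = np.zeros((N, N), dtype=complex)
    e[N // 2 + iy, N // 2 + ix] = 1.0
    return e

def rsi_output(e: np.ndarray) -> np.ndarray:
    """One output port of an ideal RSI: the field minus its 180-degree-rotated
    copy (equal arms, pi phase difference); the rest exits the other port."""
    return 0.5 * (e - np.rot90(e, 2))

star = rsi_output(point_source_field(0, 0))    # on-axis: nulled
planet = rsi_output(point_source_field(8, 0))  # off-axis: image + ghost

print(np.abs(star).max())        # 0.0: a perfect null for the on-axis star
print(np.count_nonzero(planet))  # 2: the planet image and its ghost survive
```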
A Fourier optics modelling program has been developed to simulate the effects of diffraction
suppression techniques and wavefront errors on the feasibility of observing extrasolar planets.
The stellar point-spread function for a 16-m aperture operated at 12 μm with a 25% bandwidth is shown in Fig. 7. Application of an occulting disk of radius 0.3 arcsec in the first focal plane, a pupil-plane Lyot stop that reduces the effective aperture diameter to 85% of the full diameter, and an RSI with a fringe visibility of 0.99 results in the point-spread function for an on-axis star shown
in Fig. 8. The fraction of photons per pixel assumes that the detectors are sized so that 4 pixels
cover the central Airy disk. Note the significant reduction in energy in the sidelobes of the stellar
point-spread function over a several-arcsecond field-of-view. The peaks in sidelobe energy at
0.3 arcsec are due to diffraction at the edges of the occulting disk. This effect can be mitigated by
using a partially transmitting, tapered occulting disk (Ftaclas et al., 1988).
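A minimal version of such a Fourier-optics model can be written in a few lines. This is an illustrative sketch with assumed grid parameters (and a perfect wavefront), not the program used in the paper.

```python
import numpy as np

N, D = 512, 64  # grid size and pupil diameter (pixels); lambda/D = N/D = 8 px
y, x = np.indices((N, N)) - N // 2
r = np.hypot(x, y)

pupil = (r <= D / 2).astype(float)  # unobstructed circular aperture

def to_focal(pupil_field):
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil_field)))

def to_pupil(focal_field):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(focal_field)))

# 1. Direct image of an on-axis star
focal1 = to_focal(pupil)
direct_energy = np.sum(np.abs(focal1) ** 2)

# 2. Hard-edged occulting disk of radius 3 lambda/D in the first focal plane
occulter = (r > 3 * (N / D)).astype(float)
focal2 = focal1 * occulter

# 3. Lyot stop at 85% of the pupil diameter in the reimaged pupil plane
lyot = (r <= 0.85 * D / 2).astype(float)
pupil2 = to_pupil(focal2) * lyot

# 4. Final image: most of the on-axis starlight has been removed
final_energy = np.sum(np.abs(to_focal(pupil2)) ** 2)
print(f"residual stellar energy fraction: {final_energy / direct_energy:.3f}")
```

The RSI and wavefront-error terms of the paper's model are omitted here; even so, the occulter plus Lyot stop alone removes the large majority of the stellar energy.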
The accuracy on the primary mirror required to achieve the performance shown in Fig. 8 is about λ/800 (λ_v/40) if the deformable mirror is not used to make the wavefront errors diametrically symmetric. When the wavefront errors are made diametrically symmetric, the simultaneous use of the deformable mirror and RSI enables relaxation of the figure errors to about λ/80.
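As a quick consistency check (assuming a 0.6 μm visible reference wavelength, our choice), λ/800 at the 12 μm operating wavelength and λ/40 in the visible describe the same physical tolerance, as do λ/80 and λ/4:

```python
LAM_IR = 12_000.0  # nm, operating wavelength
LAM_VIS = 600.0    # nm, assumed visible reference wavelength

tight = LAM_IR / 800    # tolerance without the deformable mirror
relaxed = LAM_IR / 80   # with deformable mirror + RSI

print(tight, LAM_VIS / 40)   # 15.0 nm in both conventions
print(relaxed, LAM_VIS / 4)  # 150.0 nm in both conventions
```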
4. PERFORMANCE CALCULATIONS
Performance predictions are shown in Table 1. We compare the performance of a 10-m aperture operating at 10 μm (mirror temperature = 100 K) with a 16-m system operating at 12 μm (mirror temperature = 70 K). In both cases, the mirror emissivity is 0.02, the spectral bandwidth is 25%, and the integration time is 40 hours. Stars are taken from the Woolley Catalog and the search distance is 25 pc. The zodiacal background used in the signal-to-noise calculations is an average of ecliptic equator and pole background fluxes. A planet is considered detectable if the planet's angular distance from the star exceeds 2 Airy minima. Planet distances from their parent
stars are determined by their equilibrium temperature and the stellar spectral class. Note the
substantial improvement in performance resulting from increasing the aperture size, increasing
the wavelength, and cooling the optics to a lower temperature.
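The "2 Airy minima" criterion can be turned into numbers; here we assume it means twice the first-null radius 1.22λ/D (our reading of the criterion):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600

def min_separation_arcsec(lam_m: float, d_m: float) -> float:
    """Smallest detectable star-planet separation: two Airy first-null radii."""
    return 2 * 1.22 * lam_m / d_m * ARCSEC_PER_RAD

for lam, d in [(10e-6, 10.0), (12e-6, 16.0)]:
    sep = min_separation_arcsec(lam, d)
    # distance (pc) out to which a planet 1 AU from its star clears the limit,
    # using sep(arcsec) = a(AU) / d(pc)
    print(f"D = {d:4.1f} m, lam = {lam*1e6:.0f} um: "
          f"sep = {sep:.2f} arcsec, 1-AU orbit resolvable to {1/sep:.1f} pc")
```

This shows why the larger, colder 16-m system fares better: its inner working angle is smaller, and planets on wider, cooler orbits remain detectable to larger distances.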
REFERENCES

Diner, D.J., J. van Zyl, D.L. Jones, E. Tubbs, V. Wright, J.F. Appleby, and E. Ribak (1988a). "Direct and interferometric imaging approaches for detecting Earth-like extrasolar planets." Paper AIAA-88-0553, 26th Aerospace Sciences Meeting, Reno, NV.

Diner, D.J., E.F. Tubbs, J.F. Appleby, V.G. Duval, S.L. Gaiser, D.L. Jones, R.P. Korechoff, E. Ribak, and J. van Zyl (1988b). "Comparison of imaging approaches for extrasolar planet detection." Optical Society of America Technical Digest Series 10, 54.

Ftaclas, C., Siebert, E.T., and Terrile, R.J. (1988). "A high efficiency coronagraph for astronomical applications." Optical Society of America Technical Digest Series 10, 62.

KenKnight, C.E. (1977). "Methods of detecting extrasolar planets. I. Imaging." Icarus 30.

Roddier, F., C. Roddier, and J. DeMarcq (1978). "A rotation shearing interferometer with phase-compensated roof prisms." J. Optics (Paris) 9, 145.
Figure 1. [Plot: background and planet fluxes vs. wavelength, 0.1-100 μm.]

Figure 2. Solar-type star/planet flux ratios in the infrared. [Plot vs. wavelength, 10-45 μm.]
Figure 3. Zodiacal light/planet flux ratios in the infrared. [Plot vs. wavelength, 10-50 μm; curves for Jupiter (20 pc) and Saturn (5 pc). Assumptions: mirror diameter = 16 m, 4 pixels/Airy disk, T = 244 K, 25% of planet flux in pixel.]
Figure 4. [Plot: mirror temperature requirement vs. wavelength.]

Figure 5. Conceptual optical layout of the Interferometer-Based Imaging System (IBIS): primary, secondary, deformable mirror, rotational shearing interferometer, and focusing lens.

Figure 6. [Schematic of the rotational shearing interferometer, showing the roof mirrors and beamsplitter.]
Figure 7. Stellar point-spread function, no suppression (16-m diameter, λ = 12 μm, Δλ/λ = 0.25). [Plot: fraction of photons per pixel vs. arcsec.]

Figure 8. Stellar point-spread function with occulting disk, Lyot stop, and RSI suppression. [Plot: fraction of photons per pixel vs. arcsec.]
DISCUSSION
Layman: Given a bulk mirror temperature of 100 K, what detector temperature is required?
R. Green : What are the requirements on the mirror scattering profile at the angular scales of interest
here?
Diner: The figure requirement of λ/40 at visible wavelengths is based on calculations of scattering due to mirror figure ripples on scales of ~20-100 cm. Using the rotational shearing interferometer and deformable mirror relaxes the requirements on the primary to about λ/4 in the visible.
Layman: Do micrometeoroid-induced surface pits (1 mm to 100 mm pit size) give serious stray light problems?
Diner : These small pits scatter light over a broad field of view. The most detrimental spatial scales
for scattering are tens of cm to a meter, so the small pits, at least qualitatively, are not expected to
degrade performance.
Woolf: The benefit of the large filled aperture is that by minor modifications in the pupil plane you
can configure an apodized aperture, an interferometer, or even something you have not yet dreamed
up. The really worrisome problem is whether we will have enough control of phase and amplitude
irregularities to benefit from these schemes.
Diner : Right.
Bely: How do you plan to guide the instrument? Using the parent star?
Diner: The one advantage of having the bright central star is that it provides a guiding and phase reference. There are a number of places a portion of the stellar signal could be picked off, such as the
The Large Deployable Reflector
Paul N. Swanson
Jet Propulsion Laboratory
GALACTIC EVOLUTION
CONFUSION LIMITED FOR Z>1
C II 157 MICRONS IS GOOD TRACER
INTERSTELLAR MEDIUM
FUNDAMENTAL CHEMICAL PROCESSES STUDIED BY SPECTROSCOPY
MOLECULAR. IONIC AND ATOMIC SPECTRAL LINES
LDR HISTORY
• 1978-1979 INITIAL STUDIES BY JPL AND AMES
[Figure: LDR two-stage configuration: hole to sky, secondary, thermal shield, 10-meter sunshield, second-stage focus.]
SYSTEM REQUIREMENTS
* Reproduced from SPIE paper delivered in Orlando, Florida on April 2, 1989.
ABSTRACT
1.0 INTRODUCTION
The search for new, less expensive, and better ways to build and
deploy large optical telescopes both on the ground and in space has
received much attention over the past decade. Although numerous
innovations have been attempted, optical systems built by existing
methods are still very expensive, fragile, and demanding of time and
talent. Breakthroughs in optical design, fabrication, and control are
essential if a wider range of applications are to be addressed by
large, lower-cost optical systems. Here we report such a breakthrough
that should render large scale adaptive optics applications feasible.
specific mass (kg/m²) of a PAMELA™ mirror will be at least four times
lower than that of any previous mirror technology; and, more
importantly, this specific mass will be preserved as the mirror
aperture is scaled upward to 2 meters or more.
2.0 DISCUSSION
The idea of making large telescope primary mirrors from many
small mirrors can be traced back well into the 19th century. Indeed
numerous attempts have been made to build such telescopes; but, with
only a few exceptions, they have functioned only as "light buckets"
rather than true imaging devices because the individual small mirrors
could not be kept phased to the requisite precision. If the
irregularities in the optical surface exceed a few percent of the
wavelength of light, diffraction limited performance cannot be
achieved.
imperfections, the agile PAMELA™ segmented surface functions as a
massive, parallel-processing automaton to generate the conjugate
surface which cancels all of the errors, thus restoring nearly
diffraction-limited performance.
149
Figure 2-1 illustrates the PAMELA™ mirror concept, showing the individual segments grouped into clusters which in turn make up the mirror surface. Each of the individual segments has three electromagnetic piston actuators and three active inductive edge sensors (the other three alternate edges have passive elements for edge sensors associated with adjacent active segment edges). The piston actuators move in response to control signals from both a global wavefront error sensing device and the local edge sensors. The end result of precisely controlled segment motion is an optical surface that behaves like a phase-conjugating continuous facesheet mirror, but with much larger dynamic range and much simpler and faster control algorithms and technology.
[Figure 2-1: individual segments grouped into a cluster of segments; clusters make up the full aperture.]
This leads to large potential savings in weight and cost.
Moreover, since the primary segments are demagnified back through the
optical train, they can correct for small scale, large amplitude
disturbances occurring anywhere in the telescope system, such as
thermal distortions on secondary mirrors and vibration induced
disturbances in the ray path.
The optical quality of images provided by highly segmented
mirrors has been carefully compared to the quality of images from
filled apertures. The amount of energy in the central lobe of the
diffraction pattern is comparable to that for a filled aperture, and
the image quality will be comparable. Six-sided diffraction spikes
will show only for very bright point sources, just as four sided
spikes occur in conventional systems from the secondary mirror
supports. Scattering of light from segment edges will lower image
contrast very slightly, as predicted by the Ruze theory familiar to
radio astronomers. As long as phase continuity is maintained, the
images from a PAMELA™ array will be excellent. Similarly, in the case
of laser beam expanders, the Strehl ratio will approach 87% at 0.5 μm
wavelength for the expected surface roughness of 0.025 μm, and the
amount of power diffracted or lost between the inter-segment gaps will
be less than 0.6 percent.
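The quoted Strehl ratio can be compared against the Maréchal approximation, S ≈ exp[-(2πσ/λ)²]. Treating the quoted 0.025 μm roughness as an RMS wavefront error (our assumption) gives a value in the same range; the 87% figure presumably also folds in other loss terms such as the segment gaps.

```python
import math

def marechal_strehl(sigma_um: float, lam_um: float) -> float:
    """Marechal approximation for the Strehl ratio, given an RMS wavefront
    error sigma and wavelength lambda (same units)."""
    return math.exp(-(2 * math.pi * sigma_um / lam_um) ** 2)

print(f"{marechal_strehl(0.025, 0.5):.2f}")  # ~0.91 for 0.025 um at 0.5 um
```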
Some of the key findings that evolved out of the IR&D effort at
Kaman are summarized below:
overall system weight, and cost.
extremely large matrix, on the order of the square of the number of
segments, and requires a different solution for each permutation of
missing or non-functioning tilt sensors. While this is feasible for
small adaptive optics systems, it is very difficult for a system with
potentially several tens of thousands of elements. However there
exist a number of iterative methods which use tilt information
directly to set the surface, and which appear practical for systems
the size of PAMELA. These methods require a computation rate
substantially higher than the phase disturbance bandwidth, but recent
advances in microprocessor electronics make such computations not only
practical but cost-effective. Thus, a systems analysis approach
indicates Kaman's new measurement and control concept to be the
preferred one. Its complexity lies not in the wavefront sensor, a
difficult area, but in the computation algorithm, which is now well
understood.
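The iterative methods described above can be sketched with a deliberately simplified model of ours: a one-dimensional chain of segment pistons, each relaxed toward the mean of its neighbors, drives the sensed edge mismatches to zero without inverting a global matrix. Real PAMELA segments are hexagonal, with six edges and tilt as well as piston; this is only an illustration of the iterative idea.

```python
import numpy as np

rng = np.random.default_rng(0)
pistons = rng.normal(scale=1.0, size=20)  # initial piston errors (waves)

def edge_mismatches(p: np.ndarray) -> np.ndarray:
    """Piston steps sensed at the edges between adjacent segments."""
    return np.diff(p)

initial_rms = np.sqrt(np.mean(edge_mismatches(pistons) ** 2))

# Jacobi-style relaxation: each segment steps halfway toward the mean of
# its neighbors (end segments treat the missing neighbor as themselves)
for _ in range(2000):
    left = np.roll(pistons, 1); left[0] = pistons[0]
    right = np.roll(pistons, -1); right[-1] = pistons[-1]
    pistons += 0.5 * ((left + right) / 2 - pistons)

final_rms = np.sqrt(np.mean(edge_mismatches(pistons) ** 2))
print(initial_rms, final_rms)  # edge mismatch drops by orders of magnitude
```

Each update uses only locally sensed information, which is why the computation parallelizes across segments and scales to tens of thousands of elements.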
3.0 VALIDATION
The results of Kaman's studies provide the basis for our
confidence that an edge-matched large array is feasible. Some
interesting experiments can be reported.
techniques have been explored. The most promising thus far have utilized crystalline silicon and silicon carbide for the substrate. Two types of silicon carbide fabrication methods are currently being evaluated under the ongoing IR&D effort. Figure 3-1 shows one example of a test segment which is dynamically flat to λ/20.
6) Validator. A test device has been built which has successfully
demonstrated edge match of a fully mobile center segment
surrounded by displaceable segments. The edge sensors perform as
expected, giving repeatable white-light fringes on a Michelson
interferometer. Figures 3-2 and 3-3 show the validator device
and a typical fringe pattern.
Figure 3-2. Dynamic segment validator.
DISCUSSION
Rather : There are various ways of preparing the surface. The ones available now are a bonded silicon
surface on the silicon carbide. There are a few manufacturers. One of the best is made by United
Technologies Optical Systems in West Palm Beach.
Burbidge: You said that the cost is down by a factor 5-10. How do you demonstrate that?
Rather: It depends on the economies of mass-production. It is still uncertain, but it comes from programs for evaluating the cost of mass-production.
Nelson : For a 10-16 meter telescope, how small do the segments have to be to approximate the surface
with spherical segments?
Rather : There are many tradeoffs, but I would say no bigger than 15 cm, and probably 7 cm, but
Korsch can answer that since he has been looking at this question.
Korsch: If the segments are sufficiently small, the shapes of Cassegrain-type and R/C-type primaries may be approximated by simpler segment shapes. The segment sizes allowing a spherical approximation will probably be too small (less than 1 cm), but relatively simple toroidal shapes will most likely be adequate.
Optical Interferometry from Space
Robert V. Stachnik
NASA Headquarters
- RECENT HISTORY -
• BAHCALL PANEL
157
ADDITIONAL SPACE INTERFEROMETERS
• OASIS
RELATIVELY "FLOPPY" MONOLITHIC "Y" STRUCTURE HAVING
UP TO 27 1-METER APERTURES
• RING TELESCOPE
ANNULAR APERTURE AND LONG FOCAL LENGTH
• VISTA
INDEPENDENT SPACECRAFT MICHELSON DESIGN WITH BEAMS
SENT TO A CENTRAL COMBINING STATION
• TRIO
SCIENTIFIC CREDIBILITY
INTERACTION: MONOLITHIC AND
SYNTHESIZED APERTURES
• A CONTINUUM !
Michael Shao
Jet Propulsion Laboratory
Abstract
I Introduction
Several techniques have been proposed that in theory would be capable of detecting Earth-like planets around nearby stars. The IR long-baseline interferometer technique described here has a significant cost advantage of a factor of 10 to 30: $500M vs $5-15B, or $1B vs $10-30B. The technique is based on using a long-baseline interferometer with an achromatic nulling beam combiner to partially cancel the light from the star so that the IR light from the planet can be detected.
Bracewell, in the mid-70's, first proposed the idea of using a space-based IR interferometer to detect extrasolar planets. A study of the concept by Lockheed for NASA Ames was conducted with the detection of a Jupiter-sized planet at 10 parsec as the goal of the "baseline" instrument. The detection of smaller but much warmer planets such as Earth or Venus was not considered. It turns out that in many ways the detection of these warmer planets with IR interferometry is not significantly more difficult.
Figure 1. [Plot: star/planet flux ratio vs. wavelength, 0.2-50 μm.]
In the IR two advantages are gained: first, the contrast ratio between the star and planet improves by about 3 orders of magnitude; second, the number of photons per second from the planet is actually larger in the IR. For detection of Earth-like planets, detection at 10 μm is optimum.
While the detection of an Earth-Sun system at 10, 2.5 or 4 parsec is a useful objective, it is also important to determine the number of candidate stars around which a planet could be detected, if a planet were there. Here the advantage of a long-baseline interferometer becomes apparent. An Earth-Sun system at 10 pc would have an angular separation of 0.1 arcsec, requiring a baseline of 25 meters to resolve (or a 25-meter filled-aperture telescope). However, the vast majority of stars in the solar neighborhood are much less luminous, and the separation of an Earth-like planet from the star would be much smaller, requiring correspondingly longer baselines. While an interferometer with two 3-meter apertures (15 sq meters) would be 10-30 times less expensive than a 16-m filled-aperture telescope, it would be an additional order of magnitude less expensive than the 50-100 meter telescope that would be needed to detect the hundreds to a thousand planets that the 3-m interferometer could.
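The baseline arithmetic above can be checked directly (an illustrative sketch; the 25 m quoted in the text evidently includes some margin over the bare one-fringe criterion used here):

```python
import math

RAD_PER_ARCSEC = math.pi / (180 * 3600)

def separation_arcsec(a_au: float, d_pc: float) -> float:
    """Angular star-planet separation: a (AU) / d (pc), in arcseconds."""
    return a_au / d_pc

def baseline_m(lam_m: float, sep_arcsec: float) -> float:
    """Baseline that places one fringe spacing across the separation."""
    return lam_m / (sep_arcsec * RAD_PER_ARCSEC)

sep = separation_arcsec(1.0, 10.0)  # Earth-Sun at 10 pc: 0.1 arcsec
print(f"baseline at 10 um: {baseline_m(10e-6, sep):.0f} m")
```

Halving the star distance halves the required baseline, but a planet around a low-luminosity star sits much closer in, which is what pushes the baseline to hundreds of meters for a realistic target sample.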
[Figure: schematic of the nulling beam combiner: mirror, corner cube, beamsplitter; each arm sees one transmission (T) and one reflection (R) off the beamsplitter.]
As the u-v plane is sampled by the interferometer, either by changing the orientation or length of the baseline, the signal would be the modulation of the planet signal. Since on the average the planet would be neither on the bright nor the dark fringe, but halfway in between, the total integration time needed would be a factor of four larger than the above 50 minutes because of the need to demodulate the signal.
A 200-m baseline is needed to detect Earth-like planets around 50% of the nearest 100 stars (within 6 parsec). For stars further away than
VI SNR summary
would be less than 10,000 in the first example with 3-m apertures. With 3-m apertures, the interferometric cancellation needed is the flux ratio (star/planet) divided by 2500, to bring the starlight down to the zodiacal light level.
Figure 4. [Plot: y-axis 1.0E+05 to 5.0E+07; x-axis 5-20.]

Figure 5. [Plot vs. aperture diameter, 0.1-2 meters.]
Conclusion
DISCUSSION
Angel: You have considered using a two-element interferometer with two 3-meter apertures. With
a filled aperture and the apodizing system discussed yesterday, in an exposure of two minutes, an
earth-like planet would be seen in a 0.25" ring around a star at 4 parsecs. One would explore a region
with 6 resolution elements in the image space unambiguously in 2 minutes at the 5 sigma level. For
comparison, for the two 3-meter telescope interferometer, putting in the same conditions and the same zodiacal light "background", one would get a single measurement, not an image, at one fringe spacing at a single orientation, and that would take several hours. At the end we would know only that the fringe contrast would be slightly reduced by some object in the field. To make a map with any semblance of
the detail from the filled aperture image, many orientations would have to be done. This would result
in days of integration.
Shao: Let me respond to that. The standard algorithm is that the dynamic range of a map is the S/N per u-v point multiplied by the square root of the number of u-v points. So if there are 25 u-v points each with a S/N of 1, that produces a map with a S/N of 5. That is the case for a large number of pixels. The radio maps
are done that way. The S/N per individual baseline can be quite low if you have a large number. The
number of u-v points needed are the number of pixels in the map that one is trying to reconstruct. In
the case of the planet with the star nulled out there is only one pixel.
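Shao's square-root scaling can be checked with a quick Monte Carlo of ours: 25 independent u-v samples of the same (nulled-star) planet signal, each with a per-point S/N of 1, combine to a S/N near sqrt(25) = 5.

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_trials = 25, 20_000
signal = 1.0       # per-u-v-point planet signal
noise_sigma = 1.0  # per-point noise, so the per-point S/N is 1

# Each trial averages the same signal over 25 independent u-v samples
samples = signal + noise_sigma * rng.standard_normal((n_trials, n_points))
estimates = samples.mean(axis=1)

combined_snr = estimates.mean() / estimates.std()
print(f"combined S/N: {combined_snr:.1f}")  # close to sqrt(25) = 5
```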
Angel: You have looked at using very large baselines with the interferometer. For the 16 meter
telescope one can get to 0.1" by masking out apertures, and so one can get out to 10 pc. Within 10 pc
there are enough stars that one could spend years with an interferometer, given the time that it takes
to do a stellar system, and so it is not clear that there is an advantage over a filled aperture to having
an interferometer with bigger baselines that goes below 0.1".
Shao : Well, it really depends on the cost. Instead of two 3-meter telescopes one should really
compare the cost of two 12-meter telescopes in an interferometer with 50 to 100 meter baselines to a
filled aperture. You could look much, much further. You could go to 50 pc. The number then goes to
thousands of objects.
Illingworth: Planet detection is not the only science that will be done with either interferometers or
filled apertures.
Shao : I agree that both interferometers and filled apertures have many science goals other than planet
detection. In prior studies of interferometers for the direct detection of planets, a number of problems
were identified that, upon more careful study, have straightforward solutions, compared to the many
problems of putting a large precise instrument in space.
Illingworth: David Diner did a study comparing interferometers and filled apertures for planet
detection. He decided that a 10-20 m filled aperture was best. Why did Diner choose the filled
aperture rather than the interferometer that you are proposing for planet detection?
Diner: I have several concerns with the interferometer. First, measurements must be made at many
baselines and orientations and this multiplies the integration time. Second, the interferometer must be
stable between collection of points in the u-v plane, which puts very tight tolerances on the pointing
of the telescopes and on thermal control of the optics. At the cold temperatures needed to detect
planets, if the temperature of the mirror changes by a very tiny fraction of a degree that gives a change
in the signal that might be interpreted as a planet. There were thus a number of problems with an
interferometer that led us to choose a filled aperture.
Shao : I responded to the first of your concerns in my previous answer to Roger Angel. There
are a number of technical problems such as the path-length problem that we now have solutions to.
Controlling to 5 nanometers we see as doable. For the thermal control problem we see using an array
of detectors instead of a single detector as the means to solve this problem. In addition to the detector
looking at the planet, we would look at the pixels 4-5 arcsecs away through the same optical system.
Kahan: A brief comment: your last viewgraph would suggest cost grows > D². In fact, in a production mode, and ignoring for the moment the issue of number of different segment shapes and the secondary mirror support dynamics, a number of studies suggest the cost of a fully populated segmented primary goes as < D². Considering the cost of the partially filled aperture support, it
seems that the cost with a fully populated mirror is still very open.
Shao : Yes. For a spaceborne system that's likely to be the case at the moment.
Session 5
DESIGN CONCEPTS FOR VERY LARGE APERTURE, HIGH-PERFORMANCE TELESCOPES
SUMMARY
The influence of the primary mirror on the performance and on the feasibility of a very large telescope with extreme performance requirements is considered. A detailed study comparing aspheric- and spherical-primary systems, before selecting the mirror configuration, is recommended. Preliminary results are presented.
INTRODUCTION
When thinking about building a next generation space observatory which will
undoubtedly take many years to complete and will, hopefully, operate for
many more years, we must also think of the next generation of astronomers,
who will eventually be the principal beneficiaries of such a project. This
means that we can not be satisfied with developing a system that only accom-
modates today's list of science objectives, as even those may change and
more ambitious ones will most certainly be added in the years to come. It
must, therefore, be our goal to develop a facility that serves not only as
many science interests as possible, but that will also be able to grow to
meet the challenges of the future.
Such universality and flexibility can be achieved by applying the con-
cept of modularization wherever possible. This not only facilitates ser-
vicing and repairing subsystems as it becomes necessary, but it also pro-
vides the opportunity to update and improve the overall system as the need
arises and the state of the art progresses.
DESIGN CONSIDERATIONS
Based on today's state of the art the feasibility of a ten to sixteen meter
telescope depends almost entirely on the feasibility of the primary mirror
which is not only unprecedented in size, but also demands an unprecedented
degree of precision.
The primary mirror, because of its size and because it is so tightly inte-
grated into the overall system structure, cannot be easily replaced like
many other subsystems of modular configuration. In addition to being the
most crucial part of the telescope it will also be a permanent component and
therefore requires extraordinary attention. A thorough study of possible mir-
ror configurations, taking into account system performance, manufacturability
and cost, should become a high-priority early effort.
Regarding the primary mirror configuration there are basically two options to
be considered.
aperture diameter 10 m
system focal length 220 m
maximum mirror separation 15 m
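These parameters imply an f/22 system. A short calculation (ours, for illustration) gives the focal-plane plate scale and the angular size of a 10 μm pixel, consistent with residual aberrations of about one pixel over an arcminute-class field:

```python
import math

RAD_PER_ARCSEC = math.pi / (180 * 3600)

D = 10.0   # m, aperture diameter
F = 220.0  # m, system focal length -> f/22

plate_scale_m = F * RAD_PER_ARCSEC    # meters per arcsec at the focal plane
pixel_arcsec = 10e-6 / plate_scale_m  # angular size of a 10 um pixel

print(f"f/{F/D:.0f}, plate scale {plate_scale_m*1e3:.2f} mm/arcsec, "
      f"10-um pixel = {pixel_arcsec*1000:.1f} mas")
```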
The telescope configurations used for the comparison are diagrammatically shown in fig. 1. A graph summarizing the performance analyses is shown in fig. 2. The residual aberrations are defined by the smallest circle surrounding the complete spot diagram. It was found that a spherical-primary telescope with a three-mirror corrector can be optimized to a higher degree than the aplanatic Cassegrain (Ritchey-Chrétien). The arrangement of the three corrector mirrors as shown is neither the only possible one, nor necessarily the preferred one.
CONCLUSIONS
TELESCOPE CONCEPTS
DISCUSSION
Nelson : For a Ritchey Chretien system, how large a field of view can be approximated by a plane?
The issue here is how large a single detector (in a mosaic) can be and still be planar. The individual
detectors can be laid out to approximate the curved focal plane.
Korsch: It depends on the actual design parameters. For our example, the residual aberrations reach about pixel size (10 μm) at the edges of a 1 arcmin diameter field.
Bely : Your proposed design includes 4 reflecting surfaces. Could you comment on the corresponding
loss of throughput?
Korsch : This question is difficult to answer, because it depends on too many factors, such as coating
materials, coating process, aging, etc. As an example, according to Acton Research data, the efficiency, using a UV coating optimized for 250-360 nm (Ag and MgF2), drops to 64% after two reflections at 150 nm.
Korsch : A three-mirror telescope with an aspheric primary could meet the performance requirements.
However, it seems that a spherical primary with a two mirror corrector cannot be corrected well enough.
Dlingworth: You have concentrated on a flat focal plane and noted that the field is quite limited
with a curved focal plane. However, I do not feel that a curved focal plane is necessarily a major
disadvantage. The instruments could accommodate such a curved focal plane and will generally have
internal relay optics. I doubt also that we would try to have such a large magnification in one element.
The structural requirements appear to be too large.
Korsch : There are certainly more important considerations than a flat focal plane as I have tried to
point out in the paper. However, flatness of the field, at least over the area of an individual detector
array (1/2 to 1 arcminute), may be required. The concern about a large secondary magnification is as
familiar as it is unjustified. Interestingly enough, nobody is ever concerned about an afocal telescope
which has an infinite secondary magnification. It is the vertex curvature that can cause fabrication
and adjustment difficulties.
The Lunar Configurable Array Telescope (LCAT)
Introduction
Astronomers' desire for a space telescope much larger than HST is clearly
demonstrated by the attendance at this Workshop. The reality is that such a telescope
collides with the realities of cost scaling. Coupled with this reality
is the fact that any multi-billion dollar science project must have broad-based support from
the science community and solid political support at both Presidential and Congressional
levels. The HST Successor is certainly in the same multi-billion dollar class as the Super
Collider of the physics community, a project that has finally achieved the broad support
base necessary for funding to follow. Advocacy of a bigger HST on the general grounds
that 'bigger is better' will not be sufficient. A new concept needs to be developed that
clearly diverges from scaling up of a traditional HST-type space telescope. With these
realities in mind we have a few comments regarding the nature of a possible space telescope
that may depart from what the organizers of this Workshop had in mind.
The national goal declared by the President is Space Station, the Moon and Mars, in
that order. Space Station is a potential location where a large system could be assembled
prior to being sent into a high orbit. It is not a desirable environment for a large space
telescope. Mars is not relevant as an observatory site. The Moon is very relevant for
reasons we will address. Our comments are based on the premise of a permanent Lunar
Outpost.
One of the main arguments for a lunar telescope is a degree of permanency, that is,
as long as a Lunar Outpost is maintained. In contrast, the relatively short lifetime of an
orbiting telescope is a disadvantage, especially as a cost penalty. Access to a telescope in
a 100,000 km orbit for refurbishment and resupply is a major problem with no solution
in the present NASA planning.
A telescope in conjunction with a Lunar Outpost means the possibility for continual
upgrading or modifying the telescope to meet changing science objectives. The two main
technical disadvantages of the Moon are i) its gravity field and ii) direct Sun and Earth
light. The gravity term is manageable. It also appears to be feasible to shield the telescope
from direct sun and Earth light and from scattering from nearby lunar terrain. Thermal
disturbances to the telescope also appear to be manageable by proper shielding, enabling
the telescope to become as cold as if it were at a lunar pole crater. If these conditions are
met, the telescope could be at a logistically convenient location near the Lunar Outpost.
We want to address a concept that is significantly different from those presented in
the preliminary communications from Garth Illingworth in order to help fill in the matrix
of possibilities. This option, moreover, is of special interest to JPL and could be an area
where JPL can contribute in future studies.
* The authors were unable to attend the meeting. This paper was read by J. Breckinridge, who also
answered questions on their behalf.
produce a tight diffraction image suitable for feeding a high resolution image into the
entrance aperture of analytical instrumentation. In dilute array it could become an in-
terferometer of immense resolution capability through image reconstruction. JPL has
prepared reports on the performance capabilities of lunar imaging interferometers that
can be supplied upon request.
An LCAT is also a possibility for a very large space telescope operating in high Earth
orbit. Here the problem is one of telescope lifetime compared to the usefulness of a given
configuration and the number of weeks or months that a given configuration would be
maintained. Configuration changes for VLA are scheduled months in advance to meet the
science objectives of many astronomers.
This scheduling strategy is not appropriate for an orbital system, but it is for a lunar
system. If the orbital system has a lifetime of only 5-10 years, configuration changes
would be limited because of the intense demand for observation time. A full 10-year
program could be developed for the array in its densest configuration. Similarly, a full
program could be developed for the array in its interferometric configurations. The system
lifetime would simply be too short to meet its potential for serving the broadest astronomical
community. Not so for a lunar-based LCAT.
In our concept the most attractive aspect of LCAT is that the array consists of
modest-sized fully-independent modules. This means that lunar-based astronomy could
begin as soon as the first module is delivered to the Lunar Outpost. The cost scaling law
is favorable, the political hurdle minimized, time to first light minimized, and industry
operation. The LCAT module would have the basic parts of any terrestrial telescope,
with the beam combining module a separate subsystem delivered as a complete unit. The
goal would be an LCAT module small enough and light weight enough that it can be
delivered as space is available in the manifest.
Some of the main technical issues which need to be addressed are examined below.
Field of view. The field of view of the MMT is limited to a few arcmin by the size of
the facets on the six-sided beam combiner. If a wider field is desired, a prime focus
single-mirror combiner can be used. It is still limited to a field of about 10 arcmin,
but for a large telescope a field of this size is quite useful.
Added reflections. The beam combiner and path length compensation add at least
three reflections. This can be serious for wavelengths below 0.2 μm. One is then
caught between the desire on the one hand for the highest resolution, meaning an
array of apertures rather than a single aperture, and on the other hand by a longer
integration time. For the most extreme ultraviolet it would probably be preferable to
operate the array as singles, and consequently at the resolution provided by a single.
Path length compensation. Path length compensation of a lunar array is small com-
pared to that for an imaging interferometer on Earth. The rate at which the path
length compensation system operates is small but over a much larger total path-
length change if a kilometer-sized array is used. An array like that proposed for
project ALOHA by Labeyrie, where mobile modules move so as to keep path lengths
constant, is a possible option. Further engineering analysis would be needed to deter-
mine if the ALOHA concept is a practical option for a lunar system. Demonstration
of ALOHA on Earth is clearly a necessary precursor.
Polarization. The plane of polarization must be maintained throughout the reflections
in the path length compensator / beam combiner optics; otherwise the diffraction
pattern of the array will be compromised.
Diffraction / MTF effects. The diffraction pattern formed by an array of discrete
apertures results in energy being distributed into an array of secondary maxima sur-
rounding the central maximum. The cleanest diffraction pattern is clearly that of
a single unobscured aperture. As the apertures are separated the number and in-
tensity of these secondary maxima increase. For a non-redundant array the central
maximum is at maximum sharpness and resolution but at minimum central spike
intensity. What is not always appreciated is the seriousness of the effect of such an
array pattern on the secondary maxima. Both the number and total intensity of the
secondary maxima are maximized. This situation is very bad if one wants to feed an
analytical instrument or record a direct image.
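The trade described above can be sketched numerically. The following is an illustrative calculation (not from the paper; the function and parameter names are our own) of the far-field pattern of an aperture array via the Fraunhofer relation, PSF ∝ |FFT(pupil)|²:

```python
import numpy as np

def aperture_psf(centers, sub_radius, grid=512, extent=10.0):
    """Far-field (Fraunhofer) PSF of an array of circular sub-apertures.

    centers    : list of (x, y) sub-aperture centres (same units as extent)
    sub_radius : radius of each circular sub-aperture
    grid       : samples per side of the pupil plane
    extent     : half-width of the sampled pupil plane
    """
    x = np.linspace(-extent, extent, grid)
    X, Y = np.meshgrid(x, x)
    pupil = np.zeros_like(X)
    for cx, cy in centers:
        pupil[(X - cx) ** 2 + (Y - cy) ** 2 <= sub_radius ** 2] = 1.0
    # Fraunhofer diffraction: PSF is |FFT(pupil)|^2, normalised to its peak
    field = np.fft.fftshift(np.fft.fft2(pupil))
    psf = np.abs(field) ** 2
    return psf / psf.max()

# Single filled sub-aperture versus a dilute pair of the same sub-apertures
single = aperture_psf([(0.0, 0.0)], sub_radius=1.0)
pair = aperture_psf([(-3.0, 0.0), (3.0, 0.0)], sub_radius=1.0)
```

Comparing `single` and `pair` shows the interference fringes and stronger secondary maxima that appear as the sub-apertures are separated, which is the concern for feeding an analytical instrument.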
We feel that the LCAT concept should be given study attention along with the more
conventional options for both orbital and lunar telescopes. This study should first be
from a systems point of view. This would be followed by addressing technical issues such
as the performance of LCAT in its densest clustering as well as for various configurations
required for typical science objectives. It is easy to talk about beam combining. The
real problems arise when one is faced with actually building a working system. These
practical engineering aspects of beam combining mean that good concepts need to be
developed and then analyzed. One objective of such a study should be to define precursor
experiments that could be done on Earth to show that critical milestones can be met.
There is an interesting compromise between a single large aperture telescope and an
interferometer that still makes use of the modular concept: A fixed, co-mounted cluster
of seven modules, much like the MMT, plus outlying modules that would constitute the
configurable interferometric array. The co-mounted cluster of seven would avoid path
length compensation, give a reasonably tight diffraction pattern, and still have the cost
and logistical advantages of modules that we discussed above.
Current related projects at JPL are addressing construction of a large imaging inter-
ferometer at a terrestrial site, location TBD (supported by the U.S. Navy), and design of
an orbiting imaging interferometer of fixed baseline, supported by JPL funding.
Acknowledgements
The research described in this paper was carried out by the Jet Propulsion Laboratory,
California Institute of Technology, under a contract with the National Aeronautics and
Space Administration.
Artist concept of the Lunar Configurable Array Telescope (LCAT)
Standardized modules are delivered to Lunar Outpost as manifest permits. The modules are config-
ured as they become available into a Large Aperture MMT with two or more configurable interferometric
modules. Astronomy begins as soon as the first module is delivered. The science objectives are similar to
those of the NGST.
Advantages of lunar siting include: long operating lifetime, support from Lunar Outpost, vacuum
environment, very low tracking rates, and ability to be reconfigured as science programs require.
DISCUSSION
Woolf : Recall that yesterday Roger Angel showed that while we are replacing the MMT with a single
large aperture, with Columbus we are building a new MMT. The MMT concept is only appropriate if
you have already started with the largest filled aperture that is possible. And if you want the same
point source sensitivity as a 16 meter with thinly spread 4 m apertures, you need 256 of them.
Rolan : Extending the comment that Nick Woolf made, - as segment diameters rise, the full facesheet
costs may drop, but one finds that reaction structure costs climb. (As diameters go up, bending rises
by segment size to the 4th power; this has to be bought back with thickness, and thus the substrate
cost can climb to offset pure faceplate-based conclusions.) Thus, the optimum cost point for "5-tier"
construction may be < 8 m, especially when possible breakage is considered. Conversely, for "3-tier"
designs (which are probably the preferred way to go), larger may be better.
Breckinridge: Yes, and it's certain that we need to do sufficient Phase A studies to really let us get at
some of these issues. I believe that NASA must have a much more aggressive Phase A system study
than we had for HST. This is essential to: (i) identify trades between filled aperture and interferometry
in terms of science payoffs; (ii) constrain cost; (iii) monitor technology development activities. The
more money that is put up front in studies and analyses the cheaper the whole telescope is going to
be (..APPLAUSE!!..).
Illingworth: Let me make a point that is related to this discussion. There have been several workshops
over the past few years where a rather rosy view has been taken about the potential of interferometry.
There have been many scientists who have become concerned that we were starting down a path that
was too narrowly defined in its technical focus without a clear understanding of the scientific goals of
the astronomical community. The perspective has now become more sophisticated and an appreciation
has developed that these approaches are complementary. If one looks at the observational phase space,
the multiparameter phase space, there are regions which are optimally tackled by interferometry, and
there are regions that are optimally tackled by large filled-aperture systems. The overlap is a grey
area, which needs to be fleshed out scientifically and technically. I see this workshop as one of the first
Large Optical Fabrication Considerations
and Speculation about the overall NGST Configuration
Michael Krim
Perkin-Elmer Corp.
Introduction
It had been recognized at Perkin-Elmer, as elsewhere, that the ability to efficiently
produce optics in the 8 to 10 meter size range is a central challenge for any future large
telescope. Since this conference is devoted to the science and engineering aspects of the Next
Generation Space Telescope (NGST) and not mirror manufacturing in general, we have
attempted to make our remarks about large mirror fabrication quite specific to that
project. In this regard we will present a design concept for the NGST*, the intent of which
is to provide the rationale for the selection of a particular mirror segmentation approach
as well as to provide some insight into other systems level design issues. Notable among
these are the issues and considerations relating to system deployment and launch vehicle
space constraints, i.e. real-estate allocation, and other mechanical aspects of the design.
In any event it is exciting to speculate about what the large-scale design architecture of
the NGST might be. It is happily reminiscent of the work that we did in the early 1970's
when the configuration of the Hubble Space Telescope was first being "put to paper".
or fully assembled segmented, 10m mirror would not exist in the USA's or anyone else's
inventory in the 2010 to 2015 era. While the Advanced Launch System (ALS) is generally
regarded as a 10m vehicle, it would not be large enough to accommodate a fully assembled
and erected 10m system. As shown in the inset, a 10m aperture telescope will have an
This paper was written prior to receiving the preliminary systems description and data package
prepared by Bely and Illingworth. Any differences between what was assumed here and that data are
completely unintentional. The conceptual design described in this paper was based only on the material
contained in the announcement of the meeting and some (hopefully) educated guesses.
Figure 3: Notional development schedule, 1990-2015 (timeline not legible in reproduction): key technology demonstrations, system design, primary mirror and glass fabrication, telescope integration, telescope/vehicle integration, and launch, referenced to HST's end of life.
overall diameter of 11 or so meters when baffling, structures and other system elements
are accounted for. Adding shroud-to-structure clearances and making allowances for the
fairing structure itself adds perhaps another meter. This suggests that a 12m shroud
would be a realistic expectation.
size. Presuming that a 10m sandwich mirror is designed such that the maximum bending
stress developed at a 7.5g quasi-static launch acceleration does not exceed a safe stress
limit of 2000 psi, it would weigh in the neighborhood of 50,000 lbs and be approximately
30 inches thick. This weight range is predicated on a family of mirror designs with 0.5 and
1 inch faceplates and a lOx areal density core. In Figure 2 we show the methodology used
to determine the weight and proportions of these mirrors. The stress equation shown in
that figure yields approximate answers only and is in no way intended as a substitute for
rigorous analysis. While this does not represent a final optimized design, it is felt to
be representative of what might be achieved with a glass structure. This model further
assumes that the mirror is supported at 3 points near its periphery. Some stress reduction
can be achieved with supports at a 2/3'rd radius location. The center deflection at 7.5 g's
is on the order of 0.25 inches or about 9500 waves! It follows then that a (force) actuator
system and a reaction plate would be needed to compensate for "some" non-recoverable
launch strains. It follows also that an auxiliary multi-point launch support system would
need to be extremely stiff, and consequently heavy, if it were to be effective in limiting
deflections for such a mirror.
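The weight and deflection estimates above can be roughed out with the classical formulas for a uniform, simply supported circular plate under its own weight. This is a hedged stand-in for the sandwich-mirror method of the paper's Figure 2, which did not survive reproduction; the material values below are illustrative ULE-like assumptions of ours:

```python
# Hedged illustration only: classical formulas for a uniform solid circular
# plate, simply supported at its edge, loaded by its own weight
# (Timoshenko/Roark). NOT the sandwich-specific method of Figure 2.
def plate_under_launch(radius, thickness, density, E, nu, g_load):
    """Return (peak bending stress in Pa, centre deflection in m)."""
    q = density * thickness * 9.81 * g_load            # self-weight pressure, N/m^2
    D = E * thickness ** 3 / (12.0 * (1.0 - nu ** 2))  # flexural rigidity, N*m
    sigma = 3.0 * (3.0 + nu) * q * radius ** 2 / (8.0 * thickness ** 2)
    w = (5.0 + nu) * q * radius ** 4 / (64.0 * (1.0 + nu) * D)
    return sigma, w

# Illustrative case: 10 m diameter glass plate at a 7.5 g quasi-static load
sigma, w = plate_under_launch(radius=5.0, thickness=0.76,
                              density=2210.0, E=67e9, nu=0.17, g_load=7.5)
```

For a self-loaded plate of fixed thickness, stress scales as the radius squared and deflection as the fourth power of the radius, which is the scaling behind the text's conclusion that a multi-point launch support must be extremely stiff to be effective.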
On the other hand, 3/4 of an inch of glass distributed over a 10m diameter would weigh
only 7,500 lbs. However, a 3/4 inch thick mirror segment in excess of 50 inches in diameter
will require more than a simple 3-point support system to limit stress levels to 2000 psi.
Therefore thin shell (meniscus) mirrors of this type would need to employ displacement-
type actuators to couple them to a structurally efficient composite sandwich (or truss)
support structure. These are "stiff" actuators which directly impart a displacement to the
mirror, as opposed to "soft" actuators such as those employed on the HST, which apply a force field
to the mirror. This is an important but subtle difference. The idea here is to "....use
structures for what structures do best and use mirrors as mirrors".
The sandwich mirror will require fewer but more powerful actuators than the thin
meniscus mirrors. Both approaches will require reaction, i.e. support, structures. Qual-
itatively the weight of the actuator systems will be "a wash", at least to first order. In
that case, the segmented meniscus approach could save about 25,000 lbs over a monolithic
approach. The additional weight of the deployment mechanisms and possibly a robotic
arm for teleoperator- assisted erection will still result in a positive weight advantage for
the segmented approach.
Finally one might question the technical and economic feasibility of producing a 10m
diameter lightweighted (sandwich) mirror blank from a low expansion material such as
Zerodur or Corning's ULE within the next several decades. For these reasons, the ability
Figure 2: Mirror weight estimating method (equations not legible in reproduction; the chart relates faceplate thickness and overall mirror thickness to weight for the 2000 psi stress limit, with no solution found for some cases at 2000 psi).
(LEO) and that there was an available orbit transfer vehicle (OTV), the boosting of a
fully assembled, relatively frail and flexible NGST represents a major controls, contam-
ination avoidance, and structural dynamics challenge. Therefore we opted to configure
the design to be automatically deployed in the specified high earth orbit (HEO) and to
require only a single launch to achieve operational status. Finally, it is assumed that it
would be desirable to achieve an operational date of about 2010 for the NGST. That is
consistent with the HST's 15 year lifetime. Completing the NGST within 20 or 25 years
from now implies that the technologies that will be used for its design and construction
are no more than extensions of what can be accomplished today, particularly with regard
to materials. In fact it might be stated that the NGST cannot afford to use anything
later than 1995 technology. We show this in Figure 3 where an attempt at a development
schedule is presented. This schedule tacitly implies therefore that the primary mirror will
be glass, such as Zerodur or Corning's ULE, although the rapid emergence of new mirror
materials such as silicon carbide may alter this conclusion in the near future.
A few comments about the concept drawing need to be made here. To a large extent,
the drawing began with an outline drawing of the assumed launch vehicle which defined
the envelope for the stowed NGST. An 8m outside diameter ascent fairing with a 7m
diameter available for the payload was assumed. While such a vehicle does not exist
today, it is within the range of capabilities that are being considered by the Air Force and
NASA. For the purposes of this paper, volume constraints were considered to be more
critical than weight. The ability to stow a 10m system in a 7m payload envelope and
then automatically deploy it places far greater demands on the resourcefulness of the
designers than weight does, at least in my opinion.
Figure 4: 'Vegetable Steamer' stowage concept (4.25 m stowage diameter). Eight 126 x 110 inch trapezoidal mirror-segment panels form the aperture; a 7-DOF dual-mode teleoperator smart arm, jettisoned after assembly, maneuvers the segments into place.
An alternative approach, where the mirror was divided into only two segments, split along a diameter
and stowed on either side of the central telescope structure, also shown in Figure 4,
was considered as a way of minimizing the inter-segment gap area. While it has many
advantages from a mirror producibility and performance aspect, the lack of rotational
symmetry was judged to be a disadvantage from an automatic deployment aspect. As
shown in that figure, such an approach would include a robotic arm to 'maneuver' the
segments and other system elements into position as an alternative to the more fully
automatic deployment approach possible with the six-petal configuration.
twentieth wave (rms) and for a nominal segment size of 2m and a radius of curvature
of 30m. This is equivalent to a gross sagitta tolerance of 0.04 micrometers, or about
one fourteenth of a wave. Clearly this degree of precision is difficult, if not prohibitively
costly (my opinion), to achieve if the segments were individually produced and/or if
there were no means for adjusting curvature after the segments were actually completed.
The four mirror telescope design advocated by Meinel and the LDR community is an
elegant means of correcting radius of curvature errors arising from manufacture. For
the NGST however, we felt that the additional two reflections imposed by the tertiary
and quaternary mirrors might be too expensive from a throughput aspect in the short
end of the operating spectrum. A more appropriate solution for this application we
thought would be the use of shape (figure) control actuators which physically bend the
segments to the correct curvature, if required, to correct for manufacturing errors. The
figure control system, whether used directly on the primary mirror or on an image of it
on a quaternary, also provides a correction capability for errors that might arise from
the large temperature change between the shop and orbital operation at the desired 100
degrees Kelvin. Of particular concern is the effect of this spatially uniform temperature
change on any CTE anisotropy that might (will) be present in the segment substrate. We
will take advantage of the ability of the figure control system to correct low frequency
aberrations and thereby relax the manufacturing tolerances in this range to about a half
wave rms. The presumption here is that a properly designed actuator system can provide
a wave of correction, at least in the long spatial frequency domain. In the mid-spatial
frequency regime, beyond the (presumed) correction capability of the actuator system,
the as-manufactured figure quality will be on the order of 0.01 waves. As with the Hubble
Space Telescope (HST), our optics manufacturing approach is based on the use of small
tools operating at a uniform and constant interface pressure with material removal being
proportional to velocity of the tool path at any given point. This is accomplished at
Perkin-Elmer with a machine referred to as the Computer Controlled Polisher (CCP).
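The radius-matching tolerance quoted earlier (2m segments, 30m radius of curvature, 0.04 μm gross sagitta tolerance) can be checked with the shallow-sphere sagitta relation s = r²/2R. This is an illustrative back-of-envelope check of ours, not the paper's derivation:

```python
# Hedged worked example of the radius-matching tolerance quoted in the text.
# Sagitta of a shallow sphere over half-aperture r:  s = r**2 / (2*R),
# so a radius error dR changes the sagitta by       ds = r**2 * dR / (2*R**2).
r = 1.0        # half of the nominal 2 m segment size, metres
R = 30.0       # radius of curvature, metres
ds = 0.04e-6   # gross sagitta tolerance from the text, metres

sag = r ** 2 / (2.0 * R)          # nominal sagitta, ~16.7 mm
dR = 2.0 * R ** 2 * ds / r ** 2   # allowable radius mismatch, 72 microns
dR_over_R = dR / R                # fractional tolerance, ~2.4e-6
```

A permissible radius mismatch of roughly 72 μm in 30 m illustrates why matching individually produced segments is so demanding, and why curvature-adjusting actuators are attractive.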
The concept of a CCP
capable of figuring a 10m mirror will be described later in this
paper, after the architecture discussions are completed.
The ability to compensate for long spatial frequency errors with actuators mitigates
against the need for inherently rigid mirror segments. In fact, rigidity can be a detriment
in the design of a figure control system (in certain instances). Provided that figure errors
in those spatial frequencies equal to or shorter than the span between three (theoretically
two) actuators are within acceptable limits (0.01 waves rms), the low spatial frequency
errors can be relatively large; several waves is not unreasonable, although we have not yet
quantitatively explored just how inaccurate one could allow the optic to be before the
sensing and control system becomes an impractical theoretical exercise.
It is conceivable therefore that the substrate could be a relatively thin sheet of ma-
terial, 25mm (1 inch) for argument's sake, provided of course that it can be generated
and figured. It is (well) beyond the scope of this paper to address the trades relating
thickness, actuator density, sensing and control system complexity, structural integrity,
weight, inherent reliability, and so on. Suffice it to say that, for this discussion, the ability
to generate, figure, and polish extraordinarily thin substrates is an important consideration
in the manufacture of an NGST-type mirror.
Other Interesting Speculations About the Concept: It was assumed that at least a
dual focal length capability would be desired, for many of the same reasons this is the
norm with terrestrial telescopes. For illustrative purposes it was assumed that the visible and
infrared final focal ratios were 10 and 20 respectively and that the primary mirror would
be f/1.5. Focal ratios (f/no's) greater than 20 were not considered for NGST because of
the size related and exponentially increasing vibration and structural sensitivities. Final
focal ratio-to-detector matching is, as usual, an instrument responsibility. These choices
were not based on any quantitative trades or analyses. An f/1.5 primary was judged to be
as fast as one could go optically and still achieve a 20 or so arc minute total field of view.
The selection of f/10 visible and f/20 IR focal ratios offers the ability to minimize
the size of the secondary mirror to reduce self emission for IR operation. This could be
at the expense of having two secondaries that could automatically be interchanged. This
is probably a disadvantage from a weight, dynamics, and failure-potential aspect. Several
other more attractive approaches for achieving a dual focal length capability could be
considered, including the use of common f/10 fore-optics with the visible focal clearance
and employing a tertiary to achieve a 2x final magnification as well as to provide an exit
pupil for the location of a cold stop.
Unlike the HST, where a 3m circumscribed circle enveloped the instrument package, it
may not be necessary to locate the NGST's final focus behind the primary. Presuming
that a 3m diameter instrument section is still reasonable, then the instrument section
might be allowed to pass through a central hole in the mirror at the expense of a linear
obscuration ratio of 0.3. A 1.25m secondary with a magnification of 6.6 would produce
an f/10 image which comes to focus approximately 0.6m ahead of the primary. This is
what is shown in the conceptual drawing.
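The focal-ratio numbers in this concept can be checked against the basic two-mirror relation, final focal ratio = secondary magnification x primary focal ratio. A quick illustrative check (variable names are ours):

```python
# Quick consistency check of the quoted NGST concept numbers (illustrative).
D = 10.0          # aperture diameter, metres
fr_primary = 1.5  # primary focal ratio (f/1.5)
fr_final = 10.0   # desired final focal ratio (f/10)

m = fr_final / fr_primary   # required secondary magnification, ~6.7
F_system = fr_final * D     # effective system focal length: 100 m
```

The required magnification of about 6.7 agrees to rounding with the 6.6 quoted for the 1.25m secondary.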
Regardless of what form the final optical design might take, the secondary mirror needs
to be supported by a structure which, from an idealistic IR performance standpoint, should
not introduce any obscuration in object space and must not introduce any obstruction in
(convergent) image space. An off-axis system would fulfil these desires. A structure such
as the HST secondary mirror support truss would be a good practical compromise. While
the obscuration from the thin spider legs is in object space, the rest of the structure
is totally outside of the optical envelope. However, for all intents and purposes, a 10m
cylindrical truss like that of the HST is a non-solution for the NGST; it is not amenable
Baffling:
Bright object baffling, i.e. against the earth, sun, and moon, is a persistent problem for giant,
space deployable telescopes. The use of a positionable bright object blocking screen might
prove to be a more easily implemented solution, albeit with (potentially severe) opera-
tional constraints. While on the subject of large distributed surfaces, the concept drawing
does not show any solar arrays. Hopefully, by the time NGST becomes operational, com-
pact power sources such as nuclear or radioisotope generators will have become politically
as well as technically acceptable. The cumulative effect of submicron micrometeoroid impacts on
the surface of the primary mirror is an issue that requires further evaluation. A baffle en-
closure, even one composed of several layers of multi-layer insulation (MLI), can provide
some protection against this source of degradation.
So much for thoughts and speculations about what the NGST might look like and
why. Now I shall describe how the mirror described in the concept might actually be
produced.
As we stated at the beginning of this paper, Perkin-Elmer along with others in the
field, have been exploring alternative ways of producing optics in the 8 to 10 meter
size range. Our experience lies with the successful application of small tools, the CCP
described earlier, and we can extend that technology from the 90th-wave, 2.4m HST
primary to the 10m NGST. All of the technology to accomplish this facility scale-up
is currently in place and we believe that such a mirror could be completed in five years,
including the production of the blank by Corning or Schott and the parallel construction
of the building and the fabrication of the metrology and polishing equipment. Subsequent
mirrors of that size, if there are any, and we hope there will be, could be figured at the
rate of one every two years in that facility. The model of such an optical manufacturing
facility is shown in Figure 5 and will be described more fully later.
Segment Manufacture:
In addition to the basic issue of scaling up our optical shop to accommodate 8 to 10m
HST-type (monolithic) mirrors, the question of radius of curvature matching of individual
segments to the kind of precision described earlier has also been addressed at Perkin-
Elmer. Making matched segments, one at a time, has successfully been accomplished
under government contract. In this instance the segments were spheres, approximately
1m in diameter with a focal length of 5m (as I recall). While successful, this approach is
Figure 5: Model of the 10m Optics Manufacturing Facility
Figure 6: Method for simultaneously producing all segments with a large CCP (interferometer located at center of curvature; section shown through the Computer Controlled Polisher).
extremely costly in terms of schedule. As described earlier, curvatures can be adjusted
to some extent with actuators, or as in the case of Keck, with preset "coercers", i.e.
a "set and forget" actuator. Or at least that's what I think is being done. On another
program that we were associated with, the Large Deployable Reflector (LDR), the concept
of "semi-replication" was evolved where a graphite tool was used as a master for molding
thin quartz substrates to a common radius of curvature. In this instance quartz was
selected because of its superior homogeneity. This was an important factor since the
temperature change between fabrication and operation was several hundred degrees (F)
and the ΔR/R precision had to be preserved. This process was subsequently demonstrated
in a joint Perkin-Elmer/Heraeus program, under the DARPA umbrella program called
Rapid Optical Fabrication Techniques (ROFT). In another program sponsored by NASA-
Lewis, we successfully laminated thin glass sheets to (nearly) matching CTE (coefficient
of thermal expansion) composite substrates preformed to a spherical surface. These two
programs are mentioned only for completeness since enormous material and process scale-
up problems would need to be solved before they could even be considered as a candidate
for the NGST mirror. From here on we shall confine our discussion to glass mirrors, albeit
unusually thin by present standards.
The ability of the CCP to be scaled up to 10m sizes begs the question, "Why not
temporarily attach all the segments to a 10m tooling plate and generate them all at once?".
In principle, the concept is as illustrated in Figure 6. The idea here is to "fool" the CCP
and/or the optical shop into thinking that they are working on a monolith. In this manner
itis believed that all of the segments will have the correct curvatures, eliminating the
AR/R concern described earlier. This is really an extension of the technique developed by
Dr. Robert Leighton of the California Institute of Technology for the rapid and accurate
manufacture of 10m class sub-millimeter aluminum reflectors.
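A few lines of arithmetic show why the radius match among segments matters; all the numbers below (nominal radius, segment semi-aperture, mismatch) are assumed for illustration, not taken from the paper.

```python
# Illustrative sensitivity of a segmented primary to radius mismatch: a
# segment whose radius of curvature differs by dR from the nominal R
# picks up a sag (focus) error of about a**2/2 * (1/R - 1/(R + dR)) over
# its semi-aperture a, which must stay at the wavelength level for the
# segments to figure as one monolith. All numbers are assumptions.
R = 25.0      # m, nominal radius of curvature (assumed)
a = 0.9       # m, segment semi-aperture (assumed)
dR = 0.001    # m, a 1 mm radius mismatch, i.e. dR/R = 4e-5
sag_error_nm = a**2 / 2.0 * abs(1.0 / R - 1.0 / (R + dR)) * 1.0e9
print("segment sag error: %.0f nm" % sag_error_nm)
```

Even a 1 mm radius error on a 25 m radius, under these assumed numbers, produces a sag error of roughly a visible wavelength, which is why generating all segments on a common tooling plate is attractive.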
There are three issues that must be resolved for such a technique to work: what
material should the tooling plate be made of; how are the segment substrates attached to
the tooling plate; and what will be the extent of sleeking or other gap-related 'damage' and
how can it be controlled. Ideally the tooling plate would look very much like a relatively
how can it be controlled. Ideally the tooling plate would look very much like a relatively
thick mirror and be constructed from the same material as the segments. The segments,
presuming that they are constant thickness elements of a meniscus, would be attached to
the concave front surface of the tooling plate. This front surface would be generated to a
sphere whose radius of curvature matched the back surface of the segments.
In addition to the obvious choice of ULE or Zerodur for the tooling plate, a much
lower cost and more readily available material, slip cast silica, has merit. Slip cast silica,
as produced by the Cerodyne Corporation, is used for high temperature tooling by the
automotive and aircraft industries. It is cast, as implied by the name, from a fused silica
slurry which may optionally contain a pebble sized fused silica aggregate which improves
its strength and reduces shrinkage during the drying process. It is fired at a relatively
low temperature (1100 degrees C) after casting to produce a stable, structurally sound
shape which is about 95% as dense as the parent material. It can (probably) be cast over
styrofoam or similar blocks to achieve lightweighting if deemed necessary.
The key attributes for the attachment method to join the monolithic tooling plate
and the segments include:
a) the ability to fill (the small but inevitable) voids between the mating surfaces
b) the ability to set up without shrinkage or other mechanisms for inducing strain in the
substrate
c) the ability to allow the pieces to readily be separated after figuring
d) stability or the absence of flow characteristics such as exhibited by pitch
e) and sufficient stiffness so that the bondline doesn't act like an elastic foundation (and
foul up the edges)
Candidate adhesives or bonding agents include quartz filled epoxies with a silver
parting layer on the segment, quartz filled hard wax, certain of the Cerro near-room
temperature melting metal alloys (used in the turbine blade industry for tooling purposes),
as well as vacuum chucking. Development of a large scale segment attachment method is
clearly a development area simply because it hasn't been done yet (we believe).
tapes. CCP simulation software, which is based on specific tool material removal profiles
and was proven to be accurate on the HST, is used to verify the command tapes prior to an
actual material removal 'run'. This software will be preserved as the machine size is scaled
up. The surrogate mirror, or tooling plate approach, advocated for manufacturing the
segmented NGST primary, greatly simplifies the metrology operations over what would be
required with individual off-axis segments. Metrology for a 10m optic and our envisioned
facility for accomplishing it will be described next.
Unlike large 5-axis numerically controlled machines, the CCP is not a massive system.
Because material removal is (simply) a function of dwell time and not vertical position
of a tool, the CCP does not need to be extraordinarily rigid. Its precision is primarily a
function of metrology and the command tape algorithms, and not the stiffness of the ways.
Therefore expandability is readily accomplished without having to design and construct
massive machine bases or turntables. In fact it should be noted that there are several
commercially available precision X-Y machines that are large enough to adapt to a 10m
CCP. Notable among these are the Cincinnati Milacron modular and the Swedish Arboga
line of precision (tape laying) machines.
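The dwell-time principle behind the CCP can be sketched in one dimension with Preston's relation, in which removal rate scales with pressure and relative speed; everything below (coefficient, pressure, footprint, error profile) is an illustrative assumption, not machine data.

```python
import numpy as np

# 1-D sketch of dwell-time figuring: by Preston's relation the removal
# rate ~ k * pressure * speed, so the total removal profile is the tool
# "influence function" convolved with the dwell-time map. All numbers
# are illustrative assumptions.
k = 1.0e-7        # Preston coefficient, assumed
pressure = 0.01   # N/mm^2, assumed tool pressure
speed = 50.0      # mm/s, assumed relative tool speed
rate = k * pressure * speed * 1.0e6   # um removed per second under the tool

x = np.linspace(-5.0, 5.0, 201)       # mm across the part
influence = np.exp(-x**2 / 2.0)       # assumed Gaussian tool footprint
influence /= influence.sum()          # normalize to unit removal

error = 1.0 + 0.5 * np.cos(2.0 * np.pi * x / 5.0)   # surface error, um

# Zeroth-order command: dwell in proportion to the local error.
dwell = error / rate                                 # seconds per point
removal = rate * np.convolve(dwell, influence, mode="same")
residual = error - removal

inner = slice(40, 161)   # ignore edge roll-off of the convolution
print("P-V before %.2f um, after %.2f um"
      % (np.ptp(error[inner]), np.ptp(residual[inner])))
```

The residual that survives one pass is the error content narrower than the tool footprint, which is why the influence-function characterization matters so much.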
b) insensitivity of the 100 foot high metrology tower to ambient seismic and acoustic
noise
c) in-situ metrology to minimize moving and handling the 60,000 lb surrogate mirror
(tooling plate and segment assembly) between figuring and metrology operations
d) the use of the CCP in a grinding mode to generate and aspherize the segments.
The ability of the CCP to operate with an abrasive wheel, in addition to its more
familiar loose abrasive grinding and polishing mode, reduces, or might possibly eliminate,
the need for either a 10m spherical generator or a precision grinder such as a Campbell.
More importantly this would eliminate the need for a large precision turntable. Rough
shaping of the surrogate mirror (tooling blank) can (probably) be as 'sloppy' as 0.025
inches or 1000 waves and still be within the capability envisioned for the CCP in a
wheel grinding mode. Since the CCP spindle path can follow almost any mathematically
describable shape, and since the spindle has a two axis tilt capability, by commanding it
to follow a series of concentric circular paths it can duplicate the kinematics of a classical
generator without a massive turntable. Following this quasi-generating operation with
a spiral path where the 'Z' direction or spindle axis is also programmed, the (nominal)
1mm or 0.040 inch of parabolization (or whatever) would also be accomplished without a
turntable (and without a large Draper).
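As a sanity check on the quoted (nominal) 1mm of parabolization, the departure of a parabola from a best-fit sphere can be estimated from the fourth-order sag term; the f/1.25 value below is an assumed NGST focal ratio, not a figure from the paper.

```python
# Rough cross-check of the ~1 mm "parabolization" figure quoted above.
# Fourth-order departure of a parabola from its vertex sphere is
#   dz(r) = r**4 / (8 * R**3),  with vertex radius R = 2 * fnum * D,
# and a best-fit sphere cuts this by roughly a factor of 4. The NGST
# focal ratio of f/1.25 is an assumption for illustration.

def parabolization_mm(diameter_m, fnum):
    R = 2.0 * fnum * diameter_m          # vertex radius of curvature, m
    r = diameter_m / 2.0                 # semi-aperture, m
    return (r**4 / (8.0 * R**3)) / 4.0 * 1.0e3   # best-fit departure, mm

print("NGST 10 m f/1.25: %.2f mm" % parabolization_mm(10.0, 1.25))
print("HST 2.4 m f/2.3:  %.3f mm" % parabolization_mm(2.4, 2.3))
```

Under these assumptions the estimate lands near the 1mm (0.040 inch) quoted in the text, and shows the NGST asphere is more than an order of magnitude deeper than the HST's.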
In the next section, we will describe a facility concept which is based on the use of
this manufacturing approach with special emphasis on metrology.
Facility Concept:
As shown in Figure 8, the facility is essentially contained within a 150 foot high, 60
foot square building. This building is surrounded by a lower building which houses
engineering offices, a small machine shop, and an area for integrating the mirror assembly
with the shop support system. Because this operation requires high capacity overhead
cranes and because the cranes cannot pass by the metrology tower support columns, this
assembly operation is carried out in this support area. Once the integrated shop support
system and mirror are wheeled into the tower and firmly jacked into position, we do not
envision it to be moved again until after the final interferograms are completed.
Metrology Tower:
The metrology tower (Figure 9) is erected on a massive seismic mass, a 2000 ton
concrete foundation which 'floats' on a thick bed of crushed rock. This foundation is
separate from the foundation on which the rest of the building is constructed. The
tower itself is constructed from four 1m diameter steel columns braced to each other,
with a platform on top for the metrology equipment. To absolutely eliminate any
resonance amplification of the basic tower modes, and to prevent the existence of local,
acoustically excited 'ringing' modes, the columns will be (partially) filled with gravel. We
have designed the tower to have a 1st mode of 1.3 cps, including the non-structural mass
of the gravel. This is well above the noise spectra usually associated with groundborne
'cultural' noise.
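The way the gravel fill lowers the first mode follows from simple mass scaling; the sketch below assumes a mass ratio chosen to connect the 6 Hz bare-tower figure mentioned in the discussion with the 1.3 cps design value, and is not a structural model.

```python
import math

# For a fixed bending stiffness k, the first mode is
#   f = (1 / (2 * pi)) * sqrt(k / m),
# so f scales as 1/sqrt(mass). The mass ratio is an assumption picked to
# connect the bare-tower and gravel-filled figures in the text.
f_bare = 6.0                      # Hz, tower mode before gravel fill
mass_ratio = 21.0                 # assumed (structure + gravel) / structure
f_filled = f_bare / math.sqrt(mass_ratio)
print("first mode with gravel: %.1f cps" % f_filled)
```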
We had considered in-air (as was employed on the HST program), in helium, and in
vacuum metrology. Ultimately we came to the belief that a vacuum approach is the least
likely to provide 'regrets' during mirror fabrication. It is a conservative approach but
Figure 7: The Large Optics Facility as envisaged on the coast of Southern California
one which we have absolute confidence in. In addition to the extremely long optical path
length, which itself might be a problem, stabilization times for HST metrology approached
6 to 8 hours in a closed chamber that had about a tenth of the volume needed for the NGST.
By eliminating the air, this stabilization time issue is eliminated. We believe that a mild
(10~^ torr) vacuum can be achieved in the same or even less time. A containment vessel
would be required to contain the helium in any event; rather than helium, it could be
'filled' with vacuum.
The vacuum chamber is suspended from the walls of the building and does not have
any direct mechanical connection with the seismic mass or with the metrology tower.
However the seismic block does serve as the lower closure for the chamber. We accomplish
this by constructing the vacuum tank with a cylindrical skirt at its lower end. During
figuring and polishing operations or when access to the mirror is needed, this skirt is
raised approximately 18 feet above the floor. During those metrology operations when
we will require vacuum, the skirt is lowered to the floor and an inflatable rubber-like
seal will close the gap. As vacuum is increased, the skirt will be forced against the floor
compressing a compliant gasket on its lower end. The inflatable seal is a barrier against
the transmission of mechanical noise from the vacuum tank to the seismic mass and
consequently the metrology tower.
Metrology:
Full aperture phase measuring interferometry (PMI) will be used for the NGST. It
is important to note the inclusion of the words, 'full aperture'. We believe that working
at the full circular aperture with all of the segments in place on the surrogate mirror, and
consequently having symmetry about the optical axis, will lead to more accurate as well
as more rapid metrology. The ability to simultaneously figure all of the segments in place
is a distinct advantage of the CCP approach which is (conceptually) unlimited in size.
Unlike the HST, where an all-reflective null corrector was employed, we believe that
this would be impractical for the NGST primary. The reason is that the mirrors would
have to be approximately 2m in diameter (or so I'm told by the optical designers). The
HST null corrector design cannot be simply scaled to the NGST because the f/No of the
latter is almost twice as 'fast' as the HST's 2.3. Suspending two 2m mirrors on metrology
mount systems whose precision needs to be equal to the HST mount is judged to be
an unacceptable and unnecessary risk. Therefore a refractive, or possibly a diffractive,
approach will be employed. Obviously different null correctors will be used for coarse IR
and for precision visible interferometry.
Concluding Remarks:
I apologize for a possibly too long and/or too qualitative a paper. However I hope
NGST program. As for me, it is always exciting to start off with a blank sheet of paper
and begin contemplating the architectural issues associated with a new and bold initiative
such as the Next Generation Space Telescope.
DISCUSSION
Angel: A comment on the isolation of the tower. The tower that we are building in Arizona in
the mirror lab (in the football stadium!) is very similar to yours. The resonant frequency is 10 Hz.
We have put on isolators which give the concrete base a frequency of about 1.5 - 2 Hz. We were
concerned that if it was just put directly into rock, the isolation would not be adequate at the critical
frequencies.
Krim: We based the design here on what has been done in Danbury. There we have an air system
which can also be disabled if we wish. The reason that the air system is used is because of sources of
vibration some several hundred yards away. The isolation system is quite adequate when these sources
of vibrations occur. The resonant frequency of the tower was 6 Hz before the non-structural gravel
mass was added. So the towers are probably quite similar.
Angel: Do you have any isolation at 1-2 Hz from the rock bed?
Krim : No. We don't have any significant disturbances at 1-2 Hz except at the 1-2 micro-g level. We
have measurement data on this.
Illingworth: The HST mirror is clearly one of the best ever produced. However, we have come
to realize that the HST mirror will show some scattered light, particularly in the UV. This arises
because of residual structure on the surface from the footprint of the small tools used in the computer
controlled polishing. Have you thought about using technology such as the stressed lap approach that
Roger Angel discussed? This will not leave structure on scales that are a particular problem for the
UV-visible region.
Krim : This is an interesting question. We do need quite lightweight mirrors, with thin faceplates
for future large telescopes. A problem arises when one uses relatively large tools which cover several
cells and the resilience of the mirror differs from the rib to the center of the cell. A large tool will
exert a non-uniform pressure profile and will rub away more material over the ribs. This introduces
"quilting". One of the advantages of CCP is that we never span more than a cell, and so we avoid
"quilting". We will be looking further at the question of the mid and high spatial frequency content.
Illingworth: It would be good to quantify whether the "quilting" is a worse problem for the image
quality than the small tool effect.
Krim: We must remember that the HST mirror is a very conservative mirror compared to what we
would be considering for the future. It has a faceplate that is an inch in thickness.
Nolan : So your conclusion is that bigger is better for the support structures. Recall Pamela, for
example.
Krim : Yes, in our case, and for our design at least, it worked out this way.
Large Segmented Optics Fabrication and Wavefront Control
J. Richard Vyce
Litton Itek Optical Systems
As a space-based optical system to be implemented and operated for years or decades,
the NGST requires significant technological advances over the HST. Many, possibly all,
of these advances are under development. Several have been and are being done at Itek;
the scope of these efforts offers strong support for NGST feasibility.
The 10 meter f/1.75 Keck primary, comprising 36 1.9-meter hexagonal segments at
6 different radii, represents one realistic NGST primary option. Indeed, there was the
reported 1988 announcement by the Soviet Academy of Sciences on their intention to
pursue a space telescope with segmented optics of a "design similar to the Keck telescope".
Keck Telescope mirror fabrication, involving making the 36 segments plus a set of 6
spare segments, provides experience relevant to the NGST. Because the segments are a
reasonably thick 10 cm, it has been possible to make them utilizing the innovative stress
polishing technique. However, this is unlikely to be possible on the necessarily thinner
solid, or light weighted, substrates of an NGST. Thus, Keck experience is relevant for
the magnitude of surfacing and for the final test technique, but not for the surfacing
technique. That is likely to be based on more general surfacing techniques such as Itek's
Computer Controlled Optical Surfacing (CCOS) described below. CCOS was originally
considered for final figuring on the Keck segments, but this was obviated by good stress
polishing results, and use of a "warping harness" in the segment whiffletree supports for
figure correction.
Steps in the Keck surfacing process are illustrated in Fig. 3. Stress polishing involves
applying precise bending loads to an appropriately supported circular substrate, so as to
deform the upper surface to the negative of the aspheric departure required in the finished
surface. After polishing with a large lap to the required spherical radius, as measured with
a micro-inch bar profilometer, each segment is unstressed and tested in the Autocollimating
Test Facility (Fig. 4). It is then cut to hex shape, installed on its whiffletree mount, and
remeasured in the ATF for warping harness adjustment. The ATF employs two movable
flats and precise alignment metrology for autocollimation testing the segments in each of
the 6 radial positions.
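The sign logic of stress polishing (bend by the negative of the departure, polish spherical, release) can be sketched in a few lines; the profiles below are illustrative, not a model of a Keck segment.

```python
import numpy as np

# Toy 1-D sketch of stress polishing: bend the blank by the negative of
# the desired aspheric departure, polish the stressed surface to a pure
# sphere, then release; elastic springback leaves the desired asphere.
# Profiles are illustrative, not a Keck segment model.
r = np.linspace(-1.0, 1.0, 101)       # normalized radial coordinate
sphere = 0.5 * r**2                   # target base sphere (arbitrary units)
departure = 0.05 * r**4               # desired aspheric departure
bend = -departure                     # preload applied by the bending fixture
polished_while_bent = sphere          # the stressed surface is worked spherical
released = polished_while_bent - bend # springback after the load is removed
print("max residual vs. desired asphere:",
      float(np.abs(released - (sphere + departure)).max()))
```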
One infrequently encountered optical requirement, but a critical one in segmented
mirrors, is the need for precise radius match among segments. This is achieved through
measurement with the bar profilometer and radius control in polishing. The need for
radius determination in ATF testing added significantly to the difficulty of its metrology.
After some debugging problems, the Keck segment fabrication process is proceeding.
The CCOS process is aimed at surfacing general aspheric elements, both centered and
off-axis with progressively greater efficiency. This process provides capability to manu-
facture 2 to 4-m aspheric mirrors in less than half the time needed with the conventional
technology. It is increasingly automated in its three major steps: figure generation, grind-
ing/polishing, and testing. CCOS is essentially the robotic emulation of a skilled optician
(Figs. 5, 6), in which measured positive surface errors (bumps) determine rubbing commands
for a nutating lap whose polishing "influence function" is well characterized. As
with an optician, the process is iterative, but improving control of polishing parameters in
CCOS is increasing the degree to which errors are corrected in a polishing cycle, thereby
reducing the number of cycles and total surfacing time.
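The effect of per-cycle correction efficiency on cycle count can be sketched with a toy geometric-convergence model; both numbers below are illustrative assumptions, not Itek data.

```python
# Sketch of the iterative measure-and-rub cycle: if each pass corrects a
# fraction g of the measured error, the residual shrinks geometrically,
# so better control of polishing parameters (larger g) means fewer
# cycles. Both numbers are illustrative assumptions.
g = 0.7              # per-cycle correction efficiency, assumed
error_nm = 500.0     # starting rms surface error, illustrative
cycles = 0
while error_nm > 20.0:          # illustrative finish target
    error_nm *= (1.0 - g)       # only the uncorrected fraction survives
    cycles += 1
print("cycles: %d, final rms: %.1f nm" % (cycles, error_nm))
```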
The mirror segment is first polished on an Arboga 5-axis machine which has circular
symmetry about an external center and hence the polishing pad works in circular arcs.
The pad is kept from rotating by a radius arm affixed at the center of symmetry.
To minimize the time between rubbing and testing, a CCOS machine is located ad-
jacent to an optical test chamber. The work piece, blocking body and stand are on an
air dolly which permits easy movement of that assembly between the machine and test
chamber.
An important CCOS process adjunct is the laser profilometer, which serves two
functions. It provides fractional micron measurement on ground surfaces, permitting use of
fast fine grinding almost to final figure, thereby greatly reducing material to be removed
at the slower polishing rate. It also makes the precise measurement of radius and off-axis
location required by segmented optics.
Interferograms of an earlier aspheric segment and a large aspheric demonstration
mirror recently completed by CCOS, shown in Fig. 7, indicate the good performance of
the process on difficult surfaces and the quality improvement in the recent mirror.
Cost and delivery of fused or fritted lightweighted mirror substrates can frequently
be bettered by machine lightweighting of solid blanks. This is especially true for glass-
ceramics whose material cost is low and which currently cannot be fused or fritted. Ma-
chine lightweighting has emerged as a useful option in lightweight mirror fabrication and
has been applied at Itek to the largest lightweighted elements.
The basic machining technique is relatively straightforward and has exhibited low risk
comparable to other lightweighting techniques. After machining, acid etching is normally
employed to relieve stresses in the grinding-damaged surface layer.
From the Generic Optics Control System block diagram (Fig. 1) a list of key NGST
active/adaptive technologies has been compiled and addressed in terms of experience from
specific programs or groups of programs at Itek (Fig. 8). For most technologies relevant
experience has been accumulated from multiple programs over as long as two decades. A
chronological listing of programs (Fig. 9) brings out some of their accomplishments, in
many cases the first of their kind.
The following sections give examples of this work, first in active primary mirrors and
then in adaptive optics.
As early as the late '60s, there were concerns about providing ever larger diffraction
limited space optical systems using passive technology alone. In the search for alterna-
tives, development was started on active optics capable of being controlled by appropriate
sensors to correct system wavefront errors occurring during operation. An early 72-inch
active optics breadboard served to test opto-mechanical design concepts, control algorithms,
and closed loop operation.
The advent of real time adaptive optics in the early '70s, combined with the growing
active optics experience led to investigation of large, lightweight, segmented active optics
with characteristics consistent with space use. The first program of this kind was oriented
to S-MWIR sensor application and thus was constructed for and tested at cryogenic
temperature. In addition to breaking new ground in lightweighting, closed loop phasing
and figure control through numerous actuators, and cryogenic operation, it also proved the
viability of "negative figuring" the (repeatable) figure change from ambient to cryogenic
test temperature.
Building on this experience, a more advanced active, segmented primary has been
developed (Fig. 10) aimed at a specific space application and compatible with space
flight after replacement of some electronic and mechanical components by space qualified
versions. This mirror design was intended for scaling to larger size, and is believed to
represent the most mature point of departure for an NGST primary of the Workshop
strawman type.
Adaptive optics of the kind currently used or planned stem from the 1973 21-channel
breadboard predecessor of the Compensated Imaging System (CIS) (Fig. 11). The CIS,
started in 1975, has operated on a 1.6 meter telescope throughout the '80s to provide
ground-based satellite imaging of unprecedented quality.
Adaptive optics technology has advanced about 11/2 generations beyond that in the
CIS, in wavefront sensors, wavefront processing electronics, and deformable mirrors. From
the beginning, wavefront sensors have had photon counting sensitivity and multi-kilohertz
bandwidth.
The first kilochannel wavefront sensor is used for SWIR laser wavefront diagnostics,
and also incorporates a complete 100+ channel closed loop wavefront correction system.
A pulsed, visible wavelength wavefront sensor of extremely high performance is de-
signed for electronic scaleup to multi-kilo channels.
The first wavefront sensor for space use is part of a major Shuttle-based experiment
scheduled to fly in 1991. Its design is based directly on the CIS wavefront sensor.
Deformable mirrors have improved substantially from that in the CIS, which required
multi-kilovolts to produce one micron surface deformation. Current designs (Fig. 12)
employ discrete multi-layer actuators to produce several microns of deformation from ±100
volt drive with wavefront quality suitable for the shortest visible wavelengths. Existing
technology is of kiloactuator scale, and is capable of growth. The deformable mirror for
the Shuttle wavefront control experiment is similar to those above but designed and tested
to space and man-rated standards.
3.4 Summary
This limited review has covered many of the advanced optical technologies that may
be required for the NGST.
All of those considered have been or are being pursued. At this level of examination
there is no indication either of technology gaps or of "show stoppers" that would prevent
attaining the desired NGST level of performance.
Figure 1: Generic Optics Control System block diagram (tilt and deformable mirrors; beam sharing optics; astronomical sensor suite; primary mirror, possibly active and segmented; processors and sensors for wavefront, phasing, alignment, guidance, etc.)
Figure 2
Figure 3
Figure 4: Autocollimation test of a 1.8 m primary segment (autocollimation flat above the segment)
Figure 5: CCOS process. A small orbital tool moves over the optic under control of a computer (figuring and smoothing modes)
Figure 6: CCOS cycle
Figure 8
- Active Mirror Experiments, 1970: active control understanding and algorithm development
- Compensated Imaging System (CIS), 1975: first large scale operational adaptive optics system
- Advanced Adaptive Optics, 1980s: large scale wavefront sensors, processors, and deformable mirrors
Figure 10
Figure 11: Compensated Imaging System schematic (deformable mirror, wavefront sensor, wavefront processor, imaging detector; uncompensated and compensated images)
Figure 12: Low-voltage kiloactuator mirror, printed wiring concept
DISCUSSION
Angel: You got close to actually showing us what kind of quality you can get with computer controlled
polishing. You showed an interferogram with rather closely-spaced fringes. You do phase-shifting
interferometry and so real numbers are probably available. Can you tell us what quality can be
obtained?
Vyce: I cannot say exactly what the quality is, but I am sure that it can be excellent for visible
wavelength optics.
Angel: I can see wiggles in the interferogram so it is probably not adequate for ground-based systems,
and almost certainly not for the space optics that we are talking about here.
Unidentified: Those are not corrected optics - that is an active mirror and is as manufactured without
the corrections that will be applied.
Vyce: Angel is referring to the finer-grained structure which would not be corrected out. I am not
certain that this is a finished unit, by the way.
Angel: There is a concern about computer-controlled polishing, and so I urge that when you come to
talk to astronomers you give us numbers so that we can see how well the technique is doing.
Nelson: With regard to Angel's comments: two of our mirrors for the Keck telescope had final
polishing with CCOS and, ignoring the edge problems with CCOS, the mirrors were polished to
surface errors below 20 nm. There were edge problems, and there were high frequency errors at the
10 nm rms level, that led us to go to a more cost-effective solution, namely warping harnesses. Since
it is an iterative process, Itek may be able to do better. The edge effects were a particular
concern to us because it is a segmented mirror.
Vyce: I think that we have a way of dealing with the edge effects and so we are checking that out.
Kahan: Dick, would you agree that it's fair to say that meter class optics can be made to visible
quality levels in about 30 weeks, given today's CCOS techniques? This makes the primary mirror less
of a cost driver, given the epic aerospace aspects of the total program.
Fabrication of large and fast mirrors
with extraordinarily smooth surfaces
Ernst-Dieter Knohl
Carl Zeiss
Abstract
At the inception of the Next Generation Space Telescope (NGST) program we should ask what
technologies are needed and are these technologies already available. It is clear that an important area is
the fabrication of highly polished optical surfaces, in order to get high resolution images. For the NTT
program Zeiss developed a fast polishing technique to generate super smooth surfaces, and patented it
as membrane tool technology. The striking results of the ESO NTT demonstrate how powerful these
new tools are: the concentration of energy of the 3.6 m primary turned out to be twice as good as the
HST's, and 50 times less expensive (neglecting the atmospheric contribution to the image spread). Rapid
metrology is essential for a fast figuring process. Zeiss is running interferometry in "real time", averaging
hundreds of frames in one evaluation. Therefore no vacuum chamber is needed. An adequate mirror
support system for figuring and testing has been built for the NTT primary and has been qualified in
interferometric tests. For an actively supported mirror Zeiss prefers the thin, solid meniscus solution for
best optical performance. With such designs, mirror diameters are limited to between 8 and 9 m.
deformation of the surface is done by lapping, and the asphere is polished. Finally, an
interferometric test with a compensating system gives the input to the next polishing
cycle. This process is repeated until the figure and polish are satisfactory.
This time consuming process was established to generate the optic of the 3.6 m
telescope MPIA, with 35 µm deviation from the best fitting sphere and an F-number of F/3.5. In
four years an RMS wavefront aberration of 24 nm was reached. The first light of the MPIA
telescope was in 1984 and Birkle has published "Operation and Test Results" in Ref. 1.
With the same technology a second 3.6 m optic was fabricated for the Iraqi Observatory
on Mt. Korek. Unfortunately the complete telescope is stored today in Baghdad.
2.1 Membrane polishing tool technology
The idea to use strip tools was initiated by experiments that Zeiss made during
figuring of the ROSAT shells, illustrated in Figure 1. In the ROSAT program a super
smooth surface quality was achieved:
- very small ripple amplitudes, and
- surface roughness better than 2 Å rms (Ref. 4, and Figure 2)
Later on, in the ROFT program, Zeiss developed the so-called membrane tool for high
quality optical surfaces in the range of 25 nm rms wavefront aberration with high wear
rate (generating 100 m² per year). This was the background used for the NTT primary
fabrication with a flexible, stripe-like tool and pressure control.
These tools turned out to be a great success in several respects:
- they produce an excellent wear rate, because of the great area of contact
- for the same reason, they produce extraordinarily smooth surfaces at frequencies
associated with ripples
- flexible stripe tools can follow aspheres as fast as F/1.0, where the conventional
polishing technique fails.
If the tool wear rate is set by actuators under computer control, cycle time can be
reduced drastically, so that an 8 m mirror with a 50 m² area can be finished within 2
years.
A special evaluation has been made of the surface ripple of the NTT primary (Ref. 5).
A window varying between 25 and 550 mm was moved along the fringes, and the amplitudes
at the corresponding frequencies are shown in Figure 3.
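A minimal sketch of such a sliding-window evaluation, under the assumption that it records the average peak-to-valley per window length (the exact NTT procedure is given in Ref. 5), might look like the following; the profile is synthetic, not NTT data.

```python
import numpy as np

# Assumed implementation of a sliding-window ripple evaluation: move a
# window of a given length along a surface profile and record the average
# peak-to-valley inside it. Longer windows pick up the longer-wavelength
# ripple. The profile below is synthetic, not NTT data.
x = np.arange(0.0, 1000.0, 0.5)                       # mm along the fringe
profile = (5.0 * np.sin(2.0 * np.pi * x / 50.0)       # 50 mm ripple, 5 nm
           + 20.0 * np.sin(2.0 * np.pi * x / 400.0))  # 400 mm ripple, 20 nm

def window_amplitude(profile, step_mm, window_mm):
    n = int(window_mm / step_mm)                  # samples per window
    spans = [np.ptp(profile[i:i + n])             # peak-to-valley per window
             for i in range(0, len(profile) - n, max(1, n // 2))]
    return float(np.mean(spans))

for w_mm in (25, 100, 550):
    print("%4d mm window: %.1f nm" % (w_mm, window_amplitude(profile, 0.5, w_mm)))
```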
The amplitudes of the MPIA and NTT primary are compared in Figure 4. At wavelengths
between 25 and 75 mm, and also between 225 and 550 mm, the NTT mirror
surface is smoother by a factor of 2 to 3. The NTT intrinsic quality was shown to give
an 80% encircled energy within 0.1 arc sec diameter.
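To get a feel for what that encircled-energy figure implies, one can invert a simple Gaussian blur model; this is an assumption for illustration, since a real point spread function with diffraction structure differs.

```python
import math

# Under a Gaussian blur model (an assumption; real PSFs differ), the
# encircled energy inside radius r is EE(r) = 1 - exp(-r**2 / (2*sigma**2)).
# "80% within 0.1 arcsec diameter" then fixes sigma:
r80 = 0.05                                     # arcsec, radius = diameter / 2
sigma = r80 / math.sqrt(-2.0 * math.log(1.0 - 0.8))
fwhm = 2.3548 * sigma                          # Gaussian FWHM = 2.3548 * sigma
print("equivalent sigma %.4f arcsec, FWHM %.3f arcsec" % (sigma, fwhm))
```

Under this assumed model the NTT intrinsic quality corresponds to an image core well under 0.1 arcsec FWHM.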
2.3 Supporting System
The advanced technology of active optics leads to more and more thin, solid mirrors
with the highest possible homogeneity. The aspect ratio of the NTT primary was chosen as
D/h = 15. New projects like SOFIA or VLT tend to D/h = 45. Therefore the new mirror
quality depends strongly upon the optimized supporting system. The mirror support
during figuring has to guarantee the following functions:
- Supporting the mirror axial weight
- Carrying the axial and tangential force variations of the polishing loads
- Damping of the polishing loads
- Supporting for interferometry without imposing any constraints
- Active correction of long wavelength deformations to increase the dynamic range of
the test system
During the NTT program Zeiss measured the quality of the support system by turning
the mirror with respect to the supporting pads. Finally a perfect match was found between
data from the mirror support during fabrication and data from the mirror support in the
telescope cell. The beautifully sharp astronomical images are a direct consequence of the
support quality.
Active correction makes sense only if the mirror body is solid and homogeneous. In
the case of a structured mirror the active bending corrects for long wave errors but introduces
ripples with even steeper slope errors. This effect has been demonstrated with simple
paper models.
In a study phase of the VLT program Zeiss developed all the technologies needed for
8 m mirror figuring (Ref. 6). The biggest useable monolithic glass mirror is between 8
and 9 m in diameter for several reasons:
- fabrication of the blank
- mirror handling
- transportation on road and by ship
Many ideas exist for the construction of larger primaries: for example the MMT
solution, or the Keck solution with all its attendant problems. To overcome the problems
with figuring of off-axis aspherical elements, Zeiss is studying a 12 m primary made out of
4 segments. The idea is to assemble these segments at simulated zero-g into a monolithic
blank, do the polishing and interferometric testing in the well known on-axis manner, to
take the four segments apart for transportation and to align and assemble them again on
site in the telescope cell.
Thorough calculations have still to validate this approach. But the great advantage
of large coherent optics is obvious, and the approach is worth pursuing.
4. Conclusions
- The largest monolithic mirrors that can be produced, handled and transported are
limited to diameters between 8 and 9 m.
- Two years are typically needed for figuring, preparation of supporting pads and testing
to finish 50 m² of super-smooth optical surface, corresponding to an encircled energy
of 80% within 0.2 arcsec diameter.
- The great improvement of the NTT optics was achieved with stripe tools and actuators
under pressure control, with fast IR and visible interferometry, and with unconstrained
mirror support.
- The advantages of active optics are only obtained if ripples at spatial frequencies
beyond the range of active correction are avoided.
Figure 5 illustrates the evolution of surface polish quality with time for the NTT.
Figure 6 shows the time actually taken to figure mirrors of given aspheric deformation in
three cases, and predictions for the VLT.
References
1 K. Birkle and U. Hopp, Das 3.5 m Teleskop MPIA: Operation and Test Results, Mitteilungen der AG Nr. 68, 1987
2 R. N. Wilson et al., ESO Active Optics: the NTT and the Future, ESO Messenger No. 53, Sept. 1988
3 R. Wilson (ESO), "First Light" in the NTT, ESO Messenger No. 56, June 1989
4 K. Beckstette, Aspects of Manufacturing Grazing Incidence X-Ray Mirrors, SPIE Vol. 645 (1986)
5 H. Geib, C. Kühne, E. Morgenbrod, How to Quantify Ripple, Proc. IAU Colloquium No. 79, April 1984
6 E.D. Knohl et al., 8 m Class Primary: Figuring, Testing, Handling, SPIE Vol. 1013 (1988)
[Figure 5: ESO-NTT M1 RMS values (mirror NTT-M1, image no. 7, Zeiss).]
[Figure: folded Wolter I mirror system. Mirror material: Zerodur; surface roughness better than 2 Å RMS; contour of hyperbola 2 after superfinish within 0.05 µm amplitude.]
DISCUSSION
Illingworth: Mike Krim noted a "print-through" problem with thin mirrors. For thin mirrors do
you think that the technique will work without "print-through" of the support so that the resulting
segments are smooth on small scales?
Knohl : The technique works on rotationally-symmetric mirrors. We are now working on the 12 meter
DGT telescope and are discussing putting the segments together and polishing them as a monolith.
Krim: The "quilting" problem arises particularly in lightweight mirrors that are "pocketed" from
the back. There are tradeoffs that can be made in the degree of lightweighting, the cell spacing, the
faceplate thickness, and the tool pressure. It is a multiparameter tradeoff.
Angel: On that topic, if you make a lightweight glass honeycomb structure for a large space telescope
you are talking about really very thin faceplates - maybe a cm or less. Under any realistic polishing
pressure, maybe a tenth of a psi, for example, the cells would have to be very small to avoid deflections.
They would probably have to be an inch or two in size. The only advantage of a computer-controlled
polisher in not "quilting" the surface is when the tool size is less than a cell size, say a cm or two. I
find it difficult to see that this small tool advantage could be usefully realized for polishing a 16 meter
mirror. Therefore I do not see that there is a domain where small tool polishers will be used for large,
lightweight space optics.
Illingworth: Have you thought about using your membrane lap technology on segments' surfaces by
polishing all the segments at once in a support somewhat like that described by Mike Krim of Perkin
Elmer?
Knohl : Yes, the membrane tool can be used for segments of any shape. In this case, the workpiece
doesn't move and the membrane tool is oscillating in the surface plane. The point still being under
discussion is the best set up for the metrology of off-axis segments.
Economical Production of Large Optics *
ABSTRACT
Optics for large systems are frequently ultra-lightweight and deformable by design. They often have
unusual shapes and include non-rotationally symmetric surface figures. Surface roughness, subsurface damage,
light scattering and laser damage threshold specifications are also frequently required. Manufacturing techniques
being developed to address these stringent requirements will be discussed.
1. INTRODUCTION
As large, monolithic optical structures are no longer feasible, large optical systems typically incorporate
multi-segmented arrays (Refs. 1, 2). Spaceborne systems can be assembled in orbit from segments that can be
transported in smaller launch vehicles.
for distortions. Manufacturing these optical elements presents many challenges. Conventional processes,
materials and methods are not capable of figuring these active optics in a cost effective manner. New,
deterministic manufacturing techniques need to be developed. These techniques must not only be capable of
accurately figuring large, complex, optical components; they must also support a performance level requiring
minimum light scattering and minimum laser threshold damage (resulting from subsurface damage and surface
microroughness). On-going process developments at Eastman Kodak Company have resulted in optical
manufacturing technology that will provide the deterministic, cost-effective fabrication of large state-of-the-art
optical components.
actuator control in the system). Glass-ceramic or stable glass materials such as fused silica, ULE®, or Zerodur®
are likely candidates for substrates.
The generation of an aspheric profile (i.e., diamond grinding of the three-dimensional aspheric equation
to within a few waves of nominal) for axisymmetric optics is typically accomplished by spinning the optic about
the optical/mechanical axis and moving a grinding head radially. This is not possible for large, off-axis
elements. Therefore, reliable aspheric curve generation of unusually shaped, off-axis segments requires a
generating machine with state-of-the-art size, stiffness, accuracy, and metrology. A machine of this caliber
would also allow manufacturing large anamorphic designs and using ductile grinding techniques (Ref. 3) to produce a
specular surface.
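The "three-dimensional aspheric equation" being ground is, for a conic surface of revolution, the standard sag formula. The short Python sketch below evaluates it; the radius of curvature, conic constant, and sample radii are illustrative assumptions, not values taken from the paper:

```python
import math

def conic_sag(r, R, k):
    """Sag z(r) of a conic surface of revolution:
    z = r^2 / (R * (1 + sqrt(1 - (1+k) * r^2 / R^2))).
    R: vertex radius of curvature; k: conic constant
    (k = 0 sphere, k = -1 paraboloid, k < -1 hyperboloid)."""
    return r**2 / (R * (1.0 + math.sqrt(1.0 - (1.0 + k) * r**2 / R**2)))

# Illustrative example: a 0.8 m diameter paraboloid with R = 2.88 m,
# compared with the sphere of the same vertex radius.
R, k = 2.88, -1.0
for r in (0.0, 0.2, 0.4):  # radial positions in metres
    asphere = conic_sag(r, R, k)
    sphere = conic_sag(r, R, 0.0)
    print(f"r = {r:.1f} m: sag = {asphere * 1e3:.4f} mm, "
          f"departure from sphere = {(sphere - asphere) * 1e6:.2f} um")
```

The "aspheric departure" column is what makes off-axis generation hard: the machine must follow this equation over a surface with no axis of symmetry to spin about.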
Process developments at Kodak have resulted in machinery capable of generating axisymmetric aspheric
elements up to 0.8 m diameter, and to within 6 µm p-v of nominal. Kodak has now expanded this process
capability to large off-axis segments by commissioning Cranfield Precision Engineering, Ltd., to build a
three-axis CNC generating machine with a working envelope of 2.5 m x 2.5 m x 0.6 m (Ref. 4). The OAGM 2500
(Off-Axis Generating Machine) pictured in Fig. 3 features a) a precision air bearing grinding spindle, b) an
in-situ metrology system using a retractable air bearing measurement probe, c) a metrology frame based on
distance measuring interferometers that reference to three precision mirrors, and d) a Granitan® machine
structure for stability yielding a total machine weight of 130 tons. The machine is expected to be on line in
December 1989.
[Figure: segmented telescope concept with graphite/epoxy support tubes and structure, secondary mirror, primary mirror segments, and phasing and alignment sensors.]
Fig. 3. Kodak's 2.5m off-axis generating machine.
The OAGM 2500 will hold off-axis elements on the machine work table in an orientation that minimizes
both sag and slope. The in-situ metrology system will reference the coordinate system of the machine and glass
blank to the aspheric equation. The surface will then be diamond ground and profiled by the metrology system.
Tool wear will be compensated and the map of the surface profile will be passed onto the next process step. The
fine matte surface produced by this machine will not be sufficiently specular for interferometry. Further fine
grinding will not be necessary, as the surface roughness and surface figure errors will be well within the envelope
of correction for the subsequent small tool polishing operation. This machine may be capable of ductile grinding
(the metrology system has 2 nm resolution). If this is possible, the specular surface produced will be ready for
interferometric testing, thereby minimizing or eliminating the subsequent small tool polishing operation. Process
development will evolve toward this end.
To remove subsurface damage from the previous aspheric generating operation and to achieve the surface
specularity required for interferometric testing, subaperture lap polishing is best suited to this operation because
of the size and lack of symmetry in the shape of off-axis segments. Full aperture laps have difficulty conforming
to non-symmetric shapes and are limited further by large aspheric departures.
The concept of small tool polishing consists of moving a small orbiting polishing lap over the optical
surface under computer control. Because the removal profile of the lap is known, and because polishing
parameters are held constant during the run, controlled amounts of material can be removed.
At Kodak, process development involving small tool polishing has been concentrated on understanding
the process parameters that affect the tool removal function. To accommodate aspheric surfaces with various
slopes and errors, computer modeling routines have been developed to choose the optimum tool removal
function. The notorious edge effects caused by small tool polishing have also been modeled. Controlled
experiments that compare predicted removal vs. actual removal yield results as shown in Fig. 4. Results indicate
that this approach is deterministic.
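Because the polishing parameters are held constant, the removal over the surface is, to first order, the convolution of the tool's removal (influence) function with the dwell-time map. A minimal one-dimensional sketch of this forward model; the Gaussian influence function and dwell profile are illustrative assumptions, not Kodak's actual functions:

```python
import numpy as np

# 1-D grid along the optic, in cm
x = np.linspace(-20, 20, 401)
dx = x[1] - x[0]

# Assumed Gaussian tool influence function: removal rate per unit dwell time,
# roughly 4 cm wide, normalised to unit removed "volume" per second.
sigma = 2.0
influence = np.exp(-0.5 * (x / sigma) ** 2)
influence /= influence.sum() * dx

# Assumed dwell-time map: the tool lingers longer where the error map calls
# for extra material removal (here, a bump centred at x = 5 cm).
dwell = 1.0 + 2.0 * np.exp(-0.5 * ((x - 5.0) / 4.0) ** 2)  # seconds per cm

# Predicted removal profile = influence convolved with dwell time
removal = np.convolve(dwell, influence, mode="same") * dx

print(f"peak removal at x = {x[np.argmax(removal)]:.2f} cm")
```

Note that `mode="same"` truncates the kernel at the array ends, which is a crude analogue of the edge effects mentioned in the text: near the boundary the tool overhangs the part and the simple convolution model breaks down.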
[Fig. 4: predicted vs. actual removal profiles for small tool polishing.]
Table 1
Small Tool Polishing: Dwell Time; Obliquity; Pad & Orbit Diameter; Pad & Orbit Speed. Motion: Rectangular X, Y, Z.
Ion Figuring: Dwell Time; Obliquity; Beam Energy; Gas Type; Temperature. Motion: Rectangular X, Y, Z.
Kodak has supported process development in ion figuring and has significant experience with the
substrates fused silica and silicon carbide using ion figuring (Ref. 5) and plasma etch (Ref. 6) on optics up to
100 mm in diameter. We have now increased this process capability by commissioning DynaVac Company,
Division of Tenney Engineering, to build a vacuum chamber capable of ion figuring (Fig. 5). This chamber has a
2.5 m capacity. The ion source position is controlled by a five-axis CNC positioner (x, y, z, φx, φy). The
machine is expected to be on line in September 1989.
With the IFS (Ion Figuring System), off-axis optical elements will be figured to their final specification.
Interferometric data will be passed in closed loop fashion to the machine's controller for each iteration. Few
iterations are expected, as this process is highly deterministic. The logical extension to this process development
is a state-of-the-art machine that exhibits in-situ metrology and thin film coating capability. Process
development will evolve toward this goal.
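The claim that few iterations are expected can be made concrete with a toy convergence model: if each closed-loop pass measures the surface and removes a fixed fraction of the remaining figure error, the residual shrinks geometrically. The 90% per-pass convergence and the starting error below are illustrative assumptions (the 90% figure echoes the convergence rate raised in the discussion), not measured values:

```python
# Toy model of closed-loop ion figuring: each iteration measures the surface
# interferometrically and removes a fixed fraction of the remaining error.
error = 0.5          # initial figure error, waves p-v (assumed)
spec = 0.01          # target figure error, waves p-v (assumed)
convergence = 0.90   # fraction of error removed per iteration (assumed)

iterations = 0
while error > spec:
    error *= (1.0 - convergence)   # residual falls 10x per pass at 90%
    iterations += 1

print(f"{iterations} iterations to reach spec")
```

With these numbers the loop converges in two passes, which is why a highly deterministic removal process needs so few test-and-figure cycles compared with conventional polishing.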
Planetary polishing machines have a horizontal, annular-shaped lap, the flatness of which is maintained
by a conditioner polishing on the surface along with two or more workstations. Within each workstation, optics
are held in septums which are motor-driven to maintain rotational synchronization with the lapping table.
Recirculating slurry systems are frequently used and environmental controls are necessary to maintain figure
control of the lap. Planetary polishing is an efficient method of polishing large plano optics. Numerous optics
can be processed simultaneously and stock removal can be carefully controlled, minimizing subsurface damage.
Fig. 7 shows a 4m planetary polishing machine designed and built at Kodak to support 1 m optics for the
Lawrence Livermore National Laboratory's Nova Laser program. The table for this polisher is a single piece of
granite weighing 33 tons. The conditioner is a 2.5 ton piece of fused quartz.
Final figuring for large plano optics no longer needs to be completed on the planetary polishing machine.
Once subsurface damage is removed and the surface error is within a fraction of a wave of specification, ion
figuring will be used to complete the manufacturing process. For large plano optics used in transmission,
ion figuring is an extremely attractive method to correct for inhomogeneities in the glass.
Fig. 7. Kodak's 4m planetary polishing machine.
4. CONCLUSIONS
Kodak's new optical fabrication and finishing systems are undergoing final acceptance tests and should be
operational by the end of 1989. They will be usable with beryllium and other materials, as well as glasses. Their
application will substantially reduce the time and cost traditionally associated with large optics. Kodak's
developments have been driven by our growing concern that, despite growing needs for large optical systems,
their deployment could be inhibited by expensive optical components.
5. REFERENCES
1 P.N. Swanson, J.B. Breckinridge, A. Diner, R.E. Freeland, W.R. Trace, P.M. McElroy, A.B. Meinel and A.F. Tolivar, "System Concept for a Moderate Cost Large Deployable Reflector (LDR)," Optical Engineering, Vol. 25, No. 9, pp. 1045-1054, September 1986.
2 D.C. Kline, "Space-based Chemical Lasers for Ballistic Missile Defense (BMD)," Proceedings of the International Conference on Lasers '87, F.J. Duarte editor, pp. 205-217, STS Press, McLean, VA, 1988.
3 N.J. Brown and B.A. Fuchs, "Brittle to Shear Grinding Mode Transition for Loose Abrasive Grinding," OSA Technical Digest: Optical Fabrication and Testing, pp. 23-26, November 1988.
4 P.B. Leadbeater, M. Clarke, W.J. Wills-Moren, T.J. Wilson, "A Unique Machine for Grinding Large, Off-axis Optical Components: The OAGM 2500," Precision Engineering, to be published.
5 S.R. Wilson, D.W. Reicher and J.R. McNeil, "Surface Figuring Using Neutral Ion Beams," Advances in Fabrication and Metrology for Optics and Large Optics, J.B. Arnold and R.E. Parks editors, Vol. 966, pp. 74-81, SPIE, Washington 1989.
6 L.N. Allen, "Precision Figuring of Optics by Ion Machining Processes," Conference on Lasers and Electro-Optics, Baltimore, MD, Vol. 11, April 1989.
DISCUSSION
Nelson: I have a question regarding the convergence rate of ion polishing. I had heard that you could
get accuracies of 90%. What is your experience with this machine and your expectations?
Keller: While we haven't had any direct experience with large optics in the new ion facility, we have
made some estimates based on our experience with small optics. Based on our work on the HST mirror
we believe that we could cut the 18 iterations needed for that mirror with conventional techniques down
to 3 iterations. Furthermore, even though those 18 iterations involved only 70 hours of polishing, it
actually took a year to finish the mirror because of the overhead of testing, handling and setup. With
a deterministic process such as ion polishing we believe that we could do a HST mirror in about a
month.
Keller: Effectively it is like diamond turning where the material is sheared away instead of being
"chipped" off.
Angel: Can you give us some feel for the size of the ripples that might be left on an aspheric surface
from the polishing tool and the size of the ion beam needed to remove them?
Keller: We need some experience before we can give you some real numbers, but we do have some
calculations.
Nelson : What is the maximum beam size that you could use?
Keller : The beam size could be 5, or 10 or 15cm. It can be made quite large.
Illingworth: Removal of the edge ripple from your example would suggest that you need ion beams
that are a fraction (1/3-1/4) of the small tool size. Is that the size you use - several cm? If I understand
correctly, this means that the "polished" surface from the "small tool" polishing would have to have
the final desired smoothness on scales less than several centimeters.
Keller : Yes, 1/3 to 1/4 will work, but the "up modes" of the ripple are Gaussian so the same size will
work (faster removal). However, note that in one example, the low ripple mode was below the nominal
surface. In practice one must keep the ripple zone "turned up."
Richard J. Terrile and Christ Ftaclas
ABSTRACT
The goal of imaging planets around the nearby stars has important
scientific significance but requires the use of advanced methods of controlling
diffracted and scattered light. Over the last three years we have undertaken a
study of coronagraphic methods of controlling diffracted light and of figuring
hyper-contrast optics. Progress in these two general areas has led to a
proposed space-based, 1.9 meter diameter coronagraphic telescope designed
specifically for very high performance in the imaging of faint objects near
bright sources. This instrument, called the Circumstellar Imaging Telescope
(CIT), relies on a new high efficiency coronagraph design and the careful
control of scattered light by extremely smooth optics. The high efficiency
coronagraph uses focal plane apodization in order to concentrate diffracted
light more efficiently in the pupil. This allows convenient removal of the
diffracted light by masking off parts of the telescope pupil while not
sacrificing the center of the field. Reductions of diffracted light by factors
exceeding 1000 are not only possible but are required in order to detect extra-
solar planets. Laboratory experiments with this new design have confirmed the
theoretical diffraction reductions to the limits of the optics used (factors of
about 300). The extremely high efficiency of this coronagraph puts strong
constraints on the narrow angle scattered light due to figure errors in the
telescope mirror. Since planets orbiting nearby stars are expected at angular
distances of about 1 arcsecond, it is in this small angular range that
scattering must be controlled. The figure errors responsible for scattering in
this range come from mid-spatial frequencies corresponding to correlation
lengths of about 10 cm on the primary mirror. A primary mirror about 15 times
smoother than the Hubble Space Telescope mirror is required for the CIT.
Laboratory experiments indicate that small test mirrors can be fabricated with
existing technology which come within a factor of two of this requirement.
1. INTRODUCTION
In this paper we will discuss the application of super-smooth optics to the
problem of the direct detection of extra-solar planetary systems. The two key
technology developments which have enabled serious consideration of this
scientific problem are in the control of diffracted light and in the fabrication
of super-smooth optics. This paper will concentrate on the control of
Over the last three years a joint program between the Jet Propulsion
Laboratory (JPL) and the Perkin-Elmer Corporation (now Hughes Danbury Optical
Systems Inc.) has been organized to define the requirements for optics capable
of directly detecting extra-solar planetary systems from Earth orbit. The main
conclusions from this study were that: 1) Direct detection of nearby extra-
solar planets is possible from a meter-class orbital telescope designed to
reduce diffracted and scattered light to levels about 1000 times below
diffraction from the unapodized aperture. Furthermore, there is a wide range
of scientific goals in planetary science and in astrophysics that are
addressable with such an orbital telescope. 2) A high efficiency coronagraph
was designed and tested in the laboratory which can reduce diffracted light to
these required levels without sacrificing the center of the field of view and
without dramatically reducing the effective aperture diameter of the telescope.
3) The requirements on the mirror figure demand a mirror about 15 times
smoother than the HST primary at mid-spatial scale frequencies (10 cm).
However, laboratory and fabrication experiments have shown that current
metrology is capable of making such measurements and that small (30 cm diameter)
test mirrors have been fabricated with nearly flight quality figures. As a
result of these conclusions we have proposed a flight project called the
Circumstellar Imaging Telescope (CIT).
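The link between the quoted 10 cm correlation length and the roughly one-arcsecond planet-search zone follows from the grating relation: figure error of spatial period Λ on the primary scatters light to an angle θ ≈ λ/Λ. A quick check in Python; the 0.5 µm observing wavelength is an assumed visible-band value, not a number from the text:

```python
import math

wavelength = 0.5e-6   # m, assumed visible-band observing wavelength
period = 0.10         # m, mid-spatial-frequency correlation length on the mirror

# Grating relation: ripple of period L scatters light to angle theta = lambda/L
theta = wavelength / period
arcsec = math.degrees(theta) * 3600.0

print(f"scattering angle ~ {arcsec:.2f} arcsec")
```

This comes out very close to 1 arcsecond, which is why the mirror smoothness requirement is stated specifically at the 10 cm spatial scale rather than as a single overall figure number.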
The CIT consists of a 1.9 meter diameter telescope in which small angle
scatter has been greatly suppressed, mated to a camera that can implement a
variety of diffraction control strategies. These strategies, including a new
high efficiency design, are chosen to realize the performance inherent in the
fore-optics and to maximize the scientific return of the mission. Thus both
diffraction and scatter are comparably reduced, unveiling the circumstellar
environment. One arcsecond away from a bright point source the baseline CIT
will have a background level more than 5 magnitudes per square arcsecond fainter
than the Hubble Space Telescope (HST) leading to a much higher sensitivity near
bright objects.
The key design features of the CIT are control of scattered light through
use of super-polished mirrors, control of diffracted light through use of a
coronagraph, and control of image motion through use of an internal fine
pointing system. Computer modeling indicates that, with only moderate advances
in mirror fabrication technology, the CIT will be able to detect Jupiter sized
planets around solar type stars at distances out to ten parsecs.
At the very heart of Planetary Science lies the question of the uniqueness
of life on Earth. Current theories entertain the idea that suitable conditions
for life probably occur commonly throughout the galaxy on planets surrounding
stars. Therefore, a corollary to this fundamental question is what is the
nature and probability of planet formation. With our solar system as the only
available model, clues to the nature of planet formation are derived from
observations of current conditions. The planets orbiting the sun are co-planar,
co-rotating and generally show a compositional ordering with distance from the
sun. This regularity is thought to be a relic of the conditions during the
formation of the sun and solar system. Models of the formation of the planets
have them forming in a collapsing nebula or disk of material around the early
sun. Indeed, recent observations of circumstellar disks indicate that these
conditions do occur early in the lifetimes of stars. In order to gain a deep
understanding of the entire process of planetary formation it is desirable to
observe the various stages of evolution from the earliest stages of stellar
formation to a survey of mature planetary systems. Recent progress has been made
in detection and imaging of circumstellar disks, but direct planet detection
remains the most challenging observational goal.
is about 10% greater than Jupiter before additional mass causes the core to
become degenerate and the radius to decrease with added mass. The visual
magnitude of Jupiter, viewed from 10 parsecs, is 25.9 at opposition, 27.4 at
quadrature and 27.5 on average. Only the presence of a broad ring system could
add to the equivalent cross section and therefore the brightness of a large
planet. The challenge of direct detection of planets is not in the detection of
a faint signal, but in separating this relatively faint planetary component from
the dominant background glare of the parent star. Even for relatively nearby
stars the angular separations of parent star and planet are an arcsecond or
less. It is now recognized that the HST will not be able to directly detect
extra-solar planetary systems because of the inherent levels of scattered light
in the telescope.
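The "background glare" problem can be quantified from the magnitudes quoted above: seen from 10 parsecs the Sun would shine at its absolute visual magnitude of about +4.8 (a standard value, not from the text), so a Jupiter at average magnitude 27.5 is fainter by roughly 22.7 magnitudes. Converting that to a flux ratio:

```python
# Star/planet contrast from a magnitude difference dm: ratio = 10**(0.4 * dm)
m_sun_10pc = 4.83     # absolute visual magnitude of the Sun (standard value)
m_jupiter = 27.5      # average V magnitude of Jupiter seen from 10 pc (text)

delta_m = m_jupiter - m_sun_10pc
contrast = 10 ** (0.4 * delta_m)

print(f"dm = {delta_m:.2f} mag, star/planet flux ratio ~ {contrast:.1e}")
```

The ratio is of order a billion, which is why a diffraction reduction "exceeding 1000" is a necessary but not sufficient step: the remaining suppression must come from the smoothness of the optics and from the planet sitting outside the stellar diffraction core.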
The coronagraph was first employed by Lyot in 1934 to observe the solar
corona. At the first focus the source is occulted and then the entrance pupil
is re-imaged. The occultation results in the remaining diffracted light in the
system concentrating in a ring approximately centered on the image of the edge
of the entrance pupil. The pupil or Lyot stop masks off part of this ring so
that when the light is re-imaged to a second focus the diffraction pattern is
reduced in intensity. The same principle can be applied to secondary
obscuration and secondary support structure, so our subsequent discussion will
concern itself with a simple circular unobscured pupil.
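The sequence just described (occult at first focus, re-image the pupil, mask the bright ring, re-image) maps directly onto a Fourier-optics simulation. The sketch below is a minimal one-dimensional Lyot coronagraph built from FFTs; the array sizes, occulter width, and 90% stop are illustrative choices, not the parameters of the CIT breadboard:

```python
import numpy as np

N = 4096
pupil = np.zeros(N)
pupil[N//2 - 128 : N//2 + 128] = 1.0       # 1-D unobscured aperture

# First focus: field of the on-axis star is the FFT of the pupil
focal = np.fft.fftshift(np.fft.fft(np.fft.fftshift(pupil)))

# Hard-edged occulting mask blocking the core of the stellar image
u = np.arange(N) - N // 2
occulter = (np.abs(u) > 40).astype(float)  # opaque over the central 81 samples

# Re-image the pupil; residual light piles up near the pupil edges
lyot_pupil = np.fft.fftshift(np.fft.ifft(np.fft.fftshift(focal * occulter)))
stop = np.zeros(N)
stop[N//2 - 115 : N//2 + 115] = 1.0        # ~90% Lyot stop trims the edges

# Second focus: intensity with and without the coronagraph
final = np.fft.fftshift(np.fft.fft(np.fft.fftshift(lyot_pupil * stop)))
raw = np.abs(focal) ** 2
cor = np.abs(final) ** 2

wing = slice(N//2 + 60, N//2 + 200)        # region a few diffraction radii out
print(f"wing reduction ~ {raw[wing].mean() / cor[wing].mean():.0f}x")
```

The hybrid design described in the text differs from this classical version in replacing the hard occulter with a partially transmitting, apodized focal-plane mask, which concentrates the diffracted light more tightly at the pupil edge and so lets the Lyot stop work harder.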
Figure 1a Figure 1b
Figures 1a and 1b show the distribution of light in the pupil of a Lyot type coronagraph and in the first
and second focal planes, respectively. Note that in Figure 1a diffracted light is concentrated in a ring
near the outer edge of the entrance pupil but is only reduced by about a factor of 10 in the center of
the pupil. A 90% Lyot mask effectively removes the outer bright ring. Figure 1b shows the original
star signal in the first focus (top curve) and in the second focus (bottom). Note that at the second
focus light is reduced by about a factor of 100 but the edge of the occulting mask produces a bright
ring.
Figure 2a Figure 2b
Figures 2a and b are similar to Figures 1a and b but are for a high efficiency coronagraph using a
90% Lyot mask. The diffracted light in the center of the pupil is now reduced by over three orders of
magnitude and masking can be more effective (Figure 2a). An 80% Lyot mask would reduce
diffracted light in the second focus by about a factor of 1700. Also, because of the transparency of
the occulting mask, the second focus (bottom of Figure 2b) shows a smooth distribution of light and
high efficiency all the way to the center. Photographs of these cases from our coronagraphic
breadboard are shown in Appendix B.
2. The system performs better at small field angles than large ones so
that the "effective" diffraction efficiency, that is, the diffraction
reduction scaled to the transmission of the mask is nearly constant up to
a few diffraction radii.
Thus, over most of the field, the hybrid coronagraph enhances the brightness
ratio of an off-axis source with respect to an on-axis source by a nearly
uniform factor even when transmission losses through the mask are taken into
account.
and results to date totally support the computer models. Our Gaussian masks are
made by direct playback through a microdensitometer onto Kodak Tech Pan film.
They have a central photographic density in excess of 5 and conform quite
closely to the required transmission profile. For the dimensions of our
laboratory system the full width between 50% transmission points of these masks
is 1 mm.
We have compared the occulted point spread functions, and resulting pupil
plane images for the hard and "soft" stop cases. We have verified to date that
the run of intensity for the 90% Lyot stop follows model predictions for both
the hard and soft edged stops. So far we have demonstrated diffraction
reductions of 300 to 400 and are limited by the quality of the relay optics used
in the laboratory set-up.
5. CONCLUSIONS
6. REFERENCES
6. Brown, R. A., "Direct Planet Detection: A Realistic Assessment of HST
and Other Spaceborne Optics," Bioastronomy - The Next Steps, ed. G. Marx.
DISCUSSION
Miller : Was the increased smoothness at the end due to the way in which the mirror was supported
and the pitch was grooved, or was there some other process that was used?
Terrile: I am not aware that anything particularly unusual was done during the polishing. Some gains
were made by stiffening the lap and controlling the mechanics of putting the mirror together. We are
now interested in trying to see if the same gains can be obtained with small tools - with computer
controlled polishing.
Burrows: If you scaled this to 16 meters, the diffraction curves would come down and the problem
would be easier.
Terrile : That's a good point. With a 16 meter mirror you would be looking for the Jovian planets
at many more rings out. So the Jupiter detection problem becomes easy because you would be at 50
- 100 rings out. But if you want to look at other problems like detecting earth-like planets, then you
are back to the same difficulties.
Illingworth: Have you thought about the practicality of doing these smooth surfaces with lightweight
mirrors where you would have to worry about print-through from the ribs and supports to the surface?
Terrile : Yes, you have to worry about that. With planet detection we are talking about using spatial
frequencies around 10 cm, just the scale of the waffle pattern on a lightweight mirror. So we are very
interested in the new lightweighting approaches, particularly those where the mirrors are lightweighted
after polishing. Unless we can be shown otherwise, we will take a conservative approach and opt for a
solid 2 meter mirror, with its weight disadvantage, for this problem of searching for Jovian planets.
Sensing and Control for Large Optical Systems
Alan B. Wissinger
Perkin Elmer Corporation
Abstract
The paper provides an overview of the technology developed at Perkin Elmer for large
optical system sensing and control. For future telescopes, sensing and control systems
will be required to measure wavefront quality and alignment on-orbit without access to
bright stars or to the primary mirror center-of-curvature. Examples of past technology
developments (HST's OCS and the Sample Point Interferometer patent), current technology
(laser radar and mirror sensing using holographic grating patches), and future technology
(phase retrieval from the image alone) are reviewed.
1. Introduction
I originally organized this talk along the lines of the historic development of large
optics alignment and wavefront sensing and control. However, in listening to other talks
at this workshop, I realized that I could have gone back farther in technical history and
made an interesting point.
A long time ago, Perkin-Elmer built a 36 inch balloon-borne telescope for Princeton
University: Stratoscope II. The Stratoscope II system incorporated a means for alignment
sensing and control. The sensing system consisted of a TV camera that produced a very
detailed and enlarged image of a star, at about f/200. The control system consisted of
Nick Woolf, sitting at the TV monitor in the Stratoscope control van. Nick observed
the sharpness and symmetry of the image, and made decisions about how to move the
secondary mirror of the telescope. He then commanded (via radio) the mechanism of
the secondary mirror mount in one or more of its three degrees of freedom: longitudinal
motion for focus and two lateral motions to reduce coma.
I mention this because we have come nearly full circle with our technology; the future
2. Requirements
Table 1 lists typical summary requirements for alignment sensing and control for a
large space-borne telescopic system such as the next generation Space Telescope. Since
most ground-based optical metrology is done with the sensor at the center of curvature
(a distance of twice the focal length of the mirror), it is not feasible to transfer these well-known techniques to space
applications. Furthermore, sensing techniques which make use of bright stars (as in the
Hubble Space Telescope) may not be applicable since the primary mirror of the next
generation Space Telescope will most likely require continuous sensing and control of figure
quality and segment alignment during long integration times on faint sources. In addition,
it is desirable that sensing and control not interfere in any way with the astronomy mission,
and, of course, it should be lightweight, efficient and reliable.
Table 1. Alignment Requirements
Measure:
- Figure quality of optics on orbit without access
to bright stars or center of curvature.
- Alignment of optical system
Desired features:
- Non interference with mission; continuous measurement
- Applicable to space environment (lightweight, efficient, reliable)
3. Historic Technology
The Optical Control Subsystem (OCS) of the Hubble Space Telescope (HST) will
produce an interferogram of the optical wavefront sensed at three places in the telescope
f/24 focal plane. With this information, the operators of the telescope will be able to
correct focus, alignment, and figure of the primary mirror.
The sensing portion of the OCS is a white-light, radial shearing interferometer. The
white light source is a bright star. Figure 1 shows a block diagram of the OCS (along
with the Fine Guidance Sensor, which is housed in the same radial bay module). The
OCS part of the diagram is enclosed by the outlining.
The main telescope is shown in the upper left part of the diagram, which includes the
five-degree-of-freedom secondary mirror mount and the primary mirror with its twenty-four
figure control actuators. In operation, the image of a selected bright star is positioned on
the OCS pickoff mirror (lower left). The light from the star is collimated, after passing
through the astigmatism corrector plates and the field lens. The collimated beam is split
by the beam splitter. One of the beams is magnified by several diameters while the other
beam is demagnified by a similar amount. One of the two beams is then path-length
modulated by a set of oscillating optical wedges. The two beams are then combined at
the beam combiner, and relayed to an image dissector camera.
The interferometer reference beam is the larger of the two beams. In effect, a small
portion of the primary mirror (the enlarged beam) becomes the reference for the entire
mirror. The camera and signal processing electronics record the temporal phase (relative
to the modulator) of the interference pattern and transmit that data to the ground.
Further computer processing produces information about the state of focus, alignment and
primary mirror figure. It is expected that this procedure will be followed weekly in the early
days of HST operation, and hopefully less frequently after favorable operating experience
is gained.
In the late seventies, a patent for the sample point interferometer was granted to
Perkin Elmer. The purpose of the invention is to provide interferometric sensing of the
position and figure of a mirror segment from a convenient location, i.e. behind either the
secondary mirror or the primary mirror.
Figure 2, from the patent, illustrates the principle of operation. A mirror or a segment
of a mirror (item 21, upper diagram, "Fig. 1") whose position and figure quality is to be
measured is fitted with a number of retroreflectors, each about the size of an eraser on a
lead pencil. These retroreflectors might be arranged as shown in the inset figure at the
right ("Fig. 2"). A laser and lens arrangement (items 10, 11, 12, 20 and 22) illuminates the
mirror and the embedded retroreflectors. The laser light reflected from the retroreflectors
is combined with the light from the reference beam of the interferometer (item 15). The
optical path length of the reference leg is modulated by moving the end mirror (17). The
combined beams are detected by a photodetector array (item 18), where lens 22 produces
an image of the mirror.
For each retroreflector (items 23a, 23b, and 23c), there is a detector. The electrical
signal output of these detectors is shown at the bottom ("Fig. 4"). If the optical path
lengths to the retroreflectors are not exactly equal, a relative phase shift will occur, as
indicated by item 25. It is this phase shift that is the desired measurement. Measuring phase
electronically is much more precise than measuring relative intensity, as in conventional
interferograms.
Since the wavelength of the laser light is accurately known, the measured phase shift
provides the mirror position actuator system with the signals needed to correct the
position (or figure) of the mirror. Of course, the system must be initialized and calibrated,
since it can only measure changes from the start position, and must make continuous
measurements and count waves to "remember" the initial position.
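The wave-counting bookkeeping described above amounts to simple arithmetic. As a sketch (the wavelength, fringe count, and phase values below are purely illustrative, not taken from the patent):

```python
import math

# Illustrative values; actual system parameters are not given in the text.
LASER_WAVELENGTH = 632.8e-9     # m, a HeNe line, assumed for illustration
fringe_count = 12               # whole fringes counted since initialization
fractional_phase = 0.37 * 2 * math.pi   # currently measured phase, radians

# Light travels to the retroreflector and back, so one full fringe (2*pi of
# phase) corresponds to lambda/2 of mirror motion along the line of sight.
displacement = (fringe_count + fractional_phase / (2 * math.pi)) * LASER_WAVELENGTH / 2
print(f"mirror displacement since initialization: {displacement * 1e6:.3f} um")
```

If the fringe counter ever loses count (a dropout in continuous measurement), the integer term is lost and the system must be re-initialized, which is exactly the limitation noted above.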
The center illustration ("Fig. 3") shows an embodiment where the laser and interfer-
ometer optics are conveniently located behind the primary mirror.
[U.S. Patent 4,022,532, May 1977.]
4. Current Technology
Holographic optical elements (HOEs) are computer generated, so they can be designed
for any arbitrary mirror surface and can form an image at any reasonably convenient
place. Their principal properties are:
- computer generated
- fabricated locally on aspheric surface
- diffraction-limited quality
- controllable diffraction efficiency
- extremely low scatter
Figure 3 shows an application of the grating patch HOEs in the sample point inter-
ferometer. They can also be used for alignment and focus monitors, since the image
produced by diffraction is of extremely high quality and is bright (by virtue of the laser
illumination).
This technology is applicable to large optical systems having a smooth optical sub-
strate, and permits exceedingly precise, unambiguous, and rapid measurements of the
position and figure of the front surface of the mirror.
[Figure 3: sample point interferometer with grating patch HOEs, showing the laser, beam expander, phase modulator, detector array, and secondary mirror.]
The Applied Science and Technology Directorate at Perkin Elmer has recently pro-
duced a unique and very simple laser radar capable of unambiguous absolute distance
measurements. The new laser radar incorporates a standard laser diode and an electron-
ics box that modulates the diode current and decodes the laser return. The principle of
operation is based on the fact that the backscatter from a target will form a front cavity
for the laser. Generally, the front cavity effect is unwanted, but in this application, the
front cavity is exploited to yield the desired distance information. Patents are pending at
Perkin Elmer, and journal papers are available describing the application (see DeGroot
et al., Applied Optics, Vol. 27, No. 21, Nov. 1, 1988; and Optics Letters, Vol. 14, No. 3,
Feb. 1, 1989).
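The exact decoding scheme is described in the cited papers; as a hedged sketch, any swept-frequency (FMCW-style) laser ranger obeys the same basic relation, in which the round-trip delay converts the frequency sweep rate into a measurable beat frequency. All numbers below are illustrative, not taken from the Perkin Elmer instrument:

```python
# Generic swept-frequency range decoding; parameters are illustrative only.
C = 3.0e8                 # m/s, speed of light
freq_excursion = 100e9    # Hz, optical frequency sweep driven by the current ramp
sweep_time = 1.0e-3       # s, duration of one ramp
beat_freq = 66.7e3        # Hz, measured modulation of the laser output power

# The round-trip delay 2L/c maps the sweep rate df/dt into a beat frequency:
# f_beat = (2 L / c) * (df/dt), so L = c * f_beat / (2 * df/dt).
distance = C * beat_freq / (2 * freq_excursion / sweep_time)
print(f"target distance: {distance:.3f} m")
```

Because the beat frequency is proportional to the absolute delay, no fringe counting or initialization is needed, which is what makes the measurement unambiguous.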
Table 3 lists a number of applications for the laser radar. Of most interest to this
workshop is the "Optical Feedback Metrology Tool" item. The sub-micron metrology
is summarized in Table 4.
Table 4 - Laser Radar for PSR and LDR - Front Plane Sensing
1. Lock-up approach
- Concept: use phase sensitivity of backscatter modulation for
monitoring position.
- Accuracy: 0.1 μm to 1 μm in space.
2. Absolute ranging
This work is being done by Ed Siebert of P-E, and after all of his computer development
is completed, I am sure you'll be able to find future journal articles by Ed.
Table 6. SWEBIR
Spatial wavefront estimation by intensity relationships
Computation of pupil function (figure and phase) from optical transfer function (OTF)
SWEBIR is a very exciting development for large space telescopes since it promises
a minimal amount of sensing hardware in the telescope: a single CCD (and a
computer). The computing requirements are not extraordinary.
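The SWEBIR algorithm itself is not described here, but the flavor of recovering pupil phase from intensity data alone can be sketched with the classical Gerchberg-Saxton iteration, one of the standard phase retrieval algorithms. The grid size, aperture, aberration, and iteration count below are all assumed for illustration:

```python
import numpy as np

# Toy setup: all parameters (grid, aperture, aberration) are assumed.
rng = np.random.default_rng(0)
n = 64
y, x = np.mgrid[:n, :n] - n // 2
aperture = (np.hypot(x, y) < 20).astype(float)

# "Unknown" smooth phase aberration on the pupil.
true_phase = 0.3 * np.exp(-(x**2 + y**2) / 200.0)
pupil = aperture * np.exp(1j * true_phase)

# Measured quantity: focal-plane intensity only (the field phase is lost).
psf_amp = np.abs(np.fft.fft2(pupil))

# Gerchberg-Saxton: alternate between the measured focal-plane modulus
# and the known pupil support with unit amplitude.
estimate = aperture * np.exp(1j * rng.uniform(-0.1, 0.1, (n, n)))
for _ in range(200):
    field = np.fft.fft2(estimate)
    field = psf_amp * np.exp(1j * np.angle(field))          # focal-plane constraint
    estimate = np.fft.ifft2(field)
    estimate = aperture * np.exp(1j * np.angle(estimate))   # pupil constraint

# Residual mismatch between the recovered and measured PSF modulus.
err = np.linalg.norm(np.abs(np.fft.fft2(estimate)) - psf_amp) / np.linalg.norm(psf_amp)
```

With the smooth, low-amplitude aberration assumed here the iteration converges readily; real pupil recovery from a single PSF is harder and typically needs extra diversity (for example a defocused second image) to break sign ambiguities.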
6. Conclusion
A variety of technologies are available for the direct sensing and control of the large
telescopes of the future. These range from direct observation of a star image to interfer-
ometric measurements of the wavefront or, alternatively, sampled optical path length, to
laser radar.
DISCUSSION
Wissinger: Yes. The spatial frequency bandwidth is governed by the size and density of the detector
array used to measure the PSF intensity.
Diner : I don't understand how you retrieve an OTF from the intensity image because you've lost
phase information.
Wissinger: This is the purpose of phase retrieval. A comparison of algorithms is given by J. Fienup,
Applied Optics 21, 2758 (1982).
Wissinger: With a modest computer, the output should be expected in several seconds.
Vyee: Is the SWEBIR phase retrieval technique based on the 70s Gonsalves' work, more recently
updated by Jim Fienup of ERIM?
Requirements for Diffraction Limited Optics
Christopher J. Burrows†
Space Telescope Science Institute
Abstract
We show that a ten meter 'diffraction limited' telescope can conduct a complete survey for a Jupiter-
like planet in a similar orbit around any of 20 or so stars closer than 5 parsecs. It will see at least one
planet in every close planetary system similar to ours. In addition to the scientific interest, such a survey
has great popular appeal, and should figure prominently in the justification for funding a Next Generation
Space Telescope (NGST) mission. If the telescope is capable of this very demanding mission, it will be
suitable for a wide variety of other scientific investigations.
Certain design constraints must be met for the survey. In particular, we show that 'diffraction
limited' means that both low and high spatial frequency errors on the wavefront are small in a definite
sense. Low frequency errors lead to a loss in resolution. High frequency errors lead to a loss in contrast.
A 10m telescope that is not diffraction limited in both respects will not be able to conduct a complete
survey. If the resolution is too low, the planet's image is smeared over too much stellar background
signal; if the contrast is too low, light scattered from the star swamps the planet.
Two areas of critical technology emerge from these considerations. First, for high contrast imaging,
fabrication and polishing techniques must be developed to yield smoother mirrors than those on HST.
Then the wings of a stellar image will be dominated by the intrinsic diffraction profile, which can be
largely removed by apodization. Second, for high resolution imaging, the conflicting requirements for a
large, lightweight, and fast primary mirror with an accurate figure mean that active optics will be needed.
We show that it will be hard to control the mirror surface with field stars. Either local sensing of mirror
displacements will be needed, or the mirror must be intrinsically stable to a fraction of a wavelength over
the full 10m aperture for a typical time of many seconds. This implies critical developments are necessary
in control and structure interactions.
1. Introduction
In this paper we derive technical constraints for a 10 meter space telescope given the
specific scientific goal of conducting a meaningful survey for extrasolar planets. We show
in Section 2 that such a telescope, if diffraction limited can achieve the goal. In Section
3 we indicate how performance changes as the telescope degrades from the ideal, and we
indicate technological problems that will need to be addressed to ensure that the required
image quality is achieved. The detailed discussion of the planet detection problem for the
Hubble Space Telescope (HST) given in Brown and Burrows (1989, hereafter referred to
as BB) is extended here to the proposed 10m aperture, and to include the effects of noise.
The result for HST was unfavorable because, even with assumed perfect apodization,
microroughness scatter degrades image contrast. We show that a 10m aperture with a
mirror that is about 3 times smoother than HST on the other hand would detect a Jupiter
over a wide range of orbits and for a large number of stars.
If an infrared wavelength band is used where the planet is emitting radiation, the
brightness ratio goes up relative to the visible, but unfortunately the contrast will go
down by almost the same amount. Less obviously, as pointed out by Shao at this meeting,
most nearby stars are much less luminous than the sun, so the equilibrium temperature
of a planet at a given separation is lower, in the absence of internal heat sources. This
seems to imply that a survey would require a larger aperture in the infrared, in addition
to cooled optics. Coronagraphic techniques do not work well for a 10m telescope in the
infrared (10 μm) at the separations (< 0.2″) where such a planet would be hot enough to
be detectable around all but a handful of stars, although other interferometric techniques
may be feasible.
If the telescope is designed to make the direct visible planet search possible, it will
be ideal for a number of other scientific apphcations where high resolution and contrast
are needed, such as the imaging of circumstellar or protostellar disks, or the environs of
active galactic nuclei. For example, the possible protoplanetary disk around the mV = 4
star β Pictoris has a surface brightness of mV = 16/arcsecond² at a separation of 6″,
about one magnitude fainter than the seeing profile from the star. The 10m NGST
with a primary mirror that meets our specifications would have a scattered light level
of mV = 18/arcsecond². Thus significant improvements over ground based observations
would be expected. HST has a much higher predicted scattered light level, and hence
may not offer any improvement over ground based measurements for a resolved extended
source in a scattered light halo such as the β Pictoris disk.
Planet detection is limited by systematic errors which depend on the contrast achieved,
and by photon noise. We examine each in turn.
On a diffraction limited telescope of aperture radius R, a stellar image has a peak
intensity proportional to πR²/λ², and an average asymptotic wing intensity proportional
to λ/(2π³Rθ³), where θ is the angular distance from the star, λ is the wavelength, and
small corrections for the central obscuration ratio ε have been ignored for simplicity. The
ratio of these surface brightnesses, λ³/(2π⁴R³θ³), is 17.8 magnitudes at one arcsecond for
a 10m telescope in the visible (600nm). On the other hand, the brightness ratio at visible
wavelengths of the sun and Jupiter at maximum elongation is about 22.3 magnitudes†
(10⁻⁹). Given a star at 5 parsecs, a planet of Jupiter's size in a 5 AU orbit would
be separated from it by one arcsecond. So the planet would be about 22.3 − 17.8 =
4.5 magnitudes (60×) fainter than the background from the star, when at maximum
elongation.
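The quoted numbers can be checked directly from the expressions above:

```python
import math

# Wing-to-peak surface brightness ratio lambda^3 / (2 pi^4 R^3 theta^3),
# for a 10 m telescope at 600 nm and a separation of 1 arcsecond.
lam = 600e-9                           # m
R = 5.0                                # m, radius of a 10 m aperture
theta = math.radians(1.0 / 3600.0)     # 1 arcsecond in radians

ratio = lam**3 / (2 * math.pi**4 * R**3 * theta**3)
wing_mag = -2.5 * math.log10(ratio)    # wing brightness below the peak, ~17.8 mag

# Sun/Jupiter brightness ratio at maximum elongation, from the text.
jupiter_mag = 22.3

# How far the planet sits below the local stellar background.
deficit_mag = jupiter_mag - wing_mag           # ~4.5 mag
deficit_factor = 10 ** (deficit_mag / 2.5)     # ~60x fainter
```

This reproduces the 17.8 magnitude wing level and the roughly 60× planet-to-background deficit quoted in the text.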
Observing a source with a relative brightness of 1/60 on a spatially variable but
measurable background is demanding but possible. Specifically, a reference image can be
obtained by rotating the spacecraft around the line of sight to the star and taking a second
otherwise equivalent control exposure. Both images need to be flat fielded, and have any
† From the observed geometric albedo of 0.44, a phase function Φ(90°) = 1/π corresponding to a
Lambertian sphere, and the observed Jovian radius and orbit.
distortions removed, and the control image would need to be derotated before subtracting.
The stellar background should then almost exactly cancel. Sources of systematic error
(such as pointing and flat field errors) would need to be carefully controlled. BB (1989)
argue that for HST, such systematic effects limit the relative brightness to at best about
1/100.
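A minimal numerical sketch of this roll-and-subtract scheme (the halo model, planet brightness, and 90° roll angle are arbitrary choices for illustration): the circularly symmetric stellar halo cancels in the difference image, leaving a positive planet signal and a negative counterpart at the rotated position.

```python
import numpy as np

n = 65
y, x = np.mgrid[:n, :n] - n // 2
r = np.hypot(x, y)

# Circularly symmetric stellar halo (arbitrary units), plus a faint planet.
halo = 1.0e4 / (1.0 + r**3)
image = halo.copy()
image[42, 32] += 500.0      # planet 10 pixels from the star

# Second exposure taken after a 90-degree spacecraft roll, then derotated:
# the halo is unchanged, but the planet lands at a different position.
control = np.rot90(image)

diff = image - control      # halo cancels; a +/- planet pair remains
```

The negative counterpart in `diff` is exactly the confirmation signal at the planet's rotated position that the text describes below.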
Photon noise, on the other hand, is typically not a serious problem for the large
aperture, provided a wide bandpass is used (justifying our choice of the average wing
intensity above). In order to detect the planet, one can design an optimal linear filter,
because the planet is unresolved so the expected signal is known, and the noise will
have Poisson statistics. Within 5 parsec, Allen lists 50 stars. Of these, 23 are members
of known multiple star systems and can probably be regarded as lower priority for a
planetary search. A further 5 stars are fainter than 12th magnitude. This leaves 22
candidates, with magnitudes approximately evenly distributed between 6th and 12th. In
only one hour, Jupiter 5 AU from an mV = 6 candidate contributes about 2 × 10⁴ counts,
corresponding to a signal to noise ratio of S/N = 13 (assuming an overall efficiency of
0.5, a bandpass of Δλ/λ = 0.5, and including noise from the reference exposure). If the
candidate star has mV = 9 we would still have S/N = 3 in one hour, and even the faintest
candidates can be surveyed to this level in less than a day.†
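A rough photon-budget check of the quoted counts and S/N. The V-band zero point (~1000 photons s⁻¹ cm⁻² Å⁻¹ at m = 0) and the 60× stellar background from the previous section are assumptions of this sketch, not values stated here:

```python
import math

F0 = 1000.0                  # photons/s/cm^2/Angstrom at m = 0 (assumed zero point)
m_planet = 6.0 + 22.3        # Jupiter around a 6th magnitude star
area = math.pi * 500.0**2    # cm^2, 10 m aperture
bandpass = 3000.0            # Angstrom (delta lambda / lambda = 0.5 at 600 nm)
efficiency = 0.5
t = 3600.0                   # s, one hour

planet_counts = F0 * 10 ** (-m_planet / 2.5) * area * bandpass * efficiency * t

# Stellar wing ~60x brighter than the planet at 1"; the reference exposure
# doubles the noise variance.
background = 60.0 * planet_counts
snr = planet_counts / math.sqrt(2.0 * (planet_counts + background))
```

The result is close to the quoted 2 × 10⁴ counts and S/N ≈ 13, so the numbers in the text are internally consistent under these assumptions.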
A 3σ detection can be confirmed by examining the planet's rotated position: there
should be a corresponding negative signal there. It seems desirable to do the detection
in each channel separately to give a 'coincidence check', and hence eliminate problems
caused by cosmic rays or bad pixels, for example. A more detailed analysis of the detection
possibilities, including the effects of micro-roughness and apodization, is given after their
effects are discussed in the next section.
3. Image quality
A telescope can be regarded as diffraction limited when the images that it gives are not
sensibly different from those obtained from a hypothetical perfect telescope of the same
aperture size, and overall efficiency. In the previous section, the performance of such
a hypothetical telescope was shown to be adequate for the planetary search problem,
but any significant drop in performance relative to this standard would make detection
impossible.
There are several different ways of failing to meet the 'diffraction limit': the aper-
ture configuration can be suboptimum, the mirror can be contaminated with particulates
(dust), the images can be distorted by aberrations, and the mirror surface can scatter
light. All of these problems should be dealt with by careful attention in the design phase.
Their detailed effects are examined and compared to HST in the following four subsec-
tions. HST is used as a reference because it is the best we have without NGST, we
can learn from the experience in constructing it, and it was pushing the limits of our
technology when constructed.
The effect of small structures in the entrance pupil is to diffractively scatter light to
large angles. In the case of HST, the secondary mirror spiders will produce diffraction
† For reference, under the above conditions, the telescope's limiting magnitude is 34.5 in 1 hour for
S/N = 3 with a detector read noise less than 3 e⁻ rms, and a background of mV = 24/arcsecond².
peaks in two orthogonal directions, but otherwise do not disturb the image significantly.
Clearly, a suitable spacecraft roll suffices to examine any region covered by the spikes.
Although unobscured designs are possible, a 10m telescope may also require secondary
mirror supports in the beam. If so, they should be made as narrow as possible. Thinner
spider fins scatter proportionally less power over a proportionally wider angle, while the
width of the spikes remains of the order of the Airy disk, so the intensity decreases as
the square of the width. In HST, further pupil obscurations are caused by the three
primary mirror support pads. The effect of these is twofold: they raise the average
intensity within the first arcsecond of the image, and more seriously, largely destroy the
circular symmetry of the Airy rings. It is important to avoid any such pupil obscurations
if at all possible. If a segmented approach is used in the primary mirror design, the
effects of diffraction at segment edges would need to be assessed. Such effects can be
minimized by making segment boundaries parallel to the unavoidable secondary mirror
support structures (which need not be orthogonal), and by making the segments as large
as possible.
Finally, a central secondary obscuration will cause the monochromatic Airy ring in-
tensities to rise and fall relative to the inverse cube asymptote. The period of these
fluctuations in Airy rings is the ratio of the outer to inner pupil radii (1/ε). For HST,
the central obscuration ratio is ε = 0.33, and this will give the images of bright sources a
characteristic appearance, with a repeating pattern of a relatively faint ring followed by
two brighter rings. The NGST will probably have a much faster primary, hence a smaller
central obscuration. The rings will therefore fall off more uniformly in intensity. These
considerations are only relevant if the bandpass Δλ/λ is less than 1/N for Airy ring num-
ber N, and do not apply to a broadband planet survey. An alternative strategy is to look
in a narrow bandpass near the minima of the Airy pattern. This improves the contrast,
but cuts the signal to noise and search efficiency significantly. Such an approach should
be used for follow-up observations only near the brightest stars, and we do not consider
it further.
In conclusion, the aperture configuration will not prevent the ideal performance nec-
essary for planet detection, provided that care is taken in the design of the primary mirror
mounts so that they are not in the beam.
3.3 Mirror figure errors
One effect of wavefront errors on the image is to lower the central peak intensity.
The central intensity drops by the Strehl factor (S = exp(−4π²σ²/λ²)) for a small root
mean square (rms) wavefront error σ, or if the wavefront error is large but occurs at high
frequencies in the pupil, and has Gaussian statistics. If a 20% drop in peak intensity
is regarded as acceptable, then the equation can be solved to show that σ must be less
than λ/13. For larger wavefront errors, the peak intensity falls rapidly, and the desired
survey would be impossible. The important point is that alignment and microroughness
errors will also contribute to this total, so the primary mirror surface figure will need
to be maintained to about λ/40 rms†, over the entire aperture. This is a demanding
specification and corresponds approximately to the performance of the HST primary. It
is an order of magnitude more stringent than a typical groundbased telescope mirror
specification. Furthermore, it seems clear that the primary will need to be perhaps as fast
as Fp = 1.2 and lightweight for packaging reasons.
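The λ/13 figure follows directly from inverting the Strehl expression at S = 0.8:

```python
import math

# S = exp(-4 pi^2 sigma^2 / lambda^2); solve for sigma when S = 0.8.
S = 0.8
sigma_over_lambda = math.sqrt(-math.log(S)) / (2 * math.pi)
wave_fraction = 1.0 / sigma_over_lambda   # sigma = lambda / wave_fraction
```

This gives σ ≈ λ/13.3, in agreement with the value quoted above; the tighter λ/40 surface figure budget then leaves room for the alignment and microroughness terms that also contribute to the total.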
Nobody has proposed a method of building such a monolithic 10m mirror that will
passively retain its shape to this accuracy. Existing proposals generally involve either
segmented or actively controlled primaries. For a segmented design, the wavefront error
is proportional to the segment's radial displacement and to the square of the segment
radius, and inversely proportional to the square of the full aperture baseline and to the
cube of the F/ratio of the primary (Fp). A detailed calculation shows that for λ/20 wave
rms at 500 nm and Fp = 1, any radial displacement must be less than 7.5 μm for one
meter segments. If this degree of freedom is to be passively controlled (as in the Keck
telescope), it is clear that the segments cannot be much larger.
A wavefront tilt can be estimated to an accuracy of λ/n if at least n² photons are
detected‡. Furthermore, the correctable field size of the telescope is limited to about a
radius of 2√(λFp/D), where the astigmatism of a Ritchey-Chretien Cassegrain becomes one
wave. The field must be fully corrected or calibrated in order to be useable for wavefront
measurements, so this seems a realistic bound on the achievable field.
For the 10m Fp = 1 system we need a guide star density of 400 per square degree in
order to get one in the field on average. Over much of the sky (away from the galactic
plane) this means that guide stars will be 18th magnitude or fainter. It takes 1 second to
collect 400 counts from an 18th magnitude star with a 1m aperture (assuming an efficiency
of 0.5 and a bandpass of 200nm). On the other hand, over the full aperture one collects
enough light for overall pointing in about 10ms.
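The one-second figure can be checked with an assumed V-band zero point of ~1000 photons s⁻¹ cm⁻² Å⁻¹ at m = 0 (that zero point is our assumption, not stated in the text):

```python
import math

# Photon rate from an 18th magnitude guide star on a 1 m subaperture.
F0 = 1000.0                  # photons/s/cm^2/Angstrom at m = 0 (assumed)
m = 18.0
area = math.pi * 50.0**2     # cm^2, 1 m aperture
bandpass = 2000.0            # Angstrom (200 nm)
efficiency = 0.5

rate = F0 * 10 ** (-m / 2.5) * area * bandpass * efficiency   # photons/s
time_for_400 = 400.0 / rate                                   # ~1 s, as quoted
```

The 400-count target corresponds to n² photons for a λ/20 tilt estimate, tying this number back to the λ/n accuracy rule above.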
In conclusion, if the mirror surface is to be actively controlled in tilt, and field stars
are to provide the position feedback, any disturbances must be less than 1/40 wave over
timescales of several seconds over the full 10m aperture. The effects of any gravitational,
ram pressure, thermal, and magnetic perturbations would all need to be shown to be
below these levels over such timescales. Clearly, such a control system could not take out
any structural normal modes (except perhaps loosely coupled components such as solar
panels). Further studies should reveal if such a quiet design is possible, but it seems likely
that such considerations will impose extremely tight constraints on the design, and we
may well be forced to internal laser interferometry or direct metrology systems (such as
capacitive sensors).
dP/dΩ = 8λ/(π³(1−ε)Dθ³) + 4π²G(2πθ/λ)/λ⁴ + c/λ²

where G(K) is the two dimensional mirror surface power spectral density at wave-
number K. The first and third terms correspond to aperture diffraction and dust scat-
tering. They have been shown to be acceptably small for the survey. BB (1989) con-
sider all the available metrology for the HST primary. They deduced that a power law
G(K) = a/K^β fits† the data with a_HST = 27100 and β_HST = 2.2 for G in μm⁴ and K
in cm⁻¹. The first two terms are equal for any telescope with the same power spectrum
slope when

(θ/1″)^0.8 (λ/1μm)^−2.8 (D/1m)(1 − ε) = 20(a/a_HST)

Hence, at 600nm, the microroughness scattering equals the diffraction scattering for
HST 3.″6 away from a point source. At greater angles, the wing intensity will fall approx-
imately as the inverse square of the angle, rather than the inverse cube expected from
the diffraction limit.
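Solving the equality condition above for θ with the HST values reproduces the quoted crossover angle to within rounding of the fit coefficients:

```python
# (theta/1")^0.8 (lambda/1um)^-2.8 (D/1m)(1-eps) = 20 (a/a_HST), for HST at 600 nm.
lam_um = 0.6
D_m = 2.4
eps = 0.33
a_ratio = 1.0    # the HST mirror itself

theta_arcsec = (20.0 * a_ratio / (lam_um ** -2.8 * D_m * (1.0 - eps))) ** (1.0 / 0.8)
```

This yields θ ≈ 3.9″, consistent with the quoted 3.″6 given the uncertainty in the power-law fit.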
Sayles and Thomas (1978a, b) have shown that a power law description works for a
large variety of surfaces, and that the index is often close to 2. King (1971) has shown
that the observed profile from groundbased telescopes falls as the inverse square at angles

coronagraph that gives essentially perfect suppression of the Airy profile, with very small
loss in throughput, at least beyond about 0.″1. This technique does not work for light
scattered by microroughness, so that one will always ultimately be limited by the quality
of the mirror surface.
† The fit is uncertain, particularly because the bandpasses of the metrology considered in deriving it
If the mirror polish power can be improved by about a factor of 2 (a factor of 1.4
in amplitude) on the relevant scales (1-10 cm) over that predicted for HST, then the
planet search is feasible. An order of magnitude improvement would be desirable because
both contrast and signal to noise would improve correspondingly, leaving more margin for
error, and a smaller planet would be detectable. Such a specification is reasonable. There
are indications that the HST primary is not as good as some ground based telescopes in
this respect. The specifications on the secondary polish can be about 1/ε less stringent,
because the beam diameter is smaller on the secondary.
Microroughness imposes a practical limit on contrast. A goal should be to fabricate
a mirror that has a surface that is 3 times smoother in amplitude than the HST primary.
The telescope would then be 'diffraction limited' out to 8 arcseconds from a bright source.
A planet in a wider orbit reflects less light; at 15″ (75 AU at 5 pc) it is 5.9 magnitudes fainter
than at 1″, so a Jupiter-like planet at 15″ from a 6th magnitude star at 5pc would have a
brightness of 6 + 5.9 + 22.3 = 34.2. A detection should be possible throughout the range
1″ to 15″, and is limited at the low end by low contrast and systematic errors, and at
the high end by poor signal to noise. On the other hand, in the region where the image
is microroughness limited, the ratio of the planet to stellar surface brightness will be
constant, until the sky brightness becomes the limiting factor. In this case, one is limited
by signal to noise for short exposures (one hour), but by contrast for longer exposures (10
hours). The microroughness limit will be reached if a suitable coronagraph is used.
All of these considerations are illustrated in Figure 1, which shows the available search
space both with and without microroughness scattering. The area indicated as feasible
applies to the detection of a Jupiter-like planet in orbit around a star at 5 parsec. There-
fore, angular separations in the Figure correspond numerically to the size of the orbit
relative to Jupiter's. The grey area defining the limits of detectability corresponds to a
signal to noise in the range 2 to 4 and a contrast in the range 1/64 to 1/128. For the
bottom panels, all the Airy profile contribution is assumed removed by apodization, and
the microroughness scatter is taken to be a factor of 2 better than that predicted for HST
(although we recommended a factor of 9 in Section 3).
[Figure 1: panels "Diffraction limit - 1 hour" and "Diffraction limit - 10 hours", plotting apparent magnitude against angular separation.]
Figure 1: Feasible search space in white, as a function of the star's apparent magnitude, and
the angular separation from it of a Jupiter-like planet. The panels correspond to 1 and 10
hours of integration, when limited by diffraction or microroughness. The dotted contours
map the signal to noise ratio, while the solid contours correspond to contrast.
5. Conclusions
(6 years for Jupiter around one solar mass). If a coronagraphic camera is used the planet
would be detectable throughout most of the orbit.
Conversely, a significant decrease in the peak intensity, or increase in the wing intensity,
would make this experiment impossible with a 10m aperture. In the microroughness
limit, increasing the aperture size improves the contrast ratio as the square. Although
the experiment is feasible with a 10m mirror that is a factor of 2 smoother than the
HST primary, an order of magnitude improvement would vastly improve the results, and
expand the search space. A significantly smaller aperture, even if made proportionally
smoother, would suffer from severe signal to noise problems for the survey considered here.
Planet searches are not the only scientific driver for the development of the NGST,
but the goal has great popular appeal, and should figure prominently in the justification
for funding such a mission. Furthermore, as we have shown, mission requirements flow
naturally from such a goal, in areas as diverse as aperture configuration, mirror quality,
pointing stability, detector performance, and mission lifetime.
If these ambitious goals can be met, other scientific projects requiring high contrast
and resolution, and accurate reproducible pointing, would of course also benefit. The
NGST will provide us with higher resolution and sensitivity. With careful design it will
also provide higher contrast. Such image quality is vital for the application that may be
a primary driver for the development, and will also serve well in other investigations.
Acknowledgements
It is a pleasure to thank Robert Brown and Pierre Bely for many useful discussions,
and Jan Koornneef for a careful reading of the manuscript.
References
DISCUSSION
Angel: I have a comment relating to on-board metrology using guide stars. The situation is easy
compared to doing adaptive optics on the ground. You can take stars at much larger angles if you
use something like a Hartmann test, for example. You know what the aberrations are going to be
at large fields. So once you have made a calibration, you can either do it by calculation or explicitly
with the telescope. It seems to me that fields up to several arcminutes are quite feasible, because the
aberrations will still be such that a Hartmann test will work. You can use brighter stars from a wider
field because wavefront distortion of the guide star can be quite large.
Burrows: My main object in identifying the guide star problem is to bring it to people's attention. The
field can no doubt be enlarged somewhat with corrective optics. However, the uncorrected aberrations
increase as the square of the field angle, and I have already assumed that they can be calibrated or
corrected to something like A/20 over a field where they are as large as one wave. It is an order-of-
magnitude calculation. The problem that I have is that even if you make it a 5 arcminutes or 10
arcminutes field you still need an extremely quiet structure over long periods. Furthermore, for many
active optics concepts, people talk about correcting over diameters that are much smaller than a meter.
I agree that one can push any of the individual assumptions but we will need to push several of them
at once. It seems to me that it will be extremely challenging to do that. So perhaps we will need
instead to put effort into developing on-board metrology that will make the structure rigid internally
without external references.
Breckinridge: Is there an observing program to take place during the flight of the Space Telescope
which will measure the performance and the performance degradation of large mirrors in space for the
Burrows : Yes. One of the Observatory level tests to be conducted during the Science Verification
period is specifically designed to measure the wings of the point spread function out to about 12 arcsec.
These wings provide the background against which any planet must be detected. The mirrors are not
expected to experience degradation on orbit at length scales of centimeters, and so the performance in
this region is not expected to change. It will be monitored with the science instruments as the mission
progresses. It will be interesting to compare the results of this test with predictions made on the
basis of the pre-launch metrology. I know of no case of a mirror where there is both well-established
metrology and good measurements of the profile. So maybe we have to be careful when saying that we
understand the wing profile. HST may be the first case where both of these measurements are done.
Illingworth : The timescales and size scales (1 s and 1 m) that can be constrained with the signal from an 18th magnitude star do not seem to me to necessarily be a severe problem, but I would appreciate
some feedback from the speaker and the audience on this!
Burrows : The 1 m constraint seems reasonable from stiffness and weight considerations. The 1 second
constraint is much harder to justify, and I have identified it as an area for further work. Clearly, thermal, gravitational, magnetic and possibly atmospheric drag stresses become relevant on these timescales in
addition to mechanical normal modes that may be excited in the structure by the pointing system in
response to those perturbations.
Layman: The 1 second sample time calculated for λ/20 position measurement accuracy (18 mag star, 1 m² surface area) would permit no more than approximately 0.1 Hz control bandwidth.
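Layman's scaling can be sanity-checked with the classical servo rule of thumb that the usable closed-loop bandwidth is roughly one tenth of the sampling rate. A minimal sketch; the factor-of-ten margin is an assumption of this sketch, not a number from the talk:

```python
def control_bandwidth_hz(sample_time_s: float, margin: float = 10.0) -> float:
    """Rule of thumb: usable closed-loop bandwidth is roughly one tenth
    of the sampling rate (margin = 10 is an assumed design margin)."""
    return 1.0 / (sample_time_s * margin)

# 1 s wavefront samples, as in the discussion, give about 0.1 Hz bandwidth.
bw = control_bandwidth_hz(1.0)
print(f"approximate control bandwidth: {bw:.2f} Hz")
```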
Burrows : One would need to be concerned if the area to be controlled was less than a meter, such as
Unidentified : A comment on the dust coverage. For AXAF the end-of-life budget for coverage is 5 10"^. As you noted, dust only affects the Airy profile at large radii, but it still matters.
Burrows : Dust has been a serious concern for the HST project, and the measured 2% coverage is
expected to increase by 1% or so during launch. For a given dust distribution, larger apertures have
Angel: For a telescope on the moon one can dump the vibrational energy into the ground. The moon
also has the great advantage of lacking the exciting mechanisms (e.g., wind) that cause us so much
trouble on earth. For a spacecraft you need to avoid vibrational frequencies which are higher than
some critical frequency. So you have to think about what is going to excite it, and how quickly you
can dump it.
Stockman : I believe I can comment on Roger Angel's question about how vibrations are damped
in space. On HST, the structure is made extremely stiff so that all natural frequencies are above 10
Hz. However, the solar arrays have very low frequencies (about 0.2 Hz) and their effects and slow thermal effects are all removed by the pointing control system. The Pointing Control System and all other disturbances are controlled or damped to minimize the excitation of high frequency modes.
Precision Segmented Reflectors (PSR)
Richard Y. Lin
Jet Propulsion Laboratory
PSR PROGRAM
• OBJECTIVES
• DEVELOP ENABLING TECHNOLOGIES FOR LARGE LIGHTWEIGHT SEGMENTED
REFLECTOR SYSTEMS FOR SPACE APPLICATIONS
• VALIDATE TECHNOLOGIES BY MEANS OF A SYSTEM DEMONSTRATION
• APPROACH
• FOCUSED DEVELOPMENT
• SOME ADVANCED RESEARCH
• WILL SERVE AS A FUTURE TECHNOLOGY TEST BED
• 4 MAJOR ELEMENTS
TECHNICAL OBJECTIVES
• LIGHTWEIGHT 5 KG/METER SQ
• LOW COST
• SURFACE ACCURACY 3 MICRONS
• THERMAL STABILITY < 3 MICRONS OVER 100 C
• SIZE 1 METER
TWO-RING ERECTABLE PRECISION SEGMENTED REFLECTOR
(Tetrahedral Truss Design)
SUMMARY
Current PSR technology development Is intended for LDR-like
applications:
• Large segmented reflector systems
• Lightweight composite panels at 3 µm RMS
• Precision lightweight structure at 500 µm
• Active figure control at 1 µm RMS accuracy
• 5 µm RMS overall surface accuracy
DISCUSSION
Nelson : What limits the stability of the structure? You gave 100 microns as an estimate. Is it water
outgassing or thermal variability? What might you expect for a 10-16 meter space telescope structure
like this?
Lin : In the development of space structures, one of the key components is the joints. Currently
these are aluminum which has a large coefficient of thermal expansion compared to graphite epoxy.
So we are interested in pushing that joint design towards titanium, for example, which has a much
smaller coefficient of expansion, or even graphite epoxy. But the challenge is how does one design a
structure that is lightweight, but also at the same time does not have hysteresis, non-linearity or free
Unidentified : Have you done any testing of these structures at cryogenic temperatures?
Lin : First, we have done about a dozen tests on the panels, one being a low temperature test in a thermal vacuum chamber. It was done with a goal of calibrating the overall system, and so it wasn't really intended to get the low temperature performance of the panels. But we believe that at the low temperatures, about 200 K, the surface accuracy changes would be about 5-6 microns rms. This is from analysis. In addition, we are in the process of carrying out actual tests. We have, however, done a lot of low temperature testing in the materials area using coupons. This materials development is being done in parallel and one can do a lot of very useful testing using coupons.
ASCIE: An Integrated Experiment to Study Control Structure Interaction in Large Segmented Optical Systems
The use of a segmented primary mirror is one of the major design concepts for the new generation of large ground and space-based telescopes. The W. M. Keck Ten-Meter Telescope (TMT), or NASA's planned Large Deployable Reflector (LDR) are typical examples of this approach.
In a segmented reflector the mechanical rigidity and geometric accuracy are supplied
solely by the support structure. Imperfections in the manufacturing process, deformations
due to gravity loads, thermal gradients, slewing and tracking dynamics, and structural
vibrations make it imperative that the positions of the segments be actively controlled.
For example, the TMT segment alignment system requires 162 sensors, 108 actuators,
and a special control system to align its 36 segments.
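The actuator count quoted above follows directly from three controlled degrees of freedom (piston plus two tilts) per segment, and the overdetermined edge-sensor readings are typically reduced to segment positions by a least-squares solve. A minimal sketch, using a random placeholder in place of the real edge-sensor sensitivity matrix:

```python
import numpy as np

n_segments = 36
dof_per_segment = 3                    # piston + two tilts per segment
n_actuators = n_segments * dof_per_segment
print(n_actuators)                     # 108, matching the TMT figure

# With more edge sensors (162) than degrees of freedom (108), segment
# positions p are recovered from sensor readings s = A @ p by least
# squares. A here is a random stand-in for the real sensitivity matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((162, n_actuators))
p_true = rng.standard_normal(n_actuators)
s = A @ p_true
p_est, *_ = np.linalg.lstsq(A, s, rcond=None)
print(np.allclose(p_est, p_true))      # exact recovery in the noise-free case
```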
An important characteristic of such systems is that the supporting truss is very light
(even for ground-based telescopes like the TMT), thus very flexible, with usually low
natural damping. As a result, interactions between the segment alignment control system
and the structural dynamics are expected to occur.
The interaction between the control system actuators and sensors with the dynamics of the supporting structure seriously limits the performance of the system. A recent analytical study done by Lockheed [refs 1, 2 and 3] of the TMT that modelled the full structure, actuator and sensor set, and control system operation showed that control system stability was seriously affected by dynamic coupling between the segments through
the support structure. Tests performed on a single segment and support cell conducted
at the Lawrence Berkeley Laboratory failed to predict this phenomenon because they did
not account for the effects of collective motion and coupling in the full system. While
analysis can be very effective in predicting major behavior, there are numerous practi-
cal problems that must be solved and tested with real hardware. Moreover, the design
and implementation of a multi-actuator, multi-sensor control system for large flexible
segmented reflectors (LFSR) has never been experimentally validated. There was thus a
need to develop a test bed that could support strong interdisciplinary studies to develop
and validate the emerging LFSR technology.
ASCIE Objectives
segmented telescopes. The ASCIE test bed has been designed to support a number of
interdisciplinary studies that address major technical challenges of LFSRs. One of the
immediate objectives of this project concerns the study of structures/controls interaction
in LFSRs. However the scope of ASCIE is of a more general nature. Topics such as
structural control (e.g. active damping, vibration suppression, disturbance alleviation)
or pointing and slewing techniques for LFSRs will also be addressed using the ASCIE
System.
The near-term goal for the ASCIE is to demonstrate in the laboratory a fully operating
TMT-like segment alignment control system with a level of performance comparable to
that required for a real telescope. This study will provide a means to investigate the CSI phenomenon in a real structure and compare it to analytical predictions. Longer term goals include substantial improvements in bandwidth and disturbance rejection through the use of advanced control techniques.
ASCIE Features
The ASCIE structure shown in Fig. 1 consists of a 2-meter, 7-segment actively con-
trolled primary mirror supported by a light, flexible truss structure. The optical system
emulates that of an f/1.25 Cassegrain telescope and utilizes an actively controlled sec-
ondary mirror. The six peripheral segments are controlled in three degrees of freedom
using specially developed precision actuators. Segment alignment is obtained through the
use of edge sensors whose signals are processed by the control system which then generates
the commands for the actuators. One of the unique features of the ASCIE is its optical
scoring and calibration system which eliminates the requirement that the segments have
real optical surfaces. Small optical flats combined with a special faceted secondary mirror
reflect laser beams onto an array of linear position-sensing photodetectors.
Figure 1. ASCIE structure, showing the optical measurement system for calibration and scoring, the 6 actively controlled segments, the edge sensors for active alignment, and the lightweight truss.
ASCIE structure dynamics properties
The ASCIE structure was designed to replicate the complex dynamic behavior that characterizes large segmented systems. Typical of such systems is the modal grouping due
to the high degree of symmetry of the structure. For a perfectly rigid support structure,
the segments and their supporting mechanisms (e.g. subcell and actuators) have almost
identical dynamic properties and thus can be viewed as N identical oscillators at the
same frequency. For ASCIE, 18 modes of vibration related to the segments will occur at
more or less the same frequency. However, because the support structure is in reality quite
flexible, coupling between the grouped oscillators produces two results. First, the resonant
frequencies tend to spread slightly by moving away from each other [Ref 7]. The second,
and more significant effect in terms of CSI, is the creation of global or collective modes
in which the segments as a whole behave as a continuous sheet rather than as individual
pieces. These modes effectively couple one part of the mirror to another, creating adverse
interactions that did not exist when considering individual segment dynamics. Fig. 2
shows a comparison between the modal frequency histograms of ASCIE and of the Keck
telescope. A great similarity can be observed. This behavior is quite different from that
of a beam-like structure as also shown on the figure. In addition the ASCIE structure was tuned to have its significant modes around 12-15 Hz to be relevant.
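The mode-grouping and frequency-spreading behavior described above can be reproduced with a toy model: N identical oscillators weakly coupled through a compliant support, modelled here as nearest-neighbour springs. The stiffness and coupling values below are illustrative assumptions, not ASCIE parameters:

```python
import numpy as np

# 18 identical segment oscillators (unit mass, uncoupled frequency 25 Hz)
# weakly coupled by springs of stiffness c << k, a crude stand-in for a
# flexible support structure.
N = 18
k = (2 * np.pi * 25.0) ** 2        # segment spring stiffness
c = 0.02 * k                       # assumed weak coupling stiffness

K = np.diag([k + 2 * c] * N)       # stiffness matrix of the coupled chain
for i in range(N - 1):
    K[i, i + 1] = K[i + 1, i] = -c

f = np.sqrt(np.linalg.eigvalsh(K)) / (2 * np.pi)
print(f"coupled modes span {f.min():.1f}-{f.max():.1f} Hz")
```

Even 2% coupling spreads the single 25 Hz line into a cluster of 18 closely spaced modes, qualitatively the behavior seen in the ASCIE and Keck histograms.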
Figure 2. Modal frequency histograms (frequency vs. mode number): a. ASCIE (including free actuator modes), b. Keck telescope, c. cantilever beam.
ASCIE Control system principle
The segment alignment control system is similar to that of the Keck telescope. It
utilizes a self-referenced system of edge sensors providing a set of error signals that are
processed through a special algorithm to obtain the piston and tilt errors for each individ-
ual segment. Corrections based upon these errors are applied, through proper electronic
compensation, to the actuators controlling position and tilt of each segment (Fig 3). In
such a centralized control system where the actuators are driven by signals from all the
sensors, structural dynamics can couple the actuators to all the sensors through global
modes of vibration, thus resulting in potential instability.
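A two-segment toy model illustrates the instability mechanism just described: integral control that is stable when each sensor sees only its own segment can go unstable once structural coupling feeds one segment's motion into the other's sensor. The gain and coupling fraction below are assumed for illustration only:

```python
import numpy as np

def closed_loop_stable(gain: float, coupling: float) -> bool:
    """Two-segment toy model: each actuator step moves its own segment,
    but each sensor also picks up a fraction `coupling` of the other
    segment's motion (a stand-in for a global structural mode).
    Discrete integral control u = -gain * y is stable iff all
    eigenvalues of (I - gain * C) lie inside the unit circle."""
    C = np.array([[1.0, coupling], [coupling, 1.0]])
    M = np.eye(2) - gain * C
    return bool(np.all(np.abs(np.linalg.eigvals(M)) < 1.0))

print(closed_loop_stable(1.5, 0.0))   # independent segments: stable
print(closed_loop_stable(1.5, 0.5))   # coupled through the structure: unstable
```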
Figure 3. Segment alignment control system principle (mirror segment with its sensors, actuators and electronic compensation).
Figure 5. Piston error time histories for the central segment, petal and twist components (time in seconds).
Figure 6a.
comparable to that of the Keck telescope requirements and thus represents a major step
in validating the technology for large flexible segmented optical systems.
Conclusions
This paper has presented a description of the ASCIE experimental setup, a generic test bed for several essential technologies. In particular, its multi-input, multi-output, non-collocated control system and its complex structural dynamics, characteristic of large segmented systems, make it an ideal test bed for CSI experiments. The high accuracy of its measurement system will make it possible to investigate the dynamics of microvibrations.
References
1. Aubrun, J-N., Lorell, K.R., Havas, T.W., and Henniger, W.C.: An Analysis of the Segment Alignment Control System for the W. M. Keck Observatory Ten Meter Telescope. Final Report. Keck Observatory Report Number 143, Dec 1985
2. Aubrun, J-N., Lorell, K.R., Mast, T.S., and Nelson, J.E.: Dynamic Analysis of the Actively Controlled Segmented Mirror of the W. M. Keck Ten Meter Telescope. IEEE Control Systems Magazine, Vol. 7, No. 6, Dec 1987
3. Aubrun, J-N., Lorell, K.R., Havas, T.W., and Henniger, W.C.: Performance Analysis of the Segment Alignment Control System for the Ten-Meter Telescope. AUTOMATICA, Vol. 24, No. 4, Jul 1988
4. Sridhar, B., Lorell, K.R., and Aubrun, J-N.: Design of a Precision Pointing Control System for the Space Infrared Telescope Facility. IEEE Control Systems Magazine, Vol. 6, No. 1, Feb 1986
5. Lorell, K.R. and Aubrun, J-N.: Active Image Stabilization for Shuttle-Based Payloads. Proceedings of the 1986 American Control Conference, June 18-20, Seattle, Washington
6. Parsons, E.K.: The Jitter Beam: an Experiment Demonstrating Pointing Control on a Flexible Structure. 1988 American Control Conference (Atlanta, Georgia), June 1988
7. Sridhar, B., Aubrun, J-N., and Lorell, K.R.: Analytical Approximation of the Dynamics of a Segmented Optical System. Proceedings of the International Federation of Automatic Control Symposium on Distributed Parameter Systems, June 30 - July 2, 1986, UCLA, Los Angeles, California
DISCUSSION
Nelson : What is the sensor noise with the actuators off, that is, the high frequency noise? Also, what
is the sensor noise when the actuators are turned on and running - that is, the sensor residuals?
Lorell : The baseline sensor performance has a floor around 17-20 nm, and that is inherent in the
sensors. When the loop is closed and the system is running with 6 panels, we get approximately 200 nm
of piston error and 800 nano-radians of tilt error.
Nelson : And that is measured by the optical system, and not the sensor system?
Lorell : That is the sensor system, since we have not yet calibrated the optical system.
Lorell : It was the lab floor. Next year we hope to apply specific loads.
Bely : Have you an explanation for the plateau on the frequency response curve?
Lorell : Yes, it is the collective motion. We saw this on the Keck models. These big segmented reflectors are like coupled spring-mass systems. Jerry Nelson has a computer animated movie of the vibration modes of the Keck telescope from 6 Hz up to about 50 Hz. There are modes where the motion of one panel ripples through the structure and causes the other panels to vibrate. For example, for the model of the Keck telescope that we did some time ago, the whiffletrees had a natural frequency of 25 Hz. What we saw were peaks at 23.9 Hz, 24.2 Hz, 25.0 Hz, 25.2 Hz and so on. The 25 Hz whiffletree frequency spreads out - the energy spectrum spreads out. The mirror acts, for example, like a membrane with very densely packed modes and so classical structural control becomes much more difficult.
Control Structures Interaction (CSI) Technology
W. E. Layman
CSI Task Manager
Jet Propulsion Laboratory
Control Structures Interaction (CSI) technology for control of space structures is being
developed cooperatively by JPL, LaRC and MSFC for NASA OAST/RM. The mid-'90s
goal of JPL's CSI program is to demonstrate with analysis, ground and flight tests, the su-
per quiet structures needed for large diffraction-limited instruments such as optical stellar
interferometers and large advanced successors to the Hubble Space Telescope. Micropre-
cision CSI technology is intended as a new "building block" for use by the designers
of large optical systems. The thrust of the microprecision CSI technology effort is to
achieve nanometer-levels of space structure stability/accuracy with designs which employ
otherwise conventional spacecraft technologies.
Without CSI, an ambitious (advanced in size or dynamic range) large optical sys-
tem would demand major state-of-the-art advances in areas of conventional spacecraft
technology, such as:
1) Ultra-quiet reaction control wheels, tape recorders, antenna and solar panel drives.
2) Ultra quiet cryogenic coolers, IR chopping mirrors, oscillating gratings, etc.
3) Ultra-stable/ultra-stiff/ultra-quiet and highly-damped structural materials and joints.
4) Ultra-tight temperature control, etc.
Even on a first, less ambitious, large optical system, incorporation of CSI technology
is desirable in order to assure healthy performance margins. By incorporating CSI into
the design, project scientists can be more confident of achieving needed performance, and
project managers can be more confident of meeting project schedules and budgets.
Since optical interferometer missions require extreme structural and optical stability
and accuracy, JPL's CSI team is developing a Focus Mission Interferometer (FMI) as
a technology-driving challenge-problem generally representative of large optical system
technology needs. The FMI finite element models, control system designs, and perfor-
mance analyses will be available to the large optics design community for reference as
they develop future large optics design concepts and mission proposals. The CSI design
methodologies demonstrated on the FMI will be applicable to all large microprecision
optical systems.
JPL design experiences have indicated the following CSI technology development areas
are especially applicable to large optical system projects:
263
Control/structures design methods
Quick early estimates of system performance are needed for pre-project studies of large
optical systems which incorporate CSI. The JPL and LaRC CSI teams are cooperatively
developing expedient CSI analysis/design tools for quick global (structure/control/optical)
system stability/accuracy prediction. Also, CSI design strategies, and architectures (dis-
turbance isolation, structural response suppression, optical element position compensa-
tion, local control, global control, proof mass actuators, active members, etc.) are being
evaluated and documented in a CSI Handbook. Transition from conceptual studies to
flight-projects will demand CSI design/analysis tools with expanded capacity and accuracy, for exact detailed design and exact performance prediction. JPL and LaRC will be
cooperatively developing these tools as well, as part of the CSI technology development
effort.
JPL Control Structures Interaction Technology Program
Jet Propulsion Laboratory • Langley Research Center • Marshall Space Flight Center
• Micro-Dynamic Test and Analysis
• Active Structure Development
• Control/Structure System Methods
• "Quiet" Space Interferometers
• "Active" Space Interferometers
• Focus Mission Interferometer
• Large Space-Station Telescopes
• Large Sub-Millimeter Antennas
• Large EOS Telescopes
• Large Free Flier Telescopes
• Synnott Study Team, Others
• Low Earth
• Geosynchronous
• Heliocentric
• Reliable/Ground Testable
JPL CSI Technology: Active Structure
Control Structures Interaction: System Architecture
• Pointing / Line of Sight
• CSI Motion Compensation - Active Members
• CSI Disturbance Motion Isolation - Active Members
• CSI Disturbance Motion Reduction - Active Members
• Open Loop Energy Dissipation: Less Than 10% of Induced Strain Energy
• Closed Loop Energy Dissipation Range: 0.1% to 90% of Induced Strain Energy
• Closed Loop Energy Dissipation Is Commandable
(Piezoelectric active members)
VIBRATION ENVIRONMENT AND CONTROL APPROACH
[Chart: induced vibration amplitude versus frequency (Hz), with the stability requirement met through optical compensation, active/passive vibration damping, alignment control, and vibration isolation.]
[Test article: free-decay response at grids 41 and 42; disturbance source; active member strut 22; active member diagonal; load cell.]
DISCUSSION
Bely: What gain in mass over conventional systems can we expect from an active structure?
Layman: In a representative calculation, about 100 times the stiffness of support structure (5 to
10 times the mass) was required to obtain as low a structural vibration response with a conventional
structure as was achievable with a controlled structure. There is no good rule of thumb for conventional structural equivalents to controlled structure. It may be impractical to solve many future problems
with a conventional structure.
Dominick Tenerelli
Lockheed Missiles and Space Co
10 METER UV-VISIBLE-IR TELESCOPE
OVERVIEW
OBJECTIVE:
• REVIEW 10 METER TELESCOPE REQUIREMENTS
• DISCUSS IMPROVED JITTER REQUIRED RELATIVE TO HST
• DISCUSS TORQUE ACTUATOR OF THE 10 METER TELESCOPE DUE TO LARGER INERTIA
POINTING CONTROL SYSTEM REQUIREMENTS
JITTER COMPONENTS BASED ON HST
OVERALL JITTER: 7 mas
TELESCOPE (20 -> 60 Hz): 37 mas
• CONSIDERATION OF A TWO LAYER CONTROL SYSTEM, I.E. BODY POINTING AUGMENTED BY IMAGE MOTION COMPENSATION (IMC), TO ACHIEVE 1.4 MAS IN THE VISIBLE (0.3 MAS IN THE UV) INDICATES A HIGH BANDWIDTH FINE GUIDANCE SENSOR WILL BE REQUIRED TO REDUCE ERRORS SINCE SOME ERROR SOURCES HAVE FREQUENCIES UP TO 60 HZ, OR MORE
• TO ACHIEVE LOW PHASE ERROR IN TRACKING THE POINTING DEVIATIONS, THE IMC SENSOR MUST HAVE A BANDWIDTH 5-10 TIMES HIGHER THAN THE DISTURBANCE, A CONDITION HARD TO SATISFY FOR THE
SUMMARY: REDUCTION OF ERROR SOURCES AND PRECISE BODY POINTING WILL BE KEY TO ACHIEVING THE JITTER REQUIREMENTS
TYPICAL RWA/STRUCTURAL JITTER
BELOW 10 HZ WHEEL SPEED, THE DOMINANT SOURCES OF DISTURBANCE ARE THE REACTION WHEEL TORQUER RIPPLE; THERE ARE A FEW COMPONENTS OF AXIAL AND RADIAL FORCE
ABOVE 10 HZ THE RWA FORCES DOMINATE
THE FREQUENCIES OF THE DISTURBANCES VARY FROM 22 TO 104 HZ.
RWA MANEUVERING FOR THE 10 METER TELESCOPE
• TORQUES WILL BE SOLAR PRESSURE AND HELIUM BOIL-OFF FROM THE IR INSTRUMENTS, BOTH OF WHICH ARE SMALL COMPARED TO THE TORQUES ON HST IN THE LOW EARTH ORBIT.
• WITH AN INCREASE IN RWA TORQUE AND SPEED, THE 10 METER TELESCOPE CAN ACHIEVE ESSENTIALLY HST MANEUVER PERFORMANCE WITH VEHICLE INERTIAS OF 200,000 KG-M^2. (MAXIMUM HST INERTIA IS APPROXIMATELY 00,000 KG-M^2.)
SUMMARY: EXISTING RWAs CAN BE MODIFIED FOR THE LARGER 10 METER TELESCOPE INERTIA.
RWAs VS CMGs
Each actuator has its assets. The CMG can provide high torque and high momentum storage capability, and the force disturbances tend to be at fixed frequencies. The high torque capability of the CMG is not needed for the 10 meter telescope. The fixed frequencies of the CMG rotor forces and torques make it attractive when viewed relative to the fact that the changing RWA wheel speed causes the RWA to 'sweep' through the telescope structural modes. For example, if the HST bearing were used in a CMG which had a rotor frequency of 100 Hz (6000 RPM), the bearing forces would have a small component due to the retainer at 34 Hz. The other dominant forces and moments would be at 100 Hz, 282 Hz, 516 Hz, 560 Hz, et al., frequencies which could be effectively reduced by an isolator, if required. Also, the CMG could incorporate a capability to reset its spin speed over a small range, e.g., ± 5% about the 100 Hz nominal speed, to avoid troublesome structural modes. Most CMG development to date has been for large devices. Relatively few small devices have been built (one example is the astronaut manned maneuvering unit), and none have been concerned with ultra-low force, moment and torque ripple noise in the development of these devices.
The HST RWA can meet the maneuvering requirements and has been extensively characterized relative to its noise producing error sources.
Using 'today's perspective' on the 10 meter telescope, which is many years from actual hardware implementation, it is better to spend any available technology funds to pursue a low noise RWA at this time.
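The trade described above can be sketched numerically: a wheel whose speed sweeps up from zero drags its disturbance harmonics across every structural mode below the top of the sweep, whereas a fixed-speed rotor with a small reset range can be parked off the modes. The mode frequencies and harmonic ratios below are hypothetical illustrations, not HST or CMG data:

```python
structural_modes_hz = [12.0, 34.0, 60.0]   # hypothetical structural modes
harmonics = [1.0, 2.0, 5.82]               # hypothetical wheel-speed harmonics

def rwa_crossings(max_speed_hz: float = 50.0) -> list:
    """Modes excited as the wheel sweeps 0..max_speed_hz: a mode m is
    crossed if any harmonic line h * speed passes through it."""
    return sorted({m for m in structural_modes_hz
                   for h in harmonics if m <= h * max_speed_hz})

def cmg_safe(rotor_hz: float = 100.0, reset: float = 0.05) -> bool:
    """True if the fixed fundamental line (within the ±5% reset range
    mentioned in the text) stays clear of every structural mode."""
    return all(not (m * (1 - reset) <= rotor_hz <= m * (1 + reset))
               for m in structural_modes_hz)

print(rwa_crossings())   # the sweep excites every listed mode
print(cmg_safe())        # the fixed 100 Hz line misses them all
```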
SUMMARY
• JITTER PRESENTS THE TECHNOLOGY CHALLENGE. IT IS EXPECTED THAT THE 1.4 MAS (20% OF HST) JITTER REQUIREMENT CAN BE MET WITH DESIGNS USING AVAILABLE ATTITUDE CONTROL ACTUATORS EMPLOYED IN PROVEN HST CONTROL LAWS AND USER INTERFACES. THE 0.3 MAS (4% OF HST) JITTER REQUIREMENT FOR UV WILL REQUIRE ADDITIONAL DEVELOPMENT OF LOW NOISE GYROS, RWAs AND "QUIET" STRUCTURES.
• PRECISION BODY POINTING COUPLED WITH REDUCTIONS IN DISTURBANCES WILL BE THE PRIME CANDIDATE TO MEET THE 10 METER JITTER REQUIREMENTS DUE TO THE DIFFICULTY IN OBTAINING A SUITABLE IMC SENSOR
• USE OF IMC WILL REQUIRE DEVELOPMENT OF A HIGH BANDWIDTH SENSOR. THE STAR ENERGY AVAILABLE AT 17 MAG DOES NOT APPEAR TO SUPPORT A HIGH BANDWIDTH SENSOR. BRIGHTER GUIDE STARS ARE REQUIRED, IMPLYING A LARGER FGS ACQUISITION AREA.
• RWAs ARE AVAILABLE TO MEET THE 10 METER MANEUVER REQUIREMENTS. TORQUER RIPPLE AND FORCE DISTURBANCE REDUCTIONS ARE REQUIRED TO MEET THE JITTER REQUIREMENT.
DISCUSSION
Unidentified : You noted that some people at LMSC feel that you should use CMGs, Controlled
Moment Gyros, instead of reaction wheels. Why use reaction wheels?
Tenerelli : Because of problems with CMGs on Skylab there is a bias against CMGs and towards
reaction wheels amongst some folks. In our organisation there is a strong element who feel that CMGs
should be the momentum compensation device that is used. But we can make either work.
Stockman : What was the structure you assumed for your analysis for a 10 meter telescope?
Tenerelli : We assumed that we would have a metering structure and a focal plane structure that
Moving Target Tracking for a Next Generation Space Telescope
David R. Skillman
HST Project Office, Goddard Space Flight Center
Abstract
The Hubble Space Telescope required a complex method for tracking targets in motion in the solar
system, due mostly to atmospheric drag effects. A NGST in high earth orbit, or on the Moon, would not
suffer these drag effects, thus mitigating the tracking problem. Many of the HST problems arise from
the 'open-loop' nature of the tracking method. An NGST would be better off if it could use 'closed-loop'
tracking of all targets, both fixed and moving. A method to accomplish this is proposed. Methods
to reduce the jitter and lower the ambient temperature of the optics via a tethered optical system are
proposed.
1. Introduction
The Hubble Space Telescope required a complex method for tracking targets in motion
in the solar system. Most of that complexity is driven by the low orbit of HST and the
attendant atmospheric drag effects. Target tracking must be 'open-loop' on HST since
vehicle tracking is done on the guide stars, not on the target. This puts a burden on the
b) Targets in the solar system have a substantial parallax motion due to the orbiting
motion of the spacecraft. This spacecraft- induced parallax motion of the target can
only be tracked if the position and velocity of the spacecraft are well known in advance.
significant. Variations in solar activity can dramatically affect the drag. For HST the
position uncertainty can grow to the order of a 1000 km in a month. At Jupiter this
would correspond to pointing errors of hundreds of milliarcseconds.
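The quoted pointing error can be checked directly: projecting a 1000 km ephemeris error to Jupiter's distance (taken here as roughly 4.2 AU, an assumed typical value) gives an angle of a few hundred milliarcseconds:

```python
import math

AU_M = 1.495978707e11          # astronomical unit in meters
err_m = 1000e3                 # 1000 km spacecraft position uncertainty
dist_m = 4.2 * AU_M            # assumed Earth-Jupiter distance

# Small-angle projection of the position error onto the sky, in mas.
mas = math.degrees(err_m / dist_m) * 3600e3
print(f"pointing error at Jupiter: {mas:.0f} mas")   # a few hundred mas
```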
These three considerations show that it is impossible to produce a tracking profile
(spacecraft attitude as a function of time in the control system coordinates of the spacecraft) that will remain accurate for execution a month or more in the future.
This problem was solved on HST by generating all tracking paths in geocentric coor-
dinates and then having the HST flight computer perform the coordinate transformation
from geocentric to spacecraft centered coordinates. The orbit ephemeris of the HST in
the flight computer is corrected every few days (and can be updated within 12 hours if
length. This error is of the order of 30 milliarcseconds at the end of a three arcminute
length track.
For a NGST with a pointing stability requirement below a milliarcsecond, roll refer-
ence star positions would have to be known to an arcsecond or better and the star tracker
would have to have a resolution of better than an arcsecond. Astrometric stars would
probably be required for these roll control stars (i.e. Hipparcos stars). The nominal three
arc minute track length used for HST is predicated on the width of the FGS pickoff mir-
rors. A telescope that is responding to true science requirements may want track lengths
much longer. (Halley at closest approach was moving 3 arc minutes in 15 minutes of
time.) Such longer track lengths make the roll knowledge even more crucial for moving
targets.
Again, this roll problem is due in part to the inability to 'closed-loop' track on the
target. If the moving target itself is used to generate an error signal, this roll issue
disappears.
Traditionally a dichroic beam splitter or pellicle is inserted near the focus to divert
light into a guiding device. An alternate way to be able to view a target while also
performing a science observation would be to intercept some of the light directly at the
prime focus of the primary while allowing the bulk of the light to trace a nominal path
through the rest of the telescope optics. A camera at the prime focus could then generate
the necessary error signals for the pointing control system (Figure 1).
The advantages of this approach would be that the amount of intercepted light could
be controlled by geometry rather than by index of refraction changes or by partial reflec-
tions. No pickoff mirrors need to be devoted to the guidance system. The roll axis would
be controlled by the fixed head star trackers, thus only one guidance camera is needed.
Figure 1
4. Jitter control
Moving mechanisms and excitable structural modes provide most of the high speed jitter. These sources include solar array movement, solar array excitation modes, reaction wheels, antenna pointing/tracking motions, and tape recorders. The isolation and control
of these sources demands much attention and consumes resources. A simpler approach
would be to divide the spacecraft up into a noisy platform and a quiet platform. This is
only possible in a high earth orbit where gravity gradient and aerodynamic torques are
ignorable.
Figure 2 schematically outlines a division of the spacecraft into two platforms. All
mechanically noisy subsystems are moved into a support module. This support module
houses the reaction wheels, the telecommunications antennas, the solar array, and the
tape recorders. In addition, it contains quiet subsystems such as transponders, magnetic
torquing coils, batteries, local gyros, and the flight computer. This platform is sun pointed
with low precision (a few degrees). Segregation of these components also reduces the
harness size for the spacecraft.
The optical system, with its science package, is housed in a separate platform. The
two are connected by a flexible cable which contains two wires and an optical fiber. The
wires carry power to the optical section and the optical fiber carries data bi-directionally.
The quiet platform must have its own gyros and magnetic coils. These gyros would be
fiber-optic gyros and would have fiber coils that span large segments of the body. The
magnetic torquer coils would also be body-spanning.
The two bodies are coupled via magnetic fields. Each platform can generate at least
a dipole field in any direction. The solar pointing platform generates a base field and
the optical platform generates a torquing field to interact with the base field. The dipole
created by the optical platform will be perpendicular to the base field (Figure 3). As
the optical platform slews, the fields will remain fixed relative to each other, generating
maximum torque.
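The maximum-torque geometry described here can be sketched numerically. This is an illustrative sketch only: the dipole moments, separation, and axes below are assumed numbers, not values from the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Field (T) of a magnetic dipole m (A*m^2) at displacement r (m)."""
    rmag = np.linalg.norm(r)
    rhat = r / rmag
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rmag**3

def torque_on_dipole(m, B):
    """Torque (N*m) on a dipole m sitting in field B."""
    return np.cross(m, B)

# Assumed numbers: a 1000 A*m^2 base dipole on the support platform, with
# the optical platform 10 m away along z. The optical platform commands
# its own dipole perpendicular to the local base field -- the
# maximum-torque condition described in the text.
m_base = np.array([1000.0, 0.0, 0.0])
r = np.array([0.0, 0.0, 10.0])
B_base = dipole_field(m_base, r)
m_opt = 1000.0 * np.cross(B_base, [0.0, 0.0, 1.0]) / np.linalg.norm(B_base)
tau = torque_on_dipole(m_opt, B_base)
print(B_base)  # base field at the optical platform
print(tau)     # torque generated on the optical platform
```

With the commanded dipole held perpendicular to the base field as the optical platform slews, the torque magnitude stays at its maximum value |m||B|.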
Since a dipole in a gradient will experience a net force, the interacting fields can also
be used to position the optical platform relative to the support platform. As the optical
platform torques against the base field, the reaction wheels in the support platform will
act to maintain the solar pointing of the support platform. Thus the slewing motion of
Figure 2 (solar array and sun shield geometry relative to the sun line)
Figure 3
the optical platform will be reflected in the increased reaction wheel speed of the support
platform.
The large circular solar array/solar shield structure is designed to cast a shadow region
in which the optical platform can operate. This provides a number of benefits. The first
benefit is that the passive temperature of the telescope can be lowered considerably, which helps achieve the lower optical surface temperatures needed for infrared performance and lower ambient detector temperatures. Power from the tether will supply heat to
those instrument components that prefer higher temperatures. The second benefit is that
the telescope line of sight can come closer to the sun than 90 degrees. This allows access
to more of the sky and allows observations over longer time windows. Inner solar system
objects such as Venus and comets can also be followed closer to the Sun.
5. Conclusion
The primary conclusions of this paper are that closed-loop tracking of both fixed and
moving targets is highly advantageous, that fixed head star trackers of high resolution are
needed for roll control, that an alternate method exists for imaging the target on both the
science instrument and the guidance camera, and that a dual platform offers substantial vibration isolation.
DISCUSSION
Illingworth: David, you made an interesting point about the difficulty of doing planetary observations from the lunar poles. However, I do not think that we are likely to face that problem. As good as these sites would be for astronomical telescopes, I cannot see us getting access to these sites in our lifetimes, especially for the two telescopes needed for full sky coverage.
Passive Cooling of a 10 meter Space Telescope
A Preliminary Investigation
Philip J. Tulkoff
Swales and Associates, Inc.
1. Objectives
2. Assumptions
The spacecraft structure was assumed to be approximately 12 meters in diameter and
12 meters high. The upper cylindrical section of the structure houses the primary and
secondary mirrors and acts as the telescope's baffle and sun shield. The lower portion of
the spacecraft called the instrument module (I/M) houses the scientific instruments and
the spacecraft subsystems such as power, attitude control, and communication. Figures
1 and 2 show views of the external geometry as described above.
FIGURE 1. External geometry of the telescope section: secondary mirror housing, line of sight, telescope exterior surfaces (silver teflon, α = 0.12, ε = 0.76), solar cells, and instrument module.
FIGURE 2. Spacecraft exterior.
The primary mirror housed in the telescope section was assumed to be made entirely of aluminum. The face sheets of the mirror were 1 centimeter thick and the core was comprised of an aluminum foam. The active mirror surface was modeled as having a thermal emittance of 0.05. The secondary mirror was contained in a shroud at the top of the telescope section to protect it from stray light. The mirror was assumed to be 1 foot in diameter and also had an active mirror surface with an emittance of 0.05. The hole
in the middle of the primary mirror was not modeled in order to simplify the geometric
model.
The external surfaces of the telescope section were assumed to be covered with silverized teflon to provide a low α/ε ratio, thus reducing the absorbed solar flux on the sides of the spacecraft. All internal surfaces of the telescope section were assumed to be optically black.
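The benefit of the low α/ε coating can be seen from the flat-plate radiative balance α·S = ε·σ·T⁴. A minimal sketch, assuming a 1361 W/m² solar constant and one-sided radiation (a deliberate simplification, not the paper's 156-surface model):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S_SUN = 1361.0     # assumed solar constant near Earth, W/m^2

def sunlit_equilibrium_T(alpha, epsilon, solar_flux=S_SUN):
    """Equilibrium temperature of a sunlit flat plate that absorbs
    alpha * S and re-radiates epsilon * sigma * T**4 from the same face."""
    return (alpha * solar_flux / (epsilon * SIGMA)) ** 0.25

# Silver teflon values from Figure 1 (alpha = 0.12, epsilon = 0.76)
# versus a hypothetical black surface for comparison.
print(sunlit_equilibrium_T(0.12, 0.76))   # ~248 K
print(sunlit_equilibrium_T(0.95, 0.95))   # ~394 K
```

The low absorptivity-to-emittance ratio drops the sunlit equilibrium temperature by roughly 150 K compared with a black surface.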
The power requirement was assumed to be 5 kilowatts and was supplied through body-mounted solar cells attached to two areas of the I/M to take advantage of the range of possible solar angles that the spacecraft might encounter. The other areas of the I/M were used as thermal radiators to control the temperature of the instruments
and spacecraft subsystems. The thermal radiators were coated with silverized teflon to
provide for an efficient radiator surface as well as reduce solar inputs. Figure 3 shows a
view of the bottom of the spacecraft and depicts the location of the solar cells and the
radiator panels.
FIGURE 3. Aft end of the spacecraft, showing the solar cells, instrument radiators, and instrument module side panels (sun perpendicular to LOS case).
The telescope and I/M sections were modeled as two separate elements. The only
interaction between the elements was through a radiation coupling between the bottom
of the primary mirror and the top closeout of the I/M. No conduction from these two
sections was considered in order to simplify the modeling and due to the lack of structural
detail. This method also allowed for comparisons between the solar flux effects on the
primary mirror and the radiation heat leaks from the I/M to the telescope.
The side panels of the telescope section were not conductively coupled to one another
due to the lack of structural definition. This assumption creates the worst possible gradient between structural panels around the spacecraft's circumference, making the temperature results as conservative as possible.
3. Environmental Parameters
4. Thermal Model
The spacecraft was modeled with 156 active surfaces which were combined to form 102
nodes for temperature calculations. Multi-layered insulation (MLI) was used in several
areas on the spacecraft to minimize heat gain from solar inputs and warmer surrounding
structures. The bottom of the primary mirror was assumed to be covered with MLI to
prevent interactions with the warmer I/M located below it. The sides of the telescope
were covered with MLI to prevent excessive heat inputs to the mirrors and to reduce the
circumferential gradient around the structure.
Four different modeling assumptions were made for each orbit/spacecraft orientation
combination. The assumptions were as follows:
Case A: No I/M coupling to the telescope, allowing only environmental heat flux inputs to affect the mirror temperature. No MLI on the anti-sun side.
Case B: No I/M coupling to the telescope. MLI around the entire telescope cir-
cumference.
Case C: No conduction coupling from the I/M to the telescope. Assumed that
the I/M portion of the spacecraft operated at 20°F and that a radiation
coupling from the top of the I/M to the bottom of the telescope section
existed. MLI around the entire telescope circumference.
Case D: No conduction coupling from the I/M to the telescope. Assumed that
the I/M portion of the spacecraft operated at 20°F and that a radiation
coupling from the top of the I/M to the bottom of the telescope section
existed. No MLI on the anti-sun side of the telescope.
5. Results
A series of twenty-four cases were run as a result of the three orbits, two spacecraft orientations, and the four modeling assumptions. The results are summarized in Table 1.
All cases that assumed no I/M coupling resulted in primary mirror temperatures that were determined strictly by environmental flux leakage through the MLI and, in some cases, by the combined effect of earth IR or albedo entering the telescope aperture. The sun-synchronous orbit was warmer than all other cases considered due to its low altitude, which caused higher fluxes incident on the telescope external surfaces. In the sun-parallel-to-the-LOS orientation in the sun-synchronous orbit, enough flux enters the telescope to cause the primary mirror temperature to rise almost 45 K above the sun-perpendicular case.
Table 1.
Examination of the temperature results also shows that the addition of the 20°F I/M sink adds 5°C to the primary mirror temperature in the case with no anti-sun side MLI and 8°C to the fully insulated case. This result
is caused by the additional effective radiator area caused by having no insulation on the
anti-sun side of the telescope. The fully insulated case does not allow the extra heat to
dissipate as effectively.
The High Earth Orbit yields the lowest temperatures of all cases examined. The
absorbed flux tables showed that no earth IR is absorbed by the spacecraft due to its
extremely high altitude. The two sun-parallel-to-the-LOS cases with no I/M coupling were not considered realistic for the following reasons. Due to the spacecraft orientation (sun parallel to the LOS), zero solar flux strikes the side of the telescope, and due to the high orbit no earth IR is absorbed. In addition, without any I/M coupling there would be absolutely no heat input to the telescope section, which would yield a primary mirror temperature equal to absolute zero, which is unrealistic.
An examination of the sun-perpendicular-to-the-LOS case for HEO shows that cases A and D have both dropped 6 K as compared to the geosynchronous orbit case due to the lack of any earth IR inputs. Case B has dropped only 3 K due to the fully insulated telescope section. Case C decreased only 1 K due to the insulated telescope section and the added heat input from the I/M.
The sun-parallel-to-the-LOS orientation shows an even greater improvement over the comparable geosynchronous cases. This is because at HEO there is zero absorbed earth IR, and with this orientation no solar flux strikes the telescope. The primary mirror temperature is determined strictly by the heat absorbed from the I/M and the ability of the mirror to re-radiate the heat out to the telescope walls or directly to space. The case with no MLI on the anti-sun side of the telescope is the coldest case, as would be expected. For comparison purposes an additional case was run with a 20°C (68°F) I/M sink, no anti-sun side MLI, and the sun parallel to the LOS for an HEO orbit. This last case showed a primary mirror temperature of 73 K as compared to 66 K for the same case with a 20°F I/M sink.
lowest temperatures achieved for each case. In addition, the primary concerns regarding
the thermal design are pointed out. In the sun synchronous orbit, the temperature of the
I/M radiators are high due to the earth IR and albedo inputs. Of even more concern is the
flux that enters the telescope baffle and becomes trapped thus raising the primary mirror
temperature. In the geosynchronous orbit, some IR flux enters the telescope raising the
mirror temperature. The HEO orbit avoids the problems of incident flux entering the
telescope but is much more sensitive to heat leaks from the I/M to the telescope section.
This study indicates that orbits in which the spacecraft surfaces are strongly affected by earth radiation must be avoided, and that a geosynchronous or high earth orbit is to be preferred. An additional advantage of the high earth orbit is that the
earth flux input on radiators is so negligible that the constraint on spacecraft orientation
disappears.
This study also indicates, however, that the mirror temperature is fairly sensitive to
heat leaks from the spacecraft power dissipating components. The Instrument Module
should be designed to run as cold as possible, and extreme care should be exercised in
the mechanical/thermal design in order to minimize heat leaks.
An attempt should be made to balance the heat input to the mirror resulting from
external fluxes and internal heat dissipating components regardless of spacecraft orien-
tation. This would eliminate primary mirror temperature drifts as the spacecraft moves
from one target to the next.
DISCUSSION
Bender: Can you keep the absorptivity as low as 0.12 over 15 years?
Tulkoff: This study did not address the questions concerning coatings in any great depth. It is doubtful that the coating's absorptivity would remain as low as 0.12 over 15 years, which must be taken into account when a more comprehensive thermal design is done. The question of thermal coating stability is currently being addressed for the Polar Orbiting Platform which is being designed at the Goddard Space Flight Center. Another possible source of information will be the examination of the coating on the LDEF experiment, which is due to be retrieved by the Shuttle early in 1990.
Schember: Can you discuss the specifics of your solar panel modelling, especially optical properties, W/m² at the beginning and end of life, and efficiencies? Also, what is the average solar panel temperature and the heat leak from the solar panel to the instruments?
Tulkoff: A detailed look at the solar panels was not conducted in this study since they would have only a secondary effect on the primary mirror temperature. The solar panels would primarily affect the sun-perpendicular-to-the-LOS case; the output of the solar panels at 10% efficiency was approximately 2600 Watts at temperatures from 17 to 88°C. However, this was below the desired 5 kW. In the case where the sun vector is parallel to the LOS, the cells produced 5950 W with cell temperatures ranging from 88 to 123°C.
CRYOGENICS FOR SPACE OBSERVATORIES:
TECHNOLOGY, REQUIREMENTS, ISSUES
Summary
Background: Spacecraft Cryogenics Experience
Cryogenics Technologies
I. Overview
spacecraft may be included in this category.
Past experience with TECs dictates close attention to thermal expansion and
interfacing of the device. The TECs on the HST Wide Field Planetary Camera,
carrying 60 mW at 178 K, had specially designed flexible silver straps to
accommodate movement during thermal cycling. Petrick (1987) has described the selection and interfacing of TECs to detectors.
which successfully reduce the parasitic and environmental loading, allowing
lower temperatures and higher capacities. These designs make use of multi-
staging, special low conductivity supports and highly specular surfaces to
reduce the conductive and radiative loads on the coldest stage due to
parasitic heat leaks and environment. Specular surfaces are extremely reflective and are usually made from highly polished metal, which reflects incoming thermal radiation at the same angle as the angle of incidence. (Non-specular surfaces exhibit diffuse reflection, in which incoming rays of thermal radiation are reflected hemispherically following a cosine law.) Specular
surfaces permit the designer to strictly limit the environmental heat loads
reaching the highly emissive (and thus highly absorptive) cold radiator
surface. Furthermore, specular surfaces allow radiative parasitic heat leaks
to be directed away from the radiator, as best exemplified by the V-groove
radiator concept. The drawback is that these surfaces must be kept very clean
-- contamination which diminishes the reflectivity or specularity could be
disastrous.
Since the capacity of the radiator is directly proportional to its area, and
proportional to the difference between the fourth powers of the radiator and
the surrounding cold sink temperatures, cryogenic radiators can be quite bulky
and heavy. The placement of the radiator can severely limit the allowable
spacecraft attitude. These issues may mean that although a radiator may be a
possible solution, it may not be a practical one.
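The scaling just described — capacity proportional to area and to the difference of fourth powers of the radiator and sink temperatures — can be turned into a sizing estimate. The load, temperatures, and emissivity below are assumed for illustration, not taken from any flight design:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area(load_W, T_rad, T_sink, emissivity=0.9):
    """Radiator area (m^2) needed to reject load_W at T_rad against an
    effective cold sink at T_sink:
    Q = emissivity * sigma * A * (T_rad**4 - T_sink**4)."""
    return load_W / (emissivity * SIGMA * (T_rad**4 - T_sink**4))

# Rejecting an assumed 100 mW parasitic load against a 10 K effective
# sink: the required area grows rapidly as the radiator gets colder.
for T in (80.0, 40.0, 20.0):
    print(T, radiator_area(0.1, T, 10.0))
```

Even this small load needs over ten square meters at 20 K, which is why cryogenic radiators become bulky and heavy at low stage temperatures.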
Open cycle coolers are moderately complex systems which expend a fluid to
provide cooling. The fluid is not recaptured for reuse, although it may be
channelled during its rejection in order to reduce parasitic heat loads.
There is a great deal of developmental work currently being performed in the
area of open cycle coolers. There are three subclasses of open cycle coolers:
First, the liquid storage or dewar type, in which either the heat capacity of
the fluid or the latent heat of vaporization is used to absorb the heat load.
The second type is fluid storage with Joule-Thomson (J-T) expansion. In this method the fluid is stored at high pressure and then throttled through a J-T
expansion valve to a lower pressure; cooling is produced through the expansion
process. The fluid must be below its inversion temperature or the throttling
will not produce cooling. Most fluids have reasonably high inversion temperatures; however, there are some exceptions such as helium (40 K) and hydrogen (204 K). The third method, solid cryogen storage, relies on the latent heat of sublimation to maintain temperature. Due to the difficulty of managing cryogenic liquids, only systems employing solid cryogens, superfluid He, and compressed gases have been used on spacecraft.
Open cycle systems are inherently lifetime limited; once the fluid has been
expended, cooling ceases and the assembly assumes its equilibrium temperature.
Once the heat loads and temperatures have been established, open cycle coolers
may be considered from two vantage points: Either a lifetime requirement is
imposed, in which case the quantity of cryogen required may be calculated, or
the quantity of cryogen is specified (by specifying an allowable volume or
mass), and then a lifetime may be calculated. Refill systems have been proposed to extend the lifetime of cryogenic dewars; however, significant development and testing are required in this area. The implementation of a refill or replacement system is the only method available to extend the lifetime of an open cycle cooler. In the absence of a resupply mechanism, most designers attempt to achieve a minimum heat load to provide the longest lifetime possible. IRAS operated for 10 months; the SIRTF lifetime is estimated at 6 years.
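The second vantage point above — cryogen quantity specified, lifetime derived — is simply the stored latent heat divided by the heat load. A sketch with assumed numbers (roughly IRAS-scale, but not taken from any flight design):

```python
def cryogen_lifetime_days(mass_kg, latent_heat_J_per_kg, load_W):
    """Lifetime of an open-cycle cooler: stored latent heat / heat load,
    converted from seconds to days."""
    return mass_kg * latent_heat_J_per_kg / load_W / 86400.0

# Assumed: ~70 kg of superfluid helium (latent heat of vaporization
# roughly 2.1e4 J/kg) against a 60 mW total load.
print(cryogen_lifetime_days(70.0, 2.1e4, 0.060))
# Sensitivity to a 5 mW change in load:
print(cryogen_lifetime_days(70.0, 2.1e4, 0.065))
```

The second call illustrates the sensitivity discussed later in the text: a few milliwatts of extra parasitic load cost weeks of mission lifetime.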
The temperature range for open cycle systems is extremely large, ranging down to below 2 K. (Helium is used for the very lowest temperatures.) Figure 2
shows operating temperatures for a sampling of typical expendable fluids.
Only single phase cooling can occur above the critical point. A useful limit on the low temperature is set by sublimation of the solid cryogen.
Saturation conditions dictate this lower limit because the cryogen is in
equilibrium with its vapor. To maintain a constant temperature, the pressure
must remain constant, thus the vapor produced by sublimation must be vented,
with 0.1 Torr being the practical lower pressure limit which is possible given
current vent valve capabilities. The lower temperatures shown in Figure 2
correspond to the saturation temperature at this low pressure.
There have been a number of open cycle systems flown and many are currently in
the planning stages. An important consideration to support these systems is
the expended fluid, which may itself be a source of contamination to sensitive
surfaces on the spacecraft. In addition, for reasonable lifetime, these
systems can be very large and bulky. A major component of the spacecraft
total mass will likely be the cooling system. Finally, these systems are
sensitive to the heat loading -- a small change in heat load (say milliwatts) may drastically change the sizing/lifetime. As an example of this, Figure 3 displays the lifetime sensitivity of the LDR helium dewar to heat load variations.
The Walker classification scheme separates all refrigerators into two classes:
Recuperative and Regenerative. A Joule-Thomson closed cycle cooler is a prime
example of a recuperative type refrigerator. In the J-T type refrigerator,
cooling is provided as in the open cycle J-T, but now the low pressure fluid
is collected on the down side of the throttling valve, repressurized, and returned to the high pressure line. The term recuperative comes
about because the cold fluid on the low pressure side is allowed to exchange
heat with the incoming high pressure fluid through the adjoining walls of the
tubing thus recuperating the heat. These systems have the same weaknesses as
the open cycle J-T systems. A critical element in a closed J-T system is the
compressor which provides the high pressure gas for expansion. There have been
a number of problems resulting from contamination by, or leakage through, the
seals of mechanical compressors. Certain types of coolers do not use mechanical means to provide compression for the refrigeration cycle. These systems, the foremost of which is the sorption refrigerator, use other, non-mechanical compression techniques, such as heating, or physical or chemical adsorption. These refrigerators are still developmental, but show strong
promise for long lifetime, quiet cooling.
The efficiency of regenerators drops off with decreasing temperature, and
regenerative coolers are unable to operate much below 15 K. Extremely low
temperatures (below 10 K) can be reached using other techniques currently
being researched. All refrigerators rely on a cycle that makes use of entropy
variations as a function of temperature and some other variable, such as
magnetic field, electric field, chemical potential, concentration or surface
tension. (In a conventional refrigerator this second variable is pressure).
Extremely low temperature refrigerators use some novel means to provide this
other variable. Among these are the dilution refrigerator, various kinds of magnetic refrigerators relying on the magnetocaloric effect, and the He-3 evaporation refrigerator. All of these refrigeration schemes rely critically on the ability to provide a very cold operational environment, which means that some other more conventional cooler is used as an upper stage to provide cooling to, say, 10 or 20 K. Adiabatic demagnetization refrigerators are currently baselined for SIRTF and AXAF to provide the extremely low temperatures (<1 K) needed by the detectors.
Detectors and sensors are cooled to reduce the flux of stray radiation and
thus improve signal quality. In general the cooling requirements of various
detectors may be summarized as:
References
Hudson, R.D., Infrared System Engineering, Wiley, New York, NY, 1969.
Wolfe, W.L. and Zissis, G.J. (eds.), The Infrared Handbook, revised edition.
Figure 1. Breakdown of the total heat load into parasitics (supports, wires; solar, planetary, and spacecraft conduction, radiation, emission, and albedo) and detector loads (heater/sensor, signal wires).
Figure 2. Operating temperature ranges for typical expendable coolants (He, H2, Ne, N2, CO, A, O2, CH4, CO2, NH3), showing for each the solid and liquid phases, critical point, boiling point at 1 atm, triple point, and minimum temperature at 0.10 mm Hg vapor pressure.
Figure 4. Detector Cooling Requirements versus Refrigerator Temperature and Performance. (Figure courtesy of R. Ross, JPL)
DISCUSSION
Stockman: Since most spacecraft and instrument designers are conservative, we do not have much experience with closed-cycle coolers. Do you know of upcoming missions which will provide useful lifetime and performance data?
Schember: EOS will have Stirling coolers. Many other spacecraft uses are proposed, as is a full complement of tests. (Funding for a series of tests on various coolers is limited and sporadic.) The lifetime will probably not be as long as one might be interested in for these telescopes (5 years?).
Eisenhardt: The Oxford Stirling Cooler has been run on the ground for long periods, but apparently with some problems.
Schember: The Oxford Stirling Cooler (single unit) has been on life test for approximately 3 years. It has run continuously, except for power outages, electrical problems, etc. It is not the production model British Aerospace cooler, however.
Thompson: It will be useful for this afternoon's panel to discuss cryogenics and cooling as a discrim-
inator between lunar telescopes and free flyers.
Visible and Ultraviolet Detectors for High Earth Orbit and Lunar Observatories
Bruce E. Woodgate
NASA/Goddard Space Flight Center
Abstract
The current status of detectors for the visible and UV for future large observatories in earth orbit and on the moon is briefly reviewed. For the visible, CCDs have the highest quantum efficiency, but are subject to contamination of the data by cosmic ray hits. For the moon, the level of hits can be brought down to that at the earth's surface by shielding below about 20 meters of rock. For high earth orbits above the geomagnetic shield, CCDs might be usable by combining many short exposures and vetoing the cosmic ray hits; otherwise photoemissive detectors will be necessary. For the UV, photoemissive detectors will be necessary to reject the visible; to use CCDs would require the development of UV-efficient filters which reject the visible by many orders of magnitude. Development of higher count rate capability would be desirable for photoemissive detectors.
nanoamp/cm² fully inverted. The latter corresponds to 1 electron per pixel per hour at
-70C. Operating fully inverted requires the use of an extra implant during manufacture
(called Multi Phase Pinned (MPP)) to restore a useful full well.
The full well depends strongly on pixel size, ranging from about 10,000 to over a
million electrons. Typically for a 21 micron square pixel, the full well is 300,000 electrons
non-inverted, or 130,000 electrons inverted (MPP).
Sensitivity uniformity between neighboring pixels is typically 2-3% non-inverted, and
1% inverted or partially inverted.
Nonlinearity is typically less than 1%, and often less than 0.1% away from the full
well limit.
Quantum efficiencies vary widely with wavelength, illumination direction, and surface treatment. Results are shown in Table I. Recently the Tektronix backside treatment with
anti-reflection coating has achieved 85% at 700nm and 65% at 400nm (Reed and Green,
1989). Separately, 80% at 900nm was obtained by Ford and Photometrics using deep high resistivity silicon and an anti-reflective coat (Schempp et al., 1989).
Quantum efficiency hysteresis (QEH), where the sensitivity is dependent on previous illumination, can occur in backside illuminated CCDs at wavelengths where the photon is
absorbed close to the surface, usually in the blue and UV. It can be eliminated by careful
shaping of the internal backside electric field by UV flooding, biased thin metal gates, or
implants followed by annealing, or by using fluorescing coatings.
Modern CCDs generally have no residual image (except after radiation damage), do
not require a fat zero, and recover from a fully saturated condition after one frame erasure.
Future Possibilities
Larger format CCDs would be desirable to obtain larger fields of view. Current CCDs are very far from the formats of photographic plates. Small pixel 8192x8192 CCDs are probably achievable now, although larger silicon wafers such as are being used in Japan would be desirable to obtain good yields. For CCDs of 2048x2048 pixels and above, it would be desirable to provide more than the current 4 amplifiers, with several along the serial registers, in order to keep the readout time within bounds. The Space Telescope Imaging Spectrograph (STIS) (Woodgate et al., 1987) will take 80 sec to read out the whole array through one amplifier, and 20 sec through all 4 amplifiers. Larger arrays will
surfaces requires extreme care and materials selection and control of the environment, even with warm surfaces. It is yet more difficult with a cold CCD, as has been found with the WF/PC program (Westphal, 1988). Rigorous cleaning, bakeout, and sealing as for vacuum tubes is probably necessary.
Even then, the main problem of using CCDs in the UV is not solved. Many astronomical sources, such as cool stars, planets, or galaxies containing cool stars, have 8-9 orders of magnitude more visible light emission than in the UV. It is therefore necessary,
for spectrographs and even more for cameras, to remove the visible light to avoid contamination of the UV to be observed. Possible methods include alkali metal filters (not so far successful), crossed polarizers, or tandem gratings. Even if future developments were to make these successful, it is unlikely that the CCD/filter combination would rival UV photocathodes for quantum efficiency.
The use of non-destructive readout in the "skipper" amplifier can allow less than
one electron readout noise, over limited regions of interest, at the expense of very long
readout times (Janesick, 1988a and 1989a). Possibly this method could be used in a data-dependent mode, with the number of non-destructive reads being dependent on the signal level of the first read, or by inserting a preliminary non-destructive amplifier ahead of the
"skipper". Alternately, the development of low-capacitance lower noise JFET technology
on the chip would allow sub-electron readout noise without multiple reads. This could be
useful in measuring photon energy directly in the EUV, as is done now with X-ray CCDs,
or for combining large numbers of images in the high cosmic ray environments above the
earth's geomagnetic shield, as will be discussed below.
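The image-combining strategy mentioned above — many short exposures with the cosmic ray hits vetoed — can be sketched with a pixelwise median, which rejects a hit that appears in only one frame. The frame count, noise levels, and hit rate below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_with_veto(frames):
    """Combine short exposures with a pixelwise median, vetoing cosmic
    ray hits that corrupt a pixel in only a minority of frames."""
    return np.median(frames, axis=0)

# Simulate 16 short exposures of a flat 100 e-/frame scene with 5 e-
# read noise, then salt random frames with bright cosmic ray hits.
frames = rng.normal(100.0, 5.0, size=(16, 64, 64))
for _ in range(40):                    # 40 random hits across the stack
    i, y, x = rng.integers(0, (16, 64, 64))
    frames[i, y, x] += 5000.0
clean = stack_with_veto(frames)
print(clean.max())   # hits are suppressed; peak stays near the scene level
```

The penalty is the extra read noise accumulated over many reads, which is why the sub-electron readout noise discussed in the text matters for this mode.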
Radiation Damage
CCDs can be damaged by particle radiation from the earth's trapped (Van Allen)
radiation belts. In low earth orbits (LEO) a few hundred kilometers high, spacecraft
pass through the lower edge of the inner belt at the South Atlantic Anomaly (SAA) at
low orbit inclinations, and also through the polar caps at high orbit inclinations. The
electrons in the SAA are relatively easily shielded against with about 1 cm of aluminum, but the protons with energies up to a few hundred MeV can only be partially shielded out.
The HST Wide Field and Planetary Camera (WF/PC), by using considerable weight for shielding, with 1 cm of tantalum, has limited the predicted 5 year proton dose to 600 rads (Gunn and Westphal, 1981). This problem could be greatly reduced by launching with
a very low orbit inclination, as was done for the Small Astronomical Satellites, including
UHURU, but this would reduce the available weight for a given launch vehicle from Cape
Canaveral, or would require the use of a foreign equatorial launch site such as San Marco
or Kourou, with lower launch weight capabilities.
Recent tests by JPL at U.C. Davis, and by GSFC and Ball Aerospace at Harvard,
have shown that protons can be much more damaging to CCDs than an 'equivalent' dose
(measured in rads) of gammas, for example from cobalt-60 (Janesick, 1989c; Delamere et al., 1989; Brown et al., 1989). This is because gammas produce ionization damage, where
holes migrate to insulating layers, act as traps and alter potentials in a manner highly
dependent on the dopant and applied electric fields, while the protons also produce lattice
displacement damage. It can be highly misleading to demonstrate 'radiation hardening'
of electronics by gamma testing, when the use environment is mainly protons.
Figure 1 shows results of average dark current tests for three different CCDs. A new
lot of Tektronix 1024x1024 MPP 21 micron pixel CCDs was built for the STIS program.
One of these chips was subjected first to 5 kRads of cobalt-60, which had a relatively small effect, and then to 2 kRads of protons, which had a much larger effect. Even so,
the average dark current was only 10 - 17 electrons/pix/hour at -80C after irradiation,
when operated fully inverted. However, an earlier commercial Tektronix 512x512 CCD
showed much higher dark currents both before and after irradiation. A WF/PC II Texas
Instruments CCD showed no effect with 600rads of protons at 30 electrons/pix/hour at
-95C (Trauger,1989 private communication).
These results are quite encouraging. However, a large number of pixels become 'hot',
or have dark noise much higher than the average, when irradiated. Preliminary results
suggest that these pixels were already abnormal prior to irradiation, which brought out
latent defects.
The lack of smearing of the lines in test images of a grid pattern with 6 electrons of recorded signal, taken with an unirradiated STIS Tektronix 1024x1024 test CCD, suggests excellent charge transfer efficiency (CTE), even at very low signal levels. Figure 2 shows a
composite graph for several CCDs, of CTE/pixel as a function of charge. Note that the
measured CTE for the same unirradiated Tektronix 1024x1024 CCD above is well below
that required for STIS. It also appears inconsistent with the 6 electron image. The mea-
surement was made using the Edge Pixel Extended Response (EPER) method (Jtmesick
et al, 1988). We need to understand the effects of CTE as measured by the different test
methods on the photometric and resolution properties of CCDs. The effects of proton
damage on CTE are also shown. At each charge level the CTE degrades monotonically
with proton dose, and the degradation increases to lower charge levels. It is easier to ex-
press the damage by the Charge Transfer Inefficiency (CTI = 1-CTE). A rough empirical
fit to the data at -60C shows CTI/krad = 0.14 x Q-°-^, where Q is the charge in electrons.
This ignores the different pixel sizes, and the effects of temperature and readout time.
The parallel CTE improves rapidly as the temperature is lowered down to -140C. The
serial CTE is much less affected by the protons. These two effects may be due to the
proton-induced trap lifetimes being between the parallel and serial readout times at -60C,
but longer than either at -140C (Janesick, 1989c,d). This suggests a strategy for future
missions of running the CCDs very cold, and doing a light flood and etase before exposing.
Meanwhile, other techniques avoiding the need for consumable cryogenics, such zis a duct
within the readout channels, and the effect of nitride insulators are being investigated.
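As a quick sanity check, the empirical fit above can be evaluated numerically. Note that the -0.5 exponent is read from a garbled original and should be treated as an assumption, as should the illustrative 100-electron, 2 krad case:

```python
# Rough check of the empirical CTI fit quoted in the text,
# CTI per krad ~ 0.14 * Q**-0.5 at -60C (exponent reconstructed
# from a garbled original; treat it as an assumption).

def cti_per_krad(q_electrons):
    """Charge transfer inefficiency added per krad of protons at -60C."""
    return 0.14 * q_electrons ** -0.5

def cte_after_dose(q_electrons, dose_krad):
    """CTE = 1 - CTI after a given proton dose, ignoring pixel size,
    temperature and readout-time effects, as the text cautions."""
    return 1.0 - dose_krad * cti_per_krad(q_electrons)

# A 100-electron signal after 2 krad of protons:
print(round(cte_after_dose(100.0, 2.0), 4))  # 0.972
```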
The damage problem can be reduced by using higher orbits which remain above
the radiation belts. Highly elliptical orbits which go through the radiation belts will
generally have a worse problem than the low orbits which just skim through the SAA.
Geosynchronous orbits remain in the upper radiation belts, which contain mainly high energy electrons that cannot be completely shielded out. While their damaging effects are limited, they can produce an increased background for data taking, as seen with IUE. Circular orbits with periods longer than 2-4 days, Lagrangian orbits, or siting on the moon will avoid this trapped radiation damage problem. However, similar levels of damage can occur from the occasional very large solar flare. The levels are very unpredictable, since one large flare can produce more damage than a decade of smaller flares. The damaging particles are a mixture of gammas, X-rays, protons, neutrons and heavier ions.
While the effects of particle damage may be less in high orbit than in low, apart
from the risk of a large solar flare, the effect of the primary cosmic rays on the detector
background is much worse. Figure 3 shows the energy spectrum of cosmic ray protons
(the dominant particle - see question discussion below), for various orbits. The 'free space
proton' curve, above the earth's magnetic shield, shows a total flux 10-20 times higher
than for a 32 degree low orbit, or 200 times higher than the flux at the earth's surface.
It is instructive to calculate the amount of shielding required to reduce the flux to the
level found at mountain observatories on the earth. From the 'free space proton' curve,
we must stop protons of less than 10 GeV energy. At an energy loss rate of 0.5 MeV/mm, we need 20 meters of silicon. The detailed calculation for moon rock requires knowing the density and composition of the moon down to these depths. It is suggested that a lunar observatory using CCDs should direct the beam from the telescope down to a focal plane of order 20 meters below the moon's surface. Measurements of the radioactivity of lunar soil show that it is less than that of granite. Clearly, shielding detectors on a spacecraft
against primary cosmic rays is impractical.
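The 20 meter figure follows directly from dividing the proton energy by the quoted loss rate; a one-line check:

```python
# Back-of-envelope shielding depth from the numbers in the text:
# stop protons of up to 10 GeV losing ~0.5 MeV per mm of silicon.
proton_energy_mev = 10_000        # 10 GeV expressed in MeV
loss_rate_mev_per_mm = 0.5        # quoted energy loss rate

depth_mm = proton_energy_mev / loss_rate_mev_per_mm
print(depth_mm / 1000, "meters")  # 20.0 meters
```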
The charge deposited in a typical thin CCD from a cosmic ray is about 1000 electrons,
with a broad charge distribution. A 15 minute exposure in high orbit would leave 5-10%
of the pixels affected, so that it would be impossible to detect faint images from individual
exposures. Ground-based astronomers typically limit their exposures to about an hour
to limit the probability of a cosmic ray hit in a pixel to about 0.1%. However, the
advantages of retaining the high quantum efficiency of CCDs in the visible and near IR
are considerable. What is necessary to remove cosmic ray hits from the images?
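The quoted 5-10% hit fraction can be roughly reproduced with a simple Poisson estimate. The assumption that each cosmic ray affects about 10 pixels (standing in for track length and charge spreading) is ours, not the text's:

```python
import math

# Rough estimate of the fraction of CCD pixels struck by cosmic rays,
# using the text's high-orbit flux of 2 protons/cm^2/s and a
# 2048x2048, 21 micron pixel CCD.  The ~10 pixels affected per hit
# is an illustrative assumption.
flux = 2.0                # protons / cm^2 / s, high orbit
pixel_cm = 21e-4          # 21 micron pixel pitch in cm
npix = 2048 * 2048
pixels_per_hit = 10       # assumed track length / charge spreading

area_cm2 = npix * pixel_cm ** 2

def fraction_hit(exposure_s):
    hits = flux * area_cm2 * exposure_s
    lam = hits * pixels_per_hit / npix   # mean hits per pixel
    return 1.0 - math.exp(-lam)          # Poisson P(>= 1 hit)

# A 15 minute exposure:
print(f"{fraction_hit(15 * 60):.1%}")  # 7.6%, within the quoted 5-10%
```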
Table 2 shows the allowed integration times to clean a combined image of cosmic ray hits by comparing individual registered images, and rejecting any pixel with counts significantly greater than the minimum value. A 2048x2048 21 micron pixel CCD is assumed (STIS case), with three cosmic ray environments: 0.01 cm^-2 s^-1 for the earth's surface and 20 meters under the moon, 0.1 cm^-2 s^-1 for low earth orbit (HST), and 2 cm^-2 s^-1 for high orbit (proposed 10 meter site). Perfect sub-pixel registration between images is assumed, to minimize the effects of mismatch of edges in the structure of the target image, and each cosmic ray count is spread amongst 4 pixels of the re-registered
images. For example, to obtain a total 25 minute exposure in the high orbit case, it must be split into four 6.3 minute exposures. At low signal/noise, for spectroscopy where the diffuse background is small, combining the images also increases the effective readout noise, and increases the exposure time needed for a detection. The Table shows the additional exposure factor required for a signal/noise of 5. This analysis assumes a low density of hits in each image, and ignores any difficulties in distinguishing the cosmic ray hits from real image structure, for all intensity values and slopes.
A feature of the above analysis is that some of the pixels (e.g. 2.2%, or 9000 out of 4
million, in the case above for a 25 min exposure in high orbit) have only the count value
from one individual image of the set to contribute. It would be necessary to subdivide the total image further to assure a reasonable minimum signal to noise in each pixel. For example, to ensure all pixels have at least half the images with real data, (N-1) images should be added to the N in the analysis above.
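The rejection scheme described above, comparing registered sub-exposures and discarding counts significantly above the per-pixel minimum, can be sketched as follows. The threshold and read noise values are illustrative assumptions:

```python
import numpy as np

# Sketch of the rejection scheme in the text: split one exposure into
# N registered sub-exposures and reject, per pixel, any value
# significantly above the minimum of the stack (a cosmic ray hit only
# raises counts, never lowers them).  Threshold choice is illustrative.

def clean_combine(frames, read_noise=5.0, nsigma=5.0):
    """frames: (N, ny, nx) array of registered sub-exposures.
    Returns the summed image with hit-affected samples rejected."""
    stack = np.asarray(frames, dtype=float)
    n = stack.shape[0]
    lo = stack.min(axis=0)
    # Reject samples well above the per-pixel minimum.
    thresh = lo + nsigma * np.sqrt(np.maximum(lo, 0.0) + read_noise ** 2)
    good = stack <= thresh
    ngood = good.sum(axis=0)
    # Average the surviving samples, then scale back to the full exposure.
    mean_good = np.where(good, stack, 0.0).sum(axis=0) / np.maximum(ngood, 1)
    return n * mean_good

# Tiny demo: 4 frames of a flat 100-count field, one frame hit by a
# 1000-electron cosmic ray in one pixel.
frames = np.full((4, 8, 8), 100.0)
frames[2, 3, 3] += 1000.0
combined = clean_combine(frames)
print(combined[3, 3], combined[0, 0])  # 400.0 400.0
```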
Even these exposure factor increases are small compared to the differences between CCD and photocathode quantum efficiencies in the near IR, so that CCDs should not be ruled out for high orbits, especially for broad band imaging, where diffuse background typically dominates over detector background. More sophisticated computer analyses and real CCD simulations of these effects are required before basing a high orbit mission on using CCDs.
Photoemissive Detectors
Detectors based on photoemissive photocathodes are much less sensitive than CCDs
to radiation damage and to radiation background while taking data. Their primary advan-
tage is that photocathodes can be selected which are sensitive to the UV and insensitive
to the visible/IR. In addition they can be used for fast timing applications without adding
readout noise.
Their primary disadvantage in the visible and near IR is that there are no photocath-
odes that can compete with CCDs for QE. For example, in the blue at 400nm the best
photocathodes are bialkalis, peaking at 20-25% QE compared to a good CCD with 65%,
and in the red at H-alpha the best trialkali photocathodes have about 7% QE compared
to a good CCD with 85%. Another problem, particularly for broad band imaging, is their
limited upper counting rate, particularly when used in photon counting modes.
UV Photocathodes
The difficulties of using CCDs in the UV are discussed above, including the sensitivity
to scattered or leaked light from the visible, and contamination of the cooled surface by
UV-absorbing layers.
Photocathodes are available, such as CsTe (< 300 nm), CsI (< 170 nm), KBr (< 150 nm) and MgF (< 110 nm), which are sensitive below and insensitive above these cutoff wavelengths. UV sensitivities can vary from QEs of 10-25% for semi-transparent cathodes to 30-80% for opaque cathodes. High purity cathodes can cut off sharply, with QEs of 10^-4 to 10^-5 on the long wavelength side of the cutoff wavelength. Primarily for this reason, photoemissive detectors with CsI and CsTe cathodes, MultiAnode Microchannelplate Arrays (MAMAs), were selected for the STIS instrument.
Timing
Many readout schemes using photocathodes read out the event promptly, or store it for times short compared to unintensified CCDs. This makes them competitive even in the visible for tasks such as speckle imaging and spectroscopy, astrometry, flare stars and pulsars, and many applications in medical, laboratory, and military situations. Prompt readout detectors can identify pulse arrival times to less than one microsecond, if needed.
Radiation effects
Intensifiers
Readout Methods
A class of readout methods which encode each recorded photon event promptly, such that they may be time-tagged (each event is recorded with a two-dimensional position and a time), is possible using the increased electron charge output by a MCP.
These detectors may also be used in accumulate mode by adding counts into an image
memory. This class includes the Multi-Anode MicroChannel Array (MAMA) (Timothy,
1983; Timothy and Bybee,1986), selected as UV detectors for the STIS and SOHO pro-
grams, the Wedge and Strip as selected for EUVE (Siegmund et al., 1986), the Delay
Line Anode (Lampton, Siegmund and Raffanti, 1987), and the Mepsicron (Lampton and
Paresce, 1974). As photon-counters using MCPs at high gain, they are all limited in local
counting rate by the MCP. In addition their total counting rate over the whole array is
limited by the readout electronics speed to about 10^5 - 4x10^6 counts/s in current designs.
Alternatively, intrinsically framing readout schemes can be used, where the intensified electron signal is temporarily stored and then read out sequentially into memory or a tape recorder, or downlinked directly via telemetry. The storage medium can include a CCD (Broadfoot, 1987; Stapinski, Rodgers and Ellis, 1981; Williams and Weistrop, 1984; Green, 1989; Jenkins et al., 1988; Swartz et al., 1988; Poland and Golub, 1988; Chen, 1987), or a 'vidicon' (Boksenberg, 1975; Macchetto et al., 1982; Boggess et al., 1978). If the frames are read out sufficiently rapidly that only one event is recorded in each pixel in each frame, then the detector may be used as a photon counter. Then the upper count rate is usually even more severely constrained than for the MCP, since it can only reach a fraction of the frame rate. For example, the Faint Object Camera of the HST becomes non-linear at less than the frame rate of 30/s. Such a photon counter can centroid individual events
to improve internal resolution and effectively increase the format size. The on-the-fly
processing required often further decreases the upper count rate limit. The charge storage
medium, CCD or 'vidicon', can alternatively be read out in analog mode, recording the charge representing many photons, as in the IUE detector. The SEC Vidicon used in IUE has a limited dynamic range, with only a factor of about 10 between noise level and saturation, which has required a choice between sensitivity and signal-to-noise. Using a CCD in analog intensified mode could in principle help reduce the effect of recording cosmic rays, by adjusting the gain to adjust the relative charge representing a photon. The cumulative radiation damage problem would still occur, but the effects could be reduced by operating at higher charge levels. The concept used in the one-dimensional Digicons, as used in the HST Faint Object Spectrograph and High Resolution Spectrograph (Beaver and McIlwain, 1971), with prompt readout of a diode array, is difficult to extend to two dimensions because of the large number of amplifiers needed.
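The frame-rate non-linearity of photon-counting framing detectors can be sketched with a simple Poisson coincidence-loss model, in which at most one event per pixel is registered per frame. The model itself is our illustration, not taken from the text:

```python
import math

# Poisson coincidence-loss sketch for a framing photon counter:
# at most one event per pixel can be registered per frame, so the
# measured rate saturates at the frame rate f.  This illustrates why
# a 30 frame/s detector (like the HST FOC) goes non-linear well
# below 30 counts/s per pixel.
def measured_rate(true_rate, frame_rate=30.0):
    """Registered count rate per pixel given a true event rate."""
    return frame_rate * (1.0 - math.exp(-true_rate / frame_rate))

for r in (1.0, 10.0, 30.0):
    print(f"true {r:5.1f} -> measured {measured_rate(r):5.2f} counts/s")
```

At a true rate equal to the frame rate, more than a third of the events are already lost.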
Assessing the possibilities for development in the future, the major unmet need is for a
UV camera capability with high dynamic range (ability to cope with targets brighter than
a 23rd magnitude B star when used with a 10 meter telescope), and large format. One
development path is to devise higher count rate MCPs, where the potential distribution
can be restored much more quickly to the depleted channel. Using separate materials
for secondary emission and for establishing the potential distribution could help. Another possibility is a discrete-dynode MCP constructed of several layers of alternating resistive and conducting sheets with holes. Such improvements would extend the utility
of the majority of modern intensified detectors, which use MCPs. A temporary expedient
of scanning the image over the detector has been used successfully at Ball Aerospace to
move bright points to a fresh location until the MCP has recovered. This works with
a dilute image consisting of bright features on a relatively dark background, such as a
star field, but does require a mechanism such as a continuously nutating mirror. Another
development path would be to convert current framing detectors using photon counting
to also distinguish many events per pixel per frame in analog mode. This is being done at
GSFC with the detector for the Coronal Diagnostics Spectrograph instrument on SOHO
(Poland and Golub, 1988). It would be particularly attractive if it could be used with
the opaque photocathode method with its higher quantum efficiency in the UV (Jenkins
et al, 1988).
These and other methods using cryogenic noise reduction would require considerable
sustained funding to develop into useful practical detector systems, but the advantage of
having simultaneous two dimensional imaging and spectroscopy is also very large.
Because of their high quantum efficiency, CCDs are the preferred detector for the visible and near IR for most purposes. Larger formats are desirable, up from the current 10^6 pixels to 10^7 - 10^8 pixels. A combination of monolithic pixel number increases and mosaic arrays is needed, with multiple simultaneous outputs to maintain reasonable readout times. Further quantum efficiency optimization is needed to extend the highest efficiencies over the full wavelength range.
Rejection of cosmic ray hits from the images is a significant problem in orbit, and very severe in high orbit above the earth's magnetic shield. Methods of combining many frames should be simulated and tested for effective rejection in the presence of real images before committing to a high orbit mission using CCDs. Further reductions in readout noise will be necessary to maintain signal to noise when many images are combined. The problem can be avoided on the moon by burying the CCDs under 20 meters of rock.
Radiation hardening of CCDs against proton damage as well as against ionizing radiation is required, especially for maintaining the charge transfer efficiency. Both manufacturing and operational considerations are important, including temperatures, time scales and voltage distributions.
UV Detectors
For extension of CCD use into the UV, visible-rejecting filters would be desirable, but contamination of cold CCDs by UV-absorbing films is also severe.
Currently, photoemissive detectors are preferred in the UV because of their ability to reject the visible and to operate without cooling. Larger formats are desirable, which in some designs will require mosaicking of both intensifiers and readout arrays. Some further quantum efficiency improvements may also be possible. Matching detector front surfaces to curved focal planes is desirable for large format cameras and for spectroscopy.
The primary unmet need is for higher dynamic range for UV cameras. Both higher rate MCPs and analog use of framing cameras should be pursued.
Energy Resolving Detectors
Long term, consistent and sustained development efforts would be required to obtain detectors which could provide simultaneous energy and two-dimensional position measurement, but they would be highly desirable.
Orbit Selection
It is important to consider the radiation damage and cosmic ray rejection aspects of available and developable detectors when selecting the best orbit for a mission. For example, a highly elliptical orbit of a few days period, ideal from the point of view of observing duty cycle and launch mass, is the most difficult for CCDs for both radiation damage and cosmic ray rejection. A low orbit with very low inclination to the equator, such as those used for the Small Astronomy Satellites, would be most favorable for CCD performance, although more difficult for launch mass and operations.
I thank Jim Janesick for providing his viewgraphs for this talk, from which much of the CCD discussion was derived, and for the continuing flow of broadly distributed memos which keeps many of us up to date on the latest results and ideas.
References
Note: The several references to the Proceedings of the Conference on CCDs in Astronomy, Tucson, Sept 1989, ed. G. Jacoby, are shown as PCCCDA below.
Gunn, J.E. and Westphal, J.A., 1981, SPIE Solid State Imagers for Astronomy, 290, 16.
Janesick, J., Elliott, T., Bredthauer, R., Chandler, C. and Burke, B., 1988, SPIE X-ray Instrumentation in Astronomy, San Diego, Aug 88.
Janesick, J., JPL Distributed Memos, 1988 Jan 19, and 1989 Aug 1.
Macchetto, F. and the FOC Team, 1982, The Space Telescope Observatory, NASA CP-2244, ed. D.N.B. Hall.
Peckerar, M.C., Bosiers, J.T., McCarthy, D., Saks, N.S. and Michels, D.J., 1987, Appl. Phys. Lett., 50, 1275.
Petroff, M.D., Stapelbroeck, M.G. and Kleinhans, W.A., 1987, Appl. Phys. Lett., 51, 406.
Poland, A.I. and Golub, L., 1988, An Intensified CCD for SOHO, NASA Report.
Reed, R. and Green, R., 1989, PCCCDA.
Schempp, W., Griffin, F., Sims, G. and Lesser, M., 1989, PCCCDA.
Siegmund, O.H.W., Lampton, M.L., Bixler, J., Chakrabarti, S., Vallerga, J., Bowyer, S. and Malina, R.F., 1986, JOSA-A, 3, 2139.
Stapinski, T.E., Rodgers, A.W. and Ellis, M.J., 1981, PASP, 93, 242.
Stern, R.A., Catura, R.C., Kimble, R., Davidsen, R.F., Winzenread, M., Blouke, M.M., Hayes, R., Walton, D.M. and Culhane, J.L., 1987, Optical Engineering, 26, 875.
Swartz, M., Epstein, G.L. and Thomas, R.J., 1988, NASA/GSFC X-682-88-2.
Timothy, J.G., 1983, PASP, 95, 810.
Timothy, J.G. and Bybee, R.L., 1986, SPIE Ultraviolet Astronomy, 687, 109.
Williams, J.T. and Weistrop, D., 1984, SPIE, 445, 204.
Westphal, J., 1988, Report to the HST Science Working Group, May 4.
Woodgate, B.E. and Fowler, W., 1988, Proceedings of the Conference on High Energy Radiation in Space, Nov 87, Sanibel Island, FL, eds. A.C. Rester and J.I. Trombka (AIP Conference Proceedings No. 186).
Woodgate, B.E. and the STIS Team, 1986, SPIE Instrumentation in Astronomy VI, 627, 350.
DISCUSSION
Johnson: The speaker mentioned a 20 meter thickness of lunar soil and rock for shielding. Silberberg and coworkers at NRL have calculated the depth of lunar soil required as 400 to 700 g/cm^2 to bring the annual dose equivalent to less than 5 rem/year. They were looking at the protection required for long term human habitation on the moon. It appears that 2 to 3 1/2 meters of compacted lunar soil would be adequate for cosmic rays, their secondaries, and giant flares (e.g., the flare of February 1956). See Silberberg in Lunar Bases and Space Activities in the 21st Century, W. W. Mendell, editor, Lunar and Planetary Institute, Houston, Texas, 1985.
MacKay: In your estimate of cosmic ray event rates, did you take into account the much higher rates
that could come from secondary events absorbed by any surrounding structure?
Woodgate: No. According to J. Trombka, the primary protons can produce a somewhat larger number of secondary neutrons in surrounding structure, depending on its thickness and composition. Typically in interplanetary space, about 2 protons cm^-2 s^-1 produce about 5 fast neutrons cm^-2 s^-1. These fast neutrons can partially thermalize in large spacecraft (e.g. about 1 meter of aluminum), especially if low density materials such as hydrogen in fuel or water are present. The thermal neutrons can easily be removed by small quantities of boron or rare earths, as in a paint, for example at the interior of a shield. The fast neutrons will not produce ionization, and will mostly pass through the detector undetected. A very few will recoil, and may produce an event which must be vetoed. The number of these events will be much less than the number of primary protons, but should be quantitatively assessed for each situation, as should the damage produced by the neutrons.
TABLE 1. TYPICAL CCD QUANTUM EFFICIENCIES [table not reproduced]
FIGURE 1. CCD dark current and radiation damage. [plot not reproduced]
FIGURE 3. Cosmic ray proton energy spectrum. [plot not reproduced]
Infrared Detectors for a 10 M Space or Lunar Telescope
Rodger I. Thompson
Steward Observatory - University of Arizona
1. Introduction
…region. This conference is limited to the region shorter than 10 μm; therefore, the longer wavelength regions will not be discussed here.
2. Background environment
The background environment of the detector is set by the thermal emission from the
telescope optics at long wavelengths and by the zodiacal background at shorter wave-
lengths. Here the zodiacal background consists of both a scattered component and a
thermal emission component. The thermal emission is characteristic of a diluted 265 K
blackbody. The zodiacal background is modeled by the following function [equation not reproduced], where Z is in photons per cm^2 per second per micron per steradian and BB is the blackbody flux in the same units. This equation approximates the zodiacal flux at approximately 45 degrees above the ecliptic.
Figure 1 shows the expected background based on the zodiacal flux and the radiation from the telescope. The pixel size is the diffraction limit, which scales as λ^2 in area. The bandwidth is 50% of the wavelength, which scales as λ. The product of pixel size and bandwidth cancels the wavelength dependence of the scattering term of the zodiacal light. For this reason the background is flat in the spectral region out to 3 μm, where the thermal emission begins to dominate over the scattered light. It is also evident that the thermal emission from the telescope begins to dominate over the zodiacal light at longer wavelengths. For true natural background limited operation, the telescope should be at a temperature of 60 K or colder. The minimum photon flux at short wavelengths is about 1 photon per second.
Figure 1. [Imaging background vs. wavelength in microns; plot not reproduced.]
Figure 2. [Spectroscopic background vs. wavelength; plot not reproduced.]
The spectroscopic background is the same as the imaging background except reduced by the much higher resolution. In Figure 2, the background is plotted for a bandwidth of 10^-4 instead of the 0.5 used in the imaging plot.
The region between 1 and 4 μm has background fluxes substantially below any of the current detector dark currents. This will have to be a primary area of development to exploit the spectroscopic opportunities. The detectors will also require a large amount of radiation shielding to achieve the low photon background performance levels.
The low background in the 1 to 3 μm region, coupled with the cosmological redshift, offers a unique opportunity for a very deep survey of the Universe. The most advantageous region is 3.0-3.5 μm, where galaxies at high redshift have their peak emission but where the thermal emission is not yet dominant. The thermal emission of the warm HST mirror starts to dominate at wavelengths longer than 1.7 μm, and HST has a much smaller mirror than the proposed telescope. The cold telescope has a distinct advantage in this area.
The HgCdTe 256x256 arrays developed by the Rockwell Corporation for the second generation HST instrument NICMOS represent the present status of detectors in the short wavelength region. These are hybrid detectors which bond a HgCdTe photodiode array to a silicon switched-MOSFET readout. The performance levels for 2.5 μm cutoff arrays are given in Table 1. Comparison of the performance levels with the detector environment for imaging shows that the dark current is on the order of the background flux. Some reduction of the dark current will be advantageous, but the detectors are near the requirements for imaging. Since the dark current rises steeply with cutoff wavelength, some development will be needed to extend the low dark current performance to 4 μm cutoffs.
For short integration times the read noise dominates over dark current and background as the detector noise source. Reduction of the read noise from 30 electrons to 10 electrons results in background limited performance after integrations on the order of 100 seconds. Further reduction is of course desirable, but again the present detectors are close to the requirements. Extension of the pixel format to 1024x1024 will cover the expected field of view. This can be done with mosaicing via focal plane splitting. Unless the pixel sizes are reduced to approximately 10 μm, it will be difficult to extend a single hybrid to 1024x1024 due to the thermal mismatch between the sapphire substrate which carries the HgCdTe and the silicon multiplexer chip. Direct deposit of HgCdTe on Si may be the answer for larger arrays.
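The read-noise scaling above can be checked against the figures in Table 1. The per-pixel background rate is backed out from the 1150 second entry and is therefore an inference, not a tabulated value:

```python
# Time for background shot noise to overtake read noise: collected
# background is phi * t electrons with shot noise sqrt(phi * t), so
# the detector is background limited once phi * t > RN**2, i.e.
# t_bg ~ RN**2 / phi.  The rate phi ~ 0.78 e/s is inferred from the
# quoted 1150 s at 30 e read noise (an assumption, not a tabulated
# value).
def t_background_limited(read_noise_e, phi_e_per_s):
    return read_noise_e ** 2 / phi_e_per_s

phi = 30.0 ** 2 / 1150.0                       # back out the rate
print(round(t_background_limited(10.0, phi)))  # 128, i.e. "order 100 s"
```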
Further refinement of the HgCdTe growth process should yield quantum efficiencies in
the 80% range throughout the wavelength band. Current detectors can reach this Q.E. at
various wavelengths but no detector to date has this efficiency over the whole wavelength
region.
Table 1. Imaging Performance of HgCdTe Array Detectors
Present status:
  Dark current < 1 e/sec at 60 K; 2.5 μm cutoff wavelength
  Read noise of 30 e
  Average quantum efficiency is 65%
  256x256 pixel format with 40 μm pixels
  Time to background limit for telescope is 1150 seconds
Development goals:
  Dark current nearly adequate; extend performance to 4 μm
  Reduce read noise to below 10 e; gives a time to background limit for the telescope of less than 100 seconds
  Increase quantum efficiency to 80%
  Extend the format to 1024x1024 to cover the expected field of view; larger format applicable at 1 μm
  Develop monolithic HgCdTe or HgCdTe/Si manufacturing processes
Blocked Impurity Band (BIB) detectors represent the best current performance levels in the 4-10 μm spectral region. The combination of high background and the excellent performance levels shown in Table 2 indicates that these detectors already meet most of the performance requirements for the proposed telescope. Since these detectors have significant gain in the readout, the equivalent electron noise and dark current must be scaled by the gain. All of the performance figures presented have been adjusted for the gain effect. Some controversy still remains over the appropriateness of the scaling.
Production of large pixel format arrays still needs development, from the current 10x50 to the required 512x512 format. There is potential for almost a factor of two gain in quantum efficiency. Finally, the sensitivity of the detector to changes in the background flux needs to be corrected.
Table 2. Imaging Performance of BIB Arrays
Present status:
  Dark current is 4 e/sec (scaled by gain)
  Read noise is 12 electrons (scaled by gain)
  Average quantum efficiency is 45%
  10x50 pixel array format
  At natural background limit after 1.2 seconds at 7 μm
  At 100 K telescope limit in less than 1 second
  Sensitive to background changes
Performance goals:
  Increase quantum efficiency to 80%
  Increase pixel format to 512x512 to match telescope field of view
  Decrease sensitivity to background changes
5. Camera performance
Figure 3 indicates the time needed to achieve a signal-to-noise of 10 in a single pixel versus the flux in Janskys falling on that pixel. For diffraction limited sampling the flux from a point object is spread over several pixels; therefore, the limiting flux for a single object is modified by the area in pixels of the point spread function. Since this varies for different cameras, we have simply given the results in terms of limiting flux per pixel. The wavelengths of 3 and 7 μm are chosen as being typical of the band. Again the pixels are chosen to be diffraction limited and the bandwidth as 50% to calculate the background flux on the pixel. The change in slope of the curves from 1 to 1/2 occurs when the background or dark current noise dominates the read noise. When the observation is read noise limited, the signal to noise improves directly with integration time. For dark current or background noise, the signal to noise improves as the square root of the time. Shot noise, or source noise, has some effect on the 7 μm curve, since it has a read noise of only 4 electrons whereas the shot noise for a signal to noise of 10 is 10 electrons in the absence of other noise. The 3 μm curve has a read noise of 30 and is not significantly affected by shot noise for a signal to noise of 10.
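The slope behavior just described can be illustrated by solving for the limiting per-pixel source rate reaching S/N = 10 in a given time. The background and read-noise values used here are illustrative assumptions:

```python
import math

# Limiting per-pixel source rate (electrons/s) reaching a given S/N
# in time t, with noise = sqrt(source + background counts + RN**2).
# Illustrates the slope change described in the text: the limiting
# rate falls as 1/t when read noise dominates, and as 1/sqrt(t) when
# the background does.  bg_rate and read_noise are illustrative.
def limiting_rate(t, bg_rate, read_noise, snr=10.0):
    # Solve snr = s*t / sqrt(s*t + bg_rate*t + read_noise**2) for s,
    # i.e. the quadratic  t^2 s^2 - snr^2 t s - snr^2 (b t + RN^2) = 0.
    a = t * t
    b = -snr ** 2 * t
    c = -snr ** 2 * (bg_rate * t + read_noise ** 2)
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

for t in (10.0, 100.0, 1000.0, 10000.0):
    print(f"t = {t:7.0f} s  limiting rate = {limiting_rate(t, 1.0, 30.0):8.3f} e/s")
```

Between 10 s and 100 s (read-noise limited) the limiting rate drops by nearly a factor of 10; between 1000 s and 10000 s it drops by only a factor of a few, approaching the square-root regime.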
Figure 3. [Time to reach S/N = 10 per pixel vs. flux per pixel; plot not reproduced.]
Current HgCdTe detectors are not at the theoretical limit of the dark current. Improvements of several hundred are theoretically possible, although the source of the excess dark current is not well understood. It will have to be the astronomical community that drives the materials research to achieve the low dark currents, as most other applications are well served by the currently achievable dark current levels.
The read noise goal of 3 electrons comes from the qualitative requirement that a useful spectrum should have a signal to noise of at least 10, which requires 100 photons with a shot noise of 10. At 3 electrons the read noise is significantly less than the shot noise. Read noises of 3 electrons and less have been achieved with optical CCDs, which gives some confidence that similar noise levels can be achieved with the infrared arrays.
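The arithmetic behind the 3 electron goal is simple quadrature of read noise with shot noise; a short illustration:

```python
import math

# How read noise degrades a shot-noise-limited S/N of 10:
# 100 detected photons give shot noise 10, and read noise adds
# in quadrature.
def snr(photons, read_noise):
    return photons / math.sqrt(photons + read_noise ** 2)

print(round(snr(100, 3), 2))    # 9.58 - nearly shot limited
print(round(snr(100, 10), 2))   # 7.07 - noticeably degraded
print(round(snr(100, 30), 2))   # 3.16
```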
For high resolution spectroscopy larger formats are required to accommodate cross
dispersed spectra and multi-object spectroscopy with fibers. Formats of 2048x2048 are
useful in this context. Image plane splitting and reimaging may be more difficult in the spectroscopic mode, and are therefore a strong driver for the alternatives to the hybrid techniques described earlier.
Table 3. Spectroscopic Performance of HgCdTe Arrays
Present status:
  Dark current < 1 e/sec at 60 K; 2.5 μm cutoff wavelength
  Read noise of 30 e
  Average quantum efficiency of 65%
  256x256 pixel format with 40 μm pixels
  For R=10,000 spectroscopy, never at the telescope background limit
Development goals:
  Reduce the dark current by 10,000 or to the radiation limit
  Reduce the read noise to below 3 e; at this level the read noise does not significantly affect a S/N=10 spectrum
  Increase average quantum efficiency to 80%
  Extend the format to at least 2048x2048 to accommodate multi-object cross dispersed spectroscopy
The goals for the detector development in the 4-10 μm range are similar to those of the 1-4 μm region. At the short wavelength end the same reduction in dark current is required. At 10 μm a reduction of about 40 is required to reach the zodiacal limit. If the telescope is not cold enough to reach the zodiacal limit, less stringent requirements may be accommodated.
Table 4. Spectroscopic Performance of BIB Arrays
Present status:
  Dark current is 4 e/sec (scaled by gain)
  Read noise is 12 electrons (scaled by gain)
  Average quantum efficiency is 45%
  256x256 pixel array format
  At the natural background limit after 5700 s at 7 μm, R=10,000
  At 100 K telescope limit after 1,500 seconds at 7 μm, R=10,000
Performance goals:
  Dark current reduced by 10,000 or at the radiation limit
  Read noise < 3 electrons for S/N=10 spectra
  Increase quantum efficiency to 80%
  Increase pixel format to 2048x2048 for multiple object cross dispersed spectroscopy
8. Spectrometer performance
The predicted spectrometer performance (Figure 4) has two distinct regions, similar to the imaging performance chart. At short integration times the read noise dominates and the limiting flux vs. time relation is linear. At longer integration times the dark current limits the 3 μm spectrum and the natural background limits the 7 μm spectrum. In both of these cases the limiting flux decreases with the square root of the time. The higher long wavelength background causes the crossover of the two curves around 200 seconds.
Figure 4
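The two regimes described above can be illustrated with a small sketch of the limiting-flux law. The read noise and dark current values are illustrative placeholders, not the measured detector numbers:

```python
import math

def limiting_flux(t_s, read_noise_e=3.0, dark_e_s=0.01, snr=10.0):
    """Faintest detectable flux (e/s) reaching the given S/N in t seconds.
    Noise is dark-current shot noise plus read noise; source shot noise
    is ignored in this faint-source limit. Values are illustrative."""
    noise_e = math.sqrt(dark_e_s * t_s + read_noise_e**2)
    return snr * noise_e / t_s

# Read-noise-limited regime: quadrupling t deepens the limit ~4x (linear).
print(limiting_flux(10.0) / limiting_flux(40.0))
# Dark-limited regime: quadrupling t deepens the limit only ~2x (sqrt law).
print(limiting_flux(1e5) / limiting_flux(4e5))
```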
These performance levels make a 100 K telescope a very exciting prospect. Future lowering of read noise and dark current levels will greatly increase the spectroscopic performance and will to a lesser degree enhance the imaging capabilities. Emphasis should be applied to finding the best detector array to exploit the region around 4 microns, where the background is near a minimum and the flux from highly redshifted objects will be at a maximum.
In order to exploit the low background levels of such a telescope, significant effort needs to be applied to reducing the radiation levels on the detectors and to hardening them against radiation. Particularly important is the hardening against energetic protons, which are the main component of the background high energy radiation. There is a trade-off on telescope location which will be important in this context. Although it imposes operational constraints, the low earth orbit of HST has a significantly lower radiation density than the high earth orbit proposed for the next telescope. Location on the lunar surface may allow more shielding by either attached material or shielding by lunar material. From a scientific production viewpoint there is not yet a clear discriminator between these locations.
DISCUSSION
Thompson : I am not an expert in this area; perhaps other people could comment.
Eisenhardt : Si BIB detectors will operate to 28 μm. We should consider making λmax = 30 μm for this telescope; even if it's not background limited, it's still far better than observing from the ground, and complements LDR. Also SIRTF PIs are doing tests on IR detectors with protons and γ-rays.
Thompson : We have just begun radiation testing of short wavelength cutoff HgCdTe. It recovers immediately from γ-ray and e-radiation doses equivalent to the SAA. We still have to do proton and long duration tests.
Rehfield : What temperature was the Rockwell Array operating at for demonstration?
Thompson : The array was operating at liquid N2 temperature, 77 K. The NICMOS array will be operated at 58 K, where the dark current will be about a factor of 10 less.
317
Session 6
Space Logistics
319
Advantages of High vs. Low Earth Orbit for SIRTF
Peter Eisenhardt* and Michael W. Werner, NASA-Ames Research Center
1 Introduction
While the subject of this workshop, which we'll refer to as ET (for Enlightenment Telescope), is a dazzling successor to the Hubble Space Telescope, its location is unlikely to be the Low Earth Orbit (LEO) used by HST. Locations suggested for ET include High Earth Orbit (HEO) and the moon.
The first space telescope to occupy HEO will be the liquid helium cooled Space Infrared Telescope
Facility (SIRTF - Figure 1). (For reasons given in Section 3.3.2, we do not consider geosynchronous
orbits, such as that occupied by the International Ultraviolet Explorer telescope, to be in the HEO
class.) The selection of HEO for SIRTF was the outcome of a recent study led by the Ames Research
Center which showed significant advantages for SIRTF in HEO vs. LEO. This article summarizes
the main results of that study. We begin with a review of SIRTF's rationale and requirements,
in part because the IR capabilities and low temperature proposed for ET make it something of a
successor to SIRTF as well as to HST. We conclude with some comments about another possible
location for both SIRTF and ET, the Earth-Sun L2 Lagrangian point.
321
Figure 1: The Space Infrared Telescope Facility (SIRTF), shown in its new high altitude Earth orbit
(HEO) configuration.
322
Figure 2: Atmospheric transmission (mountain top) and infrared background (mountain top and balloon).
323
Table 1: SIRTF Instrumentation Summary
Infrared Array Camera (IRAC), G. Fazio, Smithsonian Astrophysical Observatory: Wide field and diffraction limited imaging, 1.8-30 μm, using arrays with up to 256 x 256 pixels. Simultaneous viewing in three wavelength bands, selectable filters. Polarimetric capability.
Infrared Spectrometer (IRS), J. Houck, Cornell University: Grating spectrometers, 2.5-200 μm, using two dimensional detector arrays. Resolving power from 100 to 2500. Low and high resolution options at most wavelengths.
Multiband Imaging Photometer for SIRTF (MIPS), G. Rieke, University of Arizona: Background limited imaging and photometry, 3-200 μm, using small arrays with pixels sized for complete sampling of the Airy disk. Wide field, high resolution imaging, 50-120 μm. Broadband photometry and mapping, 200-700 μm. Polarimetric capability.
studies of composition and physical conditions in planetary atmospheres, the interstellar medium,
and external galaxies. The spectrograph will also be used to determine the nature of objects discov-
ered in SIRTF's surveys, many of which will be too faint to be studied in detail from other platforms.
SIRTF's instruments are more fully described in Ramos et al. (1988). More complete discussions
of SIRTF's scientific objectives and potential can be found in Rieke et al. (1986) and in a series of
articles in Astrophysical Letters and Communications (Vol. 27 No. 2, pp. 97 ff., 1987).
SIRTF's scientific objectives require not only high sensitivity but also excellent performance in many other areas. The resulting system parameters and requirements are listed in Table 2. Also shown for comparison are the parameters for the Infrared Astronomical Satellite (IRAS). SIRTF's
gains over the IRAS survey in wavelength coverage, spatial and spectral resolution, sensitivity, and
lifetime call to mind the gains of HST over the Palomar sky survey, as befits a Great Observatory.
SIRTF's biggest gains over IRAS result from the fact that its instruments will be equipped with
large arrays having up to 65,000 detector elements. The current status of IR detector development
is the subject of the 1989 Proceedings of the 3rd Infrared Detector Technology Workshop, C.R.
McCreight, editor.
We have also listed in Table 2 the corresponding parameters for ET, assuming a temperature of 100 K and 10% emissivity. Using diffraction limited pixels, SIRTF and ET will see the same background levels from 2 to about 6 μm, beyond which the thermal emission of ET starts to dominate. This region includes the 3 μm cosmic "window", and extends to wavelengths where extinction coefficients are only a few percent of that in visible light. ET's wavelength range should probably extend to 30 μm, for overlap with the Large Deployable Reflector, and because much of this regime is unobservable from the ground. ET's larger collecting area will provide over one hundred times more signal per pixel than SIRTF for unresolved sources, which translates directly into improved sensitivity from 2-6 μm. Even with its higher temperature, ET's sensitivity will be about nine times better than the 6 μJy achieved by SIRTF at 10 μm (5 mJy is a comparable sensitivity from current ground based telescopes), equal to SIRTF's at 20 μm, and only two times less than SIRTF's at 30 μm. ET integration times to achieve the same point source sensitivity at 10 μm will be ten thousand times shorter than for a groundbased 10 m telescope such as the Keck, even if adaptive
324
Table 2: Comparison of SIRTF, IRAS and ET System Parameters
Figure 3: Relative size of the Earth in comparison to low Earth orbit (LEO - left), and high Earth orbit (HEO - right). Also shown on the right are the relative size of a geosynchronous orbit (36,000 km altitude) and of the Van Allen belts (adapted from Ness 1969). The inclination of LEO and HEO for SIRTF is 28.5°, while the Van Allen belts are shown in the noon-midnight plane (inclination 90°) with the sun to the left. The tilted axis of symmetry of the belts is the geomagnetic equator.
326
Figure 4: Cutaway view of the SIRTF telescope concept for the HEO mission.
327
3.2 Technical Concerns
The two principal technical concerns which arose during the study of the HEO SIRTF option were in the areas of launch vehicle capability and lifetime. These issues are fundamental for ET as well.
The present HEO baseline configuration for SIRTF has a mass of 4370 kg. By comparison, the
expected Titan IV/Centaur launch capability to 100,000 km for a SIRTF sized payload is at least
5770 kg. It is felt that the 1400 kg difference (36% of the current mass excluding the helium) is
adequate margin in this critical area. However, launching a ~ 15,000 kg, 12 m wide ET into HEO
is clearly a formidable challenge. (Note that the Titan's launch capability to geosynchronous orbit
The cryogen lifetime is an issue for SIRTF in HEO because on-orbit cryogen refill, which was to
be used in LEO to achieve the five year lifetime, will not be available in this option. Preliminary
optimization of the 4000 liter HEO system yields an estimated useful on-orbit cryogen lifetime
(exclusive of losses due to launch holds, on-orbit cooldown, etc.) of six years, giving a 20% margin
over the requirement. The dramatic increase over the 2.5 year lifetime predicted for the same size
dewar and the same instrument complement in LEO reflects:
1. the lower outer shell temperature (110 K vs. 220 K) achievable in HEO due to the greatly reduced heat load from the Earth and, to a lesser extent, to the shading provided by the fixed solar panel; and
2. reduction by more than an order of magnitude in the "aperture load" - the power radiated into the dewar by the aperture shade - which results from the fact that the aperture shade in HEO is both smaller and colder than in LEO.
Cryogen refill may be an issue for ET also: ET's IR detectors will almost certainly require temper-
atures lower than can be achieved using passive cooling. The difficulty of access to HEO after the
initial launch may be a broader concern for ET, because of the possible need for on-orbit servicing
in general to achieve long life, and for on-orbit assembly as a way to reduce the launch diameter
and/or mass.
Figure 3 shows the relative sizes of the Earth and SIRTF's orbit for the HEO and LEO missions, while Figure 5 compares the general configuration of the orbiting observatories in the two missions. Both missions have an orbital inclination of 28.5° to maximize the launch capability and to permit servicing in the LEO mission. The 280-fold reduction in solid angle subtended by the Earth in HEO leads to the thermal advantages cited above and also permits more freedom in telescope pointing, even though the solar and terrestrial avoidance angles are larger in HEO. These larger avoidance angles cannot be satisfied when SIRTF passes directly between the Earth and Sun in the LEO mission. The 122° angle subtended by the Earth permits Sun and Earth avoidance angles no greater than 59° at these times, which occur for several orbits approximately every 28 days. The use of the larger avoidance angles (80°) in HEO leads to a smaller aperture shade, which in turn permits the forward portion of the telescope baffle tube to be shortened while maintaining the required rejection of stray radiation. The reduction in aperture shade size and forebaffle length lead to the overall shortening of the system shown in Figure 5. Note also in Figure 5 that the solar arrays and antennae are fixed in the HEO system but must be deployable and steerable in LEO. The dewar, optical system, and instruments are essentially identical in the two concepts. Thus the analysis and technology work done on these system elements for LEO carries over directly into the HEO system.
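The 122° Earth angle and the roughly 280-fold solid-angle reduction quoted above follow from simple geometry. The sketch below assumes a 900 km LEO altitude and the 100,000 km HEO altitude; it is an illustrative recomputation, not taken from the study itself:

```python
import math

R_EARTH = 6378.0  # mean equatorial radius, km

def earth_solid_angle(altitude_km):
    """Solid angle (sr) subtended by the Earth from a circular orbit,
    using the spherical-cap formula 2*pi*(1 - cos(half_angle))."""
    r = R_EARTH + altitude_km
    half_angle = math.asin(R_EARTH / r)
    return 2.0 * math.pi * (1.0 - math.cos(half_angle))

full_angle_leo = 2.0 * math.degrees(math.asin(R_EARTH / (R_EARTH + 900.0)))
ratio = earth_solid_angle(900.0) / earth_solid_angle(100_000.0)
print(f"Earth full angle from LEO: {full_angle_leo:.0f} deg")
print(f"LEO/HEO solid angle ratio: {ratio:.0f}")
```

This recovers an Earth angle of about 122° from LEO and a solid-angle reduction of roughly 290, consistent with the ~280-fold figure in the text.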
328
Figure 5: General configuration of the LEO (left) and HEO (right) SIRTF concepts. The masses and lengths of the satellites are 6620 kg and 4370 kg, and 8 m and 5 m, respectively.
The locations of the orbits considered in the study were largely driven by the ionizing radiation envi-
ronment, which has three main components: particles trapped in the Earth's magnetic field, galactic
cosmic rays, and energetic particles produced in solar flares (see Stassinopoulos and Raymond 1988
for a comprehensive discussion).
Protons with energies up to 500 MeV are trapped in the Earth's magnetic field above an altitude of about 1000 km, while trapped electrons with energies up to 7 MeV are found to 70,000 km altitude
in the solar direction, and substantially farther in the anti-solar direction. These regions define
the Earth's Van Allen belts, and the LEO and HEO altitudes are, respectively, just below and
somewhat above them (Figure 3). In LEO, the main radiation effects are due to the intense fluxes
of trapped protons encountered during passages through the South Atlantic Anomaly (SAA), a low
lying region of the Van Allen belts. These passages occur almost every orbit and render about 25%
of the time unsuitable for observations. Because these high fluxes alter detector response, additional
time would be required for post-SAA annealing and recovery. Analysis by J. Mihalov at Ames of satellite data shows that in HEO, fairly high electron fluxes are still present at 70,000 km altitude in the anti-solar direction, and that moving to 100,000 km substantially reduces these. While the electrons are easily shielded, the γ-ray bremsstrahlung they produce is not, but at HEO the "hit" rate from bremsstrahlung is below that of cosmic rays (see below) for all but a few percent of the time.
Note that geostationary orbits (GEO) at 36,000 km altitude are in the heart of the electron belt
and are therefore a very noisy place for sensitive astronomical detectors: it is for this reason that
IUE, when not near apogee in its elliptical 24 hour orbit, has "bright" time. Another problem with
GEO is that the Earth's thermal load is no longer negligible (see the paper by Phil Tulkoff). Because
this misconception was voiced by several speakers at the workshop, it bears repeating: HEO is not
GEO!
329
The second source of ionizing radiation is galactic cosmic rays, mainly protons and alpha particles whose differential energy spectra peak near 1 GeV/nucleon. These high energies are impossible to shield against in a spacecraft, although the Earth's magnetic field and atmosphere are effective shields, and they are modulated by solar activity. Integrated fluxes in HEO range from 0.4 (at solar maximum) to 1 (solar min) per cm² of surface area per second, while in LEO the range is from 0.15 at the geomagnetic equator to 0.6 at high geomagnetic latitudes (Mason and Culhane 1983).
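These fluxes translate into a per-pixel hit rate that ultimately caps the useful integration time. A minimal Poisson sketch, assuming a hypothetical 40 μm pixel and the HEO solar-minimum flux quoted above:

```python
import math

def hit_fraction(flux_cm2_s, pixel_um, t_s):
    """Expected fraction of pixels struck at least once, treating
    cosmic-ray hits as Poisson events over the pixel's geometric area
    (normal incidence only; slanted tracks raise the real rate)."""
    area_cm2 = (pixel_um * 1.0e-4) ** 2
    mean_hits = flux_cm2_s * area_cm2 * t_s
    return 1.0 - math.exp(-mean_hits)

# 1 cosmic ray / cm^2 / s (HEO, solar minimum), 40 um pixels, 1 hour:
print(f"{hit_fraction(1.0, 40.0, 3600.0):.1%}")
```

For these assumed numbers a one-hour integration loses several percent of the pixels, which is why hour-scale limits on continuous integrations are discussed later in the text.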
In cases where performance is read noise limited, and the cosmic ray hit rate limits the maximum
3.3.3 Operations
On-orbit science operations in HEO will benefit from the reduced Earth solid angle, the long orbital
period (100 hours vs. 100 minutes in LEO), and the absence of the South Atlantic Anomaly (SAA
- see section 3.3.2 above). In addition, the contamination constraint introduced in LEO that the
telescope not point into the "wind" created by the orbital motion of the spacecraft does not apply
in HEO, where the atmospheric density is negligible. Depending on the relative location of the Sun,
Earth and spacecraft, SIRTF can view instantaneously 14 to 33% of the sky in HEO but only < 1 to
12% in LEO. In HEO, there are zones over the orbit poles (≈ 200 square degrees in total) which can
be viewed continuously, and accessible targets elsewhere in the sky can typically be observed for 50
or more consecutive hours. By contrast, there is no direction which can be viewed continuously in
LEO, because the combination of Earth limb and "wind" avoidance constraints limit the maximum
viewing time per target to 15 minutes. These and similar considerations suggest that the observation
planning and scheduling process will be much more straightforward in HEO. Maintaining the Earth
avoidance constraint in LEO means that tens of minutes in each 100 minute orbit must be spent
in large angle slews. Observing efficiency simulations for LEO including sky visibility, slewing, and
time lost to the high proton fluxes in the SAA, indicate that the average efficiency (fraction of time
on-target) would be about 45%. The corresponding efficiency in HEO is estimated at 90%. We
would expect similar gains for ET as compared to HST.
The HEO mission appears to have lower operational risk for SIRTF than the LEO option. In LEO
the performance and system lifetime depend critically on maintaining the cleanliness of the telescope
and the aperture shade, which are subject to contamination both by the residual atmosphere and by
problems which could occur during a servicing mission. In addition, an emergency safe-hold mode is
simply achieved in HEO by pointing at the constant viewing zone, while in LEO a complex series of
pointing constraints must be continually satisfied. Finally, the use of fixed solar arrays and antennae
in HEO eliminates the risk of failure during deployment or operation of these mechanisms.
330
• The doubled on-target efficiency means that the HEO mission will support twice the number of investigations, and produce twice the quantity of data, as the LEO mission.
• The longer on-target times and greater sky accessibility in HEO will allow SIRTF to operate
in a true observatory mode, in which a single scientific investigation can be scheduled in an
unbroken block of time. Data from the completed observation will be available to the observer
more quickly, and will be more uniform and easier to calibrate and reduce, than if it were
obtained on many successive orbits in the LEO mission.
• The long wavelength (> 100 μm) performance will be improved in HEO. SIRTF's sensitivity at these long wavelengths will be influenced by radiation from the telescope itself. To reach natural-background limited performance at 300 μm requires a forebaffle temperature of 7 K or below; the predicted temperature is 4 K in HEO and 8 to 14 K in LEO.
The longer on-target times and much less frequent eclipses in HEO imply that the temperature will be more stable as well as lower. These effects together should allow the HEO mission to achieve far better performance in the cosmic window at 300 μm, where recent results (Matsumoto et al. 1988) indicate that many important cosmological questions can be investigated. For ET, this thermal stability may be important to meet its pointing and image quality requirements.
• Performance in the 3 μm window may also be improved in HEO. The extremely low background photon rate means that observations (especially spectroscopy) near this wavelength are likely to be limited by detector read noise in 15 minutes, the maximum integration time for SIRTF in LEO. Under these circumstances, n² 15-minute integrations must be combined in LEO to equal the sensitivity of a single (n × 15) minute integration in HEO. However, the higher cosmic ray rates in HEO will probably limit continuous integration times to about an hour, depending on pixel size and the fraction of "hit" pixels acceptable.
• The high galactic latitude sky (latitude greater than 60°) can be observed at any time during
the HEO mission, but is accessible only about 25% of the time in LEO. This is important
because many of SIRTF's (and ET's) most important scientific objectives, including deep
cosmological surveys, will require extended access to high latitudes.
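The read-noise-limited co-addition penalty (n² short integrations to match one integration n times longer) can be verified numerically. The source flux and read noise below are purely illustrative assumptions:

```python
import math

def snr_coadd(n_exposures, t_each_s, flux_e_s, read_noise_e):
    """S/N of n co-added read-noise-limited exposures; background and
    dark current are taken as negligible near 3 um (illustrative)."""
    signal = flux_e_s * t_each_s * n_exposures
    noise = math.sqrt(n_exposures * read_noise_e ** 2 + signal)
    return signal / noise

flux, rn = 0.001, 30.0  # hypothetical faint source (e/s) and read noise (e)
leo = snr_coadd(4, 900.0, flux, rn)   # n^2 = 4 fifteen-minute LEO exposures
heo = snr_coadd(1, 1800.0, flux, rn)  # one n x 15 = 30 minute HEO exposure
print(f"LEO co-add S/N {leo:.4f} vs single HEO exposure {heo:.4f}")
```

With n = 2, four 15-minute LEO exposures only match a single 30-minute HEO exposure, quantifying the quadratic cost of short read-noise-limited integrations.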
million km in the anti-solar direction), has been examined by Ronald Muller of the Goddard Space
Flight Center. L2 is still being considered for SIRTF, and should certainly be considered for ET.
This mission would use a Titan IV and modified Centaur to place SIRTF into a halo orbit about
the metastable L2 point. Because no fuel is needed to circularize this orbit, the Titan IV launch
capability is substantially greater to L2 than to a 100,000 km circular orbit (or to GEO). Midcourse
corrections and stationkeeping are needed to achieve and maintain this orbit. As seen from L2, the
Earth, Sun, and Moon all lie in the same direction, and therefore viewing constraints are minimal,
and no aperture shade is required. Even lower outer shell temperatures than for the 100,000 km orbit
may be achievable. The chief concerns which kept the L2 mission from being adopted as SIRTF's
baseline were: the Centaur upper stage development required; and the long transit time (over 30
days) with course corrections, which may require the aperture be covered to avoid contamination,
and therefore reduce cryogen lifetime.
331
4 Summary
The consensus of the SIRTF Study Office at Ames, the other NASA centers which participated in the
Mission Options Study, and the SIRTF Science Working Group is that the high orbit offers significant
scientific and engineering advantages for SIRTF. These advantages have been summarized above.
As a result, the HEO mission has been adopted by NASA as the new baseline. This selection of an
orbit optimized to maximize the scientific productivity of the mission, together with the dramatic
recent advances in IR detector performance, has brought the promise of SIRTF close to realization.
Perhaps the fact that LEO has not been mentioned as a candidate for ET is an indication of
how compelling the advantages of HEO are. The L2 point may be an even better choice for ET
than a 100,000 km circular orbit, while many other factors need to be considered in evaluating a
lunar based site. (We remind the reader again that geosynchronous orbit is not a desirable location
for sensitive astronomical detectors.) Of these locations, launch capabilities are greatest to L2 and
smallest to the lunar surface, while post-launch access for servicing or assembly is poorest for L2,
and best for a lunar site (once a lunar base is established).
No matter where it is located, ET will enlighten us with the answers to myriad astronomical
questions, including many raised by discoveries made by SIRTF in its high Earth orbit.
References
Mason, I.M., and Culhane, J.L. 1983, IEEE Transactions on Nuclear Science, NS-30, p. 485.
Ramos, R., Ring, S.M., Leidich, C.A., Fazio, G., Houck, J.R., and Rieke, G. 1988, Proc. SPIE 978,
p. 2.
Rieke, G.H., Werner, M.W., Thompson, R.I., Becklin, E.E., Hoffmann, W.F., Houck, J.R., Low, F.J., Stein, W.A., and Witteborn, F.C. 1986, Science 231, 807.
Stassinopoulos, E.G., and Raymond, J.P. 1988, Proc. IEEE 76, 1423.
Werner, M.W., Murphy, J.P., Witteborn, F.C., and Wiltsee, C.B. 1986, Proc. SPIE 589, p. 210.
DISCUSSION
Darnton : In view of the fact that liquid He is light and SIRTF is heavy, small spacecraft weight
reductions afford large potential liquid He dewar volume increases. Is the 4000 liter volume fixed, or can it be increased?
Eisenhardt : The 4000 liter volume was used for comparison with the LEO baseline. Larger volumes
are being considered for HEO as part of SIRTF's Phase B studies.
Angel : A comment on the optics. The Kodak ion polishing technique would be ideal for getting the desired optical performance when cold. The nearly-final optics could be cooled, tested, and then in one pass all the surface aberrations resulting from the cooling could be fixed up.
332
Orbital Sites Tradeoff Study
Abstract
A comparative study of typical Earth orbits for their impact on certain spacecraft and science instrument parameters was conducted using models and formulae from the literature and models specifically derived for this study. We conclude that low Earth orbits are undesirable because of high disturbance levels and space debris, and favor high Earth orbit sites.
1 Introduction
The purpose of this study is to determine what would be the best site in space for a large astronomical observatory operating in the optical and near-IR. To that end we selected a number of parameters affecting the performance and operation of a telescope in space and compared them as a function of the orbits. Some parameters were chosen because of their impact on attitude control and the lifespan of the satellite. Others were chosen because of their impact on the efficiency of science observations. The orbits chosen and their elements are listed in Table 1.
For the purpose of this study the baseline telescope is a 10 meter full aperture mirror with
a closed tube extending just beyond the secondary. We adopted the nominal parameters for this
telescope as presented in Table 2. The baffling prohibits the telescope from pointing within 90° of
the Sun. The Hubble Space Telescope (HST) was also included in the study and its parameters
are also presented in Table 2. We chose to include HST in the study because the properties of its
mission are well studied and this allowed us to check our results.
Table 2. Satellite Characteristics
The orbital lifetime due to aerodynamic drag was found to be 7.3 years for the 10 meter telescope in a low Earth orbit (22.5 years for HST). The estimates presented here may be in error by as much as 50% depending on the actual solar activity. The lifetime at higher Earth orbits is effectively infinite.
3.2 Torque
The gravity gradient torque was calculated using the moments of inertia and the angular velocity of the satellite. The solar radiation and aerodynamic torques were calculated from the solar radiation and atmospheric pressures and the offset between the center of gravity of the satellite and the center of pressure, for the worst case satellite orientation. The solar radiation pressure was taken to have a single value (4.4×10⁻⁶ N/m²) for all of the orbits under consideration. The atmospheric pressure was calculated from Wertz, and the same qualifications mentioned above regarding the fluctuation of the atmosphere also apply here. The torque was calculated assuming a circular orbit and a constant cross section perpendicular to the motion of the satellite.
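The torque calculations described above reduce to two simple formulas. The sketch below uses the solar radiation pressure quoted in the text; the projected area, center-of-gravity offset, orbital rate, and inertia difference are hypothetical placeholder values, not the study's actual spacecraft parameters:

```python
import math

def radiation_torque(pressure_n_m2, area_m2, offset_m):
    """Worst-case torque: pressure times projected area acting through a
    center of pressure offset from the center of gravity."""
    return pressure_n_m2 * area_m2 * offset_m

def gravity_gradient_torque(orbit_rate_rad_s, delta_i_kg_m2, angle_rad):
    """Classic gravity-gradient torque, (3/2) n^2 |dI| sin(2*theta);
    maximal when the long axis is 45 degrees from the local vertical."""
    return 1.5 * orbit_rate_rad_s ** 2 * delta_i_kg_m2 * math.sin(2.0 * angle_rad)

# Hypothetical values: 80 m^2 projected area, 1 m cg-cp offset,
# LEO orbital rate ~1.1e-3 rad/s, 1e6 kg m^2 moment-of-inertia difference.
print(radiation_torque(4.4e-6, 80.0, 1.0))                     # N m
print(gravity_gradient_torque(1.1e-3, 1.0e6, math.pi / 4.0))   # N m
```

Even with these rough numbers the gravity gradient term dominates in LEO by several orders of magnitude, consistent with the ranking implied by Figure 1.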
3.3 Damage
The probability that the satellite would suffer severe damage over a 15 year period due to both meteoroids and space debris was calculated by extrapolating a study investigating these factors for HST (Lockheed). The level of space debris was updated to reflect projections for the year 2000 (Kessler, et al.). We were unable to locate any data on space debris at orbits higher than 1500 kilometers and therefore the probability of damage at the geosynchronous orbit is likely to be higher than what has been calculated for the meteoroid level alone.
335
4 Results and Discussion
From Figure 1 and the discussion in section 3.1, it is obvious that the effect of the atmosphere
on a voluminous spacecraft such as the one considered is very large for low Earth orbits and may
still be significant at the Sun synchronous orbit altitude. This effect becomes negligible very quickly at higher altitudes.
It is interesting to note how little the meteoroid damage probability changes with different orbits
(Figure 2). Most of this change is due to the Earth's gravitational focusing of the ambient meteoroid
field. The effect of meteoroids trapped at the Lagrange points was not considered although we note
that these particles would have very low velocities relative to a satellite at one of the Lagrange
points.
The space debris damage projected to the year 2000 presents a serious concern for the Sun
synchronous orbit. Lower Earth orbits are depleted of space debris by orbital decay, but particles
above these altitudes are fairly stable. The smaller number of spacecraft at higher altitudes com-
bined with the larger volume of space results in a much lower probability for damage. This may
not not be true at the geosynchronous orbit due to its popularity. We were unable to acquire data
on the geosynchronous orbit, but due to the large number of spacecraft and launch equipment at
this orbit there may be enough space debris to make the probability for damage non-negligible.
The average fraction of sky available increases with distance from the Earth and is essentially
limited by the Sun avoidance angle (Figure 3). The high maximum values shown result from eclipses
of the Sun, during which nearly the entire sky is available for a short period of time. The minimum
values correspond to periods when two bright solar system bodies are at opposition.
As noted above the sky coverage fraction presented for the Sun synchronous orbit is highly
uncertain. With the 90° avoidance angle assumed for the Sun we can estimate the average sky
coverage fraction would be somewhat less than 0.5 due to the Earth and the Moon.
Our model does not include observing efficiency as a parameter, and a detailed study would
require a simulation of the observations and schedules. However an estimate of the maximum
efficiency can be obtained by an analytic model in which the important factors are the overhead
time per observation and the average duration of an observation. Figure 4 shows the maximum
efficiency of primary observations (i.e. excluding any parallel science) as a function of the exposure
time for the high Earth orbit, given two possible values for the overhead time per observation.
Also shown is the estimated efficiency of HST observations from a simulation by Johnston. The
greater efficiency of the high Earth orbit compared to the HST is caused by the same factors which
determine the fraction of sky available at a given orbit, namely, the avoidance angles of the Earth,
Sun and Moon. The further these bodies are from the satellite, the more efficient the observatory
and the more sky available for observations. Note also that for the high Earth orbit, the efficiency
increases much more rapidly with an increase in exposure duration (or a decrease in overhead time)
compared to a lower orbit.
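The analytic model above can be sketched in a few lines: the maximum efficiency is simply the exposure time divided by the exposure time plus the fixed overhead. The 23 and 37 minute overheads are the two cases shown in Figure 4; the exposure durations sampled are arbitrary:

```python
def max_efficiency(exposure_min, overhead_min):
    """Upper bound on the on-target fraction when each observation costs
    a fixed overhead (slews, settling, setup) plus its exposure time."""
    return exposure_min / (exposure_min + overhead_min)

# The two high Earth orbit overhead cases of Figure 4:
for overhead in (23.0, 37.0):
    row = [round(max_efficiency(t, overhead), 2) for t in (10, 50, 100)]
    print(f"{overhead:.0f} min overhead:", row)
```

This makes the closing observation explicit: as the exposure duration grows (or the overhead shrinks), efficiency climbs toward unity, and it does so faster when the avoidance-angle interruptions of a low orbit are absent.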
336
Figure 1. Torques (for the orbits LEO1, LEO2, SS, GEO, and HEO).
337
Figure 4. Maximum efficiency of primary observations as a function of exposure time for the high Earth orbit, for overhead times of 23 and 37 minutes per observation; the estimated HST efficiency (Johnston 1985) is shown for comparison.
338
field would still be a consideration, but we estimate that over a 15 year period the probability for
severe damage would be less than one tenth of one percent.
We calculated the fraction of sky available from two Lunar sites: an equatorial site and a polar site. We have assumed that the telescope is constrained to point within 70° of the zenith, and that the bright Earth could be shielded to within two degrees of its subtense. We have also assumed that no observations would take place when the Sun was above the horizon due to thermal and scattered light problems.
For the equatorial site we calculated that the fraction of sky available on the average would be
only 0.15. This is due primarily to the constraint imposed by the Sun. This can be improved to a
value of 0.33 if the observatory were placed at one of the poles of the Moon in a crater whose walls
would shield the observatory from the light of the Sun and the Earth. Contact with the Earth
could be maintained through a relay station placed on the rim of the crater. The Earth would be
shielded from an observatory placed in a crater anywhere along the limb of the Moon. Shielding
the Earth in this manner adds about one hundredth of the sky to the average fraction available.
It should be noted that large areas of the sky would be permanently unavailable to a Lunar
observatory. An equatorially placed observatory would be able to minimize this area at the sacrifice
of overall efficiency. A better way to alleviate this would be to place two observatories on the Moon,
one in each hemisphere.
6 Conclusions
We conclude that if it were equally feasible to place a space telescope at any of the orbits under study, the best choice would be either the high Earth orbit or the Lagrange orbit. Observing
efficiency would be greatest there and attitude control problems would be minimized. In addition
the probability of damage by space debris is minimized.
A Lunar based observatory offers essentially the same advantages as that of the high Earth
orbits except that it would retain the traditional problems of limited sky coverage encountered by
ground based telescopes.
7 References
Greynolds, Alan W. 1980, "Radiation Scattering in Optical Systems", proceedings, SPIE, Vol 257,
p.39.
Leaton, B. R. 1976, "International Geomagnetic Reference Field 1975", Trans., Amer. Geophysical Union (EOS), Vol 57, p. 120.
Lockheed 1989, "Hubble Space Telescope Meteoroid Protection Analysis", Lockheed Report,
LMSC/F158270C.
Kessler, D. J., Grun, E. and Sehnal, L. 1985, "Space Debris, Asteroids and Satellite Orbits", JASR,
Vol 5, No. 2.
Tulkoff, P. 1989 "Passive Cooling for Low Temperature Operation", these proceedings.
Wertz. James R., ed. 1978, Spacecraft Attitude Determination and Control, D. Reidel Publishing
Co., Dordrecht, HoUand.
DISCUSSION
Swanson : How can you put 23,000 kg into anything except a low Earth orbit? This is the zeroth-order
issue. If this cannot be answered, most of the other orbital tradeoffs are irrelevant.
Neil : Our study demonstrates the penalty paid by having a limited lift capability and provides some
fuel for the fire warming the efforts to increase our launch capability. It is also true that the Soviet
Union does have this capability.
Mallama : Your analysis of the sun synchronous orbit was interesting, but it only considered one
possible altitude. Precession rate is a function of altitude and inclination. Thus, a wide range of
altitudes could be chosen by adjusting the inclination. One way to avoid thermal cycling is to put the
nodes 90 degrees from the solar longitude.
Neil : We have selected the traditional 900 km altitude because it is just below the Van Allen belt.
And yes, you are right, the 6 a.m.-6 p.m. orbit is the best (minimizes thermal cycling and supplies
continuous solar power).
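Mallama's point, that precession rate depends on both altitude and inclination, comes from the standard J2 nodal-regression formula; a Sun-synchronous node must precess 360 degrees per year. The following sketch, for circular orbits only, uses constants and a helper name of my choosing:

```python
import math

MU = 398600.4418   # Earth GM, km^3/s^2
RE = 6378.137      # Earth equatorial radius, km
J2 = 1.08263e-3    # Earth oblateness coefficient

def sun_sync_inclination(altitude_km):
    """Inclination (deg) whose J2 nodal precession matches the Sun's apparent
    motion (360 deg per 365.2422 days) for a circular orbit."""
    a = RE + altitude_km
    n = math.sqrt(MU / a**3)                              # mean motion, rad/s
    omega_dot = math.radians(360.0 / 365.2422) / 86400.0  # required precession, rad/s
    cos_i = -omega_dot / (1.5 * n * J2 * (RE / a)**2)
    return math.degrees(math.acos(cos_i))

# The 900 km orbit discussed above requires an inclination of about 99 deg;
# higher altitudes need progressively more retrograde inclinations.
for h in (600, 900, 1200):
    print(f"{h:5d} km -> i = {sun_sync_inclination(h):.1f} deg")
```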
THE MOON AS A SITE FOR ASTRONOMICAL OBSERVATORIES
Jack O. Burns
Department of Astronomy
New Mexico State University
Las Cruces, NM 88003
INTRODUCTION
For the past five years, a small but active group within the
astronomical community has been investigating possible
observatories for the Moon in the 21st century (see, e.g., Burns
and Mendell, 1988). These studies were in large part motivated by
a recognition of the limitations imposed by observations from the
surface of the Earth or from low Earth orbit (LEO).
CONCLUSIONS
ACKNOWLEDGEMENTS
REFERENCES
Burns, J. O., Johnson, S. W., and Duric, N. 1990, A Lunar Optical-UV-IR Synthesis Array, NASA Publication, in press.
Goins, N. R., Dainty, A. M., and Toksoz, M. N. 1981, J. Geophys.
Hoffman, J. H., Hodges, R. R., and Johnson, F. S. 1973, Proc., 2503.
Kulkarni, S. 1990, in A Lunar Optical-UV-IR Synthesis Array, NASA Publication #2489.
DISCUSSION
Stockman : You mentioned that the lunar surface is very quiet. Yet I recall that one of the discoveries
of the Apollo days was that moonquakes were quite persistent. Can you explain this?
Burns : There are two main categories of lunar seismic signals, based on the depth at which they
originate. Almost all occur deep within the Moon at depths of 700-1100 km; on average, about 500 deep
events were recorded each year during the 8 years that the Apollo network operated. These deep
moonquakes are related to tidal forces. The less frequent (~5/yr) shallow quakes occur at depths of
< 200 km. Overall, seismic activity on the moon is drastically less than on Earth, with much smaller
amplitudes. Seismic waves are intensely scattered near the lunar surface. This causes the energy of
the waves arriving at a given point to be spread out, so the damaging effects of a moonquake are much
less than those of an earthquake of the same magnitude.
Bely: I was intrigued by your proposal to design a lunar telescope to operate at two distinct temper-
atures. Could you expand on that?
Burns : One possibility is to operate an optical/IR telescope in two modes: optical during the day
and IR at night. Thermal cycles are long enough to make this practical. One needs to see if such a
hybrid, thermally deformable telescope is possible in detail.
Johnson : We should try to use the telescopes both in the daytime and at night. It may require
heating of some elements at night, and baffling and shielding during the day. We don't want to throw
away the observing time and so we should try to adjust the system to operate in both environments.
Angel: One note of caution about interferometry. You have a very stable baseline on the moon.
However, it will not be pointing in the direction that you want to point. So you have to chase around
to find the null of the interference fringes, just like you have to do on earth. So there is a lower scale
of separation for interferometers where free flyers have a very big advantage because you can point
them. That makes beam combining very much better, with much wider fields. One needs to look at
the transition point, that is, the size, where the problem of the stability of the baseline overcomes the
disadvantage of not pointing in the right direction. The experience on a fixed surface, on the earth,
shows that it is easy to do interferometry when the elements point as a single unit. Whereas, when
you spread them out over a surface and have to make path length corrections you add a considerable
complication.
Burns : The tradeoffs between optical interferometers and single aperture systems have really not
been addressed yet, particularly for the moon, and those tradeoffs really need serious study.
Illingworth : I agree, and would emphasize the need for a broad technical and scientific evaluation.
The statement is often made that experience has shown that the future for radio astronomy is with
interferometers, and so the jump is made to the statement that the future in optical-IR is also with
interferometers. I think that there are real technical differences in these two areas that need to be
explored and understood to define the realms of applicability of interferometers and filled apertures.
To say that interferometers are the future overstates the potential of interferometric systems.
Burns : The technology is fundamentally different in the optical and the radio. Radio interferometry
is made so simple and so elegant because of the availability of heterodyne receivers. The technology is
well developed in the radio, it is well understood, and it is very flexible. We do not have heterodyne
receivers in the optical and so it is different and harder to make predictions. Interferometry may be
the best way to go, but it is not proven yet.
Illingworth : We should define the S/N that is needed for observations of astronomical objects that
play an important role in a variety of fundamental problems, and compare the interferometry and filled
aperture approaches.
Burns : Even with the technological leaps that we have made on the ground to make interferometers
function through our turbulent atmosphere, we are still dealing with relatively primitive interferome-
ters. Progress in space may allow us to leap some of the technological hurdles and to see if the basic
technology will give us interferometers that will be productive in a very general sense.
Stewart W. Johnson (1)
John P. Wetzel (2)
ABSTRACT
INTRODUCTION
THE CHALLENGE
(1) Principal Engineer, Advanced Basing Systems, BDM International,
Inc., 1801 Randolph Road, S.E., Albuquerque, NM 87106.
intergalactic medium. Roger Angel of the University of Arizona's Steward Observatory
offers the prospect, with a 16-meter instrument, of using ozone
as a tracer of life on distant planets. These are exciting prospects.
The possible schematic of this 10-16 meter telescope is shown in
Figure 2.
Large Telescope (10 to 16 m UV-Visible-IR)
• Requirements
• System definition and specifications
• Site selection and characterization
• Control capability (stringent requirements limiting differential settlements,
tide compensation)
• Lunar surface layout requires locating and modifying a suitable site
• Dynamic response of lunar soil to movement of telescopes
• Preservation, cleaning, and renewal of optical surfaces and coatings
[Figure: schematic cross-section of the telescope, showing the secondary mirror above the lunar surface]
Contamination/Interference Control
Fine-grained particulates from the lunar surface - stick to surfaces
• Natural
• Induced by operations
- rocket plumes
- outgassing from excavations/fill in soil and mining/manufacturing
- outgassing from suited workers
Radio frequency - interference problem for radio astronomy/communication
Other:
• Reactor radiation
• Waste heat from power sources
Possible Contamination/Interference
Contamination effects research
• Determination of effects
• Development of acceptable standards
CONTAMINATION/INTERFERENCE CONTROL TECHNOLOGIES
calibration of telescope systems are an important aspect for the
prelaunch modeling, test, and evaluation process.
MANUFACTURING TECHNOLOGIES
CONSTRUCTION TECHNOLOGIES
should be made for teleoperation and maintenance workers in space
suits if unanticipated difficulties arise. Prelaunch test and
evaluation efforts on Earth will focus on various aspects of
teleoperated operation and maintenance to predict and resolve
difficulties before arrival at the Moon.
The vehicle associated with the telescope should be able to
operate in several different modes as needs dictate change from manual
operation to local teleoperation or to remote teleoperation, or perhaps
to autonomous operation and hybrid modes. Technical issues with
the vehicle design relate to vehicle size and mass, load carrying
capacity and range, communications and control, number of wheels
(or tracks), manipulator capabilities, power, and how the vehicle
copes with the environment (e.g., the soil, rock, and terrain; vacuum;
meteoroid impact; radiation; extremes of temperature; and diurnal
cycles of solar radiation). The robotic vehicle system that supports
the construction on the lunar surface will be required to support
all phases of the effort including transport according to the
predetermined plan, emplacing a communications/computer station,
and performing maintenance and repair tasks. The vehicle must have
flexibility to meet unanticipated needs such as coping with the
unexpected, unusual terrain, soil variability, and layout adjustments.
The prime power source for the lunar astronomical observatory
and associated facilities will be either solar or nuclear or a
combination. Power requirements will probably be much less than
100 kW. Solar arrays appear to be suitable if backed by sufficient
energy storage capacity (batteries or regenerative fuel cells) to
continue operations during the lunar night. There is a strong need
for development of regenerative/rechargeable power storage devices
both large and small for use with solar energy devices to furnish
power during the 14 Earth-day lunar night. One option for the next
generation battery is a Na/S battery being developed at the Aero
Propulsion Laboratory at Wright-Patterson Air Force Base, Ohio (Sovie,
1988). Radioisotope thermoelectric generators also are possible
power sources although they are inefficient and generate relatively
large amounts of heat. Focal plane arrays for optical telescopes
on the Moon will need to be cooled. Much technology development
is required for cryocoolers to fill this need. One option is the
development of an integrated radioisotope- fueled dynamic power
generator and cryocooler to cool the focal plane arrays.
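The scale of the night-time storage problem is easy to illustrate. The sketch below assumes the 100 kW upper bound quoted above and round-number specific energies; both chemistry figures are my assumptions, not numbers from the text:

```python
# Energy storage needed to ride through the ~14 Earth-day lunar night.
load_kw     = 100.0      # upper bound on observatory power quoted above
night_hours = 14 * 24    # about 336 hours of darkness
energy_kwh  = load_kw * night_hours   # 33,600 kWh to be stored

# Illustrative cell masses for two chemistries (assumed specific energies):
for name, wh_per_kg in (("Ni-Cd", 40.0), ("Na/S", 150.0)):
    mass_tonnes = energy_kwh * 1000.0 / wh_per_kg / 1000.0
    print(f"{name}: ~{mass_tonnes:,.0f} t of cells for the full night")
```

Even an advanced Na/S chemistry implies hundreds of tonnes of cells at the 100 kW level, which is why nuclear sources or a much smaller night-time load remain attractive.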
Disturbance Issues
• What are the critical disturbances?
- Natural seismic shock, thermal
Structures Issues
• What approaches can be taken to build light-weight, high-stiffness
structures optimized for the lunar 1/6 g and extreme thermal
environments?
- Structural parameters - how ascertained?
- Improved models (computational)
- Test and instrumentation challenges
- Optimization
- Assembly/erection/inspection
Control Issues (for orienting mirrors)
• Control - structure interactions
• Transients and damping in structures optimized for 1/6 g
• Experiments and tests of control mechanics
Testing Issues
• Ground testing on Earth vs. on Moon
• Scaling of terrestrial structures tests to larger structures at 1/6 g
• Measurements/instrumentation for terrestrial/lunar use
Surface accuracies
Stable frameworks
There are many technology drivers for these optics. They include
optical coatings that resist delamination, optics that are stress
free after manufacture, and refractive materials which do not darken
or develop color centers. Refractive materials should have low
scatter. Adaptive optics will be important for lunar optical telescope
applications. Actuator and controls development and power and thermal
control for adaptive optics should be pursued.
For mirrors on the lunar surface, active cleaning and
contamination control techniques will be needed. Polishing techniques
need to be improved; renewable coatings may be required. Materials
used for telescopes need to be thermally stable. The appropriate
degree of coating hardness against the ultraviolet and X-ray
environments of the lunar surface will be needed. As always, the
telescope optics will require the necessary vibration isolation.
Material systems should be developed to function as protective
shields for antenna structures and mirrors against the worst extremes
of the lunar thermal environment and the micrometeoroid environment.
CONCLUSION
The need for all the observatories under consideration and for
all extraterrestrial facilities is to engineer them with technologies
that allow them to perform well for long periods of time with
minimal intervention by humans or robots. Better astronomy can be
done if contamination and interference (gases, particulates, ground
shock, and extraneous RF radiation) resulting from nearby operations
can be kept to very low levels by limiting the need for nearby
operations. An obvious need is to strive for facilities compatibility
in lunar surface operations at various sites by controlling and
reducing functions (e.g., proximity of mining operations or rocket
launch pads to optical astronomy facilities) that lead to undesirable
consequences. This need for compatibility implies the enforcement
of a broad-based systems engineering discipline to all lunar
engineering, construction, and operations.
ACKNOWLEDGMENTS
APPENDIX 1. REFERENCES
DISCUSSION
Schember : What did you mean about the need for an improved clock?
Johnson : A suitable clock on the moon would be helpful for a radio astronomy experiment using
lunar communications antennae in conjunction with ground based antennae. This experiment would
be analogous to the radio astronomy experiment done with TDRS some time ago. The clock is not a
major issue.
Burns : Dust may be a problem on the moon. The Apollo astronauts were literally covered with fine
grain dust, presumably produced by "static cling." Calculations indicate that dust on the moon may
migrate from light to dark areas. Depending on the magnitude of this process, a telescope behind a
sunshade could be a target for dust contamination.
Johnson : Dust less than 20 microns constitutes about 26% of the lunar surface layer. We ultimately
may find ways to stabilize the lunar surface layer. One possibility is through microwave processing.
Until we can stabilize the dust, we must operate on the moon in ways that will mitigate transfer of
dust to optics, thermal control surfaces, and mechanical parts.
Space Logistics: Launch Capabilities
Randall B. Furnas
NASA Headquarters - Code MD
The current maximum launch capability for the United States is shown in the figure
below. This is an optimistic maximum based on absolute maximum launch rates as well
as lift capability. Actual average annual lift capability is likely closer to 800,000 lb. The
baseline deployment scenario remains the NSTS orbiter fleet. Finally, the last two ranges
displayed on the chart represent specific SDI payload ranges that are predicted.
Current maximum lift capability in the US is defined by the grey band representing
the NSTS Orbiter and Titan IV. This can be contrasted with the Soviet Energia on the
top dashed line.
[Figure: U.S. lift capability to LEO in klb; the grey band for the NSTS Orbiter and Titan IV lies well below the dashed line for the Soviet Energia]
NASA is studying the following options to meet the need for a new heavy-lift capability
by the mid to late 1990's:
- Shuttle-C for the near term (including growth versions), and
- the Advanced Launch System (ALS) for the long term.
Growth versions of Shuttle-C would have expanded-diameter payload bays. A three-engine
Shuttle-C with an expected lift of 145,000 lb to LEO is being evaluated as well.
The Advanced Launch System (ALS) is a potential joint development between the
Air Force and NASA. This program is focused toward long-term launch requirements,
specifically beyond the year 2000. The basic approach is to develop a family of vehicles
with the same high reliability as the Shuttle system, yet offering a much greater lift
capability at a greatly reduced cost (per pound of payload). The ALS unmanned family
of vehicles will provide a low end lift capability equivalent to Titan IV, and a high end
lift capability greater than the Soviet Energia if requirements for such a high-end vehicle
are defined.
[Table: Advanced Launch System family of vehicles, 120 klb 1.5-stage option — payload to 26.5° and to 90° inclination (klb), GLOW (Mlb), core sustainer and booster (580 klb class engines), reliability, and recurring cost ($M and $/lb)]
The proposed vehicle processing flow for ALS is shown conceptually in the next figure.
[Figure: payload to LEO (klb) for the ALS vehicle family, with Titan IV and NSTS capabilities marked for comparison]
In conclusion, the planning of the next generation space telescope should not be
constrained to the current launch vehicles. New vehicle designs will be driven by the
needs of anticipated heavy users.
DISCUSSION
Furnas : This answer is entirely dependent upon the chosen upper stage. Also, the joint USAF/NASA
program has not yet determined what the "largest" member of the family will be. At this early stage,
you might calculate capability to HEO based on a booster of approximately 200 klb lift combined with
a Centaur (Titan IV version).
Furnas : So ALS with 200 - 320 klb to LEO is essentially as capable as Energia.
Thompson: Can you give us any timescales on ALS? This is important because it impacts the size
and weight limits that would be available at the point where we need to fix the design of a large space
telescope.
Furnas : This depends on several factors, one being the timescale and the way in which the lunar
outpost program develops, and on when the Shuttle-C and ALS programs get their new starts. Both
of these programs are in for new starts in 91-92. The baseline technology work is currently in progress.
Illingworth : Are there plans being developed for a higher performance upper stage to match the
projected performance of ALS?
Furnas : The ALS vehicle has no upper stage, as currently envisioned. The vehicle simply places the
payload into a transfer orbit. Each payload will need to be assessed individually for available upper
stages {e.g., Centaur) to determine combined ALS/upper stage performance.
Eisenhardt : A difference between GEO and HEO is the time required by the upper stage to achieve
the orbit. With the Titan IV/Centaur, the time required to reach HEO pushes Centaur design on
battery life and perhaps other issues.
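Eisenhardt's concern about upper-stage coast time can be quantified with the half-period of a Hohmann transfer ellipse. In this sketch the 100,000 km HEO radius and the parking-orbit altitude are assumed, illustrative values:

```python
import math

MU = 398600.4418   # Earth GM, km^3/s^2
RE = 6378.137      # Earth equatorial radius, km

def hohmann_coast_hours(r1_km, r2_km):
    """One-way coast time, in hours, on a Hohmann transfer between two
    circular orbit radii (half the period of the transfer ellipse)."""
    a = 0.5 * (r1_km + r2_km)
    period_s = 2.0 * math.pi * math.sqrt(a**3 / MU)
    return 0.5 * period_s / 3600.0

leo = RE + 300.0      # assumed parking-orbit altitude
heo = 100_000.0       # illustrative high Earth orbit radius
print(f"coast to HEO: {hohmann_coast_hours(leo, heo):.1f} h")
```

A coast of roughly 17 hours, versus about five hours for the corresponding transfer to GEO, is the kind of duration that stresses Centaur battery life.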
Bely : What is the future of the OMV and OTV in the context of the ALS Program?
Furnas : The future of OMV and OTV most likely rides with the future of the Lunar/Mars Initiative.
These unfortunately are unknowns at this point due to the federal budget situation. The budget action
that Congress takes in response to a National Space Council proposal for Lunar/Mars will be the best
indicator of the future of both OMV and OTV (or STV).
On-orbit Assembly and Maintenance
David R. Soderblom
Space Telescope Science Institute
The basic concept of M&R (maintenance and refurbishment) is that it allows one to periodically replace worn or
malfunctioning parts; to fix failures that compromise the success of the mission; to reboost
the orbit to palliate atmospheric drag; to improve satellite performance through upgrades;
and to replace the satellite's Science Instruments so as to keep it operating with up-to-
date devices. Thus M&R was to make a satellite more like a ground-based observatory in
that one started with a telescope and an initial complement of instruments, then changed
the instruments and support devices gradually over the years to take advantage of new
technology.
In its original concept, M&R for HST meant bringing it back to the ground every five
years for extensive refurbishment. This was soon seen to be impractical: Where would
the satellite be taken upon return? Could it be kept contamination-free? Could NASA
afford two shuttle flights per refurbishment?
On-orbit maintenance is now the mode foreseen for HST's M&R. The intervals at
which it will be done are not clear. At first, two-year intervals were selected because of the
anticipated rapid decline of the solar arrays and nickel-cadmium batteries. Improvements
in both those areas have led to an estimate that we can wait five years before an M&R
mission.
The motivation for M&R has been economic: by repairing an existing satellite perhaps
much of its considerable cost can be saved, while still providing for advanced instrumen-
tation as technology progresses. The desire to stretch the interval between M&R missions
has also been economic: shuttle flights (especially ones that require extensive training
like an M&R mission) cost at least $200 million each. Even if that cost is ignored, flights
are a scarce commodity for the foreseeable future. There is also a considerable cost in
maintaining a reserve of parts to repair the satellite and the people with the knowledge
to use those parts.
The future may vindicate HST M&R, but it now appears that the best path to take
is to build the quality in at the beginning and to place a satellite in the operationally
best orbit you can afford, given whatever launch vehicle is available. If M&R is a real
need, there is no entirely satisfactory your choice of orbit. In Low Earth
compromise for
Orbit, only about 24 work-hours total are available during one refurbishment flight, so
that the range of activities that can be considered is limited. High Earth Orbit is much
to be preferred for operational reasons, but is currently inaccessible for M&R. The Space
Station may be available for M&R, but in an era when SS is being used as a stopover
for a lunar base it will necessarily be a very dirty environment that is not suitable for
ultraviolet telescopes such as HST. The original plans for SS called for a maintenance bay
at the upper (clean) end, but that has been dropped. The obvious advantage of SS is the
longer time available for work to be done. In the more distant future, the availability of
an Orbital Transfer Vehicle could make much higher orbits accessible to M&R.
Working in space, in the absence of gravity, is very difficult. Maintenance work could
be done much more easily on the Moon, especially if the telescope is near a lunar base.
However, the costs would be extremely high.
In retrospect, M&R started with good intentions but on the whole will probably turn
out to be unsatisfying. There are likely successes: At a cost of about $300 to 400 million
(including launch and other costs), the NICMOS instrument will provide dramatic new
capabilities that could not have been otherwise achieved except at a much higher cost.
On the other hand, the need for access to HST for M&R has, as much as any other
factor, forced the choice of a Low Earth Orbit, which will significantly complicate and
compromise the science done by HST over its lifetime.
M&R has been pursued because of the very high costs of construction of complex
satellites like HST. A more satisfactory solution in the long run is to bring those costs
down because that means that more and different satellites can be built, with a higher
overall return in scientific productivity. One key to reducing those costs is surely to
drastically reduce the time needed to construct the satellite since most of the cost goes
into labor. Reducing that time would also make undertaking projects like HST much
more satisfying because one can hope to see the fruits of one's labors much sooner and
with much less frustration. Thus I see the major impediment to long-term success in space
astronomy as being administrative, not technological (as daunting as the technology may
be). We need radically different concepts for the management of technology utilization
at least as much as we need the technology itself if we are to succeed to the extent we all
hope for in the coming years.
TECHNOLOGICAL SPIN-OFF FROM THE NEXT GENERATION SPACE TELESCOPE
Virginia Trimble
Technological spin-off may not be the most important reason that astronomy is worth
funding and worth doing, but it is the reason that seems most likely to appeal to some
of the organizations and individuals that will have to be persuaded. A few of the
spin-offs are well known within the astronomical community (for instance, the origins
of airport security systems in astronomical X-ray detectors and the applications of
radio astronomical ideas to the testing of dishes for communication purposes). The
panel that is to write the AASC chapter on what astronomy is good for urgently needs
additional specific, concrete examples of spin-off of this type, including software as
well as hardware. Meeting participants included a large number of experts in the
development of photon collectors, detectors, and processors, many of whom are bound
to be aware of such spin-offs that are not currently known to the panel. Please, if you
have any information or ideas along these lines, send them promptly to the present
author, who is chairing the Panel on Benefits to the Nation of Astronomy and
Astrophysics, which will write the relevant chapter.
List of Participants
TELEPHONE NUMBERS OF THE WORKSHOP SPEAKERS AND ORGANIZERS
Angel
PARTICIPANTS
George Field Shireen Gonzaga Robert Jackson
Center for Astrophysics Space Telescope Sc. Inst. Space Telescope Sc. Inst.
Mike Krim Kevin McAloon Bland Norris
Perkin Elmer Optical Gp. TRW Space Defence Perkin Elmer Optical Gp.
Jet Propulsion Lab. Mail Stop 186-134 3700 San Martin Drive
4800 Oak Grove Dr. Jet Propulsion Lab. Baltimore, MD 21218
Pasadena, CA 91109 4800 Oak Grove Dr.
Pasadena, CA 91109 Goetz Oertel
Kenneth Lorrell AURA
Lockheed Missiles & Space Glenn Miller 1625 Massachusetts Ave
O/92-30 B/205 Space Telescope Sc. Inst. Washington, DC 20036
3252 Hanover St 3700 San Martin Drive
Palo Alto, CA 94304 Baltimore, MD 21218 John F. Osantowski
Goddard Space Flight Ctr
Carl Pilcher Francis H. Schiffer Hervey S. Stockman
Office of Exploration Space Telescope Sc. Inst. Space Telescope Sc. Inst.
Kaman Aerospace Corp. 3700 San Martin Drive Johns Hopkins Road
Suite 1200,
Paul Swanson
Arlington, VA 22209 Amit Sen
Jet Propulsion Lab.
Space Telescope Sc. Inst.
Mail stop 168-327
Courtney Ray 3700 San Martin Drive
4800 Oak Grove Dr.
JHU Applied Physics Lab. Baltimore, MD 21218
Pasadena, CA 91103
Johns Hopkins Road
Laurel, MD 20707 Michael Shao
Richard Terrile
Jet Propulsion Lab.
Jet Propulsion Lab.
Mark Rehfield Bldg 169, Rm 214
4800 Oak Grove Dr.
Hughes Aircraft 4800 Oak Grove Dr.
Pasadena, CA 91103
2000 E. El Segundo Blvd. Pasadena, CA 91109
El Segundo, CA 90245 Dominick Tenerelli
Michael Shara LMSC
Guy Robinson Space Telescope Sc. Inst. P.O. Box 3504
Jackson & Tull 3700 San Martin Drive Sunnyvale, CA 94088
MD Trade Center 1, Baltimore, MD 21218
Suite 640B Lee Thienel
7500 Greenway Center Dr. David Skillman Space Telescope Sc. Inst.
Art Vaughan Bruce Woodgate
Jet Propulsion Lab. LASP, NASA/GSFC
4800 Oak Grove Drive Code 681
Pasadena, CA 91109 Greenbelt, MD 20771
Nolan R. Walborn
Space Telescope Sc. Inst.
Steve Watson
USAF Inst. of Technology
Wright-Patterson AFB,
Ohio 45433
Harold A. Weaver
Space Telescope Sc. Inst.
Edward J. Weiler
Astrophysics Division
Code EZ
NASA Headquarters
Washington, DC 20546
Richard L. White
Space Telescope Sc. Inst.
Allan Wissinger
Perkin Elmer
761 Main Street
Norwalk, CT 06859