
Understanding Light Microscopy

Ebook · 2,784 pages · 27 hours


About this ebook

Introduces readers to the enlightening world of the modern light microscope

There have been rapid advances in science and technology over the last decade, and the light microscope, together with the information that it gives about the image, has changed too. Yet the fundamental principles of setting up and using a microscope rest upon unchanging physical principles that have been understood for years. This informative, practical, full-colour guide fills the gap between specialised edited texts on detailed research topics and introductory books, which concentrate on an optical approach to the light microscope. It also provides comprehensive coverage of confocal microscopy, which has revolutionised light microscopy over the last few decades.

Written to help the reader understand, set up, and use the often very expensive and complex modern research light microscope properly, Understanding Light Microscopy keeps mathematical formulae to a minimum—containing and explaining them within boxes in the text. Chapters provide in-depth coverage of basic microscope optics and design; ergonomics; illumination; diffraction and image formation; reflected-light, polarised-light, and fluorescence microscopy; deconvolution; TIRF microscopy; FRAP & FRET; super-resolution techniques; biological and materials specimen preparation; and more.

  • Gives a didactic introduction to the light microscope
  • Encourages readers to use advanced fluorescence and confocal microscopes within a research institute or core microscopy facility
  • Features full-colour illustrations and workable practical protocols

Understanding Light Microscopy is intended for any scientist who wishes to understand and use a modern light microscope. It is also ideal as supporting material for a formal taught course, or for individual students to learn the key aspects of light microscopy through their own study.

Language: English
Publisher: Wiley
Release date: Mar 28, 2019
ISBN: 9781118696811


    Book preview

    Understanding Light Microscopy - Jeremy Sanderson

    Introduction

    Presumably you are reading this book because you are interested in light microscopy and are either self-taught, attending a course, responsible for using and maintaining a microscope at work – or all of these things. The author, and those who have supported the writing of this book, have taught light microscopy for many years; they are passionate about microscopy and wish to help you. That is why this book has been written: to help you understand, set up and use a research light microscope properly.

    The aim in writing this book is to educate and not merely to train you in the use of the light microscope. The distinction is important. Education involves learning about the individual parts of the microscope and how they function, a process inseparable from understanding how and why they work as they do, particularly appreciating their limitations – hence the title of this book Understanding Light Microscopy. Training, on the other hand, consists only of knowing how to operate the various microscope controls, from switching on the lamp to pressing the digital camera button: procedures set out in the manufacturers’ instruction manuals. Education is a journey from instruction to understanding. In one sense it is never complete, although you may stop where you wish.

    An important aspect of education is that it is a far more enjoyable and rewarding process than training. Educated in the use of the microscope, you will truly be in control of the instrument and empowered in several ways: knowing the ‘envelope’ of proper operation, you will be able to adjust the microscope properly and to acquire and record far better images – which is the primary goal. You will also be able to recognise and correct faults where they occur, and to devise special techniques to suit your own specimens.

    These potential goals may be daunting, but each step taken is small. You do not need to know very much physics or mathematics1 to understand microscopy or to work your way through this book. Essentially, the hard work was done by several geniuses from the late 17th century to the late 19th century, one of these being Ernst Carl Abbe, of whom more later. The author is an image facility manager who started out as a biological sciences technician; he is not a physicist or optician. So, please do not worry; mathematical formulae have been kept to an absolute minimum. Each physical concept and mathematical formula or procedure is explained clearly as it arises.

    When we want to see something in more detail, we bring it closer to our eyes. At some point no further detail can be seen or we cannot get sufficiently close to the object. In both cases we use an optical aid to help us distinguish, or resolve, further detail. If we cannot get sufficiently close, we might use a telescope or a pair of binoculars. Where we can get close up to the object, we use a magnifying glass or microscope.

    It is commonly thought that the function of the microscope is to magnify, and of course microscopes do magnify. This is the layman’s view:

    ‘Yes, I have a pair of eyes,’ replied Sam, ‘and that’s just it. If they wos a pair o’ patent double million magnifyin’ gas microscopes of hextra power, p’raps I might be able to see through a flight o’ stairs and a deal door; but bein’ only eyes, you see my wision’s limited.

    [Sam Weller to Sergeant Buzfuz in: The Posthumous Papers of the Pickwick Club, chapter 34, page 240; Charles Dickens (1837)]

    However, magnifying power is not the principal function of the microscope; its principal function is resolving power – the ability to perceive, or discriminate, fine detail in the object and transfer that to the image. A second function is to provide contrast in the image, which is of particular importance for transparent specimens that would otherwise be invisible. Only when the microscope can provide adequate resolution and contrast will it be useful to magnify the image for comfortable viewing. There is little that the microscope user can do to improve the resolution of the image – this property is determined by the objective lens and has to be paid for – though it is commonly made worse by misuse and misadjustment of the microscope. Likewise, magnification is usually fixed by the manufacturer in the design of the objective and eyepieces (and any intermediate lens system). Contrast, on the other hand, can often be improved in simple do-it-yourself ways, which can be devised by the ingenious and informed user.

    A microscope is any instrument designed to provide information about the fine details of an object. The term thus includes the single-lens magnifying glass (termed the simple microscope), the familiar compound microscope, which uses an objective lens and an eyepiece, electron microscopes, and also instruments that form their images from a mechanical or electrical interaction between a stylus and the surface of the specimen. In this book we shall confine ourselves to microscopes whose images are formed by the visible part of the electromagnetic spectrum, which we call light.

    There are many types of light microscope, which differ because of the kinds of specimen they are designed to examine and the techniques used. The traditional microscope is the one generally used for biological work and configured for transmitted light (light that passes through a specimen, rather like light illuminating a stained-glass window in a church). Most explanations in this book will centre on this design, partly because it is so common but also because with the optical components arranged one after the other, it is the easiest to explain and also to understand.

    Microscopes designed for use in materials science, and also for fluorescence microscopy, are configured to examine opaque or fluorescently-stained transparent specimens by reflected light, which illuminates through the objective lens. The objective therefore also acts as its own condenser. These microscopes are generally rather easier to use than the transmitted-light design, because whenever the objective lens is changed, the condenser is changed with it and no major readjustment or centring is required.

    Transmitted- and reflected-light microscopes are made in both the conventional upright design in which the specimen is examined from above, and also the inverted form where it is examined from below. Inverted microscopes are particularly useful (and widespread) in biological research laboratories, where they are used to view specimens in water or culture medium. An example would be cell monolayers or tissue explants adherent on a coverslip or grown on the bottom surface of a culture vessel, such as a Petri dish with a coverslip bottom.

    In the last thirty years, light microscopy has benefited tremendously from advances in computing, fluorescent markers and laser technology. Fluorescence microscopy has become a ubiquitous technique in biological and medical science, and most biological microscopes can be adapted to illuminate the specimen with short-wavelength radiation and to observe the emission of longer-wavelength light from fluorescent compounds and proteins either applied to the sample or inherent within it.

    Most microscopes are designed to operate at high resolution, but a consequence of this is that they have an extremely shallow depth of field – the axial range over which the object appears in focus in the image. Stereomicroscopes adopt a different approach, sacrificing ultimate resolution for increased depth of field. Stereomicroscopes are useful for examining solid objects by reflected light (including fluorescence), although they can be used with transmitted light also. The design essentially uses two microscopes, one for each eye but offset in slightly different directions, to produce a three-dimensional effect. In this respect, stereomicroscopes are more like our eyes than the compound microscopes described above and so are particularly suited for educating school children.

    These technological advances that heralded the renaissance of the light microscope have come at a price. Unlike other scientific instruments, which will not work unless they are adjusted properly, the light microscope will give an image – however poor – at the flick of a switch, however badly it may be set up. For this reason, practising microscopy appears superficially simple, but scientific journals are replete with examples of poor-quality images. Imaging is more than just ‘the bit tacked on the end’ of the experiment.

    To correctly capture images using a modern microscope, researchers must have a good grasp of optics, an awareness of the microscope’s complexity…such skills can take months or even years to master [which] owing to inexperience or the rush to publish, are all too often squeezed into hours or days. … Inept microscopy, and subsequent analysis, can easily generate results that are misleading or wrong.

    [Pearson, H. (2007). ‘The Good, the Bad and the Ugly’. Nature vol 447 (7141): 138–140, p. 138.]

    Modern research microscopes are both very expensive and complex, because viewing anything but the thinnest fluorescent cells and tissues requires optical sectioning to discriminate fine detail within the sample. In order to provide a range of different approaches to detecting and visualising the precise location of fluorescent markers by optical sectioning and kindred techniques such as multi-photon microscopy, several different fluorescence microscopes may routinely be used. Therefore, research institutions and university departments increasingly pool their resources to fund and operate these instruments within centralised facilities.

    Image facilities vary enormously and are usually very busy places. Some incorporate both light and electron microscopes together with flow cytometers and high-throughput plate readers; others house light and electron microscopes separately. By their very nature, light microscope facilities have many clients: typical numbers are between 150 and 300 users in any one university or institute. Generally, people who manage imaging facilities have been drawn into the role via a research background as a scientific investigator. Some facilities have several staff to advise about, maintain and operate equipment; others have a sole manager/operator.

    If you are reading this book prior to using an imaging facility, it may help to bear in mind that staff, although knowledgeable, may not have as much time as they would wish to help you. Also, there may be greater emphasis placed on image acquisition than on subsequent analysis – not least because there is an ‘activation energy’ associated with the learning curve for new software and your particular application. However, good facility managers should not only be familiar with their equipment but also be approachable towards their client base.

    Image acquisition and analysis is time consuming. The raison d’être of this book is to enable you to set up your microscope and collect good images competently and efficiently, unaided.

    Treating the microscope like a powerful holiday-slide viewer ignores the fact that with the microscope, we are using light for ‘something it wasn’t designed for’ – studying little things whose dimensions are close to the wavelength of the illuminating light itself. When we use a microscope we do not look at the object directly, but at an image of it. Therefore – like a painting or a sculpture – the microscopical image is only a representation of the object, and that image can be altered by the quality of the microscope, its objective lens and how light interacts with the specimen.

    The prime purpose of this book is to guide you in avoiding these pitfalls whether you are beginning in microscopy or have been a microscope user for some time and wish to learn more. Lack of instruction is nothing new. Henry Baker’s The Microscope made Easy (1742) had this advice in chapter 15, entitled ‘Cautions in Viewing Objects’, which is still pertinent. Baker warns us thus:

    Figure 1.1 Baker’s ‘Cautions in Viewing Objects’.

    Source: Baker, H. (1742). ‘The Microscope made Easy’, Chapter 15, pp. 62–63.

    Most books and courses start teaching microscopy by explaining some optics prior to evaluating the instrument itself. The layout of this book is broadly similar, but it is important not to forget the specimen, the object of study in the first place, whose fine detail we wish to resolve, to study and to understand. It is the interaction of light with the specimen that gives rise to any detectable signal. If this specimen is living, is marked with fluorescent probes and is being studied over time, then the constraints on collecting suitable images are different – and intrinsically more challenging – than for fixed immobile samples.

    Chapters 1–3 in this book cover the need for microscopes, some basic optical principles and a treatment of ray optics. Chapters 4 and 5 explain how different microscopes differ in design according to their function and how to apply ergonomics to the safe use of the microscope. Chapters 6–8 cover the objective and other components of the microscope, as well as a treatment of optical aberrations. Chapter 9 explains how to adjust and set up a microscope properly. Chapter 10 tells you about diffraction and wave optics: how images are formed. Chapters 11 and 12 explain how to generate contrast with both transmitted and reflected light. Chapters 13 and 14 explain the theoretical fundamentals of polarised light and how to apply these in a practical manner. Chapters 15–26 cover all aspects of the theoretical and practical application of fluorescence microscopy, including optical sectioning, colocalisation and super-resolution imaging. Chapter 27 advises on which fluorescence technique to use, as well as giving advice on using an imaging facility. Chapters 28 and 29 discuss sample preparation for biological and materials specimens respectively, while the final two chapters, 30 and 31, cover how best to record the image.

    As stated at the outset, the aim is not merely to instruct but to help you understand. Where applicable, explanations are underpinned by a few simple experiments that you can try yourself to aid your understanding. Illustrated text boxes are used to emphasise key facts or convey points of interest. The didactic approach comes from experience teaching students, including those attending the Royal Microscopical Society’s Summer Schools in Light Microscopy, and from managing light microscope imaging facilities over several years. My colleagues and I acknowledge our debt to students for their questions and feedback about our teaching. We know that many students come to us with misconceptions, which they are honest and humble enough to admit to, and so ask for help. We admire them.

    Jeremy Sanderson

    Oxford, January 2018

    Notes

    1It is entirely possible to understand the underlying principles of microscopy without a formal qualification in physics or mathematics. In his study Einstein kept three portraits: Newton, Maxwell and Faraday. He considered them all to have profoundly underpinned progress in science, physics and technology. However, Faraday was noted for his mathematical inability, and used no complex equations in any of his 450 or so publications.

    1

    Our Eyes and the Microscope

    Outline

    1.1 Introduction

    The eye-brain system

    The features of the eye

    1.2 How Our Eyes Work

    Response to visible electromagnetic radiation

    Rods and cones

    Basis of colour vision

    Eye sensitivity to brightness

    Dark vision

    Ability to see shades of grey

    1.3 The Anatomy of the Eye

    Anatomy of the eye

    Floaters

    Lens focusing

    Macula and fovea

    Visual acuity

    Eye dominance

    1.4 Aberrations of the Eye

    Spherical aberration

    Chromatic aberration

    Astigmatism

    Saccades – tiling our visual field

    1.5 Binocular and Stereoscopic Vision

    Stereoscopic vision

    Parallax

    1.6 Why We Need Optical Aids

    Near point

    Accommodation

    Nearest distance of distinct vision

    The magnifying glass

    Magnification of a simple lens

    1.7 Using Lenses to Correct Eye Defects

    Myopia

    Emmetropia

    Hyperopia

    Presbyopia

    1.8 Seeing the Scientific Image

    Filling in and bias

    Images as scientific data sets

    1.9 Sizes of Objects

    The requirement for measurement

    BioNumbers database

    1.10 Nomenclature and History

    Nomenclature

    1.11 Chapter Summary

    8 points

    Key Reading

    References

    Further Reading

    Internet Resources

    1.1 Introduction

    ‘Ah, I see!’ demonstrates how dependent we are on our sight, and how it is linked inextricably with understanding. We are a visually oriented species, and our two eyes are our ‘window on the world’. It is easy to forget that not all animals depend upon sight to the extent that humans do. Bats use echolocating sonar to navigate, and bees exploit ultraviolet (UV) light to discriminate foliage. At the other end of the visible spectrum, vipers sense infrared radiation to detect warm-blooded prey. For those creatures whose habitat is underground (e.g. moles), vision necessarily cedes priority to other senses such as touch and smell. Other animals, chiefly birds, are more dependent upon their sight than humans, with a much greater proportion of their head devoted to large eyes.

    Via the optic nerve, the eyes are a direct extension of the brain. The human eye is a wonderful device. If it were a camera, it might boast autofocus, wide-angle lens, auto-exposure, high sensitivity to light, automatic colour balancing, one-hundred megapixel resolution (each retina contains around 100 million sensitive cells) and 3D imaging (when used in pairs). We can see in bright daylight or discriminate a single light in the dark several miles away. The eye has naturally evolved to suit its principal function: helping its owner navigate the world.

    Many modern microscopes are designed to be operated from a computer screen, rather than be used by viewing the image directly down the eyepieces. Nevertheless, all images must eventually be seen by our eyes, so in this chapter we shall discuss how our eyes function. We shall consider how well our eyes are adapted for their purpose but also the limitations and aberrations that they suffer from. These limitations are relevant to microscopy, affecting how we acquire and interpret scientific images (i.e. data) using the light microscope. We shall see how a magnifying glass works and why the microscope was developed to assist our vision.

    1.2 How Our Eyes Work

    Any image must ultimately be seen by the eye. Light is electromagnetic radiation that stimulates the eye. It is merely a fraction of the entire electromagnetic spectrum (Figure 1.1). Only a small proportion of solar radiation reaches the earth’s surface. We depend on the ozone layer for protection; atmospheric dust, smoke, air molecules and water vapour also absorb a significant proportion of insolation, or incident solar radiation. Our eyes evolved from aquatic animals and contain a significant amount of water. Human sensitivity to electromagnetic radiation (Figure 1.2) corresponds closely to the wavelengths of minimum water absorbance, located away from harmful UV radiation and towards the infrared end of the spectrum.


    Figure 1.1 The Electromagnetic spectrum The electromagnetic spectrum extends from beyond the low frequencies used for modern radio to very high-frequency gamma radiation at the short-wavelength end of the spectrum, encompassing wavelengths from thousands of kilometres down to a fraction of a nanometre. Humans are able to discriminate the visible wavelengths from long-wavelength, low-energy red at around 710 nm to short-wavelength, high-energy violet radiation at around 380 nm. A very small number of people can see into the infrared, and people suffering from cataracts who have had their lens removed (i.e. who are aphakic) can see into the UV, even if they can’t focus well.

    Source: Glencross et al. 2011. Reproduced with permission from Oxford University Press.


    Figure 1.2 Human sensitivity to light Visible light corresponds closely to the wavelengths of minimum water absorbance on the electromagnetic spectrum. See also Chapter 2, Section 2.5. Since our eyes are composed largely of water, it is advisable to use a heat filter (e.g. Schott KG5) in the light path to remove infrared radiation before it reaches the eye.

    Source: Adapted from Kebes at English Wikipedia under the terms of the Creative Commons Attribution licence, CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0).

    Our eyes respond to the visible part of the electromagnetic spectrum from near UV at a wavelength of 380 nm to deep red at 710 nm. This stimulation depends on both the energy (frequency, expressed as wavelength) and the quantity (number of photons) of light. The wavelength of light is perceived as colour, and the quantity of light (expressed as the amplitude of the light wave) is seen as intensity1 (see also Appendix A1.1.3 and Table A1.1, pages 45–46 in: Tilley, 2010). Suppose we have four LEDs – blue emitting at 490 nm; green at 555 nm; far red at 670 nm and infrared at 940 nm (commonly used for remote controls) – all emitting the same radiant flux of 5 mW absolute power, measured in radiometric units. If we measure the respective light output of each of these LEDs in photometric units, the green LED will be the brightest (3.4 lm), the blue the second brightest (0.75 lm) and the far red the third (0.1 lm). The infrared LED will have a recorded emission of zero lumens (this example is taken from Tilley, 2010).
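
    To make the radiometric-to-photometric conversion concrete, here is a minimal Python sketch of the calculation behind Tilley’s example: luminous flux = 683 lm W⁻¹ × V(λ) × radiant flux, where V(λ) is the photopic luminosity function. The V(λ) values below are rounded approximations chosen to illustrate the point, not exact CIE table entries.

        # Luminous flux (lumens) from radiant flux (watts), weighted by the
        # photopic luminosity function V(lambda). The V values are rounded
        # approximations for illustration, not exact CIE table entries.
        LUMINOUS_EFFICACY = 683.0  # lm/W at the 555 nm photopic peak

        V_LAMBDA = {
            490: 0.21,   # blue
            555: 1.00,   # green (peak of photopic sensitivity)
            670: 0.03,   # far red
            940: 0.00,   # infrared: invisible, hence zero lumens
        }

        radiant_flux_w = 0.005  # each LED emits 5 mW

        for wavelength_nm, v in V_LAMBDA.items():
            lumens = LUMINOUS_EFFICACY * v * radiant_flux_w
            print(f"{wavelength_nm} nm: {lumens:.2f} lm")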

    Further details about the electromagnetic spectrum and the nature of light are discussed in Chapter 2, Section 2.2 onwards. Light itself has no inherent colour; our perception of different hues is fundamentally a complex judgment experienced as a sensation by our brains. We discriminate colour very well, although brightness less so. This is why we choose paint in a range of colours, rather than different intensities.

    The retina is the photo-sensitive tissue of the eye (Figure 1.3), within which there are two types of receptors. These are called rods and cones. Rods are more numerous (about 100 million per eye) and are better suited to night vision because they are about 100 times more sensitive to light than cones. Rods are thus adapted to low-intensity light, whereas cones (about 5 million per eye) give us our daylight vision in colour with high visual acuity (visual acuity refers to the sharpness and clarity of vision and our ability to resolve, or distinguish, fine detail with the naked eye).


    Figure 1.3 Structure of the retina The upper row shows a photomicrograph of a section of primate retina, together with a cartoon diagram showing the organisation of the retina, which is between 200 and 250 μm thick in primates and is a delicate structure. Here the interface between the dense black choroid and the pigmented epithelial cells has separated during histological processing. The connective tissue of the sclera coat is seen at the top of the section. Section: haematoxylin & eosin; scale bar = 100 μm.

    Source: (top left) Tortora and Grabowski 2003. Reproduced with permission from John Wiley & Sons.

    The lower row shows the organisation of the rods and cones. The tangential section has been taken at the level of the inner segment of the photoreceptors. R = rods, C = cones; this area lies outside the fovea. Light passes through the layers of nerve cells to stimulate the photoreceptors. The retina lies on a layer of pigmented epithelial cells, which attach it to the choroid.

    Source: (bottom left) Allen and Triantaphillidou 2011. Reproduced with permission from Taylor & Francis Group. (bottom centre) Stevens and Lowe 1996. See also the relative distributions of rods and cones in Figure 1.12.

    Most humans have three colour pigments, or photopsins, giving trichromatic vision in red, green and blue hues. Some people lack one of the pigments, resulting in colour blindness or, more correctly, colour-deficiency. Other animals do not perceive colour as well as we do, though almost all mammals see colour to some extent. They may have only one or two pigments (e.g. dogs and cats), or may have different pigments from us altogether, relying more on motion, sound or the sense of smell for external awareness. Tree-dwelling and fruit-eating mammals have a strong sense of colour. Discrimination of colour allows us to differentiate between objects whose surfaces have equal luminance but which differ in hue. Other species have sacrificed colour discrimination in order to see better in low-light conditions. The basis of our colour vision is discussed further in Chapter 11, which covers contrast in the microscope image, and also in Chapter 31, which discusses how best to display the recorded image for the benefit of the colour-blind.

    Our eyes are very sensitive to brightness: rods can signal the absorption of a single photon. From full sunlight to starlight represents a luminance ratio of 10 million to 1. We don’t see equal increments of luminance as equal increments of brightness; rather2 we see logarithmic increments of luminance as equal brightness steps (Figure 1.4). Therefore, as luminance increases, we require larger changes to discriminate a noticeable difference. As the illuminance (i.e. intensity) of light around us falls, we switch from daylight, or photopic, vision to night, or scotopic, vision. This occurs at luminance levels around 10⁻² to 10⁻⁶ cd/m²; below this light level we cannot read even large print text or recognise very small details, because the fovea3 is inactive.
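
    The logarithmic response described here (formalised as the Weber-Fechner law in Figure 1.4 below) can be sketched in a few lines of Python. Modelling perceived brightness as proportional to the logarithm of luminance – with the threshold and scaling constants chosen arbitrarily, purely for illustration – shows that doubling the luminance always produces the same perceived step, whatever the starting level:

        import math

        def perceived_brightness(luminance_cd_m2, threshold=1e-6, k=1.0):
            """Weber-Fechner sketch: perception grows with the logarithm of
            the stimulus. threshold and k are arbitrary illustrative values."""
            return k * math.log10(luminance_cd_m2 / threshold)

        # Doubling luminance adds the same perceived increment (~0.30 here),
        # whether we go from 1 to 2 cd/m^2 or from 100 to 200 cd/m^2.
        for lum in (1, 2, 100, 200):
            print(f"{lum:>5} cd/m^2 -> {perceived_brightness(lum):.2f}")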


    Figure 1.4 Response of the eye to luminance (a) Each greyscale luminance step on the left-hand scale differs by an equal arithmetic increment, whereas those on the right vary logarithmically, each step changing by a factor of two. In the absence of a background, the eye sees the logarithmic scale as equal brightness steps, but not the arithmetic one. This is known as the Weber-Fechner law: the relationship between a stimulus and its perception is logarithmic. This is unlike a semiconductor CCD sensor, which discriminates brightness in a linear fashion. The eye responds logarithmically so that familiar objects tend to maintain their perceived reflectivity in changing light conditions, as our eyes adapt to the change. Otherwise (like exposure values using a camera) we would continually notice changing brightness and colours as the sun slipped behind, and emerged from, cloud. This also explains why we do not see stars in daytime: the ratio of the stars’ intensity to that of the sky is very small by day, whereas at night the intensity coming from the sky is small and the ratio becomes large.

    However, if the backgrounds between areas of the same grey intensity are very different, this can alter our perception. In (b), the central grey rectangles are printed at the same intensity but are perceived differently, because of the contrasting adjacent backgrounds.

    (c) shows the eleven orders of magnitude that the eye is able to perceive, although not all at once. At any given time, the eye is adapted to only 2–3 orders of magnitude on the scale shown.

    Source: Falk et al. 1986. Reproduced with permission from John Wiley & Sons.

    Rhodopsin, the photopigment contained in the rod cells, bleaches rapidly in bright light, causing temporary loss of our sensitive night vision. The practical consequence is that, although partial recovery may take 10 minutes, full recovery can take four times longer and affect visual acuity for viewing dimly-lit specimens (e.g. fluorescent samples). Daylight vision by the cone cells (Figure 1.5) adapts much more rapidly to changing light levels, adjusting well to a change such as coming indoors out of sunlight in just a few seconds. Rods are insensitive to light of wavelengths longer than about 640 nm. Red illumination in a darkened microscope room is useful4 because red light bleaches rhodopsin inefficiently and thus maintains dark adaptation for high-quality microscopy.


    Figure 1.5 Photopic and scotopic vision Daylight vision in bright light is called photopic vision. It occurs above 10 cd/m², when the rod photoreceptors become saturated as illuminance rises. As the light levels around us fall, mesopic vision occurs when both rods and cones are active. At luminances less than 10⁻² cd/m², only scotopic vision occurs. Sensitivity is plotted relative to peak photopic sensitivity on a log vertical scale.

    At light levels where the cones function, the eye is more sensitive to yellowish-green light (555 nm) than other colours because this stimulates the two most common (M and L) of the three kinds of cones almost equally. At lower light levels, where only the rods function, sensitivity is greatest at 505 nm, a blueish-green wavelength.

    Source: Handprint.com 2009. Reproduced with permission from B. MacEvoy.

    Although rods can detect a photon as a single quantum of light, we are quite bad at seeing in the dark. This has implications when observing fluorescent specimens against a relatively dark background. Figure 1.6 explains why this is so. There is a statistical uncertainty associated with the detection of light by any photoreceptor (this is explained with respect to digital cameras in Chapter 30, Section 30.10). Although a rod can detect a single photon, a threshold exists to avoid spontaneous activation (and depletion) of rhodopsin: a minimum of six rods must be simultaneously stimulated with ≈10 photons in order to register a response at the threshold of vision. In Figure 1.6 there is a central dark area in the image. However, even at the visual threshold, the contrast is insufficient to discriminate this central area from the background. Only at 1000 times threshold is the central dark area surrounded by enough light areas to be visible with sufficient contrast. The converse is also true: small self-luminous fluorescent objects in the sample must stimulate a sufficiently large group of rods in order to be seen against a large dark background.


    Figure 1.6 Sensitivity of scotopic vision All four panels of the figure show the image on the retina of the same (bright) field containing a central dark patch. In panel 1 the illuminance is so low that only 6 of the 100 receptors receive a photon – at or just below the absolute threshold of vision. In panel 2 the illuminance is ten times higher, but the central dark patch is hidden within the noise of the random background photons. In panel 3, at 100 times threshold, the central dark patch is only just discernible but not convincingly so, and it is only in panel 4, at 1000 times threshold, that the central dark area is readily visible against the background with sufficient contrast.

    Source: Land and Nilsson 2001. Reproduced with permission from Oxford University Press.
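
    The statistics behind Figure 1.6 can be illustrated with a short simulation. Photon arrival obeys Poisson statistics, so the shot noise on a mean count N is √N; a darker patch only becomes reliably visible once its difference from the background is large compared with that noise. The grid size, patch contrast and photon levels below are illustrative assumptions, not values taken from the figure:

        import numpy as np

        rng = np.random.default_rng(0)

        def patch_snr(mean_photons, n_receptors=36, patch_fraction=0.5):
            """Difference between a bright background and a darker central
            patch, in units of the photon (shot) noise of that difference.
            Illustrative numbers only, loosely modelled on Figure 1.6."""
            background = rng.poisson(mean_photons, n_receptors)
            patch = rng.poisson(mean_photons * patch_fraction, n_receptors)
            diff = background.mean() - patch.mean()
            noise = np.sqrt((background.var() + patch.var()) / n_receptors)
            return diff / noise if noise > 0 else 0.0

        # SNR grows roughly as the square root of the photon count: near
        # threshold the patch is invisible; at ~1000x it stands out clearly.
        for photons in (0.06, 0.6, 6.0, 60.0):
            print(f"{photons:>6} photons/receptor: SNR ~ {patch_snr(photons):.1f}")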

    We can see – at best – 60 shades of grey, particularly if there is an abrupt (i.e. straight-edge) boundary between each shade and the next. The computer can record many more grey levels, shades, or intensities, irrespective of boundaries. This facet of digital imaging, and its consequences, is explained further in Box 18.1 of Chapter 18, on the operation of the confocal microscope, and in Section 30.6 of Chapter 30, on recording the digital image. Suffice to say here that (a) computers can be used to enhance contrast so that details, and data, which we would otherwise miss are made visible, and (b) we need to use saturation look-up tables (LUTs) to ensure that the full dynamic range of electronic detectors (such as CCD and sCMOS cameras for widefield microscopy and photomultiplier tubes, PMTs, for point-scanning confocal microscopes) is used effectively.
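
    As an illustration of point (a), the sketch below linearly stretches a low-contrast image so that detail confined to a narrow band of grey levels – a band the eye, with its roughly 60 distinguishable shades, might miss entirely – fills the whole 8-bit range. (A saturation LUT, as in point (b), is the acquisition-time counterpart of the same idea.) The array values are invented for illustration:

        import numpy as np

        def stretch_contrast(image, out_max=255):
            """Linearly rescale pixel intensities to fill the output range:
            a minimal sketch of post-acquisition contrast enhancement."""
            img = image.astype(float)
            lo, hi = img.min(), img.max()
            if hi == lo:                 # flat image: nothing to stretch
                return np.zeros_like(image)
            return ((img - lo) / (hi - lo) * out_max).astype(np.uint8)

        # A dim 'acquisition' occupying only grey levels 100-120:
        raw = np.random.default_rng(1).integers(100, 121, size=(4, 4),
                                                dtype=np.uint8)
        print(stretch_contrast(raw))     # same data, now spanning 0-255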

    1.3 The Anatomy of the Eye

    The eyeball is a sphere approximately 24 mm in diameter with the cornea bulging slightly outwards. Figure 1.7 shows the features of the human eye that are important for our purposes. Essentially, three transparent structures are held in position by three coats of tissue. The sclera is the white, fibrous, protective outer layer of the eye; the cornea is fused as an extension to this coat. A small part of the sclera is visible around the iris; hence the term ‘the white of the eye’. The choroid is a pigmented black membrane that has two functions: it prevents unfocused light from shining through the sides of the eye, and it prevents light that enters the pupil from reflecting inside the eye. The third coat, the retina, lies directly on top of the choroid. The three transparent structures within the eye are the aqueous humour, the lens and the vitreous humour.


    Figure 1.7 Anatomy of the eye The blind spot occurs because no photoreceptors can exist where the optic nerve and retinal blood supply join the eyeball. Because the eye tiles images in the brain, this area is ‘filled in’ unconsciously, despite the fact that the blind spot extends to about 6° in diameter and is therefore quite large. The yellowish macula lutea overlies the fovea, the area of highest visual acuity, which lies directly on the visual axis of the eye.

    Source: Waugh and Grant 2001. Reproduced with permission from Elsevier.

    Any optical instrument – and the eye is no exception – must have its image-forming surfaces (e.g. the cornea and lenses) kept in a steady position with respect to each other and the detector (screen, CCD/sCMOS face-plate, PMT tube, film plane or retina) upon which the image is formed. The flexible tissues of the eyeball and the two surfaces of different radii, or ‘figures’, of the lens are kept firm by the pressure of the internal fluids of the eye. The aqueous humour lies between the cornea and front, or anterior, surface of the lens. It is continually drained and replaced by fresh fluid. The more gelatinous vitreous humour fills the larger space of the eyeball between the rear, or posterior, surface of the lens and the retina. The vitreous humour is not replaced like the aqueous humour and tends to liquefy very slightly as we age.

    When the vitreous humour becomes less viscous, cellular debris or clumped strands from the vitreous humour can float freely. Being less transparent, these floaters cast fleeting shadows on the retina, following the motion of the eye whilst drifting through the vitreous humour. They are more prevalent in short-sighted eyes and become increasingly annoying as we age (see also Chapter 9, Section 9.13). Floaters are most likely to be seen when using objectives and eyepieces of high magnification, especially if the objective is of relatively low aperture and the illumination is adjusted so poorly that it does not fill the objective with light. The proper method of adjusting the microscope is explained in Chapter 9, Section 9.11. If a floater appears, moving the eye from side to side or up and down can create an internal current that will help move the floater away from the line of sight.

    The lens of the eye is composed of transparent cells held in place by suspensory ligaments connected to the ciliary body, which adjusts the focus of the lens. The majority of the focusing (approximately 60%) to form the image is done by the fixed cornea5, while the bi-convex lens adjusts to allow fine focusing to maintain a clear image. This focusing is called accommodation and is discussed further below. The focal length of the lens is 17.1 mm when relaxed and 14.2 mm when fully accommodated. The iris, which determines the pigmentation of our eye, dilates automatically in response to differing light intensities (irradiance) to regulate the diameter of the central opening, or aperture, of the eye – the pupil6. In very bright light, the iris contracts to give a pupil of approximately 2 mm, or f/8.3. At low light intensities this diameter increases to about 8 mm (or f/2.1) in young adults, or to about 5 mm (or f/3.3) in the elderly.
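
    The f-numbers quoted above follow from the photographic definition N = f/D (focal length divided by aperture diameter). A quick check in Python, using the relaxed focal length of 17.1 mm; small discrepancies from the quoted figures simply reflect rounding and the exact focal length assumed:

        def f_number(focal_length_mm, pupil_diameter_mm):
            """Photographic f-number N = f / D, applied to the eye."""
            return focal_length_mm / pupil_diameter_mm

        # Pupil diameters from the text: bright light, the elderly in dim
        # light, and young adults in dim light, respectively.
        for diameter_mm in (2.0, 5.0, 8.0):
            print(f"pupil {diameter_mm} mm -> f/{f_number(17.1, diameter_mm):.1f}")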

    Within the retina, the sizes of the rods and cones differ slightly. Cones are typically 40–50 μm long. A rod photoreceptor is longer (about 60 μm) and narrower than a cone: a rod is about 2 μm in diameter, whilst a cone is 2–8 μm in diameter, depending upon its location in the retina. With rods we see in black and white, or monochrome. Overall, there are roughly 1.5 × 10⁶ ganglion nerve cells per eye, with as many as 150 rods connected to a single nerve cell via bipolar cells. This summation, or convergence, increases sensitivity, which is useful when the light intensity is low. However, summation reduces definition – things look less sharp. A single rod is only slightly more sensitive than a single cone, but as few as 2–6 cones connect to each nerve cell, making the cone pathway approximately 30 times less sensitive to light. Cones are essential for resolving fine detail sharply and for colour vision.

    The area of the retina directly behind the lens is called the macula. It is densely innervated and contains the fovea centralis, which has the highest concentration of singly-innervated cones, giving excellent visual acuity in our central vision. The entire fovea is about 1.5 mm in diameter, comprising approximately 5° of the visual field. The rod-free area of the fovea is 0.5 mm in diameter, subtending 1.7°. Whilst the majority of cones (about 94%) are located in the peripheral retina, the 60 000 cones of the fovea are much slimmer. Here they measure 1–2.5 μm in diameter and are packed together in a hexagonal pattern. At the fovea, the layers overlying the photoreceptor layer are much thinner, giving the appearance of a pit (Figure 1.8) to the cross section of the whole retina at this point. Within the central 0.35 mm (1°) of the fovea is the very high-resolution area, the foveola, located at the bottom of the foveal pit. Here in the foveola there are about 12 000 cones; only a tiny fraction of the total number of photoreceptors are responsible for our high acuity vision.


    Figure 1.8 The fovea and foveal pit This area of highest visual acuity contains slimmer cones and no rod photoreceptors. The organisation of each layer within the retina is also shown in Figure 1.3. Section: monkey, toluidine blue; scale bar = 100 μm. For further details, refer to Kolb, H. (2007); https://www.ncbi.nlm.nih.gov/books/NBK11556/.

    The size of the cones in the foveal region, and their packing within the retina, has a direct bearing on visual acuity. The average cone-cone centre distance is 3.8 μm. In Chapter 3, Section 3.11, Box 3.4 is an explanation of how the packing of the light-sensitive cones, which make up the fovea, matches the maximum possible resolving power of the eye. At very best we cannot discriminate, or resolve, detail that is closer together than 72 arcseconds (i.e. 36 cycles per degree, CPD; visual acuity is often measured in CPD), which is equivalent to resolving 0.09 mm (90 μm) at the standard reference viewing distance (also sometimes called the nearest, or least, distance of distinct vision) of 250 mm7. This holds true only during conditions of photopic vision at normal levels of light intensity; when we are dependent upon our rods alone, in scotopic conditions, our visual acuity falls off.
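
    The 0.09 mm figure follows from simple trigonometry: the smallest resolvable separation at a viewing distance d for a visual angle θ is s = d·tan θ. A quick check, using the 72-arcsecond limit and the 250 mm reference distance quoted above:

        import math

        def resolvable_separation(angle_arcsec, distance_mm=250.0):
            """Smallest separation s = d * tan(theta) the eye can resolve
            at distance d for a visual angle theta."""
            theta_rad = math.radians(angle_arcsec / 3600.0)
            return distance_mm * math.tan(theta_rad)

        print(f"{resolvable_separation(72):.3f} mm")  # ~0.087 mm, the ~90 um quoted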

    Outside of the fovea, there is summation of both rod and cone cells. Thus the bandwidth of visual processing is highest for visual fields directly in front of us. The fovea comprises less than 1% of the area of the retina, but its importance to our vision is demonstrated by the fact that over 50% of the visual cortex is devoted to processing the information derived from this small region. Since the fovea contains no rods, it is insensitive to low intensity light signals. Astronomers know this: in order to observe a dim star, they use averted vision.

    Broadly speaking, the density of receptors and the degree of summation determine the degree of visual acuity: the greater the number of singly-innervated receptors, the greater the ability of the eye to distinguish individual objects at a distance. We have a mean value of about 200 000 cones per mm², which is good for a mammal but is quite low compared to a large raptor, which has about 400 000 cones per mm² as well as more ganglion cell connections between the receptors and the brain, since it requires a high visual acuity for hunting8. We may have limited visual acuity compared with a falcon, but it is much better than that of a honeybee, which has only 1% of our visual acuity and would not be able to see if there were any books on the shelf across the room, let alone read the titles. However, a honeybee does not need to read but does need to navigate. It can respond to the polarisation9 vector of light and uses this to navigate to food sources.

    Approximately two thirds of the population is right-eye dominant and one third left-eye dominant, preferring visual input from one eye over the other. This has consequences for adjusting and using the microscope, which is covered in Chapter 9, Section 9.11. The diameter of the exit pupil – the beam of light leaving the eyepiece – must be matched to the diameter of the pupil of the eye, so that no information about the image is lost. This is explained further in Chapter 8, Section 8.6.

    1.4 Aberrations of the Eye

    Because our eyes have a single lens, they suffer to some extent from both spherical and chromatic aberration. Our eyes may also suffer from aberrations such as astigmatism, for which we require corrective spectacle lenses. The subject of optical aberrations and how they affect the performance of the microscope in forming an image is discussed later in Chapter 6. In the microscope these lens aberrations can be corrected by cementing single lenses together into doublet or triplet combinations. These lens groups require combining convex (positive) and concave (negative) lenses; animal eyes are only bulging, or biconvex – they have no negative configuration. Doublet or triplet elements have a longer focal length than is feasible in a small eye. The eye must therefore rely on natural methods to correct optical aberrations.

    Spherical aberration manifests itself as unsharpness in the image (Figure 1.9). Spherical aberration in the eye is minimised by the different curvature (known as the ‘figure’ of the lens by opticians) of each of the two surfaces of the elliptical biconvex lens. Also, under daylight illumination, our pupils are smaller; this restricts light paraxially, close to the optical axis, and thus minimises the effects of both spherical and chromatic aberration. Additionally, a smaller pupil also ensures that light falls only on the central fovea for sharp vision. A larger pupil lets more light into the eye, while too small a pupil suffers from image blurring due to the effects of diffraction; but there is no advantage in the eye having too large a pupil either (Figure 1.10), because aberrations would then limit image quality. The direct effect of chromatic aberration in the eye manifests itself as different colours appearing slightly out of focus with respect to one another. Figure 1.11 demonstrates this.


    Figure 1.9 Spherical aberration The top two illustrations show the word ‘spherical’ photographed with a highly corrected camera lens (left) and with a single lens that had failed in the quality control process (right). Spherical aberration is most noticeable at the edges of the image; notice how the letter R is smeared and unsharp. A smaller pupil ensures that only paraxial light rays fall on the central fovea for sharp vision.

    The bottom two illustrations show spherical aberration in the microscopical image. The image on the left is taken with a corrected lens. The right-hand image of the same field of view shows spherical aberration. The image is blurred and unsharp; image contrast is also severely degraded. Section: monkey pancreas, acid fuchsin & azan; scale bar = 50 μm.

    Source: (top) Courtesy of Mr K. Glover. (bottom) Bradbury 1967. Reproduced with permission from Elsevier.


    Figure 1.10 Optimum pupil diameter In bright light, under photopic conditions, a pupil of 3 mm is the optimal compromise between the effects of diffraction and those of spherical and (to a lesser extent) chromatic aberration. As illuminance falls, visual acuity is limited by photon noise rather than optical quality.

    Source: (a) Land and Nilsson 2001. Reproduced with permission from Oxford University Press. (b) Tortora and Grabowski 2003. Reproduced with permission from John Wiley & Sons.


    Figure 1.11 Chromatic aberration Try to focus sharply on the red stripes, and the green stripes will appear slightly unsharp and out of focus. The brain will interpret each colour as being at a slightly different depth, with the red stripes appearing closer. Likewise, the red text on a blue background is harder to read (despite these colours being saturated) than the unsaturated yellow text on blue. This is because red and blue occur at opposite ends of the visible spectrum, and so the focusing mismatch arising from chromatic aberration is greater. The eye tires more easily trying to accommodate and focus on each colour separately.

    Source: Franklin et al. 2010. Reproduced with permission from John Wiley & Sons.

    The entire macula is covered with a yellow carotenoid pigment, very similar in hue to the yolk of an egg, called the macula lutea. Together the cornea, lens and macula filter out much of the harmful UV light. The pigmented macula absorbs blue light also. Since blue light refracts more than red light, absorption of these shorter wavelengths helps minimise the effects of chromatic aberration and glare from scattered light. We have approximately 64% of red (L) cones and 32% of green (M) cones but only 4% of blue (S) cones (Figure 1.12), and these are largely absent from the fovea, where visual acuity is highest. Additionally, our brains respond to light falling on-axis upon the fovea preferentially to other parts of the retina (the Stiles-Crawford effect). This reduces perception of any scattered blue wavelengths, helping further to minimise chromatic aberration.


    Figure 1.12 Distribution of cones in the retina (a) shows the sensitivity of each of the three cones to the visible spectrum as a function of wavelength. (b) shows the relative sensitivity of red, green and blue cones to one another, and (c) shows the overall photopic sensitivity of the eye.

    We have very approximately 60% of red (L; long) cones and 30% of green (M; medium) cones but only 5% of blue (S; short) cones. These three types have peak wavelengths near 564 nm, 534 nm and 420 nm, respectively. The images are false coloured so that red, green and blue are used to represent the L, M and S cones respectively. (The true colours of these cones are bluish-purple, purple and yellow). The proportion of S cones is relatively constant across eyes, ranging from 3.9 to 6.6% of the total population of cone photoreceptors. However, the proportions of M and L vary widely. The mosaics in (d) illustrate the enormous variability in the L:M cone ratio – the left-hand panel has an L:M ratio of 1:3; the middle panel, 2:1 and the right-hand panel, an extreme 16:1. The scale bar represents 5 arcminutes, which is approximately equivalent to 25 μm.

    Because there are only a few blue (S) cones, blue and yellow are seen with slightly poorer spatial resolution than other colours. This is demonstrated in (e). When the figure is held at a large distance from the eye, the thicker blue and yellow stripes appear more saturated, and the thinner set of stripes appear less saturated. This is because the thin stripes are closer to the spatial resolution limit of the S cone mosaic. The consequence of this for microscopical fluorescence images (e.g. for CFP or YFP) is to pseudo-colour the label of interest white (for high contrast) or green (for high sensitivity), rather than cyan or yellow, when discriminating fine detail in the image.

    Source: (a,b) Tilley 2011. Reproduced with permission from John Wiley & Sons. (c) Skatebiker at English Wikipedia, via Wikimedia Commons. (d) Professor H. Hofer. Courtesy of Professor Hofer. (e) Adapted from Professor D. Heeger. http://www.cns.nyu.edu/∼david/courses/perception/lecturenotes/color/color.html

    Eyes (and lenses) suffering from astigmatism10 have an unequal curvature of the cornea, the lens or both. This inequality causes the light to be refracted, or bent, to a different degree at different meridians of the cornea or lens. As Figure 1.13 shows, rays that propagate along two perpendicular planes, for example, will be brought to a focus by an astigmatic lens at different distances along the optical axis from each other. Figure 1.14 shows a test for diagnosing astigmatism. When using the microscope, it is necessary to determine whether your spectacles are corrected for astigmatism (see Box 9.5, Chapter 9). This dictates whether you must retain your spectacles for observation down the microscope, or whether you can remove them, if desired. Many people have a mild degree of astigmatism, often without realising it. The condition is severe enough to necessitate ophthalmic correction by spectacles or contact lenses in one third of the population.


    Figure 1.13 Astigmatism The vertical and horizontal planes are identified as tangential and sagittal meridians, respectively. When imaged with an astigmatic lens, part of an object (e.g. a vertical bar) in the tangential plane (shown here in green) will be focused at a different point to a separate part of the same object (e.g. a horizontal bar) in the sagittal plane (shown here in red). This means that the entire object can never be wholly seen in sharp focus without the aid of corrective lenses.

    Source: Allen and Triantaphillidou 2011. Reproduced with permission from Taylor & Francis Group.


    Figure 1.14 Test for astigmatism Close one eye, and view this figure without glasses or contact lenses. Hold the figure sufficiently close so that all lines look blurred. Steadily move the figure away until all or only one set of lines is in sharp focus. If all lines are in sharp focus, you don’t have astigmatism. If one set is in focus, rotate the figure to see if the lines become blurred (another set may become sharper). Re-orientate the figure as before; continue to move it until lines perpendicular to the first set are sharply focussed. Repeat the procedure with your glasses or contact lenses to see if your astigmatism is corrected.

    Source: Falk et al. 1986. Reproduced with permission from John Wiley & Sons.

Our eyes are sometimes likened to a photographic camera, but unlike the exposure of film in an analogue camera or the capture of photons and conversion to an electronic signal in a digital camera, our vision is not instantaneous but rather a continuous process. We build up a panoramic view by extremely rapid scanning microsaccadic movements, which our brains then tile into a sharp image. We do this for three reasons. Firstly, scanning allows the fovea – the region of highest cone density and greatest visual acuity – continually to image a scene. The fovea sees only the central 1.7° of the visual field, roughly equivalent to the width, at arm's length, of both thumbnails placed side by side. Without microsaccades, our vision would be severely limited: only our central vision would be at all sharp, with the rest an indistinct blur (Figure 1.15). Secondly, scanning allows the brain to tile images and thus fill in over the blind spot, the area without photoreceptors where the optic nerve leaves the retina. Thirdly, staring fixedly at a scene without microsaccadic scanning would cause the rods and cones temporarily to cease operating, since they must constantly regenerate their visual pigment, which is bleached by light.

Image shows a blurred scene of a road, cars and buses.

Figure 1.15 The visual field If our eyes were to operate in a fixed fashion, without employing microsaccadic movements to allow our brains to tile together a wider and sharper field of view, this is what we would see – assuming that the photopsins of the cones were not bleached. Our vision would suffer from spherical aberration, particularly noticeable at the edges of our field of view.

    1.5 Binocular and Stereoscopic Vision

Despite the limitations of a single lens and its consequent aberrations, our eyes are well adapted for their function. We can see into the distance and also focus close up, down to the near distance of distinct vision of about 250 mm. We can detect peripheral movement with ease, although many animals, with eyes at the sides of their heads, are able to see very large visual fields, in some cases behind themselves, without moving their heads. This is clearly useful for detecting moving predators; the trade-off is very limited 3D vision, which for these animals extends only in a narrow sector to their front. Predators, on the other hand, tend to have their eyes positioned on the front of their heads. Our forward-facing eyes are separated by the inter-pupillary, or interocular, distance of between 50 and 75 mm, and each eye is offset by about 5° from the optical axis through the centre of the head. Each eye therefore presents a different view simultaneously to the brain. Complex neural circuitry matches each set of points from one view with the equivalent set from the other eye. We exploit this binocular disparity to fuse in the brain the image from each eye, providing visual depth and three-dimensionality and allowing us successfully to navigate the world around us.

Parallax is the apparent displacement of an object due to a change in the position of the observer. For example, the needle of an analogue speedometer, because it is raised away from the face of the instrument dial, will indicate a different speed to a passenger in the car than to the driver. The passenger sees the needle from an angle and observes an apparent speed; the driver views the needle face-on and notes the true speed of the car.

The process by which the brain exploits parallax to compute depth perception and to estimate distances is known as stereopsis. The etymology of the word is from the Greek stereo, meaning 'solid', and opsis, meaning 'power of sight'. Birds such as chickens and pigeons, which have eyes on the sides of their heads, are unable to view a single object stereoscopically with both eyes. They cannot use parallax to judge distances, as we do, and must instead judge distance by moving their heads, viewing an object with each eye independently. This is referred to as motion parallax11 and is why pigeons can be seen continually bobbing their heads up and down. People who are stereo-blind, who lack vision in one eye, or who have one long-sighted and one short-sighted eye, often have poor depth perception, but it is not entirely absent. We learn strong monocular cues to judge both distance and depth. These include a change in the size of the retinal image and linear perspective: as a car drives away, it apparently becomes smaller. With aerial perspective, a change in colour occurs: distant mountains appear blue. Experience and expectation of light and shade also help, as does interposition: an object that is overlapped appears further away. A lack of binocular vision affects about 12% of the population. Stereo-blindness will not prevent you from adjusting and using a microscope normally.
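
The geometry underlying stereopsis can be written down compactly. In the standard pinhole approximation with parallel viewing axes (a textbook simplification, not a model of real ocular optics), a point at distance Z produces a disparity d between the two retinal images given by d = f × B / Z, where B is the interocular baseline and f the nodal-point-to-retina distance. A minimal sketch with illustrative numbers:

    def depth_from_disparity(baseline_m, focal_m, disparity_m):
        """Pinhole-stereo triangulation: Z = f * B / d."""
        return focal_m * baseline_m / disparity_m

    B = 0.063   # typical interocular distance, 63 mm
    f = 0.017   # approximate nodal-point-to-retina distance, 17 mm
    for d_um in (100.0, 10.0, 1.0):   # retinal disparity in micrometres
        z = depth_from_disparity(B, f, d_um * 1e-6)
        print(f"disparity {d_um:5.1f} um -> distance {z:7.1f} m")

The inverse relationship shows why stereopsis degrades with distance: far beyond a few hundred metres, the disparity falls below what the visual system can detect, and we must rely on the monocular cues listed above.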

Béla Julesz, a Hungarian neuroscientist, developed the computer-generated random-dot stereogram that, with a stereoscope, allowed the brain to see 3D shapes from two flat 2D images. Within the random dots, a region of the pattern is horizontally displaced in one image relative to the other. Observers with binocular vision can process this disparate information to see a 3D shape standing out in depth above the random background. This proved that depth perception is a neurological phenomenon, rather than a purely visual process. Christopher Tyler, a student of Julesz, together with Maureen Clarke, further developed the autostereogram (employing a repeating pattern) so that the 3D effect could be seen without the aid of special glasses or a stereoscope. These later gave rise to the popular Magic Eye books.
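
Julesz's construction is easy to reproduce. The following sketch (our minimal illustration, not Julesz's original program) builds a left/right pair of random-dot images that are identical except for a central square displaced horizontally in the right-eye image; fused in a stereoscope, the square appears to float above the background.

    import numpy as np

    def random_dot_stereogram(size=200, square=80, shift=4, seed=1):
        """Return a (left, right) pair of binary random-dot images with a
        central square displaced horizontally in the right-eye image."""
        rng = np.random.default_rng(seed)
        left = rng.integers(0, 2, size=(size, size))
        right = left.copy()
        top = (size - square) // 2   # top-left corner of the square
        # Copy the central square, displaced 'shift' pixels to the left
        # (the defaults assume shift <= top).
        right[top:top + square, top - shift:top - shift + square] = \
            left[top:top + square, top:top + square]
        # Fill the strip uncovered by the displacement with fresh random
        # dots, so it has no correlated partner in the left image.
        right[top:top + square, top + square - shift:top + square] = \
            rng.integers(0, 2, size=(square, shift))
        return left, right

    left, right = random_dot_stereogram()

Neither image alone contains any visible square; the shape exists only in the correlation between the two, which is why the demonstration proved that stereoscopic depth is computed by the brain.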

    1.6 Why We Need Optical Aids

When we want to see something in more detail, we bring it closer to our eyes. It is difficult or impossible, for example, to read a newspaper at a distance of several metres. As the newspaper is brought closer to the eye, its image spreads further over the retina (Figure 1.16), and the print becomes easier to read, until a certain distance is reached beyond which it is no longer possible to see it clearly in focus. Alternatively, the object we are viewing may be so far away that we simply cannot approach closely enough to see the detail that we would like.

Figure 1.16 Ray diagram of an image being formed on the retina The diagram shows the ray construction for a distant arrow, in blue, with the central ray passing through the nodal point of the lens of the eye. The nodal point, N, lies between the front and back focal points of the relaxed eye, about 17.1 mm from the retina; it is effectively the centre of the lens. The visual angle θ is the angle subtended by the blue arrow at the eye, usually stated in degrees of arc. As the arrow is brought closer to the eye, the image is spread further over the retina, is detected by more and more cones and becomes easier to see. For an object of size S at distance D, the visual angle in degrees is given by θ = 2 × arctan(S/2D).

    Source: Adapted from Abbott 1984 with permission from Pearson.
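
The formula in the caption is easy to evaluate. Below is a minimal sketch (the print size and viewing distances are illustrative) that computes the visual angle of an object and, using the 17.1 mm nodal distance quoted above, the approximate size of its retinal image:

    import math

    def visual_angle_deg(size, distance):
        """Visual angle in degrees subtended by an object of a given
        size at a given distance (same units for both)."""
        return math.degrees(2.0 * math.atan(size / (2.0 * distance)))

    def retinal_image_mm(size, distance, nodal_mm=17.1):
        """Approximate retinal image size by similar triangles through
        the nodal point: image / nodal distance = object / distance."""
        return size * nodal_mm / distance

    # Illustrative: 1.5 mm newsprint viewed at 400 mm, then at 250 mm.
    for d_mm in (400.0, 250.0):
        theta = visual_angle_deg(1.5, d_mm)
        image_um = retinal_image_mm(1.5, d_mm) * 1000.0
        print(f"{d_mm:5.0f} mm: {theta:.3f} deg, retinal image {image_um:.0f} um")

For small angles, halving the viewing distance roughly doubles both the visual angle and the size of the retinal image, spreading the detail over correspondingly more cones.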

    In both cases we can use an optical aid to help us distinguish, or resolve, further detail. If we cannot get sufficiently close, we might use a telescope or a pair of binoculars. Where we can get close up to the object, we use a magnifying glass or microscope.

These examples highlight two of the limitations of the eye. When the newsprint is far from the eye, only a small image of it is formed on the retina. Several details may then fall on any one photoreceptor, so that individual fine details are not discriminated, or resolved. As the paper is moved closer, the image enlarges and falls on more of the sensory cells in the retina, enabling more detail to be discerned. The important factor here is the viewing angle subtended at the eye by a feature in the object, which increases as the object is moved closer to the eye.

The other limitation is exhibited when the lens of the eye reaches the end of its close-focusing adjustment, or accommodation (Figure 1.17), usually with the object at a distance of about 250 mm for a normal adult eye. Moving the object closer will no longer help because, although the viewing angle is increased, the lens cannot bend sufficiently to keep the image in focus. This near point defines the nearest distance of distinct vision, sometimes called the least distance of distinct vision. It is the point at which the lens of the eye is maximally accommodated.

Figure 1.17 Shape of the lens during accommodation In the relaxed eye the lens is about 3.6 mm thick at the centre; in the accommodated eye, it thickens to about 4.5 mm. The focal length changes from about 17 mm (relaxed) to about 14 mm (fully accommodated).

    Source: (left) Waugh and Grant 2001. Reproduced with permission from Elsevier. (right) Adapted from Franklin et al. 2010 with permission from John Wiley & Sons.

    Young people generally have a near point closer than 250 mm, but as we age, our power of accommodation decreases. The lens and ciliary muscle both become less flexible, and we gradually lose the power to accommodate. Our near point recedes, and we must perforce hold a newspaper (or this book!) further from our eyes12 in order to read the print. This condition is called presbyopia (Figure 1.18) and is the reason why older people need spectacles for close vision.
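
The decline plotted in Figure 1.18 can be tied to the near point with a one-line calculation: for an eye whose far point is at infinity, the amplitude of accommodation in dioptres is simply the reciprocal of the near-point distance in metres. A sketch with illustrative values:

    def amplitude_dioptres(near_point_m, far_point_m=float("inf")):
        """Amplitude of accommodation: A = 1/near - 1/far (dioptres)."""
        return 1.0 / near_point_m - 1.0 / far_point_m

    print(amplitude_dioptres(0.10))   # child, near point 100 mm  -> 10.0 D
    print(amplitude_dioptres(0.25))   # adult, near point 250 mm  ->  4.0 D
    print(amplitude_dioptres(1.00))   # presbyope, near point 1 m ->  1.0 D

    # Rough cross-check with Figure 1.17 (ignoring the refractive media
    # of the eye): focal lengths of about 17 mm relaxed and 14 mm fully
    # accommodated imply a power change of
    print(1000.0 / 14.0 - 1000.0 / 17.0)   # about 12.6 D

That 12–13 D figure is of the same order as the maximal amplitude a young eye achieves at the top left of the curve in Figure 1.18.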

Image shows a hyperbolic X-Y graph of the amplitude of accommodation (dioptres, Y-axis) against age (years, X-axis), illustrating how the near point changes as one ages.

Figure 1.18 Presbyopia As we age, our ability to accommodate is gradually lost as the lens and ciliary muscle become less flexible.
