En WP LSM Plus Practical Guide of Deconvolution
Cover image:
Mitochondria in an Arabidopsis thaliana cell. mCherry (green) is targeted to the matrix and GFP (magenta) to the intermembrane space.
Comparing Airyscan SR (left) and Airyscan Joint Deconvolution (right). Courtesy of J.-O. Niemeier, AG Schwarzländer, WWU Münster, Germany
Authors: Olivia Prazeres da Costa, Ralf Engelmann, Martin Gleisner, Lutz Schäfer, Xianke Shi, Eva Simbürger, Max Voll, Klaus Weisshart, Georg Wieser
Carl Zeiss Microscopy GmbH, Germany
Date: December 2021
Introduction
Since its introduction to widefield fluorescence microscopy in 1983, deconvolution has witnessed the development of a wide variety of algorithms. It has been successfully and routinely applied to almost all microscopy techniques. The past decade in particular has seen a sharp rise in the popularity of deconvolution, fueled mainly by the rapid improvement of computer hardware, particularly the advance of NVIDIA® CUDA® technology and the parallel processing power of graphics processing units (GPUs). The speed of deconvolution, which used to be the bottleneck of the method, has improved dramatically, e.g., up to 30× faster with a modern GPU compared to traditional central processing units (CPUs). Due to these advances in computer hardware, deconvolution has left its somewhat dated state and has experienced an impressive revival. It is now an integral part of many microscopy applications.
For widefield microscopy, deconvolution is the method of choice to improve image quality. The algorithm reassigns the out-of-focus blur in the 3D stack, the primary source of image degradation in widefield microscopy, back to the in-focus plane as signal. The result is better contrast, a higher signal-to-noise ratio (SNR) for the in-focus structures, and increased resolution. The on-the-fly implementation of deconvolution for widefield fluorescence microscopy thus provides a gentle and fast 3D capability, highly suitable for cell biology and bacteriology applications. In recent years, images from laser scanning confocal and spinning disk confocal microscopes have also been regularly processed with deconvolution. Using mainly iterative algorithms, coupled with scanning oversampling and reduced pinhole size, confocal deconvolution has been proven to increase lateral and axial resolution beyond the theoretical diffraction limit. At the expense of speed and sensitivity, confocal deconvolution has become the most affordable super-resolution (SR) technique. The most recent advances in deconvolution are in the traditional hardware-based SR microscopes: Structured Illumination Microscopy (SIM) and Image Scanning Microscopy (ISM, such as ZEISS Airyscan). Both SIM and ISM require dedicated multi-phase raw data acquisitions. The subsequent reconstruction process usually involves a linear inverse-filter-based deconvolution, most notably a Wiener filter. However, by carefully implementing an iterative algorithm and a dedicated point spread function (PSF) model, it is possible to drive the resolution down to the sub-100 nm domain, opening new possibilities for SR live-cell imaging. This has been successfully demonstrated by the dual iterative SIM process of ZEISS Lattice SIM, and the Joint Deconvolution process of ZEISS Airyscan 2.
This white paper aims to provide a practical guide for users of deconvolution in easy-to-understand language and with a minimum of specialized terms. It will address three main questions: What is deconvolution? Why should I use deconvolution for my microscopy images? And how do I use it correctly?
What is Deconvolution?
Generally speaking, deconvolution is a mathematical method used to reverse the unavoidable impact of optics and electronics. Deconvolution has different areas of application, such as seismology and astronomy as well as microscopy. The reversal has been shown to improve resolution, contrast and signal-to-noise ratio (SNR).

The point spread function (PSF) is used to describe the blurring of the sample seen in the resulting image. In widefield microscopy, for example, the PSF has a double cone-like shape that extends infinitely in the axial direction.

Photon assignment and reassignment, what does this mean?

Assignment (forward model)
The picture of a sample seen through a microscope often appears blurry. This is due to the fact that the optics merge (by adding) information from locations other than the specific points of interest in the sample. The size and form of the spread from these locations is described by the PSF. What the microscope does is often called super-position or convolution.

Reassignment (reversing the forward model)
The idea of reversing the microscope's blurring is obvious. It requires subtracting intensities where they have been wrongly added, but also adding intensities back to the locations they were taken from. In this process, called deconvolution, the sum of all intensities remains the same before and after the procedure. Conservation of energy is therefore preserved, and quantitative measurements can be conducted.

Algorithms implemented in ZEISS ZEN (blue edition)

Deblurring
Deblurring attempts to subtract contributions from approximated out-of-focus information. The main advantage is that it speeds up computation and thus allows this method to be part of data acquisition. Deblurring is also known as no neighbor or sometimes as unsharp masking. By design, this method cannot reconstruct quantitatively correct results.

Nearest Neighbor (NN)
NN was the first pragmatic attempt (Castleman 1975) to deblur 3D microscopic data with a 2D method similar to deblurring. Due to the computational limitations of the time, rigorous mathematical treatment was not an option. Therefore, only a simplified additive model was considered, in which approximated out-of-focus contributions were subtracted from in-focus information. It does not perform a true 3D inversion like today's rigorous methods. Instead, it uses 2D planes, with the side effect of being insensitive to axial sampling. This allows image data that does not adhere to Nyquist sampling restrictions to be used successfully. By design, however, this algorithm will not provide quantitatively correct results.

Regularized Inverse Filter (RIF)
RIF (or linear least squares) uses a direct, linear inversion of the forward model. It can also be seen as a standard convolutional filter using an inverted PSF. To counter instability due to ill-posedness (e.g., noise and PSF singularities), regularization is added (Tikhonov and Arsenin 1977); this is also known as the Wiener Filter (Wiener 1949).
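The regularized inverse filter can be sketched in a few lines of NumPy. This is an illustrative sketch, not the ZEN implementation: the image spectrum is divided by the optical transfer function (the Fourier transform of the PSF), and a small constant k stabilizes the division where the OTF is nearly zero.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-4):
    """Regularized (Wiener-style) inverse filter in the Fourier domain.

    psf must be an array of the same shape as image, centered in the middle.
    k is the regularization constant countering ill-posedness where the
    optical transfer function (OTF) is small.
    """
    # Move the PSF center to the array origin, then take its spectrum (OTF).
    otf = np.fft.rfft2(np.fft.ifftshift(psf / psf.sum()), s=image.shape)
    spectrum = np.fft.rfft2(image)
    # Regularized inversion: Y * conj(H) / (|H|^2 + k)
    restored = spectrum * np.conj(otf) / (np.abs(otf) ** 2 + k)
    return np.fft.irfft2(restored, s=image.shape)
```

With noise-free synthetic data and a small k, a blurred point source is restored to a much sharper peak; on real data, k trades sharpening against noise amplification, which is exactly the instability the regularization term controls.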
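The forward model and its energy-conservation property can also be demonstrated directly. In this sketch, a hypothetical two-emitter "sample" is blurred with a Gaussian stand-in for the PSF (a real widefield PSF is a 3D double cone); because the blur kernel is normalized, the total intensity before and after convolution is identical, which is what makes quantitative reassignment possible.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical 2D "sample": two point emitters on a dark background.
sample = np.zeros((64, 64))
sample[20, 20] = 1000.0
sample[40, 44] = 500.0

# Forward model: convolution with a normalized blur kernel (stand-in PSF).
# Because the kernel sums to 1, the total intensity is conserved.
blurred = gaussian_filter(sample, sigma=3.0)

print(sample.sum(), blurred.sum())  # both ~1500: energy is conserved
```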
Fast Iterative: Richardson–Lucy classical iterative
To overcome the disadvantage of the Meinel algorithm, which only works on perfectly symmetric PSFs, a rigorous statistical approach such as maximum likelihood is required (Richardson 1972, Lucy 1974). The maximum likelihood of the sample is determined by the given observation and its parameters (e.g., the PSF). Surprisingly, this leads to a simple extension of the Meinel algorithm. The only downside is that many iterations (sometimes thousands) are required. Another caveat is that with a rising number of iterations the result becomes unstable, and the likelihood will decrease again at some point.

Fast Iterative: Accelerated Richardson–Lucy
To reduce the high number of iterations, a somewhat heuristic gradient-based acceleration scheme has been developed and shows promise (Biggs 1986). This algorithm converges to almost the same result as the Richardson–Lucy algorithm, but in about an order of magnitude fewer iterations.

Constrained Iterative: MLE
Despite the promising results of the accelerated Richardson–Lucy algorithm, there is still the possibility of instability during very high numbers of iterations due to missing regularization. Additionally, the acceleration based on linear gradients may not lead to robustly converged results.

Faster convergence (fewer iterations) combined with variable choices for the types of likelihood and regularization can be obtained by a generalized conjugate gradient maximum likelihood algorithm (Schaefer et al. 2001). This is the current ZEISS standard for high-performance deconvolution.

Combined (joint) Deconvolution
It is well understood that more, and more diversely collected, information about the sample can reveal greater detail and therefore higher resolution than single-view observations. Today, in the era of SIM, ISM, LFM and SPIM, several instruments have appeared on the market that take multiple views of one sample with the expectation of revealing more information. While each of these methods focuses on different physical properties of illumination and observation angles, the resulting images always have at least one added raw data dimension. This offers the possibility not only of reconstructing the sample, but also of including the respective PSF of each view in this process. An algorithm that can fuse and reassign all collected spatial content into one single reconstruction, called joint deconvolution, is superior to previously known methods. It can be shown that the various viewing angles of the sample result in a larger optical transfer function (OTF) support, in addition to the detail observed in every view. Therefore, an exceptional increase in resolution to and below 100 nm can be expected, and has been shown with the Airyscan joint iterative deconvolution recently released by ZEISS.

What is the point spread function (PSF)?
The PSF describes the measure of blur in a given imaging system. An optimal PSF is key to improving the quality of the image as a result of the deconvolution. Kinds of PSFs:

Measured
A PSF is measured by acquiring an image of sub-resolution-sized fluorescent beads, typically 100 nm in diameter. Using a software wizard, the user can select the beads from a batch that are spaced sufficiently far from each other and are in decent condition. The selected beads are then registered automatically and averaged to form the final space-invariant PSF.

How to generate a measured PSF?
Using a measured PSF can potentially improve the deconvolution result, especially with data acquired using high-NA objectives (NA > 1.2). PSF measurement is also an excellent approach to evaluate the current optical condition of your microscopes and should be carried out regularly. However, the procedure is daunting to most first-time users, and it requires proper preparation and practice. View a brief guide on how to generate a measured PSF here.

Theoretical
It is faster and easier to compute a PSF than to measure the physical instrument PSF. Here we can choose between scalar and vectorial PSF models. The vectorial PSF is usually more precise and allows for polarized illumination light (Born & Wolf 1959). Most PSF models in the ZEISS deconvolution system can be adjusted for spherical aberration (Gibson & Lanni 1992) caused by layers of varying refractive index due to the mounting of the sample. These layers consist of immersion, coverslip and embedding material.

Depth variant (theoretical)
Furthermore, a PSF can be computed to vary over the sample depth, in the form of multiple PSFs, one for each chosen depth. In this case most processing methods will operate in depth-variant mode, reconstructing spatial features along the optical axis until the image is significantly improved.

ZEN PSF standard
The ZEN CZI format of a PSF has been standardized for all available microscopes manufactured by ZEISS and others. It will work with all deconvolution functionalities, despite their possibly different options.
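The classical Richardson–Lucy update described above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch of the textbook scheme, not the accelerated or regularized ZEN implementation: each iteration multiplies the current estimate by the back-projected ratio of the observed image to the re-blurred estimate.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=50):
    """Classical Richardson-Lucy deconvolution (Richardson 1972, Lucy 1974).

    Each iteration re-blurs the current estimate with the PSF, compares it
    to the observed image, and back-projects the ratio with the mirrored
    PSF. The multiplicative update keeps the estimate non-negative.
    """
    psf = psf / psf.sum()           # normalized PSF conserves total intensity
    psf_mirror = psf[::-1, ::-1]    # adjoint (mirrored) PSF for back-projection
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(reblurred, 1e-12)  # observed / predicted
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

On a noise-free synthetic point source, a few dozen iterations already sharpen the peak noticeably; on real data, the iteration count matters, since the estimate can become unstable at very high iteration numbers, as noted above.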
Why do we have optical distortion?
Noise, scatter, glare, blur (see also Wallace 2001)

Noise originates from:
1) Fluorescence generation and the Poisson distribution of emission as a function of observation
2) Electronic noise, thermal noise, readout noise and discretization noise (mostly additive Gaussian distributions in the observation)
Both sources are covered by the imaging model.

Figure 1 Murine brain astrocytes labeled for GFAP (Alexa 488). Comparison of a slow (left) and a fast (right) scan on an LSM 800.

• Scatter
Caused by light passing through turbid media within the sample. Due to complicated modelling, this is currently not considered.

• Absorption
Caused by light passing through absorbing, dense media within the sample. Due to nonlinearity, it is not considered in the imaging model.

• Glare
Caused by internal reflections of bright sources of light in low-light areas. This cannot be considered in the imaging model.

• Blur
Caused by the impact of the PSF. This is fully covered in the imaging model.

Practical Guide

General recommendations
What are the general recommendations for image acquisition settings if the resulting images are meant to be processed with a deconvolution algorithm?
The key to best results for deconvolution is to present an optimal input image to the algorithm. The optimal image is the one with the highest possible signal and the lowest possible noise. In addition, the sampling rate, which in a confocal system determines the number of pixels in the resulting image, must be high enough. What is high enough? The sampling rate must result in an image where the size of each pixel is less than half of the size of the structure you want to resolve, provided that the system supports this sampling. This requirement is known as the Nyquist–Shannon sampling criterion. This means you need to know the resolution of the optical system to get the sampling right, especially if the sampling can be changed, as on a confocal system. Typically, your confocal system provides information on the current sampling rate for a given acquisition setting. The sampling can then be changed to comply with Nyquist or even go beyond (like 2× Nyquist for Airyscan, as detailed below). When the acquisition is performed with a camera, the optical system is designed to comply with Nyquist and the settings don't have to be changed. However, you need to be careful when you increase the size of the pixels by binning. This method requires special consideration, and you must know the resolution of your system to be able to predict the potential consequences.

How do you generate such an image from your specimen?
For widefield systems, the camera has a set number of pixels which is typically high enough to process the resulting images with deconvolution. For confocal systems, the sampling needs to be defined for the current optical parameters, which include the objective, zoom and detection range, apart from the size of the pinhole. Modern user interfaces provide some way to set the sampling to the rate necessary to match the Nyquist criterion.

Once the sampling is set, the acquisition parameters need to be balanced to achieve a bright signal with low noise. With camera images this means optimizing exposure time to get a bright image. However, the brightness should not exceed 50% of the total dynamic range. This can be checked by looking at the histogram, a graph displaying the distribution of the signal intensities over the given grey scale. If the intensities are mainly located in the range up to 50% of the grey scale, the settings are good to go – for imaging one sample plane. For confocal systems, this prerequisite is less of an issue because the applied algorithms take care of it. In any case, saturating the image absolutely must be avoided; no pixel should be at or over the maximum grey level. Check with a range indicator that shows saturated pixels in an obvious color scheme.

For confocal acquisition, there are additional ways to optimize the signal-to-noise ratio. One way is to increase the excitation light and lower the detector gain, but this might result in phototoxic effects like bleaching, which are unwanted since bleaching reduces the signal while reducing the noise only to a smaller extent. To find a good balance, one trick is to mimic the acquisition, first without laser light, then increase the detector gain to the limit where electronic noise is not yet visible. Other strategies do not change laser settings but rather
change the illumination time per pixel by decreasing imaging speed or by averaging the signal over 2 or more scans. But again, this increases the light dose onto the sample.

Typically, you want to look at three-dimensional structures, whether you use a camera for imaging, maybe even paired with structured illumination, or you use a confocal system. The Nyquist–Shannon theorem, our sampling prerequisite, also applies to the third dimension. The required number of images in Z is again defined by the point's emission signal as specified by the PSF. For any deconvolution algorithm to work with 3D data, you need to acquire sufficiently "dense" images in Z as well as a sufficient number of images. Typically, the software provides some indication of how to choose the right sampling in Z.

To determine the range of images required in Z, you need to know the attainable resolution of your system. If you know the extension of the PSF in Z, the imaged Z-range should cover at least 3–5 times the size of the PSF. When using a confocal system, the sampling rate depends on the slice thickness indicated in the user interface, with the optimal settings suggested. The total range depends on what you want to see in Z, but a minimum of 5 slices is required for the DCV to work.

Now that you have optimized the settings, you will notice that this has some consequences for the overall imaging process.

First and immediately observed, image acquisition takes longer. The higher number of pixels in XY and the higher number of images in Z both increase acquisition time compared to settings in which you don't consider these parameters. Imaging for the purpose of applying deconvolution can be time-consuming, especially using laser scanning confocal microscopes – but most often it's worth it! However, longer acquisition time means longer excitation time, which means a greater risk of bleaching the sample.

How can you deal with this drawback?
The most effective thing to do is to choose as small an area as possible for imaging, reducing the field of view to the required size only. High resolution or super-resolution is typically applied to already tiny, subcellular structures, so restricting the scanned area to the structures of interest reduces the frame size and therefore the imaging time for each frame. This not only helps to speed up image acquisition but also speeds up the subsequent data processing. For a widefield system this might not be so relevant; however, if binning of pixels still provides a high enough sampling rate, at least data processing will be faster. The amount of data the algorithm must process determines the amount of time it takes to get the result.

If the highest resolution is not the most important aspect to be achieved with DCV, but rather the reduction of noise and some deblurring, then using a lower sampling rate might be a good option. It helps to speed up image acquisition and reduces the light impact on the sample. This approach will still produce high-quality images with better visible structures, although they are not maximally resolved.

What about a moving target?
Whenever you have a fixed sample and nothing is moving, the point of origin of the emission signal is known. Whenever you have a living sample like cultured cells, organoids, or whole organisms of various sizes, the structure you want to look at may be in motion while you acquire a single image or an image stack series. Depending on the speed of this movement, the application of deconvolution to those images may be impossible – not only with scanning systems, where the point of interest might move from line to line, but also with camera images. Even a short exposure time can result in a blurred image. If you cannot shorten the imaging time, then try to slow down potential movement of the sample. Knowing the sample and its potential for movement is crucial in this case.

If imaging requires a specific duration, how can you speed up the time-to-result?
The total amount of data needed determines the total processing time. The aforementioned advances in modern programming using graphics processors brought the time-to-result into a range where it became far more attractive to use deconvolution algorithms in the first place. A very good approach to processing large data sets is to do it during image acquisition. System set-ups with additional processing computers are offered, with an automatic transfer of data to start the processing whenever enough data have arrived. You will then see the result shortly after the experiment has finished.

Anything else to consider? A word on the objective lens
The objective you choose is the first interaction with the sample. Any damage to the objective will result in bad images from the start, no matter how well you take care of the rest of the set-up. Keep it clean. Choose an objective where the refractive index of the immersion medium matches the index of the embedding medium. A higher numerical aperture (NA) of the objective will not solve the problem of getting higher resolution unless these indices are correct. Change the embedding medium if needed – you will see the result immediately. For a detailed discussion on cleaning microscopes, please refer to the ZEISS "The Clean Microscope" brochure.
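The sampling rules discussed above (pixel size below half the resolved distance in XY, a Z step of half the axial PSF extent, and a Z range slightly beyond the sample) can be bundled into a quick planning helper. This is an illustrative sketch using common textbook estimates (Rayleigh r = 0.61 λ/NA laterally, 2 λ n/NA² axially); the suggestions built into your acquisition software may differ slightly.

```python
def lateral_nyquist_pixel_nm(wavelength_nm, na):
    """Largest pixel size (nm) satisfying Nyquist: half the Rayleigh distance."""
    return 0.61 * wavelength_nm / na / 2.0

def z_stack_plan(sample_thickness_um, wavelength_nm, na, n_immersion=1.518):
    """Suggest a Nyquist Z step (um) and slice count for a 3D acquisition.

    Adds roughly half an axial PSF extent above and below the sample, and
    enforces the minimum of 5 slices required for deconvolution.
    """
    axial_res_um = 2.0 * wavelength_nm * n_immersion / na ** 2 / 1000.0
    step_um = axial_res_um / 2.0                      # Nyquist in Z
    z_range_um = sample_thickness_um + axial_res_um   # half a PSF on each side
    n_slices = max(5, int(round(z_range_um / step_um)) + 1)
    return step_um, n_slices

# Example: GFP emission (~520 nm), 1.4 NA oil objective, 8 um thick cell.
print(round(lateral_nyquist_pixel_nm(520, 1.4)))  # ~113 nm pixels in XY
step, n_slices = z_stack_plan(8.0, 520, 1.4)
print(round(step, 2), n_slices)                   # roughly 0.4 um Z steps
```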
Deconvolution methods

The classical deconvolution for fluorescence widefield microscopy
Fluorescence microscopy has significantly advanced life science research. In the past century, many major discoveries in cell biology and neuroscience were possible only because of direct visualization of cells, subcellular components, and their mobility and interactions. Fluorescence microscopy plays a pivotal role in such direct visualization. By labelling specific molecules or structures with a light-emitting fluorophore, small particles and delicate structures can be identified and quantified with much higher specificity and precision. Fluorescence microscopy was pioneered by ZEISS at the beginning of the twentieth century. Fast forward to today, and it remains one of the most widely used instruments in any biology research lab.

A traditional fluorescence widefield microscope has a straightforward yet elegant epi-illumination beam path design. Its essential feature is a mechanism to illuminate the sample with a selected wavelength range, then separate and detect the shifted, longer-wavelength fluorescence light. Since the fluorescence signal is typically three to six orders of magnitude weaker than the illumination light, the challenge is to produce high-power illumination while simultaneously and efficiently separating and detecting the weak fluorescence emission. Historically, this was achieved by using a powerful mercury arc lamp as a light source and implementing special short-pass, bandpass, or long-pass filters and beam splitters. Modern fluorescence widefield microscopes benefit enormously from the continuous development of optics, coatings, light-emitting diode (LED) and detector technologies. The availability of more than a dozen high-power LEDs provides flexible, stable and gentle fluorescence illumination. Apochromat optics and objectives correct the color shift across the entire visible light range. Advanced coating technology has pushed filter efficiency close to 100%. State-of-the-art EMCCD and sCMOS cameras detect the faintest fluorescent signals with above 95% quantum efficiency (QE). These advances have made it possible to observe and record very challenging samples, like living cells and biomolecules in three dimensions, for long periods of time with minimum disturbance.

However, these hardware developments do not address a fundamental problem of fluorescence widefield microscopy: image degradation and optical blur caused by light diffraction through an optical system. The ever-improving sensitivity of the hardware makes such degradation even more apparent. The model of the optical blur is based on the concept of a PSF. In a fluorescence widefield microscope, the shape of the PSF approximates an hourglass, and its size (the level of the spread) is determined by the wavelength and the numerical aperture (NA) of the objective.

The PSF serves as the basic building block of an image. Anything smaller than the PSF is not directly resolvable, regardless of magnification or pixel size. Unfortunately for fluorescence widefield microscopy, the spread of the PSF in the axial direction is much longer than in the lateral direction. The axial extension is theoretically infinite, and it significantly limits the axial resolution. What is worse, this unwanted axial extension (or out-of-focus blur) mixes with the in-focus signal of other layers, further reducing the SNR of the in-focus structure. The resulting fluorescence widefield image therefore often suffers from lower resolution, poor SNR, and reduced contrast, especially with high-NA objectives and thick samples. Popular optical sectioning methods such as confocal microscopy improve the resolution and contrast by physically removing such out-of-focus blur. The hardware implementation of laser-based point illumination and a pinhole can efficiently stop the out-of-focus blur from reaching detection. However, this process inevitably removes a significant number of photons from the sample and reduces overall detection efficiency. Wouldn't it be better to "reuse" such out-of-focus blur? This is what restorative widefield 3D deconvolution aims to do. Suppose we have an extraordinarily thin sample, and the optical system generates a perfect PSF, free of aberration and noise. In that case, we can mathematically reassign all the out-of-focus blur back into the focal point with high confidence. Sadly, such a perfect imaging condition does not exist. In practice, we must deal with multiple imaging artifacts, like light scattering, spherical aberration and additional noise from the sample and camera.

The previous sections have already discussed in general how to properly acquire images for deconvolution. For fluorescence widefield microscopy, there are some additional considerations.

1) Work with thin samples only, ideally within 10 µm. Light scattering is strictly a random phenomenon. The level of scattering depends on the light wavelength (a longer wavelength scatters less), the optical heterogeneity of the tissue and, most prominently, the tissue thickness. Thick tissue significantly randomizes all signals. Without an additional mechanism to differentiate in-focus and out-of-focus signals, such as the pinhole in confocal or the grid projection in Apotome, the deconvolution photon "reassignment" is marred by error.

2) Pay special attention to image sampling and axial ranges. A fluorescence widefield image has a fixed pixel size for a given set of objective, tube lens, camera and camera adapter. It does not have the pixel size flexibility of a confocal or a pre-calibrated configuration like in SR-SIM. The user, in many cases, is required to confirm the pixel size and the sampling manually. Most microscope software will automatically calculate and display the pixel size. Nevertheless, it is still necessary to compare that with the theoretical resolution. A GFP imaging channel using a 63× / 1.4 objective with a 1× tube lens, a ZEISS Axiocam 705 mono camera, and a 0.63× camera adapter gives a pixel size of 87 nm. The theoretical resolution is ~220 nm, which translates into a lateral sampling rate of 2.5× – ideal for deconvolution. The axial sampling rate can be controlled by the Z motor, and many types of imaging software will suggest an "optimal" 2× sampling. For widefield deconvolution, it is advisable to acquire axial ranges slightly above and below the physical sample thickness, by about half of the axial PSF size, so that the additional out-of-focus blur can also be utilized (see Figure 2). If a given fluorescence widefield image does not have optimal sampling, it is recommended to use a neighbor-based algorithm like nearest-neighbor instead.

Figure 3 Example of deconvolution improvement by additional corrections (panels A–E). A) Cross-section view of the raw data of a U2OS cell in PBS solution, acquired using a Plan Apochromat 63× / 1.4 oil immersion objective. Strong spherical aberration is present due to refractive index mismatch. B) Default constrained iterative deconvolution. C) Constrained iterative deconvolution with background correction activated. D) Constrained iterative deconvolution with background correction and aberration correction for PBS as embedding medium. E) Constrained iterative deconvolution with background correction, aberration correction, and depth variance correction with 10 PSFs. All done with ZEN Deconvolution.
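The pixel-size arithmetic in the example above can be reproduced directly. Note that the 3.45 µm sensor pixel used here is our assumption for the Axiocam 705 mono; check your camera's data sheet for the actual value.

```python
def image_pixel_size_nm(camera_pixel_um, objective_mag,
                        tube_lens_mag=1.0, adapter_mag=1.0):
    """Pixel size in the sample plane: camera pixel / total magnification."""
    total_mag = objective_mag * tube_lens_mag * adapter_mag
    return camera_pixel_um * 1000.0 / total_mag

# 63x/1.4 objective, 1x tube lens, 0.63x camera adapter,
# assuming a 3.45 um camera sensor pixel (e.g., ZEISS Axiocam 705 mono).
print(round(image_pixel_size_nm(3.45, 63, 1.0, 0.63)))  # 87 nm, as in the text
```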
Figure 4 Giant live fluke stained with Hoechst 33342. The homogeneous fluorescence in the inner parts of the widefield image (left) poses a serious problem for
the background correction algorithms (center). Some structures remain, but generally, there are too many black spaces between the cells. This becomes visible
when comparing the results to an optical section, acquired with ZEISS Apotome (right). Notably, the prominent rim around the structure, as seen in the background-
corrected image in the center panel, is an artifact of interference in the widefield image, which is not seen with an optical sectioning system.
to be internally transformed to match the geometry of the entirely different. Without knowing the true image, it is almost
deskewed data. To simplify the workflow and avoid potentially impossible to realize this. With Apotome, quantitative optical
incorrect PSFs, sectioning calculations can be used for a variety of samples. ZEN Lattice Lightsheet processing combines the deconvolution and deskewing steps in one operation.

Deconvolution for Apotome – reliable and easy to use

Optical sectioning allows efficient minimization of out-of-focus light to create crisp images and 3D renderings. ZEISS Apotome 3 uses a grid to generate a pattern of intensity differences. After the fluorescence of a grid position is acquired, the grid moves to the next position. If out-of-focus light is present at a certain region of the sample, the grid becomes invisible. From the individual images acquired with structured illumination, reliable optical sections can be calculated using well-documented algorithms.

Despite hardware-based methods for creating optical sections, purely software-based solutions have emerged over the last years. As pure software solutions can use only the acquired widefield image, users must trust that these black-box solutions produce structures that are real and do not remove structures when "enhancing" the image. Figure 4 shows a comparison of a widefield image, a background-subtracted image processed using a software algorithm, and an image acquired with ZEISS Apotome. Even though the background-subtracted image shows a high contrast that is pleasing to the eye, features are missing, and structures look

Figure 5A Conventional fluorescence. Figure 5B Apotome 3. Drosophila neurons, blue: DAPI, yellow: GFP. Objective: Plan-Apochromat 20×/0.8. Courtesy of M. Koch, Molecular and Developmental Genetics, University of Leuven, Belgium

Deconvolution improves the quality and resolution of images acquired with Apotome even further. Compared to deconvolution of widefield images, the patented algorithm for Apotome deconvolution uses the additional information present from the structured illumination. This allows a better reconstruction of the sample without introducing artifacts. Apotome deconvolution uses linear approaches to yield quantitative results. The signal-to-noise ratio is improved and a higher optical resolution can be achieved.

One of the benefits of Apotome 3 is its ease of use. General recommendations with respect to signal-to-noise ratio and sampling of the specimen must be taken into account; beyond this, few additional settings need to be adjusted. The most important parameter is the number of images with the projected grid. While Apotome needs at least 3 images with the projected grid for processing of the optical section, increasing the number to 5 or 7 images can improve image quality. Further increasing the number of grid positions for an optical section does not lead to significant improvements, but it does decrease the frame rate and may introduce photobleaching.

Deconvolution with Apotome is also easy to use. By default, a theoretical PSF is automatically calculated using the information in the metadata. The strength of the deconvolution is selected automatically by the software but can be set manually, although there is the risk of introducing artifacts for settings not matching the sample and acquisition parameters. The refractive index of the sample medium and the distance to the coverslip can be set, if they are known, to correct for aberrations.

For calculation of the optical section, with or without deconvolution, the image can be corrected for bleaching. Local bleaching correction corrects the bleaching for each pixel and typically yields the best results. Alternatively, no correction or a global bleaching correction can be chosen. Additionally, a Fourier filter of different strength
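The grid-projection sectioning calculation itself is well documented in the literature. A minimal sketch of the classic three-phase square-law demodulation (after Neil et al.) is shown below — purely illustrative, not the ZEISS implementation:

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Optical section from three grid images with phases 0, 120 and 240 degrees.
    The in-focus, grid-modulated signal survives the demodulation, while
    out-of-focus blur (identical in all three images) cancels out."""
    i1, i2, i3 = (np.asarray(a, dtype=float) for a in (i1, i2, i3))
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2) / np.sqrt(2)
```

With sinusoidal modulation and 120° phase steps, the output is proportional to the local modulation amplitude and independent of the unmodulated (out-of-focus) background, which is exactly why the grid disappears from the section.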
Figure 6 Widefield, Apotome 3, and Apotome 3 + Deconvolution. Cortical neurons stained for DNA, microtubules and microtubule-associated proteins. Courtesy of L. Behrendt, Leibniz-Institute on Aging – Fritz-Lipmann-Institut e.V. (FLI), Germany.

Figure 7 Widefield. Autofluorescence of a Lotus japonicus root infected with symbiotic bacteria stained with mCherry. Courtesy of F. A. Ditengou, University of Freiburg, Germany.
can be used if remaining stripes happen to be present in the image. Bleaching and Fourier filter correction can also be combined if necessary.

When using Apotome deconvolution in the processing function of ZEN, additional processing options are available, but typically they are not required. These corrections are not Apotome-specific and can be applied to other deconvolution methods, too.

To remove offsets, the background subtraction option can be used to subtract the intensity of a smoothed average. If light sources with a non-constant light output were used for image acquisition, flicker correction can be applied. State-of-the-art LED light sources typically do not suffer from flickering, and their use improves data quality during acquisition. The pixel correction function replaces the bad pixels that cameras can have with the mean value of the neighboring pixels. State-of-the-art CMOS cameras like the ZEISS Axiocam are designed for scientific applications and typically have few, if any, bad pixels. The fluorescence decay correction compensates for bleaching during a Z-stack. If bleaching is present and not corrected, it significantly alters deconvolution results. This option uses the average intensity of the individual slices of the Z-stack to correct for bleaching, thereby improving the deconvolution result for Z-stacks with noticeable bleaching.

Even though a 2D deconvolution can be applied to Apotome images, 3D deconvolution improves the results because more information is available. Figures 6 and 7 show examples of widefield, Apotome and Apotome + deconvolution images, demonstrating the increase in image quality and resolution.

Confocal Deconvolution – LSM Plus

The history of deconvolution for confocal data is rather long, but the history of truly embedded and tailored deconvolution in confocal systems is rather short. The advances in development have taken place only in the last few years. At ZEISS, these advances have produced the new LSM Plus configurations of the LSM 900 and LSM 980.

In the last 10 years, ZEISS has developed two major improvements to its confocal instruments: parallel spectral detection, represented by the Quasar (Quiet Spectral Array) detector design, and resolution and speed detection, represented by Airyscan. For both, ZEISS implemented parallelization in acquisition as a tool to boost SNR and minimize sample stress. This fulfills a dream voiced by many users to combine these detection methods in confocal instruments. It also was one of the catalysts for the LSM 9 series, introduced in 2019, which provides the first implementation of these developments.
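Returning briefly to the fluorescence decay correction for Z-stacks: it rescales each slice based on its average intensity. A minimal sketch of this idea (illustrative only; the choice of the first slice as reference is an assumption, not ZEN's exact implementation):

```python
import numpy as np

def decay_correction(stack):
    """Compensate bleaching in a Z-stack (axis 0 = Z) by rescaling
    every slice so its mean intensity matches the first slice."""
    stack = np.asarray(stack, dtype=float)
    means = stack.mean(axis=(1, 2))     # average intensity per slice
    gain = means[0] / means             # per-slice correction factor
    return stack * gain[:, None, None]
```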
1. Solid state laser lines
2. Twin Gate main beam splitters
3. Galvo scanning mirrors
4. Objective
5. Pinhole and pinhole optics
6. Secondary beam splitters
7. Recycling loop
8. Quasar detection unit
9. NIR detectors
10. Emission filters
11. Zoom optics
12. Airyscan detector
Figure 8 Beam path of LSM 980 with Airyscan 2 and NIR detection
In 2021, ZEISS made a big step in the direction of embedded and tailored deconvolution as an important technology with the release of the new LSM Plus. So, what exactly was done here? Spectral detection advanced from 32 channels (8 with 4 read-outs) to 36 channels. Wider data bandwidth, including Online Fingerprinting, and GaAsP cathode technology also contributed to increased versatility in the last few years. This meant a more cost-effective 6-channel solution in 2018/19, and a more powerful 36-channel solution, including special NIR detector channels, in early 2021, all integrated in the same proven lambda stack and spectral unmixing workflow.

Bringing both technologies together was a challenge for ZEISS, since both features produce large amounts of data per acquired pixel. Consider spectral detection with up to 36 channels, and Airyscan with 32 elements. This would have resulted in 36 × 32 = 1152 channel elements for each pixel, a data load that exceeds any readout electronics on the market now and in the foreseeable future. So, if spectral Quasar detection cannot fully come to Airyscan, can some Airyscan maybe come to Quasar? That indeed was the breakthrough that ZEISS started to develop two years earlier, and which became a product in 2021, as the LSM Plus function for all ZEISS LSM 900 and 980 systems.
Figure 10 LSM Plus function with optimized acquisition settings. Sampling is set
automatically and interlinked with the pinhole setting.
Figure 11 Resolution limits of Confocal, LSM Plus (Confocal Wiener DCV), and Airyscan SR detection. In addition to the resolution gain, both Airyscan and LSM Plus
reduce noise and improve visibility.
What are the technical capacities of LSM Plus, and what is the deconvolution behind it?

                  Confocal        LSM Plus        LSM Plus          Airyscan SR*
                                  e.g. 0.8 AU     e.g. 0.3 AU       (1.25 AU)
                                                  (closed PH)
Resolution
Spectral range    380 – 900 nm    380 – 900 nm    380 – 900 nm      400 – 750 nm
The resolution gain of LSM Plus can be higher than the usual DCV factor of 1.4. The reason for this is the quality of the optimized PSF models, which can adjust to the instrument properties in the same way as with Airyscan. Another advantage is the robustness of the Wiener filtering used here in the event of image noise or aberrations: such a mismatch does not create annoying artifacts, but it does reduce the maximum resolution that can be achieved. The perceived SNR is always better with LSM Plus, even with non-optimal samples.

LSM Plus offers an embedded and optimally tailored deconvolution which improves the spectral detection properties of ZEISS LSM 980 and ZEISS LSM 900 and complements other features of the LSM, like NIR detection, Online Fingerprinting, multiphoton imaging or Airyscan.

Additionally, there is a true collaboration between Quasar and Airyscan when using the multitracking mode. Here, Airyscan becomes an additional channel in the spectral acquisition setup, providing extra high resolution in addition to the LSM Plus processed channels. More than 40 data channels are processed in this mode, and after the deconvolution steps of Airyscan and LSM Plus, the resulting channels can undergo spectral unmixing for perfect dye separation. By closing the pinhole in the LSM Plus channels, the resolution can be pushed further to get an optimal match to the Airyscan channel.

Figure 12 Murine cremaster muscle, multi-color label with Hoechst (blue), Prox-1 Alexa488 (green), neutrophil cells Ly-GFP, PECAM1 Dylight549 (yellow), SMA Alexa568 (orange), VEGF-R3 Alexa594 (red), platelets Dylight649 (magenta). Acquired with 32-channel GaAsP detector using Online Fingerprinting on ZEISS LSM 980, without (top) and with LSM Plus (bottom). Sample courtesy of Dr. Stefan Volkery, MPI for Molecular Biomedicine, Münster, Germany
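Linear spectral unmixing, as mentioned above, amounts to a per-pixel least-squares problem: the measured channel intensities are modeled as a linear mixture of reference dye spectra. A minimal sketch (the array layout and solver are illustrative assumptions, not ZEN's implementation):

```python
import numpy as np

def unmix(pixels, spectra):
    """Solve pixels ~= abundances @ spectra for the dye abundances.
    pixels:  (n_pixels, n_channels) measured channel intensities
    spectra: (n_dyes, n_channels) reference emission spectra
    Returns (n_pixels, n_dyes) least-squares abundances."""
    s = np.asarray(spectra, dtype=float)
    p = np.asarray(pixels, dtype=float)
    abundances, *_ = np.linalg.lstsq(s.T, p.T, rcond=None)
    return abundances.T
```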
Step by step to 90 nm – Airyscan Joint Deconvolution (jDCV)

ZEISS Airyscan features a 32-channel gallium arsenide phosphide photomultiplier tube (GaAsP-PMT) area detector that collects a pinhole-plane image at every scan position. Each detector element functions as a single 0.2 AU pinhole, where the data from each element carries not only intensity information but also light distribution and location information. In total, 1.25 AU is collected by the whole detector. The 0.2 AU of each element determines the sectioning and resolution in XY and Z, whereas the 1.25 AU determines the sensitivity. By shifting all the images back to the center position, which can be done easily since the amounts of their displacements are known, an image called the "Sheppard sum" is generated. This image has a 1.4× higher resolution compared to that of a classical confocal image.

The classic Airyscan Processing, which includes the linear Wiener filtering, leads to a simultaneous increase of 4–8× in signal-to-noise as well as a two-fold spatial resolution improvement over classical confocal. Thereby, the choice of the Wiener noise filter balances between higher resolution and better SNR in the image. Changing the Wiener filter parameter results in the selection of different frequency bands and their individual amplification. The signal of each detector element is deconvolved separately and the contribution of each detector element is weighted. This allows a robust DCV with the positional information of the detector elements also considered, leading to a resolution of 120 nm laterally and 350 nm axially for 488 nm excitation.

The new Airyscan jDCV Processing, in contrast to the classic Airyscan Processing, features an accelerated joint Richardson–Lucy algorithm, supporting Airyscan raw data images as well as ring-preprocessed images. Taking advantage of the unique Airyscan concept, including light distribution and location information from the pinhole plane of each detector element, this "multiview" information is used to achieve a better reconstruction result, especially for noisy and low-SNR images, in comparison to the Wiener filter implementation in the classic Airyscan processing. A better statistical interpretation and additional a-priori information (noise distribution and non-negativity) lead to a sample-dependent resolution improvement down to 90 nm. This makes information available which has not been accessible previously, for example spine distribution and morphology for quantifications (Figure 14). These benefits can be extended to multiphoton excitation to gain significant improvements in resolution and SNR.
2. Quality Threshold represents an alternative "stop criterion". Iterative DCV processing stops when the difference in fit quality between the two most recent iterations is smaller than the quality threshold value.
Figure 15 Images of Cos-7 cells stained with anti-alpha-tubulin Alexa Fluor 488 were processed with the conventional SIM algorithm based on the generalized Wiener filter and with the SIM² reconstruction. The images show an improvement in resolution for SIM² compared to SIM. The superior sectioning capability of SIM² is shown in the movie. Objective: Plan-Apochromat 63× / 1.4 Oil
Optimizing SR-SIM systems mainly means increasing the quality of the image reconstruction. This can be achieved by improving the encoding of the high-resolution information during image acquisition and the decoding during deconvolution. On the encoding side, using the 2D lattice illumination implemented in ZEISS Elyra 7 leads to higher modulation contrast and improved image quality compared to conventional 1D stripe pattern SIM systems [SIM_03]. Moreover, in contrast to the stripe illumination, lattice illumination does not require rotational movement and thus enables fast image acquisition. The increased efficiency of the lattice illumination is also very gentle in terms of phototoxicity, making Elyra 7 Lattice SIM a live-cell imaging system. The user of the Elyra 7 has the flexibility to choose between 13- or 9-phase image acquisition.

For achieving the highest resolutions, 13 phase images are recommended. When live samples of high dynamics are investigated, 9 phase images might be advantageous due to the increased frame rates. It is important to mention that both modes perform 3D super-resolution microscopy, even when images are acquired only in 2D. In general, it is fully sufficient to acquire images only within the sample. The user can easily identify the sample area by the visibility of the illumination pattern. For optimal performance of the Elyra 7, it is recommended to use only 10 – 15% of the full grey value range, i.e., intensities of up to 6000 – 8000. For dimmer samples, it is advantageous to reach at least 100 – 200 grey values above the background noise.

On the decoding side, image reconstruction consists of multiple steps: order separation, parameter estimation, Fourier filtering, order shifting (and weighting), order combination and deconvolution. Commonly, SIM reconstruction algorithms are based on generalized Wiener filtering, and the order combination and deconvolution are performed in a single step.

Generalized Wiener filtering is a linear process leading to fully quantitative results and can be carried out at high speed. In Elyra 7, Wiener filtering is referred to as SIM processing. The user can easily reconstruct the images by choosing between different strengths (weak, standard, strong) of deconvolution according to the signal-to-background levels of the acquired raw data, or by manually defining the strength value. Nevertheless, the generalized Wiener filter has certain limitations: a) the achievable lateral and axial resolutions are limited to a two-fold improvement, b) overprocessing artifacts occur in low signal-to-background data and c) no iterative deconvolution can be used [SIM_01]. The first two limitations result from the non-optimal PSF used during the image reconstruction. Briefly, the microscope PSF used for Wiener filtering does not reflect the changes applied during the first steps of the processing. Thus, it is beneficial to decouple the SIM processing and deconvolution steps and adjust the PSF. The insufficiency of improving only the DCV by implementation of iterative approaches has been demonstrated in multiple cases [SIM_04, SIM_05]. Thus, ZEISS Elyra 7 contains a two-step SIM² image processing, where, in a first step, SIM processing is performed not only on the raw images but also on the microscope PSF. The resulting SIM-PSF is then used for the subsequent iterative DCV. This way, SIM² can double the conventional SR-SIM resolution in XY and improve the sectioning quality (Figure 15). Furthermore, SIM² is much more robust against overprocessing artifacts. The implementation of the SIM² algorithms in ZEN software is as easy as performing conventional SIM processing. Based on the signal-to-background quality of the raw data, the user can choose one of the default parameters, Weak, Standard or Strong, for fixed and live samples. An extra mode exists for reconstruction of large homogeneous structures such as nuclei, cells expressing free fluorophores, etc. For interested users, manual adjustment of the deconvolution parameters such as Number of iterations, Regularization weight, Input SNR and Sampling is also accessible to maximize the SIM² performance. But we strongly recommend performing the default processing first.

In summary, SR-SIM is a fast, live-cell compatible imaging technique able to double or quadruple the resolution depending on the deconvolution method used.

FAQ: Deconvolution

Is deconvolution quantitative?
A quantitative method is defined here as one in which the sum of all intensities (excluding intensities that are not collected) is preserved between the pre- and post-deconvolved image. In a nutshell, some deconvolution algorithms are quantitative. Deconvolution algorithms generally can be categorized into three groups: neighbor-based, inverse-based, and iterative-based. Neighbor-based algorithms, also known as deblurring methods, are fundamentally subtractive in nature. They seek to estimate the contribution of out-of-focus signals in each frame and remove it. Neighbor-based algorithms are therefore qualitative only.

Inverse-based algorithms, including the famous Wiener filter, are linear restoration methods that are applied in a single operation. Inverse-based algorithms usually involve specific "regularization" to deal with noise, but the overall process is strictly linear and quantitative.

Iterative-based algorithms, in an over-simplified view, perform a repetitive comparison between a current forward-model result and the measured data. Meaningful "constraints" are applied to each repetition's result, and the previous output is used as the next estimate until a suitable result is achieved. For raw data with a decent signal-to-noise ratio (SNR), iterative-based algorithms are designed to be quantitative, no matter whether they contain linear or non-linear computational components.
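The neighbor-based (deblurring) idea fits in a few lines: blur the adjacent slices with a defocus kernel and subtract a fraction of them from the current slice. The kernel and the fraction c below are free illustrative parameters, which is exactly why the result is qualitative — the subtraction discards intensity:

```python
import numpy as np

def nearest_neighbor_deblur(stack, defocus_kernel, c=0.45):
    """Subtract blurred copies of the slices above and below each slice.
    stack: (Z, Y, X); defocus_kernel: centered 2D kernel, same YX shape."""
    conv = lambda a, h: np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(h))))
    stack = np.asarray(stack, dtype=float)
    out = np.empty_like(stack)
    for k in range(len(stack)):
        above = stack[max(k - 1, 0)]
        below = stack[min(k + 1, len(stack) - 1)]
        out[k] = stack[k] - c * (conv(above, defocus_kernel) + conv(below, defocus_kernel))
    return np.clip(out, 0.0, None)   # clip negatives created by the subtraction
```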
How does DCV exceed optical resolution?
In principle, deconvolution (DCV) is an image processing technique that seeks to reassign out-of-focus or blurred light to its proper in-focus location or to remove it entirely. Using this procedure, the signal-to-noise ratio (SNR) as well as the signal-to-background ratio (SBR), or even the contrast of the image, will be improved. But how can resolution be improved even beyond the attainable optical resolution of the microscopy technique used for image recording?

If the image is modelled as a convolution of the object with the point spread function (PSF), then theoretically the DCV of the raw image should restore the object. However, given the fundamental limitations of any imaging system and image-formation model, the best one can get is an estimate of the object. Convolution operations are best computed by the mathematical technique of Fourier transformation, since in Fourier or frequency space the convolution is just the product of the Fourier transform of the PSF, the so-called optical transfer function (OTF), with the Fourier transform of the image. The resulting Fourier image can then simply be back-transformed into real space. And this brings us back to the resolution question.

The higher the frequencies supported by the OTF, the higher the resolution, in terms of distances, in the restored image. Noise, however, contains the highest frequencies, so many algorithms use an approach termed regularization to avoid or reduce noise amplification. If it is possible to make assumptions about the structures of the object that gave rise to the image, it can be possible to set certain constraints for obtaining the most likely estimate. For example, knowing that a structure is smooth results in discarding an image with rough edges. A regularized inverse filter, like the Wiener filter, uses exactly that approach. By this method, high frequencies arising from structures that would otherwise be obscured by high noise frequencies become available and the resolution will be improved. However, such a linear approach would only be able to achieve the theoretically possible resolution of the optical system. In other words, resolution is restricted to the support of the OTF. So, what about surpassing the optically possible resolution?

This can be achieved by constrained iterative algorithms that improve the performance of inverse filters. As their name implies, they operate in successive cycles. Usually, constraints on possible solutions are applied that not only help to minimize noise but also increase the power to restore the blurred signal. Such constraints include regularization, but also other constraints like non-negativity. Non-negativity is a reasonable assumption, as an object cannot have negative fluorescence. Such algorithms not only raise the high frequencies within the OTF support; in addition, they are able to extend the OTF support. And that in turn means higher frequencies than those transported through the optical system and, therefore, higher-than-optical achievable resolution.

How much can I improve resolution with deconvolution?
It is difficult to put a number on it. Nevertheless, it is sometimes claimed that up to 2 times better resolution can be achieved with deconvolution. Contrary to popular belief, microscopy resolution cannot be straightforwardly measured. The most accepted standard to measure resolution is based on the so-called Rayleigh criterion. Here, the smallest resolvable detail is defined when two point-objects are so close together that the center maximum of one point's Airy disk falls on the first minimum of the other's. Practically, a particular sample of either two point-objects (such as a Gattaquant Nanoruler) or two line-objects (such as an Argolight calibration slide) with specified distances is measured to confirm the resolution.

Another simplified and commonly used method is to measure the full width at half maximum (FWHM) of a sub-resolution object. A profile measurement across the center of an Airy disk approximates a Gaussian curve (a Bessel function, to be precise), and the resolution is defined as the length of the FWHM of the Gaussian curve.

Iterative-based deconvolution seems to enjoy some advantage in resolution improvement, especially for samples with small and clearly defined structures when the resolution is measured with the FWHM method. Suppose you have a bright, low-density sample of 100-nm fluorescent beads and use iterative-based deconvolution with strong strength and more iterations. In that case, you can quickly obtain an alleged FWHM resolution of less than 100 nm. Determination of resolution improvement in real biological samples with complicated structures is even more challenging. Deconvolution over-processing is strongly associated with artifacts and should always be avoided for biological sample data.

Lastly, the potential resolution improvement is only achievable when the raw data is properly, or to some extent overly, Nyquist sampled. In summary, deconvolution does improve resolution, but we shouldn't blindly put a number on it and expect the same outcome every single time.

Can I do deconvolution on 2D images?
Yes. While deconvolution yields the best result for optimally sampled 3D stacks, you can perform deconvolution on a single 2D image. However, you cannot use the nearest-neighbor algorithm, which requires neighboring frames above and below. If you use ZEN, it is recommended to use the "Deblurring" method for 2D images.
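The regularized inverse filtering described above can be sketched compactly: divide by the OTF in frequency space, damped by a constant k that stands in for the noise-to-signal ratio. This is a constant-k Wiener-type filter for illustration; practical implementations estimate the noise spectrum rather than using a single scalar:

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-3):
    """Wiener-type deconvolution. psf: centered, same shape as image,
    unit sum. k: regularization constant (noise-to-signal proxy)."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))   # optical transfer function
    img_f = np.fft.fft2(image)
    est_f = img_f * np.conj(otf) / (np.abs(otf) ** 2 + k)
    return np.real(np.fft.ifft2(est_f))
```

A larger k suppresses noise amplification but limits the restored high frequencies — exactly the resolution-versus-noise trade-off of regularization discussed above.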
Should I use theoretical PSF or measured PSF?
It depends; there are pros and cons to both methods. A theoretical PSF can be generated automatically from the file's metadata and can easily accommodate various image inputs, e.g., data acquired with different objectives or wavelengths. A theoretical PSF is inherently noise-free and can adapt to spherical aberration by using different PSFs at different imaging depths (the depth variance implementation in ZEN). However, a theoretical PSF assumes perfect hardware and experimental conditions that are seldom realistic. Those non-optimal imaging conditions lead to deconvolution artifacts. A measured PSF, on the other hand, should represent the "real" PSF and include all possible aberrations (except for some SR techniques). But in practice, a measured PSF (obtained by imaging and averaging sub-resolution fluorescent beads; please refer to "How is a measured PSF generated?") requires a considerable amount of effort, and the result is usually very noisy. A measured PSF is also less tolerant to spherical aberration, typical for thick samples or samples not directly attached to the cover glass. Lastly, a measured PSF represents only the current microscope conditions when the

controlled properly. Ideally, it should be a single layer of beads with minimum overlap when the beads are defocused (see Figure 16). Practically speaking, you can sonicate and dilute the beads' stock solution to multiple concentrations in ethanol, drop 3–5 of them onto the same cover glass, and let them air dry. Lastly, the beads should be prepared with the same imaging conditions as your raw data. This includes using identical cover glass and mounting medium, sample depth, and temperature. Note that it is beneficial to have an antifade agent in the mounting medium.
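To illustrate where a theoretical PSF comes from: a widely used shortcut approximates the lateral widefield PSF by a Gaussian with sigma ≈ 0.21 λ/NA. The sketch below is purely illustrative — ZEN's depth-variant theoretical PSFs are computed from full diffraction models, not from this approximation:

```python
import numpy as np

def gaussian_psf_2d(shape, pixel_nm, wavelength_nm, na):
    """Unit-sum lateral PSF using the Gaussian approximation of the
    Airy pattern (sigma ~ 0.21 * wavelength / NA)."""
    sigma_px = 0.21 * wavelength_nm / na / pixel_nm
    ny, nx = shape
    yy, xx = np.mgrid[:ny, :nx]
    r2 = (yy - ny // 2) ** 2 + (xx - nx // 2) ** 2
    psf = np.exp(-r2 / (2.0 * sigma_px ** 2))
    return psf / psf.sum()
```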
3. Evaluate, process, and save your measured PSF. Once the PSF measurement is done, you should first examine and assess the PSF quality using the orthogonal viewing tools. The PSF should be clear, straight, and symmetrical. If the PSF shows significant artifacts, such as tilting or uneven diffraction ring patterns, it is sensible to inspect the optical components of the microscope and redo the beads imaging. Spherical aberration cannot be fully corrected, so slight axial asymmetry is acceptable. A PSF from a single bead usually has a high level of noise; it is necessary to average multiple beads to improve SNR. The ZEN Deconvolution software package has dedicated bead-averaging tools. Follow the software wizard, choosing single and well-separated beads to form the final measured PSF. Lastly, save the measured PSF with a detailed description of the date, objectives and wavelength. The metadata should contain all necessary parameters, but you don't want to assign the wrong PSF to your raw data. Since the optical alignment and hardware conditions vary over time, it is good practice to update the measured PSF regularly.

Should I deconvolve all my microscopy images?
If you have the resources and the technical know-how, you probably should. Deconvolution is proven to improve image quality. You can expect better contrast for 2D images, or for 3D images with non-optimal sampling, using neighbor-based methods. For optimally sampled 3D images, using inverse-based or iterative-based methods can increase resolution and SNR. If you carefully control the parameters, deconvolved images remain quantitative. The only drawbacks are the long processing time, possible deconvolution artifacts, and duplication of data. Deconvolution artifacts need to be carefully reviewed (see section "FAQ: Deconvolution artifacts"), and over-processing needs to be avoided in most cases. It is also advisable to keep the raw data for future processing and comparison.

Can I deconvolve transmitted light brightfield images?
Generally, deconvolution cannot process brightfield data for the following two reasons: 1) the PSF would be wrong as it does not include the condenser; 2) brightfield signals originate from light absorption, which is a nonlinear process and mathematically non-tractable.

However, there is one generally accepted linear approximation possible under the following conditions: 1) the sample is approximately non-absorbent, or at least very thin, so that absorption can be neglected; 2) the image needs to be gray-level inverted, then deconvolved as a fluorescent image (the image can be inverted once again afterward); 3) the brightfield system must be well enough Köhlered, so the PSF becomes close to that of a fluorescent system.

FAQ: Deconvolution artifacts

To confidently use deconvolution first requires an understanding of what can potentially go wrong. This section covers the origins and appearance of the most common deconvolution artifacts, as well as how to avoid them.

Where do deconvolution artifacts come from?

1. Erroneous raw data
Many mishaps can happen during image acquisition. From the instrument side, optical misalignment, light source flickering (mercury lamps, metal halide lamps, lasers), and detector artifacts (camera dead pixels or electromagnetic interference of the PMT) impact image quality. From the sample side, photobleaching during z-stack acquisition, use of non-standard cover glass, sample movement, and high background signal (resulting for example from tissue autofluorescence, fluorescent signals in the cell culture or immersion medium, or environmental stray light) also can influence quality. And from the imaging side, incorrectly recorded metadata (e.g., objective NA or emission wavelength), poor lateral/axial sampling, and detector over-saturation can skew the data collected.

2. Incorrect deconvolution settings
The PSF setting is critical to deconvolution performance, especially for the iterative-based algorithms where the same PSF is repeatedly applied. Incorrect PSF settings lead to strong artifacts; for a theoretical PSF, this means wrong metadata input. Most software assumes the same refractive indices for embedding and immersion media, so you need to change them manually when, for example, using oil objectives on a water-embedded sample while working with the theoretical PSF.

For a measured PSF, you will need to use a PSF acquired with imaging conditions matching your sample's conditions (objectives, wavelength, refractive indices of embedding and immersion media, and pinhole size). The regularization parameter deals with noise: generally, the higher the noise level, the higher the order of regularization required.

3. Overprocessing
Overprocessing is common in deconvolution, particularly when maximum resolution improvement is the goal. It usually happens when a very high restoration "strength" has been selected, or too many iterations have occurred while using an iterative-based method, or both. Raw data with low dynamic range and poor SNR is more likely to show overprocessing artifacts.
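The bead-averaging step described under "Evaluate, process, and save your measured PSF" reduces noise by summing many aligned bead crops. A minimal sketch (ZEN's wizard additionally handles sub-pixel alignment and bead rejection; here the bead centers are assumed to be given exactly):

```python
import numpy as np

def average_beads(image, centers, size):
    """Average size-by-size crops around each bead center into one
    unit-sum PSF estimate (size must be odd)."""
    half = size // 2
    acc = np.zeros((size, size), dtype=float)
    for cy, cx in centers:
        acc += image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return acc / acc.sum()
```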
What do deconvolution artifacts look like?

1. Disappearing or inaccurate structures
It is common to lose delicate and dim structures during deconvolution. This is most likely caused by the noise reduction of the regularization filter or by excessive use of the smoothing filter. Using a measured PSF with a high noise level can also lead to the disappearance of small structures. On the other hand, deconvolution can enlarge bright structures beyond their physical size, especially if the pixels are saturated. Additional patterns can also be generated in the background of raw data with poor dynamic range and high background noise, especially when an inverse filter-based method is used. See Figure 17 for examples of a few deconvolution artifacts.

2. Axial distortion
Axial distortion is usually caused by strong spherical aberration and is not a deconvolution artifact as such. It should already be visible in the raw data, but it becomes more prominent after deconvolution because the SNR and contrast are improved. Axial distortion can be corrected using an advanced depth-variance algorithm. See Figure 3 for details.

3. Salt-and-pepper noise
Remnant salt-and-pepper noise can still be visible after deconvolution when the regularization filter is set too low.

4. Sheet or stripe patterns above and below the sample
Deconvolution might enhance a horizontal or vertical line pattern caused by a dead camera pixel or by an extremely bright structure residing near the acquired volume. Strong spherical aberration in the raw data, or use of a wrong measured PSF, can lead to a sheet of light above and below the sample. See Figure 3 for an example.

5. Ringing artifact
A ringing artifact is the appearance of one or multiple ripple patterns around bright structures in the deconvolved image. It occurs mostly with neighbor-based or inverse filter-based methods; changing to an iterative method should resolve it.

Figure 17 Examples of deconvolution artifacts. A) Raw data of mitochondrial structures acquired with a widefield fluorescence microscope. B) A good deconvolution result obtained with the constrained iterative method. C) A poor deconvolution result showing invented structures in the background. D) A poor deconvolution result showing enhanced salt-and-pepper noise and disappearance of structures.

[Figure: labels "Real shape", "Result after 500 iterations", "Line distance".]

How to avoid or compensate for deconvolution artifacts?

1. Examine the raw data
By carefully adjusting the brightness and contrast of the raw image and displaying the 3D data set in an orthogonal view (XY, XZ and YZ projections viewed simultaneously; see Figure 2), many imaging problems, such as spherical aberration or photobleaching, can be identified. This information guides the choice of proper settings or corrections.
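The orthogonal-view inspection of the raw data can be approximated outside ZEN with simple projections of the 3D stack. A minimal NumPy sketch, assuming a (z, y, x) array; the function names are ours, and max/mean projections are only a rough stand-in for a true orthogonal view:

```python
import numpy as np

def orthogonal_projections(stack):
    """Maximum-intensity projections of a (z, y, x) stack, a quick
    stand-in for an orthogonal (XY / XZ / YZ) view."""
    return {
        "xy": stack.max(axis=0),  # looking along z
        "xz": stack.max(axis=1),  # looking along y
        "yz": stack.max(axis=2),  # looking along x
    }

def bleaching_profile(stack):
    """Mean intensity per z-plane; a steady drop across planes
    acquired in order hints at photobleaching."""
    return stack.mean(axis=(1, 2))
```

A steadily falling `bleaching_profile` across planes acquired in z-order suggests photobleaching, while a strongly asymmetric bead image in the XZ view suggests spherical aberration.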
3. Compare the raw data with the deconvolved image
Deconvolution can improve resolution and SNR, but it should not create, change, or remove structures present in the raw image. It is always advisable to keep the raw data and to compare the deconvolved and raw images in a synchronized view, such as the split-view function in ZEN.

4. Compare different deconvolution algorithms or settings
When a suspected deconvolution artifact is not visually present in the raw data, try a different deconvolution algorithm, for example switching from a constrained iterative method to a regularized inverse filter method. Different algorithms can affect a particular structure or the background differently, and a careful comparison of the results might reveal the artifact. Reducing the restoration strength or the number of iterations can also lead to fewer artifacts.

Closing remarks
Deconvolution has become an important asset for almost any microscope imaging modality, be it camera- or point-detector-based. However, the phrase "garbage in, garbage out" applies to DCV as well, so you must make sure that your raw data are of sufficient quality before starting. Then the question of whether iterative algorithms produce quantitative results can be answered "yes" with a good conscience. In addition, DCV not only plays to its strengths in removing blur and increasing the SNR, but also provides a significant increase in resolution. As algorithms continue to be refined and constraints become better matched to the sample structures, we can expect DCV to play an ever-increasing role in image processing.
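The advice to compare raw and deconvolved data can also be made quantitative with a couple of simple checks. The sketch below uses our own illustrative heuristics, not any ZEN function: the Pearson correlation between the two images, plus the fraction of "bright" deconvolved pixels that sit on raw-image background, a rough flag for structures invented by the algorithm.

```python
import numpy as np

def compare_images(raw, deconvolved, background_quantile=0.5, factor=5.0):
    """Sanity checks for a deconvolution result.

    Returns the Pearson correlation between the raw and deconvolved
    images, and the fraction of pixels that are bright in the
    deconvolved image although the raw image is at background level
    there -- a rough flag for invented structures."""
    r = np.corrcoef(raw.ravel(), deconvolved.ravel())[0, 1]
    background = raw <= np.quantile(raw, background_quantile)
    bright = deconvolved > factor * np.median(deconvolved)
    invented = float(np.logical_and(background, bright).mean())
    return r, invented
```

The thresholds (median-based brightness, 50 % background quantile) are arbitrary illustration values and would need tuning for each data set; they are no substitute for inspecting a synchronized split view.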
Carl Zeiss Microscopy GmbH
07745 Jena, Germany
[email protected]
www.zeiss.com/microscopy

Not for therapeutic use, treatment or medical diagnostic evidence. Not all products are available in every country. Contact your local ZEISS representative for more information.
EN_41_013_266 | CZ 12-2021 | Design, scope of delivery and technical progress subject to change without notice. | © Carl Zeiss Microscopy GmbH