
By: Albaraa Falodah, Dirk Floris Zwarts, and Ellenor Geraffy

Supervision by: Pegah Asgari, Dr. Gerhard Blab

Image Formation
Without a Lens

Application of Light and Electron Microscopy


1 February 2019
Abstract

A lensless camera, based on a design by Prof. Laura Waller's group, was built and used to capture single-shot images of three-dimensional (3D) objects. The setup consists of an image sensor with a diffuser placed in front of it. After a calibration step, a single shot of the object is taken and the image is reconstructed computationally. While the spatial resolution of this method is still quite poor, it proves to be a useful and quick way to capture 3D images once the calibration is done, offering a more time-efficient route to 3D imaging with a less bulky setup.

Introduction

Today, the majority of optical sensors are two-dimensional (2D). Imaging in three dimensions (3D) is therefore mainly performed by reconstructing objects scanned from several angles. While this approach achieves 3D imaging with high spatial resolution, the capture is time-consuming.1,2 Attempts have been made to overcome this limitation with single-shot 3D methods. In theory, lensless cameras show great promise compared to traditional cameras due to their small form factors, which should enhance their portability. In practice, however, the resolution of lensless imaging was significantly lower,3,4 and the equipment required was often bulky and complicated to set up. In this project, we test a replication of the compact, lensless 3D imaging optical system, the DiffuserCam, designed by Prof. Laura Waller's group in Berkeley.5

The setup used in this experiment consists of a Raspberry Pi mini-computer, a lensless camera, and a diffuser with an aperture placed in front of the camera's image sensor, later referred to as the (Pi) DiffuserCam. In the absence of a lens, the diffuser allows us to use scattered light to our advantage, enabling reconstruction of a 2D image or a 3D object, with the prospect of enabling full 3D capture in the future. Instead of trying to suppress unwanted scattering, as typical cameras do, the diffuser uses scattering to project the object in question without causing too much disruption (absorption or strong scattering) to the light.6 It is a thin, smooth material (e.g. transparent tape) that refracts light randomly and produces high-contrast caustic patterns under illumination from a light source.5 The (Pi) DiffuserCam collects a single 2D image of the desired object, which is then reconstructed using an algorithm in Python 3. A calibration measurement is also necessary to characterise how the chosen diffuser scatters light. Figure 1 shows a model of the (Pi) DiffuserCam setup, along with the steps needed to obtain a 2D reconstruction of the measured object.

The aim of this experiment is first to produce a 2D reconstruction of a 2D image on a mobile phone (with the phone screen acting as a self-illuminating object), followed by imaging a 3D object with good resolution. This was carried out by testing different diffuser materials to scatter the light illuminating the objects, by using different methods to prevent stray light (not coming from the object) from entering the sensor, and by illuminating the 3D object with different light sources.

Theory
Diffuser
A diffuser is a material that diffuses or scatters light passing through it. Most diffusers are made from translucent materials such as ground glass, opal glass or Teflon. The diffuser functions as a phase mask, in our case a transparent phase object with a smoothly varying thickness. When light from a temporally incoherent point source is shone through the diffuser onto a sensor, a high-frequency pseudorandom caustic pattern is observed. These caustic patterns, termed point spread functions (PSFs), vary with the 3D position of the source, thereby encoding 3D information. It should also be noted that, in contrast to a traditional camera, a lensless camera maps a point in the scene to many points on the sensor, therefore requiring computational reconstruction.6
A way to visualize the encoding of 3D information is via the peaks that arise from the
bunching of rays under regions where the local diffuser curvature causes it to act like a
positive lens (Figure 2). The intensity peaks induced by the combination of refraction and
propagation, known as caustics, create an intensity pattern at the sensor that is directly
related to the local structure of the mask surface. Intuitively, these peaks will be located
under the strongly convex regions of the diffuser. Changing the incident illumination angle
leads to a linear shift of the caustic pattern. Thus, the intensity pattern formed by light
striking any part of the diffuser is uniquely determined by the incident angle and the local
diffuser structure.​7
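To make the ray-bunching picture concrete, the short simulation below (our own illustration, not part of the original experiment; all parameters are arbitrary assumptions) refracts a bundle of parallel rays through a smoothly varying random surface and accumulates them on a virtual sensor. The bright peaks in the resulting histogram are exactly the caustics described above, located under the convex regions of the surface:

    # Caustic-formation sketch: parallel rays through a smooth random surface.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    N = 512                      # rays per axis / sensor resolution (arbitrary)
    z = 2000.0                   # propagation distance to the sensor (arbitrary)

    # Smoothly varying random height map, like tape: low-pass filtered noise.
    height = gaussian_filter(rng.standard_normal((N, N)), sigma=12)

    # Refraction deflects each ray roughly in proportion to the local slope.
    gy, gx = np.gradient(height)
    x0, y0 = np.meshgrid(np.arange(N), np.arange(N))
    xs = x0 + z * gx             # ray landing positions on the sensor plane
    ys = y0 + z * gy

    # Accumulate rays into pixels: bunched rays show up as bright caustics.
    pattern, _, _ = np.histogram2d(ys.ravel(), xs.ravel(),
                                   bins=N, range=[[0, N], [0, N]])
    print("peak/mean intensity ratio:", pattern.max() / pattern.mean())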

[Figure from ref. 7]
The PSF is depth dependent: a smaller pattern forms when the light source is moved away from the sensor along the x-axis, as represented in Figure 3. Similarly, the PSF shows lateral dependence: shifting the light source along the y-axis causes the pattern to shift laterally as well.5,6 Each point in 3D space therefore has a unique pattern, both in size and in position. If we assume every point to be mutually incoherent, the measurement (the raw data) can be seen as a linear combination of the PSFs of all positions in 3D space. This can be represented as a matrix-vector multiplication

b = Hv

where b is the vector containing the measured 2D data, H is the matrix whose columns are the PSFs of all positions, and v is the vector representing the intensity of every point of the object in 3D space. The provided algorithm solves the inverse problem posed by this equation (see the paper for more detailed information).6
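As a concrete illustration of this linear model (a minimal sketch under simplifying assumptions, not the more elaborate solver from the authors' tutorial8), consider a 2D scene at a single depth. H then acts as a convolution with the calibration PSF, and v can be recovered by projected gradient descent on ||Hv - b||^2 with a non-negativity constraint:

    # Single-depth sketch of b = Hv, with H = convolution with the PSF.
    import numpy as np

    def reconstruct(b, psf, n_iter=200, step=0.9):
        """Projected gradient descent on ||Hv - b||^2 subject to v >= 0."""
        psf = psf / psf.sum()                 # normalised calibration PSF
        PSF = np.fft.fft2(psf)                # H: multiply by PSF spectrum
        v = np.zeros_like(b, dtype=float)
        for _ in range(n_iter):
            Hv = np.real(np.fft.ifft2(np.fft.fft2(v) * PSF))
            residual = Hv - b
            # Adjoint H^T: multiply by the conjugate spectrum.
            grad = np.real(np.fft.ifft2(np.fft.fft2(residual) * np.conj(PSF)))
            v = np.maximum(v - step * grad, 0.0)   # gradient step + projection
        return v

With the (non-negative) PSF normalised to unit sum, its spectrum satisfies |PSF(f)| <= 1, so the step size 0.9 stays below the convergence limit of this quadratic problem.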

Methods
The camera setup for our (Pi) DiffuserCam consisted of an image sensor and a diffuser with an aperture. The camera was connected to a Raspberry Pi computer that had already been set up according to the tutorial by the authors.8 As diffusers, we used materials of varying transparency (i.e. different types of tape) and thickness (i.e. tape folded over itself a number of times). It was important that the diffuser was transparent enough to scatter the light without disrupting the resulting image too much. Figure 4 shows the different diffuser materials used in this project, along with the objects being imaged.

An aperture was created on the diffuser using an opaque material (i.e. black tape) to block stray light from entering the system. It was constructed during a live reading of the image sensor to ensure that the PSF stays within the sensor's field of view. The aperture was tested by shining a bright light source on it to check whether any light could get through. For the same reason, and to obtain the best possible measurements, a long black tube was also placed between the diffuser and the object/light source (see Figure 5).

To obtain the PSF, the light source needs to be point-like; otherwise, blurred lines in the PSF measurement degrade the resolution of the reconstruction. For the calibration measurements, the torch of a mobile phone was sufficient. To obtain images of good resolution, the light source should use the full dynamic range of the image sensor without saturating it.6 The aim was therefore that no pixel reaches its maximum value while the brightest pixels still come close to it; in this way no brightness information is lost and a more precise reconstruction of the object is achieved. This can be accomplished either by using a tunable voltage source, as Prof. Laura Waller's group does, or by varying the exposure time of the sensor, as we do in this experiment.
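A minimal sketch of how this exposure tuning could be scripted on the Raspberry Pi (in practice we adjusted the exposure by hand on the live view; the library is the standard picamera package, and the starting values and thresholds below are illustrative assumptions):

    # Step the shutter speed down until no pixel saturates, leaving the
    # brightest pixels just below full scale. Illustrative values throughout.
    import time
    import picamera
    from picamera.array import PiRGBArray

    with picamera.PiCamera() as camera:
        camera.iso = 100
        time.sleep(2)                      # let the sensor gains settle
        camera.exposure_mode = 'off'       # freeze automatic exposure
        shutter_us = 50000                 # starting exposure in microseconds
        while shutter_us > 100:
            camera.shutter_speed = shutter_us
            with PiRGBArray(camera) as output:
                camera.capture(output, format='rgb')
                peak = int(output.array.max())   # brightest pixel, 0..255
            if peak < 255:                 # no saturated pixels: stop here
                break
            shutter_us = int(shutter_us * 0.8)
        print('shutter:', shutter_us, 'us; peak pixel value:', peak)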

Because the diffuser scatters the light, the images must be reconstructed with an algorithm based on the PSF of the diffuser. This was the most crucial part of the experiment. The focal distance at which the PSF has the sharpest features had to be found; this is also the distance at which the 2D image or 3D object should be placed after the calibration measurement. Using a light source (i.e. the phone torch), the PSF was recorded at the distance where the caustic pattern was sharpest and showed the highest contrast.7
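One standard way to quantify that sharpness (our own illustration; in the experiment this was judged by eye on the live view) is the variance of the Laplacian of each candidate frame, which is largest when the caustic edges are crisp:

    # Focus metric sketch: variance of the Laplacian as a sharpness score.
    import numpy as np
    from scipy.ndimage import laplace

    def sharpness(frame):
        """Large when edges/caustics are crisp, small when they are blurred."""
        return laplace(frame.astype(float)).var()

    # 'frames' maps candidate source distances (hypothetical values, in cm)
    # to grayscale PSF captures; pick the distance with the sharpest frame.
    rng = np.random.default_rng(1)
    frames = {d: rng.random((64, 64)) for d in (60, 70, 80, 90)}
    best = max(frames, key=lambda d: sharpness(frames[d]))
    print('sharpest PSF at ~', best, 'cm')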

Due to the depth and lateral dependence of the PSF (Figure 3), shifts in the live reading of the sensor were observed when the light source was moved. If the shifts are too large, part of the pattern no longer illuminates the sensor. The sharpness of the PSF also depended strongly on the diffuser material and the attached aperture. Without an aperture on the tape diffuser, movement of the light source produced entirely new caustic patterns on the sensor, so a single calibration image would not have been enough.
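If the shift behaviour described above holds, the translation between two such live frames can be estimated directly with FFT-based cross-correlation (again an illustrative sketch, not a step of the original procedure):

    # Estimate the (dy, dx) translation between two PSF frames.
    import numpy as np

    def estimate_shift(ref, moved):
        """Return (dy, dx) such that rolling 'ref' by it best matches 'moved'."""
        xcorr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref)))
        peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
        # Indices past the halfway point correspond to negative shifts.
        return tuple(p - s if p > s // 2 else p for p, s in zip(peak, ref.shape))

    # Self-check with a synthetic pattern shifted by a known amount:
    rng = np.random.default_rng(2)
    ref = rng.random((128, 128))
    moved = np.roll(ref, (5, -9), axis=(0, 1))
    print(estimate_shift(ref, moved))   # expected: (5, -9)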

Once the PSF pattern was recorded, imaging of sample objects started (Figure 4). First, an image of a 2D picture on a mobile phone screen was captured and reconstructed. Then, 3D objects were imaged while illuminated with different light sources (desk lamps, phone torch). All images were recorded with the room lights off, to minimise stray light and therefore noise. The image sensor provides a flat surface on which the light from the 2D image or 3D object is collected; the recorded data is then stored and reconstructed computationally. The reconstruction algorithm was run using Python 3.

Although the PSF calibration and all imaging should be performed at the same distance, this was not strictly followed, for practical reasons (better lighting of the objects, and so that they appear larger with less background). This is discussed in the Results and Discussion section. The distances used are summarised in Table 1 below.

Table 1. Distances between sensor and objects for calibration and imaging.

  Object               Distance from sensor   Notes
  Diffuser             2 mm                   Adhered directly in place of the removed lens
  Light source         ~80 cm                 Approximate values: for each object the
  2D image on screen   ~17 cm                 distance was varied while the camera was on
  3D objects           8-15 cm                "live view" until the frame appeared to be filled

Results and Discussion


For clarity, the results of calibration and imaging are presented separately. Some unsuccessful imaging attempts are additionally presented in the appendix.

Calibration
Figure 6 below shows the caustic patterns of the point spread functions (PSFs) successfully obtained for two diffusers (transparent tape and Scotch® double-sided tape). While both PSFs were obtained with the light source at the same distance, this was done on different days and with a different aperture, which explains the difference in size, shape and position between the two, following the lateral and depth dependence explained earlier. The difference in brightness is caused by partially covering the light source to avoid overexposure.

Figure 7 below shows results from diffusers that scatter too much light, resulting in the absence of the desired well-defined caustic pattern. The translucent tape was the first to be tested (Figure 4a, leftmost tape). The possibility that the translucent tape was instead not scattering enough was ruled out by stacking multiple layers of tape, which did not improve the results. In contrast, when transparent tapes were used, a clear caustic pattern was observed, as shown in Figure 6.

Imaging
The results of imaging the spiral test image (Figure 4b), as suggested by the authors,8 before and after reconstruction are shown in Figure 8 for the two transparent diffusers. The reconstruction using the Scotch® double-sided tape shows a higher resolution and sharper features. This is assumed to be the same tape reported by the authors.6 It was suggested by Dr. Blab that the adhesive, being on both sides of the tape, may contribute to better scattering. The type of polymer used in each tape may also play a large role. While both images were taken at the same distance, the difference in size may be attributed to the different focal length, and hence different magnification, of each diffuser. Further analysis of the optical properties of different tapes may shed light on what makes a good diffuser for this technique.

The superiority of the Scotch® double-sided tape over the other diffusers tested is further shown in Figure 9. While the reconstruction is not great, the resemblance is much clearer than in the image made with the other tape. The biggest challenge in imaging 3D objects was lighting the object uniformly (i.e. without shadowing) while neither obstructing the camera nor introducing stray light or too much light. Furthermore, the technique tested is limited to black-and-white imaging, which apparently amplifies the negative effect of shadowing. This is evidenced by the face (Figure 9, right) being completely hidden even though it was not heavily shadowed, as the neck is evidently well lit. This argument rests on the assumption that colour would help the reconstruction (given the right algorithm), whereas with intensity alone the black background becomes more similar to shaded areas.
Furthermore, the decreased resolution could be a result of imaging the 3D objects at a different position than that at which the PSF was recorded (Table 1). This shift can place the object outside the appropriate focal distance of the diffuser. Imaging at a different position than the calibration also results in mismatches between the recorded PSF and the actual scattering during imaging; since the reconstruction algorithm relies on the calibration PSF, these mismatches can lower the resolution.

Figure 10 shows reconstructed images of a 'thumbs-up' attempt. Although the hand was positioned at what appeared to be a sufficient distance on "live view", the cropping effect of the tube is evident after reconstruction. The top-left image appears to show knuckles and part of the fingers; the top-right image appears to show knuckles and part of the palm. Still, the images are poor and the thumb is absent. The bottom-right image shows the best resemblance. These results provide further evidence that the Scotch® tape works better, that the imaging distance matters, and that shadowing is detrimental. Figure 11 further illustrates the problem shadowing poses to the algorithm: the image after the zeroth iteration looks more like a fist than the image after the fifth iteration, where details and proper contrast are largely lost.

Figure 12 shows an attempt to replicate the successful reconstruction of the spiral drawing using a different image. The result is much worse than in Figure 8. It was suggested that, since the algorithm uses the Fourier transform to look for patterns, the presence of a repeating pattern in the object itself might have thrown the algorithm off. Imaging patterned objects would then be a limitation of this method (see also Fig. S3 in the appendix).

Conclusion and Outlook
We found Scotch® double-sided tape and transparent tape to work properly as diffusers. We succeeded in obtaining a 2D reconstruction of 2D images shown on a mobile phone screen. We also obtained some results with 3D objects, although the resolution is poor due to non-uniform lighting of the objects. The lighting should be much improved, for example by placing LED lights around the tube (Figure 5) to illuminate the objects uniformly without introducing stray light to the sensor.
Due to the limited time available, several parameters and experiments, e.g. the angle dependence of the illumination source, multiple layers of tape (transparent or Scotch®), 3D reconstruction, and replication of the experiments, were not explored; these would help in both improving and analysing the data. Furthermore, the resolution could be improved by using a tunable light source to measure the PSF at the optimal distance from the sensor and performing the imaging at the same distance without overexposing the sensor.
Considering the limited time, however, we did a good job in getting the system to work.

References

1. Denk, W., Strickler, J. H. & Webb, W. W. Two-photon laser scanning fluorescence microscopy. Science 248, 73-76 (1990).
2. Holekamp, T. F., Turaga, D. & Holy, T. E. Fast three-dimensional fluorescence imaging of activity in neural populations by objective-coupled planar illumination microscopy. Neuron 57, 661-672 (2008).
3. Broxton, M. et al. Wave optics theory and 3-D deconvolution for the light field microscope. Opt. Express 21, 25418 (2013).
4. Pégard, N. C. et al. Compressive light-field microscopy for 3D neural activity recording. Optica 3, 517 (2016).
5. Kuo, G., Antipa, N., Ng, R. & Waller, L. DiffuserCam: Diffuser-Based Lensless Cameras. In Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP), paper CTu3B.2 (2017). doi:10.1364/COSI.2017.CTu3B.2
6. Parthasarathy, S., Biscarrat, C., Antipa, N., Kuo, G. & Waller, L. How to build a (Pi) DiffuserCam (2018).
7. Antipa, N., Necula, S., Ng, R. & Waller, L. Single-shot diffuser-encoded light field imaging. In 2016 IEEE International Conference on Computational Photography (ICCP 2016) (2016). doi:10.1109/ICCPHOT.2016.7492880
8. Laura Waller Lab. DiffuserCam tutorial. GitHub, https://github.com/Waller-Lab/DiffuserCam-Tutorial (accessed January 9, 2019).

Appendix

Fig. S1: Spiral test image with Parafilm® as diffuser. Fifth (left) and tenth (right) iteration of
reconstruction. No difference is observed.

Fig. S2: Reconstructed image (5th iteration) of hippo (see Fig. 4(b))

Fig. S3: Reconstructed images of dice (see Fig. 4(b)) after the 0th (left) and 5th (right) iterations.

