A Practical Method For Measuring The Spatial Frequency Response of Light Field Cameras

ABSTRACT

The spatial frequency response (SFR) is one of the most important and unbiased image quality measures of a digital camera. The novelty of light field cameras resides in their ability to capture both spatial and angular information of the incoming light field thanks to an array of microlenses located in front of the sensor. Existing methods for measuring the SFR of conventional cameras are thus no longer applicable, as the interaction between the main lens and the microlenses results in different resolving powers over the image plane that depend on the scene depths. By using a 3-dimensional target made of vertical lines printed on an inclined planar surface, we are able to measure the SFR across multiple depths in a single exposure. Our method allows SFR measurements from the raw light field itself as captured by the camera, and is thus independent of subsequent post-processing algorithms such as image reconstruction, digital refocusing or depth estimation. Our experimental results are consistent with theoretical bounds and reproducible.

Fig. 1: The spatial frequency responses (SFRs) of a microlens-based light field camera. The SFR varies according to scene depth, and is lowest at the focal plane. (Axes: modulation versus spatial frequency in cycles/px; the Nyquist limit is at 0.5 cycles/px.)

Index Terms— Spatial frequency response, Modulation transfer function, Plenoptic cameras, Light field, Computational photography

1. INTRODUCTION

Light field cameras, such as the ones developed by Lytro [1], are able to capture a 4D light field from a single exposure. This is achieved by inserting a microlens array between the sensor and the main lens. Each microlens projects a low-resolution microimage on the sensor, which contains directional samples for a single spatial location. From this information, it is possible to retrace the light rays in space and develop new post-processing applications such as single-exposure digital refocusing or depth estimation.

The spatial sampling is determined by the microlens array, whereas the angular sampling is determined by the pixels on the sensor. This implies a trade-off between spatial and angular resolution, as the total number of captured light rays is limited by the sensor resolution. In the case of the first-generation Lytro camera, the sensor has a resolution of 3280 × 3280 pixels, with 330 × 330 microlenses arranged in a hexagonal lattice. The microimages thus have a resolution of approximately 10 × 10 pixels.

Given the early stage of development of light field cameras, not many methods have yet been proposed to assess their intrinsic quality using an objective criterion such as their spatial frequency response (SFR). The SFR, analogous to the modulation transfer function of an optical system, is a resolution measure that reports how well a lens/sensor combination is able to resolve scene details. The SFR describes the modulation at different spatial frequencies, usually expressed in cycles/pixel. The modulation expresses how well the original contrast of the target is reproduced. When the spatial frequencies are normalized, as illustrated in Fig. 1, the Nyquist limit is at 0.5 cycles/pixel.

For conventional cameras, SFR measurement standards exist, such as ISO 12233 [2], which define a planar reference target and an evaluation method. These methods are used not only to assess the quality of a given camera, but also to objectively compare different lens/sensor combinations with each other. They all evaluate the SFR from the acquired raw image, before any post-processing is applied, in order to avoid any modification of the modulation that would be induced by processing the data.
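The spatial/angular trade-off for the first-generation Lytro quoted in the introduction can be checked with a one-line computation (the numbers are taken from the text above):

```python
# Spatial vs. angular resolution trade-off for the first-generation Lytro,
# using the sensor and microlens counts quoted in the introduction.
sensor_px = 3280        # 3280 x 3280 pixel sensor
n_microlenses = 330     # 330 x 330 microlenses

microimage_px = sensor_px / n_microlenses   # pixels behind each microlens, per axis
print(f"{microimage_px:.2f} px per microlens")   # ~9.94, i.e. roughly 10 x 10 microimages
```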
Authorized licensed use limited to: Apple. Downloaded on April 04,2023 at 15:38:17 UTC from IEEE Xplore. Restrictions apply.
Fig. 2: SFR measurement workflow. Each row of the acquired light field corresponds to a different depth on the observed
target. For each row, our method starts by selecting which microimages are valid for line spread function (LSF) estimation. The
SFR is then computed by taking the modulus of the Fourier transform of the average LSF.
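The first stage of this workflow, selecting which microimages are valid for LSF estimation, can be sketched with a numpy-only moment-based Gaussian fit. The fitting procedure, thresholds, and function names below are illustrative choices, not the paper's actual implementation:

```python
import numpy as np

def fit_gaussian_lsf(lsf):
    """Moment-based Gaussian fit of a 1-D line spread function.
    Returns (mu, sigma, relative RMS residual of the fit)."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf - lsf.min()                       # remove baseline
    w = lsf / lsf.sum()                         # treat LSF as a distribution
    x = np.arange(len(lsf))
    mu = (w * x).sum()                          # first moment: edge position
    sigma = np.sqrt((w * (x - mu) ** 2).sum())  # second moment: blur width
    model = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    model *= lsf.max() / model.max()            # match peak height
    rms = np.sqrt(np.mean((model - lsf) ** 2)) / lsf.max()
    return mu, sigma, rms

def is_valid_microimage(lsf, max_rms=0.08, max_sigma=3.0):
    """Keep a microimage only if its LSF is well described by a Gaussian
    of moderate width. Thresholds are illustrative, not from the paper."""
    mu, sigma, rms = fit_gaussian_lsf(lsf)
    return bool(rms < max_rms and sigma < max_sigma)
```

Microimages near the focal point (nearly uniform, no edge) or with poorly fitting profiles fail the test and are excluded from the LSF averaging.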
Light field cameras, however, have two main optical parts in front of the sensor: the main lens and the microlens array. The SFR of such cameras is subject to the interaction between those two and varies according to the depth of the scene. Consequently, the SFR also varies across the acquired image. This particular characteristic makes traditional methods inapplicable. In this paper, we thus propose a method to measure the SFR of a microlens-based light field camera across multiple depths using a 3-dimensional target and a single image acquisition. As seen in Fig. 1, the resolving power varies across the image and is lowest at the focal point [3, 4]. We conducted experiments with the Lytro and obtained results that are consistent with the theoretical bounds. Our method is straightforward to implement and reproducible for different camera parameters.

2. RELATED WORK

In recent literature, researchers have proposed theoretical SFR measures for light field cameras by analyzing the geometry of the captured light field. Ng [1] derived the output resolution of photographs from the theory behind his refocusing algorithm, while Lumsdaine et al. [5] studied the sampling pattern of different microlens-based designs by comparing their theoretical resolution floor. Our method is independent of image rendering algorithms and provides a practical approach to measure the resolving power of the optics/sensor combination.

SFR measurement methods for conventional cameras use different targets, such as the slanted edge [2], the Siemens star [6], or the dead leaves target [7]. The slanted edge measures the edge response of the camera with an oversampled step function. This target is appealing for our application because it can be used even when only a limited number of pixels is available, as is the case for each microimage in light field cameras. The other targets provide more comprehensive measurements but are not suitable for low-resolution images, as they require a larger sensor area. Using those targets with low-resolution images can produce results affected by aliasing.

3. SFR MEASUREMENT

We describe here the proposed workflow to measure the SFR of a microlens-based light field camera from a captured raw light field. The workflow is illustrated in Figure 2 and summarized in Algorithm 1.

Algorithm 1: High-level description of the proposed SFR measurement method.
    Input: Raw light field captured from target
    Output: SFR for each depth of the target
    Pre-processing (linearization, devignetting);
    Derive slanted edge angle using Hough transform;
    for all microimages do
        Fit Gaussian function to line spread function (LSF);
    end
    for all rows of microimages do
        Classify microimages by Gaussian fit;
        Combine LSFs and compute SFR;
    end

3.1. Target setup and pre-processing

The camera setup described in Figure 3 shows how the target is positioned in space with respect to the camera. The target itself is made of a set of black lines printed over a white background on a planar surface. The camera is rotated and tilted with respect to the target so that slanted edges are captured at different depths. Those slanted edges are observed by the microlenses and projected on the sensor as microimages, forming the acquired raw light field. Each row of the light field corresponds to a single depth on the target. The light field is first devignetted by dividing the input image by a normalized calibration image that represents the light attenuation at each pixel due to vignetting. The light field is then preprocessed to linearize the digital output level according to the input luminance using the sensor's opto-electronic conversion function (OECF) [8], similarly to traditional SFR measurement.
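The two pre-processing steps (devignetting by flat-field division, then OECF linearization) might look as follows. The calibration image and the LUT-based inverse OECF are assumed data formats for illustration; the paper does not specify them:

```python
import numpy as np

def preprocess_light_field(raw, white_calib, oecf_lut):
    """Devignet and linearize a raw light field.

    raw         -- raw sensor image (2-D array of digital levels)
    white_calib -- normalized calibration image of a uniform white scene,
                   modelling per-pixel light attenuation due to vignetting
    oecf_lut    -- lookup table mapping digital levels to relative
                   luminance (inverse OECF); an assumed representation
    """
    # Flat-field division removes microlens vignetting.
    devignetted = raw / np.maximum(white_calib, 1e-3)
    # Linearize output levels via the inverse OECF (integer LUT indexing).
    levels = np.clip(devignetted, 0, len(oecf_lut) - 1).astype(int)
    return oecf_lut[levels]
```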
Fig. 3: Target capture setup. (a) Top view. (b) Side view. The camera is tilted vertically and rotated horizontally with respect to the target in order to capture slanted edges at different depths.

Fig. 4: Slanted edges in microimages. (a) At the focal point, the blur is maximal and no edge is present in the microimages. (b) Away from the focal point, multiple microlenses capture a slanted edge for which the SFR is computable.

The SFR of a row r of the light field is computed as the modulus of the Fourier transform of the LSF averaged over the set V_r of valid microimages in that row:

SFR(\omega, r) = \left| \mathcal{F}\!\left\{ \frac{1}{|V_r|} \sum_{i \in V_r} \mathrm{LSF}_i(x) \right\}(\omega) \right| \quad (1)
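Eq. (1) maps directly onto a few lines of numpy. This is a sketch: the FFT length and the DC normalization are illustrative choices, not taken from the paper:

```python
import numpy as np

def sfr_for_row(lsfs, n_fft=128):
    """Eq. (1): SFR of one target row from the line spread functions of
    its valid microimages (the set V_r).  `lsfs` is a sequence of
    equally-sampled 1-D LSFs; returns (frequencies in cycles/px, SFR)."""
    avg_lsf = np.mean(np.stack(lsfs), axis=0)    # (1/|V_r|) * sum of LSF_i
    sfr = np.abs(np.fft.rfft(avg_lsf, n=n_fft))  # modulus of Fourier transform
    sfr /= sfr[0]                                # modulation 1 at DC
    freqs = np.fft.rfftfreq(n_fft)               # 0 ... 0.5 cycles/px (Nyquist)
    return freqs, sfr
```

An ideal impulse LSF yields a flat SFR of 1 up to Nyquist, while a blurred (Gaussian) LSF yields a response that rolls off with frequency, as in Fig. 1.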
Fig. 5: SFR measurement for different focal points. (a) Focal point at row 150. (b) Focal point at row 100. (c) Focal point at row 65. Each panel plots the SFR at 10% and 50% modulation, in cycles/px, against the image row (decreasing depth). As the focal point is shifted further from the camera (i.e. towards the top of the target), the depth of the resolution floor follows its location.
confirming the reproducibility of the proposed measurement method.

5. DISCUSSION
7. REFERENCES

[1] Ren Ng, Digital Light Field Photography, Ph.D. thesis, Stanford University, 2006.

[2] ISO 12233, "ISO 12233:2000: Photography–Electronic Still-Picture Cameras–Resolution Measurements," 2000.

[12] Max Born and Emil Wolf, Principles of Optics, pp. 333–335, Pergamon, Oxford, UK, 6th edition, 1980.

[14] Tom E. Bishop and Paolo Favaro, "The light field camera: Extended depth of field, aliasing, and superresolution," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 972–986, 2012.

[16] Kartik Venkataraman, Dan Lelescu, Jacques Duparré, Andrew McMahon, Gabriel Molina, Priyam Chatterjee, and Robert Mullis, "PiCam: An Ultra-Thin High Performance Monolithic Camera Array," ACM Transactions on Graphics (TOG), vol. 32, no. 6, p. 166, 2013.