
Real-Time Distance Measurement Using a Modified Camera

Liu Xiaoming, Qin Tian, Chen Wanchun
School of Astronautics
Beihang University
Beijing 100191, P. R. China
[email protected]

Yin Xingliang
China Aerospace Science & Industry Corp
Beijing 100830, P. R. China

Abstract—In this paper we propose a real-time method that measures distance using a modified camera. The camera's image sensor is inclined by a certain angle, so the image projected onto the sensor is defocused by different amounts in different areas. The area where the image is focused best lies exactly on the projection plane, so the position of the projection plane can be obtained by finding this area. The distance between the projection plane and the lens of the camera is the image distance, and the object distance is then obtained by applying the image distance to the lens formula. The method is shown to be effective, and a single distance measurement can be performed within 3.21 ms.

Keywords—distance measurement; definition function; monocular camera; lens formula

I. INTRODUCTION

It is often said that distance information is lost during the process of image formation. In fact, however, according to the lens formula, the object distance can be obtained if the focal length of the lens and the image distance are known [1]. Generally, the focal length of the lens is given, so the key point of distance measurement is how to obtain the image distance. Here, the image distance is the distance between the focused image and the lens center along the optical axis of the lens, while the object distance is the distance between the object and the lens center along the same axis.

Several methods for calculating the image distance have been proposed. In [2], the image sensor is movable: fix the lens and the object, move the image sensor until the focused image appears on the sensor, record the image distance, and calculate the object distance using the lens formula. In [3], a method using two images is given: two images are taken at two fixed points along the optical axis, and the image distance is calculated from the ratio between the sizes of an object projected onto the two images. In [1], the camera has one lens but two image sensors; a half-reflecting mirror divides the light from the lens into two parts, and each sensor receives one part and produces one image. These two images are identical except for the aperture size and therefore the depth of field, and the image distance is calculated from this difference, giving the object distance. Another method is proposed in [4]. All of these methods, however, take considerable time to perform a single measurement, because they have to move cameras or process two or more images.

In contrast, the method described here requires only one image from a monocular camera, and the camera does not have to be moved. We incline the image sensor by a certain angle, so the image projected onto the sensor is defocused by different amounts in different areas. In other words, the definition of the image is a function of the distance from the pixels on the sensor to the lens plane, and this function reaches its maximum in the area where the image is focused best. Once the maximum of the function is found, the image plane is located as well, and the image distance follows immediately. Applying the image distance to the lens formula, we can calculate the object distance.

This paper is organized as follows. The principle of measurement is introduced in Section II. A simulation is presented in Section III. The performance of the hardware is tested in Section IV. Finally, conclusions are drawn in Section V.

II. PRINCIPLE OF MEASUREMENT

This section discusses the principle of the measurement method. As shown in Fig. 1, a lens is located at point O. An object at point U is projected by the lens and generates an image plane at M. An image sensor is located at point N on the optical axis of the lens and is inclined by an angle θ. The sensor and the image plane intersect at point C.

Figure 1. Skew sensor



The image projected on the sensor is clear only along the horizontal line that passes through point C; all other parts of the image are blurred because they are defocused. If the length of NC can be calculated, the image distance can be expressed as

OM = ON − NC·sinθ    (1)

According to the lens formula

1/OM + 1/OU = 1/f    (2)

we can obtain the object distance. Here, f is the focal length of the lens. Substituting (1) into (2) yields

OU = f·(ON − NC·sinθ) / (ON − NC·sinθ − f)    (3)

The length NC is now the only unknown parameter, so we discuss how to calculate it in the following text.

Note that N lies on the middle horizontal line of the sensor and C lies on the clearest horizontal line of the image obtained from the sensor. If the definition of every line is calculated, the line with the largest definition is exactly C. Let us first give an overview of definition functions.

A definition function describes the degree of clarity of an image. A clear image has sharper edges and more obvious contrast than a blurred one, so its details can be distinguished well. Based on this fact, we can construct a definition function that increases as the image becomes clearer. An ideal definition function should be unbiased and single-peaked, should reflect the defocusing polarity, and should have a sufficient signal-to-noise ratio (SNR); at the same time, the algorithm should be simple, fast and memory-efficient. The most common definition functions are the Tenengrad function, the SMD (Sum-Modulus-Difference) function, the VAR (variance) function and the FSWM (Frequency-Selective Weighted Median) filter function [5]. Here we take the SMD function as an example.

SMD was introduced by Jarvis. Its arithmetic is very simple: it just sums the first-order grey differences of adjacent pixels in the horizontal and vertical directions:

SMDx = ∑x ∑y |f(x, y) − f(x, y − 1)|
SMDy = ∑x ∑y |f(x, y) − f(x + 1, y)|    (4)

where f(x, y) is the grey value of the image pixel at point (x, y).

Let f(m) be the SMD function of the m-th row. If the resolution of the sensor is 600×800, f(m) can be written as

f(m) = SMDx + SMDy    (x = m, 1 < y ≤ 800)    (5)

Suppose f(m) reaches its maximum at m = mₐ. Then

NC = ((mₐ − 300)/600)·h    (6)

where h is the physical height of the sensor. Combining (6) and (3), the object distance can be calculated:

OU = f·(ON − ((mₐ − 300)/600)·h·sinθ) / (ON − ((mₐ − 300)/600)·h·sinθ − f)    (7)

The image distance range covered by the sensor is

(ON − (h/2)·sinθ, ON + (h/2)·sinθ)    (8)

and the corresponding object distance range is

( f·(ON + (h/2)·sinθ) / (ON + (h/2)·sinθ − f),  f·(ON − (h/2)·sinθ) / (ON − (h/2)·sinθ − f) )    (9)

Obviously, the measurement range of a camera is determined by the focal length f, the distance ON from the middle line of the sensor to the center of the lens, the skew angle θ of the sensor, and the height h of the sensor.

III. SIMULATION

To verify the validity of the method, we run a simulation with an image named Water lilies.jpg from the Windows photo gallery.

First, the image is converted to a grey image, as shown in Fig. 2. Suppose that this grey image is the sharp image formed at point M after focusing. What, then, will the image projected on the sensor look like? Suppose first that the sensor is not inclined but is kept a small distance from the image plane, as shown in Fig. 3.

A point A′ on the focus plane will disperse and form a bright dot on the sensor whose diameter is A1A2, where

A1A2 = (NM/MO)·2R    (10)

and R is the radius of the lens.

Generally, the object distance is much larger than the lens diameter, so the light intensity from a point A on the object to the lens can be considered constant; in other words, point A irradiates the lens uniformly. Consequently A′ irradiates the sensor uniformly too, i.e., the light intensity of the bright dot is uniform. Now consider the formation of a point B on the sensor, as shown in Fig. 4.
Figure 2. Water lilies

Figure 3. Dispersion of a focused point

Figure 4. Formation of point on sensor
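To make the procedure of (4)-(7) concrete, here is a minimal sketch in Python/NumPy; it is our illustration, not the authors' code. It assumes a 600×800 grey image captured from the skewed sensor, indexes rows from 0 so that the middle line of (6) is row 300, and all function and parameter names are placeholders.

```python
import numpy as np

def row_definition(img):
    """Per-row SMD definition value, following (4)-(5): accumulate the
    absolute first-order grey differences of adjacent pixels."""
    img = img.astype(np.float64)
    smd_x = np.abs(np.diff(img, axis=1)).sum(axis=1)   # differences along each row
    smd_y = np.abs(np.diff(img, axis=0)).sum(axis=1)   # differences to the next row
    smd_y = np.append(smd_y, 0.0)                      # bottom row has no row below
    return smd_x + smd_y

def object_distance(img, ON, theta, h, f):
    """Object distance OU per (6)-(7), from the clearest row m_a."""
    m_a = int(np.argmax(row_definition(img)))          # row with maximum definition
    NC = (m_a - 300) / 600.0 * h                       # (6): offset of the clearest line
    OM = ON - NC * np.sin(theta)                       # (1): image distance
    return f * OM / (OM - f)                           # (2)/(3): lens formula

# Usage with the simulation parameters of Section III (mm, radians):
# OU = object_distance(img, ON=26.6, theta=np.radians(30.0), h=3.0, f=25.0)
```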

The light at point B comes from the points inside a circle of diameter B1B2 on the image plane, while every point in that circle illuminates a circle of diameter A1A2, as shown in Fig. 3. Thus the luminance of point B is

IB = I(○(B1B2)) / N(○(A1A2))    (11)

where I(○(B1B2)) is the summed luminance of all the points in the circle B1B2 and N(○(A1A2)) is the number of points in the circle A1A2. Obviously,

B1B2 = (MN/ON)·2R    (12)

where R is the radius of the lens.

Up to now, given the object distance we can obtain the image distance, and given the distance from the sensor to the lens we can obtain the grey image on the sensor.

Now suppose the sensor is inclined. We can regard the sensor as being made up of many horizontal "sensor bars", as shown in Fig. 5. For each sensor bar, the preceding conclusions still hold.

Figure 5. Sensor bars

The sensor bar with the maximum definition lies exactly at the position where the clear image is focused (the image plane), so the image distance is obtained once this position is found. We then calculate the object distance and compare it with the given object distance to test the validity of the method.

Following the preceding discussion, suppose that the lens radius is R = 8 mm, the focal length is f = 25 mm, the sensor height is h = 3 mm, the distance from the sensor center to the lens center is 26.6 mm, the sensor is inclined by 30 degrees, and the number of sensor bars is 30. The image distance range covered by the sensor is

(26.6 − (3/2)·sin 30°, 26.6 + (3/2)·sin 30°) mm,

and the corresponding object distance range is (291, 760) mm.

The object distance is increased from 300 mm, and the measurement result is recorded every 50 mm. The simulation result, obtained with Matlab, is shown in Fig. 6.

Figure 6. Arithmetic simulation
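Before turning to the results, it may help to see how a defocused sensor image can be synthesized under the uniform-disc model of (10)-(12). The following sketch is our reading of that model, not the authors' Matlab code; the pixel_pitch parameter and the use of SciPy's fftconvolve are our assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def disc_kernel(diameter_px):
    """Uniform circular kernel: every point of the blur circle contributes
    equally, matching the uniform bright dot of (10)-(11)."""
    r = max(diameter_px / 2.0, 0.5)
    n = 2 * int(np.ceil(r)) + 1
    y, x = np.mgrid[:n, :n] - n // 2
    k = ((x**2 + y**2) <= r**2).astype(np.float64)
    return k / k.sum()

def defocused_image(sharp, MN, ON, R, pixel_pitch):
    """Blur the focused image for a sensor displaced by MN from the image
    plane: blur-circle diameter B1B2 = (MN/ON)*2R, per (12)."""
    diameter_mm = (MN / ON) * 2.0 * R
    return fftconvolve(sharp, disc_kernel(diameter_mm / pixel_pitch), mode="same")
```

Applying this bar by bar, with MN varying across the inclined sensor, produces the per-bar images whose definition values are compared in the simulation.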


Fig. 6 shows that the measurement results are in agreement with the true distance, so the method is demonstrated to be effective. However, in the eighth test there is a large difference between the given distance and the measurement result. This is because, when the object distance is 650 mm, the focused image falls just at the boundary between two adjacent sensor bars, and it is hard to say which bar's definition is larger; in general this depends on the high-frequency content of the image itself. A basic fact can thus be concluded: the measurement error grows when the number of sensor bars is decreased. On the other hand, when the number of sensor bars is increased, each bar sums over fewer pixels and the definition function depends much more on the image itself: an object covered with spots, even though defocused, may yield a larger definition value than a smooth object, which also introduces error. How to select the number of sensor bars will therefore be an important topic in our future research.

IV. HARDWARE TEST

A USB camera is modified according to this method, as shown in Fig. 7.

Figure 7. Modified camera

The basic parameters are: lens radius R = 8 mm, focal length f = 25 mm, sensor height h = 3 mm, distance from the sensor center to the lens center 27.05 mm, sensor inclined by 45 degrees, number of sensor bars 24, and sensor resolution 240×320. The image distance range covered by the sensor is

(27.05 − (3/2)·sin 45°, 27.05 + (3/2)·sin 45°) mm,

and the corresponding object distance range is (226, 656) mm.

The camera is fixed vertically, facing the floor of our laboratory. The interface, programmed in VC, is shown in Fig. 8.

Figure 8. Program interface

The object distance is increased from 300 mm, and the measurement result is recorded every 50 mm. The hardware test result is shown in Fig. 9.

Figure 9. Hardware test result
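As a quick sanity check, the hardware parameters above can be pushed through (8) and (9) directly; this is a verification snippet of ours, not part of the VC program.

```python
import numpy as np

f, ON, h, theta = 25.0, 27.05, 3.0, np.radians(45.0)   # hardware parameters (mm)

v_near = ON - (h / 2.0) * np.sin(theta)                # (8): ~25.99 mm
v_far  = ON + (h / 2.0) * np.sin(theta)                # (8): ~28.11 mm

OU = lambda v: f * v / (v - f)                         # lens formula (2) solved for OU
print(OU(v_far), OU(v_near))                           # ~226 mm and ~657 mm
```

This agrees with the (226, 656) mm range quoted above up to rounding.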

Fig. 9 verifies the validity of the method. Owing to the simplicity of the definition function, which merely accumulates the differences of adjacent pixels, processing the image takes very little time: in our experiment, one distance measurement is completed in only 3.21 ms on a common computer, i.e., more than 300 measurements per second.

However, as in Fig. 6, there is a large difference between the given distance and the measurement result in the fourth test. The reason has been explained in Section III. An improvement can be made to decrease this error: once the sensor bar with the maximum definition is found, slide the bar up and down by one or several lines until its definition value reaches a peak. The sensor bar with this peak definition value is located exactly at the focused image plane, so the measurement becomes more accurate, although implementing this procedure takes additional time (a sketch of this refinement is given after Section V).

V. CONCLUSIONS

A real-time method for distance measurement using a modified camera has been proposed. The algorithm has been simulated and the hardware has been tested, and both the simulation and the hardware experiment verify its validity. The hardware can be implemented easily at low cost and is characterized by small size, light weight and low power consumption, so it has potential applications in airplane landing, missile cruising and target movement estimation, especially in unmanned aerial vehicle (UAV) areas where size, weight and power requirements are restricted.

The purpose of this paper is only to verify the validity of the proposed method. Precision, robustness and sources of error are not analyzed in depth, and the influence of the image definition function, the number of sensor bars and the sensor resolution is not discussed either. All of these are topics for our future work.
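As an illustration of the refinement described in Section IV, here is a minimal sketch; it assumes per-row definition values as a NumPy array (e.g. from the row_definition helper sketched in Section II), and the names and the simple hill-climb are ours.

```python
def refine_bar(row_def, top, bar_height):
    """Slide a bar-sized window up or down one line at a time until its
    summed definition value peaks, then return the window's centre row.
    row_def: per-row definition values (NumPy array);
    top: top row of the bar that currently has the maximum definition."""
    score = lambda t: row_def[t:t + bar_height].sum()
    for step in (1, -1):                               # try sliding down, then up
        while (0 <= top + step <= len(row_def) - bar_height
               and score(top + step) > score(top)):
            top += step
    return top + bar_height / 2.0
```

The returned centre row can replace mₐ in (6)-(7), trading a few extra line sums for a finer estimate of the image plane.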
REFERENCES

[1] A. Pentland, T. Darrell, M. Turk and W. Huang, "A simple, real-time range camera," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '89), June 1989, pp. 256-261.
[2] N. Isoda, K. Terada, S. Oe and K. Kaida, "Improvement of accuracy for distance measurement by using movable CCD," in Proc. 36th SICE Annual Conference (SICE '97), International Session Papers, July 1997, pp. 1251-1254.
[3] N. Yamaguti, S. Oe and K. Terada, "A method of distance measurement by using monocular camera," in Proc. 36th SICE Annual Conference (SICE '97), International Session Papers, July 1997, pp. 1255-1260.
[4] S. K. Nayar, M. Watanabe and M. Noguchi, "Real-time focus range sensor," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 12, pp. 1186-1198, Dec. 1996.
[5] Zhang Jian-min and Cheng Hong-tao, "Auto-focusing technique used in CCD digital imaging," Journal of Tianjin University of Technology and Education, vol. 17, no. 2, June 2007 (in Chinese).
