
International Journal of Advance Foundation and Research in Computer (IJAFRC)

Volume 2, Issue 11, November - 2015. ISSN 2348 4853, Impact Factor 1.317

360° Video-graphic Panorama Using Image Stitching


Parikshit Sonaikar, Saurabh Salunke, Mayur Naik, Satej Gujarathi, Prof. Vandana Nawale
Computer Department, Dhole Patil College of Engineering (SPPU), Pune

ABSTRACT
This paper presents a new approach to image stitching and object tracking. It aims at creating a full 360° panorama with the help of six cameras, each providing an input image for the panoramic view. Depending on the frame rate, the next set of images is then given as input, so the paper effectively aims at stitching the video captured by the cameras. The next part is to track objects of interest and display their details on the screen. The process and the estimated results of this approach are discussed and explained in the paper.
Keywords: Harris corner detector, RANSAC, SIFT, GIST, feathering algorithm

I. INTRODUCTION

The main aim is to get a full 360° outward panorama of an enclosed area. This can be used for video surveillance and object tracking. Six fixed cameras will be installed and the corresponding captured images will be stitched so as to obtain a whole 360° outward panorama. The basic GUI is planned to be very user friendly. The 360° panorama will be shown at the top of the screen, and a cursor in the form of a reference window will provide a zoomed view. Object tracking will be the next goal of this project. First, the user clicks on a moving object that he wishes to observe. This object is then encircled, and information such as the path traversed and previous positions is shown on the screen.
The first step towards creating a panorama is to take a series of pictures in an enclosed 3D space from one common point in a circular fashion. A problem that can be encountered during this process is the variation of lighting conditions from one viewing direction in the scene to another. Any real scene has a significant difference in luminance levels that can be perceived by the human eye. A typical camera uses 8 bits per color channel to store brightness information, which provides 256 luminance levels, whereas the human eye can distinguish a contrast of about 10,000:1. Hence, traditional stills are usually too bright or too dark in certain areas, which causes some detailed information to be lost.
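As a rough illustration of this limitation, the following Python sketch (not part of the original paper; the exposure gain is an arbitrary assumption) simulates how a single 8-bit exposure clips a scene whose radiance spans a 10,000:1 range:

```python
import numpy as np

# Hypothetical scene radiance spanning a 10,000:1 contrast ratio (arbitrary units).
radiance = np.logspace(0, 4, num=1000)            # values from 1 to 10,000

# An 8-bit channel stores only 256 levels; values outside the exposure window
# are clipped, so detail in shadows or highlights is lost.
exposure_gain = 0.05                              # assumed, arbitrary exposure
pixel = np.clip(radiance * exposure_gain, 0, 255).astype(np.uint8)

clipped = np.count_nonzero(pixel == 255) / pixel.size
print(f"fraction of the scene clipped to pure white: {clipped:.1%}")
```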
II. RELATED WORK
Many existing surveillance systems solve these problems sequentially according to a pipeline. However, recent research shows that some of these problems can be solved jointly, or even skipped, in order to overcome the challenges posed by certain application scenarios. For example, while it is easy to compute the topology of a camera network after the cameras are well calibrated, some approaches compute the topology without camera calibration, because existing calibration methods have various limitations and may not be efficient or accurate enough in certain scenarios.
A. Monoperspective Panoramic Images: With the advent of digital cameras, panoramic imaging became popular with a larger audience. Here, views with a wide angle are produced by stitching together images of a normal 4:3 aspect ratio. Ideally, the images are produced with a tripod-mounted camera. This ensures a fixed focal point, also known as the center of projection. By rotating the camera around its vertical axis, only its viewing direction is altered. This means that the projection of the three-dimensional world onto the CCD chip never changes; the rotation only makes one part of the image disappear while another moves in. Unfortunately, this does not mean that images can be put together by simple concatenation, because the rotation changes the vanishing points within the images. This is most obvious in architectural photos, in which lines that are parallel in the real world converge towards a common vanishing point in the projection.
Prior to stitching two images together, a perspective de-warping of one of them, or preferably of both at the same time, has to be carried out. This process must be applied in a common image space. Mapping the images into such a space is usually done by applying either a tubular or a spherical projection.
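The tubular (cylindrical) mapping mentioned above can be sketched as follows; this is a generic OpenCV/NumPy illustration, not the authors' implementation, and the focal length f is an assumed parameter that must be estimated separately:

```python
import cv2
import numpy as np

def cylindrical_warp(img, f):
    """Map a planar image onto a cylinder of radius f (focal length in pixels).

    Generic sketch of the 'tubular' projection; f is an assumed parameter.
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0

    # For every pixel of the destination (cylindrical) image, compute the
    # corresponding source pixel in the planar image (inverse map for remap).
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    theta = (xs - cx) / f                   # angle around the cylinder axis
    height = (ys - cy) / f                  # normalized height on the cylinder
    x_src = (f * np.tan(theta) + cx).astype(np.float32)
    y_src = (height * f / np.cos(theta) + cy).astype(np.float32)

    return cv2.remap(img, x_src, y_src, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Hypothetical usage: warped = cylindrical_warp(cv2.imread("view0.jpg"), f=700.0)
```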
B. Multi-perspective Panoramic Images: Multi-perspective means that the panoramic image consists of patches which do not have a common projection center but are taken from changing viewpoints. This makes stitching particularly difficult, or impossible in a naive way, since overlapping parts of neighbouring images which may in principle show the same content cannot be aligned. This is because changing the viewpoint corresponds to a rotation of the objects in the real world, which may hide parts that were visible before the rotation and reveal new ones afterwards. As a consequence, no two neighbouring images will exhibit simple cuts where one image can be aligned with its neighbour. One of the earliest instances of multi-perspective imaging is the animated cartoon Pinocchio by Walt Disney Productions, made in 1940.
III. PROPOSED WORK
The main aim is to achieve a 360° outward panorama of an enclosed area and then to track a particular object. In terms of feasibility, the project is plausible because it is confined to an enclosed area. The goals are tracking objects of interest and understanding and analyzing their activities. The primary goal is to support a wide variety of applications in private environments, such as monitoring laboratories, libraries, patients, the elderly and children at home. The view of a single camera is finite and limited by scene structures, so in order to monitor a wide area, such as tracking student activities in classrooms, video streams from multiple cameras will be used. Four tasks are involved:
1. Multi-camera calibration will map different camera views to a single coordinate system.
2. The topology of a camera network will identify whether camera views are overlapped or spatially adjacent and describe the transition time of objects between camera views.
3. Object re-identification will be used to match two image regions observed in different camera views and recognize whether they belong to the same object, based purely on appearance information without spatio-temporal reasoning.
4. Multi-camera tracking will be used to track objects across camera views. If it is known that two camera views overlap, the homography between them can be computed automatically (a minimal sketch follows this list); therefore, these two problems are jointly solved in some approaches. Multi-camera tracking requires matching tracks obtained from different camera views according to their visual and spatio-temporal similarities. Matching the appearance of image regions is studied in object re-identification. The spatio-temporal reasoning requires camera calibration and knowledge of the topology.
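As a minimal sketch of item 4, the snippet below estimates the homography between two overlapping views with RANSAC; the correspondences are synthetic stand-ins for real feature matches, and all numbers are illustrative assumptions only:

```python
import cv2
import numpy as np

# Synthetic correspondences standing in for real feature matches between
# two overlapping camera views (feature detection and matching omitted).
rng = np.random.default_rng(0)
pts_a = rng.uniform(0, 640, size=(50, 2)).astype(np.float32)

H_true = np.array([[1.0,  0.05, 30.0],
                   [0.02, 1.0, -15.0],
                   [1e-4, 0.0,   1.0]])
pts_b = cv2.perspectiveTransform(pts_a.reshape(-1, 1, 2), H_true).reshape(-1, 2)

# Corrupt a few correspondences to mimic mismatches from real matching.
pts_b[:5] += rng.uniform(50, 100, size=(5, 2)).astype(np.float32)

# RANSAC rejects the outliers while estimating the 3x3 homography.
H_est, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
print("estimated homography:\n", H_est)
print("inliers:", int(inlier_mask.sum()), "of", len(pts_a))
```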

IV. CONSTRUCTING PANORAMA


The process of building a panoramic image consists of five steps: taking a series of images, locating correspondence points in each pair of images, estimating a transformation matrix between related photographs, using it to calculate the new location of each image in the panorama, and finally stitching the photos together.
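One hedged way to realize this pipeline is OpenCV's high-level stitcher, which internally performs feature detection, matching, transform estimation and blending; the six file names below are hypothetical, and this is an illustrative sketch rather than the paper's own implementation:

```python
import cv2

# Six hypothetical camera frames; in the proposed system these would be the
# synchronized frames captured by the six cameras at a given instant.
paths = [f"cam{i}.jpg" for i in range(6)]
images = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("stitching failed with status code", status)
```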
V. SIFT
The SIFT algorithm can be used for image matching. SIFT is feature-based and invariant to image scaling, rotation and changes in illumination. The major stages in extracting image features are listed below; a minimal usage sketch follows the list.
1. Scale-space extrema detection.
2. Keypoint localization.
3. Orientation assignment.

4. Generation of keypoint descriptors.
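A brief usage sketch of these four stages via OpenCV follows; SIFT_create() carries out the first three stages internally, and detectAndCompute() additionally returns the keypoint descriptors. The file name is hypothetical:

```python
import cv2

# Hypothetical input frame from one of the cameras.
img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries its scale (kp.size) and orientation (kp.angle);
# descriptors is an N x 128 matrix with one row per keypoint.
print(len(keypoints), "keypoints, descriptor matrix shape:", descriptors.shape)
```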

VI. BLENDING
Image blending refers to the process of creating a set of discrete samples of a continuous, one-parameter family of images that connects a pair of input images.
Feathering or centre-weighted image blending: in this simplest approach, the pixel values in the blended region are a weighted average of the two overlapping images:
P(i, j) = (1 - w) * PA(i, j) + w * PB(i, j)
where PA and PB are the overlapping images and w is the blending weight, which increases from 0 to 1 across the overlap.
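A minimal NumPy sketch of this feathering rule, assuming two aligned colour images of equal size and a weight that ramps linearly across the overlap (the function name and inputs are illustrative):

```python
import numpy as np

def feather_blend(img_a, img_b):
    """Feathered (centre-weighted) blend of two aligned colour images (H, W, 3).

    The weight w ramps linearly from 0 to 1 across the width, so each output
    pixel is the weighted average given by the equation above.
    """
    h, w = img_a.shape[:2]
    weight = np.linspace(0.0, 1.0, w).reshape(1, w, 1)   # w = 0 at left edge, 1 at right

    blended = (1.0 - weight) * img_a.astype(np.float64) + weight * img_b.astype(np.float64)
    return blended.astype(np.uint8)

# Hypothetical usage on the overlapping strips of two adjacent camera images:
# seam = feather_blend(left_overlap, right_overlap)
```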

VII. OPTIMIZED SIFT FEATURE MATCHING ALGORITHM
Due to noise and surface similarity, the accuracy of image matching can decrease with the original SIFT algorithm. In order to reduce mismatched points and eliminate redundant points, this paper proposes the optimized matching algorithm below.
First, use the original SIFT algorithm to extract a large number of matching points as initial candidate feature points. Then establish a kd-tree data structure over the feature points and their feature vectors in the image, which improves the searching speed. Meanwhile, limit the relative displacement between feature points to less than two pixels, which keeps the ratio of the nearest neighbour to the second-nearest neighbour at a large scale. This step accomplishes the coarse matching.
Second, apply the maximum-of-minimum-distance clustering algorithm to select, from the matching points detected above, those forming the spatially farthest cluster centres, so as to obtain the most uniform spatial distribution of matching points. This step largely improves the accuracy of the subsequent geometric correction.
Last, according to the principle of the consistency test, compute the line segments formed by the precise matching points. If the ratio of the corresponding line segments is similar, it shows that the further selected matching points meet the registration precision; these are called consistency data.
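The coarse kd-tree matching and the maximum-of-minimum-distance selection can be sketched as follows; the descriptors and keypoint positions are synthetic, and the thresholds and sizes are illustrative assumptions rather than the paper's exact values:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Synthetic SIFT-like descriptors: the first 100 rows of image B are noisy
# copies of image A's descriptors, so genuine matches exist.
des_a = rng.normal(size=(200, 128)).astype(np.float32)
des_b = np.vstack([des_a[:100] + 0.05 * rng.normal(size=(100, 128)),
                   rng.normal(size=(120, 128))]).astype(np.float32)

# Coarse matching: a kd-tree over image B's descriptors speeds up finding the
# nearest and second-nearest neighbour of every descriptor in image A.
tree = cKDTree(des_b)
dists, nn = tree.query(des_a, k=2)            # nn[i, 0] is the match in B for descriptor i
matches = np.flatnonzero(dists[:, 0] < 0.8 * dists[:, 1])   # assumed ratio threshold

# Maximum-of-minimum-distance selection: greedily keep matches whose keypoint
# positions are farthest from those already kept, for a spatially uniform set.
pts = rng.uniform(0, 640, size=(200, 2))[matches]           # hypothetical keypoint positions
keep = [0]
while len(keep) < min(20, len(matches)):
    d = np.min(np.linalg.norm(pts[:, None] - pts[keep][None], axis=2), axis=1)
    keep.append(int(np.argmax(d)))

print(len(matches), "coarse matches,", len(keep), "spatially uniform matches kept")
```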

VIII. ESTIMATED RESULT

The result that can be expected from this process is a 360° outward panoramic video with very minor glitches, if any. Playback will stay synchronized with the actual events, with very few delays, and the stitched result will be optically clear, which will further support the function of tracking moving objects.
IX. CONCLUSION

The proposed approach helps us capture even the minute details of an enclosed area. The creation of a 360° panorama will help reduce the number of people monitoring the area, thereby reducing human error.
X. FUTURE WORK
In the proposed project the source (the set of cameras) is static, so future work will be to capture real-time data by making the source dynamic (moving cameras). An application of this future work will be for army tanks.

