
Distributed FPGA-based smart camera architecture for computer vision applications

Cédric Bourrasset, Luca Maggiani, Jocelyn Sérot, François Berry, Paolo Pagano

To cite this version:

Cédric Bourrasset, Luca Maggiani, Jocelyn Sérot, François Berry, Paolo Pagano. Distributed FPGA-based smart camera architecture for computer vision applications. International Conference on Distributed Smart Cameras 2013, Oct 2013, Palm Springs, United States. 2013 Seventh International Conference on Distributed Smart Cameras (ICDSC 2013). <10.1109/ICDSC.2013.6778245>. <hal-01186091>

HAL Id: hal-01186091
https://hal.archives-ouvertes.fr/hal-01186091
Submitted on 24 Aug 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Distributed FPGA-based Smart Camera Architecture for Computer Vision Applications

Cédric Bourrasset, Luca Maggiani, Jocelyn Sérot, François Berry, Paolo Pagano
Institut Pascal, UMR 6602 CNRS, Université Blaise Pascal, Clermont-Fd, France
TeCIP Institute, Scuola Superiore Sant'Anna, Pisa, Italy
National Laboratory of Photonic Networks, CNIT, Pisa, Italy

Abstract: Smart camera networks (SCNs) raise challenging issues in many fields of research, including vision processing, communication protocols, distributed algorithms and power management. Furthermore, the application logic in an SCN is not centralized but spread among the network nodes, meaning that each node must process images to extract significant features and aggregate data to understand the surrounding environment.

In this context, smart cameras first embedded general-purpose processors (GPPs) for image processing. As image resolution increases, GPPs have reached their limit in maintaining real-time processing constraints. More recently, FPGA-based platforms have been studied for their massive parallelism capabilities. This paper presents our new FPGA-based smart camera platform, which supports cooperation between nodes and run-time updatable image processing. The architecture is based on a fully reconfigurable pipeline driven by a softcore processor.

I. INTRODUCTION

Smart Camera Networks (SCNs) are an emerging research field representing the natural evolution of centralized computer vision applications towards fully distributed systems. Indeed, in SCNs the application logic is not centralized but spread among network nodes: every SCN node has the capability to pre-process images in order to extract significant features. In such a scenario, strong cooperation between nodes is necessary, as well as the possibility of having pervasive and redundant SCN devices [1].

In this context, many hardware architectures for visual sensor networks have been proposed [2]. Most existing platforms are based on GPP units [3], because this kind of architecture offers high-level programming and modern embedded processors provide good performance. Despite this, embedded processors cannot meet real-time processing constraints when the image sensor resolution increases.

On the other side, FPGA-based architectures, which offer more massive computing capability than GPPs, have been studied for implementing image processing applications on smart camera nodes [4]. However, while the introduction of FPGAs can address some performance issues in smart camera networks, it introduces new challenges concerning node programmability, hardware abstraction and network management.

This work presents an FPGA-based platform offering high capacity for image processing while also being fully configurable and updatable. The proposed architecture is inspired by the Internet of Things, where all device nodes (including low-end ones) are addressable, and their resources may represent sensors, actuators, combinations of values or other information, eligible to be manipulated in M2M and H2M interactions. FPGAs are therefore considered as publishers of Internet resources, addressable as URIs by means of application layer protocols such as HTTP and CoAP [5]. A minimal example of this addressing scheme is sketched below.
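As a minimal, non-authoritative sketch of this addressing idea, a client could query a camera node's published feature resource as shown below. The host name camera-node-1.local and the resource path /sensors/hog are invented for illustration, and plain HTTP over POSIX sockets stands in here for the CoAP stack referenced in [5].

```c
/* Illustrative sketch only: fetch a (hypothetical) feature resource that a
 * smart camera node publishes as a URI, using plain HTTP over a TCP socket. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    /* "camera-node-1.local" is an assumed node address; port 80 for HTTP. */
    if (getaddrinfo("camera-node-1.local", "80", &hints, &res) != 0) {
        fprintf(stderr, "cannot resolve camera node\n");
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return 1;
    }

    /* GET the hypothetical resource exposing the latest extracted features. */
    const char *req =
        "GET /sensors/hog HTTP/1.1\r\n"
        "Host: camera-node-1.local\r\n"
        "Connection: close\r\n\r\n";
    write(fd, req, strlen(req));

    char buf[1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* print response headers + payload */

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```

On a constrained node the same resource would more likely be served over CoAP/UDP rather than HTTP/TCP, which is why CoAP is cited as the target application-layer protocol.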
II. HARDWARE ARCHITECTURE

The smart camera architecture used in this paper is an FPGA-based platform developed at our institute and called DreamCam [6] (Fig. 1). This smart camera consists of five stacked boards. This modular architecture allows us to adapt easily to new functions. For instance, different communication boards have been designed, and switching from USB to Giga-Ethernet is trivial.

Fig. 1: DreamCam system (stacked boards: E2V image sensor board, Cyclone III FPGA board, Giga-Ethernet board, memory board and power board)

The DreamCam platform is equipped with a 1.3 Mpixel CMOS image sensor from E2V, supporting sub-sampling/binning and multiple regions of interest (ROIs). The processing core is a Cyclone III EP3C120 FPGA manufactured by Altera. This FPGA is connected to 6x1 MB of SRAM memory blocks, where each 1 MB memory block has private data and address buses (hence the programmer may address the six memories independently).

Fig. 2 shows the internal FPGA architecture. Two IPs (Intellectual Property blocks) have been designed to control the image sensor and the communication device. The pixel flow is processed by the configurable processing block, and the results are resources accessible via the Internet. System management is performed by a softcore processor (NIOS II). At run time, the softcore can specify the Ethernet configuration (MAC and IP address) and the image sensor configuration (resolution, frame rate, ROI), and configure the processing block; a rough sketch of this configuration step is given below.

Fig. 2: FPGA logical architecture (image sensor IP, configurable processing block, communication IP, Gigabit Ethernet controller, softcore, memory IP and external memory)
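As a rough illustration only, a NIOS II softcore could expose this run-time configuration through memory-mapped registers along the lines of the sketch below; all base addresses, register offsets and field names are assumptions, not the actual DreamCam register map.

```c
/* Hypothetical sketch of run-time configuration from the NIOS II softcore.
 * Register offsets, base addresses and structure fields are invented for
 * illustration; the paper does not specify the actual register map. */
#include <stdint.h>

#define ETH_BASE     0x08000000u   /* assumed base of the Ethernet controller IP */
#define SENSOR_BASE  0x08001000u   /* assumed base of the image sensor IP        */

typedef struct {
    uint8_t  mac[6];    /* MAC address              */
    uint32_t ipv4;      /* IPv4 address, host order */
} eth_config_t;

typedef struct {
    uint16_t width, height;               /* resolution         */
    uint16_t fps;                         /* frame rate         */
    uint16_t roi_x, roi_y, roi_w, roi_h;  /* region of interest */
} sensor_config_t;

/* Write a 32-bit value to a memory-mapped register. */
static inline void reg_write(uint32_t base, uint32_t offset, uint32_t value)
{
    *(volatile uint32_t *)(base + offset) = value;
}

void configure_node(const eth_config_t *eth, const sensor_config_t *cam)
{
    /* Ethernet configuration: MAC (two words) and IP address. */
    reg_write(ETH_BASE, 0x00, (uint32_t)eth->mac[0] << 24 | eth->mac[1] << 16 |
                              (uint32_t)eth->mac[2] << 8  | eth->mac[3]);
    reg_write(ETH_BASE, 0x04, (uint32_t)eth->mac[4] << 8  | eth->mac[5]);
    reg_write(ETH_BASE, 0x08, eth->ipv4);

    /* Image sensor configuration: resolution, frame rate, ROI. */
    reg_write(SENSOR_BASE, 0x00, (uint32_t)cam->width << 16 | cam->height);
    reg_write(SENSOR_BASE, 0x04, cam->fps);
    reg_write(SENSOR_BASE, 0x08, (uint32_t)cam->roi_x << 16 | cam->roi_y);
    reg_write(SENSOR_BASE, 0x0C, (uint32_t)cam->roi_w << 16 | cam->roi_h);
}
```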

This block is composed of a pipeline of simple processing elements, called Elab, which can be interconnected via modules called RouteMatrix (Fig. 3). A processing element can be, for instance, a thresholding, a convolution or a histogram computation, and the interconnection (RouteMatrix) can be viewed as a multiplexer controlled by the softcore. In this way, each node (smart camera) can be configured via the network and provides the expected resource. The RouteMatrix modules realize the connections between the Elab blocks, while the Elab elements implement basic computer vision algorithms that can be part of a specific elaboration pipeline, tunable at run time. This architecture can support multi-camera video inputs that can be processed in parallel as a continuous pixel flow. In this respect, the captured flow has no associated semantics, so that a user interacting with a configuration manager can select and compose the appropriate modules suited to the desired application. A sketch of such a composition step is given after Fig. 3.

Fig. 3: Configurable processing IP (video streams 0-3 routed through successive RouteMatrix stages to Elab elements Elab00-Elab23; elaboration parameters are configured and the interconnection is reconfigured in software at run time)
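The sketch below illustrates how the softcore might program the RouteMatrix multiplexers to compose such a pipeline; the register layout, Elab identifiers and select encoding are hypothetical and only illustrate the multiplexer-based composition described above, not the actual IP interface.

```c
/* Hypothetical sketch of composing an elaboration pipeline by programming the
 * RouteMatrix multiplexers from the softcore. The register layout and the
 * Elab/port identifiers are invented for illustration purposes only. */
#include <stdint.h>

#define ROUTEMATRIX_BASE 0x08002000u   /* assumed base address of the RouteMatrix IP */

enum elab_id { ELAB_THRESHOLD = 0, ELAB_CONVOLUTION = 1, ELAB_HISTOGRAM = 2 };

/* Select which source feeds the input of a given Elab in a given stage:
 * one select register per (stage, element) pair in this imaginary layout. */
static void routematrix_connect(unsigned stage, unsigned elab, unsigned source)
{
    volatile uint32_t *sel =
        (volatile uint32_t *)(ROUTEMATRIX_BASE + (stage * 16u + elab) * 4u);
    *sel = source;
}

/* Example: route video stream 0 through a threshold, then a histogram. */
void build_threshold_histogram_pipeline(void)
{
    routematrix_connect(0, ELAB_THRESHOLD, 0);                /* stage 0: stream 0 -> threshold  */
    routematrix_connect(1, ELAB_HISTOGRAM, ELAB_THRESHOLD);   /* stage 1: threshold -> histogram */
}
```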
III. APPLICATION

The targeted application is a tracking system in a distributed network of smart cameras, presented in Fig. 4. Our system will be based on a particle filter, which is very common for tracking objects [7]. Moreover, Cho et al. have demonstrated in [8] that particle filters can be implemented in an FPGA device, making us confident about the feasibility of this hardware implementation. A first application (HOG) using this modular architecture has been implemented, and the smart camera is currently able to exchange information via Giga-Ethernet communication with low FPGA resource usage (around 5%). Future development will focus on wireless transceiver integration and the particle filter implementation; a generic sketch of a particle filter step is given after Fig. 4.

Fig. 4: Distributed target application (active and standby smart cameras cooperating over Ethernet)
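For reference, the sketch below shows a generic bootstrap particle filter step (predict, weight, resample) for a one-dimensional position estimate. It is a textbook formulation given only to clarify the technique; it is not the FPGA design of [8] nor the implementation planned for our nodes, and the noise parameters are arbitrary.

```c
/* Generic bootstrap particle filter sketch (predict, weight, resample) for a
 * 1-D position estimate. Textbook formulation for illustration only. */
#include <stdlib.h>
#include <math.h>

#define N_PARTICLES 256

static const double TWO_PI = 6.283185307179586;

typedef struct { double x[N_PARTICLES]; double w[N_PARTICLES]; } pf_t;

static double randu(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }

/* Gaussian noise via the Box-Muller transform. */
static double randn(double sigma)
{
    return sigma * sqrt(-2.0 * log(randu())) * cos(TWO_PI * randu());
}

/* One filter iteration given a new measurement z. Returns the state estimate. */
double pf_step(pf_t *pf, double z, double process_sigma, double meas_sigma)
{
    double wsum = 0.0, estimate = 0.0;

    /* 1. Predict: propagate each particle with process noise.
     * 2. Weight: score each particle against the measurement z. */
    for (int i = 0; i < N_PARTICLES; i++) {
        pf->x[i] += randn(process_sigma);
        double d = z - pf->x[i];
        pf->w[i] = exp(-0.5 * d * d / (meas_sigma * meas_sigma));
        wsum += pf->w[i];
    }
    for (int i = 0; i < N_PARTICLES; i++) {
        pf->w[i] /= wsum;
        estimate += pf->w[i] * pf->x[i];   /* weighted mean as the estimate */
    }

    /* 3. Resample (systematic resampling) to fight weight degeneracy. */
    double xs[N_PARTICLES], u = randu() / N_PARTICLES, c = pf->w[0];
    int i = 0;
    for (int j = 0; j < N_PARTICLES; j++) {
        double uj = u + (double)j / N_PARTICLES;
        while (uj > c && i < N_PARTICLES - 1) c += pf->w[++i];
        xs[j] = pf->x[i];
    }
    for (int j = 0; j < N_PARTICLES; j++) { pf->x[j] = xs[j]; pf->w[j] = 1.0 / N_PARTICLES; }

    return estimate;
}
```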
IV. CONCLUSION

In this paper, we presented an FPGA architecture offering high processing capacity with configurable processing. At this point of the project, our smart cameras can autonomously communicate with each other over an Ethernet network. This first architecture does not address low-power communication in smart camera networks, which still needs to be investigated. A new physical medium is under development and will integrate an IEEE 802.15.4 transceiver supporting wireless communication.

ACKNOWLEDGMENT

This work has been sponsored by the French government research program "Investissements d'avenir" through the IMobS3 Laboratory of Excellence (ANR-10-LABX-16-01), by the European Union through the program Regional Competitiveness and Employment 2007-2013 (ERDF Auvergne region), and by the Auvergne region. This work is made in collaboration with the Networks of Embedded Systems team at CNIT (Consorzio Nazionale Interuniversitario per le Telecomunicazioni).

REFERENCES

[1] P. Pagano, C. Salvadori, S. Madeo, M. Petracca, S. Bocchino, D. Alessandrelli, A. Azzarà, M. Ghibaudi, G. Pellerano, and R. Pelliccia, "A middleware of things for supporting distributed vision applications," in Proceedings of the 1st Workshop on Smart Cameras for Robotic Applications (SCaBot), 2012.
[2] W. Wolf and B. Rinner, Towards Pervasive Smart Camera Networks. Academic Press, 2009.
[3] P. Chen, P. Ahammad, C. Boyer, S.-I. Huang, L. Lin, E. Lobaton, M. Meingast, S. Oh, S. Wang, P. Yan, A. Yang, C. Yeo, L.-C. Chang, J. D. Tygar, and S. Sastry, "CITRIC: A low-bandwidth wireless camera network platform," in Second ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC 2008), 2008.
[4] E. Norouznezhad, A. Bigdeli, A. Postula, and B. Lovell, "A high resolution smart camera with GigE Vision extension for surveillance applications," in Second ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC 2008), 2008, pp. 1-8.
[5] Z. Shelby, K. Hartke, C. Bormann, and B. Frank, "Constrained Application Protocol (CoAP)," CoRE Working Group, Internet Engineering Task Force, October 2012.
[6] M. Birem and F. Berry, "DreamCam: A modular FPGA-based smart camera architecture," Journal of Systems Architecture, Elsevier, to appear, 2013.
[7] S. Haykin and N. de Freitas, "Special issue on sequential state estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 399-400, 2004.
[8] J. U. Cho, S.-H. Jin, X. D. Pham, J. W. Jeon, J.-E. Byun, and H. Kang, "A real-time object tracking system using a particle filter," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), 2006, pp. 2822-2827.
