FPGA Implementation of Pipeline Processor
TRACKING ALGORITHM
A PROJECT REPORT
Submitted by
G. SHRIKANTH (21904106079)
KAUSHIK SUBRAMANIAN (21904106043)
in partial fulfillment for the award of the degree
of
BACHELOR OF ENGINEERING
In
BONAFIDE CERTIFICATE
SIGNATURE
SIGNATURE
Prof. R. Narayanan
Mr. N. Venkateswaran
SUPERVISOR
Assistant Professor
Communication Engineering
Communication Engineering
Engineering, Pennalur,
Sriperumbudur - 602105
Engineering, Pennalur,
Sriperumbudur - 602105
EXTERNAL EXAMINER
INTERNAL EXAMINER
ACKNOWLEDGEMENT
We are personally indebted to a number of people who gave us their useful
insights to aid in our overall progress for this project. A complete
acknowledgement would therefore be encyclopedic. First of all, we would
like to give our deepest gratitude to our parents for permitting us to take up
this course.
Our sincere thanks and heartfelt sense of gratitude goes to our respected
Principal, Dr. R. Ramachandran for all his efforts and administration in
educating us in his premiere institution. We take this opportunity to also
thank our Head of the Department, Prof. R. Narayanan for his
encouragement throughout the project.
ABSTRACT
In this project we propose to use Image Processing algorithms for
the purpose of Object Recognition and Tracking and implement the same
using an FPGA.
The individual frames acquired from the target video are fed into
the FPGA, where they are subjected to segmentation, thresholding and
filtering stages. The object is then tracked by comparing the background
frame with the processed, updated frame containing the new location of the
target. The results of the FPGA implementation in tracking a moving object
were found to be positive and suitable for object tracking.
TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
1. INTRODUCTION TO OBJECT TRACKING AND SYSTEM DESIGN
   1.1 OVERVIEW
      1.1.1 Basic Object Tracking
   1.3.3 Thresholding
2. FPGA IMPLEMENTATION OF OBJECT TRACKING ALGORITHM
   2.4.1.1 C Compiler
   2.5 SIMULATION
CONCLUSIONS
APPENDIX 1
APPENDIX 2
REFERENCES
LIST OF FIGURES

1. INTRODUCTION
   1.1 Layout of the Image Processing System
   1.2 Object Recognition Algorithm Flow
   1.3 Gray Level Thresholding
   1.4 Example of Median Filter
   1.5 Frame Generation using Matlab
   1.6 Step-wise Generation of Enhanced Image
   1.7 Object Path Obtained
2. FPGA IMPLEMENTATION
   2.1 Benchmarking Test conducted by BDTi
   2.2 Programmable Logic Blocks of an FPGA
   2.3 Spartan-3E Layout
   2.4 Spartan-3E Starter Kit
   2.5 Mapping the Sliding Window Operation
   2.6 Window Operation using Buffers
   2.7 System Block Diagram
   2.8 Text File converted to Image in Matlab
   2.9 Pixel Values obtained from FPGA plotted using Matlab
CHAPTER 1
INTRODUCTION TO OBJECT TRACKING AND SYSTEM DESIGN
1.1 OVERVIEW
The video is fed into the Matlab program, which reads the .avi file
and converts it into frames at the rate of 10 frames per second. For a
10-second video, a total of 100 frames is therefore produced in RGB format.
These frames are stored as individual bitmap files (100 files in total),
arranged in the order of their occurrence in the video. The first frame is
selected as the Base Background Frame; the remaining bitmap files are used
for the process of Object Recognition and Tracking.
1.3.3 Thresholding
To further enhance the resolution of the delta frame, Gray Scale
Thresholding is performed; an example is shown in Figure 1.3. Each pixel in
the grayscale image is marked as an object pixel if its value is greater than
some threshold value (initially set to 80), and as a background pixel otherwise.
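The thresholding step above can be sketched in C as follows. This is a minimal illustration, not the project's actual code: the frame dimensions, buffer names and function name are assumptions; only the threshold of 80 is taken from the text.

```c
#include <stdint.h>

#define ROWS 128
#define COLS 128
#define THRESHOLD 80  /* initial threshold from the text above */

/* Mark each pixel as object (255) if it exceeds the threshold,
 * or background (0) otherwise. */
void threshold_image(const uint8_t in[ROWS][COLS],
                     uint8_t out[ROWS][COLS])
{
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            out[i][j] = (in[i][j] > THRESHOLD) ? 255 : 0;
}
```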
The median filter works by sorting the pixel values in the neighborhood and
then replacing the pixel being considered with the middle pixel value. (If the
neighborhood under consideration contains an even number of pixels, the
average of the two middle pixel values is used.) An example is shown in
Figure 1.4. In general, the median filter allows a great deal of high spatial
frequency detail to pass while remaining very effective at removing noise in
images where fewer than half of the pixels in a smoothing neighborhood have
been affected.
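The median selection for an odd-sized neighborhood can be sketched in C; for a 3x3 window there are 9 values, so the middle element after sorting is the median. The function names here are illustrative.

```c
#include <stdint.h>
#include <stdlib.h>

/* Comparison helper for qsort on unsigned 8-bit pixel values. */
static int cmp_u8(const void *a, const void *b)
{
    return (int)*(const uint8_t *)a - (int)*(const uint8_t *)b;
}

/* Return the median of a 3x3 neighborhood: sort the 9 values
 * and take the middle one (index 4). Note: sorts in place. */
uint8_t median9(uint8_t window[9])
{
    qsort(window, 9, sizeof(uint8_t), cmp_u8);
    return window[4];
}
```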
The algorithm described above identifies the object and gives us
information about its shape and size. To track it, we must select the frames
acquired from the video. For a 10-second video, 100 frames are produced.
The frames are then fed into the Matlab program at the rate of 1 frame per
second, on the assumption that the chosen rate captures the complete motion
of the object. The optimal frame rate considered is 1 frame per second;
complexity is reduced further if we alter the input frame rate to 4 frames
per second.
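The frame-rate reduction described above amounts to keeping every Nth frame of the capture. A minimal C sketch of the idea, with all names assumed:

```c
/* From a capture at capture_fps, keep track_fps frames per second
 * by taking every (capture_fps / track_fps)-th frame. Writes the
 * selected frame indices into selected[] and returns their count. */
int select_frames(int total_frames, int capture_fps,
                  int track_fps, int selected[])
{
    int step = capture_fps / track_fps;  /* e.g. 10 / 1 = 10 */
    int n = 0;
    for (int f = 0; f < total_frames; f += step)
        selected[n++] = f;
    return n;
}
```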
At the rate of 1 frame per second, the enhanced image is fed to the
tracking program. This analyzes each frame and computes the first white
pixel that represents the object, under the assumption that the object is the
only white region in the enhanced frame.
Figure 1.6: Step-wise generation of Enhanced Image (panels include
(b) Threshold, (d) Edge Detection, (e) Enhanced Image)
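The raster-order search for the first white pixel can be sketched in C; the frame size and names are assumptions, not the project's code.

```c
#include <stdint.h>

#define ROWS 128
#define COLS 128

/* Scan the enhanced frame in raster order and report the first
 * white pixel (value 255), taken as the object position.
 * Returns 1 if a white pixel is found, 0 otherwise; the position
 * is written through the row/col pointers. */
int find_first_white(const uint8_t frame[ROWS][COLS],
                     int *row, int *col)
{
    for (int i = 0; i < ROWS; i++) {
        for (int j = 0; j < COLS; j++) {
            if (frame[i][j] == 255) {
                *row = i;
                *col = j;
                return 1;
            }
        }
    }
    return 0;
}
```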
CHAPTER 2
FPGA IMPLEMENTATION OF OBJECT TRACKING
ALGORITHM
The main advantages are high-speed connectivity, high-performance DSP
solutions and low-cost embedded processing solutions. The DSP resources
support functions such as MAC engines and adaptive, fully parallel FIR
filters; the Block RAM can be used for storing partial products and
coefficients.
The MicroBlaze is a full 32-bit RISC CPU embedded in the Xilinx
Spartan-3E and Virtex-4 FPGA families. It has separate 32-bit instruction
and data buses that conform to IBM's OPB (On-chip Peripheral Bus)
specification, can run at speeds of up to 100 MHz, and is the best choice
for CPU-intensive tasks in Xilinx FPGA based systems.
The window operation combines the pixels in the neighborhood with some
operator; the result is a pixel value that is assigned to the centre of the
window in the output image, as shown below in Figure 2.5.
To support both caching and pipelining, there needs to be a mechanism for
adding to the row buffer and for flushing the pipeline. This is required
when operating on video data because of the horizontal blanking between
lines and the vertical blanking between frames. If either the buffer or the
pipeline operated during the blanking periods, the results for the
following pixels would be incorrect due to invalid data being written to
them. This requires us to stop entering data into the row buffers and to
stall the pipeline while a blanking period occurs.
A better option is to replicate the edge pixels of the closest border. Such
image padding can be considered a special case of pipeline priming.
When a new frame is received, the first line is pre-loaded into the row
buffer the required number of times for the given window size. Before
processing a new row, the first pixel is also pre-loaded the required
number of times, as is the last pixel of the line and the last line.
Figure 2.6 shows the implementation of the Row Buffers for Window
Operations.
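Border replication can be expressed as clamping neighborhood indices to the image bounds. A minimal C sketch of gathering a 3x3 window with replicated borders; the image size and function names are assumptions.

```c
#include <stdint.h>

#define ROWS 128
#define COLS 128

/* Clamp an index into [0, hi], replicating the closest border
 * pixel, as in the padding scheme described above. */
static int clamp(int v, int hi)
{
    if (v < 0)  return 0;
    if (v > hi) return hi;
    return v;
}

/* Gather the 3x3 neighborhood of pixel (i, j) into window[],
 * replicating edge pixels where the window falls off the image. */
void gather_window(const uint8_t img[ROWS][COLS],
                   int i, int j, uint8_t window[9])
{
    int k = 0;
    for (int di = -1; di <= 1; di++)
        for (int dj = -1; dj <= 1; dj++)
            window[k++] = img[clamp(i + di, ROWS - 1)]
                             [clamp(j + dj, COLS - 1)];
}
```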
Because the Spartan-3E FPGA used in the design does not have enough
internal RAM for image storage, the processing blocks were interfaced with
five on-board 256K x 36-bit pipelined DDRAM devices. To reduce the
hardware computation time, each sub-block can read and write within the
same clock cycle; while active, each sub-block was connected to two
memory chips. Typically, a computational block reads its inputs from one
memory and writes its outputs to another. It is also necessary to
control/arbitrate the FPGA internal Block RAM, which is used for storage of
computed thresholds and other parameters. The memory interface provides
the computational blocks with a common interface and hides some of the
complex details.
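The read-from-one-memory, write-to-the-other arrangement is a ping-pong (double-buffering) scheme. A minimal C sketch of the idea; the pass logic and all names are purely illustrative, not the project's memory interface.

```c
#include <stdint.h>

#define N 16

/* One processing pass: read inputs from src, write results to dst.
 * The "processing" here is a placeholder increment. */
static void process(const uint8_t *src, uint8_t *dst)
{
    for (int i = 0; i < N; i++)
        dst[i] = src[i] + 1;
}

/* Run several passes, swapping the two memories between passes,
 * so each pass reads from one chip and writes to the other. */
void run_passes(uint8_t *memA, uint8_t *memB, int passes)
{
    uint8_t *src = memA, *dst = memB;
    for (int p = 0; p < passes; p++) {
        process(src, dst);
        uint8_t *tmp = src; src = dst; dst = tmp;  /* swap roles */
    }
}
```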
2.4.1.1 C Compiler
The Xilinx MicroBlaze processor supports Linux and C-to-FPGA
acceleration. Embedded systems can be developed to create
hardware-accelerated, single-chip applications that take advantage of the
MicroBlaze processor features and C-to-hardware acceleration, moving
complex, critical C-language processes to dedicated hardware.
Input Files
1. MHS File
The Microprocessor Hardware Specification (MHS) file defines the
hardware component. The MHS file serves as an input to the Platform
Generator (Platgen) tool. An MHS file defines the configuration of the
embedded processor system, and includes the following:
Bus architecture
Peripherals
Processor
System Connectivity
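For illustration, a fragment of an MHS file might look like the following; the instance name, version number and bus names here are assumptions for a generic OPB-based MicroBlaze system, not taken from the project.

```text
BEGIN microblaze
 PARAMETER INSTANCE = microblaze_0
 PARAMETER HW_VER = 7.10.d
 BUS_INTERFACE DOPB = mb_opb
 BUS_INTERFACE IOPB = mb_opb
END
```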
2. MSS File
The Microprocessor Software Specification (MSS) is used as an input file
to the Library Generator (Libgen). The MSS file contains directives for
customizing OSs, libraries, and drivers.
3. UCF File
The User Constraints File (UCF) specifies timing and placement
constraints for the FPGA Design.
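As an illustration, UCF constraints typically look like the following; the net name is an assumption, while C9 is the 50 MHz clock pin on the Spartan-3E Starter Kit.

```text
NET "sys_clk" LOC = "C9" | IOSTANDARD = LVCMOS33 ;
NET "sys_clk" TNM_NET = "sys_clk" ;
TIMESPEC "TS_sys_clk" = PERIOD "sys_clk" 20 ns HIGH 50 % ;
```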
Output Files
1. Block Memory Map
A BMM file is a text file that has syntactic descriptions of how
individual Block RAMs constitute a contiguous logical data space. When
updating the FPGA bitstream with memory initialization data, the
Data2Mem utility uses the BMM file to direct the translation of data into the
proper initialization form. This file is generated by the Platform Generator
(Platgen) and updated with physical location information by the Bitstream
Generator tool.
2. ELF File
The Executable and Linkable Format (ELF) is a common standard in
computing. An executable or executable file, in computer science, is a file
whose contents are meant to be interpreted as a program by a computer.
Most often, they contain the binary representation of machine instructions of
a specific processor, but can also contain an intermediate form that requires
the services of an interpreter to be run.
2.5 SIMULATION
Simulation outputs: (a) Threshold, (b) Noise Filter, (c) Edge Detection,
(d) Enhanced Image
Figure 2.9: Pixel Values obtained from FPGA plotted using Matlab
CHAPTER 3
CONCLUSION
The gray scale transformation has been used to remove the coherence
of the background and the target to be tracked. The delta-frame-based
segmentation and thresholding combine two intensive operations into one
step, eliminating the need for large numbers of parallel comparators. The
resulting optimized enhanced image fits on a small FPGA, such as the Xilinx
Spartan-3E XC3S500E, with sufficient resources available for an
application to make use of the derived tracking information. We have
demonstrated this by designing a simple video which contains an object in
motion.
APPENDIX 1
//Median Function
Module FindMedian( Values )
Start
    Sort the Values
    If CountOf(Values) Is Even Then
        Return Mean(Middle Two Values)
    Else
        Return Middle Value
    End If
End
Module MainProgram
Start
//Initialize the variables
Initialize MatVal, MatVal1, IntVal, IntVal1 To 0
Initialize cp To 0x25000000
Initialize cp1 To 0x25100000
Initialize NoOfRows,NoOfColumns To 128
Initialize NoOfRows1, NoOfColumns1 To 120
//Read 1st Input
For I = 0 To NoOfRows
For J = 0 To NoOfColumns
While( True )
If Ch Ranges from 0 to 9 Then
End If
FirstFilter[i][j] = bbr
End For
End For
//Thresholding
For I = 0 To NoOfRows
For J = 0 To NoOfColumns
If FirstFilter[i][j] > 40 Then
FirstFilter[i][j] = 255
Else
FirstFilter[i][j] = 0
End If
EdgeImage[i][j] = 255
End For
End For
//Edge Detection
For I = 0 To NoOfRows
For J = 0 To NoOfColumns
If I == 0 And J == 0 Then
If FirstFilter[j][i+1] And FirstFilter[j+1][i+1] And
FirstFilter[j+1][i] Are Equal To 255 Then
EdgeImage[j][i] = 0
End If
Else If I == 0 And J == Length Then
If FirstFilter[j+1][i] And FirstFilter[j+1][i-1] And
FirstFilter[j][i-1] Are Equal To 255 Then
EdgeImage[j][i] = 0
End If
Else If I == Length And J == 0 Then
If FirstFilter[j-1][i] And FirstFilter[j-1][i+1] And
FirstFilter[j][i+1] Are Equal To 255 Then
EdgeImage[j][i] = 0
End If
Else If I == Length And J == Length Then
If FirstFilter[j-1][i-1] And FirstFilter[j][i-1] And
FirstFilter[j-1][i] Are Equal To 255 Then
EdgeImage[j][i] = 0
End If
Else If I == 1 Then
If FirstFilter[j][i-1] And FirstFilter[j+1][i-1] And
FirstFilter[j+1][i] And FirstFilter[j+1][i+1] And
FirstFilter[j][i+1] Are Equal To 255 Then
EdgeImage[j][i] = 0
End If
Else If I == Length Then
If FirstFilter[j-1][i-1] And FirstFilter[j+1][i-1] And
FirstFilter[j+1][i] And FirstFilter[j-1][i] And
FirstFilter[j][i-1] Are Equal To 255 Then
EdgeImage[j][i] = 0
End If
Else If J == 1 Then
If FirstFilter[j][i-1] And FirstFilter[j-1][i-1] And
FirstFilter[j-1][i] And FirstFilter[j-1][i+1] And
FirstFilter[j][i+1] Are Equal To 255 Then
EdgeImage[j][i] = 0
End If
Else
If FirstFilter[j-1][i-1] And FirstFilter[j+1][i-1] And
FirstFilter[j+1][i] And FirstFilter[j+1][i+1] And
FirstFilter[j][i-1] And FirstFilter[j-1][i] Are Equal To 255
Then
EdgeImage[j][i] = 0
End If
End If
End For
End For
//Image Enhancement and Object Tracking
For I = 0 To NoOfRows
For J = 0 To NoOfColumns
Seg[i][j] = Input[i][j] - Input1[i][j]
If Seg[i][j] = 1 Then
Store [i],[j]
End If
End For
End For
End
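The delta-frame step of the pseudocode above (subtract the background frame from the current frame and store the positions that differ) can be sketched in C. The array names, sizes and the exact difference test are illustrative, not the report's code.

```c
#include <stdint.h>

#define ROWS 128
#define COLS 128

/* Compare the current frame against the base background frame and
 * record the (row, col) positions where they differ. Writes up to
 * max_pts pairs into pts and returns the number recorded. */
int delta_frame(const uint8_t cur[ROWS][COLS],
                const uint8_t bg[ROWS][COLS],
                int pts[][2], int max_pts)
{
    int n = 0;
    for (int i = 0; i < ROWS; i++) {
        for (int j = 0; j < COLS; j++) {
            if (cur[i][j] != bg[i][j] && n < max_pts) {
                pts[n][0] = i;
                pts[n][1] = j;
                n++;
            }
        }
    }
    return n;
}
```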
APPENDIX 2
This flexibility allows the user to balance the required performance of the
target application against the logic area cost of the soft processor.
The items in white are the backbone of the MicroBlaze architecture, while
the items shaded gray are optional features available depending on the exact
needs of the target embedded application. Because MicroBlaze is a soft-core
microprocessor, any optional features not used are not implemented and do
not take up any of the FPGA's resources.
Instruction Operations:
The MicroBlaze pipeline is a parallel pipeline, divided into three
stages: Fetch, Decode, and Execute. In general, each stage takes one clock
cycle to complete. Consequently, it takes three clock cycles (ignoring delays
or stalls) for the instruction to complete. Each stage is active on each clock
cycle so three instructions can be executed simultaneously, one at each of
the three pipeline stages. MicroBlaze implements an Instruction Prefetch
Buffer that reduces the impact of multi-cycle instruction memory latency.
While the pipeline is stalled by a multi-cycle instruction in the execution
stage the Instruction Prefetch Buffer continues to load sequential
instructions. Once the pipeline resumes execution the fetch stage can load
new instructions directly from the Instruction Prefetch Buffer rather than
having to wait for the instruction memory access to complete. The
Instruction Prefetch Buffer is part of the backbone of the MicroBlaze
architecture and is not the same thing as the optional instruction cache.
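The timing described above can be captured as a one-line formula: with S pipeline stages and one instruction issued per cycle, n instructions complete in (S - 1) + n cycles when there are no stalls. This is a generic pipeline identity, not something specific to MicroBlaze beyond its three stages.

```c
/* The first instruction takes S cycles to retire; each further
 * instruction retires one cycle later, so n instructions need
 * (S - 1) + n cycles in the absence of stalls. */
int pipeline_cycles(int stages, int n)
{
    return (stages - 1) + n;
}
```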
Stack:
The stack convention used in MicroBlaze starts from a higher
memory location and grows downward to lower memory locations as items
are pushed onto the stack by a function call. Items are popped off the
stack in the reverse order they were pushed: the item at the lowest memory
location of the stack comes off first.
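A downward-growing stack of this kind can be sketched in C; the sizes and names are illustrative, and this models the convention rather than MicroBlaze's actual register usage.

```c
#include <stdint.h>

#define STACK_WORDS 64

/* A stack that starts past the highest word and grows toward
 * lower addresses: push pre-decrements, pop post-increments. */
typedef struct {
    uint32_t mem[STACK_WORDS];
    uint32_t *sp;   /* points at the most recently pushed word */
} Stack;

void stack_init(Stack *s)        { s->sp = s->mem + STACK_WORDS; }
void push(Stack *s, uint32_t v)  { *(--s->sp) = v; }
uint32_t pop(Stack *s)           { return *(s->sp++); }
```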
Registers:
The MicroBlaze processor also has special-purpose registers, such
as: the Program Counter (PC), which can be read but not written; the
Machine Status Register (MSR), which indicates the status of the processor,
such as an arithmetic carry, a divide-by-zero error, a Fast Simplex Link
(FSL) error, and the enabling/disabling of interrupts, to name a few; an
Exception Address Register (EAR) that stores the full load/store address
that caused the exception; and an Exception Status Register (ESR) that
indicates what kind of exception occurred.
EDK Interface:
The MicroBlaze processor is of little use by itself without peripheral
devices to connect to, and EDK comes with a large number of commonly
used peripherals. Many different kinds of systems can be created with these
peripherals, but you may have to create your own custom peripheral to
implement functionality not available in the EDK peripheral libraries and
use it in your processor system.
The processor system created by EDK is connected by the On-chip
Peripheral Bus (OPB) and/or the Processor Local Bus (PLB), so your custom
peripheral must be OPB or PLB compliant. This means the top-level module
of your custom peripheral must contain a set of bus ports compliant with the
OPB or PLB protocol, so that it can be attached to the system OPB or PLB
bus.
REFERENCES
1. Crookes D., Benkrid K., Bouridane A., Alotaibi K., and Benkrid A.
(2000), "Design and implementation of a high level programming
environment for FPGA-based image processing", IEE Proceedings -
Vision, Image and Signal Processing, vol. 147, no. 4, Aug. 2000,
pp. 377-384.
2. Hong C.S., Chung S.M., Lee J.S. and Hong K.S. (1997), "A
Vision-Guided Object Tracking and Prediction Algorithm for Soccer
Robots", IEEE Robotics and Automation Society, vol. 1, pp. 346-351.