
Using Model-Based Design to Develop and Deploy a Video Processing Application

By Houman Zarrinkoub

According to the U.S. National Highway Traffic Safety Administration, single-vehicle road departures result in many serious accidents each year. To reduce the likelihood of a vehicle's straying out of lane, automotive engineers have developed lane tracking and departure warning systems that use a small camera to transmit video information about lane markings and road conditions to a microprocessor unit installed on the vehicle.

In this article, we show how Model-Based Design with Simulink® and the Video and Image Processing Blockset can be used to design a lane-detection and lane-departure warning system, implement the design on a Texas Instruments DSP, and verify its on-target performance in real time.

The core element of Model-Based Design is an accurate system model: an executable specification that includes all software and hardware implementation requirements, including fixed-point and timing behavior. You use the model to automatically generate code and test benches for final system verification and deployment. This approach makes it easy to express a design concept, simulate the model to verify the algorithms, automatically generate the code to deploy it on a hardware target, and verify exactly the same operation on silicon.

Building the System Model

Using Simulink, the Signal Processing Blockset, and the Video and Image Processing Blockset, we first develop a floating-point model of the lane-detection system. We model lane markers as line segments, detected by maximizing the Hough transform of the edges in a video frame. We input a video stream to the simulation environment using the From Multimedia File block from the Video and Image Processing Blockset. During simulation, the video data is processed in the Lane Marker Detection and Tracking subsystem, which outputs the detection algorithm results to the To Video Display block for computer visualization (Figure 1).

Figure 1. Lane-detection model.

Lane Detection and Visualization

Figure 2 shows the main subsystem of our Simulink model. The sequence of steps in the lane marker detection and tracking algorithm maps naturally to the sequence of subsystems in the model. We begin with a preprocessing step in which we define a relevant field of view and filter the output of this operation to reduce image noise. We then determine the edges of the image using the Edge Detection block in the Video and Image Processing Blockset. With this block we can use the Sobel, Prewitt, Roberts, or Canny methods.

Figure 2. Floating-point model: the Lane Marker Detection and Tracking subsystem.

Reprinted from The MathWorks News & Notes | January 2006 | www.mathworks.com
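As a rough illustration of what the block's Sobel option computes, here is a small NumPy sketch; this is our own Python illustration, not the Simulink block's implementation. It correlates the image with the two Sobel kernels and thresholds the gradient magnitude to obtain a binary edge map.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_valid(img, kernel):
    """Naive 'valid'-mode 2-D correlation; slow but fine for a demonstration."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(img, threshold=1.0):
    """Binary edge map: True where the Sobel gradient magnitude exceeds threshold."""
    gx = conv2_valid(img, SOBEL_X)
    gy = conv2_valid(img, SOBEL_Y)
    return np.hypot(gx, gy) > threshold

# A dark-to-bright vertical step yields edge pixels along the transition.
step = np.hstack([np.zeros((5, 4)), np.ones((5, 4))])
edges = sobel_edges(step)
```

The threshold value here is arbitrary; in the Edge Detection block it is a configurable parameter whose interpretation depends on the input data type.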
The block outputs a binary image, a matrix of Boolean values corresponding to edges.

Next, we detect lines using the Hough Transform block, which maps points in the Cartesian image space to curves in the Hough parameter space using the following equation:

rho = x * cos(theta) + y * sin(theta)

The block output is a parameter-space matrix whose rows and columns correspond to the rho and theta values, respectively. Peak values in this matrix represent potential lines in the input image.
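To make the equation concrete, the following NumPy sketch (our own prototype, not the Hough Transform block's code) votes each edge pixel into a quantized rho-theta accumulator; the peak cell then recovers the line parameters.

```python
import numpy as np

def hough_accumulate(edges, n_theta=180):
    """Vote each edge pixel (x, y) into a quantized (rho, theta) accumulator,
    using rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    max_rho = int(np.ceil(np.hypot(*edges.shape)))  # largest possible |rho|
    acc = np.zeros((2 * max_rho + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)  # row index is y, column index is x
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + max_rho, np.arange(n_theta)] += 1
    return acc, thetas, max_rho

# A vertical line at x = 5 concentrates all votes near theta = 0, rho = 5.
img = np.zeros((20, 20), dtype=bool)
img[:, 5] = True
acc, thetas, max_rho = hough_accumulate(img)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
```

Each edge pixel traces a sinusoid in parameter space; pixels that lie on the same line vote for the same (rho, theta) cell, which is why peaks correspond to lines.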
Our lane marker detection and tracking subsystem uses a feedback loop to further refine the lane marker definitions. We postprocess the Hough Transform output, using line segment correction to deal with image boundary outliers, and then compute the Hough lines. The Hough Lines block in the Video and Image Processing Blockset finds the Cartesian coordinates of line endpoints by locating the intersections between the lines, characterized by the theta and rho parameters, and the boundaries of the reference image.

The subsystem then uses the computed endpoints to draw a polygon and reconstructs the image. The sides of the polygon correspond to the detected lanes, and the polygon is overlaid onto the original video. We simulate the model to verify the lane detection and tracking design (Figure 3).

Figure 3. Lane tracking simulation results, with a trapezoidal figure marking the lanes in the video image.

Converting the Design from Floating Point to Fixed Point

To implement this system on a fixed-point processor, we convert the algorithm to use fixed-point data types. In a traditional design flow based on C programming, this conversion would require major code modification. Conversion of the Simulink model involves three basic steps:

1. Change the source block output data types. During automatic data type propagation, Simulink displays messages indicating the need to change block parameters to ensure data type consistency in the model.
2. Set the fixed-point attributes of the accumulators and product outputs using Simulink Fixed Point tools, such as Min-max and Overflow logging.
3. Examine blocks whose parameters are sensitive to the pixel values to ensure that these parameters are consistent with the input signal data type. (The interpretation of pixel values depends on the data type. For example, the maximum intensity of a pixel is denoted by a value of 1 in floating point and by a value of 255 in an unsigned 8-bit integer representation.)

Figure 4 shows the resulting fixed-point model.

Figure 4. Fixed-point model: main subsystem.

During simulation, the flexibility and generality provided by fixed-point operators, as they check for overflows and perform scaling and saturation, can cause a fixed-point model to run slower than a floating-point model. To speed up the simulation, we can run the fixed-point model in Accelerator mode. The Simulink Accelerator can substantially improve performance for larger Simulink models by generating C code for the model, compiling the code, and generating a single executable for the model that is customized to the model's particular configuration. In Accelerator mode, the simulation for the fixed-point model runs at the speed of compiled C code.

Implementing and Verifying the Application on TI Hardware

Using Real-Time Workshop® and Real-Time Workshop Embedded Coder, we automatically generate code and implement our embedded video application on a TI C6400™ processor using the Embedded Target for TI C6000™ DSP. To verify that the implementation meets the original system specifications, we use Link for Code Composer Studio™ to perform real-time hardware-in-the-loop validation and visualization of the embedded application.

Before implementing our design on a TI C6416 DSK evaluation board, we must convert the fixed-point, target-independent model to a target-specific model. For this task we use Real-Time Data eXchange (RTDX), a TI real-time communications protocol that enables the transfer of data to and from the host. RTDX blocks let us ensure that the same test bench used to validate the design in simulation is used in implementation.

Creating the target-specific model involves three steps:

1. Replace the source block of the target-independent model with the From RTDX block and set its parameters.
2. Replace the Video Viewer block of the target-independent model with the To RTDX block and set its parameters.
3. Set up Real-Time Workshop target-specific preferences by dragging a block specific to our target board from the C6000 Target Preferences library into the model.

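The pixel-scaling caveat from the floating-point-to-fixed-point conversion is easy to see in a short Python sketch; this is our own illustration, not part of the Simulink workflow, where Simulink Fixed Point tools handle these details. Full intensity is 1.0 in floating point but 255 in an unsigned 8-bit integer, and out-of-range results must saturate rather than wrap.

```python
import numpy as np

def quantize_pixels(img_float):
    """Map floating-point intensities in [0.0, 1.0] to uint8 in [0, 255],
    saturating out-of-range values instead of letting them wrap around."""
    scaled = np.round(img_float * 255.0)
    return np.clip(scaled, 0, 255).astype(np.uint8)

pixels = np.array([0.0, 0.5, 1.0, 1.2])  # 1.2 models an overflow candidate
q = quantize_pixels(pixels)
# Without the clip, 1.2 * 255 = 306 would wrap to 50 in unsigned 8-bit arithmetic.
```

Wrap-around of this kind is exactly what overflow logging is meant to surface during the conversion.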
Figure 5 shows the resulting target-specific model.

Figure 5. The TI C6416 DSK block automatically sets up all Real-Time Workshop targeting parameters based on the configuration of the TI board and Code Composer Studio installed locally.

To automate the process of building the application and verifying accurate real-time behavior on the hardware, we create a script, using Link for Code Composer Studio to perform the following tasks:

1. Invoke the Link for Code Composer Studio IDE to automatically generate the Link for Code Composer Studio project.
2. Compile and link the generated code from the model.
3. Load the code onto the target.
4. Run the code: send the video signal to the target-specific model from the same input file used in simulation, and retrieve the processed video output from the DSP.
5. Plot and visualize the results in a MATLAB figure window.

Figure 6 shows the script used to automate embedded software verification for TI DSPs from MATLAB. Link for Code Composer Studio provides several functions that can be invoked from MATLAB to parameterize and automate the test scripts for embedded software verification.

Figure 6. Link for Code Composer Studio IDE script.

Figure 7 shows the results of the automatically generated code executing on the target DSP. We observe that the application running on the target hardware properly detects the lane markers, and we verify that the application meets the requirements of the original model.

Figure 7. Automatically generated code executing on the target DSP verifies that the application correctly detects the lane markers.

After running our application on the target, we may find that our algorithm does not meet the real-time hardware requirements. In Model-Based Design, simulation and code generation are based on the same model, and so we can quickly conduct multiple iterations to optimize the design. For example, we can use the profiling capabilities in Link for Code Composer Studio to identify the most computation-intensive segments of our algorithm. Based on this analysis, we can change the model parameters, use a more efficient algorithm, or even replace the general-purpose blocks used in the model with target-optimized blocks supplied with the Embedded Target for TI C6000. Such design iterations help us optimize our application for the best deployment on the hardware target.

Resources

- Video and Image Processing Blockset: www.mathworks.com/res/viprocessing
- Model-Based Design for Signal Processing and Communication Systems: www.mathworks.com/res/dsp_comm
- Webinar: Using Simulink for Video and Image Processing: www.mathworks.com/res/vipwebinar
- Webinar: Design and Implementation of Video Applications on TI C6400 DSPs with Simulink: www.mathworks.com/res/videowebinar
- Book: Video Processing and Communications: www.mathworks.com/res/book2613
