H.264/H.265 Video Codec Unit v1.2 Solutions
LogiCORE IP Product Guide (PG252)
IP Facts
Core Specifics
Simulation: For supported simulators, see the Vivado Design Suite User Guide: Release Notes, Installation, and Licensing (UG973).
Support
1. For a complete list of supported devices, see the AMD Vivado™ IP catalog.
2. For the supported versions of the tools, see the Vivado Design Suite User Guide: Release Notes, Installation, and
Licensing (UG973).
Features
The features of the AMD LogiCORE IP H.264/H.265 Video Codec Unit (VCU) core for Zynq UltraScale+ MPSoC devices are as
follows:
HEVC
Main, Main Intra, Main10, Main10 Intra, Main 4:2:2 10, Main 4:2:2 10 Intra up to Level 5.1 High Tier
AVC
Baseline, Main, High, High10, High 4:2:2, High10 Intra, High 4:2:2 Intra up to Level 5.2
Supports simultaneous encoding and decoding of up to 32 streams with a maximum aggregated bandwidth of 3840x2160@60fps
Low latency rate control
Flexible rate control: CBR, VBR, and Constant QP
Supports simultaneous encoding and decoding of two video streams at up to 4K UHD resolution at 60 Hz.
✎ Note: 4K (3840x2160) and lower resolutions are supported in all speed grades. However, 4K DCI (4096x2160) requires a -2 or -3 speed grade.
Supports 8K UHD at reduced frame rate (~15 Hz)
Progressive support for H.264 and H.265; interlace-alternate mode support for H.265
Video input:
Semi-planar formats of YCbCr 4:2:2, YCbCr 4:2:0, and Y-only (monochrome)
8-bit and 10-bit per color channel
AMD Adaptive Computing documentation is organized around a set of standard design processes to help you find relevant content for your current development task. You can access the AMD Zynq™ UltraScale+™ MPSoC design processes on the Design Hubs page. You can also use the Design Flow Assistant to better understand the design flows and find content that is specific to your intended design needs.
Overview
The AMD LogiCORE™ IP H.264/H.265 Video Codec Unit (VCU) core supports multi-standard video encoding and decoding, including support for the High Efficiency Video Coding (HEVC) H.265 and Advanced Video Coding (AVC) H.264 standards. The unit contains both encode (compress) and decode (decompress) functions, and is capable of simultaneous encode and decode.
The VCU is an integrated block in the programmable logic (PL) of selected AMD Zynq™ UltraScale+™ MPSoCs with no direct connections to the processing system (PS), and contains encoder and decoder interfaces. The VCU also contains additional functions that facilitate the interface between the VCU and the PL. VCU operation requires the application processing unit (APU).
Applications
The VCU core is dedicated circuitry located in the PL to enable maximum flexibility for a wide selection of use cases, with memory bandwidth being a key driver. Whether the application requires simultaneous 4K UHD at 60 Hz encoding and decoding or a single SD stream to be processed, a system design and memory topology can be implemented that balances performance, optimization, and integration for the specific use case. The following figure shows a use case example where the VCU core works with the PS and the PL DDR external memory.
License and Ordering
This AMD LogiCORE™ IP module is provided at no additional cost with the AMD Vivado™ Design Suite under the terms of the
End User License.
For more information, visit the Zynq UltraScale+ MPSoC product page.
Implementation of H.264 or H.265 video compression standards can require a license from third parties as well as the payment
of royalties; further information can be obtained from individual patent holders and industry consortia such as MPEG LA and
HEVC Advance.
Information about other AMD LogiCORE™ IP modules is available at the Intellectual Property page. For information about pricing
and availability of other AMD LogiCORE IP modules and tools, contact your local sales representative.
Product Specification
Standards
The Encoder and Decoder blocks are compatible with the following standards:
ISO/IEC 23008-2:2017, Information technology — High efficiency coding and media delivery in heterogeneous environments — Part 2: High efficiency video coding
ISO/IEC 14496-10, Information technology — Coding of audio-visual objects — Part 10: Advanced video coding
Performance
The following sections detail the performance characteristics of the H.264/H.265 Video Codec Unit.
Maximum Frequencies
The typical clock frequencies for the target devices are described in the Zynq UltraScale+ MPSoC Data Sheet: DC and AC Switching Characteristics (DS925). The maximum achievable clock frequency of the system can vary. The maximum achievable clock frequency and all resource counts can be affected by tool options, additional logic in the device, the version of the AMD tools used, and other factors.
Throughput
The VCU supports simultaneous encoding and decoding up to 4K UHD resolution at 60 Hz. This throughput can be a single
stream at 4K UHD or can be divided into up to 32 smaller streams of up to 480p at 30 Hz. Several combinations of one to 32
streams can be supported with different resolutions provided the cumulative throughput does not exceed 4K UHD at 60 Hz.
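The cumulative-throughput rule can be sketched as a simple pixel-rate budget check. This is an illustrative sketch only: the pure pixel-rate model and the function names are assumptions, not from this guide, and the VCU firmware and GUI make the real feasibility decision.

```python
# Illustrative sketch: treat the VCU throughput limit as a pixel-rate budget
# equal to 4K UHD at 60 Hz, and check stream mixes against it.
# Assumption (not from the guide): feasibility is modeled purely as pixel rate.

BUDGET_PIXELS_PER_SEC = 3840 * 2160 * 60  # 4K UHD at 60 Hz

def fits_budget(streams):
    """streams: iterable of (width, height, fps) tuples."""
    total = sum(w * h * fps for (w, h, fps) in streams)
    return total <= BUDGET_PIXELS_PER_SEC

print(fits_budget([(3840, 2160, 60)]))                     # one 4K60 stream: True
print(fits_budget([(720, 480, 30)] * 32))                  # 32 x 480p30: True
print(fits_budget([(3840, 2160, 60), (1920, 1080, 60)]))   # over budget: False
```
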
Resource Utilization
Streams of 4K UHD at 60 Hz consume significant amounts of the bandwidth of the external memory interfaces and significant
amounts of the Arm® AMBA® AXI4 bus bandwidth between the Processing System and the Programmable Logic.
For simultaneous encoder and decoder operation (including transcode use cases), consider using both a PS memory controller and a dedicated VCU memory controller.
F8 Muxes (57600) 0 0 12
DSPs (1728) 0 0 0
HPIOBDIFFINBUF (192) 0 0 0
BITSLICE_RX_TX (416) 0 0 0
PLL (16) 0 0 0
MMCM (8) 0 0 0
Related Information
DDR Memory Footprint Requirements
Zynq UltraScale+ EV Architecture Video Codec Unit DDR4 LogiCORE IP v1.1
Core Interfaces
Port Descriptions
The VCU core top-level signaling interface is shown in the following figure.
M_AXI_ENC0: Memory mapped AXI4 master interface. 128-bit memory mapped interface for the Encoder block.
M_AXI_ENC1: Memory mapped AXI4 master interface. 128-bit memory mapped interface for the Encoder block.
M_AXI_DEC0: Memory mapped AXI4 master interface. 128-bit memory mapped interface for the Decoder block.
M_AXI_DEC1: Memory mapped AXI4 master interface. 128-bit memory mapped interface for the Decoder block.
M_AXI_MCU: Memory mapped AXI4 master interface. 32-bit memory mapped interface for the MCU.
S_AXI_LITE: Memory mapped AXI4-Lite slave interface. AXI4-Lite memory mapped interface for external master access.
Related Information
Overview
The following table summarizes the signals that are shared by, or are not part of, the dedicated AXI4 interfaces.
vcu_host_interrupt: Output. Active-High interrupt output from the VCU. Can be mapped to a PL-PS interrupt pin.
Register Space
The Zynq UltraScale+ MPSoC VCU soft IP implements registers in the programmable logic. The following table summarizes the
soft IP registers. These registers are accessible from the PS through the AXI4-Lite bus.
Soft IP Registers
VCU_ENCODER_ENABLE
Table: VCU_ENCODER_ENABLE
VCU_DECODER_ENABLE
Table: VCU_DECODER_ENABLE
VCU_MEMORY_DEPTH
Table: VCU_MEMORY_DEPTH
VCU_ENC_COLOR_DEPTH
Table: VCU_ENC_COLOR_DEPTH
VCU_ENC_VERTICAL_RANGE
Table: VCU_ENC_VERTICAL_RANGE
VCU_ENC_FRAME_SIZE_X
Table: VCU_ENC_FRAME_SIZE_X
VCU_ENC_FRAME_SIZE_Y
Table: VCU_ENC_FRAME_SIZE_Y
VCU_ENC_COLOR_FORMAT
Table: VCU_ENC_COLOR_FORMAT
VCU_ENC_FPS
Table: VCU_ENC_FPS
VCU_ENC_VIDEO_STANDARD
Table: VCU_ENC_VIDEO_STANDARD
VCU_ENC_VIDEO_STANDARD
0 RO 0x0 0 = H.265 (HEVC)
1 = H.264 (AVC)
VCU_STATUS
Table: VCU_STATUS
VCU_DEC_VIDEO_STANDARD
Table: VCU_DEC_VIDEO_STANDARD
VCU_DEC_VIDEO_STANDARD
0 RO 0x0 0 = H.265 (HEVC)
1 = H.264 (AVC)
VCU_DEC_FRAME_SIZE_X
Table: VCU_DEC_FRAME_SIZE_X
VCU_DEC_FRAME_SIZE_Y
Table: VCU_DEC_FRAME_SIZE_Y
VCU_DEC_FPS
Table: VCU_DEC_FPS
VCU_BUFFER_B_FRAME
Table: VCU_BUFFER_B_FRAME
ENC_NUM_CORE
Table: ENC_NUM_CORE
VCU_PLL_CLK_HI
Table: VCU_PLL_CLK_HI
VCU_PLL_CLK_LO
Table: VCU_PLL_CLK_LO
VCU_GASKET_INIT
Table: VCU_GASKET_INIT
Bit 1: Set to 0 to assert reset to the VCU. Software must de-assert it to 1 to bring the VCU out of reset.
Bit 0: Set to 1 to remove gasket isolation after VCCINT_VCU fully ramps, VCCAUX fully ramps, and the PL is programmed.
✎ Note: The read value of VCU_GASKET_INIT[0] is the inverse of the write value.
Core Architecture
Encoder Block
The encoder block of the Video Codec Unit (VCU) core is a video encoder engine for processing video streams using the H.265
(ISO/IEC 23008-2 high-efficiency video coding) and H.264 (ISO/IEC 14496-10 advanced video coding) standards. It provides
complete support for these standards, including support for 8-bit and 10-bit color depth, 4:2:0, 4:2:2, and 4:0:0 Chroma formats
and up to 4K UHD at 60 Hz performance.
The VCU encoder block is shown in the following figure.
Features
Performance
Profiles
AVC: Baseline, Main, High, Progressive High, Constrained High (subsets of the High profile), High 10, High 4:2:2, High 10 Intra, High 4:2:2 Intra
HEVC: Main, Main Intra, Main 10, Main 10 Intra, Main 4:2:2 10, Main 4:2:2 10 Intra
32 streams at 720×480p at 30 Hz
Eight streams at 1920×1080p at 30 Hz
Four streams at 1920×1080p at 60 Hz
Two streams at 3840×2160p at 30 Hz
One stream at 3840×2160p at 60 Hz
One stream at 7680×4320p at 15 Hz
Coding Tools
Prediction size: down to 4×4 for intra prediction and down to 8×8 for inter prediction (AVC and HEVC).
Intra prediction modes: AVC supports all intra 4×4, intra 8×8, and intra 16×16 modes; HEVC supports all 33 directional modes, planar, and DC.
Motion vector prediction modes: AVC supports all motion vector prediction modes except spatial direct mode and direct_8×8_inference_flag=0; HEVC supports all motion vector prediction/merge/skip modes.
The following table summarizes the maximum bit rate achievable for different profile/level combinations.
Functional Description
The following figure shows the top-level interfaces and detailed architecture of the encoder block.
✎ Note: The AXI4 master interface from the MCU is multiplexed with the corresponding AXI4 master interface from the Decoder. The multiplexer output is available at the embedded VCU.
The encoder block includes the compression engines, control registers, an interrupt controller, and an optional encoder
buffer with a memory controller. The encoder buffer is connected to UltraRAM or block RAM in the programmable logic
and enabled using registers.
The encoder block is controlled by a microcontroller unit (MCU) subsystem, including a 32-bit MCU with a 32 KB instruction cache, a 4 KB data cache, and a 32 KB local SRAM.
A 32-bit AXI4-Lite slave interface is used by the APU to control the MCU for the configuration of encoder parameters, to
start/stop processing, to get status and to get results.
Two 128-bit AXI4 master interfaces are used to fetch video input data, load and store intermediate data, and store compressed data back to memory.
A 32-bit AXI4 master interface is used to fetch the MCU software and load/store additional MCU data.
The VCU control software can change encoding parameters and even change between H.264 and H.265 encoding dynamically;
however, the available memory and bandwidth must be selected to support the worst case needed by the application. Use the
VCU GUI to explore bandwidth requirements.
Applications that use the encoder block must connect all encoder ports (ports with names beginning with m_axi_enc). The
following table shows the list of ports/interfaces of the top-level encoder block.
VCU-Interrupt (s_axi_lite_aclk)
vcu_pl_enc_awqos0 4 Output AXI Master write QOS signal for interface 0, controlled
from SLCR
vcu_pl_enc_arqos0 4 Output AXI Master read QOS signal for interface 0, controlled
from SLCR
vcu_pl_enc_awcache0 4 Output AXI Master write cache signal for interface 0, controlled
from SLCR
vcu_pl_enc_arcache0 4 Output AXI Master read cache signal for interface 0, controlled
from SLCR
vcu_pl_enc_awqos1 4 Output AXI Master write QOS signal for interface 1, controlled
from SLCR
vcu_pl_enc_arqos1 4 Output AXI Master read QOS signal for interface 1, controlled
from SLCR
vcu_pl_enc_awcache1 4 Output AXI Master write cache signal for interface 1, controlled
from SLCR
vcu_pl_enc_arcache1 4 Output AXI Master read cache signal for interface 1, controlled
from SLCR
VCU Encoder: 32-bit AXI Master MCU Instruction and Data Cache Interface
pl_vcu_mcu_m_axi_ic_dc_awready 1 Input AXI Master write address ready signal for MCU
vcu_pl_mcu_m_axi_ic_dc_awsize 3 Output AXI Master write burst size signal for MCU
vcu_pl_mcu_m_axi_ic_dc_awvalid 1 Output AXI Master write address valid signal for MCU
vcu_pl_mcu_m_axi_ic_dc_bready 1 Output AXI Master write response ready signal for MCU
pl_vcu_mcu_m_axi_ic_dc_bvalid 1 Input AXI Master write response valid signal for MCU
Clocking
Reset
The Encoder block includes an embedded MCU that runs the MCU firmware and controls the hardware Encoder core. Refer to
Microcontroller Unit Overview for more information on the MCU.
Data Path
The encoder block has two 128-bit AXI4 master interfaces to fetch video data from external DDR memory attached to either the
Processing System (PS) or the Programmable Logic (PL).
The data fetched from memory includes:
Control Path
The MCU slave interface is accessed once per frame by the APU, which sends a frame-level command to the IP core. This interface does not require a fast data path. Interrupts are triggered at the frame level to wake up the APU at the end of each frame's processing. These commands are processed by the embedded MCU, which generates slice- and tile-level commands to the video encoder hardware. For more information, refer to Microcontroller Unit Overview.
Encoder Buffer
The buffer memory controller of the encoder block manages the read and write access to the encoder buffer, which stores pixel
data from the reference frames. It pre-fetches data blocks from the reference frames in the system memory and stores them in
the encoder buffer. The encoder buffer stores Luma and Chroma pixels from the reference frames so that they are present in the
buffer when needed by the encoder. The encoder buffer must be one contiguous memory access (CMA) buffer and should be
aligned to a 32-byte boundary. Refer to the Zynq UltraScale+ MPSoC Data Sheet: Overview (DS891) to see the device memory
available per EV device.
Calculate the total system bandwidth, derated for the memory controller efficiency of the required access pattern. Enable the encoder buffer if the calculated bandwidth is insufficient.
The optional encoder buffer can be used to reduce the memory bandwidth. This option can slightly reduce the video quality. See
the CacheLevel2 in Table 1 for more information. Aside from the size, there are no user controls for tuning the Encoder buffer
usage.
✎ Note: To enable the Encoder buffer, pass the prefetch-buffer parameter into the GStreamer pipeline that uses the hardware.
For example:
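A minimal illustrative pipeline is shown below. Treat it as a sketch: the source, caps, and sink are placeholders, and the element and property names (omxh265enc, prefetch-buffer) should be verified against the VCU GStreamer plugins in your release.

```shell
# Illustrative only: enable the encoder (prefetch) buffer by setting
# prefetch-buffer=true on the VCU encoder element.
gst-launch-1.0 videotestsrc num-buffers=300 \
  ! video/x-raw,width=3840,height=2160,format=NV12,framerate=60/1 \
  ! omxh265enc prefetch-buffer=true \
  ! fakesink
```
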
✎ Note: To find the most precise frame buffer size, use the Advanced Configuration > Resource Summary menu option in the VCU GUI.
The encoder input and output requirements are shown in the following table.
Input Buffer: Contiguous: Yes; Alignment: 32
Output Buffer: Contiguous: Yes; Alignment: 32
✎ Note: Because the VCU uses multiple internal encoder engines, it is not possible to reduce the output buffer requirements.
Memory Requirements
The VCU software reads the total available encoder buffer size from LogiCORE registers (the values can be in the GUI, after the
settings) along with the value of maximum number of cores based on the settings you select (resolution and fps). The memory
allocated by the software is calculated using the total encoder buffer size divided by the maximum number of cores. If this
value is inadequate, no channel is created.
✎ Note: The 4kp30 encoder buffer requirement is not half of the 4kp60 requirement.
The 4kp30 case requires less encoder buffer space than 4kp60 because the two cases use two cores and four cores, respectively: two tiles are encoded in parallel for 4kp30, whereas four tiles are encoded in parallel for 4kp60. To allocate the encoder buffer, the most demanding use case is entered in the Vivado IDE, which computes and provides the maximum number of cores used in that use case (four for 4K60) to the driver. For each core, the firmware allocates a static encoder buffer size equal to the total size divided by the maximum number of cores. When a channel is started, the firmware computes the required number of cores (for example, two cores for 4K30) and tries to use the available encoder buffer size for these cores.
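The static split described above can be sketched as follows. This is an illustrative model: the function and parameter names are invented for the example, and the real computation is performed by the Vivado IDE and the VCU firmware.

```python
# Illustrative model of the static encoder buffer split: the total buffer is
# divided by the design's maximum core count, and a channel only sees the
# slices belonging to the cores it uses.

def channel_created(total_kb, max_cores, channel_cores, required_kb):
    per_core_kb = total_kb / max_cores          # static, fixed at design time
    available_kb = per_core_kb * channel_cores  # buffer visible to the channel
    return available_kb >= required_kb

# Design entered for 4Kp60: 696 KB total, max-number-of-cores = 4.
print(channel_created(696, 4, 4, 696))  # 4Kp60 channel: True
print(channel_created(696, 4, 2, 504))  # 4Kp30 (2 cores, needs 504 KB): False
print(channel_created(696, 4, 4, 504))  # 4Kp30 with num-cores forced to 4: True
```
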
The following example uses these values: HEVC, 4:2:0 8-bit, B-frames, low MV range, and a single stream. The encoder buffer requirement is 696 KB for 4kp60 and 504 KB for 4kp30.
Set the GUI settings to: 4:2:0 8-bit, B-frames, low MV range, and 4Kp60. The VCU software gets an available encoder buffer size of 696 KB and max-number-of-cores = 4.
Running the 4kp30 use case with the same design might not work. A channel creation error can occur because 4kp30 requires two encoder cores; the available encoder buffer size is therefore calculated as 696/2 = 348 KB, but 348 KB is not enough to run the 4kp30 use case.
The same use case works if you override the setting number-cores=4 in the software or the application, because the encoder buffer attached to each core is then used.
Alternatively, two 4Kp30 streams can be defined in the GUI as the case with the highest usage. The number of cores for each 4Kp30 must then be forced to four (this forces time sharing of the four cores and changes the scheduling of the two channels while reducing the total amount of encoder buffer).
The first option is preferred to avoid exposing core management and to avoid additional encoding constraints. The second
option might be preferred if the PL memory optimization is an important requirement. The following table shows the possible
combinations:
Table: HEVC 4:2:0, 8-bit Depth, with B-Frames enabled, Low MV Range
The automatic computation of the required size becomes cumbersome if you want to support multiple use cases that do not have the same max-number-of-cores, but in basic cases (for example, 4x1080p60, 2x4k30, or 1x4k60), the worst-case encoder buffer size and the worst-case max-number-of-cores must be provided. Choose a multi-stream use case in the GUI to avoid such failures.
✎ Note: You cannot set the value of num-cores to the maximum in sub-frame latency mode. It is recommended to leave the num-cores calculation to the VCU firmware (Auto) and adjust the GUI settings to support multiple use cases.
The worst-case memory footprint depends on the following factors:
Video resolution
Chroma sub-sampling
Color depth
Coding standard
The following table shows the worst-case memory footprint for various encoding schemes.
The following table contains theoretical contiguous memory access (CMA) buffer requirements for the VCU encoder/decoder based on resolution and format. The sizes below correspond to one instance of the encoder or decoder; multiply them by the number of streams for multistream use cases. Other elements such as kmssink/v4l2src typically increase the CMA requirements by an additional 10 to 15%.
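The scaling just described can be sketched with a small helper. The helper name and the exact overhead factor are assumptions for illustration only.

```python
# Illustrative: scale a single-instance CMA figure by the number of streams,
# then add the 10-15% overhead typical of elements such as kmssink/v4l2src.

def total_cma_mb(per_instance_mb, num_streams, overhead=0.15):
    return per_instance_mb * num_streams * (1.0 + overhead)

# Four decoder instances at 54 MB each, with 15% pipeline overhead:
print(total_cma_mb(54, 4))  # approximately 248.4 MB
```
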
3840×2160 (MB) 3 294 199 248 155 243 151 208 117
1920×1080 (MB) 54 52 42 40 42 40 32 31
1280×720 (MB) 27 26 21 20 20 19 17 16
1. AVC requires extra intermediate buffers when it is using multiple cores. Unlike HEVC, the AVC standard does not support tile processing, so to exploit data-processing parallelism it requires two intermediate buffers.
2. The VCU AVC encoder uses multiple cores when the resolution is >= 1080p60, which is the reason for the ~100 MB delta between the HEVC and AVC CMA requirements for 3840x2160.
3. Includes memory for two intermediate buffers.
An approximate formula for each buffer type can be used to derive the total memory requirement per use case.
Source Frame: height x width x bit depth (10-bit = 4/3, 8-bit = 1) x chroma format (4:2:2 = 2, 4:2:0 = 1.5, 4:0:0 = 1)
Reference Frame: height x width x bit depth (10-bit = 10/8, 8-bit = 1) x chroma format (4:2:2 = 2, 4:2:0 = 1.5, 4:0:0 = 1)
Reconstructed Frame: height x width x bit depth (10-bit = 10/8, 8-bit = 1) x chroma format (4:2:2 = 2, 4:2:0 = 1.5, 4:0:0 = 1)
Intermediate Buffers: height/16 x width/16 x 1328 (required by AVC when using multiple cores)
Motion Vector Buffer: height/16 x width/16 x codec factor (AVC = 32, HEVC = 16)
Bitstream Buffer: height x width x bit depth (10-bit = 10/8, 8-bit = 1) x chroma format (4:2:2 = 2, 4:2:0 = 1.5, 4:0:0 = 1) / codec factor (AVC = 2, HEVC = 4)
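As a worked sketch, the formulas above can be transcribed directly. Sizes are in bytes; the ceiling rounding applied to the height/16 and width/16 terms is an assumption, because the guide does not state how fractional macroblock counts are rounded.

```python
import math

# Transcription of the per-buffer formulas above; all results are in bytes.
BIT_DEPTH = {8: 1.0, 10: 10 / 8}     # reference, reconstructed, bitstream
SRC_BIT_DEPTH = {8: 1.0, 10: 4 / 3}  # source frame uses 4/3 for 10-bit
CHROMA = {"4:2:2": 2.0, "4:2:0": 1.5, "4:0:0": 1.0}

def source_frame(width, height, depth, chroma):
    return width * height * SRC_BIT_DEPTH[depth] * CHROMA[chroma]

def reference_frame(width, height, depth, chroma):
    return width * height * BIT_DEPTH[depth] * CHROMA[chroma]

def motion_vector_buffer(width, height, codec):
    factor = 32 if codec == "AVC" else 16  # HEVC = 16
    return math.ceil(height / 16) * math.ceil(width / 16) * factor

def bitstream_buffer(width, height, depth, chroma, codec):
    divisor = 2 if codec == "AVC" else 4   # HEVC = 4
    return width * height * BIT_DEPTH[depth] * CHROMA[chroma] / divisor

# 4K UHD, 8-bit, 4:2:0 source frame: 3840 x 2160 x 1 x 1.5 bytes
print(source_frame(3840, 2160, 8, "4:2:0"))       # 12441600.0 (about 11.9 MB)
print(motion_vector_buffer(3840, 2160, "HEVC"))   # 518400
```
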
The following example is for a multistream use case: 4 1080p30 AVC, 4:2:2 chroma format, 10-bit depth.
Buffer | Count | Per Buffer | Total (4 streams)
Reference Frame | 3 | 15 MB | 60 MB
Reconstructed Frame | 1 | 5 MB | 20 MB
Intermediate Buffers | 2 | 0 MB | 0 MB
Bitstream Buffer | 2 | 5 MB | 20 MB
Other Buffers | 1 | 1 MB | 4 MB
Total | | 54 MB | 217 MB
Memory Bandwidth
AMD recommends using the fastest DDR4 memory interface possible. Specifically, the 8x8-bit memory interface is more efficient than the 4x16-bit memory interface because the x8 mode has four bank groups whereas the x16 mode has only two, and DDR4 allows for simultaneous bank group access. For more information, see Answer Record 71209.
The source frame buffer contains the input frame pixels. It contains two parts: luminance pixels (Luma) followed by
chrominance pixels (Chroma). Luma pixels are stored in pixel raster scan order, shown in the following figure. Chroma pixels are
stored in an U/V-interleaved pixel raster scan order. Therefore, the Chroma portion is half the size of the Luma portion when
using a 4:2:0 format and the same size as the Luma portion when using a 4:2:2 format. The encoder picture buffer must be one
contiguous memory region.
✎ Note: The VCU accepts semi-planar data.
Two packing formats are supported in external memory: 8 bits per component or 10 bits per component, shown in the following
tables. The 8-bit format can only be used for an 8-bit component depth and the 10-bit format can only be used for a 10-bit
component depth.
✎ Note: Encoder input buffer width and height must be multiples of 32. The decoder output buffer width is a multiple of 256 bytes and the height is a multiple of 64. For example, for 1920x1080 resolution, the decoder output is 2048x1088.
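The alignment rules in the note can be sketched as follows. The helper names are invented for illustration, and the 256-byte width pitch equals 256 pixels only for 8-bit semi-planar data, which this sketch assumes.

```python
# Illustrative sketch of the note's alignment rules for 8-bit data:
# encoder input dimensions round up to multiples of 32; decoder output
# width pads to a multiple of 256 and height to a multiple of 64.

def align_up(value, multiple):
    return -(-value // multiple) * multiple  # ceiling to the next multiple

def decoder_output_dims(width, height):
    return align_up(width, 256), align_up(height, 64)

print(decoder_output_dims(1920, 1080))  # (2048, 1088), as in the example
print(align_up(1080, 32))               # 1088: aligned encoder input height
```
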
The following table lists the encoder block registers. For additional information, see Zynq UltraScale+ Device Register Reference
(UG1087).
1. Mixed registers are registers that have read only, write only, and read write bits grouped together.
The quality of encoded video is a function of the target bitrate and the type of video content. Several encoder parameters can be used to adjust the encoder quality, such as:
The number of B-frames (Gop.NumB), which can be adjusted according to the amount of motion; for example, increase it to two for static scenes or video conferencing, or reduce it to zero for sequences with a lot of motion or high frame rates.
The VBR rate control mode can improve the average quality when some parts of the sequence have lower complexity or
motion.
For video conferencing, or when random access is not needed, you can replace the IPP... GOP with the LOW_DELAY_P GOP and optionally enable GDR intra refresh.
If there are frequent scene changes, the ScnChgResilience setting can be enabled to reduce artifacts following scene
change transitions.
If scene changes can be detected by the system, the encoder's scene change signaling API should be called instead (with
ScnChgResilience disabled, for example) for the encoder to dynamically adapt the encoding parameters and GOP pattern.
The scene change information can be provided in a separate input file (CmdFile) when using the control software test
application.
If the highest PSNR figures are targeted instead of subjective quality, it is recommended to set QPCtrlMode = UNIFORM_QP and ScalingList = FLAT.
If the target bitrate is too low for complex video scenes, I-frame beating or a sweeping low-quality line (in GDR mode) becomes noticeable. Increase the target bitrate to avoid such visual artifacts.
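As an illustration, the tuning guidance above maps onto control-software style configuration keys roughly as follows. The section and key names mirror the pattern used by the VCU control software test application, but treat the exact spellings as assumptions and verify them against the configuration reference for your release.

```ini
# Illustrative fragment only; section and key spellings are assumptions.
[GOP]
GopCtrlMode = LOW_DELAY_P   # replaces the IPP... GOP when random access is not needed
Gop.NumB    = 0             # reduce B-frames for high-motion or high frame-rate content

[RATE_CONTROL]
RateCtrlMode = VBR          # can improve average quality for mixed-complexity content

[SETTINGS]
QPCtrlMode  = UNIFORM_QP    # favor PSNR figures over subjective quality
ScalingList = FLAT
```
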
Decoder Block
The decoder block is designed to process video streams using the H.265 (HEVC) and H.264 (AVC) standards. It provides complete support for these standards, including support for 8-bit and 10-bit color depth, 4:0:0, 4:2:0, and 4:2:2 Chroma formats, and up to 4K UHD at 60 Hz performance.
The decoder block efficiently performs video decompression.
The IP hardware has direct access to the system data bus through a high-bandwidth master interface to transfer video data to and from external memory.
The IP control software is partitioned into two layers. The VCU Control Software runs on the APU while the MCU firmware runs
on an MCU, which is embedded in the hardware IP. The APU communicates with the embedded MCU through a slave interface,
also connected to the system bus. The IP hardware is controlled by the embedded MCU using a register map to set decoding
parameters through an internal peripheral bus.
The VCU decoder block is shown in the following figure.
Features
Performance
Profiles
AVC: Baseline (except FMO/ASO/RS), Constrained Baseline, Main, High, Progressive High, Constrained High (subsets of the High profile), High 10, High 4:2:2
HEVC: Main, Main Intra, Main 10, Main 10 Intra, Main 4:2:2 10, Main 4:2:2 10 Intra
32 streams at 720x480p at 30 Hz
Eight streams at 1920x1080p at 30 Hz
Four streams at 1920x1080p at 60 Hz
Two streams at 3840x2160p at 30 Hz
One stream at 3840x2160p at 60 Hz
One stream at 7680x4320p at 15 Hz
Picture width and height multiple of eight Minimum size: 80×96 Minimum size: 128×128
Maximum width or height: 8192 Maximum width: 8,192 Maximum width: 8184
Maximum picture size of 33.5 MP Maximum height: 8,192 (limited to 4,096 in level
4/4.1 or when WPP is
enabled)
Maximum height: 8,192
Coding Tools
1. Support of 8K15 uses a subset of Level 6: maximum Luma picture size up to 2^25 samples; other constraints of Level 5.1 apply (e.g., a maximum of 200 slices and 11×10 tiles). WPP is not supported for widths above 4,096.
2. Support of 8K15 uses a subset of Level 6: maximum Luma picture size up to 2^25 samples; other constraints of Level 5.2 apply. The maximum slice size is 65,535 macroblocks, so a minimum of two balanced slices must be used above 4K size.
The following table describes the VCU decoder block maximum supported bit rates.
High 62.5
High 10 150
High 300
High 10 720
Main 4:2:2 10 84
Functional Description
The following figure shows the block diagram of the decoder block.
The decoder block includes the H.265/H.264 decompression engine, control registers, and an interrupt controller block. The decoder block is controlled by an MCU subsystem. A 32-bit AXI4-Lite slave interface is used by the system CPU to control the MCU, to configure decoder parameters, to start processing of video frames, and to get status and results. Two 128-bit AXI4 master interfaces are used to fetch video input data and store video output data from/to the system memory. An AXI4 master interface is used to fetch the MCU software and to load/store additional MCU data.
Applications that use the decoder must connect all the decoder ports (ports beginning with m_axi_dec). The following table
shows the decoder block AXI4 master interface ports.
Reset
Data Path
The master interface inputs several types of video data from external memory:
Bitstream
Reference frame pixels
Co-located picture motion vectors
Headers and residual data
Control Path
The VCU slave interface is accessed once per frame by the APU, which sends a frame-level command to the IP. This interface
therefore does not require a fast data path. An interrupt is generated on conclusion of each frame. These commands are
processed by the embedded MCU, which generates tile- and slice-level commands to the Decoder block hardware.
The Decoder input and output requirements are shown in the following table.
Input Buffer: Contiguous: No; Alignment: 0 (none)
Output Buffer: Alignment: 32
The memory footprint depends on the following factors:
Video resolution
Chroma sub-sampling
Color depth
Coding standard: H.264 or H.265
The following table contains theoretical contiguous memory access (CMA) buffer requirements for the VCU decoder based on resolution and format. The sizes below correspond to one instance of the decoder; multiply them by the number of streams for multistream use cases. Other elements such as kmssink/v4l2src typically increase the CMA requirements by an additional 10 to 15%.
3840×2160 (MB) 665 582 597 513 524 466 473 414
1920×1080 (MB) 258 214 232 190 208 167 188 148
Memory Bandwidth
The decoder memory bandwidth depends on frame rate, resolution, color depth, chroma format and Decoder profile. The
LogiCORE™ IP provides an estimate of decoder bandwidth based on the video parameters selected in the GUI.
AMD recommends using the fastest DDR4 memory interface possible. Specifically, the 8x8-bit memory interface is more efficient than the 4x16-bit memory interface because the x8 mode has four bank groups whereas the x16 mode has only two, and DDR4 allows for simultaneous bank group access. For more information, see Answer Record 71209.
Memory Format
The decoded picture buffer contains the decoded pixels. It contains two parts: luminance pixels (Luma) followed by
chrominance pixels (Chroma). Luma pixels are stored in pixel raster scan order. Chroma pixels are stored in U/V-interleaved
pixel raster scan order, hence the Chroma part is half the size of the Luma part when using a 4:2:0 format and the same size as
the Luma part when using a 4:2:2 format. The decoded picture buffer must be one contiguous memory region.
✎ Note: The decoder output buffer width is a multiple of 256 bytes and the height is a multiple of 64. For example, for 1920x1080 resolution, the decoder output is 2048x1088.
Two packing formats are supported in external memory: eight bits per component or 10 bits per component, shown in the
following tables, respectively. The 8-bit format can only be used for an 8-bit component depth and the 10-bit format can only be
used for a 10-bit component depth. The following tables show the raster scan format supported by the decoder block for 8-bit
and 10-bit color depth.
The frame buffer width (pitch) can be larger than the frame width so that there are (pitch - width) ignored values between
consecutive pixel lines.
VCU Decoder
The VCU encoded data and the decoded format are shown in the following table:
The following table lists the decoder block registers. For additional information, see the Zynq UltraScale+ Device Register
Reference (UG1087).
1. Mixed registers are registers that have read only, write only, and read write bits grouped together.
✎ Note: The VCU encoder and decoder output streams are stored in DDR memory (either PS or PL) and cannot be routed
directly from encoder to decoder and vice versa, so it is required to use DDR (PS or PL) with the VCU encoder and decoder.
Microcontroller Unit Overview
The VCU core includes two MCU subsystems that run the MCU firmware and control the encoder and decoder blocks. The encoder and decoder blocks each have their own MCU to execute the firmware. The MCU has a 32-bit RISC architecture capable of executing pipelined transactions. The MCU has internal instruction and data caches, and an AXI master interface to interface with the external memory.
Functional Description
The following figure shows the top-level interfaces and detailed architecture of the MCU.
The MCU interfaces to peripherals using a 32-bit AXI4-Lite master interface. It has a local memory bus and 32-bit AXI4 instruction and data cache interfaces.
The MCU block has a 32 KB local memory for internal operations that is shared with the CPU for boot and mailbox
communication. The MCU has a 32 KB instruction cache with 32-byte cache line width. It has a 4 KB data cache with 16-byte
cache line width. The data cache has a write-through cache implementation.
The following table shows the AXI4 instruction and data cache interface ports of MCU.
Control Flow
The MCU is kept in sleep mode after applying the reset until the firmware boot code is downloaded by the kernel device driver
into the internal memory of the MCU. After downloading the boot code and completing the MCU initialization sequence, the
control software communicates with the MCU using a mailbox mechanism implemented in the internal SRAM of the MCU. The
MCU sends an acknowledgment to the control software and performs the encoding/decoding operation. When the requested
operation is complete, the MCU communicates the status to the control software.
For more details about control software and MCU firmware, refer to Application Software Development.
The following table lists the MCU registers. For additional information, see the Zynq UltraScale+ Device Register Reference
(UG1087) .
1. Mixed registers are registers that have read-only, write-only, and read/write bits grouped together.
Overview
The AXI Performance Monitor (APM) is implemented inside the embedded Video Codec Unit (VCU). The VCU AXI Performance
Monitor (VAPM) allows access to system level behavior in a non-invasive way and without burdening the design with additional
soft IP.
The APM block is capable of measuring the number of read/write bytes and address based transactions within a measurement
window on the AXI master bus from Encoder/Decoder blocks. The APM can additionally measure master ID based read and
write latency within a measurement window. The APM supports cumulative latency value along with the number of outstanding
transfers being considered for latency measurement. The APM has the ability to interrupt the host processor when the status
registers are ready to be read.
Functional Description
The VAPM generates measurement parameters based on two user-selected operating modes.
Start/Stop Mode
In this mode, the measurement window is determined by the VCU_SLCR.APMn_TRG[start_stop] bit (n = 0, 1, 2, 3). A measurement is triggered when this bit transitions from 0 to 1 and stopped when it transitions from 1 to 0.
Timer Mode
In this mode, a 32-bit counter generates a fixed-length measurement window. The counter starts from the value specified in the VCU_SLCR.APMn_TIMER (n = 0, 1, 2, 3) register; when it reaches its maximum value, a capture pulse is generated to store the measured values in the VCU_SLCR result registers and the counter is reloaded from APMn_TIMER.
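Under one plausible reading of the timer-mode description (counter loaded from APMn_TIMER, counting up to 0xFFFFFFFF), the window length follows directly. Treat this sketch as illustrative and confirm the exact counter semantics against UG1087:

```c
#include <stdint.h>

/* Measurement-window length in clock cycles for the assumed counter
 * behavior: load APMn_TIMER, count up to the 32-bit maximum. */
static uint64_t apm_window_cycles(uint32_t timer_load)
{
    return 0x100000000ULL - timer_load;
}

/* Window duration in microseconds at a given AXI clock frequency (Hz). */
static double apm_window_us(uint32_t timer_load, double axi_clk_hz)
{
    return (double)apm_window_cycles(timer_load) * 1e6 / axi_clk_hz;
}
```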
The VAPM is capable of doing the following measurements:
Latency can be calculated on a per-transaction-ID basis; a single ID or all IDs can be selected for latency calculation. For additional information, see the Zynq UltraScale+ Device Register Reference (UG1087).
APM Registers
1. Mixed registers are registers that have read-only, write-only, and read/write bits grouped together.
The Video Codec Unit (VCU) core is a dedicated hardware block in the programmable logic (PL). All interfaces are connected
through AXI interconnect blocks in the PL. The VCU core is AXI4 compliant on its AXI master interfaces. It can be connected to
the S_AXI_HP_FPD (or S_AXI_LPD, S_AXI_HPC_FPD) ports of the PS or AXI compliant interface of the PL memory controller.
There are no direct (hardwired) connections from the VCU to the processing system (PS). HP0 to HP4 AXI ports are
recommended for PS-DDR video application.
Interrupts
There is one interrupt line from the VCU core to the PS (vcu_host_interrupt). This interrupt must be connected to either PL-PS-IRQ0[7:0] or PL-PS-IRQ1[7:0]. If there are other interrupts in the design, it must be concatenated with the other interrupts and then connected to the PS.
The Video Codec Unit (VCU) core supports one clocking topology, the internal phase locked loop (PLL). An internal VCU PLL
drives the high frequency core (667 MHz) and MCU (444 MHz) clocks based on an input reference clock from the
programmable logic (PL). The internal PLL generates a clock for the encoder and decoder blocks.
✎ Note: All AXI clocks are supplied with clocks from external PL sources. These clocks are asynchronous to core encoder and
decoder block clocks. The encoder and decoder blocks handle asynchronous clocking in the AXI ports.
The VCU core is reset under the following conditions:
Initially while the PL is in power-up/configuration mode, the VCU core is held in reset.
After the PL is fully configured, a PL based reset signal can be used to reset the VCU for initialization and bring-up.
Platform management unit (PMU) in the processing system (PS) can drive this reset signal to control the reset state of the
VCU.
During partial reconfiguration (PR), the VCU block is kept under reset if it is part of the dynamically reconfigurable module.
Functional Description
Clocking
The Decoder (VDEC) and Encoder (VENC) blocks work independently as separate units without any dependency on each other.
The following table describes the clock domains in VCU core.
Core clock (712 MHz): processing core, most of the logic and memories.
AXI master clock: 128-bit AXI master port for memory access, typically connected to a PS AFI-FM (HP) port or to a soft memory controller in the PL.
s_axi_lite_aclk (167 MHz): 32-bit AXI4-Lite slave port for register programming.
✎ Note: All AXI clocks are supplied with clocks from external PL sources. These clocks are asynchronous to core encoder,
decoder, and MCU clocks. The VENC and VDEC cores are designed to handle asynchronous clocking in the AXI ports. The
m_axi_mcu_aclk is asynchronous to all clocks used in VCU.
pll_ref_clk is sourced externally to the device, typically by a programmable clock integrated circuit.
Video encoder and decoder blocks work under the VENC_core_clk domain generated by the VCU PLL.
MCU for encoder and decoder work under the VENC_MCU_clk domain generated by the VCU PLL.
m_axi_enc_aclk is the AXI clock input from the PL for the 128-bit AXI master interfaces for the encoder.
m_axi_dec_aclk is the AXI clock input from the PL for the 128-bit AXI master interfaces for the decoder.
s_axi_lite_aclk is the AXI4-Lite clock from the PL.
m_axi_mcu_aclk is the MCU AXI master clock from the PL.
The following clock frequency requirements must be met while providing clocks from PL:
The AXI clock for encoder and decoder interface is limited to 333 MHz.
The following ratio requirements need to be met:
s_axi_lite_aclk ≤ 2 × m_axi_enc_aclk
s_axi_lite_aclk ≤ 2 × m_axi_dec_aclk
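These constraints can be captured in a small validity check (a sketch with frequencies in Hz; not part of any AMD tool flow):

```c
#include <stdbool.h>
#include <stdint.h>

/* Check the PL clock requirements listed above:
 * - encoder/decoder AXI clocks are limited to 333 MHz
 * - s_axi_lite_aclk <= 2 x m_axi_enc_aclk and <= 2 x m_axi_dec_aclk */
static bool vcu_pl_clocks_ok(uint64_t enc_aclk_hz, uint64_t dec_aclk_hz,
                             uint64_t lite_aclk_hz)
{
    if (enc_aclk_hz > 333000000ULL || dec_aclk_hz > 333000000ULL)
        return false;
    return lite_aclk_hz <= 2 * enc_aclk_hz &&
           lite_aclk_hz <= 2 * dec_aclk_hz;
}
```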
Refer to Microcontroller Unit Overview for more information on the MCU.
PLL Overview
The VCU core has a PLL for generating encoder and decoder block clocks. Typically, the PLL has an external source such as an
Si570 XO programmable clock generator connected directly using an IBUFDS to the PLL reference clock. Alternatively, the
IBUFDS can drive an MMCM to enable other modules to share an external clock source while meeting the sub-100ps jitter
specification. It is not recommended to use the PS PLL as a clock source due to jitter requirements. The range of the PLL
reference clock is 27 MHz to 60 MHz. The PLL generates a high frequency clock that can be divided down to generate various
output clock frequencies. The divided clock can be supplied to the encoder block, decoder block, and MCU (separate MCU for
video encoder and decoder).
The PLL has a Voltage Controlled Oscillator (VCO) block which generates an output clock based on the input reference clock.
The output clock from VCO is generated based on a frequency multiplier value. The output clock of the VCO is divided by an
output divider to generate the final clock.
The VCO operating frequency can be determined by using the following relationship:
fvco = frefclk × M
and
fclkout = fvco / O
where M is the integer feedback divider value and O is the output divider value.
✎ Note: The PLL does not support fractional divider values.
‼ Important: Select the PLL feedback multiplier value based on the supported VCO frequency range (fvco).
Refer to the Zynq UltraScale+ MPSoC Data Sheet: DC and AC Switching Characteristics (DS925) for more information on the
operating range of fvco.
Select the output divider (O) based on the required core clock or MCU clock frequency.
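The M/O selection can be sketched as an exhaustive search over integer dividers, keeping fvco inside the 1500-3000 MHz range stated for the VCU PLL elsewhere in this guide. The divider search bounds are assumptions; this is an illustration, not the production programming algorithm:

```c
/* Pick integer feedback (M) and output (O) dividers so that
 * fvco = fref * M stays within the VCU PLL VCO range (1500-3000 MHz)
 * and fout = fvco / O lands as close as possible to the target. */
static void pick_pll_dividers(double fref_mhz, double ftarget_mhz,
                              int *best_m, int *best_o)
{
    double best_err = 1e18;
    for (int m = 25; m <= 125; m++) {
        double fvco = fref_mhz * m;
        if (fvco < 1500.0 || fvco > 3000.0)
            continue;
        for (int o = 1; o <= 63; o++) {
            double err = fvco / o - ftarget_mhz;
            if (err < 0.0)
                err = -err;
            if (err < best_err) {   /* strict <: lowest M/O wins on ties */
                best_err = err;
                *best_m = m;
                *best_o = o;
            }
        }
    }
}
```

For example, a 27 MHz reference and a 666 MHz target yield M = 74 and O = 3 (fvco = 1998 MHz, fout = 666 MHz exactly).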
⚠ CAUTION! Sharing VCU clock inputs with other IP can result in clock jitter that might degrade VCU performance or image
quality.
Reset Sequence
The state of the VCU during PL power up and the initialization sequence for the VCU are as follows:
The PL is not yet configured. In this condition, the VCU is held in reset.
The VCU core is held in reset while the power supplies ramp up. A voltage detector in the VCU core-to-PL interface keeps the core under reset while the supplies ramp up.
PL is fully configured. The PL is configured with AXI connectivity between a CPU in the PS or PL and the AXI slave port of
the VCU core. The VCU core reset can be released so that the core is in a known state.
After the VCU core reset is deasserted, use the software to program the VCU PLL for generating the clocks for VCU core and
MCU blocks. When programming the VCU PLL, follow the steps described in PLL Integer Divider Programming for programming
PLL configuration parameters. The PLL lock status is indicated by VCU_SLCR. For additional information, see the Zynq
UltraScale+ Device Register Reference (UG1087).
✎ Note: The VCU core clocks are available once the reset is released. The PL must be configured before releasing the raw reset, which can be controlled by the PMU from outside of the VCU core.
Additional initialization is done by software through programming the VCU core registers after the PL is configured and core is
in a reset release state.
⚠ CAUTION! It is not possible to power down VCU rail (VCUINT_VCU) dynamically during runtime.
To operate the VCU PLL, configure the VCU_SLCR.VCU_PLL_CFG register using the values in the following table. The following fields must be programmed:
VCU_SLCR.VCU_PLL_CFG[LOCK_DLY]
VCU_SLCR.VCU_PLL_CFG[LOCK_CNT]
VCU_SLCR.VCU_PLL_CFG[LFHF]
VCU_SLCR.VCU_PLL_CFG[CP]
VCU_SLCR.VCU_PLL_CFG[RES]
The FBDIV value (or the PLL feedback multiplier value, M) depends on the output VCO frequency (fvco). You must program VCU_SLCR.VCU_PLL_CFG based on the calculated FBDIV values in the following table.
FBDIV CP RES LFHF LOCK_DLY LOCK_CNT
25 3 10 3 63 1000
26 3 10 3 63 1000
27 4 6 3 63 1000
28 4 6 3 63 1000
29 4 6 3 63 1000
30 4 6 3 63 1000
31 6 1 3 63 1000
32 6 1 3 63 1000
33 4 10 3 63 1000
34 5 6 3 63 1000
35 5 6 3 63 1000
36 5 6 3 63 1000
37 5 6 3 63 1000
38 5 6 3 63 975
39 3 12 3 63 950
40 3 12 3 63 925
41 3 12 3 63 900
42 3 12 3 63 875
43 3 12 3 63 850
44 3 12 3 63 850
45 3 12 3 63 825
46 3 12 3 63 800
47 3 12 3 63 775
48 3 12 3 63 775
49 3 12 3 63 750
50 3 12 3 63 750
51 3 2 3 63 725
52 3 2 3 63 700
53 3 2 3 63 700
54 3 2 3 63 675
55 3 2 3 63 675
56 3 2 3 63 650
57 3 2 3 63 650
58 3 2 3 63 625
59 3 2 3 63 625
60 3 2 3 63 625
61 3 2 3 63 600
62 3 2 3 63 600
63 3 2 3 63 600
64 3 2 3 63 600
65 3 2 3 63 600
66 3 2 3 63 600
67 3 2 3 63 600
68 3 2 3 63 600
69 3 2 3 63 600
70 3 2 3 63 600
71 3 2 3 63 600
72 3 2 3 63 600
73 3 2 3 63 600
74 3 2 3 63 600
75 3 2 3 63 600
76 3 2 3 63 600
77 3 2 3 63 600
78 3 2 3 63 600
79 3 2 3 63 600
80 3 2 3 63 600
81 3 2 3 63 600
82 3 2 3 63 600
83 4 2 3 63 600
84 4 2 3 63 600
85 4 2 3 63 600
86 4 2 3 63 600
87 4 2 3 63 600
88 4 2 3 63 600
89 4 2 3 63 600
90 4 2 3 63 600
91 4 2 3 63 600
92 4 2 3 63 600
93 4 2 3 63 600
94 4 2 3 63 600
95 4 2 3 63 600
96 4 2 3 63 600
97 4 2 3 63 600
98 4 2 3 63 600
99 4 2 3 63 600
100 4 2 3 63 600
101 4 2 3 63 600
102 4 2 3 63 600
103 5 2 3 63 600
104 5 2 3 63 600
105 5 2 3 63 600
106 5 2 3 63 600
107 3 4 3 63 600
108 3 4 3 63 600
109 3 4 3 63 600
110 3 4 3 63 600
111 3 4 3 63 600
112 3 4 3 63 600
113 3 4 3 63 600
114 3 4 3 63 600
115 3 4 3 63 600
116 3 4 3 63 600
117 3 4 3 63 600
118 3 4 3 63 600
119 3 4 3 63 600
120 3 4 3 63 600
121 3 4 3 63 600
122 3 4 3 63 600
123 3 4 3 63 600
124 3 4 3 63 600
125 3 4 3 63 600
✎ Note: The range for FBDIV is based on what is supported in silicon. The minimum value of 25 is determined by the minimum VCO frequency and the maximum input clock frequency. The applicable FBDIV values are determined by the VCO output frequency range (fvco), which for the VCU PLL is 1500-3000 MHz; FBDIV should be chosen based on the required output frequency and the input clock frequency. For example, for a 27 MHz input frequency and a 666 MHz VCU operating frequency with an output divider of 4, fvco = 666 × 4 = 2664 MHz and the FBDIV value is 2664/27 ≈ 99. The maximum FBDIV value for 27 MHz is 3000/27 = 111, and the minimum FBDIV value for 60 MHz is 1500/60 = 25.
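The FBDIV bounds in the note above reduce to a ceiling and a floor division (a sketch, with frequencies in MHz):

```c
/* Minimum/maximum integer FBDIV for a reference clock, from the
 * 1500-3000 MHz VCO range: fvco = fref * FBDIV must stay in range. */
static int fbdiv_min(int fref_mhz)
{
    return (1500 + fref_mhz - 1) / fref_mhz;   /* ceil(1500 / fref) */
}

static int fbdiv_max(int fref_mhz)
{
    return 3000 / fref_mhz;                    /* floor(3000 / fref) */
}
```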
Reset
The VCU hard block can be held under reset under the following conditions:
The VCU reset signal must be asserted for at least two clock cycles of the VCU PLL reference clock (the slowest clock input to the VCU). The VCU registers can be accessed after the reset signal is deasserted.
✎ Note:
If software resets the VCU block in the middle of a frame, use the software to clear the physical memory allocated for the
VCU.
The reset does not need to be asserted between changes to the VCU configuration at run time via the VCU control software.
The vcu_resetn signal of the Zynq UltraScale+ VCU should be tied to either AXI GPIO or Zynq UltraScale+ MPSoC GPIO (EMIO).
The software can program the VCU_GASKET_INIT register at offset 0x41074 in the VCU_SLCR to assert a reset pulse to the
VCU block. Reset VCU using the following procedure:
1. Ensure there is no pending AXI transaction in VCU AXI bus/AXI4-Lite bus. This can be ensured by making sure that the
software that uses VCU is not running. No master should be sending any requests to VCU.
2. Assert vcu_resetn through an EMIO GPIO pin to VCU LogiCORE IP.
3. De-assert vcu_resetn.
4. Write 0 to VCU gasket isolation register VCU_GASKET_INIT[1] to assert reset to VCU.
5. Write 0 to VCU gasket isolation register VCU_GASKET_INIT[0] to enable VCU gasket isolation.
6. Power down VCU supply.
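The six-step procedure can be sketched with mocked register/GPIO helpers whose accesses are logged so the ordering is checkable. mmio_clear_bit() and gpio_set() are illustrative stand-ins, not a real driver API; only the VCU_GASKET_INIT offset (0x41074) comes from the text:

```c
#include <stdint.h>
#include <string.h>

#define VCU_GASKET_INIT 0x41074u

static char reset_log[64];

static void trace(const char *op)
{
    strcat(reset_log, op);
    strcat(reset_log, ";");
}

static void gpio_set(int value)                   /* drives vcu_resetn */
{
    trace(value ? "rst1" : "rst0");
}

static void mmio_clear_bit(uint32_t off, int bit) /* writes 0 to a bit */
{
    (void)off;
    trace(bit == 1 ? "clr_b1" : "clr_b0");
}

static void vcu_reset_sequence(void)
{
    reset_log[0] = '\0';
    /* Step 1: ensure no master is issuing AXI transactions to the VCU.  */
    gpio_set(0);                            /* step 2: assert vcu_resetn */
    gpio_set(1);                            /* step 3: de-assert         */
    mmio_clear_bit(VCU_GASKET_INIT, 1);     /* step 4: assert VCU reset  */
    mmio_clear_bit(VCU_GASKET_INIT, 0);     /* step 5: enable isolation  */
    /* Step 6: the VCU supply can now be powered down (board action).    */
}
```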
The PLL in the VCU core can be reset through VCU_SLCR register which is accessible through AXI4-Lite interface.
1. Mixed registers are registers that have read-only, write-only, and read/write bits grouped together.
Vivado Design Suite User Guide: Designing IP Subsystems using IP Integrator (UG994)
Vivado Design Suite User Guide: Designing with IP (UG896)
Vivado Design Suite User Guide: Getting Started (UG910)
Vivado Design Suite User Guide: Logic Simulation (UG900)
This section includes information about using AMD tools to customize and generate the core in the AMD Vivado™ Design Suite.
If you are customizing and generating the core in the Vivado IP integrator, see the Vivado Design Suite User Guide: Designing IP
Subsystems using IP Integrator (UG994) for detailed information. IP integrator might auto-compute certain configuration values
when validating or generating the design. To check whether the values do change, see the description of the parameter in this
chapter. To view the parameter value, run the validate_bd_design command in the Tcl console.
You can customize the IP for use in your design by specifying values for the various parameters associated with the IP core
using the following steps:
For details, see the Vivado Design Suite User Guide: Designing with IP (UG896) and the Vivado Design Suite User Guide: Getting
Started (UG910).
Figures in this chapter are illustrations of the Vivado IDE. The layout depicted here might vary from the current version.
The Basic configuration tab, shown in the following figure, allows for the selection of video parameters used to calculate the
encoder buffer size and total dynamic power used by encoder or decoder blocks.
The VCU subsystem is controlled by software at runtime. Configuration options set in the VCU GUI are used to estimate power
consumption, estimate bandwidth, and calculate the encoder buffer size. See VCU Control Software Sample Applications and
VCU Control Software API for information about controlling configuration parameters at runtime.
The parameters on the basic configuration tab are as follows:
Component Name
Component name is set automatically by IP integrator.
Resource Summary
Reports the encoder buffer size. Reports bandwidth for the encoder and decoder.
Enable Encoder
Enables the encoder and related parameters.
Coding Standard
Select AVC or HEVC.
Coding Type
Select the GOP structure to use for encoding:
The encoder buffer can be enabled only when an intra- and inter-frame coding type is selected.
Resolution
Select one of the following resolutions:
854×480
1280×720
1420×576
1920×1080
3840×2160
4096×2160
7680×4320
Color Format
Select one of the following color formats:
4:0:0 - monochrome
4:2:0
4:2:2
Color Depth
Select 8 or 10 bits per channel.
UltraRAM only
Block RAM only
Combination of UltraRAM and block RAM
Enable Decoder
Enables the decoder and associated parameters.
Coding Standard
Select AVC or HEVC.
Resolution
Select one of the following resolutions:
854×480
1024×576
1280×720
1920×1080
3840×2160
4096×2160
7680×4320
Color Format
Select one of the following color formats:
4:0:0 - monochrome
4:2:0
4:2:2
Color Depth
Select 8 or 10 bits per channel.
The Advanced Configuration tab, shown in the following figure, allows you to override the encoder buffer memory depth,
compression features, and the encoder core clock.
The advanced configuration options for the encoder buffer are enabled only when the Basic Configuration tab has the following settings:
Manual Override: Select this to override the Encoder Buffer memory size calculated by the IP integrator.
Memory Depth (Kbytes): If the Manual Override checkbox is selected, you can enter a memory size ranging from 0 to 7,000
Kbytes.
B-Frame: Select one of the following:
NONE - lowest latency
STANDARD - GOP configuration IPPP with intra period of 30 ms or GOP configuration IPBBBBPBBBBP with num-b-
frames=4.
HIERARCHICAL - Also known as pyramidal. Works with 3, 5, or 7 B-frames.
Standard B-Frame use case has lower write bandwidth requirements because it does not write a reconstructed frame to
memory as a reference for subsequent frame encoding.
Motion Vector Range: Determines the encoder buffer size:
LOW
MEDIUM
HIGH
The exact encoder buffer size is reported in the Resource Summary.
CORE Clk (MHz): Select a clock frequency ranging from 1 to 667 MHz.
Use the Decoder Configuration tab for a multi-stream use case, for example, decoding three streams of 1080p60 resolution. The maximum bandwidth of the VCU is one stream of 3840×2160 at 60 fps or eight streams of 1920×1080 at 30 fps.
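Both maximum configurations amount to the same aggregate pixel rate, which a one-line helper makes explicit:

```c
#include <stdint.h>

/* Aggregate luma pixel rate (pixels/s) for a set of identical streams. */
static uint64_t pixel_rate(uint32_t streams, uint32_t width,
                           uint32_t height, uint32_t fps)
{
    return (uint64_t)streams * width * height * fps;
}
```

Both 1 × 3840×2160 at 60 fps and 8 × 1920×1080 at 30 fps give 497,664,000 pixels/s.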
To integrate the VCU core into an IP integrator (IPI) block design, follow these steps:
2. Click Next on New Project wizard until you reach the Family Selection window.
5. In the Settings window, enable the Performance Explore option by selecting Settings > Implementation > Options >
Strategy: Performance_Explore See the Vivado Design Suite User Guide: Design Analysis and Closure Techniques (UG906)
for more information.
6. Click Create Block Design.
10. Configure Zynq UltraScale+ MPSoC to enable AXI slave interfaces, clocking, and PL-PS interrupt signal per your design
requirements. Refer to the Zynq UltraScale+ MPSoC Processing System LogiCORE IP Product Guide (PG201) for
configuration options of the Zynq UltraScale+ MPSoC IP.
The following figure shows an example of configuring the PS-PL interface signals.
12. Enable IRQ0[0-7] and the following master and slave interfaces, as shown in the figure below. Also set the data width of S_AXI_HPC0_FPD to 32 bits.
Connect the vcu_host_interrupt port of Zynq UltraScale+ VCU to the pl_ps_irq0 port of Zynq UltraScale+ MPSoC IP.
16. Connect the following clock ports of each IP specified below to the pl_clk1 output of Zynq UltraScale+ MPSoC core:
VCU: m_axi_enc_aclk
VCU: m_axi_dec_aclk
Zynq UltraScale+ MPSoC: saxihp0_fpd_aclk
Zynq UltraScale+ MPSoC: saxihp1_fpd_aclk
Zynq UltraScale+ MPSoC: saxihp2_fpd_aclk
Zynq UltraScale+ MPSoC: saxihp3_fpd_aclk
17. Connect the vcu_resetn signal of Zynq UltraScale+ VCU to peripheral_aresetn pin of Processor System reset.
18. Instantiate a Clocking Wizard IP.
19. Set the output clock frequency to 59 MHz. Connect the clk_in1 pin of the Clocking Wizard to pl_clk0 of the Zynq UltraScale+ MPSoC. Also connect the clk_out1 pin to pll_ref_clk of the VCU.
20. In the Address Editor tab, expand EncData address segment and auto assign the addresses. The following table shows an
example address map.
22. Create a top-level Vivado wrapper by right-clicking on Block Design and selecting Create HDL Wrapper option as shown in
the following figure.
23. Click on the Run Synthesis, Run Implementation, or Generate Bitstream option.
Use-case 1 (UC1) refers to the multimedia pipeline, where decoder and encoder are using PS_DDR for buffer allocations and
memory read/write operations of video processing. The current system is capable of encoding and decoding of 4k@60 fps and
transcoding of 4k@30 fps with the available bandwidth of PS_DDR. The target is to achieve transcoding at 4k@60fps and it has
been identified that the PS_DDR bandwidth is the bottleneck. A new design has been proposed to overcome PS_DDR bandwidth
limitations.
Use-case 2 (UC2) is the new design approach, which uses PL_DDR for decoding and PS_DDR for encoding so that the DDR bandwidth is sufficient to achieve transcoding at 4k@60fps. The figure below explains the transcoding pipeline. The decoder writes the decoded data to PL_DDR, from where it is copied to PS_DDR for the encoder to consume. The buffer copy from PL_DDR to PS_DDR is achieved by DMA transfers.
2 GB Access Limit
The following access limits apply:
The VCU IP can access a 4 GB range from an aligned 4 GB base address, and the MCU can access buffers only within a 2 GB range from the dcache offset.
The MCU requires access to certain buffers during the encode/decode process, so the buffers that are accessed by the MCU must be within the 2 GB range from the dcache offset.
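These limits can be expressed as simple address-range checks (a sketch; the helper names and parameters are ours, not a driver API):

```c
#include <stdbool.h>
#include <stdint.h>

#define GB (1ULL << 30)

/* The VCU can reach a 4 GB window starting at a 4 GB-aligned base. */
static bool vcu_reaches(uint64_t base_4g_aligned, uint64_t addr,
                        uint64_t len)
{
    return addr >= base_4g_aligned &&
           addr + len <= base_4g_aligned + 4 * GB;
}

/* MCU-visible buffers must lie within 2 GB of the dcache offset. */
static bool mcu_reaches(uint64_t dcache_offset, uint64_t addr,
                        uint64_t len)
{
    return addr >= dcache_offset &&
           addr + len <= dcache_offset + 2 * GB;
}
```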
To enable PL-DDR:
&amba_pl {
video_m2m {
compatible = "xlnx,mem2mem";
dmas = <&v_frmbuf_rd_0 0>, <&v_frmbuf_wr_0 0>;
dma-names = "tx", "rx";
};
};
b. Update the device tree node to have a dedicated memory for the VCU.
<=19.2 dtsi changes:
&vcu_ddr4_controller_0
{
compatible = "xlnx,ddr4-2.2";
reg = <0x00000048 0x00000000 0x0 0x80000000>;
ranges;
#address-cells = <2>;
#size-cells = <2>;
plmem_vcu: pool@0
{
reg = <0x48 0x00000000 0x0 0x70000000>;
};
};
/ {
reserved-memory {
#address-cells = <0x2>;
#size-cells = <0x2>;
ranges;
plmem_vcu: vcu_dma_mem_region {
compatible = "shared-dma-pool";
no-map;
reg = <0x48 0x0 0x0 0x70000000>;
};
};
};
c. The VCU device tree node changes to allocate memory from PL_DDR.
&decoder {
xlnx,dedicated-mem = <&plmem_vcu>;
};
&decoder {
memory-region = <&plmem_vcu>;
};
&encoder {
xlnx,dedicated-mem = <&plmem_vcu>;
};
&encoder {
memory-region = <&plmem_vcu>;
};
✎ Note: These changes are relevant to the ZCU106 and ZCU104 designs only. You should modify your changes
based on the specific design that you are working with.
This generates the device tree in the path: components/plnx_workspace/device-tree/device-tree
vi project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
The contents of the file for >= 20.1 UC2 are:
/include/ "system-conf.dtsi"
/ {
reserved-memory {
#address-cells = <0x2>;
#size-cells = <0x2>;
ranges;
plmem_vcu_dec: vcu_dma_mem_region {
compatible = "shared-dma-pool";
no-map;
reg = <0x48 0x0 0x0 0x70000000>;
};
};
};
&amba_pl {
video_m2m {
compatible = "xlnx,mem2mem";
dmas = <&v_frmbuf_rd_0 0>, <&v_frmbuf_wr_0 0>;
dma-names = "tx", "rx";
};
};
&vcu_ddr4_controller_0 {
compatible = "xlnx,ddr4-2.2";
reg = <0x00000048 0x00000000 0x0 0x80000000>;
ranges;
#address-cells = <2>;
1. Boot using the above images and run the following commands for QoS settings:
✎ Note: The format varies based on the pixel format that you are using. For example:
XV20 when using 4:2:2 10-bit
NV12 when using 4:2:0 8-bit
✎ Note: A v4l2convert element is required for the transcode use case when the encoder and decoder use different DDRs; for example, the encoder uses PS DDR and the decoder uses PL DDR. This configuration of separate DDRs is used to achieve the required performance.
The necessary XDC constraints are delivered with the core generation in the Vivado Design Suite.
Required Constraints
This section is not applicable for this IP core.
Clock Frequencies
There is no restriction on speed grade. All speed grades support the maximum frequency of operation.
Clock Management
This section is not applicable for this IP core.
Clock Placement
This section is not applicable for this IP core.
Banking
This section is not applicable for this IP core.
Transceiver Placement
This section is not applicable for this IP core.
For details about synthesis and implementation, see the Vivado Design Suite User Guide: Designing with IP (UG896).
Simulation
Introduction
The AMD Zynq™ UltraScale+™ MPSoC architecture-based FPGAs VCU DDR4 IP v1.1 core is a combined pre-engineered
controller and physical layer (PHY) for interfacing Zynq UltraScale+ MPSoC programmable logic (PL) user designs to DDR4
SDRAM. This DDR4 controller is only for use with Zynq UltraScale+ MPSoC EV products and not for use with any other AMD devices.
This chapter provides information about using, customizing, and simulating an AMD LogiCORE™ IP DDR4 SDRAM core for Zynq UltraScale+ MPSoCs. It also describes the core architecture and provides details on customizing and interfacing to the core.
IP Facts
Core Specifics
Support
1. For a complete list of supported devices, see the AMD Vivado™ IP catalog.
2. For the supported versions of the tools, see the Vivado Design Suite User Guide: Release Notes, Installation, and
Licensing (UG973).
Overview
The AMD UltraScale+ architecture includes the DDR4 SDRAM cores, which provide solutions for interfacing with DDR4 SDRAM. Both a complete memory controller and a physical-layer-only solution are supported. This controller is optimized for the VCU traffic patterns, specifically the decoder accesses to memory. The UltraScale+ DDR4 cores are organized in the following high-level blocks:
Physical Layer
The physical layer provides a high-speed interface to the SDRAM. This layer includes the hard blocks inside the FPGA and the soft calibration logic necessary to ensure optimal timing of the hard blocks interfacing to the SDRAM. The application logic is responsible for all SDRAM transactions, timing, and refresh.
Memory Initialization
The calibration modules provide a JEDEC®-compliant initialization routine for the particular memory type. The delays in
the initialization process can be bypassed to speed up simulation time, if desired.
Calibration
The calibration modules provide a complete method to set all delays in the hard blocks and soft IP to work with the
memory interface. Each bit is individually trained and then combined to ensure optimal interface performance. Results of
the calibration process are available through the AMD debug tools. After completion of calibration, the PHY layer presents a raw interface to the SDRAM.
‼ Important: Zynq UltraScale+ EV Architecture Video Codec Unit DDR4 LogiCORE IP can only be used with the H.264/H.265
Video Codec Unit (VCU) core for Zynq UltraScale+ MPSoCs.
Supports five high performance AXI ports for connecting to decoder 0, decoder 1, MCU, PS, and display controller interface
Component/SODIMM support for interface width of 64-bits
Supports speed bins of 2133, 2400, and 2667
Support for Zynq UltraScale+ EV series -1e, -2, and -3e parts at 2133 MT/s, 2400 MT/s, and 2667 MT/s.
Does not support speed grades –1L and –1LV.
Supports AXI
See Table 1 for a complete list of supported memories.
Reorders FIFO for better efficiency
x8 and x16 device support
8-word burst support
ODT support
Write leveling support for DDR4 (fly-by routing topology required for component designs)
JEDEC-compliant DDR4 initialization support
Encrypted source code delivery in Verilog/VHDL
Open, closed, and transaction-based precharge controller policy
Interface calibration and training information available through the Vivado hardware manager
Target technologies (physical interface):
Using MIG generated phy-only design
Zynq UltraScale+ MPSoC
DDR4 features are supported
Controller:
High efficiency is achieved for multi-port and random access applications through
Reordering
Burst maximization
Deep lookahead
Ping pong
High frequency can be achieved through configurable pipes
Low resource count
This AMD LogiCORE IP module is provided at no additional cost for Zynq UltraScale+ MPSoC EV devices with the AMD Vivado
Design Suite under the terms of the End User License. Information about other LogiCORE IP modules is available at the
Intellectual Property page. For information on pricing and availability of other LogiCORE IP modules and tools, contact your
local sales representative.
Product Specification
Standards
This core supports DRAMs that are compliant to the JESD79-4, DDR4 SDRAM Standard, JEDEC Solid State Technology
Association.
Performance
System Throughput
The following table shows memory controller performance for running a 4kp60, 4:2:2, 10-bit decode-display pipeline. These
throughput numbers are based on x8 configuration of the controller at 2133 DRAM speed.
Read bandwidth 1879.78 MBps 1913.62 MBps 1328.09 MBps 950.72 MBps
X8 93 frames/sec 74 frames/sec
✎ Note: These numbers do not include the latency information from the interconnect in the fabric. For more information, see
the UltraScale Architecture-Based FPGAs Memory IP LogiCORE IP Product Guide (PG150).
The latency for the access times of the VCU DDR4 Controller is 130 ns on the SODIMM MTA8ATF51264HZ-2G6B1 at 2400 MT/s.
Resource Utilization
The following table shows details about performance and resource utilization.
CARRY8 170 0
F7 MUXES 1147 0
DSPs 3 0
HPIOBDIFFINBUF 9 0
BITSLICE_RX_TX 105 0
PLL 3 0
MMCM 1 0
The following table illustrates the resource utilization for VCU DDR4 Controller based on the
vcu_llp2_trd_vcu_ddr4_controller_0_0.
Memory Usage
Port Descriptions
The following figure shows the block diagram of the VCU DDR4 Controller, which has five AXI ports, s_axi_clk, s_axi_rst, c0_sys_clk, and sys_rst.
S_AXI_CLK / S_AXI_RST: The AXI ports operate on this clock and reset. The reset is active-High.
C0_SYS_CLK / SYS_RST: This is the clock used by the VCU DDR4 controller: 125 MHz for the x16 configuration and 300 MHz for the x8 configuration. The reset is active-High.
AXI Ports
pl_cust_slot_awidX [15:0] In
000: 8-bit
001: 16-bit
010: 32-bit
011: 64-bit
00: Fixed
01: Incremental
10: Wrapping
11: Reserved
pl_cust_slot_awvalidX In
cust_pl_slot_awreadyX Out
pl_cust_slot_widX[15:0] In
pl_cust_slot_wvalidX In
cust_pl_slot_wreadyX Out
cust_pl_slot_bidX[15:0] Out
cust_pl_slot_bvalidX Out
pl_cust_slot_breadyX In
pl_cust_slot_aridX[15:0] In
000: 8-bit
001: 16-bit
010: 32-bit
011: 64-bit
00: Fixed
01: Incremental
10: Wrapping
11: Reserved
pl_cust_slot_arvalidX In
cust_pl_slot_arreadyX Out
cust_pl_slot_ridX[15:0] Out
cust_pl_slot_rlastX Out
cust_pl_slot_rvalidX Out
pl_cust_slot_rreadyX In
The following table shows the signals to be used while designing the core and their descriptions:
Signal Description
phy_Clk These are the clock rates that are set for each of the memory speed bins:
✎ Note: You should handle the domain crossing in the interconnect. You can use the phy_clk as master clock and phy_sRst
as master reset for the interconnect that interfaces with DDR4 memory ports.
Endianness
BA317 ports normally use the little-endian convention. The following table shows accesses to the same data in memory from different kinds of ports.
32-bit port: address 0x1 → 0x08070605
16-bit port: address 0x1 → 0x0403, address 0x2 → 0x0605, address 0x3 → 0x0807
8-bit port: address 0x1 → 0x02, address 0x2 → 0x03, address 0x3 → 0x04, address 0x4 → 0x05, address 0x5 → 0x06, address 0x6 → 0x07, address 0x7 → 0x08
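The little-endian pattern can be reproduced in C with explicit byte composition. Port addresses are in units of the port width, so a 32-bit port address 0x1 reads bytes 4-7 (illustrative helpers, not part of the controller interface):

```c
#include <stdint.h>

/* Compose little-endian values from a byte buffer, independent of the
 * host byte order; mirrors the port-width accesses shown above. */
static uint32_t le32(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

static uint16_t le16(const uint8_t *p)
{
    return (uint16_t)(p[0] | (p[1] << 8));
}
```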
Physical Interface
The following table lists all available physical interfaces with the supported SDRAM and FPGA devices. Some additional FPGAs
might be supported because they are compatible with the listed FPGAs.
Core Architecture
This section describes the UltraScale™ architecture-based FPGAs Memory Interface Solutions core with an overview of the
modules and interfaces.
Overview
The UltraScale architecture-based FPGAs Memory Interface Solutions is shown in the following figure.
Memory Controller
The memory controller (MC) is designed to take Read, Write, and Read-Modify-Write transactions from the user interface (UI)
block and issues them to memory efficiently with low latency, meeting all DRAM protocol and timing requirements while using
minimal FPGA resources. The controller operates with a DRAM to system clock ratio of 4:1 and can issue one Activate, one
CAS, and one Precharge command on each system clock cycle.
The controller supports an open page policy and can achieve very high efficiencies with workloads with a high degree of spatial
locality. The controller also supports a closed page policy and the ability to reorder transactions to efficiently schedule
workloads with address patterns that are more random. The controller also allows a degree of control over low-level functions
with a UI control signal for AutoPrecharge on a per transaction basis and also the signals that you can use to determine when
DRAM refresh commands are issued.
The key blocks of the controller command path include:
The Group FSMs that queue up transactions, check DRAM timing, and decide when to request Precharge, Activate, and
CAS DRAM commands.
The "Safe" logic and arbitration units that reorder transactions between Group FSMs based on additional DRAM timing
checks while also ensuring forward progress for all DRAM command requests.
The Final Arbiter that makes the final decision about which commands are issued to the PHY and feeds the result back to
the previous stages.
The controller prioritizes reads over writes when reordering is enabled. If both read and write CAS commands are safe to issue on the SDRAM command bus, the controller selects only read CAS commands for arbitration. When a read CAS issues, write CAS commands are blocked for several SDRAM clocks, specified by the parameter tRTW. This extra time, required for a write CAS to become safe after a read CAS issues, allows groups of reads to issue on the command bus without being interrupted by pending writes.
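The read-over-write rule can be sketched as follows. This is an illustrative model, not the controller RTL; T_RTW is a hypothetical value for the tRTW parameter, and the "safe" inputs stand in for the controller's DRAM timing checks:

```c
#include <stdbool.h>

/* Hypothetical read-to-write spacing, in system clocks. */
#define T_RTW 8

typedef enum { CMD_NONE, CMD_READ_CAS, CMD_WRITE_CAS } cmd_t;

static unsigned rtw_timer = 0;  /* clocks until a write CAS is safe again */

/* Called once per system clock with the "safe" status of each candidate:
   when both a read and a write CAS are safe, the read wins, and each read
   CAS re-arms a timer that blocks write CAS commands for T_RTW clocks. */
cmd_t arbitrate(bool read_safe, bool write_safe) {
    if (rtw_timer > 0)
        rtw_timer--;
    if (read_safe) {            /* reads are always preferred */
        rtw_timer = T_RTW;
        return CMD_READ_CAS;
    }
    if (write_safe && rtw_timer == 0)
        return CMD_WRITE_CAS;
    return CMD_NONE;
}
```

This is why groups of reads tend to issue back-to-back: any safe read keeps re-arming the timer that holds pending writes off the command bus.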
Reordering
Requests that map to the same mcGroup are never reordered. Reordering between the mcGroup instances is controlled with the
ORDERING parameter. When set to "NORM," reordering is enabled and the arbiter implements a round-robin priority plan,
selecting in priority order among the mcGroups with a command that is safe to issue to the SDRAM.
The timing of when it is safe to issue a command to the SDRAM can vary on the target bank or bank group and its page status.
This often contributes to reordering.
When the ORDERING parameter is set to "STRICT," all requests have their CAS commands issued in the order in which the
requests were accepted at the native interface. STRICT ordering overrides all other controller mechanisms, such as the
tendency to coalesce read requests, and can therefore degrade data bandwidth utilization in some workloads.
When a new transaction is accepted from the UI, it is pushed into the stage 1 transaction FIFO. The page status of the
transaction at the head of the stage 1 FIFO is checked and provided to the stage 1 transaction FSM. The FSM decides if a
Precharge or Activate command needs to be issued, and when it is safe to issue them based on the DRAM timers.
When the page is open and not already scheduled to be closed due to a pending RDA or WRA in the stage 2 FIFO, the
transaction is transferred from the stage 1 FIFO to the stage 2 FIFO. At this point, the stage 1 FIFO is popped and the stage 1
FSM begins processing the next transaction. In parallel, the stage 2 FSM processes the CAS command phase of the transaction
at the head of the stage 2 FIFO. The stage 2 FSM issues a CAS command request when it is safe based on the tRCD timers. The
stage 2 FSM also issues both a read and write CAS request for RMW transactions.
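The stage 1 decision described above can be sketched as follows. This is an illustrative model only (the enum and struct names are invented for this sketch): the page status of the head-of-FIFO transaction determines which DRAM commands must precede its CAS.

```c
#include <stdbool.h>

typedef enum {
    PAGE_CLOSED,    /* bank idle: the target row must be activated       */
    PAGE_OPEN_HIT,  /* target row already open: CAS can issue directly   */
    PAGE_OPEN_MISS  /* a different row is open: precharge, then activate */
} page_status_t;

typedef struct { bool need_precharge; bool need_activate; } stage1_plan_t;

/* Decide which commands the stage 1 FSM must request before the
   transaction can move on to the stage 2 (CAS) phase. */
stage1_plan_t stage1_plan(page_status_t status) {
    stage1_plan_t p = {false, false};
    if (status == PAGE_CLOSED) {
        p.need_activate = true;
    } else if (status == PAGE_OPEN_MISS) {
        p.need_precharge = true;
        p.need_activate = true;
    }
    /* PAGE_OPEN_HIT: transfer straight to the stage 2 FIFO */
    return p;
}
```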
Read-Modify-Write Flow
When a wr_bytes command is accepted at the user interface, it is eventually assigned to a group state machine like other write or read transactions. The group machine breaks the partial write into a read phase and a write phase. The read phase reads the target location and stores the data in the controller.
After the read data is stored in the controller, the write phase begins as follows:
Write data is merged with the stored read data based on the write data mask bits.
Any multiple-bit error in the read phase results in the error becoming undetectable in the write phase, because new check bits are generated for the merged data.
When the write phase completes, the group machine becomes available to process a new transaction. The RMW flow ties up a
group machine for a longer time than a simple read or write, and therefore might impact performance.
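The merge step above can be sketched as follows. This is illustrative C, not the controller implementation; one mask bit per byte is assumed, as in AXI write strobes:

```c
#include <stdint.h>

/* Read-modify-write merge: write data replaces the stored read data only
   where the corresponding write data mask bit is set. */
void rmw_merge(uint8_t *stored_read, const uint8_t *write_data,
               const uint8_t *byte_mask, unsigned nbytes) {
    for (unsigned i = 0; i < nbytes; i++)
        if (byte_mask[i])                 /* masked-in byte: take new data */
            stored_read[i] = write_data[i];
    /* New check bits (ECC) are then generated over the merged data, which
       is why an uncorrectable read error becomes undetectable here. */
}
```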
PHY
PHY is considered the low-level physical interface to an external DDR3 or DDR4 SDRAM device, and also all calibration logic for
ensuring reliable operation of the physical interface itself. PHY generates the signal timing and sequencing required to interface
to the memory device.
PHY contains the following features:
Clock/address/control-generation logics
Write and read datapaths
Logic for initializing the SDRAM after power-up
In addition, PHY contains calibration logic to perform timing training of the read and write datapaths to account for system
static and dynamic delays.
The PHY is included in the complete Memory Interface Solution core, but can also be implemented as a standalone PHY-only block. A PHY-only solution can be selected if you plan to implement a custom memory controller. For details about interfacing to the PHY-only block, see the UltraScale Architecture-Based FPGAs Memory IP LogiCORE IP Product Guide (PG150).
PLDDR supports a wide range of clocks from the GUI drop down list.
The memory interface requires one MMCM, one TXPLL per I/O bank used by the memory interface, and two BUFGs. These
clocking components are used to create the proper clock frequencies and phase shifts necessary for the proper operation of
the memory interface.
There are two TXPLLs per bank. If a bank is shared by two memory interfaces, both TXPLLs in that bank are used.
✎ Note: The DDR4 SDRAM tool generates the appropriate clocking structure for the desired interface; no modifications to the generated RTL are supported.
The allowed clock configuration is as follows:
GCIO
Must use a differential I/O standard
Must be in the same I/O column as the memory interface
Must be in the same SLR of memory interface for the SSI technology devices
The I/O standard and termination scheme are system dependent. For more information, refer to the UltraScale Architecture
SelectIO Resources User Guide (UG571).
MMCM
MMCM is used to generate the FPGA logic system clock (1/4 of the memory clock)
Must be located in the center bank of memory interface
Must use internal feedback
Input clock frequency divided by the input divider must be at least 70 MHz (CLKINx / D ≥ 70 MHz)
Resets
An asynchronous reset (sys_rst) input is provided. This is an Active-High reset and the sys_rst must assert for a minimum pulse
width of 5 ns. The sys_rst can be an internal or external pin.
Port Connection Recommendations for Different Use Cases with VCU DDR Memory Controller
Encode-only use case:
Port0: Encoder_0, Frame_buffer_write_0, Frame_buffer_write_1
Port1: Encoder_1, Frame_buffer_write_2, Frame_buffer_write_3
Port2: MCU
Port4: NC
Encode-and-decode use case:
Port0: Encoder_0, Decoder_0, Frame_buffer_write_0, Frame_buffer_write_1
Port1: Encoder_1, Decoder_0, Frame_buffer_write_2, Frame_buffer_write_3
Port2: MCU
Vivado Design Suite User Guide: Designing IP Subsystems using IP Integrator (UG994)
Vivado Design Suite User Guide: Designing with IP (UG896)
Vivado Design Suite User Guide: Getting Started (UG910)
Vivado Design Suite User Guide: Logic Simulation (UG900)
✎ Note: For memory parts not listed in the previous table, compare the datasheet timing specifications (JEDEC) for compatibility. The key timing specifications are tRCD, tRP, tAA, tFAW, tCCD, CWL (write), and CL. If the datasheet specifications have similar or better timing, then the memory part should be compatible. Only the memory parts listed in the previous table have been validated by AMD.
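The comparison described in the note can be sketched as follows. This is illustrative C only; the struct and its units are placeholders, and a real compatibility assessment must follow the full JEDEC datasheet, not just these seven parameters:

```c
#include <stdbool.h>

/* Key JEDEC timing parameters for a DDR4 part. Absolute timings (tRCD, tRP,
   tAA, tFAW, tCCD) are in picoseconds; CWL and CL are in clock cycles. */
typedef struct {
    unsigned tRCD, tRP, tAA, tFAW, tCCD, CWL, CL;
} ddr4_timing_t;

/* A candidate part is likely compatible if each key parameter is no worse
   (no larger) than that of a validated part. */
bool likely_compatible(const ddr4_timing_t *candidate,
                       const ddr4_timing_t *validated) {
    return candidate->tRCD <= validated->tRCD &&
           candidate->tRP  <= validated->tRP  &&
           candidate->tAA  <= validated->tAA  &&
           candidate->tFAW <= validated->tFAW &&
           candidate->tCCD <= validated->tCCD &&
           candidate->CWL  <= validated->CWL  &&
           candidate->CL   <= validated->CL;
}
```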
Choose RAM
You can select from BRAM or URAM + BRAM.
✎ Note: For 4EV and 5EV devices, the "URAM + BRAM" option is not valid. For 7EV devices, be aware that additional IPs in the design can consume URAMs and can result in an overutilization of URAMs in the 7EV device.
The port priority is fixed. Five ports are enabled in the IP. The high-priority ports must be connected to a decoder/display interface. The low-priority ports must be connected to a PS or Microcontroller Unit (MCU) interface.
Custom Flow
The current VCU DDR4 LogiCORE IP supports only the listed memories. Previously, if additional memories were required, the user had to request support from the AMD team to add them. To avoid this, the VCU DDR4 controller has been enhanced to support custom memory addition by accepting the memory parameters in CSV format, similar to the MIG input.
The following figure shows the GUI with the updated DDR4 controller.
To add memory, perform the following steps:
The following figure shows how the IP is connected to VCU LogiCORE IP.
Figure: IP Connection
I/O Planning
DDR4 SDRAM I/O pin planning is completed with the full design pin planning using the Vivado I/O pin planner. DDR4 SDRAM I/O
pins can be selected through several Vivado I/O pin planner features including assignments using I/O Ports view, Package view,
or Memory Bank/Byte Planner. Pin assignments can additionally be made through importing an XDC or modifying the existing
XDC file.
These options are available for all DDR4 SDRAM designs and multiple DDR4 SDRAM IP instances can be completed in one
setting. To learn more about the available Memory IP pin planning options, see the Vivado Design Suite User Guide: I/O and
Clock Planning (UG899).
Required Constraints
For DDR3/DDR4 SDRAM Vivado IDE, you specify the pin location constraints. For more information on I/O standard and other
constraints, see the Vivado Design Suite User Guide: I/O and Clock Planning (UG899). The location is chosen by the Vivado IDE
according to the banks and byte lanes chosen for the design.
The I/O standard is chosen by the memory type selection and options in the Vivado IDE and by the pin type. A sample for dq[0]
is shown here.
Internal VREF is always used for DDR4. Internal VREF is optional for DDR3. A sample for DDR4 is shown here.
✎ Note: Internal VREF is automatically generated by the tool and you do not need to specify it. The VREF value listed in this
constraint is not used with PODL12 I/Os. The initial value is set to 0.84V. The calibration logic adjusts this voltage as needed for
maximum interface performance.
The system clock must have the period set properly:
For HR banks, update the output_impedance of all the ports assigned to HR banks pins using the reset_property command. For
more information, see AR: 63852.
The following code example shows the maximum delay constraints for the ZCU104 (300 MHz)
✎ Note: Clock constraints for IPs are managed in Vivado and you need not enter constraint values.
Clock Frequencies
This section is not applicable for this IP core.
Clock Management
For more information on clocking, see Clocking.
Clock Placement
This section is not applicable for this IP core.
Banking
This section is not applicable for this IP core.
Transceiver Placement
This section is not applicable for this IP core.
Simulation
Simulation of the AMD Zynq™ UltraScale+™ MPSoC EV Architecture Video Codec Unit DDR4 LogiCORE IP is not supported.
xczu7ev-fbvb900-1LV-i NO NO NO 0.72
xczu7ev-ffvc1156-1LV-i NO NO NO 0.72
xczu7ev-ffvf1517-1LV-i NO NO NO 0.72
xczu4ev-fbvb900-1LV-i NO NO NO 0.72
xczu4ev-sfvc784-1-e NO NO NO 0.85
xczu4ev-sfvc784-1-i NO NO NO 0.85
xczu4ev-sfvc784-1L-i NO NO NO 0.85
xczu4ev-sfvc784-1LV-i NO NO NO 0.85
xazu4ev-sfvc784-1-i NO NO NO 0.85
xazu4ev-sfvc784-1LV-i NO NO NO 0.85
xazu4ev-sfvc784-1Q-q NO NO NO 0.85
xazu5ev-sfvc784-1-i NO NO NO 0.85
xazu5ev-sfvc784-1LV-i NO NO NO 0.85
xazu5ev-sfvc784-1Q-q NO NO NO 0.85
xazu7ev-fbvb900-1-i NO NO NO 0.85
xazu7ev-fbvb900-1Q-q NO NO NO 0.85
xczu5ev-fbvb900-1-e NO NO NO 0.85
xczu5ev-fbvb900-1-i NO NO NO 0.85
xczu5ev-fbvb900-1L-i NO NO NO 0.85
xczu5ev-fbvb900-1LV-i NO NO NO 0.72
xczu5ev-sfvc784-1-e NO NO NO 0.85
xczu5ev-sfvc784-1-i NO NO NO 0.85
xczu5ev-sfvc784-1L-i NO NO NO 0.85
xczu5ev-sfvc784-1LV-i NO NO NO 0.85
IP Facts
Core Specifics
Simulation 3 For supported simulators, see the Vivado Design Suite User Guide: Release Notes,
Installation, and Licensing (UG973).
Support
1. For a complete list of supported devices, see the AMD Vivado™ IP catalog.
2. For the supported versions of the tools, see the Vivado Design Suite User Guide: Release Notes, Installation, and
Licensing (UG973).
3. Behavioral simulations using only Verilog simulation models are supported. Netlist (post-synthesis and post-
implementation) simulations are not supported.
The VCU Sync IP core is designed to act as a fence IP between the video DMA and VCU IPs. It is used in video applications that require ultra-low latencies. The Sync IP performs AXI-transaction-level tracking so that the producer and consumer can be synchronized at the granularity of AXI transactions instead of at the video-buffer level. The Sync IP is responsible for synchronizing buffers between the capture DMA and the VCU encoder, and it can synchronize only between these two.
While the capture element is writing into the DRAM, the capture hardware writes video buffers in raster scan order and the Sync IP monitors the buffer level. It allows the encoder to read input buffer data if the requested data has already been written by the DMA; otherwise, it blocks the encoder until the DMA completes its writes. On the decoder side, the VCU decoder writes the decoded video buffer into the DRAM in block raster scan order and the display reads data in raster scan order.
1: The frame buffer writes into the memory. In encoder mode, the Sync IP snoops the transactions.
2: The VCU encoder reads the transactions. The Sync IP only allows reads when the frame buffer has completed the writes to
those sections of the memory.
3: The VCU encoder writes the compressed video stream back to DRAM.
4: The VCU decoder reads the compressed video stream from DRAM.
5: The VCU decoder writes back decoded frames into DRAM.
6: The display reads decoded frames after half the frame is written by the VCU decoder.
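The gating performed in steps 1 and 2 can be sketched as follows. This is an illustrative model only; the type and function names are hypothetical (the real driver and application API live in xlnxsync.h and xvfbsync.c), and producer progress is simplified to a single byte count in raster-scan order:

```c
#include <stdbool.h>
#include <stdint.h>

/* One Sync IP channel tracking a shared video buffer. */
typedef struct {
    uint64_t buf_base;        /* start address of the shared video buffer  */
    uint64_t bytes_written;   /* producer progress, in raster-scan order   */
} sync_channel_t;

/* Producer side: advance the tracked write level as the frame buffer
   write transactions complete (step 1, snooped by the Sync IP). */
void track_write(sync_channel_t *ch, uint64_t bytes) {
    ch->bytes_written += bytes;
}

/* Consumer side: an encoder read of [addr, addr+len) is allowed only once
   the producer's write level has passed the end of the requested range
   (step 2); otherwise the read is held in the internal FIFO. */
bool read_allowed(const sync_channel_t *ch, uint64_t addr, uint64_t len) {
    return addr + len <= ch->buf_base + ch->bytes_written;
}
```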
Features
This AMD LogiCORE IP module is provided at no additional cost for Zynq UltraScale+ MPSoC EV Devices with the AMD Vivado
Design Suite under the terms of the AMD End User License. Information about other AMD LogiCORE IP modules is available at
the AMD Intellectual Property page. For information on pricing and availability of other AMD LogiCORE IP modules and tools,
contact your local AMD sales representative.
Product Specification
Standards
N/A
Performance
N/A
Resource Utilization
F8 MUXES (57600) 4 0
DSPs (1728) 0 0
HPIOBDIFFINBUF (192) 0 0
BITSLICE_RX_TX (416) 0 0
PLL (16) 0 -
MMCM (8) 0 -
✎ Note: The Sync IP logic core GUI has an option of selecting the DDR4 memory size depending on the requirement. Table 1
gives the full address width utilization.
Port Descriptions
S_AXI_CTRL_ACLK / S_AXI_MM_ACLK / S_AXI_MM_P_ACLK
The AXI ports operate on this clock and reset. The clock frequency used for the Sync IP is 300 MHz.
S_AXI_CTRL_ARESETN / S_AXI_MM_ARESETN / S_AXI_MM_P_ARESETN
An active-Low reset is used in the Sync IP.
Figure: Sync IP
s_axi_ctrl_awvalidX Input
s_axi_ctrl_awreadyX Output
s_axi_ctrl_awprotX[2:0] Input
s_axi_ctrl_wreadyX Output
s_axi_ctrl_bvalidX Output
s_axi_ctrl_breadyX Input
s_axi_ctrl_arprotX[2:0] Input
s_axi_ctrl_arvalidX Input
s_axi_ctrl_arreadyX Output
s_axi_ctrl_rvalidX Output
s_axi_ctrl_rreadyX Input
s_axi_mm_x_awid[3:0] Input
Burst size (AWSIZE) encoding: 000 = 8-bit, 001 = 16-bit, 010 = 32-bit, 011 = 64-bit
Burst type (AWBURST) encoding: 00 = Fixed, 01 = Incremental, 10 = Wrapping, 11 = Reserved
s_axi_mm_x_awvalid Input
s_axi_mm_x_awready Output
s_axi_mm_x_awcache[3:0] Input
s_axi_mm_x_awlock Input
s_axi_mm_x_awprot[2:0] Input
s_axi_mm_x_awqos[3:0] Input
s_axi_mm_x_awregion[3:0] Input
s_axi_mm_x_awuser Input
s_axi_mm_x_wvalid Input
s_axi_mm_x_wready Output
s_axi_mm_x_wuser Input
s_axi_mm_x_bid[3:0] Output
s_axi_mm_x_bvalid Output
s_axi_mm_x_bready Input
s_axi_mm_x_buser Output
s_axi_mm_x_arid[3:0] Input
Burst size (ARSIZE) encoding: 000 = 8-bit, 001 = 16-bit, 010 = 32-bit, 011 = 64-bit
Burst type (ARBURST) encoding: 00 = Fixed, 01 = Incremental, 10 = Wrapping, 11 = Reserved
s_axi_mm_x_arvalid Input
s_axi_mm_x_arready Output
s_axi_mm_x_arcache[3:0] Input
s_axi_mm_x_arlock Input
s_axi_mm_x_arprot[2:0] Input
s_axi_mm_x_arqos[3:0] Input
s_axi_mm_x_arregion[3:0] Input
s_axi_mm_x_aruser Input
s_axi_mm_x_rid[3:0] Output
s_axi_mm_x_rlast Output
s_axi_mm_x_rvalid Output
s_axi_mm_x_rready Input
s_axi_mm_x_ruser Output
s_axi_mm_p_x_awid[3:0] Input
Burst size (AWSIZE) encoding: 000 = 8-bit, 001 = 16-bit, 010 = 32-bit, 011 = 64-bit
Burst type (AWBURST) encoding: 00 = Fixed, 01 = Incremental, 10 = Wrapping, 11 = Reserved
s_axi_mm_p_x_awvalid Input
s_axi_mm_p_x_awready Input
s_axi_mm_p_x_awcache[3:0] Input
s_axi_mm_p_x_awlock Input
s_axi_mm_p_x_awprot[2:0] Input
s_axi_mm_p_x_awqos[3:0] Input
s_axi_mm_p_x_awregion[3:0] Input
s_axi_mm_p_x_awuser Input
s_axi_mm_p_x_wvalid Input
s_axi_mm_p_x_wready Input
s_axi_mm_p_x_wuser Input
s_axi_mm_p_x_bid[3:0] Input
s_axi_mm_p_x_bvalid Input
s_axi_mm_p_x_bready Input
s_axi_mm_p_x_buser Input
s_axi_mm_p_x_arid[3:0] Input
Burst size (ARSIZE) encoding: 000 = 8-bit, 001 = 16-bit, 010 = 32-bit, 011 = 64-bit
Burst type (ARBURST) encoding: 00 = Fixed, 01 = Incremental, 10 = Wrapping, 11 = Reserved
s_axi_mm_p_x_arvalid Input
s_axi_mm_p_x_arready Input
s_axi_mm_p_x_arcache[3:0] Input
s_axi_mm_p_x_arlock Input
s_axi_mm_p_x_arprot[2:0] Input
s_axi_mm_p_x_arqos[3:0] Input
s_axi_mm_p_x_arregion[3:0] Input
s_axi_mm_p_x_aruser Input
s_axi_mm_p_x_rid[3:0] Input
s_axi_mm_p_x_rlast Input
s_axi_mm_p_x_rvalid Input
s_axi_mm_p_x_rready Input
s_axi_mm_p_x_ruser Input
m_axi_mm_x_awid[3:0] Output
Burst size (AWSIZE) encoding: 000 = 8-bit, 001 = 16-bit, 010 = 32-bit, 011 = 64-bit
Burst type (AWBURST) encoding: 00 = Fixed, 01 = Incremental, 10 = Wrapping, 11 = Reserved
m_axi_mm_x_awvalid Output
m_axi_mm_x_awready Input
m_axi_mm_x_awcache[3:0] Output
m_axi_mm_x_awlock Output
m_axi_mm_x_awprot[2:0] Output
m_axi_mm_x_awqos[3:0] Output
m_axi_mm_x_awuser Output
m_axi_mm_x_wvalid Output
m_axi_mm_x_wready Input
m_axi_mm_x_wuser Output
m_axi_mm_x_bid[3:0] Input
m_axi_mm_x_bvalid Input
m_axi_mm_x_bready Output
m_axi_mm_x_buser Input
m_axi_mm_x_arid[3:0] Output
Burst size (ARSIZE) encoding: 000 = 8-bit, 001 = 16-bit, 010 = 32-bit, 011 = 64-bit
Burst type (ARBURST) encoding: 00 = Fixed, 01 = Incremental, 10 = Wrapping, 11 = Reserved
m_axi_mm_x_arvalid Output
m_axi_mm_x_arready Input
m_axi_mm_x_arcache[3:0] Output
m_axi_mm_x_arlock Output
m_axi_mm_x_arprot[2:0] Output
m_axi_mm_x_arqos[3:0] Output
m_axi_mm_x_aruser Output
m_axi_mm_x_rid[3:0] Input
m_axi_mm_x_rlast Input
m_axi_mm_x_rvalid Input
m_axi_mm_x_rready Output
m_axi_mm_x_ruser Input
M_AXI_MM_0 To Memory
M_AXI_MM_1 To Memory
Core Architecture
This section describes the core architecture of the VCU Sync IP with an overview of its modules and interfaces.
The major modules of the VCU Sync IP core are the control, consumer, and producer modules.
Control Module
The control module implements the programming register interface and the ring buffer mechanism of buffer control.
Consumer Module
The consumer module receives AXI read transactions from the consumer and schedules the reads to memory based on the
tracking information from the producer. The consumer module implements fencing: It holds the read transactions in the internal
FIFO and waits for the producer write transactions to complete. When the producer write transactions for a specific section of
memory are completed, the consumer allows the read transactions to go to the DRAM.
Producer Module
The producer module tracks AXI write transactions from the producer and sends write tracking information to the consumer for
gating or un-gating the read transactions.
Clocking
The Sync IP datapath and control path run in the same clock domain as the VCU in Xilinx low-latency mode, which is the same clock as the VCU encoder AXI bus.
Resets
The Sync IP uses the same synchronous reset as the VCU in Xilinx low-latency mode. There is no specific reset sequence for the Sync IP. Additionally, the Sync IP has a per-channel soft reset that is used to clear the internal states of each channel. This soft reset is software-programmable and is auto-cleared internally.
The VCU encoder and decoder operate in slice mode for low-latency use cases. An input frame is divided into multiple slices (8
or 16) horizontally, and the encoder generates a slice_done interrupt at the end of every slice. Generated NAL unit data can be
passed to a downstream element immediately without waiting for the whole frame to be encoded. The VCU decoder also starts
processing data as soon as one slice of data is ready in the decoder circular buffer instead of waiting for complete frame data.
The hardware Sync IP shown in the block diagram is responsible for synchronizing the AXI read/writes between capture DMA
and the VCU encoder.
Capture DMA writes video buffers in raster scan order. The Sync IP core monitors the buffer level while capture DMA is writing into DRAM, and allows the encoder to read input buffer data if the requested data has already been written by the DMA; otherwise, it blocks encoder AXI transactions until the capture DMA completes its writes to that section.
On the decoder side, the VCU decoder writes decoded video buffers into DRAM in block raster scan order, and the display reads
the data in raster scan order. The software ensures a phase difference of ~frame_period/2 between the VCU decoder start and
display read so that the decoder is ahead of the display. This is achieved by releasing decoded buffers early to display stack and
waiting 1/2 frame duration at base-sink/kmssink before setting the plane for display.
Capture to Encode
The v4l2src component (producer) programs the Sync IP for the capture to encode use case. The capture driver releases the
buffer to the encoder module immediately without filling the data (early call-back) when the pipeline goes into a playing state.
The Sync IP between the capture device and VCU encoder is responsible for buffer synchronization.
The API flow is as follows:
1. v4l2src programs the Sync IP with the address ranges for the first input buffer and enables the Sync IP.
2. v4l2src calls VIDIOC_DQBUF and sends the empty input buffer to the OMX encoder using the early dequeue mechanism.
3. The OMX encoder receives this empty buffer, starts generating read requests after OMX_EmptyThisBuffer is called, and sends an event to v4l2src to start the DMA and begin filling the input buffer.
4. The Sync IP blocks the OMX encoder until v4l2src has written the data corresponding to the read request made by the OMX encoder.
5. Similarly, for consecutive buffers, v4l2src programs the Sync IP and submits the buffer to the OMX encoder using VIDIOC_DQBUF, and the Sync IP blocks the OMX encoder until v4l2src has written sufficient data. This maintains the synchronization between the producer (v4l2src) and the consumer (omxh265enc/omxh264enc).
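Step 1 involves telling the Sync IP which address ranges to track for the buffer. The following C sketch is hypothetical and simplified (the real per-buffer programming uses the xvfbsync API, whose names and structures differ); it only illustrates how the luma and chroma ranges of one NV12 buffer might be derived:

```c
#include <stdint.h>

/* Hypothetical per-frame configuration: the address ranges the Sync IP
   must track for one NV12 buffer (Y plane followed by interleaved CbCr). */
typedef struct {
    uint64_t luma_start, luma_end;      /* address range of the Y plane    */
    uint64_t chroma_start, chroma_end;  /* address range of the CbCr plane */
} syncip_framecfg_t;

/* Derive the tracked ranges from the buffer base, row stride, and height.
   For NV12, the CbCr plane follows the Y plane and is half its size. */
syncip_framecfg_t syncip_frame_ranges(uint64_t base, uint32_t stride,
                                      uint32_t height) {
    syncip_framecfg_t c;
    c.luma_start   = base;
    c.luma_end     = base + (uint64_t)stride * height;
    c.chroma_start = c.luma_end;
    c.chroma_end   = c.chroma_start + (uint64_t)stride * height / 2;
    return c;
}
```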
You can find sample Sync IP programming code and API descriptions in the following directories:
gst-plugins-good/sys/v4l2/ext/xlnx-ll/
xlnxsync.h (SyncIp driver ioctl header file)
xvfbsync.c (SyncIp Application API definition file)
xvfbsync.h (SyncIp Application API header file)
Decode to Display
The VCU OMX decoder component has a custom early FillBufferDone callback that releases the output buffer to the next component (the display) at the start of decoding, allowing concurrent access to the buffer by the decoder and display components and thus reducing latency. To maintain synchronization between the decoder and display components, so that the display DMA does not start reading before the decoder has written, the decoder stays at least half a frame period ahead of the display, as per the following API flow.
Vivado Design Suite User Guide: Designing IP Subsystems using IP Integrator (UG994)
Vivado Design Suite User Guide: Designing with IP (UG896)
Vivado Design Suite User Guide: Getting Started (UG910)
Vivado Design Suite User Guide: Logic Simulation (UG900)
This section includes information about using AMD tools to customize and generate the core in the AMD Vivado™ Design Suite.
For details, see the Vivado Design Suite User Guide: Designing with IP (UG896) and the Vivado Design Suite User Guide: Getting
Started (UG910).
Figures in this chapter are illustrations of the Vivado IDE. The layout depicted here might vary from the current version.
You can customize the VCU Sync IP core in the Vivado IDE. The following configuration options are available:
Configuration page
Memory Size
This option enables you to select the external memory size of the board. The memory sizes are fixed: 2 GB, 4 GB, 16 GB, and 16K GB (default: 16K GB).
Enable Multi Clock
Enable this option when the design uses more than one clock for the capture, consumer, and control sides. (Default: Disabled)
Producer address alignment
This option enables you to configure the producer write alignment, which aids resource optimization. The options are fixed: Unaligned, 32-byte aligned, 64-byte aligned, 128-byte aligned, 256-byte aligned, and 512-byte aligned. (Default: Unaligned)
Supported Configuration
Table: Encoder
The following diagrams show how the Sync IP is connected to the VCU LogiCORE IP.
This section contains information about constraining the core in the Vivado Design Suite.
Required Constraints
None
Clock Frequencies
This section is not applicable for this IP core.
Clock Management
For more information on clocking, see Clocking.
Clock Placement
This section is not applicable for this IP core.
Banking
This section is not applicable for this IP core.
Transceiver Placement
This section is not applicable for this IP core.
Simulation
Simulation of the VCU Sync IP is not supported.
If the device usage is more than 60% and the design is unable to meet the timing, use the following instructions for timing
closure.
Xilinx low-latency mode PS DDR NV12 HDMI Audio Video Capture and Display
This is a VCU-based HDMI design that showcases ultra-low-latency support using the Sync IP, with encoding and decoding in PS DDR for the NV12 format. This module also supports single-stream audio. See the wiki page to build and run this design module.
Xilinx low-latency mode PL DDR XV20/NV16 HDMI Video Capture and Display
This is a VCU-based HDMI design that showcases ultra-low-latency support using the Sync IP, with encoding in PS DDR and decoding in PL DDR for the XV20/NV16 formats. See the wiki page to build and run this design module.
Xilinx low-latency mode PL DDR XV20 HLG SDI Audio Video Capture and Display
This is a VCU-based SDI design that showcases ultra-low-latency support using the Sync IP, with encoding in PS DDR and decoding in PL DDR for the XV20 format. See the wiki page to build and run this design module.
Glass-to-Glass Latency
As illustrated in the following figure, glass-to-glass latency (L) is the sum of the following:
Camera latency
On-chip latency (L1)
Source frame buffer DMA latency
Encoder latency
Transmission bitstream buffer latency
Network or storage latency
On-chip latency (L2)
Coding Picture Buffer (CPB)/jitter buffer latency
Decoder latency
Decoded picture buffer (DPB) latency
Display frame buffer DMA latency
Display monitor latency
When B-frames are enabled, one frame of latency is incurred for each B-frame due to the use of the reordering buffer. To optimize the CPB latency, a handshaking mechanism in the PL is required between the decoder and the display DMA. It is assumed that both the capture side and the display side work on a common VSYNC timing.
VSYNC timing can be asynchronous, in which case a clock recovery mechanism is needed to synchronize the source timing with the sink. With independent VSYNC timing and no clock recovery mechanism, one additional frame of latency is required to synchronize with the display devices.
✎ Note: These numbers do not include the latency information from the interconnect in the fabric. For more information on
memory parts, see the UltraScale Architecture-Based FPGAs Memory IP LogiCORE IP Product Guide (PG150).
The VCU supports four latency modes: normal latency, reduced latency (also called no-reordering mode), low latency, and Xilinx
low latency modes. The pipeline instantaneous latency can vary depending upon the frame structure, encoding standard, levels,
profiles, and target bitrate.
Normal-Latency
The VCU encoder and decoder work at the frame level. All possible frame types (I, P, and B) are supported and there is no restriction on the GOP structure. The end-to-end latency depends on the profile/level, GOP structure, and the number of internal buffers used for processing. This is the standard latency and can be used with any rate control mode.
No Reordering (Reduced-Latency)
The VCU encoder works at frame level. Hardware rate control is used to reduce the bitrate variations. I-only, IPPP, and low-
delay-P are supported. There is no output reordering, thus reducing latency on the decoder side. The VCU continues to
operate at frame level.
Low-Latency
The frame is divided into multiple slices; the VCU encoder output and decoder input are processed in slice mode, while the VCU encoder input and decoder output still work in frame mode. The VCU encoder generates a slice-done interrupt at the end of every slice and outputs the stream buffer for that slice, which is immediately available for the next element's processing. With multiple slices, it is therefore possible to reduce the VCU processing latency from one frame to one-frame/num-slices. In the low-latency mode, a maximum of four streams for the encoder and two streams for the decoder can be run.
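The one-frame to one-frame/num-slices reduction is simple arithmetic; the following sketch makes the figures concrete (illustrative only, using integer microseconds and ignoring pipeline and interconnect overheads):

```c
/* Approximate slice-mode processing latency in microseconds: one frame
   period divided by the number of slices. At 1080p60 a frame period is
   ~16.7 ms, so 8 slices bring the latency down to ~2.1 ms per slice. */
unsigned slice_latency_us(unsigned fps, unsigned num_slices) {
    unsigned frame_time_us = 1000000u / fps;   /* one frame period */
    return frame_time_us / num_slices;         /* latency per slice */
}
```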
Xilinx Low-Latency
In the low-latency mode, the VCU encoder and decoder work at subframe or slice level boundary but other components at
the input of encoder and output of decoder namely capture DMA and display DMA still work at frame level boundary. This
means that the encoder can read input data only when capture has completed writing full frame. In the Xilinx low-latency
mode, the capture and display also work at subframe level thus reducing the pipeline latency significantly. This is made
possible by making the producer (Capture DMA) and the consumer (VCU encoder) work on the same input buffer
concurrently but maintaining the synchronization between the two such that consumer read request is unblocked only
once the producer is done writing the data required for that read request. This functionality to maintain synchronization is
managed by a separate IP block called the synchronization IP.
Similarly, the decoder and the display are also allowed to have concurrent access to the same buffer, but here there is no
separate hardware synchronization IP block between them. The software handles the synchronization by making sure that
buffer starts getting displayed only when the decoder has written at least half a frame period of data.
Similar to the low-latency mode, the Xilinx low-latency also supports a maximum of four streams for the encoder and two
streams for the decoder. See VCU Sync IP v1.0 for more information.
The maximum number of streams should be equivalent to 4kp60 bandwidth. Following are the possible combinations of latency
modes:
The VCU encoder and decoder use a mono-threaded, microcontroller-based scheduler for sending commands to the underlying hardware IP blocks. In a multi-stream environment, command requests for one stream can be blocked if the scheduler is busy serving commands for another stream in parallel, causing a momentary spike in latency. A spike in latency can also occur in an already running pipeline whenever a new pipeline is launched or closed, because the command requests for the running stream can be blocked while a new channel is created for encoding a new stream, or while an existing channel is destroyed during closure. Latencies can grow as the number of streams increases.
Encoder
The following table shows the latency numbers for the 1x, 2x, and 4x encoder use cases. This data is for NV12 1080p60 HEVC. It captures the instantaneous latency of the pipeline by recording the number of samples falling in each latency range.
Range (ms)   1x (1)   2x (2)   4x (3)
[0-1]        0        0        0
[1-2]        0        0        0
[4-5]        0        7        10443
[5-6]        0        1        9148
[6-7]        0        0        9
[7-8]        0        1        7
[8-9]        0        2        1
[9-10]       0        0        0
[10-11]      0        0        1
1. 1x: Use case with a single stream (e.g. video0)
2. 2x: Use case with two streams (e.g. video0 and video1) running in parallel
3. 4x: Use case with four streams (e.g. video0, video1, video2 and video3) running in parallel
Decoder
The following table shows the latency numbers for the 1x and 2x decoder use cases. This data is for NV12 4kp30 HEVC:
Range (ms)   1x       2x
[0-1]        0        0
[1-2]        0        0
[2-3]        0        0
[3-4]        0        0
[4-5]        0        0
[5-6]        0        0
[6-7]        0        2
[7-8]        4        6
[9-10]       4        2130
[10-11]      4        3
This section describes some recommended settings to be used for encoder and decoder to run Xilinx low-latency mode
pipelines with optimum latency.
The following are the common encoder parameters for Xilinx low latency pipelines:
✎ Note: Use the decoder parameter internal-entropy-buffers=3 for the XV20 2x 4kp30 use case only, due to memory constraints.
For 4x HEVC serial pipelines, it is recommended to set processing-deadline=5 ms for NV12 and processing-deadline=7 ms for XV20 to mitigate the higher latencies seen in the multi-stream environment, as shown in the preceding table.
It is recommended to use more output buffers for HEVC use cases in multi-stream scenarios to optimize the latencies further. This is possible by prefixing the pipeline with the ENC_EXTRA_OP_BUFFERS environment variable, as shown in the following pipelines:
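As a sketch, a pipeline prefixed with ENC_EXTRA_OP_BUFFERS might look like the following; the device node, caps, and buffer count are illustrative assumptions, not recommended values:

```shell
# Allocate five extra output buffers on the encoder for this run only
# (pipeline body is illustrative; adapt to your capture source):
ENC_EXTRA_OP_BUFFERS=5 gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 \
  ! video/x-raw,format=NV12,width=3840,height=2160,framerate=30/1 \
  ! omxh265enc ! fakesink
```

Because the variable is set as a command prefix, it applies only to that gst-launch-1.0 invocation and does not change the environment of the shell.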
The following table shows the latency for the VCU pipeline stages.
1. The values in the preceding table are estimated numbers for 4kp60.
2. In the case of normal latency, the decoder reported latency is calculated according to the following formula, where the DPB size depends on profile and level: reported latency = DPB size + internal entropy buffers + reconstruction buffers + concealment buffers.
3. Xilinx low-latency is achieved using Sync IP. For more information, see VCU Sync IP v1.0.
4. Xilinx low-latency supports 25 Mb/s for single stream of 2160p60 and 6 Mb/s for four streams of 1080p60.
5. The exact worst case latencies depend upon the input content and system load. Latency numbers mentioned in this
table correspond to the theoretical latencies reported by the respective elements. The latencies of elements should
stay under the reported ones.
Latency Numbers
Latency numbers are derived from the following equations:
omxh265enc (HEVC encoder): 1 frame period for input capture buffer rounded to ms + 1 ms margin = 17 ms + 1 ms → 18 ms
omxh264enc (AVC encoder): 1 frame period for input capture buffer rounded to ms + 1 intermediate buffer + 1 ms margin = 17 ms + 17 ms + 1 ms → 35 ms
Normal latency (omxh265dec and omxh264dec, AVC/HEVC decoder): DPB size (5 for the default level, that is, 5.2) + internal entropy buffers (default = 5) + reconstruction buffers (default = 1) + concealment buffers (default = 1) = 12 × 16.66 ms → 200 ms
Reduced latency: 1 frame period to insert the frame into the decoder + 1 frame period to decode + 1 frame period margin = 16.6 ms + 16.6 ms + 16.6 ms → 50 ms
1. All the equations in this table have been calculated for 60 fps video.
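The normal-latency decoder equation can be sanity-checked with simple shell arithmetic; the buffer counts below are the defaults quoted in the equations above:

```shell
# 12 buffered frames at a 16.66 ms frame period (60 fps) is roughly 200 ms
dpb=5; entropy=5; recon=1; conceal=1
frames=$((dpb + entropy + recon + conceal))                    # = 12
awk -v n="$frames" 'BEGIN { printf "%.1f ms\n", n * 16.66 }'   # prints 199.9 ms
```

The guide rounds this figure up to the 200 ms quoted in the table.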
Usage
Normal Latency
This is the standard latency mode and can be used with any of the rate control modes.
Ctrl-SW Command
The v4l2 capture control software encoder application demonstrates AMD's Low-Latency feature using the VCU ctrlsw
APIs. It is an enhanced version of normal VCU ctrlsw app (ctrlsw_encoder).
Refer to this Wiki page section for commands and additional details.
Ctrl-SW command
At the control software level, it can be specified using command line arguments.
Low Latency
This mode supports sub-frame latency. For the encoder, it is recommended to set the rate control mode to low-latency for best performance and to pass alignment = NAL through caps at the encoder source pad. For the decoder, set the low-latency parameter, with alignment set to NAL through caps on the sink pad. Low latency can be enabled at the encoder and decoder sides as follows:
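As a non-authoritative sketch of these settings, only the alignment caps and the low-latency properties are taken from the text above; the capture device, resolution, and bitrate are assumptions:

```shell
# Encoder side: low-latency rate control plus alignment=nal caps on the
# encoder source pad; decoder side: low-latency=1 behind the same NAL-aligned caps.
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 \
  ! video/x-raw,format=NV12,width=3840,height=2160,framerate=60/1 \
  ! omxh265enc control-rate=low-latency target-bitrate=25000 \
  ! video/x-h265,alignment=nal \
  ! omxh265dec low-latency=1 \
  ! kmssink
```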
Ctrl-SW command
At the control software level, it can be specified through command line arguments.
Client
Server
Ctrl-SW command
The Xilinx low-latency mode is not supported in the example control software encoder application ctrlsw_encoder that comes with PetaLinux, because this latency mode is used mainly for live use cases, which ctrlsw_encoder does not support. However, a separate v4l2 capture control software encoder application is available that also supports AMD's low-latency feature using the ctrlsw APIs. It is an extension of the normal VCU ctrlsw app (ctrlsw_encoder) that supports live record and stream-out use cases. Refer to this Wiki page for commands and additional details.
Latency
The GStreamer framework includes a tracing module that helps determine source to sink latencies by injecting custom events
at source and processing them at sinks. It effectively measures the time between when the buffer is produced by the source
pad of the first element and when it reaches the sink pad of the last element. Consider the following pseudo pipeline:
The GStreamer tracing module measures the latency introduced by element ! element, which is the inner processing of the
pipeline. The rest (introduced by the capture source and display device) cannot be measured accurately by the user space.
Each element reports to GStreamer the maximum latency time it takes to output the buffer after it is received. GStreamer uses
this information for synchronization.
The latency tracer module gives instantaneous latencies which might not be the same as the reported latencies. The latencies
might be higher if the inner pipeline (element ! element) takes more time, or lower if the inner pipeline is running faster, but the
GStreamer framework waits until the running time equals the reported latency.
You can measure the average userland latency using the following formulas:
Average userland latency = MAX (Pipeline reported latency, AVG (Instantaneous latency))
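The formula can be illustrated with a toy computation; the instantaneous samples and the 16 ms reported latency below are made-up numbers, not measurements:

```shell
# Average the instantaneous samples, then take the max against the
# pipeline-reported latency, as in the formula above:
samples="9.3 9.5 9.4 9.6"; reported=16
avg=$(echo "$samples" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.2f", s / NF }')
awk -v a="$avg" -v r="$reported" 'BEGIN { print (a > r) ? a : r }'   # prints 16
```

Here the average instantaneous latency (9.45 ms) is below the reported latency, so the reported value dominates.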
The following section provides an example of measuring the latency-related data for the AVC 4kp60
capture→encode→decode→display use case. Follow the setup steps described in the wiki.
Pipeline
Pipeline reported latency = Non-zero value assigned to the latency set field in above string=16 ms
GST_DEBUG="GST_TRACER:7" GST_TRACERS=latency
GST_DEBUG_FILE="/run/instantaneous_latency_serial_4k_avc.txt" gst-launch-1.0 -v v4l2src
io-mode=dmabuf-import device=/dev/video0 num-buffers=1000 '!' video/x-raw, width=3840, height=2160,
The initial few readings might go high due to the initialization time, and after that the latency becomes stable, as shown in the
previous snapshot where it stabilizes to ~9.4 ms.
Check the time (in nanoseconds) for latency marked in bold in the following logs. The initial few readings might be high due to initialization time; after that, the latency becomes stable. For example, the following logs show ~12 ms of latency for the stream-out pipeline.
Check the time (in nanoseconds) for latency marked in bold in the following logs. The initial few readings might be high due to initialization time; after that, the latency becomes stable. For example, the following logs show ~17 ms of latency for the stream-in pipeline.
Modify as below:
GST_DEBUG=*v4l2*:6,*omx*:6,*base*:6 GST_DEBUG_FILE="/run/latency.txt"
gst-launch-1.0 <pipeline>
grep -inr "latency" /run/latency.txt | grep v4l2
grep -inr "latency" /run/latency.txt | grep omx
This should show the latency reported by v4l2src, encoder and decoder. For example, for the following use-case:
According to the above log, the v4l2src reported latency is 16.66 ms.
Checking OMX encoder reported latency:
The overall latency of the encoder is the steady state latency which is equal to the sum of the input latency, the hardware
latency, and the output latency. The bitstream buffer latency is application dependent. The picture reordering latency equals one
frame duration per B-frame.
The overall latency of the decoder is the steady state latency, equal to the sum of the hardware latency and the output latency.
Hardware latency is the sum of the start code detection (SCD) latency, the entropy decoding latency, and the pixel decoding latency. Initialization latency is the sum of the CPB latency and the Decoder Initialization (Dec Init) latency.
Debugging
This appendix includes details about resources available on the Support website and debugging tools.
★ Tip: If the IP generation halts with an error, there might be a license issue.
To help in the design and debug process when using the core, the Support web page contains key resources such as product
documentation, release notes, answer records, information about known issues, and links for obtaining further product support.
The Community Forums are also available where members can learn, participate, share, and ask questions about AMD Adaptive
Computing solutions.
Documentation
This product guide is the main document associated with the core. This guide, along with documentation related to all products
that aid in the design process, can be found on the Support web page or by using the AMD Adaptive Computing Documentation
Navigator. Download the Documentation Navigator from the Downloads page. For more information about this tool and the
features available, open the online help after installation.
Answer Records
Answer Records include information about commonly encountered problems, helpful information on how to resolve these
problems, and any known issues with an AMD Adaptive Computing product. Answer Records are created and maintained daily
to ensure that users have access to the most accurate information available.
Answer Records for this core can be located by using the Search Support box on the main Support web page. To maximize your
search results, use keywords such as:
Product name
Tool message(s)
Summary of the issue encountered
A filter search is available after results are returned to further target the results.
See AR 66763.
Technical Support
AMD Adaptive Computing provides technical support on the Community Forums for this AMD LogiCORE™ IP product when used
as described in the product documentation. AMD Adaptive Computing cannot guarantee timing, functionality, or support if you
do any of the following:
Debug Tools
There are many tools available to address H.264/H.265 Video Codec Unit (VCU) design issues. It is important to know which
tools are useful for debugging various situations.
The AMD Vivado™ Design Suite debug feature inserts logic analyzer and virtual I/O cores directly into your design. The debug
feature also allows you to set trigger conditions to capture application and integrated block port signals in hardware. Captured
signals can then be analyzed. This feature in the Vivado IDE is used for logic debugging and validation of a design running in
AMD devices.
The Vivado logic analyzer is used to interact with the logic debug LogiCORE IP cores, including:
See the Vivado Design Suite User Guide: Programming and Debugging (UG908).
Reference Boards
Various AMD development boards support the VCU. These boards can be used to prototype designs and establish that the core
can communicate with the system.
ZCU106
Hardware Debug
Hardware issues can range from link bring-up to problems seen after hours of testing. This section provides debug steps for
common issues. The Vivado debug feature is a valuable resource to use in hardware debug. The signal names mentioned in the
following individual sections can be probed using the debug feature for debugging the specific problems.
General Checks
Ensure that all the timing constraints for the core were properly incorporated from the example design (if applicable) and that all
constraints were met during implementation.
If using MMCMs in the design, ensure that all MMCMs have obtained lock by monitoring the locked port.
If your outputs go to 0, check your licensing.
Troubleshooting a VCU-based system can be complicated. This appendix offers a decision tree to guide you efficiently to the most productive areas to investigate. The troubleshooting steps apply to a VCU-based system capable of performing live capture, encode, decode, transport, and display.
Debug Flow
Troubleshooting
If you are using capture and display interfaces based on high-speed serial I/O (e.g. HDMI, MIPI, SDI), ensure the physical links are up before debugging the upper layers. Follow these steps:
1. Monitor the LEDs that indicate heartbeat clocks derived from the reference clock input to the transceiver. If the heartbeat clocks are absent, check the GT PLL lock status.
2. If the PLL is not locked, check the Video PHY IP configuration and the reference clock constraint. Most of the time, the reference clock is sourced from a programmable clock chip. Ensure the frequency is programmed properly in the device tree source file for the programmable clock chip, and that the frequency is not modified by other Linux devices (this happens when the phandle of the clock source node is shared between multiple components).
3. If the capture interface is compatible with the Video for Linux (v4l2) framework, use the media-ctl API to verify the link
topology and link status.
media-ctl -p -d /dev/mediaX
The mediaX node represents the pipeline device in the v4l2 pipeline. If you have multiple TPG/HDMI pipelines, they appear as /dev/media0, /dev/media1, and so on. The link status is indicated in the corresponding sub-device node. For example, an HDMI RXSS node indicates the link status for the HDMI link in its sub-device properties when using the above command. If link-up fails, a "no-link" message appears; otherwise, a valid resolution with the proper color format is detected.
4. If the display interface is compatible with the DRM/KMS framework, ensure DRM is linking up by running one of the following commands:
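A commonly used check here is the modetest utility from the libdrm test suite; treat the driver name as an assumption (the Xilinx DRM driver typically registers as xlnx):

```shell
# List the connectors, encoders, and CRTCs reported by the DRM driver;
# a connected display shows a "connected" status with its supported modes.
modetest -M xlnx
```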
1. Ensure the VCU init driver probe is successful during the boot process. Check the init driver with the following command:
lsmod
This should show the al5d, al5e, and allegro modules as inserted.
4. Check if sufficient CMA is allocated. VCU operation requires at least 1000 MB of CMA. You can check the available CMA using:
cat /proc/meminfo
5. If the CMA size is not sufficient, increase the size either in U-Boot or while building the Linux kernel:
a. Stop the boot process at the U-Boot level and run the following command to increase the CMA size.
b. In the kernel config (for PetaLinux, run petalinux-config -c kernel; for standalone compilation, run make menuconfig), change the size of the CMA by selecting Library Routines > DMA Contiguous Memory Allocator > Size in MegaBytes.
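The two routes above can be sketched as follows; the 1000M size is an example value taken from the minimum quoted earlier, and the setenv command mirrors the one shown later in this section:

```shell
# Check the currently reserved and free CMA:
grep -i cma /proc/meminfo

# U-Boot route: append a larger CMA size to the kernel command line, then boot
setenv bootargs ${bootargs} cma=1000M
boot
```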
6. Run a VCU sanity test using one of the VCU Control Software sample applications. You can use the ctrlsw_encoder and ctrlsw_decoder applications to run a sanity test.
To debug control software based application (ctrlsw_encoder, ctrlsw_decoder), perform the following steps.
1. Run file to file-based encoder and decoder sample applications to ensure they operate as expected.
Encoding file-to-file
The application exits with an appropriate error message; refer to the error description table for more details. For example, if the VCU performance is insufficient to encode a 4kp resolution file at 120 fps, the following message appears:
To mitigate the memory error, check that CMATotal and CMAFree are sufficient using the following command:
cat /proc/meminfo
Increase the CMA size either in U-Boot, using the command setenv bootargs ${bootargs} cma=xxxxM, or while building the Linux kernel using the kernel configuration property.
2. If the output file is not generated and no failure/error message occurs on the terminal, check for kernel level message
using a command:
dmesg
Check for the "al5e" and "al5d" keywords in the dmesg log for errors, if any.
3. If the application freezes, debug it using gdb. First remove optimization from the control software Makefile:
replace CFLAGS+=-O3 with CFLAGS+=-O0
Then run the application under gdb:
(gdb) run
When it hangs, use the backtrace command to view the function flow at the point of the hang:
(gdb) bt
1. Run capture to display pipeline to ensure live capture to display is working as expected.
2. If you see a streamon error with the capture → display pipeline, ensure that the format is set to NV12 using the v4l2-ctl and media-ctl APIs inside the HDMI configuration script. Here is an example:
3. Run a pipeline with VCU components in the pipeline. Use omxh264/omxh265enc/dec components in the pipeline.
4. For a list of supported properties of omxh264/omxh265enc/dec, use the following command:
gst-inspect-1.0 <omxh264/omxh265enc/dec>
which lists all the supported properties of VCU encoder and decoder blocks. Use the properties as appropriate in the
pipeline.
5. Start with a known pipeline example like the one below:
gst-launch-1.0 -v \
v4l2src num-buffers=2100 device=/dev/video8 io-mode=4 \
! video/x-raw,format=NV12,width=3840,height=2160,framerate=30/1 \
! omxh265enc target-bitrate=70000 prefetch-buffer=TRUE \
control-rate=2 gop-length=30 b-frames=0 \
! video/x-h265,profile=main,level=\(string\)5.1,tier=main \
! omxh265dec low-latency=0 internal-entropy-buffers=2 \
! queue \
! fpsdisplaysink name=fpssink text-overlay=false \
video-sink="kmssink max-lateness=100000000 async=false \
sync=true driver-name=xilinx_drm_mixer"
For additional debugging information, see the Debugging Tools page of GStreamer website at
https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html.
If the problem is low frame rate or frame dropping, follow these steps to debug the system.
1. Use the fpsdisplaysink element to report frame rate and dropped frame count (refer to the example above)
2. Check the QoS settings of HP ports that interface VCU with PS DDR. Check for outstanding transaction count
configuration.
5. Try different encoder/decoder properties to see if the performance drop is related to any of them. Avoid B-frames in the pipeline to see if there is any performance improvement; if there is, it might indicate a system bandwidth issue. Reduce the bit rate to see if there is an improvement in frame rate. If reducing the target bit rate gives better throughput, it might indicate a system bandwidth issue.
6. Check for CPU usage while the pipeline is running. A higher CPU usage indicates that there could be an impact in interrupt
processing time which explains the lower framerate.
7. Try using a queue element between two GStreamer plugins that are in the datapath to check for any performance
improvement.
8. Check for DDR bandwidth utilization using DDR APM and VCU APM.
9. Use gst-shark (a GStreamer-based tool) to verify performance and create scheduletime and interlatency plots to
understand which element is causing performance drops.
10. Using environment variables, you can increase the encoder input and output buffer counts for debugging or performance tuning. Use the ENC_EXTRA_IP_BUFFERS and ENC_EXTRA_OP_BUFFERS environment variables to provide the extra buffers needed on the encoder ports. For example, suppose X buffers are allocated for encoder input/output by default. To add five more, assign ENC_EXTRA_IP_BUFFERS=5, so the number of allocated encoder input buffers becomes X+5. Similarly, for encoder output buffers, use ENC_EXTRA_OP_BUFFERS. The pipelines are as follows:
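A minimal sketch of such a prefixed pipeline, assuming a v4l2 capture source and a simple encode-only path (both are illustrative, not recommended settings):

```shell
# Five extra input and five extra output buffers on the encoder (X+5 each):
ENC_EXTRA_IP_BUFFERS=5 ENC_EXTRA_OP_BUFFERS=5 gst-launch-1.0 \
  v4l2src device=/dev/video0 io-mode=4 \
  ! video/x-raw,format=NV12,width=1920,height=1080,framerate=60/1 \
  ! omxh264enc ! fakesink
```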
If the problem is high end-to-end latency, follow these steps to debug the system:
1. Understand how much the jitter buffer is used in the client side pipeline (udpsrc).
✎ Note: The latency for the rtpjitterbuffer should correspond to the CPB size in the server pipeline. It is often useful to use the low-latency rate control (hardware rate control) algorithm to maintain a smaller CPB buffer, which reduces the rtpjitterbuffer latency.
2. Latency is often related to frame drops. Ensure there are no frame drops in the pipeline before measuring latency.
3. Check whether the filler-data setting in the encoder pipeline is causing longer latency. If so, set filler-data=false in the pipeline and check the latency again.
4. Use a fine-tuned internal-entropy-buffers count.
✎ Note: The internal-entropy-buffers setting impacts latency, and an optimal value needs to be used. Tune this decoder property first to ensure there are no frame drops, and then to optimize the pipeline latency.
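The steps above can be sketched as a server/client pair; every address, port, bitrate, and buffer value below is an illustrative assumption, not a validated setting:

```shell
# Server: disable filler data and keep the CPB small via low-latency rate control
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 \
  ! video/x-raw,format=NV12,width=1920,height=1080,framerate=60/1 \
  ! omxh264enc control-rate=low-latency filler-data=false target-bitrate=6000 \
  ! rtph264pay ! udpsink host=192.168.1.2 port=5004

# Client: size the jitter buffer (ms) to match the server CPB, and tune
# internal-entropy-buffers on the decoder
gst-launch-1.0 udpsrc port=5004 caps="application/x-rtp,media=video,encoding-name=H264" \
  ! rtpjitterbuffer latency=50 \
  ! rtph264depay ! h264parse ! omxh264dec internal-entropy-buffers=5 ! kmssink
```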
Interface Debug
To verify that the interface is functional, try reading from a register that does not have all 0s as its default value. Output
s_axi_arready asserts when the read address is valid, and output s_axi_rvalid asserts when the read data/response is
valid. If the interface is unresponsive, ensure that the following conditions are met:
AXI4-Stream Interfaces
If transmit <interface_name>_tready is stuck Low following the <interface_name>_tvalid input being asserted,
the core cannot send data.
If the receive <interface_name>_tvalid is stuck Low, the core is not receiving data.
Check that the aclk inputs are connected and toggling.
Check that the AXI4-Stream waveforms are being followed.
Check core configuration.
GStreamer
OpenMAX Integration Layer
VCU Control Software
GStreamer is a cross-platform open source multimedia framework. GStreamer provides the infrastructure to integrate multiple multimedia components and create pipelines. The GStreamer framework is implemented on top of the OpenMAX™ Integration Layer API. The supported GStreamer version is 1.14.4.
The OpenMAX Integration Layer API defines a royalty‑free standardized media component interface to enable developers and
platform providers to integrate and communicate with multimedia codecs implemented in hardware or software.
The VCU Control Software is the lowest level software visible to VCU application developers. All VCU applications must use an
AMD provided VCU Control Software, directly or indirectly. The VCU Control Software includes custom kernel modules, custom
user space library, and the ctrlsw_encoder and ctrlsw_decoder applications. The OpenMAX IL (OMX) layer is integrated on top
of the VCU Control Software.
User applications can use the layer or layers of the VCU software stack that are most appropriate to their requirements.
Software Prerequisites
All of the software prerequisites for using the VCU are included in AMD PetaLinux, which is included in the AMD Vitis™ software development platform release. Refer to the Release Notes Links at the bottom of the Embedded Design Hub - PetaLinux Tools or to the Release Notes link on the download page.
For the AMD Linux kernel, refer to xilinx_zynqmp_defconfig at linux-xlnx/arch/arm64/configs/xilinx_zynqmp_defconfig, where all the AMD driver configuration options are enabled.
For the vanilla Linux kernel, refer to xilinx_zynqmp_defconfig to enable or disable an AMD driver in the Linux kernel. If the design enables or disables the AMD IP, the corresponding device-tree node should be set so that the driver probes at kernel run time.
The application software using the VCU is written on top of the following libraries and modules, shown in the following table.
The VCU supports multi-standard video encoding, shown in the following table.
Profiles
HEVC: Main, Main Intra, Main10, Main10 Intra, Main 4:2:2 10, Main 4:2:2 10 Intra
AVC: Baseline, Main, High, Progressive High and Constrained High (subsets of the High profile), High10, High 4:2:2, High10 Intra, High 4:2:2 Intra
Bit Depth
Chroma Format
Bit Rate Limited by level and profile Limited by level and profile
Rate Control Mode control-rate Available bit rate control modes: Constant Bit Rate (CBR) and Variable Bit Rate (VBR) encoding.
Default value: 2
✎ Note: Variable Skip Frames and Constant Skip Frames come from the default GStreamer plugin and are not supported in the VCU.
1, 2, 3, 4, 5
Use Output Port Buffer Pool use-out-port-pool If enabled, use a DMA-based buffer pool on the encoder's output port.
Default value: FALSE
Maximum Bit Rate 6 max-bitrate Maximum bitrate, in Kbps, used in VBR rate control mode.
Default value: target-bitrate. The max-bitrate value should always be greater than or equal to the target-bitrate.
Default value: 64
Profile Through caps Supported profiles for corresponding codec are mentioned in
Table 1
Default value: HEVC_MAIN
Level Through caps Supported Levels for corresponding codec are mentioned in
Table 1
Default value: 51
Tier Through caps Supported Tier for H.265 (HEVC) codec is mentioned in Table
1
Default value: MAIN_TIER
Slice QP / I-frame QP quant-i-frames Quantization parameter for I‑frames in CONST_QP mode, also
used as initial QP in other rate control modes. Range 0–51.
Default value: 30
GOP Length gop-length Distance between two consecutive Intra frames. Specify an integer value between 0 and 1,000. Values 0 and 1 correspond to Intra-only encoding.
Default value: 30
Number of B-frames b-frames Number of B-frames between two consecutive P-frames. Used
only when gop-mode is basic or pyramidal.
Number of Slices num-slices Specifies the number of slices used for each frame. Each slice contains one or more full LCU rows, and the slices are spread over the frame as regularly as possible. The minimum value is 1. The maximum supported values are as follows:
In low-latency mode:
H.264 (AVC): 32
H.265 (HEVC): 22
More than 16 slices does not bring much benefit in low-latency mode because of the overhead.
In normal latency mode:
H.264 (AVC): picture_height/16
H.265 (HEVC): picture_height/32
The maximum values above work in normal mode, but HEVC has further limitations when the encoder uses multiple cores. When the HEVC encoder uses multiple cores (that is, anything beyond 1080p60 resolution), the maximum num-slices is limited by the maximum number of tile rows (MaxTileRows); refer to Table 1. In 4k encoding, num-slices should be set to 8 for all latency modes to achieve the best performance.
Default value: 1
0 = basic (IPPP..IPPP…)
1 = basic-b (basic GOP settings, includes only B-frames)
2 = pyramidal (advanced GOP pattern with hierarchical B
frame, works with B=3, 5, 7 and 15)
3 = pyramidal-b (advanced GOP pattern with hierarchical
B-frames, includes only B-frames)
4 = adaptive (advanced GOP pattern with adaptive B-
frames)
5 = low-delay-p ( IPPPPP…)
6 = low-delay-b (IBBBBB…)
Gradual Decoder Refresh gdr-mode Specifies which gradual decoder refresh scheme is used when gop-mode = low-delay-p.
0 = disable
1 = vertical (Gradual refresh using a vertical bar moving
from left to right)
2 = horizontal (Gradual refresh using a horizontal bar
moving from top to bottom)
Default value: 0
Default value: 2
Filler data filler-data Enables/disables insertion of filler data in CBR rate control mode. Boolean: TRUE or FALSE.
Default value: TRUE
Entropy Mode entropy-mode Specifies the entropy mode for H.264 (AVC) encoding process
0 = CAVLC
1 = CABAC
Default value: 1
0 = enable
1 = disable
2 = disable-slice-boundary (Excludes slice boundaries
from filtering)
Default value: 0
IDR picture frequency periodicity-idr Specifies the number of frames between consecutive
instantaneous decoder refresh (IDR) pictures. The periodicity-
idr property was formerly called gop-freq-idr.
Allowed values: <Positive value> or -1 to disable IDR insertion
Default value: 0 (first frame is IDR)
Initial Removal Delay initial-delay Specifies the initial removal delay as specified in the
HRD model in milliseconds. Not used when control-rate =
disable
✎ Note: If this value is set too low (less than 1 frame period),
you can see reduced visual quality.
Default value: 1500
Coded Picture buffer size cpb-size Specifies the coded picture buffer (CPB) as specified in the
HRD model in milliseconds. Not used when control-rate =
disable
Default value: 3000
Dependent slice dependent-slice Specifies whether the additional slices are dependent on other
slice segments or regular slices in multiple slices encoding
sessions. Used in H.265 (HEVC) encoding only. Boolean: TRUE
or FALSE
Default value: False
Target slice size slice-size If set to 0, slices are defined by the num-slices parameter, else
it specifies the target slice size in bytes. Used to automatically
split the bitstream into approximately equally‑sized slices.
Range 0–65,535.
Default value: 0
0 = flat
1 = default
Default value: 1
Vertical Search Range low-bandwidth Specifies low bandwidth mode; decreases the vertical search range used for P-frame motion estimation. Boolean: TRUE or FALSE.
Default value: FALSE
Slice Height slice-height This parameter is used for 2017.4 and earlier releases only. It
is deprecated in 2018.3 and later releases. Specifies input
buffer height alignment of upstream element, if any.
Default value: 1
Aspect-Ratio aspect-ratio Selects the display aspect ratio of the video sequence to be
written in SPS/VUI.
Default value: 0
Constrained Intra Prediction constrained-intra-prediction If enabled, prediction only uses residual data and decoded samples from neighboring coding blocks that are coded using intra modes.
Long term Ref picture long-term-ref If enabled, the encoder accepts dynamically inserting and using long-term reference picture events from upstream elements. Boolean: TRUE or FALSE.
Default value: FALSE
Long term picture frequency long-term-freq Periodicity of long-term reference picture marking in the encoding process. Units in frames; distance between two consecutive long-term reference pictures.
Default value: 0
Dual pass encoding look-ahead The number of frames processed ahead of second pass
encoding. If smaller than 2, dual pass encoding is disabled.
Default value: 0
Default value: au
Note: When the encoder alignment is set to low-latency mode, it is recommended that you set the rate control mode (control-rate) to low latency. See VCU Latency Modes for how to set alignment. When using low-latency mode, the encoder and decoder are limited by the number of internal cores: the encoder supports a maximum of four streams and the decoder a maximum of two streams.
Max Quality Target max-quality-target Caps quality at a certain high limit, keeping the bit rate variable. Only used with capped variable (Capped-VBR) rate control.
Range: 0 to 20, where 0 is very poor quality and 20 is close to visually lossless quality. If the encoder cannot reach this quality level due to the video complexity and/or a restricted bitrate, or if the quality target is too high, Capped-VBR has no effect compared to VBR.
Default value: 14
✎ Note: The controlSW parameter name in the cfg file is MaxQuality.
Max Picture Size (max-picture-size): You can curtail instantaneous peaks in the bit-stream using this parameter. It works in CBR/VBR rate control only. When Max Picture Size is enabled, the VCU encoder uses CBR/VBR but also enables the hardware rate control module to keep track of the encoded frame size. The hardware rate control module adjusts the QPs within the frame to ensure that the encoded picture sizes honor the provided max-picture-size.
MaxPictureSize = TargetBitrate / FrameRate * AllowedPeakMargin
For a 100 Mb/s TargetBitrate, 60 fps FrameRate, and 10% AllowedPeakMargin, MaxPictureSize = (100,000 Kb/s / 60 fps) × 1.1 ≈ 1833 Kb.
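As an illustration of the formula above, the example values can be computed as follows (a minimal sketch mirroring the formula's names, not an actual VCU API):

```python
def max_picture_size_kb(target_bitrate_kbps, frame_rate, peak_margin):
    """MaxPictureSize (Kb) = TargetBitrate / FrameRate * AllowedPeakMargin.

    A 10% allowed peak margin corresponds to a 1.1 multiplier on the
    average per-frame budget.
    """
    return target_bitrate_kbps / frame_rate * (1.0 + peak_margin)

# 100 Mb/s target bitrate, 60 fps, 10% allowed peak margin
size_kb = max_picture_size_kb(100_000, 60, 0.10)
print(round(size_kb))  # 1833 Kb per frame
```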
Slice Type Value Selection uniform-slice-type Enable/Disable uniform slice type in slice header.
When this is enabled, the following slice-type values are used
in slice header.
Input yuv Crop Feature (input-crop): The input-crop parameter sets <pos-x, pos-y, cropWidth, cropHeight> values for the VCU encoder.
This parameter is supported only from release 2021.1 onwards.
Latency latency-mode For 2018.3 and prior releases (Not used in 2019.1 or later
releases; for 2019.1 and future releases, use the Alignment
parameter), specifies encoder latency modes:
Default value: 0
✎ Note: The low-latency latency mode is not recommended
when Rate Control is CBR/VBR.
Default ROI Quality default-roi-quality Default quality level to apply to each Region of Interest
0=high – Delta QP of -5
1=medium – Delta QP of 0
2=low – Delta QP of +5
3=don't-care – Maximum delta QP value
Default: 0
Frame Skip (skip-frame): If enabled and an encoded picture exceeds the CPB buffer size, that picture is discarded and replaced by a picture with all MB/CTB encoded as skip. Only used when control-rate=constant/variable and b-frames is less than 2.
Default: FALSE
Max picture size for frame types (max-picture-sizes): Maximum picture sizes based on frame types ('<I, P, B>'), that is, the maximum picture size of I, P, and B frames in Kb. The encoded picture size is limited to the corresponding max-picture-size-x value. If set to 0, max-picture-size-x has no effect. GstValueArray of GValue of type "gint". Write only.
The property type of max-picture-sizes is GstValueArray.
Beta offset for deblocking loop-filter-beta-offset Beta offset for the deblocking filter is used only when loop-
filters filter-mode is enabled.
Range: -6 to 6
Default: -1
Alpha offset for loop-filter-alpha-c0-offset Alpha C0 offset for the deblocking filter, used only when loop-
deblocking filters filter-mode is enabled.
Range: -6 to 6
Default: -1
1. Rate control is handled in the MCU firmware only; no signals (either in the software API or FPGA signals) are triggered during the rate control process.
2. CBR: The goal of CBR is to reach the target bitrate on average (at the level of one or a few GOPs) and to comply with the
"HRD" model, that is avoiding decoder buffer overflows and underflows. In CBR mode, a single bitrate value defines
both the target stream bitrate and the output/transmission (leaky bucket) bitrate. The reference decoder buffer
parameters are CPBSize and Initial Delay. The CBR rate control mode tries to keep the bit rate constant whilst avoiding
buffer overflows and underflows. If a buffer underflow happens, the QP is increased (up to MaxQP) to lower the size in
bits of the next frames. If a buffer overflow occurs, the QP is decreased (down to MinQP) to increase the size in bits.
3. VBR: When using VBR, the encoder buffer is allowed to underflow (be empty), and the maximum bitrate, which is the transmission bitrate used for the buffering model, can be higher than the target bitrate. VBR therefore relaxes the buffering constraints, allows the bitrate to decrease for simple content, and can improve quality by allowing more bits on complex frames. VBR mode constrains the bitrate with a specified maximum while keeping it on the target bit rate where possible. Similar to CBR, it avoids buffer underflow by increasing the QP. However, the target bit rate can be exceeded up to the maximum bit rate, so the QP has to be increased by a smaller factor. A buffer overflow results in an unchanged QP and a lower bit rate.
4. Both CBR and VBR use frame-level statistics from the hardware to update the initial QP for the next frame (rate control
can be combined with QP control that can adjust the QP at block level for improving subjective quality).
5. LOW_LATENCY rate control (hardware rate control) computes the QP at block level to reach a target bitstream size for
each frame in an accurate way, and is especially useful for the support of low-latency pipelines.
6. Xilinx low latency supports 25 Mb/s for a single 4K stream and 6 Mb/s for four 1080p streams.
7. The following is an example of v4l2src encode and file sink using custom max-picture-sizes:
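A pipeline of this shape could be used (a sketch only; the device node, caps, and the property values are illustrative assumptions and must be adapted to your board and capture source):

```shell
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 \
  ! video/x-raw,format=NV12,width=3840,height=2160,framerate=60/1 \
  ! omxh265enc control-rate=constant target-bitrate=10000 \
      max-picture-sizes="<1500,1200,1000>" \
  ! filesink location=/run/output.h265
```

The three array entries set the I, P, and B frame limits in Kb, per the max-picture-sizes description above.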
The tier and level limits are shown in the following table.
Level | Max Luma Picture Size | Max CPB Size (Main Tier) | Max CPB Size (High Tier) | Max Slice Segments per Picture | Max # of Tile Rows | Max # of Tile Columns
1 | 36,864 | 350 | - | 16 | 1 | 1
2 | 122,880 | 1,500 | - | 16 | 1 | 1
3 | 552,960 | 6,000 | - | 30 | 2 | 2
The following table shows the max supported num-slices for 1080p and 4k resolution in the subframe/normal latency mode.
HEVC 34 22
HEVC 32 22
Profiles
HEVC: Main, Main Intra, Main10, Main10 Intra, Main 4:2:2 10, Main 4:2:2 10 Intra
AVC: Baseline (except FMO/ASO), Main, High, Progressive High and Constrained High (subsets of the High profile), High10, High 4:2:2, High10 Intra, High 4:2:2 Intra
Resolutions
HEVC and AVC: 4096x2160p60 with specific devices (-2, -3); up to 3840x2160p60 otherwise
Entropy Buffers (internal-entropy-buffers): Specifies the number of decoder internal entropy buffers, used to smooth out entropy decoding performance. Specify an integer value between 2 and 16. Increasing the buffering count increases the decoder memory footprint.
Default value: 5.
Set this value to 10 for higher bit-rate use cases, for example, use cases where the bitrate is more than 100 Mb/s.
Default value: 0.
For 2018.3 and prior releases:
0 = default
1 = reduced-latency (Low reference DPB mode)
2 = low-latency
Split-input mode split-input When enabled, the decoder has 1 to 1 mapping for input and
output buffers. When disabled, the decoder copies all the
input buffers to internal circular buffer and processes them.
Default: FALSE
Pipeline used for measuring maximum bit rate for which decoder produces 30 fps.
Max-Bitrate Benchmarking
The following tables summarize the maximum bit rate achievable for 3840x2160p60 resolution, XV20 pixel format, at the GStreamer level. The maximum supported target bit rate values vary based on the elements and type of input used in the pipeline.
Maximum Bit Rate support for Record Use Case with 4kp60 Resolution
Video Recording ( Live video capture → VCU encoder → Parser → Muxer → filesink )
Format | Codec | Entropy Mode | Rate Control Mode | B-Frames = 4 | DDR Mode | Max Target Bitrate
4:2:2, 10-bit H.264 (AVC) CABAC VBR IBBBBP PS-DDR 160 Mb/s
✎ Note:
Maximum Bit Rate Support for Playback Use Case with 4kp60 Resolution
Format | Codec | Entropy Mode | Rate Control Mode | B-Frames = 4 | DDR Mode | Max Target Bitrate
4:2:2, 10-bit H.264 (AVC) CABAC VBR IBBBBP PL_DDR 120 Mb/s
Maximum Bit Rate Support for Streaming Use Case with 4kp60 Resolution
Table: Maximum Bit Rate Support for Streaming Use Case with 4kp60 Resolution
Video Streaming ( Live video capture → VCU encoder → Parser → rtppay → Stream-out Stream-in → rtpdepay → Decoder → Display )
Format | Codec | Rate Control Mode | Latency Mode | B-Frames = 0 | DDR Mode | Max Target Bitrate
Server
Client
H.265 (HEVC) with CBR Rate Control Mode and Normal Latency Mode
Server
Client
✎ Note: The above data is captured by streaming elementary stream over RTP.
Maximum Bit Rate Support for Serial Use Case with 4kp60 Resolution
Table: Maximum Bit Rate Support for Serial Use Case with 4kp60 Resolution
Format | Codec | Rate Control Mode | Latency Mode | B-Frames = 0 or 4 | DDR Mode | Max Target Bitrate
4:2:2, 10-bit | H.264 (AVC) | LOW_LATENCY | Low Latency | IPPP | Encoder (PS_DDR), Decoder (PL_DDR) | 25 Mb/s
| | | Xilinx Low Latency | IPPP | | 25 Mb/s
| | CBR + max-picture-size | Normal | IBBBBP | | 90 Mb/s
| | | Reduced | IPPP | | 200 Mb/s
H.265 (HEVC) with Low-latency Rate Control Mode and Xilinx Low Latency
H.265 (HEVC) with CBR Rate Control Mode and Normal Latency Mode
These formats signify the memory layout of pixels. They apply at the encoder input and the decoder output side.
The encoder needs to know the pixel memory layout at its input side for reading the raw data, so the corresponding video format needs to be specified at the encoder sink pad using caps.
For the decoder, you can specify the format used to write the pixels in memory by specifying the corresponding GStreamer video format using caps at the decoder source pad.
When the format is not supported between two elements, the caps negotiation fails and GStreamer returns an error. In that case, you can use the videoconvert element to perform software conversion from one format to another.
The following table shows the GStreamer and V4L2 related formats that are supported.
✎ Note:
5. Increase the read and write issuing capability of the port connected to M_AXI_DEC0. By default, it can take a maximum of four requests at a time; increasing the issuing capability keeps the port busy, with requests always in the queue.
Set the S_AXI_HP0_FPD RDISSUE (AFIFM) register to allow 16 commands
6. Increase the read and write issuing capability of the port connected to M_AXI_DEC1
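As a sketch, the register writes in steps 5 and 6 can be issued from a Linux console with devmem. The AFIFM base address for S_AXI_HP0_FPD (0xFD380000), the RDISSUE/WRISSUE offsets (0x4/0x18), and the 0xF encoding for 16 outstanding commands are assumptions here; verify them against the Zynq UltraScale+ register reference (UG1087) before use:

```shell
# S_AXI_HP0_FPD AFIFM: allow 16 outstanding read/write commands
devmem 0xFD380004 32 0xF    # RDISSUE (base 0xFD380000 + 0x4)
devmem 0xFD380018 32 0xF    # WRISSUE (base 0xFD380000 + 0x18)
```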
Now, the GStreamer, OMX, and Control Software pipelines can be run on the board.
cd project-spec/meta-user
mkdir recipe-multimedia
cd recipe-multimedia
mkdir gstreamer
b. There are five different recipe files for GStreamer that download and compile the code:
gstreamer1.0_%.bbappend
gstreamer1.0-omx_%.bbappend
gstreamer1.0-plugins-bad_%.bbappend
gstreamer1.0-plugins-base_%.bbappend
gstreamer1.0-plugins-good_%.bbappend
c. Depending on which GStreamer package the patches belong to, a bbappend file for that package needs to be created to get those patches applied and compiled on the latest source code. For example, if the patch fix is for gst-omx, follow these steps:
i. Create a gstreamer1.0-omx directory in the recipe-multimedia/gstreamer folder
cd gstreamer
mkdir gstreamer1.0-omx
cp test1.patch recipe-multimedia/gstreamer/gstreamer1.0-omx
cp test2.patch recipe-multimedia/gstreamer/gstreamer1.0-omx
vi gstreamer1.0-omx_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/gstreamer1.0-omx:"
SRC_URI_append = " \
file://test1.patch \
file://test2.patch \
"
Create similar bbappend files and folders for the other GStreamer packages to integrate any custom patches in the PetaLinux build.
4. For VCU patches, follow these steps:
a. Create a vcu directory in the recipe-multimedia folder.
cd project-spec/meta-user/recipe-multimedia
b. There are four different recipe files for the VCU that download and compile the code:
kernel-module-vcu_%.bbappend
vcu-firmware_%.bbappend
libvcu-xlnx_%.bbappend
libomxil-xlnx_%.bbappend
c. Depending on which VCU source code the patches belong to, a bbappend file for that code base needs to be created to get those patches applied and compiled on the latest source code. For example, if the patch fix is for the VCU drivers, follow these steps:
i. Create a kernel-module-vcu directory in the recipe-multimedia/vcu folder
cd vcu
mkdir kernel-module-vcu
cp test1.patch recipe-multimedia/vcu/kernel-module-vcu
cp test2.patch recipe-multimedia/vcu/kernel-module-vcu
vi kernel-module-vcu_%.bbappend
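By analogy with the gst-omx bbappend above, the kernel-module-vcu bbappend would contain something like the following (a sketch; test1.patch and test2.patch are the placeholder patch names used in this example):

```
FILESEXTRAPATHS_prepend := "${THISDIR}/kernel-module-vcu:"
SRC_URI_append = " \
    file://test1.patch \
    file://test2.patch \
"
```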
Create similar bbappend files and folders for the other VCU components to integrate any custom patches in the PetaLinux build.
5. Follow PetaLinux build steps to generate updated binaries.
✎ Note: If you are not compiling with PetaLinux, review the recipes for additional files necessary for setting up GStreamer.
For example, you must include the /etc/xdg/gstomx.conf in the root file system. This file tells gst-omx where to find the
OMX integration layer library - libOMX.allegro.core.so.1.
zynqmp_vcu_encode --help
Constraints
The initial encoding session should start with the worst-case maximum values for the dynamic bitrate and dynamic B-frames parameters. For example, the encoding session should start with num-bframes = 4 if you are planning to modify B-frames dynamically during the encode session. Similarly, for target-bitrate, the encoding session should start with the maximum bitrate planned. You can alter the bitrate later, during encode.
Dynamic-bitrate is the ability to change encoding bitrate (target-bitrate) while the encoder is active.
To change the bitrate of the video at frame number 100 to 1 Mb/s:
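At the GStreamer application level, this is a property update on the encoder element while the pipeline is playing; a minimal sketch (the helper name is hypothetical; target-bitrate is the gst-omx encoder property, in kb/s):

```c
#include <gst/gst.h>

/* Hypothetical helper: switch the running encoder to a new bitrate.
 * "encoder" is the omxh264enc/omxh265enc element of a PLAYING pipeline;
 * counting frames (for example, in a pad probe) is left to the application. */
static void
set_encoder_bitrate_kbps (GstElement *encoder, guint kbps)
{
  g_object_set (G_OBJECT (encoder), "target-bitrate", kbps, NULL);
}
```

Calling set_encoder_bitrate_kbps (enc, 1000) once 100 frames have been pushed would switch the stream to 1 Mb/s.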
Dynamic GOP
Dynamic GOP is the ability to change gop-length, number of B-Frames, and forcing IDR picture while the encoder is active.
To change the gop-length of the video at frame number 100 to 45:
Dynamic KeyFrame
Dynamic Keyframe is the ability to force the IDR picture while the encoder is active. The VCU encoder can insert Keyframe (IDR
picture) dynamically in normal latency pipelines, as well as in the low latency (LLP1) pipelines.
To insert a key frame at frame number 35:
To insert a key frame in LLP1 based pipeline with low-delay-P GOP mode and horizontal GDR at frame number 100:
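In a GStreamer application, a key frame request can be sketched with the standard force-key-unit event, which the gst-omx encoders are expected to honor (illustration only; counting frames to pick the insertion point is up to the application):

```c
#include <gst/video/video.h>

/* Request an immediate IDR by sending the standard force-key-unit
 * custom event upstream to the encoder element. */
static void
force_keyframe (GstElement *encoder)
{
  gst_element_send_event (encoder,
      gst_video_event_new_upstream_force_key_unit (GST_CLOCK_TIME_NONE,
                                                   TRUE, /* all-headers */
                                                   0));  /* count */
}
```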
The VCU encoder currently supports Region of Interest (ROI) encoding, which allows you to define several independent or overlapped regions within a frame. The ROI encoding tags regions in a video frame to be encoded with user-supplied quality.
You can provide the region of interest (ROI) location (top, left) in pixels and width and height in pixels, along with a quality index. Multiple and overlapped ROI regions within a frame are supported. The sample GStreamer application only adds one ROI region, but you can attach multiple ROI metadata properties to the buffer.
The input format is:
A sample GStreamer application command line option to encode video with ROI region attached to specific frame number at
100 is:
"ROI_BY_VALUE:<frame number>:<top>x<left>:<width>x<height>:<delta_qp>""
This creates a new gstreamer meta structure: roi-by-value/omx-alg. Using this meta structure, the gstreamer fills out the new
OMX config, OMX_ALG_VIDEO_CONFIG_REGION_OF_INTEREST_BY_VALUE, to send the parameters to the VCU.
The following is an example pipeline:
✎ Note: This feature must be enabled at the start of the stream (frame 0). This is a limitation of the VCU. Values for delta-qp
must be within –32 to 31 because they are relative QP values.
ROI-QP
You can set ROI live streaming dynamically using the GStreamer and control software.
GStreamer
As showcased in example application code, use the following two APIs to set ROI region and quality for each frame:
https://github.com/Xilinx/gstreamer/blob/xlnx-rebase-v1.20.5/subprojects/gst-omx/examples/zynqultrascaleplus/test-vcu-encode.c (line number #501).
gst_buffer_add_video_region_of_interest_meta()
gst_video_region_of_interest_meta_add_param()
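The two calls above can be combined as in the following sketch, run for each input buffer (for example, from a pad probe). The "roi/omx-alg" type name and the "quality" field are assumptions taken from the referenced test-vcu-encode example; check the example source for the exact names:

```c
#include <gst/video/video.h>

/* Attach one ROI rectangle plus its quality hint to a raw input buffer
 * before it reaches the VCU encoder. */
static void
add_roi (GstBuffer *buf, guint left, guint top, guint width, guint height,
         const gchar *quality)
{
  GstVideoRegionOfInterestMeta *meta =
      gst_buffer_add_video_region_of_interest_meta (buf, "roi/omx-alg",
                                                    left, top, width, height);
  gst_video_region_of_interest_meta_add_param (meta,
      gst_structure_new ("roi/omx-alg",
                         "quality", G_TYPE_STRING, quality, NULL));
}
```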
Control Software
The control software exports a few APIs to provide ROI information to the encoder per frame. Use these APIs to add ROIs in live
streaming.
Inserts a long-term picture dynamically in a GOP and uses the long-term reference for a particular picture based on user input.
A sample GStreamer application command line option to insert a long-term picture at frame 10:
Adaptive GOP
Specify the maximum number of B-frames that can be used in a GOP and set the GOP mode to adaptive using
GopCtrlMode=ADAPTIVE_GOP in the encoder configuration file at control software and gop-mode=adaptive at GStreamer.
The encoder adapts the number of B-frames used in the GOP pattern based on heuristics on the video content. The encoder
does not go higher than the maximum number of B frames that you specified.
AL_Encoder_AddSei() adds SEI NAL to the stream. You are responsible for the SEI payload and have to write it as specified
in Annex D.3 of ITU-T. The encoder does the rest including anti-emulation of the payload. The SEI cannot exceed 2 KB. This
should be largely sufficient for all the SEI NALs.
The stream buffer needs to have enough space to add the SEI section. If not, AL_Encoder_AddSei() reports a failure. Prefix
and suffix SEI are supported. The encoder puts the SEI section at the correct place in the stream buffer to create a conformant
stream. The control software application showcases adding an SEI message using AL_Encoder_AddSei(). To insert multiple SEI messages into a stream, AL_Encoder_AddSei() can be called multiple times, or multiple SEI payloads can be added in a single AL_Encoder_AddSei() call. The 2 KB limit is per stream buffer; that is, all SEI messages per AU should be within the 2 KB limit. SEI messages are already synchronized with their corresponding video frames, so there should be no need for a timestamping mechanism for synchronization. VCU-SW supports adding SEI metadata through the OMX/GStreamer application. The following OMX APIs/indexes and struct are used to insert SEI metadata:
OMX_ALG_IndexConfigVideoInsertPrefixSEI
OMX_ALG_IndexConfigVideoInsertSuffixSEI
Struct: OMX_ALG_VIDEO_CONFIG_DATA
You can insert your own data using gstreamer application at any frame. The reference implementation function is as follows:
The SEI message can be retrieved from the decoder. The AL_CB_ParsedSEI callback is invoked each time the decoder parses
an SEI. The sei_payload data is provided as described in Annex D.3 of ITU-T. Anti-emulation has been removed by the decoder.
The control software application shows an example of API usage which can be triggered using the -sei-file option on the
command-line. The VCU-SW supports parsing of SEI meta-data using the omx/gstreamer application too.
When an SEI is parsed, the OMX component should call EventHandle with eEvent based on the SEI type:
OMX_ALG_EventSEIPrefixParsed and OMX_ALG_EventSEISuffixParsed. The following is the reference implementation to handle
SEI event using Gstreamer application:
Encode the video twice; the first pass collects information about the sequence, and the stats are used to improve the encoding of the second pass. The two levels are:
Until the 2019.1 release, only real-time GOP/frame-level dual-pass is supported. IDR frames are automatically inserted based on first-pass scene change detection. The QP of each frame is adjusted based on internal stream size/complexity statistics and scene change. GOP/frame-level dual-pass encoding is enabled by using the "LookAhead" parameter at the control software and "look-ahead" at GStreamer.
Dual pass is not supported in the low latency mode.
Constraints: A maximum of 4kp30 is allowed in dual pass mode because the maximum VCU performance is 4kp60 and dual
pass reduces the performance to half.
LookAhead = <value>
The AMD video scene change detection IP provides a video processing block that implements a scene change detection algorithm. The IP core calculates the histogram on a vertically subsampled luma frame for consecutive frames. The histograms of these frames are then compared using the sum of absolute differences (SAD). This IP core is programmable through a comprehensive register interface to control the frame size, video format, and subsampling value. For more information, see the Video Scene Change Detection LogiCORE IP Product Guide (PG322).
The scene change detection IP is a configurable IP core that can read up to eight video streams in memory mode. In memory mode, all inputs are read from a memory-mapped AXI4 interface. The IP supports resolutions ranging from 64x64 to 8192x4320 with various 8-bit and 10-bit color formats. The IP sends an interrupt after generating the SAD values for all the input streams for every frame. In memory mode, the SAD values are calculated for every input stream sequentially, one after the other, and the interrupt is generated after the SAD calculation of the final stream. On interrupt generation, the SAD values are read from the SAD registers for the configured number of streams and compared with a threshold value that you decide, to determine whether a scene change has occurred.
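The per-frame decision described above can be sketched in a few lines (an illustration only; the real IP computes the histogram in hardware on vertically subsampled luma, and the bin count and threshold here are arbitrary):

```python
def luma_histogram(frame, bins=16):
    """Histogram of 8-bit luma samples; frame is a flat iterable of 0..255."""
    hist = [0] * bins
    for y in frame:
        hist[y * bins // 256] += 1
    return hist

def scene_changed(prev_frame, cur_frame, threshold):
    """Compare consecutive histograms with SAD against a user threshold."""
    sad = sum(abs(a - b) for a, b in zip(luma_histogram(prev_frame),
                                         luma_histogram(cur_frame)))
    return sad > threshold

flat = [128] * 1024    # mid-gray frame
bright = [230] * 1024  # abrupt cut to a bright frame
print(scene_changed(flat, bright, threshold=1000))  # True
```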
Interlaced scene change detection is supported in the memory mode. For interlaced content, SAD values are computed for each
field instead of each frame, so a scene change can be detected in between fields of the same frame.
The most important applications of scene change detection are video surveillance, machine learning, and video conferencing. In these use cases, the SCD IP would be in the capture pipeline and the generated event is attached to each captured buffer. The same event is passed along with the buffer to the encoder, where the encoder decides to insert an I-frame instead of a P-frame or a B-frame. Inserting an I-frame where the scene changes retains the quality of the video.
Gstreamer pipelines:
Interlaced Video
Interlaced video is a technique to double the perceived frame rate of a video display with the same bandwidth. An interlaced frame contains two fields captured at two different times. This improves motion perception for the viewer and reduces flicker by taking advantage of viewing the fields in rapid succession as continuous motion.
Interlaced signals require a display that is capable of showing the individual fields in a sequential order. CRT displays are made
for displaying interlaced signals. Interlaced scan refers to drawing a video image on display screen by scanning each line or row
of pixels. This technique uses two fields to create a frame. One field contains all odd-numbered lines in the image and the other
contains all even-numbered lines.
The VCU supports encoding and decoding of H.265 interlaced video. The VCU can decode and encode various video resolutions such as 1920x1080i, 720x576i, and 720x480i.
Gstreamer pipelines:
Audio serial pipeline: v4l2src → omxh265enc → omxh265dec → kmssink alsasrc → audioconvert → audioresample →
alsasink
Audio file playback pipeline: filesrc → tsdemux → omxh265dec → kmssink tsdemux → faad → audioconvert →
audioresample → alsasink
Gradual Decoder Refresh (GDR): when GOPCtrlMode is set to LOW_DELAY_P, the GDRMode parameter is used to specify whether a GDR scheme should be used or not. When GDR is enabled (horizontal/vertical), the encoder inserts intra MB rows/columns in the picture to refresh the decoder. Gop.FreqIDR specifies the frequency at which the refresh pattern should happen. To allow full picture refresh, the Gop.FreqIDR parameter should be greater than the number of CTB/MB rows (GDR_HORIZONTAL) or columns (GDR_VERTICAL).
When the GDR is enabled, the encoder inserts SPS/PPS along with an SEI recovery message at the start of a refresh pattern,
and the decoder can synchronize to the SEI recovery points in the bit-stream. The advantage of having the non-IDR decoder
synchronization is that there is no longer any need to send IDR pictures in case of a packet loss scenario. Inserting periodic IDR
pictures in video encoding generates a spike in bit consumption which is not recommended for video streaming. Vertical GDR
mode has an MV (motion vector) limitation that can cause full picture reconstruction to take more than one refresh cycle. Refer
to the following table for more details:
| AVC/H.264 | HEVC/H.265
GDR_VERTICAL: A vertical intra column moving from left to right | Exact match is supported. The deblocking filter is automatically disabled. | Exact match is NOT supported; it can take up to two refresh intervals to fully reconstruct the frame. The deblocking filter is automatically disabled.
A sample GStreamer pipeline is as follows. Use "gdr-mode=horizontal" and "periodicity-idr=<higher than picture height in MBs>" in the encoder element.
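For example, for 1080p H.265 (a sketch; the source, caps, and sink are illustrative, while periodicity-idr=270 matches the Gop.FreqIDR value used in the cfg example for this feature):

```shell
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 \
  ! video/x-raw,format=NV12,width=1920,height=1080,framerate=60/1 \
  ! omxh265enc gop-mode=low-delay-p gdr-mode=horizontal periodicity-idr=270 \
  ! filesink location=/run/output.h265
```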
At the control software level, add the following section and parameter (in the GOP section of the cfg file)
[GOP] #---------------------------------------------------
GopCtrlMode = LOW_DELAY_P
Gop.GdrMode = GDR_HORIZONTAL
Gop.FreqIDR = 270
✎ Note: SPS/PPS is inserted at the start of each GDR frame. The VCU decoder can synchronize onto any GDR start frame; there is no need to always start with an IDR frame. The GDR frequency can be set using Gop.FreqIDR at the control software and periodicity-idr at GStreamer when GDR mode is enabled.
Dynamic Resolution Change is supported at the VCU control software and GStreamer level for both the encoder and decoder. This feature is not supported at the OMX level yet.
An input compressed stream can contain multiple resolutions. The VCU Decoder can decode pictures without re-creating a
channel.
Constraints
The maximum resolution must be known.
All the streams should belong to same codec either AVC or HEVC.
Same chroma-format and bit-depth.
Maximum resolution can be set either with the pre-allocation mode or with the first valid SPS. In case the SPS defines a
resolution higher than the one provided in pre-allocation, the decoder skips the frames until the next valid SPS.
While going from low to high without pre-allocation, resolutions lower than or equal to the first valid resolution that are met
are decoded. Pictures and references should always fit in the DPB.
CtrlSWAPI
Callbacks
resolutionFoundCB is raised each time a new resolution is found.
DRC Decode: To decode a DRC file which is encoded at the control software level using the Gstreamer, use:
VCU Encoder
Constraints
Maximum resolution must be known. You can only change the resolution; changing the chroma mode or bit-depth is not
allowed.
CtrlSWAPI
AL_Encoder_SetInputResolution: The maximum resolution is specified in the channel parameters.
Refer to the API section for more details on new APIs. Provide multiple resolution files in the .cfg file in the following format:
[INPUT] #-------------------------------------------------
YUVFile = input_1.yuv
Format = NV12
Width = 1920
Provide the number of frame index for input and variable fps parameters (for DRC with variable FPS) in the input_cmd.txt
file.
Use the following command to generate an output file that contains an encoded file containing various resolutions.
DRC Encode: To check the encoder use case for DRC by transcoding a DRC file that was encoded at the control software level, use:
✎ Note: At the GStreamer level, the encoded file should have the first resolution as the maximum resolution of the stream.
The VCU encoder currently supports the frame skip feature to strictly achieve the required target bitrate. When an encoded
picture is too large and exceeds the CPB buffer size, the picture is discarded and replaced by a picture with all MB/CTB encoded
as « Skip ». This feature is useful especially at low bitrates when the encoder must maintain a strict target-bitrate irrespective of
video-complexity.
You can also control the maximum number of consecutive skipped frames. However, if the maximum number of consecutive
skipped frames is set too low, the output bitstream might not achieve the target bitrate. So, you might need to experiment to find
a balance between the achieved bitrate and number of actual encoded frames.
Constraints
This only applies if the reference frame has already been encoded. The first frame is encoded irrespective of the buffer
overflow. Only subsequent pictures are discarded if actual frame size is exceeded.
If the CPB is very small and the video is complex, all frames might be discarded.
HRD compliance for dynamic commands is not guaranteed when the bitrate or fps change. Skip frames can be added or
missed when those parameters change.
The back-and-forth visual effect can appear when the value of NumB is greater than 1 and enable-skip is set to
true.
In the dual pass mode, frame skip can only be used on the second pass.
Inserting the frame skip in GOP pattern with more than one B-frame can have back and forth visual effect. NumB >1
is not recommended.
Avoid frame skip in adaptive GOP feature.
Not Supported
Frameskip is not supported in the low latency (--slicelat) mode.
Frameskip is not supported in Pyramidal GOP.
OMX API
New SetParam Index: OMX_ALG_IndexParamVideoSkipFrame
Data structure: OMX_ALG_VIDEO_PARAM_SKIP_FRAME
bEnableSkipFrame flag of OMX_ALG_VIDEO_PARAM_SKIP_FRAME structure is used to indicate if skip frame should be
enabled.
nMaxConsecutiveSkipFrame of OMX_ALG_VIDEO_PARAM_SKIP_FRAME structure is used to indicate the maximum
number of consecutive skipped frames.
EnableSkip = TRUE
For setting the maximum number of consecutive skipped frames, set the “MaxConsecutiveSkip” parameter in cfg file:
MaxConsecutiveSkip = 5
Gstreamer Pipeline
32-Streams Support
This feature checks the functionality of encoding, decoding, and transcoding capacity for 32 streams in parallel.
DCI-4k Encode/Decode
The VCU is capable of encoding or decoding at 4096x2160p60, 4:2:2, provided the VCU Core-clk frequency is set to 712 MHz, keeping the rest of the AXI and MCU frequencies as recommended in the hardware section.
Standard Rate-Control GOP GOP-length Num-bframes Max Bitrate (Mb/s) Total CPU (4 cores) (%)
The VCU encoder assigns a temporal layer ID to each frame based on its hierarchical layer as per the AVC/HEVC standard. This
enables having a temporal-ID based QP and the Lambda table control for encode session. This is for Pyramidal GOP only.
Gop.TempDQP = 1 1 1 1
This forces specific blocks to be encoded in intra prediction mode. The bool AL_Encoder_Process (AL_HEncoder
hEnc, AL_TBuffer* pFrame, AL_TBuffer* pQpTable) API forces Intra at block level and can be controlled with the
QP table in the API. The QP table buffer must contain a byte per MB/CTB, in raster scan format:
Prediction mode QP
2 Reserved
3 Reserved
Constraints
To use force intra at block level, external QP tables must be enabled. Then, a QP table must be provided for each frame.
This feature is supported with RELATIVE_QP as well; the prediction mode and QP bit distribution remain the same as above.
This feature is also supported with ROI_QP mode. The entire ROI region can be encoded with intra MBs using the following quality option for the region:
<quality> = INTRA_QUALITY
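As a sketch of how such a QP table could be assembled, under the layout assumptions that each MB/CTB gets one byte in raster-scan order, the QP value sits in the low six bits, and the prediction-mode code occupies the two high bits with code 1 taken to mean force-intra (confirm the exact bit fields in the API description):

```python
FORCE_INTRA = 1  # assumed prediction-mode code; codes 2 and 3 are reserved

def build_qp_table(blocks_w, blocks_h, qp, intra_blocks):
    """One byte per MB/CTB in raster-scan order: mode in bits 7:6, QP in bits 5:0."""
    table = bytearray(blocks_w * blocks_h)
    for y in range(blocks_h):
        for x in range(blocks_w):
            mode = FORCE_INTRA if (x, y) in intra_blocks else 0
            table[y * blocks_w + x] = (mode << 6) | (qp & 0x3F)
    return bytes(table)

# 4x2 blocks, QP 30 everywhere, force-intra on the top-left block only
tbl = build_qp_table(4, 2, 30, {(0, 0)})
print(len(tbl), hex(tbl[0]), hex(tbl[1]))  # 8 0x5e 0x1e
```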
Decoder Meta-data Transfer Using 1-to-1 Relation Between Input and Output Buffer
Until now, each incoming buffer is copied into the decoder internal circular buffer, and the frame boundaries are (re-)detected
afterwards by the decoder itself. This prevents it from keeping a true 1-to-1 relationship between the input buffer and decoded
output frame. An application can try to retrieve the 1-to-1 relationship based on the decoding order but this is not reliable in
case of error concealment. This new feature consists of bypassing the circular buffer stage for use cases where the incoming
buffers are frame (or slice) aligned. In this case, the decoder can directly work on the input buffer (assuming DMA input buffer)
and any associated metadata can be retrieved on the output callback.
The decoder config setting bSplitInput is added to enable and disable this feature at the control software level. The gst-omx decoder also supports this new parameter. When split-input is enabled, the decoder has a 1-to-1 mapping for input and output buffers. When disabled, the decoder copies all input buffers to an internal circular buffer and processes them.
Constraints: When split-input mode is enabled for decoder, input to decoder should be aligned with the frame (AU) or slice (NAL
unit) boundary. Part of AU or NAL is not allowed.
Two new GOP control modes are supported in the control software and the GStreamer framework.
Control_SW
DEFAULT_GOP_B & PYRAMIDAL_GOP_B: Patterns are identical to DEFAULT_GOP and PYRAMIDAL_GOP except that P frames
are replaced with B Pictures.
Gstreamer
The omxh264enc/omxh265enc element gop-mode parameter supports these two new settings from 2019.2 onwards: "basic-b" corresponds to DEFAULT_GOP_B and "pyramidal-b" corresponds to PYRAMIDAL_GOP_B.
Constraints
New offset values are applied on the chosen frame and on the following frames in the decoding order.
Control-SW API:
In the command file, define the settings "LF.BetaOffset" and "LF.TcOffset". The encode library APIs are as follows (more details can be found in the API description section):
Gstreamer
The omxh264enc/omxh265enc elements support the below two mutable parameters; values can be modified during run time.
1. loop-filter-beta-offset: Beta offset for the deblocking filter; used only when loop-filter-mode is enabled.
Along with each frame, it is possible to associate a QP Table specifying, for each encoding block of the frame, the QP you want
to use. The QP value can be relative or absolute. The QP table buffer must contain a byte per MB/CTB, in raster scan format:
Pipeline:
XAVC
VCU encoder can produce xAVC Intra or xAVC Long GOP bitstreams.
Constraints: You need to provide the information required to create an XAVC bitstream following the specifications. For instance, you must provide at least the resolution, frame rate, clock ratio, and so on.
You have to increase AL_ENC_MAX_SEI_SIZE to 10 KB (include/lib_common/StreamBuffer.h) to support XAVC INTRA CBG.
‼ Important: The default size of AL_ENC_MAX_SEI_SIZE is 10 KB from 2020.1 onwards.
✎ Note: Currently the VCU encoder does not support all XAVC features. The unsupported XAVC features are as follows:
The VCU encoder does not support Class 480 4k Intra Profile.
The VCU encoder does not support XAVC interlaced.
The VCU encoder only produces H.264 bitstreams. It is up to user applications to wrap these in MP4 or MXF containers.
CtrlSW Profile
The following new profiles are added:
AL_PROFILE_XAVC_HIGH10_INTRA_CBG
AL_PROFILE_XAVC_HIGH10_INTRA_VBR
AL_PROFILE_XAVC_HIGH_422_INTRA_CBG
AL_PROFILE_XAVC_HIGH_422_INTRA_VBR
AL_PROFILE_XAVC_LONG_GOP_MAIN_MP4
AL_PROFILE_XAVC_LONG_GOP_HIGH_MP4
AL_PROFILE_XAVC_LONG_GOP_HIGH_MXF
AL_PROFILE_XAVC_LONG_GOP_HIGH_422_MXF
OMX Profile
New OMX_ALG_VIDEO_AVCPROFILETYPE:
OMX_ALG_VIDEO_XAVCProfileHigh10_Intra_CBG
OMX_ALG_VIDEO_XAVCProfileHigh10_Intra_VBR
OMX_ALG_VIDEO_XAVCProfileHigh422_Intra_CBG
OMX_ALG_VIDEO_XAVCProfileHigh422_Intra_VBR
OMX_ALG_VIDEO_XAVCProfileLongGopMain_MP4
OMX_ALG_VIDEO_XAVCProfileLongGopHigh_MP4
OMX_ALG_VIDEO_XAVCProfileLongGopHigh_MXF
OMX_ALG_VIDEO_XAVCProfileLongGopHigh422_MXF
1280x720
3840x2160
4096x2160
2048x1080
1280x720
XAVC Examples
Gstreamer Pipelines
Control Software
HDR10
The VCU supports HDR10 encoding and decoding. To comply with HDR standards, the VCU supports HDR10 color primaries,
transfer characteristics, and matrix coefficients in the sequence parameter set of intra-encoded frames. Additionally, you can
insert and extract Mastering Display Color Volume and Content Light Level supplemental enhancement information packets.
Gstreamer Pipelines
HDR10 video can be captured using the v4l2src element (assuming the necessary EDID and design changes have been made).
v4l2src automatically detects an HDR10 source and transmits the metadata and colorimetry. Currently, HDR10 capture and
display is only supported with the AMD HDMI IPs. An example serial pipeline:
Control Software
See HDR10 File Format.
HLG Support
Hybrid Log-Gamma (HLG) is an HDR standard aimed at providing backwards compatibility with non-HDR monitors, while also achieving a wider color gamut for HDR monitors. Because the HLG EOTF is similar to standard gamma functions, HLG streams look proper when played on SDR monitors. By contrast, because the gamma and PQ curves are completely different, PQ EOTF streams are not displayed properly and usually look washed out when played on an SDR monitor. Unlike HDR10, HLG does not require any extra metadata to go along with the video data. HLG streams consist of the HLG transfer characteristics/EOTF, BT.2020 color primaries, and the BT.2020 color matrix.
From the VCU point of view, there are two types of HLG that can be enabled:
A backwards-compatible mode, which uses the BT.2020 value in the SPS VUI parameters instead of the HLG transfer characteristics. The VCU encoder then inserts an Alternative Transfer Characteristics (ATC) SEI with the HLG value.
An HLG-only mode, which directly uses the HLG value in the SPS VUI parameters.
Sample VUI parameters in encoded bitstream for Backward Compatible mode are as follows:
Along with the preceding VUI colour_description_parameter, the encoder embeds an SEI message (ATC) indicating that preferred_transfer_characteristics is set to 18.
HLG Only mode VUI is as follows:
Control-SW Application
Set the following encoder configuration parameters for HLG support.
Section [SETTINGS]:
TransferCharac = TRANSFER_BT_2100_HLG
ColourMatrix = COLOUR_MAT_BT_2100_YCBCR
ColourDescription=COLOUR_DESC_BT_2020
EnableSEI = SEI_ATC
GRAY8/GRAY10 Support
The VCU encoder and decoder support encoding and decoding of the GRAY8 and GRAY10 formats at the GStreamer level from release 2021.1 onwards. Sample serial (encode + decode) test case pipelines are as follows:
The VCU encoder and decoder support encoding and decoding of the YUV444 full-chroma 8-bit and 10-bit formats at the GStreamer level from the 2022.1 release onwards.
This feature does the following:
Supported Resolutions
The following table provides the supported resolutions from the command line app for only this design.
4kp60 × ×
4kp30 √ ×
1080p60 √ ×
√- Support
×- Not Supported
Sample pipelines
Passthrough
H265 pipelines:
Stream-out
Stream-in
Stream-out
Stream-in
✎ Note: This feature is available only from the 2022.1 version of the IP.
The VCU encoder provides a DMA-based buffer pool on the output port at the GStreamer level, letting other components directly use the encoder's output through DMA-based memory.
To enable this feature, you must enable the "use-out-port-pool" property in the VCU encoder's GStreamer pipelines.
GStreamer Pipeline
GStreamer Pipelines
Examples of running GStreamer from the PetaLinux command line are as follows. To see the description of the GStreamer elements and properties used in each of them, use the gst-inspect-1.0 command.
For example, to get the description of each parameter of the "omxh264dec" element, enter the following at the command prompt:
gst-inspect-1.0 omxh264dec
Decode H.264 based input file and display it over the monitor connected to the display port
H.265 Decoding
Decode H.265 based input file and display it over the monitor connected to the display port
4:2:0 8-bit
4:2:2 8-bit
4:2:0 10-bit
4:2:2 10-bit
To reduce frame decoding time for bitstreams greater than 100 Mb/s at 4kP30, use the following options:
The following command decodes an H.264 MP4 file using an increased number of internal entropy buffers and displays it via
DisplayPort.
H.264 Encoding
✎ Note: The command lines above assume the file input-file.yuv is in the format specified.
Convert H.264 based input container format file into H.265 format
Convert H.265 based input container format file into H.264 format
Encode and decode the input YUV into two streams by using two different encoder elements and two different decoder elements.
V4L2 → AVC encode → AVC decode → kmssink V4l2 → HEVC encode → HEVC decode → kmssink (AVC & HEVC
encoding and decoding in single pipeline)
In the above GStreamer pipeline, the mixer IP is used to overlay planes 33 and 34 onto the primary display plane. Plane 33 is displayed at 0,0 with a width of 1920 and a height of 2160. Plane 34 is displayed at 1920,0 with a width of 1920 and a height of 2160. The output resembles the following figure:
Multistream Decoding
Decode the H.265 input file using four decoder elements simultaneously, saving the outputs to separate files.
✎ Note: The tee element is used to feed the same input file into four decoder instances; you can use separate gst-launch-1.0 applications to feed different inputs.
Multistream Encoding
Encode the input YUV file into eight streams by using eight encoder elements simultaneously.
✎ Note: The tee element is used to feed the same input file into eight encoder instances. You can use separate gst-launch-1.0 applications to feed different inputs.
For alternate input YUV formats, the following changes are required in the above pipeline:
Format/Profile Arguments
✎ Note: 192.168.1.1 is an example client IP address. You might need to modify this with actual client IP address.
For AVC, 1 MB = 16x16 pixels and for HEVC, 1 MB = 32x32 pixels. You have to calculate the picture height in MBs as roundup(Height, 64)/#MB rows.
If you are not using the buffer-size property of udpsrc, then you must set the buffer size manually using the sysctl command, as per your network bandwidth utilization and requirements:
sysctl -w net.core.rmem_default=60000000
VBR is not a preferred mode of streaming.
Element Description
qtdemux Demuxes a .mov file into raw or compressed audio and/or video streams.
queue Queues data until one of the limits specified by the "max-size-buffers", "max-size-bytes", or "max-size-time" properties has been reached.
rtpjitterbuffer Reorders and removes duplicate RTP packets as they are received from a network source
v4l2src Captures video from v4l2 devices, like webcams and television tuner cards
AVI Yes No
MPEG2-PS Yes No
FLV Yes No
3GP Yes No
Two sample applications built using OpenMax Integration Layer are available. The source code for the OpenMax sample
applications omx_encoder and omx_decoder are at https://github.com/Xilinx/vcu-omx-il/tree/master/exe_omx.
✎ Note: Input YUV file should be in NV12 or NV16 format for 8-bit input sources.
Sync IP API
Structure and Function Definitions
SyncIp
Description
Holds a handle to save the hardware configuration like maximum number of channels, maximum number of users, maximum
number of buffers, buffer descriptor, mutex for locking and channel status.
See
xvfbsync.h
SyncChannel
Description
Holds the channel id assigned by the driver and the status of the channel. Each independent stream gets unique channel
context with unique SyncChannel structure and channel id.
EncSyncChannel, DecSyncChannel
Description
Encoder and decoder specific wrapper sync IP channel structures.
Description
Gets the hardware configuration and capabilities and initializes the Sync IP structure. Updates the reserved channel id assigned by the driver and creates a thread to poll for Sync IP errors.
Description
Initialize encoder Sync IP channel.
Description
Enable Sync IP channel.
Description
Sets the Sync IP interrupt mask. The application can mask unnecessary Sync IP interrupts, reducing overall system load, by setting an appropriate intr_mask.
Description
Push buffer to Sync IP channel.
Description
Unmask Sync IP interrupt.
Description
Disable Sync IP channel and clean Sync IP configurations and resources.
The producer, a V4L2-based application, programs the buffer parameters into the synchronization IP and gives an early buffer-done signal to the consumer. Based on this signal, the consumer performs a read transaction on the buffer. The synchronization IP blocks the consumer transactions until the producer completes writing the slice data to memory.
Color Format | Luma Start Address | Luma End Offset | Chroma Start Offset | Chroma End Offset
YUV 420 8-bit 1080p | Luma start pointer | 1920x1080 + 1920 - 1 | Chroma start pointer | 1920x540 + 1920 - 1
YUV 422 10-bit 4k | Luma start pointer | 5120x2176 + 5120 - 1 | Chroma start pointer | 5120x2176 + 5120 - 1
The application registers for POLLPRI errors. With all the above steps, the Sync IP is configured with the buffer address details and performs the synchronization, making sure that consumer requests for a particular buffer are unblocked only when the producer has written sufficient data. If any error occurs in the running pipeline, the Sync IP driver masks the corresponding interrupt bit and unblocks the poll thread with an error event so that the application gets the channel error status. Once the particular error is resolved or obsolete, the application should unmask the Sync IP interrupt using xvfbsync_syncip_reset_err_status().
After completion of the use case, the application should reset the syncip context and de-initialize the syncip channel using xvfbsync_enc_sync_chan_depopulate(). This disables the particular Sync IP channel and performs the necessary cleanup of resources.
The VCU Control Software API is comprised of an encoder library and a decoder library, both in user space, which user space
applications link with to control VCU hardware.
There are several error handling mechanisms in the VCU Control Software API. The most common mechanism is for functions to return a status value, such as a boolean, or a pointer that is NULL in the failing case.
The encoder and decoder objects each store an error code to be accessed with AL_Encoder_GetFrameError and
AL_Decoder_GetFrameError, respectively.
User-defined callbacks are sometimes notified of unusual conditions by passing NULL for a pointer that is not normally NULL; in other cases no notification is provided and the callback itself is expected to use one of the accessor functions to retrieve the error status.
There are various ways an encoded bitstream can be corrupted, and detecting those errors in a compressed bitstream is complex because of the syntax element coding and parsing dependencies. Errors are usually not detected at the corrupted bit itself, but on the syntax elements that follow.
For example, suppose an encoded bitstream has scaling matrices and the "scaling matrices present" bit is corrupted in the stream. When a decoder reads this bitstream, it first assumes that there are no scaling matrices present in the stream and goes on to parse the actual scaling matrix data as the next syntax element, which can cause an error. The real error was the corruption of the scaling matrix bit, but the decoder is unable to detect that; such scenarios are common in video codecs.
Refer to the VPS/SPS/PPS parsing functions for more details on error handling and reporting:
https://github.com/Xilinx/vcu-ctrl-sw/tree/master/lib_parsing
See lib_parsing/AvcParser.c and lib_parsing/HevcParser.c, and check the calls to the COMPLY macro.
Error resilience is handled either at control software level or at hardware level. As errors are difficult to predict, it is possible that
the hardware decoder hangs in an infinite loop. In that case, a watchdog is used to reset the decoder in a safe way to restart the
decoding for the next frames.
The hardware IP only parses the slice data part of the bitstream. All headers are parsed and managed by the control software.
The error resilience for the headers is managed by the software and the error resilience for the slice data is managed by the
hardware.
Error Detection
At slice header level, the software can detect different kinds of errors:
Missing slices
Inconsistent first LCU address syntax element.
When the software detects an error, a slice conceal command is sent to the hardware IP to fill the intermediate buffer. The intermediate buffer must always be fully filled to avoid a decoder timeout.
At the slice data level, the hardware can detect different kinds of errors, such as inconsistencies in the number of LCUs or in the range of various syntax elements. When an error is detected, a concealment flag is set in the corresponding LCU data in the intermediate buffer, up to the last LCU of the slice.
Error Concealment
Error concealment is performed in the reconstruction process. When a concealment flag is set in the intermediate buffer, the
reconstruction of the LCU will be done using fixed parameters:
If there is a reference picture available, the LCU is skipped using this picture as a reference.
If there is no reference picture, the default intra prediction mode is applied.
When errors are detected by the hardware IP, it conceals the remaining part of the slice; there is no error code, only a single flag
indicating if the slice has been concealed or not.
Memory Management
Memory operations are indirected through function pointers. The AL_Allocator default implementation simply wraps malloc and
free, etc.
Two higher level techniques are used for memory management: reference counted buffers, and buffer pools. A reference
counted buffer is created with a zero reference count. The AL_Buffer_Ref and AL_Buffer_Unref functions increment and
decrement the reference count, respectively. The AL_Buffer interface separates the management of buffer metadata from the
management of the data memory associated with the buffer. Usage of the reference count is optional.
The AL_TBufPool implementation manages a buffer pool with a ring buffer. Some ring buffers have sizes fixed at compile time.
Exceeding the buffer pool size results in undefined behavior. See AL_Decoder_PutDisplayPicture.
AL_Buffer
Holds a handle to memory managed by an AL_TAllocator object. Provides a reference count, a mutex, and a callback to be invoked when the reference count is decremented to zero. Use of the reference count is optional. This type is opaque. The implementation uses al_t_BufferImpl.
See
BufferAPI.h, al_t_BufferImpl
See
AL_Buffer_Create_And_Allocate, AL_Buffer_WrapData
Decreases the reference count of hBuf by one. If the reference count is zero, calls the pCallback function associated with the
buffer.
See
AL_Buffer_Create_And_Allocate, AL_Buffer_WrapData
AL_TAllocator
The AL_TAllocator type enables the developer to either wrap or replace memory management functions for memory tracking,
alternative DMA buffer handling, etc.
const AL_AllocatorVtable* vtable Pointer to function pointers for memory management functions.
See
AL_AllocatorVtable
AL_AllocatorVtable
bool (*pfnDestroy)(AL_TAllocator*)
pfnDestroy: Releases resources associated with the allocator. This function relates to the allocator, not an allocated handle.
AL_HANDLE (*pfnAlloc)(AL_TAllocator*, size_t)
pfnAlloc: Allocates a handle of a given size. Returns NULL if allocation fails.
bool (*pfnFree)(AL_TAllocator*, AL_HANDLE)
pfnFree: Releases resources associated with a handle returned by pfnAlloc. Returns true if successful and false otherwise.
AL_HANDLE (*pfnAllocNamed)(AL_TAllocator*, size_t, char const* name)
pfnAllocNamed: Allocates a buffer with a given name. The name is intended for developer code and is not used internally by the VCU Encoder API or VCU Decoder API.
Return
Returns a pointer to newly allocated memory or NULL if allocation fails.
Return
Returns hBuf.
See
LinuxDma_GetVirtualAddr, LinuxDma_Map, AL_Allocator_GetPhysicalAddr
Return
Returns NULL.
See
AL_Allocator_GetVirtualAddr
AL_EMetaType
See
BufferMeta.h
Allocate a stream metadata object. Metadata objects are associated with a buffer and provide context regarding how the buffer
should be processed. The uMaxNumSection argument determines how many section structures to allocate. The section
structure associates a flag with a region of a buffer as set by AL_StreamMetaData_AddSection. The metadata object must be
associated with exactly one AL_TBuffer object. Functions that deallocate buffers such as AL_Buffer_Destroy invoke the destroy
function of each metadata object.
Return
Returns a pointer to an AL_TStreamMetaData structure capable of specifying uMaxNumSection sections, unless memory
allocation fails, in which case it returns NULL.
See
AL_StreamMetaData_ClearAllSections, AL_StreamMetaData_AddSection, AL_Buffer_AddMetaData, AL_Buffer_Destroy,
AL_MAX_SECTION, BufferStreamMeta.h
See
AL_StreamMetaData_AddSection
Assigns the parameters defining a new section. The uOffset and uLength are expressed in bytes. If the region extends beyond
the end of the buffer, memory corruption can result. The uFlags argument is a bit field. See the AL_Section_Flags enumeration.
Return
Returns the section ID (index) of the added section.
See
AL_StreamMetaData_ClearAllSectionData, AL_StreamMetaData_ChangeSection, AL_StreamMetaData_SetSectionFlags, AL_StreamMetaData_Create, AL_Section_Flags
Section flags are used to specify attributes that apply to a region of a buffer.
See
AL_StreamMetaData_AddSection
Adds the pMeta pointer to the metadata of pBuf. Metadata objects are associated with a buffer and provide context regarding
how the buffer should be processed. It is inadvisable to add more than one metadata of a given type.
Return
Returns true on success. Returns false if memory allocation fails. Thread-safe.
See
AL_Buffer_GetMetaData, AL_Buffer_RemoveMetaData
Return
Gets the first metadata of pBuf that has type eType. Returns NULL if no matching metadata is found. Thread-safe.
See
AL_Buffer_AddMetaData, AL_Buffer_RemoveMetaData
This function removes the pMeta pointer from the metadata of pBuf.
Return
Always returns true. Thread-safe.
See
AL_Buffer_GetMetaData, AL_Buffer_AddMetaData
Allocates and initializes a buffer able to hold zSize bytes. Free the buffer with AL_Buffer_Destroy. Thread safe. The pCallBack is
called after the buffer reference count reaches zero and the buffer can safely be reused. The AL_Buffer does not take ownership
of the memory associated with its AL_HANDLE. Use AL_Allocator_Free to remove the AL_HANDLE memory, and then use
AL_Buffer_Destroy to remove the buffer itself.
✎ Note: Use of reference counting is optional.
Return
Returns a pointer to the buffer.
See
AL_Buffer_Ref, AL_Buffer_Unref, AL_Buffer_SetUserData, AL_Buffer_Destroy, AL_Buffer_Create,
AL_Buffer_Create_And_Allocate_Named
Frees memory associated with buffer pBuf including metadata. The reference count of pBuf must be zero before this function is
called. Does not deallocate the AL_HANDLE used for data memory. This memory is managed separately by the caller. For
example, it might be freed using the reference count callback.
See
AL_Allocator_Free
Allocates a buffer pointing to pData with a reference count of zero. The caller is expected to call AL_Buffer_Ref to increment the
reference count. When the reference count is decremented to zero, pCallBack is invoked.
Return
Returns a buffer, if successful. Returns NULL, otherwise.
See
AL_Buffer_Ref, AL_Buffer_Unref
See
AL_TAllocator
Opens deviceFile and allocates a small structure to record the device name, file descriptor, and function pointers for
manipulating the allocator. The deviceFile must be /dev/allegroIP. DmaAlloc_Create does not take ownership of the device file
string. When the allocator is no longer needed, AL_Allocator_Destroy should be called to free associated system resources.
Return
Returns a pointer to an allocator if successful, or NULL otherwise.
See
AL_Allocator_Destroy
Closes the underlying file descriptor and frees associated resources. This function is called when the given memory allocator is
no longer needed, invoked when destroying an encoder or decoder instance. After this function is invoked, all handles previously
returned by the allocator's alloc function should be considered invalid.
Return
Returns true if successful and false, otherwise.
See
DmaAlloc_Create
Return
Returns the size in bytes needed for the reference frame buffer.
Return
Returns the pitch value depending on the source format. Assumes 32-bit burst alignment.
See
AL_GetBitDepth, GetStorageMode
Many picture data conversion functions are provided. All convert an AL_TBuffer to an AL_TBuffer. Because the behavior is
evident from the function names and the parameter types are unvarying, after one example, these functions are listed below in
tabular form with each (row, column) entry corresponding to a (source, destination) pair.
From \ To: I0AL I2AL I420 I422 IYUV NV12 NV16 P010 P210 RX0A RX2A RXmA T608 T60A T628 T62A T6m8 Y010 Y800 YV12
I0AL ● ● ● ● ● ● ● ●
I2AL ● ● ●
I420 ● ● ● ● ● ● ● ●
I422 ● ● ●
IYUV ● ● ● ● ● ● ●
NV12 ● ● ● ● ● ● ●
NV16 ● ● ● ●
P010 ● ● ● ● ● ● ● ●
P210 ● ●
RX0A ● ● ● ● ● ● ● ●
RX2A ● ● ● ●
T608 ● ● ● ● ● ● ● ●
T60A ● ● ● ● ● ● ● ●
T628 ● ● ● ● ● ●
T62A ● ● ● ● ● ●
T6m8 ●
Y010 ● ●
Y800 ● ● ● ● ● ● ● ● ● ●
YV12 ● ● ● ● ● ● ●
Constants
config.h
ENCODER_CORE_FREQUENCY_MARGIN: 10 Hz
HW_IP_BIT_DEPTH: 10 bpc
I420, IYUV, YV12, NV12, I0AL, P010, T608, T60A, T508, T50A, RX0A 4:2:0
YV16, NV16, I422, P210, I2AL, T628, T62A, T528, T52A, RX2A 4:2:2
Return
Returns the Chroma mode for the given tFourCC argument. If the FourCC mode is not defined, either an assertion violation
occurs or -1 is returned.
See
AL_EChromaMode, FOURCC
FOURCC(A)
Converts A into a FourCC value by translating the literal characters of A into their ASCII codes and packing them into a 32-bit
value, e.g. FOURCC(A321) becomes 0x33 32 31 41.
See
AL_GET_BITDEPTH_LUMA, AL_GET_BITDEPTH_CHROMA, AL_GET_BITDEPTH, AL_GET_CHROMA_MODE,
AL_SET_BITDEPTH_LUMA, AL_SET_BITDEPTH_CHROMA, AL_SET_BITDEPTH, AL_SET_CHROMA_MODE
AL_GET_BITDEPTH_LUMA(PicFmt)
Return
Returns the Luma depth from PicFmt. PicFmt must be an AL_EPicFormat value.
See
AL_EPicFormat
AL_GET_BITDEPTH_CHROMA(PicFmt)
Return
Returns the Chroma depth from PicFmt. PicFmt must be an AL_EPicFormat value.
See
AL_EPicFormat
AL_GET_BITDEPTH(PicFmt)
Return
Returns the maximum of the Luma depth and the Chroma depth from PicFmt. PicFmt must be an AL_EPicFormat value.
See
AL_EPicFormat
Return
Returns the Chroma mode from PicFmt. PicFmt must be an AL_EPicFormat value.
See
AL_EPicFormat
AL_SET_BITDEPTH_LUMA(PicFmt, BitDepth)
Return
Assigns BitDepth to the low-order byte (byte 0) of PicFmt. PicFmt must be an AL_EPicFormat value.
See
AL_EPicFormat
AL_SET_BITDEPTH_CHROMA(PicFmt, BitDepth)
Return
Assigns BitDepth to byte 1 of PicFmt. PicFmt must be an AL_EPicFormat value.
See
AL_EPicFormat
AL_SET_BITDEPTH(PicFmt, BitDepth)
Return
Assigns BitDepth to byte 0 and byte 1 of PicFmt. PicFmt must be an AL_EPicFormat l-value.
See
AL_EPicFormat
AL_SET_CHROMA_MODE(PicFmt, BitDepth)
Return
Assigns BitDepth to byte 2 of PicFmt. PicFmt must be an AL_EPicFormat l-value.
See
AL_EPicFormat
I420, IYUV, YV12, NV12, I422, YV16, NV16, Y800, T6m8, T608, T628, T5m8, T508, T528 8-bpc
I0AL, P010, I2AL, P210, Y010, T6mA, T60A, T62A, T5mA, T50A, T52A, RX0A, RX2A, RXmA 10-bpc
Return
Returns 8 or 10 depending on the given FourCC value. If the FourCC mode is not defined, either an assertion violation occurs or
-1 is returned.
See
FOURCC
Return
Returns 1 if uBitDepth is 8 or less. Returns 2 if uBitDepth is 9 or greater.
Return
Returns AL_FB_TILE_32x4 or AL_FB_TILE_64x4 according to the tFourCC argument.
See
AL_Is32x4Tiled, AL_Is64x4Tiled
Other AL_FB_RASTER
See
AL_EFbStorageMode, AL_ESrcMode
Description
Implements the following mapping.
AL_FB_TILE_64x4 4
AL_FB_TILE_32x4 4
AL_FB_RASTER 1
See
AL_EFbStorageMode, AL_ESrcMode
AL_IS_AVC(Prof)
Description
Returns true if Prof is AVC. Prof must be an AL_EProfile value.
See
AL_EProfile
AL_IS_HEVC(Prof)
Description
Returns true if Prof is HEVC. Prof must be an AL_EProfile value.
See
AL_EProfile
AL_IS_STILL_PROFILE(Prof)
Description
Returns true if Prof is HEVC Main Still Profile. Prof must be an AL_EProfile value.
See
AL_EProfile
Description
Returns true if tFourCC is a tiled format or one of: NV12, P010, NV16, P210, RX0A, or RX2A.
See
AL_Is64x4Tiled, AL_Is32x4Tiled
Description
Returns true if tFourCC is a tiled format.
See
AL_Is64x4Tiled, AL_Is32x4Tiled
Description
Returns true if tFourCC is one of: T508, T528, T5m8, T50A, T52A, or T5mA.
See
AL_IsTiled, AL_Is64x4Tiled
Description
Returns true if tFourCC is one of: T608, T628, T6m8, T60A, T62A, or T6mA.
See
AL_IsTiled, AL_Is32x4Tiled
Description
Returns true if tFourCC is a tiled format or one of: RX0A, RX2A, or RXmA.
Description
Writes the subsampling of the FourCC mode into sx and sy as shown in the following mapping.
Chroma Mode sx sy
4:2:0 2 2
4:2:2 2 1
Other 1 1
Description
Implements the following mapping. If the picture format picFmt is not listed below, either an assertion violation occurs or -1 is returned.
Chroma Mode Bit Depth FourCC
4:2:2 8 T628
4:2:2 10 T62A
4:2:0 8 T608
4:2:0 10 T60A
Mono 8 T6m8
Mono 10 T6mA
4:2:2 8 T528
4:2:2 10 T52A
4:2:0 8 T508
4:2:0 10 T50A
Mono 8 T5m8
Mono 10 T5mA
See
AL_GetSrcFourCC, Get64x64FourCC, Get32x4FourCC
Driver* AL_GetHardwareDriver()
Description
The hardware driver is a static structure holding function pointers. The device is opened while creating the encoder.
Return
Returns a pointer to the driver function pointer structure.
See
AL_SchedulerMcu_Create, AL_Encoder_Create
Description
Allocates and initializes memory for the scheduler.
Return
Returns true if successful and false, otherwise.
See
AL_GetHardwareDriver, DmaAlloc_Create, AL_SchedulerMcu_Destroy
Description
Frees memory associated with the schedulerMcu.
Return
Returns true if successful and false, otherwise.
See
AL_Encoder_Create, AL_SchedulerMcu_Create
Return
Returns AL_CODEC_AVC or AL_CODEC_HEVC according to eProf.
The ctrlsw_encoder and ctrlsw_decoder are complete sample applications that encode and decode video respectively.
These applications are intended as a learning aid for the VCU Control Software API and for troubleshooting. The source code
for the ctrlsw_encoder and ctrlsw_decoder applications are at https://github.com/Xilinx/vcu-ctrl-sw.
Sample configuration files and input .yuv file mentioned in examples below can be found in the VCU Control Software source
tree test/cfg folder. The parameters are described after the examples below.
✎ Note: For a complete list of parameters, type the following in the command line:
ctrlsw_decoder --help
ctrlsw_encoder --help
Input Parameters
Format I420 (Planar Format): YUV file contains 4:2:0 8-bit video samples stored in planar format with all picture Luma (Y) samples followed by Chroma samples (all U samples then all V samples).
IYUV (Planar Format): Same as I420.
YV12: Same as I420 with inverted U and V order.
NV12 (Semi-planar Format): YUV file contains 4:2:0 8-bit video samples stored in semi-planar format with all picture Luma (Y) samples followed by interleaved U and V Chroma samples.
NV16 (Semi-planar Format): YUV file contains 4:2:2 8-bit video samples stored in semi-planar format with all picture Luma (Y) samples followed by interleaved U and V Chroma samples.
I0AL (Planar Format): YUV file contains 4:2:0 10-bit video samples each stored in a 16-bit word in planar format with all picture Luma (Y) samples followed by Chroma samples (all U samples then all V samples).
P010 (Semi-planar Format): YUV file contains 4:2:0 10-bit video samples each stored in a 16-bit word in semi-planar format with all picture Luma (Y) samples followed by interleaved U and V Chroma samples.
I422 (Planar Format): YUV file contains 4:2:2 8-bit video samples stored in planar format with all picture Luma (Y) samples followed by Chroma samples (all U samples then all V samples).
YV16 (Planar Format): Same as I422.
I2AL (Planar Format): YUV file contains 4:2:2 10-bit video samples each stored in a 16-bit word in planar format with all picture Luma (Y) samples followed by Chroma samples (all U samples then all V samples).
P210 (Semi-planar Format): YUV file contains 4:2:2 10-bit video samples each stored in a 16-bit word in semi-planar format with all picture Luma (Y) samples followed by interleaved U and V Chroma samples.
XV20 (Semi-planar Format): YUV file contains 4:2:2 10-bit video samples where 3 samples are stored per 32-bit word using a semi-planar format with all picture Luma (Y) samples followed by interleaved U and V Chroma samples.
Y800 (Planar Format): Input file contains monochrome 8-bit video samples.
XV15 (Semi-planar Format): YUV file contains 4:2:0 10-bit video samples where 3 samples are stored per 32-bit word using a semi-planar format with all picture Luma (Y) samples followed by interleaved U and V Chroma samples.
Framerate Number of frames per second of the YUV input file. When this parameter is not present, its value is set equal to the frame rate specified in the Rate Control Parameters. When this parameter is greater than the frame rate specified in the rate control section, the encoder drops some frames; when this parameter is lower than the frame rate specified in the rate control section, the encoder repeats some frames.
CmdFile Text file specifying commands to perform at given frame numbers. The commands include scene
change notification, key frame insertion, etc.
RoiFile Text file specifying a sequence of Region of Interest changes at given frame numbers.
QpTablesFolder Specifies the location of the files containing the QP tables to use for each frame.
Output Parameters
RecFile Optional output file name for reconstructed picture (in YUV format).
When this parameter is not present, the reconstructed YUV file is not saved.
Format FOURCC code of the reconstructed YUV file format, see Input section for possible values. If not
specified, the output uses the format of the input.
CONST_QP: No rate control, all pictures use the same QP defined by the SliceQP parameter.
CBR: Use constant bit rate control.
VBR: Use variable bit rate control.
LOW_LATENCY: Use variable bit rate for low latency application.
CAPPED_VBR, PLUGIN
BitRate Target bit rate in Kb/s. Not used when RateCtrlMode = CONST_QP
Default value: 4000
MaxBitRate Maximum bit rate in Kb/s. Not used when RateCtrlMode = CONST_QP
Default value: 4000
✎ Note: When RateCtrlMode is CBR, MaxBitRate should be set to the same value as BitRate.
FrameRate Number of frames per second
Default value: 30
SliceQP Quantization parameter. When RateCtrlMode = CONST_QP the specified QP is applied to all
slices. When RateCtrlMode = CBR the specified QP is used as initial QP.
Allowed values: from 0 to 51
Default value: 30
MinQP 1 Minimum QP value allowed. This parameter is especially useful when using VBR rate control. In
VBR rate control the value AUTO can be used to let the encoder select the MinQP according to
SliceQP.
Allowed values: from 0 to SliceQP
Default value: 10
InitialDelay Specifies the initial removal delay as specified in the HRD model, in seconds. Not used when
RateCtrlMode = CONST_QP.
✎ Note: If this value is set too low (less than 1 frame period), you might see reduced visual
quality.
Default value: 1.5
CPBSize Specifies the size of the Coded Picture Buffer as specified in the HRD model, in seconds. Not
used when RateCtrlMode = CONST_QP.
Default value: 3.0
ScnChgResilience Specifies the scene change resilience handling during encode process. Improves quality during
scene changes.
ENABLE, DISABLE, TRUE, FALSE
Default value: TRUE
MaxPictureSize 2 Maximum frame size allowed, in Kbits. If only MaxPictureSize is provided, it sets MaxPictureSize
for all I, P, and B frames. To set MaxPictureSize individually, refer to MaxPictureSize.I,
MaxPictureSize.P, and MaxPictureSize.B.
MaxPictureSize = TargetBitrate (in Kb/s) / FrameRate * AllowedPeakMargin
An AllowedPeakMargin of 10% is recommended. For example, for a 100 Mb/s target at 60 fps:
MaxPictureSize = (100 Mb/s / 60 fps) * 1.1 = 1834 Kbits per frame
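The computation above can be sketched as follows. This is a minimal illustration of the guide's formula; the function name is ours, not part of the VCU API:

```python
import math

def max_picture_size_kbits(target_bitrate_kbps, frame_rate, peak_margin=1.10):
    """Coarse per-frame size cap in Kbits, per the formula above:
    MaxPictureSize = TargetBitrate / FrameRate * AllowedPeakMargin."""
    return math.ceil(target_bitrate_kbps / frame_rate * peak_margin)

# 100 Mb/s target at 60 fps with the recommended 10% peak margin
print(max_picture_size_kbits(100_000, 60))  # -> 1834
```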
EnableSkip Enables the frame skip feature. Available values: DISABLE, ENABLE, FALSE, TRUE
Default value: FALSE
MaxConsecutiveSkip Specifies the maximum number of consecutive skipped frames if EnableSkip is enabled.
Default value: 4294967295
MaxPictureSize.B Specifies a coarse size (in Kbits) for B-frame that should not be exceeded.
Default value: DISABLE
MaxPictureSize.I Specifies a coarse size (in Kbits) for I-frame that should not be exceeded.
Default value: DISABLE
MaxPictureSize.P Specifies a coarse size (in Kbits) for P-frame that should not be exceeded.
Default value: DISABLE
1. In VBR, the MinQP is computed based on the InitialQP, for example, MinQP = InitialQP - 8. When set to AUTO, the
InitialQP is computed based on the video resolution, frame rate, and target bit rate. A MinQP value less than 10 can
generate a very large picture on a scene change, and the rate controller can then take a long time, at low quality, to
recover and reach the target bit rate. Therefore, when the computed MinQP would fall below 10 (for example, when InitialQP is 8),
the automatic MinQP is set to 10. When the value of MinQP is AUTO, the behavior is the same in AVC as in HEVC. When the value of
RateCtrlMode is LOW_LATENCY, the hardware rate control of the VCU is enabled. The QP is then adapted within the
frame, preventing very large frames, so MinQP does not need to be constrained as in frame-level rate control.
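The AUTO behavior described in the note can be approximated as follows. This is our reading of the note, not a published firmware formula, and the function name is ours:

```python
def auto_min_qp(initial_qp):
    """Approximate MinQP = AUTO in VBR, per the note above:
    MinQP = InitialQP - 8, floored at 10 so that scene changes
    cannot produce very large pictures."""
    return max(initial_qp - 8, 10)

print(auto_min_qp(30))  # -> 22
print(auto_min_qp(8))   # -> 10
```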
2. When you set the MaxPictureSize parameter, the MCU firmware enables the hardware rate control module to keep track of
the encoded frame size. It adjusts the QPs within the frame so that encoded picture sizes honor the provided maximum
picture size. Initial statistics are based on the start of the frame (the first few macroblock (MB) rows), and the rate control
algorithm further modulates the QP as the picture progresses to keep the picture size within the limit specified by
MaxPictureSize, as much as possible. This differs from normal rate control (CBR/VBR) without the MaxPictureSize
parameter enabled, where the rate control only receives feedback after the picture is encoded. With MaxPictureSize
enabled, the QP within a frame is manipulated to curtail the frame size to the specified limit, using the statistics of the
hardware rate control module in the VCU Encoder IP (the same module generally used for low-latency applications, that is,
the low-latency rate control mode).
The CPBSize default value is set to 3 sec in the VCU software (if allowed by the defined encoding level and bit rate parameters),
with an option to change this value based on the application requirements. The encoder CBR rate control tries to reach the
target bit rate over the period of the GOP length but the main constraint is that it must avoid buffer underflows/overflows as
defined by the standard Hypothetical Reference Decoder (HRD) model. A larger CPBSize allows the Encoder rate control to
distribute the encoded bits over a larger number of frames so that it can handle larger bit rate variations among consecutive
frames and increase video quality. Setting the CPBSize so a smaller value can reduce the bit rate peaks but can also impact the
video quality.
Video recording and storage applications, where instantaneous bit rate fluctuations are unimportant and larger
buffering can be supported, should set the CPBSize to larger values (~1s-3s).
It is recommended to set the CPBSize to a value that is greater than the GOP length duration (for example, for a 60 fps
setting, the GOP length could be set to 60 frames and the CPBSize to more than 1s.)
Applications that require smaller bit rate variations can reduce the CPBSize but it is recommended to set a value larger
than ~6 frame periods. It is also recommended in such cases to enable the low-latency rate control mode.
Low-latency applications should use a "low-delay" GOP type (or intra-only GOP type) and then can reduce the CPBSize
down to ~1-2 frame periods.
For applications that require lower bit rates, like 1 Mb/s, with minimal variation expected in the instantaneous bit rate, it is
recommended to use a smaller CPBSize, which makes it easier for the VCU to keep a constant bit rate. The only reason to
scale the CPBSize between very high rates and very low rates is to trade off quality against instantaneous bit-rate
fluctuations.
If your use case is not sensitive to instantaneous bit-rate fluctuations (which can otherwise cause high CPU usage resulting
in dropped frames), it is recommended to increase the CPBSize; this gives the rate control algorithm more flexibility to
allocate more bits to specific frames, which can improve the quality. For very high bit rates like 100 Mb/s, the quality
difference should be negligible, but it might make a difference as the bit rate decreases.
Ensure to update InitialDelay whenever you change the CPBSize. InitialDelay can be any value less than or equal to the
CPBSize.
The InitialDelay should have a very minimal effect on the frame-bits consumption. AMD recommends that you use half of
the CPBSize, but for a small CPBSize (approximately 6 frames), set CPBSize=InitialDelay. The default InitialDelay
parameter is 1.5s (half of the default CPBSize of 3s).
The CPBSize and InitialDelay parameters are generally used to verify the Hypothetical Reference Decoder (HRD) buffer
model conformance. The InitialDelay parameter specifies the time at which the first picture data needs to be removed from
the buffer. It does not refer to the physical buffers, but it helps in verifying the HRD conformance. For more details, see the
H.264 standard, Annex C.
It is not recommended to have the CPBSize smaller than approximately 6 frames. This can result in cases where the bits
allocated to each frame are not as expected, because the algorithm does not have enough frames for adjusting the
quantization.
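Putting the recommendations above together, a rate-control fragment for a 60 fps CBR stream might look like the following. The values are illustrative, not normative, and assume the encoder configuration file format shown later in this guide:

```
#--------------------------------------------------------------------------
[RATE_CONTROL]
#--------------------------------------------------------------------------
RateCtrlMode = CBR
BitRate = 10000
# For CBR, MaxBitRate should equal BitRate
MaxBitRate = 10000
FrameRate = 60
# CPBSize greater than the GOP duration (60 frames at 60 fps = 1 s)
CPBSize = 2.0
# InitialDelay <= CPBSize; half of CPBSize is recommended
InitialDelay = 1.0
```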
Gop.Length GOP length in frames including the I-picture. Used only when GopCtrlMode is set to
DEFAULT_GOP. Should be set to 0 for Intra-only.
Range: 0–1,000.
Gop.FreqIDR Specifies the minimum number of frames between two IDR pictures (AVC and HEVC). IDR
insertion depends on the position of the GOP boundary.
Allowed values: positive integer or -1 to disable IDR region.
Default value: -1
Gop.FreqLT Specifies the long term reference picture refresh frequency in number of frames.
Allowed values: positive integer or 0 to disable use of Long term reference picture
Default value: 0
Gop.NumB Maximum number of consecutive B-frames in a GOP. Used only when GopCtrlMode is set to
DEFAULT_GOP or PYRAMIDAL_GOP.
Allowed values:
When GopCtrlMode is set to DEFAULT_GOP, Gop.NumB shall be in the range 0 to 4. When
GopCtrlMode is set to PYRAMIDAL_GOP, Gop.NumB shall be 3, 5, 7, or 15.
Gop.GdrMode 1 When GopCtrlMode is set to LOW_DELAY_P or LOW_DELAY_B this parameter specifies whether a
Gradual Decoder Refresh (GDR) scheme should be used or not. When GDR is enabled, the
Gop.Length specifies the frequency at which the refresh pattern should happen. To allow full
picture refreshing, this parameter should be greater than the number of CTB/MB rows
(GDR_HORIZONTAL) or columns (GDR_VERTICAL).
DISABLE: no GDR
GDR_VERTICAL: Gradual refresh using a vertical bar moving from left to right.
GDR_HORIZONTAL: Gradual refresh using a horizontal bar moving from top to bottom.
1. When GDR is enabled, the Gop.FreqIDR specifies the frequency at which the refresh pattern should occur. To allow the
full picture to refresh, this parameter should be set to a value greater than the number of CTB/MB rows
(GDR_HORIZONTAL) or columns (GDR_VERTICAL).
Settings Parameters
Tier Specifies the tier to which the bitstream conforms (H.265 (HEVC) only)
Allowed values: MAIN_TIER, HIGH_TIER
Default value: MAIN_TIER
ChromaMode Selects the Chroma subsampling mode used to encode the stream
Allowed values:
BitDepth Specifies the bit depth of the Luma and Chroma samples in the encoded stream.
Allowed values: 8 or 10
Default value: 8
NumSlices Specifies the number of slices used for each frame. Each slice contains one or more full LCU
rows, and the slices are spread over the frame as regularly as possible.
Allowed values: from 1 up to the number of coding unit rows in the frame.
Default value: 1
SliceSize Target Slice Size specifies the target slice size, in bytes, that the encoder uses to automatically
split the bitstream into approximately equally-sized slices, with a granularity of one LCU. This
impacts performance, adding an overhead of one LCU per slice to the processing time of a
command. This parameter is not supported in H.264 (AVC) encoding when using multiple
cores. When SliceSize is zero, slices are defined by the NumSlices parameter. This parameter
is directly sent to the Encoder IP and specifies only the size of the Slice Data. It does not
include any margin for the slice header. So it is recommended to set the SliceSize parameter
with the target value lowered by 5%. For example if your target value is 1500 bytes per slice,
you should set "SliceSize = 1425" in the configuration file.
Allowed values: 1000-65,535 or 0 to disable the automatic slice splitting.
Default value: 0
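The 5% header margin recommended above can be computed as follows (the helper name is ours; integer arithmetic avoids floating-point rounding):

```python
def slice_size_setting(target_bytes, margin_percent=5):
    """SliceSize counts only the slice data, so lower the target by
    ~5% to leave room for the slice header, as recommended above."""
    return target_bytes * (100 - margin_percent) // 100

# 1500-byte target slice -> value to write in the configuration file
print(slice_size_setting(1500))  # -> 1425
```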
DependentSlice When there are several slices per frame (for example, NumSlices is greater than 1 or SliceSize is
greater than 0), this parameter specifies whether the additional slices are dependent slice
segments or regular slices (H.265 (HEVC) only).
Allowed values: FALSE, TRUE
Default value: FALSE
EntropyMode Selects the entropy coding mode if Profile is set to AVC_MAIN, AVC_HIGH, AVC_HIGH10 or
AVC_HIGH_422 (AVC only)
Allowed values:
CabacInit Specifies the CABAC initialization table index (H.264 (AVC)) / initialization flag (H.265 (HEVC)).
Allowed values: from 0 to 2 (H.264 (AVC)), from 0 to 1 (H.265 (HEVC))
Default value: 0
PicCbQpOffset Specifies the QP offset for the first Chroma channel (Cb) at picture level. (H.265 (HEVC) only)
Allowed values: from –12 to +12
Default value: 0
PicCrQpOffset Specifies the QP offset for the second Chroma channel (Cr) at picture level (H.265 (HEVC)
only)
Allowed values: from –12 to +12
Default value: 0
SliceCbQpOffset Specifies the QP offset for the first Chroma channel (Cb) at slice level.
Allowed values: from –12 to +12
Default value: 0
SliceCrQpOffset Specifies the QP offset for the second Chroma channel (Cr) at slice level
Allowed values: from –12 to +12
Default value: 0
✎ Note:
(PicCbQPOffset + SliceCbQPOffset) shall be in range –12 to +12
(PicCrQPOffset + SliceCrQPOffset) shall be in range –12 to +12
ScalingList Specifies the scaling list mode (H.264 (AVC) and H.265 (HEVC) only).
Allowed values: FLAT, DEFAULT
CuQpDeltaDepth Specifies the Qp per CU granularity (H.265 (HEVC) only). Used only when QpCtrlMode is set to
AUTO_QP or ADAPTIVE_AUTO_QP
0: down to 32×32
1: down to 16×16
2: down to 8×8
Default value: 0
VrtRange_P Specifies the vertical search range used for P-frame motion estimation:
Allowed values for H.265 (HEVC): 16 or 32; using 16 reduces the memory bandwidth
(Low Bandwidth mode)
Allowed values for H.264 (AVC): 8 or 16; using 8 reduces the memory bandwidth
(Low Bandwidth mode)
LoopFilter.CrossSlice Enables/disables in-loop filtering across the left and upper boundaries of each slice of the
frame. Used only when LoopFilter is set to ENABLE.
Allowed values: ENABLE, DISABLE
Default value: ENABLE
LoopFilter.CrossTile Enables/disables in-loop filtering across the left and upper boundaries of each tile of the
frame. (H.265 (HEVC) only) Used only when LoopFilter is set to ENABLE.
Allowed values: ENABLE, DISABLE
Default value: ENABLE
LoopFilter.BetaOffset Specifies the beta offset for the deblocking filter. Used only when LoopFilter is set to ENABLE.
Allowed values: from –6 to +6
Default value: –1
LoopFilter.TcOffset Specifies the Alpha_c0 offset (H.264 (AVC)) or Tc offset (H.265 (HEVC)) for the deblocking
filter. Used only when Loop Filter is set to ENABLE.
Allowed values: from –6 to +6
Default value: –1
CacheLevel2 Enables/disables the optional Encoder buffer. This can be used to reduce the memory
bandwidth and it can slightly reduce the video quality.
If enabling this parameter displays an error message from the encoder, it means that the
encoder buffer size provided by the VCU driver is too small to handle the minimum motion
estimation range.
Allowed values: ENABLE, DISABLE
Default value: DISABLE
AspectRatio Selects the display aspect ratio of the video sequence to be written in SPS/VUI. Allowed
values:
ASPECT_RATIO_AUTO 4:3 for common SD video, 16:9 for common HD video, unspecified
for unknown format.
ASPECT_RATIO_4_3 4:3 aspect ratio
ASPECT_RATIO_16_9 16:9 aspect ratio
ASPECT_RATIO_NONE Aspect ratio information is not present in the stream.
ASPECT_RATIO_1_1
LookAhead Specifies the size of the look ahead (number of frames between the two passes):
Default value: 0
EnableSEI 3 Determines which supplemental enhancement information (SEI) messages are sent with the stream.
Available values: SEI_ALL, SEI_BP, SEI_CLL, SEI_MDCV, SEI_NONE, SEI_PT, SEI_RP.
Default value: SEI_NONE
LambdaFactors Specifies the lambda factor for each picture: I, P and B by increasing temporal ID
Default value: 0 0 0 0 0 0
AvcLowLat Enables a special synchronization mode for AVC low latency encoding (validation only).
Available values: DISABLE, ENABLE.
Default value: DISABLE
ColourMatrix Specifies the matrix coefficients used in deriving luma and chroma signals from RGB (HDR
setting).
Available values: COLOUR_MAT_BT_2100_YCBCR, COLOUR_MAT_UNSPECIFIED.
Default value: COLOUR_MAT_UNSPECIFIED
EnableAUD Determines if Access Unit Delimiters are added to the stream or not.
Available values: DISABLE, ENABLE.
Default value: ENABLE
FileScalingList If ScalingList is CUSTOM, specifies the file containing the quantization matrices
SCDFirstPass During the first pass, to encode faster, enables only the scene change detection.
Available values: DISABLE, ENABLE.
TransferCharac Specifies the reference opto-electronic transfer characteristic function (HDR setting).
Available values: TRANSFER_BT_2100_PQ, TRANSFER_UNSPECIFIED.
Default value: TRANSFER_UNSPECIFIED
1. When using GStreamer, the following gop-length and b-frames combinations are not supported; these are mostly unused
in real-time scenarios: ({2, 1}, {3, 2}, {4, 2}, {4, 3}, {5, 3}, {5, 4}, {6, 3}, {6, 4}, {7, 4}, {8, 4}, {9, 3}, {11, 4}, {12,
4}, {16, 4}).
2. When using GStreamer, you cannot use the "Closed GOP" structures due to AMD specification.
3. SEI_UDU stands for the user_data_unregistered SEI message. When EnableSEI = SEI_ALL, all supported SEI messages are
enabled, including SEI_UDU. SEI_UDU generates a specific SEI_UDU (prefix) along with empty filler data NAL units. When
EnableSEI = SEI_BP | SEI_PT, you will not observe any dummy filler data NAL units in the bitstream.
Run Parameters
Loop Specifies whether the encoder should loop back to the beginning of the YUV input stream when
it reaches the end of the file.
Allowed values: TRUE, FALSE
Default value: FALSE
BitrateFile The generated stream size for each picture and bitrate information is written to this file.
InputSleep Time period in milliseconds. The encoder is given frames each time period (at a minimum).
Default value: 0
ScnChgLookAhead Specifies the number of frames for the scene change look ahead.
Default value: 3
UseBoard Specifies if you are using the reference model (DISABLE) or the actual hardware (ENABLE).
Available values: DISABLE, ENABLE.
Default value: ENABLE
The QP table modes are enabled by using QpCtrlMode = LOAD_QP or QpCtrlMode = LOAD_QP | RELATIVE_QP.
In this case, the reference model uses the file QPs.hex in the working directory to specify the QP values at LCU level. Each line of
QPs.hex contains one 8-bit hexadecimal value. For H.264 (AVC), there is one byte per 16×16 MB (in raster scan format):
absolute QP in [0;51] or relative QP in [‑32;31]. For H.265 (HEVC), there is one byte per 32×32 LCU (in raster scan format):
absolute QP in [0;51] or relative QP in [‑32;31].
✎ Note: Only the 6 LSBs of each byte are used for QP or Segment ID, the 2 MSBs are reserved.
For example, to specify the following relative QP table:
-1 -2 0 1 1 4
-4 -1 1 2 2 1
-1 0 -1 1 1 4
0 0 -2 2 2 6
1 -2 0 1 4 2
the QPs.hex file should contain:
3F
3E
00
01
01
04
3C
3F
...
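The hex values above are the 6-bit two's-complement encodings of the relative QPs (only the 6 LSBs are used; the 2 MSBs are reserved). The conversion can be sketched as follows; the helper name is ours:

```python
def qp_to_hex(qp, relative=True):
    """Encode one QP as the 8-bit hex byte written to QPs.hex.
    Relative QPs in [-32, 31] are stored as 6-bit two's complement;
    absolute QPs in [0, 51] are stored directly."""
    if relative:
        assert -32 <= qp <= 31
        return format(qp & 0x3F, '02X')
    assert 0 <= qp <= 51
    return format(qp, '02X')

# First rows of the relative QP table above, in raster scan order
table = [-1, -2, 0, 1, 1, 4,
         -4, -1, 1, 2, 2, 1]
print([qp_to_hex(q) for q in table[:8]])
# -> ['3F', '3E', '00', '01', '01', '04', '3C', '3F']
```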
If a file with name equal to QP_<frame number>.hex is present in the working folder, the encoder uses it for frame number
<frame number> instead of QPs.hex.
For example if you have the following files in the working folder:
QP_0.hex
QP_4.hex
QPs.hex
The encoder uses QP_0.hex for frame #0, QP_4.hex for frame #4 and QPs.hex for all other frames (that is, frame #1, #2, #3, #5,
#6 ...).
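The per-frame lookup rule described above can be sketched as follows (the helper name is ours):

```python
import os

def qp_file_for_frame(frame_number, folder='.'):
    """Return QP_<frame number>.hex from the working folder if it
    exists, else fall back to QPs.hex, mirroring the reference-model
    lookup described above."""
    candidate = os.path.join(folder, f'QP_{frame_number}.hex')
    if os.path.isfile(candidate):
        return candidate
    return os.path.join(folder, 'QPs.hex')
```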
Updating or loading QPs using the LoadQP option is not supported in GStreamer, but it is possible to use LoadQP in live
streaming at the Control Software level. The QP table is passed as an input along with the input frame buffer using the
AL_Encoder_Process API. You must update the pQpTable values for each frame in live streaming.
Pushes a frame buffer to the encoder. Depending on the GOP pattern, this frame buffer may or may not be encoded
immediately.
Parameters
[in] pFrame Pointer to the frame buffer to encode. The pFrame buffer needs to have an associated
AL_TSrcMetaData describing how the YUV data is stored in memory. The memory of the buffer should
not be altered while the encoder is using it. There are some restrictions associated with the source
metadata. The chroma pitch has to be equal to the luma pitch, must be 32-bit aligned, and must be
greater than or equal to the minimum supported pitch for the resolution (see AL_EncGetMinPitch()).
The chroma offset should not fall inside the luma block. The FourCC should be the same as the one
from the channel (see AL_EncGetSrcFourCC()).
[in] pQpTable Pointer to an optional qp table used if the external qp table mode is enabled
Returns
If the function succeeds the return value is nonzero (true) If the function fails the return value is zero (false).
The encoder uses lambda factors per QP value as a bit rate versus quality trade-off. By default, the encoder IP uses internal
lambda factors, but it can also use user-specified lambda factors. This feature is enabled when LambdaCtrlMode is set to
LOAD_LDA and a Lambdas.hex file is available in the working directory. Each line of Lambdas.hex contains one 32-bit
hexadecimal value per QP value (that is, 52 lines for QP in the range [0...51]).
Each 32-bit word is split as follows:
Bits 7 to 0: Lambda factor for slice B
Bits 15 to 8: Lambda factor for slice P
Bits 23 to 16: Lambda factor for slice I
Bits 31 to 24: Lambda factor for the motion estimation block (this value shall be a power of two)
✎ Note: Usually, the lambda factors follow the equation: λ(QP) = N × 2^(QP/6 − 2), where N is different for the I, P, and B factors.
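The 32-bit word layout above can be sketched as follows. The packing helper is ours; per the layout, the motion-estimation factor must be a power of two:

```python
def pack_lambda_word(lda_me, lda_i, lda_p, lda_b):
    """Pack one Lambdas.hex line: bits 31:24 motion estimation,
    bits 23:16 slice I, bits 15:8 slice P, bits 7:0 slice B."""
    assert lda_me > 0 and lda_me & (lda_me - 1) == 0, "ME factor must be a power of two"
    for v in (lda_me, lda_i, lda_p, lda_b):
        assert 0 <= v <= 0xFF
    return format((lda_me << 24) | (lda_i << 16) | (lda_p << 8) | lda_b, '08X')

# One such line per QP value (52 lines for QP 0..51); factors illustrative
print(pack_lambda_word(0x20, 0x24, 0x30, 0x38))  # -> '20243038'
```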
The encoder supports input commands simulating dynamic events, like dynamic bitrate and key frame insertion. This feature is
enabled when a command file is referenced in the configuration file. Refer to the CmdFile parameter in Input Parameters. The
command file format is described below. Each line of the file defines one frame identifier followed by one or more commands
applying to this frame.
Syntax
Command Description
SC Scene change
Example
0: LT
20: KF
30: BR =100, GopLen = 10
45: SC
50: UseLT
101: GopLen= 20, NumB =1
The region of interest definition starts with a line specifying the frame identifier, the background quality and an ROI order for
overlapping regions, followed by one or more lines each defining a ROI for this frame and the following frames until the next ROI
definition.
Syntax
Where <posX>, <posY>, <width>, and <height> are in pixel unit. They are then automatically rounded to the bounding LCU units
(16×16 in AVC and 32×32 in HEVC). The <quality> is one of the following:
The <order> is one of the following values:
Order Description
#--------------------------------------------------------------------------
[INPUT]
#--------------------------------------------------------------------------
HDRFile = HDRSEIs.txt
#--------------------------------------------------------------------------
[SETTINGS]
#--------------------------------------------------------------------------
TransferCharac = TRANSFER_BT_2100_PQ
ColourMatrix = COLOUR_MAT_BT_2100_YCBCR
EnableSEI = SEI_MDCV | SEI_CLL
ColourDescription = COLOUR_DESC_BT_2020
You can define the HDR10 metadata to be inserted through the HDRFile option. The HDR file needs to be in the following format:
Control Software
Encoder application command to insert the HDR10 metadata:
Driver
There are multiple VCU kernel modules. The VCU Init module (xlnx_vcu), which is part of the Linux kernel, handles PL registers
such as the VCU Gasket and the clocking. The other three kernel drivers (al5e, al5d, allegro) together form the core VCU driver.
The decoder driver is called al5d, the encoder driver is called al5e, and the common driver is called allegro.
The allegro driver has the following responsibilities:
MCU Firmware
The MCU firmware running on the MCU has the following responsibilities:
Transforming frame-level commands from VCU Control Software to slice level commands for the hardware IP core.
Configuring hardware registers for each command.
Performing rate control between each frame.
Application
The application can be either a test pattern generator or an OpenMAX-based application that uses the VCU decoder.
Decoder Library
The decoder library enables applications to communicate with the MCU firmware through the decoder driver.
Decoder Driver
The decoder driver passes control information as well as buffer pointers of the video to the MCU firmware. The decoder driver
uses a mailbox communication technique to pass this information to the MCU firmware.
MCU Firmware
The firmware receives control and buffer information through the mailbox. Appropriate action is taken and status is
communicated back to the decoder driver.
Scheduler
The scheduler, which is part of MCU firmware, programs the hardware IP, handles interrupts and manages the multi-channel and
multi-slice aspects of the decoding.
Encoder Stack
Application
Application refers to any OpenMAX based or standalone application that uses the underlying encoder capabilities of the VCU.
Encoder Library
The encoder library provides the entry points for configuring the encoder and sending frames to the encoder.
Encoder Driver
The encoder driver passes control information and buffer pointers of the video bit stream on which the VCU encoder has to
operate to the MCU firmware. The encoder driver uses a mailbox communication technique to pass this information to the MCU
firmware.
MCU Firmware
The firmware receives control and buffer information through the mailbox. Appropriate action is taken and status is
communicated back to the encoder driver.
Scheduler
The scheduler directs the activity of the hardware, handles interrupts, and manages the multi-channel and multi-slice aspects of
the encoding.
Encoder Flow
The following figure shows the typical flow of control using the VCU Control Software API.
Encoder API
https://github.com/Xilinx/vcu-ctrl-sw/blob/master/include/lib_encode/lib_encoder.h
https://github.com/Xilinx/vcu-ctrl-sw/blob/master/include/lib_common_enc/Settings.h
https://github.com/Xilinx/vcu-ctrl-sw/blob/master/include/lib_common/StreamBuffer.h
Description
Returns the error code from the context structure in the encoder. Thread-safe.
AL_SUCCESS Success
See
Error.h, AL_Decoder_GetLastError
AL_TEncSettings
Description
The Encoder settings are described in the following structure.
AL_TEncChanParam tChParam
bool bDependentSlice
bool bDisIntra
uint32_t uL2PSize
uint8_t ScalingList[4][6][64]
uint8_t SclFlag[4][6]
uint32_t bScalingListPresentFlags
uint8_t DcCoeff[8]
uint8_t DcCoeffFlag[8]
See
AL_Settings_SetDefaults, AL_Settings_SetDefaultParam, AL_Settings_CheckValidity, AL_Settings_CheckCoherency,
AL_Encoder_Create, AL_Common_Encoder_CreateChannel, AL_ExtractNalsData, AL_AVC_SelectScalingList,
AL_HEVC_SelectScalingList, AL_HEVC_GenerateVPS, AL_AVC_UpdateHrdParameters, AL_HEVC_UpdateHrdParameters,
AL_AVC_GenerateSPS, AL_HEVC_GenerateSPS, AL_AVC_GeneratePPS, AL_HEVC_GeneratePPS, GetHevcMaxTileRow,
ConfigureChannel, ParseRateControl, ParseGop, ParseSettings, ParseHardware, ParseMatrice (sic), RandomMatrice (sic),
GenerateMatrice (sic), ParseScalingListFile, PostParsingChecks, GetScalingListWrapped, GetScalingList
AL_EChEncOptions
Description
Name Value
AL_OPT_WPP 0x00000001
AL_OPT_TILE 0x00000002
AL_OPT_LF 0x00000004
AL_OPT_LF_X_SLICE 0x00000008
AL_OPT_LF_X_TILE 0x00000010
AL_OPT_SCL_LST 0x00000020
AL_OPT_CONST_INTRA_PRED 0x00000040
AL_OPT_QP_TAB_RELATIVE 0x00000080
Name Value
AL_OPT_FIX_PREDICTOR 0x00000100
AL_OPT_CUSTOM_LDA 0x00000200
AL_OPT_ENABLE_AUTO_QP 0x00000400
AL_OPT_ADAPT_AUTO_QP 0x00000800
AL_OPT_TRANSFO_SKIP 0x00002000
AL_OPT_FORCE_REC 0x00008000
AL_OPT_FORCE_MV_OUT 0x00010000
AL_OPT_FORCE_MV_CLIP 0x00020000
AL_OPT_LOWLAT_SYNC 0x00040000
AL_OPT_LOWLAT_INT 0x00080000
AL_OPT_RDO_COST_MODE 0x00100000
Description
Creates a new encoder and returns a handle to it. The encoding format is fixed when the encoder object is created. For
applications that encode both H.264 and H.265 streams, to switch from one encoding to the other, the encoder object must be
destroyed and recreated with different settings. The parameters of the VCU LogiCORE are fixed in the Programmable Logic
bitstream and should be selected for the most demanding case.
Return
On success, returns AL_SUCCESS. On error, returns AL_ERROR, AL_ERR_NO_MEMORY, or return value of ioctl. AL_ERROR might
indicate a memory allocation failure, failure to open /dev/allegroIP, or failure to post a message to the encoder. Error return
codes from ioctl indicate failure interacting with the device driver.
See
DmaAlloc_Create, AL_Settings_SetDefaultParam, AL_SchedulerMcu_Create, AL_GetHardwareDriver, AL_Encoder_Destroy,
AL_Allocator_Destroy, AL_CB_EndEncoding
Description
This callback is invoked when any of the following conditions occur:
void (*func)(void* pUserParam, AL_TBuffer* pStream, AL_TBuffer const* pSrc, int iLayerID)
func: Called when a frame is encoded, the end of the stream (EOS) is reached, or the stream buffer is released.
Description
Assigns values to pSettings corresponding to HEVC Main profile, Level 51, Main tier, 4:2:0, 8-bits per channel, GOP length = 32,
Target Bit Rate = 4,000,000, frame rate = 30, etc. For the complete list of settings see AL_Settings_SetDefaults in Settings.c.
See
Settings.c.
Description
If pSettings->tChParam.eProfile is AVC, set pSettings->tChParam.uMaxCuSize to 4. This corresponds to 16×16. If HEVC and
pSettings->tChParam.uCabacInitIdc > 1, set it to 1.
See
AL_Settings_CheckValidity
Description
If pOut is non-NULL, messages are written to pOut listing the invalid settings in pSettings.
Return
Returns the number of errors found in pSettings.
See
AL_Settings_CheckCoherency
Description
Checks and corrects some encoder parameters in pSettings. If pOut is non-NULL, messages are written to pOut listing some of
the invalid settings in pSettings.
The following settings can be corrected:
eQpCtrlMode
eScalingList
iPrefetchLevel2
tChParam.eEntropyMode
tChParam.eOptions
tChParam.ePicFormat
tChParam.eProfile
tChParam.iCbPicQpOffset
tChParam.iCbSliceQpOffset
tChParam.iCrPicQpOffset
tChParam.tGopParam.uFreqIDR
tChParam.tGopParam.uFreqLT
tChParam.tGopParam.uGopLength
tChParam.tGopParam.uNumB
tChParam.tRCParam.uCPBSize
tChParam.tRCParam.uInitialRemDelay
tChParam.tRCParam.uMaxBitRate
tChParam.tRCParam.uTargetBitRate
tChParam.uCuQPDeltaDepth
tChParam.uLevel
tChParam.uMaxCuSize
tChParam.uMaxTuSize
tChParam.uMinCuSize
tChParam.uNumSlices
tChParam.uTier
uL2PSize
Return
0 if no incoherency was found.
The number of incoherencies, if any were found.
-1 if a fatal incoherency was found.
See
Settings.c, AL_Settings_CheckValidity
Description
The mitigated worst case NAL size is PCM plus one slice per row.
Return
Returns the maximum size of one NAL unit rounded up to the nearest multiple of 32.
See
AL_TDimension, AL_EChromaMode, AL_GetMaxNalSize
AL_TBufPoolConfig
Description
Structure used to configure the AL_TBufPool.
size_t zBufSize Size in bytes of the buffers that will fill the pool
See
AL_SrcMetaData_Create
Description
Tells the Encoder where to write the encoded bitstream. The firmware can only handle 320 stream buffers at a given time.
However, as the encoder releases one stream buffer after each encoded frame (or slice), the function
AL_Encoder_PutStreamBuffer can be called more than 320 times. The pStream argument must have an associated
AL_TStreamMetaData pointer added to it. The metadata can be created by AL_StreamMetaData_Create(sectionNumber,
uMaxSize) and added with AL_Buffer_AddMetaData(pStream, pMeta), subject to the following constraints:
sectionNumber = AL_MAX_SECTION, or greater if needed (such as for added SEI within the stream)
uMaxSize shall be 32-bit aligned
✎ Note: Does not perform error checking for too many buffers or duplicate buffers.
Return
Returns true
See
AL_StreamMetaData_Create, AL_Buffer_AddMetaData, AL_Encoder_Create, AL_Encoder_Destroy
[in] pQpTable Pointer to an optional quality parameter table used if the external quality parameter
mode is enabled. See AL_TEncSettings.eQpCtrlMode
Description
Pushes a frame buffer to the Encoder. The GOP pattern determines whether or not the frame can be encoded immediately. The
data associated with pFrame must not be altered by the application during encoding. The pFrame buffer must have an
associated AL_TSrcMetaData describing how the YUV data is stored in memory. There are some restrictions associated
with the source metadata:
Return
Returns true on success and false otherwise. Call AL_Encoder_GetLastError for the error status code.
See
AL_Encoder_ReleaseFrameBuffer, AL_TEncSettings, AL_CB_EndEncoding.
Description
Traverses the encoder context hEnc->pCtx deleting and clearing various structures and fields. Frees memory associated with
the encoder hEnc.
AL_EBufMode
Description
Indicates blocking or non-blocking mode for certain buffer operations. If a function expecting an AL_EBufMode is called with a
value other than AL_BUF_MODE_BLOCK or AL_BUF_MODE_NONBLOCK, the behavior is undefined.
AL_BUF_MODE_BLOCK 0
AL_BUF_MODE_NONBLOCK 1
See
AL_Decoder_PushBuffer, AL_GetWaitMode
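Based on the values listed above, the enumeration can be sketched as follows (a minimal reconstruction for illustration, not the full vendor header):

```c
/* Minimal sketch of AL_EBufMode using the values documented above.
 * Passing any other value to a function expecting an AL_EBufMode is
 * undefined behavior. */
typedef enum
{
    AL_BUF_MODE_BLOCK = 0,    /* the call blocks until the operation can proceed */
    AL_BUF_MODE_NONBLOCK = 1  /* the call returns immediately if it would block */
} AL_EBufMode;
```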
[in] iPayloadType SEI payload type. See Annex D.3 of ITU-T.
[in] pPayload Raw data of the SEI payload.
Description
Adds an SEI to the stream. This function should be called after the encoder has encoded the bitstream. The maximum final size
of the SEI in the stream cannot exceed 2 KB. The SEI payload does not need to be anti-emulated; the encoder performs anti-emulation.
Return
Returns section id.
Description
Notifies the encoder that the long-term reference frame will be used. This can improve background quality for use cases with
unchanging backgrounds such as fixed-camera surveillance.
Description
Notifies the encoder that the next reference picture is a long-term reference picture.
Description
Requests the encoder to insert a keyframe and restart a new GOP.
Return
If successful, returns true. If unsuccessful, returns false. Call AL_Encoder_GetLastError to get the error code.
Description
Informs the encoder that the GOP length has changed. If the on-going GOP is longer than the new iGopLength, the encoder
restarts the GOP immediately. Otherwise, the encoder restarts the GOP when it reaches the new length. The iGopLength
argument must be between 1 and 1,000, inclusive.
Return
If successful, returns true. If unsuccessful, returns false. Call AL_Encoder_GetLastError to get the error code.
Description
Informs the encoder of the number of consecutive B-frames between I- and P-frames.
Return
If successful, returns true. If unsuccessful, returns false. Call AL_Encoder_GetLastError to get the error code.
Description
Informs the encoder of the target bit rate in bits per second.
Return
If successful, returns true. If unsuccessful, returns false. Call AL_Encoder_GetLastError to get the error code.
Description
Tells the encoder to change the encoding frame rate, which is calculated as follows.
fps = iFrameRate × 1,000 ÷ iClkRatio
For example, when iFrameRate = 60 and iClkRatio = 1001, the frame rate is 59.94 fps.
Return
If successful, returns true. If unsuccessful, returns false. Call AL_Encoder_GetLastError to get the error code.
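The frame rate formula above can be checked with a small helper; `effective_fps` is a hypothetical name used only for illustration:

```c
/* Compute the effective frame rate from the two arguments described above:
 * fps = iFrameRate * 1000 / iClkRatio.
 * Using floating point preserves fractional rates such as 59.94. */
static double effective_fps(int iFrameRate, int iClkRatio)
{
    return (double)iFrameRate * 1000.0 / (double)iClkRatio;
}
```

With iFrameRate = 60 and iClkRatio = 1001 this yields approximately 59.94, matching the NTSC-style example in the text; iClkRatio = 1000 gives exactly 60.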
Description
Changes the resolution of the input frames to encode from the next pushed frame.
Return
If successful, returns true. If unsuccessful, returns false. Call AL_Encoder_GetLastError to get the error code.
Description
Changes the quantization parameter for the next pushed frame.
Return
If successful, returns true. If unsuccessful, returns false. Call AL_Encoder_GetLastError to get the error code.
Description
When the encoder has been created with bEnableRecOutput set to true, the AL_Encoder_GetRecPicture function allows
the application to retrieve the reconstructed frame picture in display order.
Return
Returns true if a reconstructed buffer is available, otherwise false.
Description
Releases a reconstructed buffer previously obtained through AL_Encoder_GetRecPicture.
Decoder Flow
The following figure shows an example of using the VCU Control Software API.
Decoder API
Description
Creates a decoder object. The AL_TIDecChannel object is created by AL_DecChannelMcu_Create. The AL_TAllocator object is
created by DmaAlloc_Create. The AL_TDecSettings object and the AL_TDecCallBacks object are initialized by setting their
fields directly.
Return
On success, returns AL_SUCCESS. Otherwise, returns one of the following errors:
See
AL_DecChannelMcu_Create, /dev/allegroDecodeIP, DmaAlloc_Create, AssignSettings, CheckSettings, AL_TDecCallBacks
AL_TDecCallBacks
Description
See
AL_CB_EndDecoding, AL_CB_Display, AL_CB_ResolutionFound
AL_CB_EndDecoding
Description
See
AL_TDecCallBacks, AL_Decoder_GetLastError
AL_CB_Display
Description
See
AL_TDecCallBacks, AL_Decoder_GetLastError, AL_Decoder_PutDisplayPicture
AL_CB_ParsedSei
Description
Parsed SEI callback definition.
void (*func)(int iPayloadType, uint8_t* pPayload, int iPayloadSize, void* pUserParam);
func: Called when an SEI is parsed. Anti-emulation has already been removed by the decoder from the payload. See Annex D.3 of ITU-T for the sei_payload syntax.
AL_CB_ResolutionFound
Description
Resolution change callback definition.
void (*func)(int BufferNumber, int BufferSize, AL_TStreamSettings const* pSettings, AL_TCropInfo const* pCropInfo, void* pUserParam)
func: Called only once when the first decoding process occurs. The decoder does not support a change of resolution inside a stream. Use AL_Decoder_GetLastError to check for error information.
See
AL_TDecCallBacks, AL_Decoder_GetLastError, AL_Decoder_PutDisplayPicture
AL_TIDecChannel* AL_DecChannelMcu_Create()
Description
Returns a pointer to the newly allocated AL_TIDecChannel object or NULL if allocation fails.
AL_TIDecChannel
Description
When the decoder has finished decoding a frame, this callback is invoked. This callback structure contains a function pointer to
a function that takes a user parameter and a picture status. The picture status is a pointer to an AL_TDecPicStatus. The default
callback is AL_Default_Decoder_EndDecoding.
See
AL_Default_Decoder_EndDecoding
Description
Releases resources associated with the Decoder hDec.
See
AL_Decoder_Create
Description
Adds the display buffer pDisplay to the decoder's internal buffer pool used for outputting decoded pictures. At most 50 display
buffers are allowed. Depending on the compiler settings, exceeding this limit either results in an assertion violation or the buffer
is cleared but not added to the pool.
See
AL_TFrmBufPool, FRM_BUF_POOL_SIZE
Description
Gives the minimum stride supported by the decoder for its reconstructed buffers.
Description
Gives the minimum stride height supported by the decoder.
Restriction
The decoder only supports a stride height equal to the value returned by AL_Decoder_GetMinStrideHeight. Any other strideHeight value is ignored.
Description
Allocates memory according to the stream settings and the selected codec. Also performs some buffer initialization.
Return
Returns true if allocation succeeded and false otherwise.
See
AL_Default_Decoder_PreallocateBuffers, AL_Default_Decoder_AllocPool, AL_Default_Decoder_AllocMv, AL_PictMngr_Init
Description
The default function pointer that is invoked when the decoder has finished decoding a frame performs various display buffer
operations:
See
AL_PictMngr_EndDecoding, AL_t_PictureManagerCallbacks, AL_PictMngr_UnlockRefMvID
AL_TDecPicStatus
Description
Associates counters, CRC, and flags with a frame.
AL_TDecSettings
Description
uint8_t uDDRWidth Width of the DDR used by the decoder: 16, 32, or 64, depending on the board design.
AL_EFbStorageMode eFBStorageMode
AL_FB_RASTER = 0
AL_FB_TILE_32x4 = 2
AL_FB_TILE_64x4 = 3
AL_AU_UNIT = 0
AL_VCL_NAL_UNIT = 1
AL_DPB_NORMAL = 0
AL_DPB_NO_REORDERING
[in] tYPlane Array of luma plane parameters (offset and pitch in bytes)
[in] tUVPlane Array of chroma plane parameters (offset and pitch in bytes)
Description
Creates source metadata.
Return
On success, returns a pointer to the metadata. In case of an allocation failure, the value returned is NULL.
Description
Gets the size of the luma region inside the picture.
Return
Returns size of the luma region.
Description
Gets the size of the chroma region inside the picture.
Return
Returns size of the chroma region.
Description
Pushes a buffer into the decoder queue. Generates an incoming work event. It is decoded as soon as possible. The uSize
argument indicates how many bytes of the buffer to decode. The eMode argument specifies whether or not to block. The
function pointer supplied in the AL_CB_EndDecoding is invoked with a pointer to the decoded frame buffer and a user
parameter. This and other function pointers are provided when the decoder object is created. Note the provided buffer pBuf
must not have AL_TCircMetaData associated with it.
See
AL_EBufMode, AL_CB_EndDecoding, AL_Decoder_Create
Description
Requests that the decoder flush the decoding request stack when stream parsing is finished.
Description
Returns the maximum bit depth supported for the current stream profile.
Return
Returns the pitch rounded up to the burst DMA alignment.
Return
Returns the height aligned up to the nearest 64-bit boundary.
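Both "Return" notes above describe power-of-two round-ups, which follow a single generic pattern; the helper name `align_up` and the example alignment values below (a 256-byte DMA burst, a 64-unit height boundary) are illustrative assumptions, not constants taken from the VCU API:

```c
#include <stdint.h>

/* Generic power-of-two round-up: 'align' must be a power of two.
 * The same pattern covers rounding a pitch up to the burst DMA
 * alignment and rounding a height up to a 64-unit boundary. */
static uint32_t align_up(uint32_t value, uint32_t align)
{
    return (value + align - 1u) & ~(align - 1u);
}
```

For example, a 1920-byte pitch aligned to 256-byte bursts becomes 2048, and a height of 1080 aligned to 64 becomes 1088.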
Description
Expresses the area in pixels defined by tDim in units of 2^uLCUSize × 2^uLCUSize blocks. The uLCUSize argument must be 4, 5, or 6, corresponding
to blocks of 16×16, 32×32, or 64×64, respectively.
Return
Returns the number of LCU in a frame.
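A minimal sketch of the computation described above, assuming partial blocks at the right and bottom edges are rounded up to whole LCUs; `lcu_count` is an illustrative name, not the VCU API function:

```c
#include <stdint.h>

/* Number of LCUs needed to cover a width x height area, given uLCUSize
 * in {4, 5, 6} for 16x16, 32x32, or 64x64 blocks. Partial blocks at the
 * frame edges still require a full LCU, hence the ceiling division. */
static uint32_t lcu_count(uint32_t width, uint32_t height, uint32_t uLCUSize)
{
    uint32_t blk  = 1u << uLCUSize;             /* LCU edge length in pixels */
    uint32_t cols = (width  + blk - 1u) / blk;  /* ceil(width  / blk) */
    uint32_t rows = (height + blk - 1u) / blk;  /* ceil(height / blk) */
    return cols * rows;
}
```

For a 1920×1080 frame with 32×32 LCUs (uLCUSize = 5), this gives 60 × 34 = 2040 LCUs; the 1080 rows do not divide evenly by 32, so the last row of LCUs is partial but still counted.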
Return
Returns the size of an HEVC compressed buffer (LCU header + MVDs + Residuals).
Return
Returns the size of an AVC compressed buffer (LCU header + MVDs + Residuals).
Returns maximum size in bytes of the compressed map buffer (LCU Offset and size).
Return
Returns the size in bytes needed for the co-located HEVC motion vector buffer.
Return
Returns the size (in bytes) needed for the co-located AVC motion vector buffer.
[in] bFrameBufferCompression The VCU hardware does not support any frame compression, so this condition
is always false as the output is raster only.
Return
Returns the size in bytes of the output frame buffer.
Return
Returns the size in bytes of a reference frame buffer.
Return
Returns maximum size in bytes needed to store a QP Ctrl Encoder parameter buffer (EP2).
Return
Returns maximum size in bytes needed for the YUV frame buffer.
Return
Returns size in bytes of the YUV Source frame buffer.
Return
Returns pitch value in bytes.
Return
Returns the Source frame buffer storage mode.
[in] pBuf Pointer to the decoded picture buffer for which to get the error status.
Description
Retrieves the error status related to a specific frame.
Return
Returns the frame error status.
Return
Returns the error code from the context structure in the decoder. Thread-safe.
1. Adjust the number of B-frames (Gop.NumB) according to the amount of motion, e.g., increased to 2 for static scenes or
video-conference-like content, or reduced to 0 for sequences with a lot of motion and/or high frame rates
2. The VBR rate control mode can improve the average quality when some parts of the sequence have lower complexity
and/or motion
3. For video conferencing or when random access is not needed, replace the IPP... GOP with the LOW_DELAY_P GOP and
optionally enable the GDR intra refresh
4. If there are many scene changes, enable the ScnChgResilience setting to reduce artifacts following scene change
transitions
5. If scene changes can be detected by the system, the encoder's scene change signaling API should be called instead (i.e.
with ScnChgResilience disabled) for the encoder to dynamically adapt the encoding parameters and GOP pattern. The
scene change information can be provided in a separate input file (CmdFile) when using the control software test
application.
★ Tip: To improve the video quality, consider using Scene Change Detect hardware.
6. If the highest PSNR figures are targeted instead of subjective quality, it is recommended to set QPCtrlMode =
UNIFORM_QP and ScalingList = FLAT.
11. Calculate the BD-rate using JCT-VC common test conditions evaluation metric.
12. If there is difference in PSNR between VCU and libx264/libx265, tune the following parameters to see impact:
Video streaming use cases require a very stable bitrate across all pictures.
Avoid periodic large intra pictures during the encoding session.
Low-latency rate control (hardware RC) is the preferred rate control for video streaming; it tries to maintain equal
frame sizes for all pictures.
Avoid periodic intra frames; instead, use low-delay P (IPPPPP…) with intra refresh enabled (gdr-mode=horizontal or vertical).
VBR is not a preferred mode for streaming.
Use eight or more slices for better AVC encoder performance.
The AVC standard does not support tile-mode processing, so macroblock rows are processed sequentially for entropy
coding.
Example Design
VCU Out of the Box Examples
The supported VCU out-of-the-box examples are listed below. The default desktop application shows icons for two VCU examples (4K AVC
Decode and 4K HEVC Decode) corresponding to the VCU Decode → Display use case; more details are given in
Example-1.
AMD PetaLinux board support package (BSP) is a Linux operating system running a sample user design.
1. Ensure the board is connected to Ethernet to download sample video content from the AMD web server.
2. If the board is connected to a private network, then export proxy settings in the /home/root/.bashrc file as follows:
If the board is not connected to the Internet, the compressed video files can be downloaded using a host machine. Use
the following commands to download the content on a host Linux machine, then copy the input files into the
/home/root/ folder on the board.
4. Download the AVC sample file:
wget petalinux.xilinx.com/sswreleases/video-files/bbb_sunflower_2160p_30fps_normal_avc.mp4
Download the HEVC sample file:
wget petalinux.xilinx.com/sswreleases/video-files/bbb_sunflower_2160p_30fps_normal_hevc.mkv
4K AVC Decode
For 4K AVC Decode, click the application icon to download a sample AVC/AAC encoded bitstream and run VCU Example-1
(Decode → Display). If the sample AVC content is already present in /home/root, then it decodes and displays the content.
Run the following command to play a YouTube video to ensure that the board's Ethernet connection is working:
vcu-demo-decode-display.sh -u "youtube-URL"
vcu-demo-decode-display.sh -h
1. Connect USB camera to the board (Verified Cameras: Logitech HD camera, C920).
2. Run the following command for USB video capture serial pipeline
vcu-demo-camera-encode-decode-display.sh -s 640x480
3. Run the following commands for USB video capture serial pipeline with audio for the applicable setting:
✎ Note: In the above example, which uses GStreamer auto plugins, GStreamer selects the API and devices used for
capture and playback automatically. Normally, it tries to use the default devices enumerated at boot by the PulseAudio
server or those named in the ALSA configuration file. It might therefore not choose the device you want, in
which case you can use the script arguments mentioned above.
✎ Note: For selecting the values of the arguments to pass to the -i and --audio-output options, refer to section 10.2.
1. Connect USB camera to the board (Verified Cameras: Logitech HD camera, C920).
2. Run the following command for USB video decode display pipeline:
3. Run the command for USB video decode pipeline with audio:
✎ Note: For a "How to" guide on selecting the values of the arguments to pass to the -i and --audio-output options, refer
to section 10.2.
vcu-demo-camera-decode-display.sh -a aac
‼ Important: Resolutions for Example 2 and 3 must be set based on USB Camera Capabilities.
To find capabilities, use v4l2-ctl --list-formats-ext. If v4l-utils is not installed in the pre-built image, install it using dnf
or rebuild the PetaLinux image including v4l-utils.
This example requires input sample files. If Example-1 is executed already, then sample videos are stored in the /home/root
folder.
1. Use the following command to transcode the sample AVC file into an HEVC file:
vcu-demo-transcode-to-file.sh -i /home/root/
bbb_sunflower_2160p_30fps_normal_avc.mp4 -c avc -a aac -o /home/root/transcode.mkv
2. Use Example-1 to view the transcoded file on the Display. The sample command is shown below:
Example-5: (Transcode → Stream out using Ethernet ... Streaming In → Decode → Display)
This example requires two boards. Board-1 is used for transcoding and streaming-out (as a server) and board-2 is used for
streaming-in and decoding purposes (as a client). The VLC player on the host machine can be used as a client instead of board-
2.
vcu-demo-streamin-decode-display.sh -c avc
✎ Note: This means the client is receiving an AVC bitstream from the server. Use -c hevc if the server is sending an HEVC bitstream.
3. If the VLC player is used as client, set the host machine IP address to 192.168.0.2 if it is connected to the board directly
with an Ethernet cable.
✎ Note: Setting up an IP address is not required if the boards are already connected to the same LAN.
4. Create a test.sdp file on the host with the following content (add a separate line in test.sdp for each of the following items)
and play test.sdp on host machine.
Troubleshooting the VLC player setup: IP4 is the client IP address. H264 or H265 is used based on the codec type received on the
client. Turn off the firewall on the host machine if packets are not received by VLC.
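A minimal test.sdp along those lines might look as follows; the IP address, port, and payload type below are placeholders that must match what the server-side script actually sends (use H265 in the rtpmap line when receiving HEVC):

```
v=0
o=- 0 0 IN IP4 192.168.0.2
s=VCU stream
c=IN IP4 192.168.0.2
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 H264/90000
```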
5. Set server IP and execute Transcode → stream-out example on board-1
✎ Note: Setting of IP is not required if the boards are already connected to the same LAN.
vcu-demo-transcode-to-streamout.sh -i /home/root/bbb_sunflower_2160p_30fps_normal_hevc.mkv -c hevc -b 5000 -a <Client Machine/Board IP address>
1. Connect a USB camera to the board (Verified cameras: Logitech HD camera C920).
2. Execute the following command for USB video recording.
An output file, camera_output.ts, is generated in the current directory; it contains the recorded audio/video in the MPEG-TS
container.
The file can be played on media players like VLC.
Example-7: (Camera Audio/Video → Stream out using Ethernet ... Streaming In → Decode → Display with Audio)
This example requires two boards: Board-1 is used for encoding live A/V feed from camera and streaming-out (as a server) and
board-2 is used for streaming-in and decoding purposes to play, as a client, the received audio/video stream.
1. Connect two boards back to back with an Ethernet cable or make sure both the boards are connected to the same Ethernet
hub.
2. Set the client IP address and execute stream-in → Decode example on board-2.
✎ Note: Setting of IP is not required if the boards are already connected to the same LAN.
✎ Note: This means the client is receiving an AVC bitstream from the server; use -c hevc if the server is sending an HEVC bitstream.
3. Set server IP and run Camera → Encode → stream-out on board-1.
These examples can also be run through the Jupyter notebook GUI utility. With PetaLinux projects released up to 2021.1, you
need to build the project to enable Jupyter using the steps below. From 2021.2 onward, these examples are included in
the pre-built WIC image, so you can use them directly without building the project:
petalinux-config -c rootfs
c. Enable the gstreamer-vcu-notebooks option under User Packages and then build the BSP
>> petalinux-build
file:///.local/share/jupyter/runtime/nbserver-2812-open.html
Or copy and paste one of these URLs:
http://172.19.0.23:8888/?token=f1897000565b0255786124d56318b4852f9062f768a877a7
e. Copy the generated URL to the remote host's web browser. For example: http://172.19.0.23:8888/?token=f1897000565b0255786124d56318b4852f9062f768a877a7
The host PC's browser tab should show the following VCU notebook examples:
The audio device name of the audio source (to provide with the -i option) and of the playback device (to provide with --audio-output) can
be found using the arecord and aplay utilities, respectively.
arecord -l
In this example, card number 0 is being used for playback device for display port channel 0 and device id is 0, so hw:0,0 can be
used for selecting display port channel 0 as a playback device using the --audio-output option.
aplay -l
alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo ...
Appendices
Codec Parameters for Different Use Cases
The codec parameters for different use cases are given in the following table:
QoS Configurations
Check the Read QoS, Write QoS, Read Commands Issuing Capability, and Write Commands Issuing Capability configuration of
HP ports that interface the VCU with the PS DDR.
✎ Note: For VCU traffic, the QoS should be set as Best Effort (BE) and outstanding transaction count for read/write commands
should be set to maximum, which is 0xF for all AXI HP ports.
The AXI QoS[3:0] lines define the following three traffic types, in decimal format, on the AXI bus.
IRQ Balancing
Various multimedia use cases involving video codecs, such as audio/video conferencing, video-on-demand, playback, and
record, also involve multiple other peripherals such as Ethernet, video capture pipeline IP including image
sensors and image signal processing engines, DMA engines, and display pipeline IP such as video mixers and HDMI
transmitters, which in turn use unique interrupt lines for communicating with the CPU.
In these scenarios, it becomes important to distribute the interrupt processing load across multiple CPU cores instead of
utilizing the same core for all the peripherals/IP. Distributing the IRQ across CPU cores optimizes the latency and performance
of the running use-case as the IRQ context switching and ISR handling load gets distributed across multiple CPU cores.
Each peripheral/IP is assigned a unique interrupt number by the Linux kernel. Whenever a peripheral or IP needs to signal
something to the CPU (like it has completed a task or detected something), it sends a hardware signal to the CPU and the kernel
retrieves the associated IRQ number and then calls the associated interrupt service routine. The IRQ numbers can be retrieved
using the following command. This command also lists the number of interrupts processed by each core, the interrupt type, and
comma-delimited list of drivers registered to receive that interrupt.
$cat /proc/interrupts
The Zynq UltraScale+ MPSoC has four CPU cores available. If running a plain PetaLinux image without any irqbalance daemon,
then by default all IRQ requests are processed by CPU 0 by the Linux scheduler. To assign a different CPU core to process a
particular IRQ number, the IRQ affinity for that particular interrupt needs to be changed. The IRQ affinity value defines which CPU
cores are allowed to process that particular IRQ. For more information, see https://www.kernel.org/doc/Documentation/IRQ-
affinity.txt.
By default, the IRQ affinity value for each peripheral is set to 0xf, which means that all four CPU cores are allowed to process the
interrupt, as shown in the following example using IRQ number 42.
$cat /proc/irq/42/smp_affinity
output: f
To restrict this IRQ to CPU core n, set a mask with only the nth bit set. For example, to route the IRQ to only CPU
core 2, set the mask to 0x4 (bit 2).
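The mask computation generalizes: for core n the mask is 1 shifted left by n, written in hex. A small sketch follows; the IRQ number 42 reuses the example above, and the write must be performed as root:

```shell
# Compute the smp_affinity mask that restricts an IRQ to one CPU core.
core=2                               # target CPU core (0-3 on Zynq UltraScale+)
mask=$(printf '%x' $((1 << core)))   # core 2 -> bit 2 -> mask 4 (0x4)
echo "mask for core ${core}: ${mask}"
# To apply the mask (as root):
#   echo "${mask}" > /proc/irq/42/smp_affinity
```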
The following section shows how IRQ balancing can be performed before running a multistream video conferencing use-case
that involves multiple peripherals and video IP.
As explained in Encoder Features > DMA_PROXY module usage, various DMA channels are used for constructing encoded
output, which in turn also utilize different interrupt lines as depicted by the zynqmp-dma blocks in the following figure.
As seen in the previous figure, all interrupt requests from different peripherals go to CPU 0 by default.
To distribute the interrupt requests across different CPU cores as shown in the following figure, follow these steps:
The numbers on the left are the IRQ numbers for the respective peripherals.
2. Assign CPU 1 to VCU IRQ with number 49.
By default, the interrupts for the video1 xilinx_framebuffer DMA engine and various other peripherals are already being
processed by CPU 0, so there is no need to modify their smp_affinity. Using the previous commands, the IRQs are
distributed as per the scheme mentioned in the previous figure. This can be verified by running the following command
while the use case is running and observing whether interrupts for the peripherals are going to the intended CPU
cores. A similar scheme of distributing interrupts can be followed for other use cases, depending on the peripherals
being used, system load, and intended performance.
$ cat /proc/interrupts
12: 42151036 0 0 0 GICv2 156 Level zynqmp-dma
13: 31494805 10644207 0 0 GICv2 157 Level zynqmp-dma
14: 31483922 0 10643127 0 GICv2 158 Level zynqmp-dma
15: 31518024 0 0 10595920 GICv2 159 Level zynqmp-dma
Upgrading
2020.2 VCU Ctrl-SW API Migration
This section provides a migration guide from the 2020.1 release to the 2020.2 release. It covers the modified and newly added VCU control
software APIs in the 2020.2 release. For more details, see:
Modified APIs
The following table contains a list of modified encoder and decoder control software APIs in the 2020.2 release.
Table: Modified Encoder and Decoder Control Software APIs and Variables
Old: bSplitInput is of bool type in AL_TDecSettings. It is true for the decoder in split input mode and false for unsplit input mode.
New: bSplitInput is renamed to eInputMode of enum type, and it can be assigned enum values such as AL_DEC_UNSPLIT_INPUT and AL_DEC_SPLIT_INPUT.
Reason: A simpler and clearer way to handle such settings.
Old: void AL_Decoder_SetParam(AL_HDecoder hDec, bool bConceal, bool bUseBoard, int iFrmID, int iNumFrm, bool bForceCleanBuffers, bool shouldPrintFrameDelimiter);
New: "bool bConceal" is removed. bool bUseBoard is replaced with const char* sPrefix, whose value can be "Fpga" or "Ref": void AL_Decoder_SetParam(AL_HDecoder hDec, const char* sPrefix, int iFrmID, int iNumFrm, bool bForceCleanBuffers, bool bShouldPrintFrameDelimiter);
Reason: Removed unused parameters and used informative text instead of a bool datatype to increase readability.
Old: The configuration file parser API MaxPictureSize() used to take values in Kb.
New: The CFG file API changes to MaxPictureSizeInBits (I, P, B), which takes values in bits instead of Kb.
Reason: To have better control.
The following table contains a list of newly added encoder and decoder control software APIs in the 2020.2 release.
iParsingID is added in the AL_CB_EndParsing callback - Retrieves the iParsingID. The iParsingID corresponds to the
id in the AL_HandleMetaData associated with the buffer.
int AL_Buffer_AllocateChunkNamed(AL_TBuffer* pBuf, size_t zSize, char const* name); - Allocates and binds a new memory chunk to the buffer
Add (char const* name), the name of the buffer, as an extra argument in AL_PixMapBuffer_Allocate_And_AddPlanes() - Debugs and tracks allocations
Add AL_TDynamicMeta_ST2094_10 dynamic SEI in the AL_THDRSEIs structure - Added support for new HDR-related SEIs
Add AL_TDynamicMeta_ST2094_40 dynamic SEI in the AL_THDRSEIs structure - Added support for new HDR-related SEIs
Add support for the newly added HDR-related SEIs in AL_Encoder_SetHDRSEIs() - The AL_Encoder_SetHDRSEIs function can be called dynamically to change the value of the ST2094_10 and ST2094_40 SEIs
Added AL_Plane_GetBufferPixelPlanes / AL_Plane_GetBufferPlanes - The planes required for a FourCC can be obtained with these APIs
Add new file lib_common/PicFormat.h - Lists the formats of the pictures, extracted from lib_common/SliceConsts.h
Add new file lib_common/CodecHook.h - Hooks the code to help user applications
This section provides a list of modified and newly added VCU control software APIs in the 2020.1 release in comparison to the 2019.2
release. For more details, see:
Modified APIs
The following table contains a list of modified encoder and decoder control software APIs in the 2020.1 release.
Table: Modified Encoder and Decoder Control Software APIs and Variables
Old: AL_SrcMetaData uses offsets to specify the chroma and the luma part of the buffer.
New: AL_TSrcMetaData becomes AL_TPixMapMetaData, which allows storing pixel planes on different chunks.
Reason: This simplifies the handling of the luma and chroma planes for the user. It enables, in a further release, support for separate buffers for the luma and for the chroma, relaxing the constraint on memory allocation and adding flexibility.
Old: The AL_TBuffer API does not allocate the memory itself and only provides easy access to it via its API. It wraps a memory buffer allocated using the AL_TAllocator API. The AL_TBuffer takes ownership of the memory buffer and frees it at destruction.
New: The AL_TBuffer API does not allocate the memory itself and only provides easy access to it via its API. It wraps memory buffers allocated using the AL_TAllocator API. The AL_TBuffer takes ownership of the memory buffers and frees them at destruction.
Reason: The AL_TBuffer API provides easy access to one or multiple memory chunks. Chunks of an AL_TBuffer are memory buffers that share the same lifetime, reference counting mechanism, and custom user information, through metadata.
New APIs
The following table contains a list of newly added encoder and decoder control software APIs in the 2020.1 release.
int AL_Buffer_AllocateChunk(AL_TBuffer* pBuf, size_t zSize) - Allocates and binds a new memory chunk to the buffer
static AL_INLINE int8_t AL_Buffer_GetChunkCount(const AL_TBuffer* pBuf) - Gets the number of chunks belonging to the buffer
static AL_INLINE bool AL_Buffer_HasChunk(const AL_TBuffer* pBuf, int8_t iChunkIdx) - Checks if a buffer has a chunk with a specific index
size_t AL_Buffer_GetSize(const AL_TBuffer* pBuf) - Gets the buffer size. If the buffer contains multiple chunks, returns the size of the first chunk
uint32_t AL_GetAllocSizeSrc_Y(AL_ESrcMode eSrcFmt, int iPitch, int iStrideHeight); - Retrieves the size of the luma component of a YUV frame buffer
uint32_t AL_GetAllocSizeSrc_UV(AL_ESrcMode eSrcFmt, int iPitch, int iStrideHeight, AL_EChromaMode eChromaMode) - Retrieves the size of the chroma component of a YUV frame buffer
bool AL_Encoder_SetFreqIDR(AL_HEncoder hEnc, int iFreqIDR) - Changes the IDR frequency. If the new frequency is shorter than the number of frames already encoded since the last IDR, an IDR is inserted as soon as possible. Otherwise, the next IDR is inserted when the new IDR frequency is reached
bool AL_Encoder_SetQPBounds(AL_HEncoder hEnc, int16_t iMinQP, int16_t iMaxQP) - Changes the bounds of the QP set by the rate control
bool AL_Encoder_SetQPIPDelta(AL_HEncoder hEnc, int16_t uIPDelta); - Changes the QP delta between I frames and P frames
bool AL_Encoder_SetQPPBDelta(AL_HEncoder hEnc, int16_t uPBDelta) - Changes the QP delta between P frames and B frames
Removed APIs
The following table contains a list of removed encoder and decoder control software APIs for the current release.
This section provides a migration guide from the 2019.1 release to the 2019.2 release. It covers the modified and newly added VCU control
software APIs in the 2019.2 release. For more details, see:
Modified APIs
The following table contains a list of modified encoder and decoder control software APIs in 2019.2 release.
Table: Modified Encoder and Decoder Control Software APIs and Variables
Old: #define XXX
New: static AL_INLINE XXX
Reason: Forces the type and reduces user errors.
Old: AL_QPCtrlMode
New: AL_EQpCtrlMode, which decides the QP control mode, and AL_EqpTableMode, which decides the table type
Reason: Allows many combinations and reduces user errors. Removes enums that should not appear in the API (i.e. BORDER_QP, ...).
Old: Old registers definition (lib_ip_ctrl)
New: New registers definition (lib_ip_ctrl)
Reason: More generic code; a large increase in flexibility that helps to add potential new features.
Old: int Rtos_DriverPoll(void* drv, int timeout)
New: int Rtos_DriverPoll(void* drv, int timeout, unsigned long flags)
Reason: Adds a flag to polling.
New APIs
The following table contains a list of newly added encoder and decoder control software APIs in the 2019.2 release.
bool AL_Encoder_SetHDRSEIs(AL_HEncoder hEnc, AL_THDRSEIs* pHDRSEIs)
    Specifies the HDR SEIs to insert in the bitstream.
This section provides a migration guide from the 2018.3 release to the 2019.1 release. It covers the modified and newly added VCU control software APIs for the 2019.1 release. For more details, see:
Modified APIs
The following table contains a list of modified encoder and decoder control software APIs in the 2019.1 release.
Table: Modified Encoder and Decoder Control Software APIs and Variables
Old: AL_SrcMetaData uses offsets to specify the luma and chroma parts of the buffer.
New: AL_SrcMetaData uses planes to specify the luma and chroma parts of the buffer.
Reason: This simplifies the handling of the luma and chroma planes. In future releases, this will support separate buffers for the luma and the chroma, relaxing the constraint on memory allocation and adding flexibility. This removes the AL_TOffsetYC and AL_TPitches structures, as the data is now held in AL_TPlane. It also makes the library easier for beginners to use, because the API resembles what is used in V4L2 or in GStreamer.
New APIs
The following table contains a list of newly added encoder and decoder control software APIs in the 2019.1 release.
Removed APIs
This table contains a list of removed encoder and decoder control software APIs for the current release.
Verification
Tests included:
Compliance Testing
All supported VCU encoder profile/level streams are decoded successfully using the JM/HM reference decoder, and the md5sum of the decoded output is matched against that of the VCU decoder.
Interoperability
VCU-encoded streams have been successfully decoded using the JM decoder. Third-party encoded streams (for example, the Argon HEVC test suite) have been successfully decoded using the VCU decoder.
XAVC
Verification
Tests included:
Encoder tests
Decoder tests
Encoding streams using the VCU encoder with the XAVC profile and providing them as input to the Sony format verifier to check compliance with the standard.
Decoding XAVC-standard .mxf files to validate decoder functionality.
Documentation Navigator
Documentation Navigator (DocNav) is an installed tool that provides access to AMD Adaptive Computing documents, videos,
and support resources, which you can filter and search to find information. To open DocNav:
From the AMD Vivado™ IDE, select Help > Documentation and Tutorials.
On Windows, click the Start button and select Xilinx Design Tools > DocNav.
At the Linux command prompt, enter docnav.
✎ Note: For more information on DocNav, refer to the Documentation Navigator User Guide (UG968).
Design Hubs
AMD Design Hubs provide links to documentation organized by design tasks and other topics, which you can use to learn key
concepts and address frequently asked questions. To access the Design Hubs:
Support Resources
For support resources such as Answers, Documentation, Downloads, and Forums, see Support.
References
These documents provide supplemental material useful with this product guide:
1. Vivado Design Suite User Guide: Designing IP Subsystems using IP Integrator (UG994)
2. Vivado Design Suite User Guide: Designing with IP (UG896)
3. Vivado Design Suite User Guide: Getting Started (UG910)
4. Vivado Design Suite User Guide: Logic Simulation (UG900)
5. Vivado Design Suite User Guide: Programming and Debugging (UG908)
6. Vivado Design Suite User Guide: I/O and Clock Planning (UG899)
7. Vivado Design Suite User Guide: Implementation (UG904)
8. Vivado Design Suite User Guide: Design Analysis and Closure Techniques (UG906)
9. AXI Interconnect LogiCORE IP Product Guide (PG059)
10. UltraScale Architecture-Based FPGAs Memory IP LogiCORE IP Product Guide (PG150)
11. UltraScale Architecture SelectIO Resources User Guide (UG571)
12. Zynq UltraScale+ MPSoC Production Errata (EN285)
13. Zynq UltraScale+ Device Technical Reference Manual (UG1085)
14. Zynq UltraScale+ Device Register Reference (UG1087)
15. Zynq UltraScale+ MPSoC Data Sheet: Overview (DS891)
Training Resources
Revision History
The following table shows the revision history for this document.
Software Prerequisites
    Updated the source links in the Application Software table.
Port Connection Recommendations for Different Use Cases with VCU DDR Memory Controller
    Added a new section.
Application Software Development
    Added the HLG Support topic in the VCU Encoder Features section.
H.264/H.265 Video Codec Unit v1.2
    Updated the Customizing and Generating the Core, Encoder Buffer, and Enabling PL-DDR for VCU sections.
Zynq UltraScale+ EV Architecture Video Codec Unit DDR4 LogiCORE IP v1.1
    Updated the Resource Utilization, Designing with the Core, and Customizing the VCU DDR4 Controller sections.
VCU Sync IP v1.0
    Updated the Resource Utilization, Port Descriptions, and Sync IP Software Programming Model sections.
General updates
    The content in this document was reorganized into the following sections:
Latency in the VCU Pipeline
    Added Xilinx Low Latency Limitations, Encoder and Decoder Latencies with Xilinx Low Latency Mode, and Recommended Parameters for Xilinx Low-Latency Mode. Updated pipelines in Usage and Latency.
VCU Encoder Features
    Added new features: Dynamic Resolution Change at VCU Encoder/Decoder, Frame skip support for VCU Encoder, 32-streams support, DCI-4K Encode/Decode, Temporal-ID
Encoder Block, Decoder Block
    Added encoder and decoder block diagrams.
VCU Encoder Features
    Added new features: GDR Intra Refresh, Long Term Ref. Picture, Adaptive GOP, Insertion of SEI data at Encoder, SEI Decoder API, Dual pass Encoding, Scene change detection, Interlaced video.
The information presented in this document is for informational purposes only and may contain technical inaccuracies,
omissions, and typographical errors. The information contained herein is subject to change and may be rendered inaccurate for
many reasons, including but not limited to product and roadmap changes, component and motherboard version changes, new
model and/or product releases, product differences between differing manufacturers, software changes, BIOS flashes, firmware
upgrades, or the like. Any computer system has risks of security vulnerabilities that cannot be completely prevented or
mitigated. AMD assumes no obligation to update or otherwise correct or revise this information. However, AMD reserves the
right to revise this information and to make changes from time to time to the content hereof without obligation of AMD to notify
any person of such revisions or changes. THIS INFORMATION IS PROVIDED "AS IS." AMD MAKES NO REPRESENTATIONS OR
WARRANTIES WITH RESPECT TO THE CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY INACCURACIES,
ERRORS, OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION. AMD SPECIFICALLY DISCLAIMS ANY IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL
AMD BE LIABLE TO ANY PERSON FOR ANY RELIANCE, DIRECT, INDIRECT, SPECIAL, OR OTHER CONSEQUENTIAL DAMAGES
ARISING FROM THE USE OF ANY INFORMATION CONTAINED HEREIN, EVEN IF AMD IS EXPRESSLY ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
Copyright
© Copyright 2017-2024 Advanced Micro Devices, Inc. AMD, the AMD Arrow logo, Versal, Vitis, Vivado, Zynq, and combinations
thereof are trademarks of Advanced Micro Devices, Inc. The DisplayPort Icon is a trademark of the Video Electronics Standards
Association, registered in the U.S. and other countries. HDMI, HDMI logo, and High-Definition Multimedia Interface are
trademarks of HDMI Licensing LLC. AMBA, AMBA Designer, Arm, ARM1176JZ-S, CoreSight, Cortex, PrimeCell, Mali, and MPCore
are trademarks of Arm Limited in the US and/or elsewhere. Other product names used in this publication are for identification
purposes only and may be trademarks of their respective companies.