
Dynamic Multi-Stream Pipeline with Add/Remove and Per-Stream RTSP Outputs


Overview
To achieve a dynamic DeepStream pipeline that can add or remove sources at runtime and give each source
its own RTSP output, you can combine the runtime source addition pattern (from NVIDIA’s
add_del_bin.py sample) with the use of nvstreammux (for batching) and nvstreamdemux (for
splitting outputs). The high-level idea is:

• Build a base pipeline with nvstreammux (to batch multiple inputs for processing) and downstream
processing elements (e.g. nvinfer for inference).
• After processing, use nvstreamdemux to split the batched stream back into individual streams.
• For each stream, attach a unique branch with an encoder and RTSP streaming sink.
• Use GstRtspServer to serve each branch on a unique RTSP mount point (e.g. /ds-test0, /ds-test1, etc.).
• Start the pipeline once (initially with no or minimal sources), then dynamically add or remove source
bins and corresponding output branches as needed (the pipeline stays running while sources come
and go).

This approach is confirmed by NVIDIA: “You can use nvstreamdemux to demux multiple sources and use different … RtspServer” 1 and by the DeepStream reference apps, which demonstrate dynamic source management 2. Below is a detailed design and code pattern.

Pipeline Design with nvstreammux and nvstreamdemux


1. Base Pipeline Construction: Set up the static parts of the pipeline in advance (a minimal sketch follows this list):
• Create an nvstreammux (batch muxer) element with a sufficiently large batch-size (the maximum number of streams you expect) 3.
• Add downstream analytics elements (e.g., nvinfer for primary inference, trackers, etc.). These process frames in batched form.
• After analytics, add an nvstreamdemux element. This plugin outputs each original stream on a separate pad, allowing independent downstream handling of each video 4.

Note: nvstreamdemux outputs raw video frames (NV12/RGBA) on each src_%u pad, corresponding to each input stream 5. This means each branch must convert and encode the video before streaming. Do not link nvstreamdemux directly to an RTSP payloader without encoding, as the caps won’t be compatible 5.

• Set up a GMainLoop (GLib main loop) for the pipeline and RTSP server to run in.
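For concreteness, here is a minimal sketch of this static skeleton in Python (PyGObject). Property values such as the resolution, batched-push-timeout, the config-file path, and the MAX_STREAMS bound are illustrative assumptions, not values mandated by the samples:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

MAX_STREAMS = 8  # assumed upper bound on simultaneous sources

pipeline = Gst.Pipeline.new("dynamic-pipeline")

# Batch muxer sized for the maximum number of streams we expect
streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("batch-size", MAX_STREAMS)
streammux.set_property("width", 1280)
streammux.set_property("height", 720)
streammux.set_property("batched-push-timeout", 33000)
streammux.set_property("live-source", True)

# Primary inference runs on the batched frames
pgie = Gst.ElementFactory.make("nvinfer", "primary-infer")
pgie.set_property("config-file-path", "pgie_config.txt")  # assumed config path

# Demuxer splits the batch back into per-source streams
demux = Gst.ElementFactory.make("nvstreamdemux", "demux")

for elem in (streammux, pgie, demux):
    pipeline.add(elem)
streammux.link(pgie)
pgie.link(demux)

loop = GLib.MainLoop()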

2. Source and Sink Bins: Prepare GStreamer bins or component lists for the dynamic parts:

• Source Bin: As in add_del_bin.py, create a source bin that contains a uridecodebin for an input URI (e.g., an RTSP camera). The bin should expose a ghost (source) pad that connects into the nvstreammux. On uridecodebin’s pad-added signal, link the decodebin’s output pad to a requested sink pad on nvstreammux 6. Each source bin gets a unique ID (used for pad names); for example, use pad name "sink_%u" on the streammux (0, 1, 2, …) matching the source ID.

• Output Branch: For each possible output stream (matching a source ID), set up an encoding and streaming chain. Typically this includes:
• nvvideoconvert (to convert NV12/RGBA to a format the encoder accepts),
• (optional) nvdsosd if you want to draw OSD per stream (or place OSD before the demux for batched OSD),
• nvv4l2h264enc (or the H265 encoder) to compress the video,
• rtph264pay (RTP payloader for H264), and
• a sink to feed the RTSP server; we will use an appsink/udpsink here to hand off data to the RTSP server.

NVIDIA suggests a similar chain: “create for each new input stream nvvideoconvert -> nvdsosd -> ... -> nvv4l2h264enc encoder -> rtph264pay -> udpsink” 7. This produces an RTP stream for each source that we can route to an RTSP server. Each branch can be encapsulated in a bin with a ghost sink pad that attaches to nvstreamdemux’s src_%u pad, as in the sketch below.
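A sketch of such a per-stream branch, wrapped in a bin with a ghost sink pad. The element names and the 5400 + source_id loopback-port convention are assumptions for this example, and on some platforms a capsfilter between nvvideoconvert and the encoder may also be needed:

def create_output_bin(source_id):
    # Encode-and-stream branch for one demuxed stream
    out_bin = Gst.Bin.new(f"output-bin-{source_id}")

    conv = Gst.ElementFactory.make("nvvideoconvert", f"conv-{source_id}")
    enc = Gst.ElementFactory.make("nvv4l2h264enc", f"enc-{source_id}")
    pay = Gst.ElementFactory.make("rtph264pay", f"pay-{source_id}")
    sink = Gst.ElementFactory.make("udpsink", f"udpsink-{source_id}")
    sink.set_property("host", "127.0.0.1")
    sink.set_property("port", 5400 + source_id)  # unique loopback port per stream
    sink.set_property("sync", False)
    sink.set_property("async", False)

    for e in (conv, enc, pay, sink):
        out_bin.add(e)
    conv.link(enc)
    enc.link(pay)
    pay.link(sink)

    # Ghost pad so the bin's "sink" can attach to nvstreamdemux's src_%u pad
    out_bin.add_pad(Gst.GhostPad.new("sink", conv.get_static_pad("sink")))
    return out_bin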

3. Pre-allocating Demux Pads: It is recommended to request output pads on nvstreamdemux ahead of time (especially on older DeepStream versions, where dynamic pad requests on the demux are limited). For example, if you expect up to N simultaneous streams, call nvstreamdemux.get_request_pad("src_0") through get_request_pad(f"src_{N-1}") at startup to create N output pads. You can link each pad to an encoder/output bin (or leave it unlinked until needed). This way, when you add a new source, the corresponding demux pad (by index) already exists and is ready to use 8. In newer DeepStream releases, dynamic pad creation on nvstreamdemux is supported, but using a fixed maximum and reusing pads is a safe approach.
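A short sketch of this pre-allocation, reusing MAX_STREAMS and demux from the base-pipeline sketch above:

# Request all demux output pads up front so they exist before any source arrives
demux_pads = {}
for i in range(MAX_STREAMS):
    demux_pads[i] = demux.get_request_pad(f"src_{i}")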

Dynamic Source Addition & Removal


Adding a Source at Runtime:
1. Create Source Bin: When a new camera/stream needs to be added, call your create_uridecode_bin(source_id, uri) function (similar to the reference app 6). This creates a new uridecodebin and sets its “uri” property. Connect its pad-added signal to a callback that links the new pad to the nvstreammux sink pad for this source_id. For example:

def cb_newpad(decodebin, pad, data):
    # 'data' carries the source_id passed when connecting the signal
    # (the reference app also checks the pad's caps before linking)
    source_id = data
    mux_sink_pad = streammux.get_request_pad(f"sink_{source_id}")
    pad.link(mux_sink_pad)

Add the new source bin to the pipeline (pipeline.add(source_bin)) and set it to PAUSED or PLAYING state. The decodebin will negotiate and link to the streammux. The pipeline (which is already running) will start processing this new stream without restarting. A sketch of create_uridecode_bin follows.
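This is a minimal sketch of create_uridecode_bin, modeled on the reference app’s pattern. The sample wraps the decoder in a bin with a ghost pad; the direct-element variant below is a simplification:

def create_uridecode_bin(source_id, uri):
    # uridecodebin decodes any URI; its output pad appears only at runtime,
    # so linking to nvstreammux happens in the pad-added callback above
    source_bin = Gst.ElementFactory.make("uridecodebin", f"source-bin-{source_id}")
    source_bin.set_property("uri", uri)
    source_bin.connect("pad-added", cb_newpad, source_id)
    return source_bin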

2. Create/Enable Output Branch: Simultaneously, set up the corresponding output branch for the new stream. If you pre-linked an nvstreamdemux src_%u pad to an encoder/payloader bin for this ID, you may just need to set that bin to PLAYING. If not pre-linked, obtain the demux pad dynamically:

demux_src = nvstreamdemux.get_request_pad(f"src_{source_id}")
pipeline.add(output_bin)               # add the bin before linking its pads
demux_src.link(output_bin.get_static_pad("sink"))
output_bin.sync_state_with_parent()    # bring it to PLAYING

This attaches the new demux output to the encoder+payloader chain for that stream. From now on,
frames from the new source will flow through the mux -> infer -> demux -> encoder -> payloader.

3. Register RTSP Stream: If not already done, create an RTSP media factory for this stream on the GstRtspServer. There are two ways to feed the encoded stream to the RTSP server:

• Via UDP (simpler): Use a udpsink in the output branch to loop the RTP stream back to localhost (on a unique port per stream). Then set up an RTSP media factory whose launch pipeline is a udpsrc that receives this RTP and serves it. For example:

factory = GstRtspServer.RTSPMediaFactory.new()
launch_pipe = (f"( udpsrc port={5400 + source_id} "
               f"caps=\"application/x-rtp, media=video, clock-rate=90000, "
               f"encoding-name=H264, payload=96\" "
               f"! rtph264depay ! rtph264pay name=pay0 pt=96 )")
factory.set_launch(launch_pipe)
factory.set_shared(True)
rtsp_server.get_mount_points().add_factory(f"/ds-test{source_id}", factory)

Here the main pipeline’s udpsink sends RTP packets to the stream’s loopback port (5400 for source 0), and the RTSP server pipeline picks them up, depayloads, and re-payloads them for clients. (You could also send raw H264 over UDP and have the factory do h264parse ! rtph264pay instead, to avoid the double RTP processing.)

• Via Appsink/Appsrc: Alternatively, use an appsink at the end of the output branch to collect encoded frames in the application, and push them into an RTSP server pipeline via an appsrc. This approach avoids UDP but is more involved in Python: you would create a custom GstRTSPMediaFactory subclass, or use signals to push buffers from the appsink to an appsrc in the RTSP media pipeline. This can work, but the UDP method above is usually easier to implement and sufficiently performant for moderate stream counts.
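If you do go the appsink/appsrc route, the hand-off itself is only a few lines. This hedged sketch assumes appsink and appsrc have already been created in the output branch and in the RTSP media pipeline, respectively:

# Forward each encoded sample from the pipeline's appsink to the RTSP appsrc
def on_new_sample(appsink, appsrc):
    sample = appsink.emit("pull-sample")
    if sample is not None:
        appsrc.emit("push-buffer", sample.get_buffer())
    return Gst.FlowReturn.OK

appsink.set_property("emit-signals", True)
appsink.connect("new-sample", on_new_sample, appsrc)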

4. Client Access: Once the new factory is attached and the RTSP server is running (don’t forget to call server.attach(None) to start the service), clients can connect to rtsp://<host>:8554/ds-test<id> to view the stream. Each source has its own endpoint.
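The server itself is created once, before any factory is added; a minimal sketch:

# One-time RTSP server setup (runs in the GLib main loop)
rtsp_server = GstRtspServer.RTSPServer.new()
rtsp_server.set_service("8554")
rtsp_server.attach(None)  # attach to the default main context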

Removing a Source at Runtime:

1. Unmount or stop the RTSP factory for that stream (optional, but recommended to prevent clients from hanging).
2. Send an EOS event to the source bin, or simply set the source bin to NULL state. In the DeepStream sample, the source is marked as to-be-deleted and EOS is handled via bus messages. Once the source bin is stopped, unlink it and release its resources:
• Release the nvstreammux sink pad (streammux.release_request_pad(sink_pad) for that pad) 9 10.
• Remove the source bin from the pipeline (pipeline.remove(source_bin)).
• Likewise, set the output encoder/payloader bin to NULL and remove it, and release the nvstreamdemux src pad if needed (some implementations keep it for reuse).
3. Adjust any bookkeeping (e.g., a list of active source IDs) and let the pipeline continue. The other streams continue unaffected. If no sources remain, you can quit the pipeline’s loop gracefully 11. A sketch of this removal sequence follows.
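A sketch of the removal sequence, assuming active_sources and output_bins are bookkeeping dicts maintained by the application:

def remove_source(source_id):
    # Stop the source first so no more buffers enter the muxer
    source_bin = active_sources.pop(source_id)
    source_bin.set_state(Gst.State.NULL)  # or send EOS and wait on the bus

    # Flush and release the muxer's request pad (as in the reference sample)
    sink_pad = streammux.get_static_pad(f"sink_{source_id}")
    if sink_pad is not None:
        sink_pad.send_event(Gst.Event.new_flush_stop(False))
        streammux.release_request_pad(sink_pad)
    pipeline.remove(source_bin)

    # Disable the output branch; the demux src pad is kept for reuse
    output_bins[source_id].set_state(Gst.State.NULL)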

Putting It All Together (Design Summary)


• Initial Setup: Initialize the pipeline with nvstreammux, downstream processing, and nvstreamdemux. Pre-create output bins for each potential stream (or create them on the fly) and set up a GstRtspServer with mount points ready. Start the pipeline (PLAYING state) even if no source is connected; if there are no sources initially, set nvstreammux num-surfaces-per-frame=1 and perhaps feed it a blank source to keep it alive 12 (or start with one dummy source, as some versions required at least one live source).
• Add Stream: When a new stream needs to be added, create and add the source bin, link it to the streammux, activate the corresponding demux output branch, and attach the RTSP factory (see the orchestration sketch after this list). The pipeline will begin emitting that stream’s data to its RTSP URL.
• Remove Stream: When a stream ends or needs removal, stop its source bin, unlink and release the mux and demux pads, remove its elements, and remove or disable its RTSP output. The rest of the pipeline continues running the other streams without interruption 13 2.
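Tying the pieces together, a hedged orchestration sketch using the helpers sketched above (attach_rtsp_factory is an assumed wrapper around the media-factory snippet shown earlier):

def add_stream(source_id, uri):
    # Source side: decode bin linked to streammux via the pad-added callback
    source_bin = create_uridecode_bin(source_id, uri)
    pipeline.add(source_bin)
    active_sources[source_id] = source_bin

    # Output side: encoder/payloader bin attached to the pre-allocated demux pad
    output_bin = create_output_bin(source_id)
    pipeline.add(output_bin)
    demux_pads[source_id].link(output_bin.get_static_pad("sink"))
    output_bins[source_id] = output_bin

    attach_rtsp_factory(source_id)  # assumed helper: udpsrc-based media factory
    output_bin.sync_state_with_parent()
    source_bin.sync_state_with_parent()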

This architecture allows you to dynamically manage multiple video sources and serve each as a separate RTSP stream. It follows the proven pattern from NVIDIA’s runtime source add/delete sample (for dynamic pipeline control) and uses nvstreamdemux to get one output per input 1. By carefully handling GStreamer pad linking and states, you can add or drop streams on the fly without restarting your application.

References and Notes


• NVIDIA’s runtime_source_add_delete reference apps (C and Python) are a great starting point for the dynamic add/remove logic 2. The Python sample (deepstream_rt_src_add_del.py and related bin code) shows how to construct source bins and handle pad linking to nvstreammux.
• The DeepStream demux multi-in multi-out sample demonstrates using nvstreamdemux to route each input stream to a separate output sink. (In that sample, outputs go to distinct windows/files; here we route to RTSP endpoints.) 1

• When building the pipeline, ensure caps and memory types are handled correctly. For example, when linking decodebin to streammux, set streammux.set_property('live-source', True) for live streams and ensure the decodebin outputs NVMM memory (sometimes requiring a tee + nvvidconv). The DeepStream sample’s cb_newpad handles this negotiation.
• DeepStream 7.1 introduced nvmultiurisrcbin, which can manage multiple URI sources via a REST API (add/remove) 14 15. However, using it would still require a demux or separate pipelines to achieve individual RTSP outputs. The approach above sticks to explicit mux/demux control, which works fine.
• Don’t forget to handle GStreamer’s threading and main loop. The GstRtspServer runs in the main loop; make sure your program doesn’t exit, and integrate bus-message handling (watch for EOS or errors on dynamic pads), as sketched below.
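A minimal sketch of that wiring, assuming a bus_call handler along the lines of the one shipped with the DeepStream Python samples:

# Watch the bus so dynamic EOS and errors are observed, then run the loop
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)  # bus_call: assumed handler, as in the samples
pipeline.set_state(Gst.State.PLAYING)
loop.run()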

Using this design, you can add and remove camera streams on the fly, and each will be available at a unique rtsp:// URL (like /ds-test0, /ds-test1, etc.) without restarting the pipeline. This approach has been validated in practice by NVIDIA and the developer community for building flexible multi-camera DeepStream applications 13 16. Good luck with your implementation!

1 2 13 16  How to output multiple rtsp streams based on deepstream-rtsp-in-rtsp-out - DeepStream SDK - NVIDIA Developer Forums
https://forums.developer.nvidia.com/t/how-to-output-multiple-rtsp-streams-based-on-deepstream-rtsp-in-rtsp-out/221190

3 6  Managing Video Streams in Runtime with the NVIDIA DeepStream SDK - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/2022/02/managing-video-streams-in-runtime-with-the-nvidia-deepstream-sdk/

4 8 9 10 11  Building a dynamic DeepStream pipeline: adding and removing RTSP sources at runtime - CSDN Blog
https://blog.csdn.net/weixin_45941990/article/details/126096872

5  Unable to link Nvstreamdemux to rtph264pay element using PyGObject - Stack Overflow
https://stackoverflow.com/questions/65495906/unable-to-link-nvstreamdemux-to-rtph264pay-element-using-pygobject

7 14 15  How to Dynamically Add nvstreammux/nvstreamdemux Sources and Sinks After Pipeline Start in DeepStream 7.1? - DeepStream SDK - NVIDIA Developer Forums
https://forums.developer.nvidia.com/t/how-to-dynamically-add-nvstreammux-nvstreamdemux-sources-and-sinks-after-pipeline-start-in-deepstream-7-1/333812

12  Problem in adding first source and removing last source during runtime - DeepStream SDK - NVIDIA Developer Forums
https://forums.developer.nvidia.com/t/problem-in-adding-first-source-and-removing-last-source-during-runtime/261888
