Dynamic Multi-Stream Pipeline With Add/Remove and Per-Stream RTSP Outputs
• Build a base pipeline with nvstreammux (to batch multiple inputs for processing) and downstream processing elements (e.g. nvinfer for inference).
• After processing, use nvstreamdemux to split the batched stream back into individual streams.
• For each stream, attach a unique branch with an encoder and RTSP streaming sink.
• Use GstRtspServer to serve each branch on a unique RTSP mount point (e.g. /ds-test0, /ds-test1, etc.).
• Start the pipeline once (initially with no or minimal sources), then dynamically add or remove source bins and corresponding output branches as needed (the pipeline stays running while sources come and go).
This approach is confirmed by NVIDIA: “You can use nvstreamdemux to demux multiple sources and use different … RtspServer” [1], and by the DeepStream reference apps, which demonstrate dynamic source management [2]. Below is a detailed design and code pattern.
Note: nvstreamdemux outputs raw video frames (NV12/RGBA) on each src_%u pad, corresponding to each input stream [5]. This means each branch must convert and encode the video before streaming. Do not link nvstreamdemux directly to an RTSP payloader without encoding; the payloader expects encoded H.264, not raw video, so the caps won't be compatible [5].
1. Base Pipeline and Main Loop: Build the static part of the pipeline described above (streammux, inference, demux) and set up a GMainLoop (GLib main loop) for the pipeline and RTSP server to run in. A minimal skeleton is sketched below.
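A minimal skeleton of this setup might look like the following (the element names, batch size, and pgie config path are illustrative assumptions):

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GLib, GstRtspServer

Gst.init(None)

pipeline = Gst.Pipeline.new("dynamic-pipeline")
streammux = Gst.ElementFactory.make("nvstreammux", "muxer")
streammux.set_property("batch-size", 8)   # assumed maximum stream count
streammux.set_property("width", 1280)
streammux.set_property("height", 720)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "pgie_config.txt")  # assumed config file
nvstreamdemux = Gst.ElementFactory.make("nvstreamdemux", "demuxer")

for elem in (streammux, pgie, nvstreamdemux):
    pipeline.add(elem)
streammux.link(pgie)
pgie.link(nvstreamdemux)

rtsp_server = GstRtspServer.RTSPServer.new()
rtsp_server.props.service = "8554"
rtsp_server.attach(None)  # serve on the default main context

loop = GLib.MainLoop()
# Later: pipeline.set_state(Gst.State.PLAYING); loop.run()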
2. Source and Sink Bins: Prepare GStreamer bins or component lists for the dynamic parts:
• Source Bin: Similar to add_del_bin.py, create a source bin that contains a uridecodebin for an input URI (e.g., an RTSP camera). The bin should expose a ghost (source) pad that will connect into the nvstreammux. In uridecodebin's pad-added handler, link the decodebin's output pad to a requested sink pad on nvstreammux [6]. Each source bin gets a unique ID (used for pad names); for example, use pad name "sink_%u" on the streammux (0, 1, 2, …) matching the source ID. A sketch is shown below.
• Output Branch: For each possible output stream (matching a source ID), set up an encoding and streaming chain. Typically this includes:
• nvvideoconvert (to convert NV12/RGBA to a format the encoder accepts),
• (optional) nvdsosd if you want to draw OSD per stream (or you can put OSD before the demux for batched OSD),
• nvv4l2h264enc (or the H.265 encoder) to compress the video,
• rtph264pay (the RTP payloader for H.264), and
• a sink to feed the RTSP server; we will use a udpsink or appsink here to hand off data to the RTSP server.
NVIDIA suggests a similar chain: “create for each new input stream nvvideoconvert -> nvdsosd -> ... -> nvv4l2h264enc encoder -> rtph264pay -> udpsink” [7]. This produces an RTP stream for each source that we can route to an RTSP server. Each branch can be encapsulated in a bin with a ghost sink pad that attaches to nvstreamdemux's src_%u pad, as sketched below.
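A minimal sketch of such an output-branch bin (the helper name, loopback host, and per-stream port scheme are assumptions):

def create_output_bin(source_id, udp_port):
    # Encode-and-payload branch for one demuxed stream.
    output_bin = Gst.Bin.new(f"output-bin-{source_id}")
    conv = Gst.ElementFactory.make("nvvideoconvert", f"conv-{source_id}")
    enc = Gst.ElementFactory.make("nvv4l2h264enc", f"enc-{source_id}")
    pay = Gst.ElementFactory.make("rtph264pay", f"pay-{source_id}")
    sink = Gst.ElementFactory.make("udpsink", f"udpsink-{source_id}")
    sink.set_property("host", "127.0.0.1")  # loop back to the local RTSP server
    sink.set_property("port", udp_port)     # unique port per stream
    sink.set_property("sync", False)
    for elem in (conv, enc, pay, sink):
        output_bin.add(elem)
    conv.link(enc)
    enc.link(pay)
    pay.link(sink)
    # Ghost sink pad that attaches to nvstreamdemux's src_%u pad.
    output_bin.add_pad(Gst.GhostPad.new("sink", conv.get_static_pad("sink")))
    return output_bin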
3. Pre-allocating Demux Pads: It's recommended to request output pads on nvstreamdemux ahead of time (especially on older DeepStream versions, where dynamic pad requests on the demux are limited). For example, if you expect up to N simultaneous streams, call nvstreamdemux.get_request_pad("src_0") through get_request_pad(f"src_{N-1}") at startup to create N output pads. You can link each pad to an encoder/output bin (or leave it unlinked until needed). This way, when you add a new source, the corresponding demux pad (by index) already exists and is ready to use [8]. In newer DeepStream releases, dynamic pad creation on nvstreamdemux is supported, but using a fixed maximum and reusing pads is a safe approach.
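A minimal sketch of pre-allocating a fixed pad pool (MAX_STREAMS is an assumed application constant):

MAX_STREAMS = 8  # assumed upper bound on simultaneous streams
demux_pads = []
for i in range(MAX_STREAMS):
    # Request pads before the pipeline goes to PLAYING; each can stay
    # unlinked until a source with this index is added.
    demux_pads.append(nvstreamdemux.get_request_pad(f"src_{i}"))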
4. Adding a Source at Runtime: When a new stream is requested, create a source bin for its URI (as in step 2). Add the new source bin to the pipeline (pipeline.add(source_bin)) and set it to PAUSED or PLAYING state. The decodebin will negotiate and link to the streammux, and the pipeline (which is already running) will start processing the new stream without restarting. A combined sketch follows.
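Putting the pieces together, an add-source handler might look like this (helper names follow the earlier sketches):

def add_source(uri, source_id):
    source_bin = create_source_bin(source_id, uri)
    pipeline.add(source_bin)
    # Connect the bin's ghost pad to a requested streammux sink pad;
    # data flows once cb_newpad sets the ghost pad's target.
    sinkpad = streammux.get_request_pad(f"sink_{source_id}")
    source_bin.get_static_pad("src").link(sinkpad)
    source_bin.sync_state_with_parent()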
1. Create/Enable Output Branch: Simultaneously, set up the corresponding output branch for the
new stream. If you pre-linked an nvstreamdemux.src_%u pad to an encoder/payloader bin for
this ID, you may just need to set that bin to PLAYING. If not pre-linked, obtain the demux pad
dynamically:
demux_src = nvstreamdemux.get_request_pad(f"src_{source_id}")
pipeline.add(output_bin)                          # add the branch before linking
demux_src.link(output_bin.get_static_pad("sink"))
output_bin.sync_state_with_parent()               # bring the branch to PLAYING
This attaches the new demux output to the encoder+payloader chain for that stream. From now on,
frames from the new source will flow through the mux -> infer -> demux -> encoder -> payloader.
2. Register RTSP Stream: If not already done, create an RTSP media factory for this stream on the
GstRtspServer. There are two ways to feed the encoded stream to the RTSP server:
• Via UDP (simpler): Use a udpsink in the output branch to loop the RTP stream back to localhost (on a unique port per stream). Then set up an RTSP media factory launching a udpsrc that receives this RTP and serves it. For example:
udp_port = 5400 + source_id  # must match the udpsink port for this stream
factory = GstRtspServer.RTSPMediaFactory.new()
factory.set_launch(
    f'( udpsrc port={udp_port} caps="application/x-rtp, media=video, '
    f'clock-rate=90000, encoding-name=H264, payload=96" '
    f'! rtph264depay ! rtph264pay name=pay0 pt=96 )'
)
factory.set_shared(True)
rtsp_server.get_mount_points().add_factory(f"/ds-test{source_id}", factory)
Here the main pipeline's udpsink sends RTP packets to the per-stream port (5400 for source 0), and the RTSP server pipeline picks them up, depayloads, and re-payloads for clients. (You could also send raw H.264 over UDP and have the factory do h264parse ! rtph264pay instead, to avoid double RTP processing.)
• Via Appsink/Appsrc: Alternatively, use an appsink at the end of the output branch to collect encoded frames in the application, and push them into an RTSP server pipeline via an appsrc. This approach avoids UDP but is more involved in Python: you'd create a custom GstRTSPMediaFactory subclass, or use signals to push buffers from the appsink to an appsrc in the RTSP media pipeline. This can work, but the UDP method above is often easier to implement and sufficiently performant for moderate stream counts. A rough sketch follows.
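For completeness, a rough sketch of the appsink/appsrc route (the class and wiring are illustrative; the appsink must be created with emit-signals=true, and timestamp rebasing is elided):

class PerStreamFactory(GstRtspServer.RTSPMediaFactory):
    # Bridges one pipeline appsink to the appsrc of each RTSP media.
    def __init__(self, appsink):
        super().__init__()
        self.appsink = appsink
        self.set_launch(
            '( appsrc name=src is-live=true format=time '
            'caps="video/x-h264,stream-format=byte-stream,alignment=au" '
            '! h264parse ! rtph264pay name=pay0 pt=96 )')
        self.connect("media-configure", self.on_media_configure)

    def on_media_configure(self, factory, media):
        appsrc = media.get_element().get_child_by_name("src")
        self.appsink.connect("new-sample", self.on_new_sample, appsrc)

    def on_new_sample(self, appsink, appsrc):
        sample = appsink.emit("pull-sample")
        # Production code must rebase buffer timestamps for the new pipeline.
        appsrc.emit("push-buffer", sample.get_buffer())
        return Gst.FlowReturn.OK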
3. Client Access: Once the new factory is attached and the RTSP server is running (don't forget to call server.attach(None) to start the service), clients can connect to rtsp://<host>:8554/ds-test<id> to view the stream. Each source has its own endpoint.
This architecture allows you to dynamically manage multiple video sources and serve each as a separate RTSP stream. It follows the proven pattern from NVIDIA's runtime source add/delete sample (for dynamic pipeline control) and uses nvstreamdemux to get one output per input [1]. By carefully handling GStreamer pad linking and element states, you can add or drop streams on the fly without restarting your application. Removing a source mirrors the add path, as sketched below.
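A sketch of the removal path, mirroring the runtime add/delete sample (helper names are illustrative):

def remove_source(source_id, source_bin, output_bin):
    # Quiesce both branches first so no buffers are in flight.
    source_bin.set_state(Gst.State.NULL)
    output_bin.set_state(Gst.State.NULL)
    # Flush and release the streammux sink pad used by this source.
    sinkpad = streammux.get_static_pad(f"sink_{source_id}")
    if sinkpad:
        sinkpad.send_event(Gst.Event.new_flush_stop(False))
        streammux.release_request_pad(sinkpad)
    pipeline.remove(source_bin)
    # The output bin (and its demux pad) can be kept around for reuse.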
• When building the pipeline, ensure caps and memory types are handled. For example, when linking decodebin to streammux, use streammux.set_property('live-source', True) for live streams and ensure decodebin outputs NVMM memory (sometimes requiring a tee + nvvidconv). The DeepStream sample's cb_newpad handles this negotiation. See the property snippet after this list.
• DeepStream 7.1 introduced nvmultiurisrcbin, which can manage multiple URI sources via a REST API (add/remove) [14] [15]. However, using it would still require a demux or separate pipelines to achieve individual RTSP outputs. The approach above sticks to explicit mux/demux control, which is fine.
• Don't forget to handle GStreamer's threading and main loop. The GstRtspServer runs in the main loop; make sure your program doesn't exit, and integrate bus message handling (watch for EOS or errors on dynamic pads).
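As referenced above, a typical nvstreammux configuration for live sources (the values are illustrative):

streammux.set_property("live-source", True)            # live RTSP/camera inputs
streammux.set_property("batched-push-timeout", 33000)  # microseconds (~1/30 s)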
Using this design, you can add and remove camera streams on the fly, and each will be available at a unique rtsp:// URL (like /ds-test0, /ds-test1, etc.) without restarting the pipeline. This approach has been validated in practice by NVIDIA and the developer community for building flexible multi-camera DeepStream applications [13] [16]. Good luck with your implementation!
[3] [6] Managing Video Streams in Runtime with the NVIDIA DeepStream SDK - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/2022/02/managing-video-streams-in-runtime-with-the-nvidia-deepstream-sdk/
[4] [8] [9] [10] [11] Building a dynamic DeepStream pipeline: adding and removing RTSP sources at runtime - CSDN blog (in Chinese)
https://blog.csdn.net/weixin_45941990/article/details/126096872
[5] gstreamer - Unable to link Nvstreamdemux to rtph264pay element using PyGObject - Stack Overflow
https://stackoverflow.com/questions/65495906/unable-to-link-nvstreamdemux-to-rtph264pay-element-using-pygobject
[7] [14] [15] How to Dynamically Add nvstreammux/nvstreamdemux Sources and Sinks After Pipeline Start in …
[12] Problem in adding first source and removing last source during runtime - DeepStream SDK - NVIDIA Developer Forums
https://forums.developer.nvidia.com/t/problem-in-adding-first-source-and-removing-last-source-during-runtime/261888