This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.
NVIDIA DeepStream is a powerful SDK that unlocks GPU-accelerated building blocks for end-to-end vision AI pipelines. With more than 40 plugins available off the shelf, you can deploy fully optimized pipelines with cutting-edge AI inference, object tracking, and seamless integration with popular IoT message brokers such as Redis, Kafka, and MQTT.
DeepStream offers intuitive REST APIs to control your AI pipelines, whether they are deployed at the far edge or in the cloud.
Figure 1. DeepStream SDK workflow
The latest release of DeepStream 7.0 is one of our most significant releases to date, crafted to empower you with groundbreaking capabilities in the era of generative AI. This release is packed with innovative features designed to accelerate the development of next-generation applications.
Release highlights include the following:
- A new development pathway using DeepStream Libraries through Python APIs
- Simplified application development with the new Service Maker
- Enhanced features of Single-View 3D tracker
- Support for the sensor fusion model BEVFusion with the DeepStream 3D framework
- Support for Windows Subsystem for Linux (WSL2)
- Streamlined AI pipeline optimization with PipeTuner
Download DeepStream version 7.0 today.
DeepStream Libraries: Expanding developer horizons
When building vision AI applications, the first order of business is optimizing AI pipelines for top-notch performance. Whether you’re a seasoned pro or just diving in, understanding the strategic landscape is key.
Broadly, you have two pivotal approaches to choose from:
- Existing out-of-the-box frameworks such as GStreamer.
- Functional APIs that accelerate key building blocks within your own framework.
Figure 2. Vision AI application workflow
Figure 3 shows the structure of a DeepStream plugin. At its core, each plugin encapsulates its essential functionality within a library, accessible through a well-defined interface that aligns with the GStreamer plugin specification.
This standardized approach ensures seamless compatibility and integration within the GStreamer ecosystem. DeepStream adds zero-memory copy between plugins, enabling state-of-the-art performance.
Figure 3. DeepStream plugin high-level architecture
With the launch of DeepStream 7.0, NVIDIA is excited to open new pathways for developers, offering the flexibility to continue harnessing the power of GStreamer or to tap into the robust capabilities of DeepStream Libraries through intuitive Python APIs. This dual approach not only broadens access to NVIDIA's acceleration capabilities for Python developers but also integrates seamlessly into your existing AI frameworks.
Figure 4. DeepStream Libraries
DeepStream Libraries, powered by NVIDIA CV-CUDA, NvImageCodec, and PyNvVideoCodec, offer a set of low-level, GPU-accelerated operations that serve as easy drop-in replacements for CPU-bottlenecked equivalents in the pre- and post-processing stages of vision AI pipelines.
As open-source libraries, they provide complete transparency and the tools necessary to implement zero-memory-copy interactions among the libraries and with popular deep-learning frameworks. Setup is a single pip installation command, streamlining the integration process.
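For a sense of what this looks like in practice, here is a minimal sketch of GPU-accelerated image decoding and resizing with the underlying libraries. The module paths, signatures, and file name below are assumptions based on the public CV-CUDA and nvImageCodec Python bindings; check the DeepStream Libraries documentation for the exact APIs shipped with your install.

import cvcuda                   # NVIDIA CV-CUDA Python bindings
from nvidia import nvimgcodec   # GPU-accelerated image decoding

# Decode a JPEG directly on the GPU, replacing a CPU decode call;
# the decoded image stays in GPU memory.
decoder = nvimgcodec.Decoder()
image = decoder.read("input.jpg")   # hypothetical input file

# Wrap the decoded image as a CV-CUDA tensor (zero-copy via the CUDA
# array interface) and resize it on the GPU as a pre-processing step.
tensor = cvcuda.as_tensor(image, "HWC")
resized = cvcuda.resize(tensor, (224, 224, 3), cvcuda.Interp.LINEAR)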
The two paths now supported with DeepStream 7.0 have inherent benefits and tradeoffs:
- Easy to learn and integrate: DeepStream Libraries simplify the learning curve, enabling you to integrate the Python APIs swiftly and see the immediate benefits of GPU acceleration. A prime example is the DeepStream Libraries codecs, where the impact of accelerated decoding and encoding of images or video frames is evident after just a few lines of code.
- Ready-to-use solutions: If you're starting from scratch or don't have an existing pipeline framework, the mature DeepStream plugins combined with the GStreamer framework offer a rapid route to market deployment. These plugins come with built-in zero-memory copy and sophisticated resource management, making them an ideal choice for efficient application development.
Figure 5. Tradeoffs between DeepStream Libraries and plugins
Looking ahead, NVIDIA plans to continuously expand the range of supported DeepStream Libraries, further enriching the developer experience with each new release.
For more information, see the following resources:
- The Vision-AI Revolution powered by DeepStream
- Streamed Video Processing for Cloud-Scale Vision AI Services
- DeepStream Libraries documentation
DeepStream Service Maker: Simplifying application development
We also have fantastic news for DeepStream developers who want to leverage GStreamer, which can pose a steep learning curve for newcomers.
NVIDIA is thrilled to unveil a groundbreaking addition to the DeepStream technology suite: DeepStream Service Maker. It simplifies the development process significantly by abstracting the complexities of GStreamer, enabling anyone to build object-oriented C++ applications quickly.
Figure 6. DeepStream Service Maker abstraction layer
With DeepStream Service Maker, you can quickly construct a pipeline, integrate desired plugins, seamlessly link them, and launch applications—all within a matter of minutes. These applications can then be effortlessly packaged into containers and managed through REST APIs, offering a streamlined workflow that reduces the traditional coding effort dramatically.
// Build, link, and run a complete inference pipeline with Service Maker.
Pipeline pipeline("deepstream-test1");
pipeline.add("filesrc", "src", "location", argv[1])    // read an H.264 file from disk
    .add("h264parse", "parser")                        // parse the elementary stream
    .add("nvv4l2decoder", "decoder")                   // GPU-accelerated decode
    .add("nvstreammux", "mux", "batch-size", 1, "width", 1280, "height", 720) // batch frames
    .add("nvinfer", "infer", "config-file-path", CONFIG_FILE_PATH)            // run inference
    .add("nvvideoconvert", "converter")                // convert frames for the OSD
    .add("nvdsosd", "osd")                             // draw bounding boxes and labels
    .add(sink, "sink")                                 // sink element assumed created earlier
    .link("src", "parser", "decoder")                  // wire the elements in order
    .link({"decoder", "mux"}, {"", "sink_%u"})         // connect the decoder to a mux request pad
    .link("mux", "infer", "converter", "osd", "sink")
    .attach("infer", new BufferProbe("counter", new ObjectCounter))            // count detected objects
    .attach("infer", "sample_video_probe", "my probe", "src", "font-size", 20) // overlay probe from a plugin
    .start()                                           // run the pipeline...
    .wait();                                           // ...and block until it finishes
DeepStream Service Maker makes the development process more intuitive for those unfamiliar with GStreamer and also unlocks new capabilities for experienced developers. It fully supports custom plugins, which is crucial if you’ve invested time in creating custom solutions over the years.
By transforming complex coding requirements from hundreds of lines into just a few, DeepStream Service Maker revolutionizes how you approach and manage application development, making it easier and more accessible than ever.
DeepStream Service Maker also accelerates application development for edge environments, as it is the ideal path for developing your own microservices for Metropolis Microservices for Jetson (MMJ). Applications built with Service Maker can also be deployed as microservices in the cloud and controlled through REST APIs.
Build, Deploy, and Control Vision AI Apps Easily
When your application is built with Service Maker, it can be easily packaged into a container and then managed and dynamically controlled through intuitive REST APIs, covering operations such as stream addition and deletion and region-of-interest (ROI) configuration.
Controlling a Vision AI Application using REST APIs
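As a sketch of what this control path looks like, the snippet below adds a new stream over REST. The endpoint path, port, and payload fields are illustrative assumptions; consult the DeepStream REST API reference for the exact schema exposed by your application.

import requests

# Hypothetical endpoint and payload: adjust both to the REST API schema
# documented for your DeepStream application.
response = requests.post(
    "http://localhost:9000/api/v1/stream/add",        # assumed host, port, and path
    json={
        "value": {
            "camera_id": "cam-01",                         # illustrative sensor ID
            "camera_url": "rtsp://camera.example/stream",  # illustrative RTSP source
        }
    },
)
print(response.status_code)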
This first release of DeepStream Service Maker supports C++. Python support will be available in a future release, broadening the tool’s accessibility and versatility.
For more information, see the DeepStream Service Maker documentation.
DeepStream Single-View 3D Tracking
The latest release of NVIDIA DeepStream introduces significant enhancements to Single-View 3D Tracking (SV3DT). This advanced feature is designed to accurately track objects in 3D space using just a single monocular camera, providing precise localization of objects on a 3D world ground plane.
This first version of SV3DT models pedestrians as cylinders on the ground plane. This approach ensures more accurate localization by positioning the feet at the bottom of the cylinder, offering a clearer, more defined representation of movement and positioning, regardless of the level of occlusion.
Figure 7. DeepStream Single-View 3D Tracking creates cylinders and foot locations from a mono camera
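To make the geometry concrete, here is a toy illustration of how a foot position on the ground plane can be recovered from an image point once the camera's 3x4 projection matrix is known. This is standard projective geometry, not DeepStream's actual SV3DT implementation, and the function name is hypothetical.

import numpy as np

def foot_on_ground(P, u, v):
    """Recover the world (X, Y) of a point on the ground plane Z = 0
    from its pixel coordinates (u, v), given a 3x4 projection matrix P."""
    # With Z = 0, the projection reduces to a homography built from
    # columns 0, 1, and 3 of P; invert it to map pixels to the ground.
    H = P[:, [0, 1, 3]]
    X, Y, W = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return X / W, Y / W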
For more information, see the following resources:
- Mitigating Occlusions in Visual Perception Using Single-View 3D Tracking in NVIDIA DeepStream
- Single-View 3D Tracking (Alpha) in the NVIDIA DeepStream SDK Developer Guide
Support for BEVFusion with the DeepStream 3D framework
DeepStream 7.0 brings support for one of the most exciting AI models for sensor fusion: BEVFusion. The release enhances the DeepStream 3D (DS3D) framework, adding both LIDAR and radar inputs that can be fused with camera inputs, and reflects NVIDIA's commitment to delivering the next generation of environmental perception solutions.
Integration with various sensors is streamlined through a low-level library, available as source code, simplifying support for different sensor vendors.
BEVFusion’s integration into the DS3D framework offers a suite of features designed to boost functionality and ease of use:
- Easy visualization: Rendering and rotation for LIDAR or 3D data on-screen, projection of LIDAR data into images, and the display of 3D bounding boxes from multiple viewpoints.
- Message broker support: Default integration with message brokers for fast and efficient integration with other subsystems.
- Sensor sync: Robust synchronization for multi-sensor data (including LIDAR, radar, and cameras), supporting both file inputs and livestreams. This feature accommodates varying frame rates and manages frame drops, crucial for real-world application adaptability.
- Alignment filter: Data alignment from different sensors according to their intrinsic and extrinsic parameters, enabling precise customization suited to various sensor setups (see the sketch after this list).
- Custom 3D data preprocessing: Provides support for tailored preprocessing needs for LIDAR and radar data, enhancing processing accuracy and flexibility.
- Generic data map management: A comprehensive array of sensor and tensor data managed through a key-value system, streamlining data oversight and manipulation.
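As a toy illustration of the kind of alignment the filter performs (not the DS3D library's actual API), the sketch below transforms LIDAR points into a camera frame with extrinsics and projects them into the image with intrinsics; the function name and argument layout are hypothetical.

import numpy as np

def project_lidar_to_image(points_xyz, R, t, K):
    """Align an (N, 3) array of LIDAR points with a camera image.
    R (3x3) and t (3,) are extrinsics mapping LIDAR to camera
    coordinates; K (3x3) is the camera intrinsic matrix."""
    cam = points_xyz @ R.T + t       # LIDAR frame -> camera frame
    uvw = cam @ K.T                  # pinhole projection to homogeneous pixels
    # In practice, filter out points behind the camera (cam[:, 2] <= 0).
    return uvw[:, :2] / uvw[:, 2:3]  # normalize to pixel coordinates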
With these features, DeepStream 7.0 with BEVFusion stands at the forefront of 3D AI development, pushing the boundaries of what’s possible with sensor fusion technology that can be deployed from the edge to the cloud.
Figure 8. Sensor fusion for your applications with BEVFusion and DS3D framework
For more information, see the DeepStream 3D Framework documentation.
Support for Windows Subsystem for Linux
DeepStream applications can now be developed right on your Windows system using the Windows Subsystem for Linux (WSL2). This update is a significant stride forward and answers a frequent request from customers who develop on IT-approved systems where Windows is the standard.
With the integration of DeepStream on WSL2, you can streamline your workflow on a single system, eliminating the need for remote access to Linux systems. This new capability ensures that you can use the powerful features of DeepStream without the complexity of dual-system setups, simplifying the development process and enhancing productivity.
DeepStream support for WSL2 offers the flexibility and convenience needed to develop advanced applications directly on Windows. Embrace the ease of Windows compatibility while benefiting from the robust capabilities of DeepStream.
Figure 9. DeepStream SDK on WSL2 architecture
For more information, see the WSL2 documentation.
PipeTuner 1.0: Streamlining AI pipeline optimization
PipeTuner 1.0 is a new developer tool poised to revolutionize the tuning of AI pipelines. AI services typically incorporate a wide array of parameters for inference and tracking. Finding the optimal settings to maximize accuracy for specific use cases is a complex and critical process.
Traditionally, manual tuning demands deep knowledge of each pipeline module and becomes impractical with extensive, high-dimensional parameter spaces—even with the support of datasets and ground-truth labels for accuracy analysis.
PipeTuner is designed to address these challenges head-on. It efficiently explores the parameter space and automates the identification of the best parameters, achieving the highest possible key performance indicators (KPIs) on the dataset you provide. Crucially, PipeTuner simplifies this process so that no technical knowledge of the pipeline or its parameters is required.
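To give a flavor of the kind of search PipeTuner automates (the real tool uses far more sophisticated exploration strategies and DeepStream-specific KPIs such as tracking accuracy), a naive random search over a parameter space might look like the following hypothetical sketch.

import random

def random_search(evaluate_kpi, param_space, trials=100):
    """Toy sketch of automated parameter tuning: sample parameter sets
    at random and keep the one with the best KPI score. evaluate_kpi
    is a user-supplied function scoring a pipeline run on labeled data."""
    best_params, best_kpi = None, float("-inf")
    for _ in range(trials):
        params = {name: random.uniform(low, high)
                  for name, (low, high) in param_space.items()}
        kpi = evaluate_kpi(params)   # e.g., tracking accuracy on the dataset
        if kpi > best_kpi:
            best_params, best_kpi = params, kpi
    return best_params, best_kpi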
PipeTuner 1.0 is in Developer Preview.
Figure 10. PipeTuner workflow
By integrating PipeTuner, you can accelerate your time to market and tailor DeepStream pipeline parameters to each deployment location, ensuring optimal performance in every scenario. This marks a significant step forward in making sophisticated AI pipeline optimization accessible and effective for everyone.
Start PipeTuning now!
Automatically Optimize DeepStream Vision AI Apps with PipeTuner
For more information, see the PipeTuner documentation.
Summary
We’re excited to see how you use these new tools and capabilities available on the latest DeepStream SDK 7.0 release to create something extraordinary.
Get started now and dive deeper in the DeepStream forum.
As always, happy DeepStreaming!
Related resources
- DLI course: AI Workflows for Intelligent Video Analytics with DeepStream
- GTC session: The Vision-AI Revolution powered by DeepStream
- GTC session: From Concept to Creation: Using TAO and DeepStream SDKs for Vision AI Application Development
- NGC Containers: DeepStream
- NGC Containers: DeepStream IVA Deployment Demo
- SDK: DeepStream SDK
Carlos Garcia-Sierra
Product Manager for DeepStream SDK, NVIDIA
Debraj Sinha
Product Marketing Manager for Metropolis, NVIDIA