NVIDIA JetPack 6.2 Brings Super Mode to NVIDIA Jetson Orin Nano and Jetson Orin NX Modules

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

The introduction of the NVIDIA Jetson Orin Nano Super Developer Kit sparked a new age of generative AI for small edge devices. The new Super Mode delivered an unprecedented generative AI performance boost of up to 1.7x on the developer kit, making it the most affordable generative AI supercomputer.

JetPack 6.2 is now available to support Super Mode for Jetson Orin Nano and Jetson Orin NX production modules, delivering up to 2x higher generative AI model performance. Now you can unlock greater value and lower total cost of ownership for new and existing robotics and edge AI applications.

This post discusses the details of Super Mode, including new power modes, benchmarks for popular generative AI models on the Jetson Orin Nano and Orin NX modules, updates to the documentation, and insights into the NVIDIA Partner Network (NPN) partners supporting Super Mode.

New reference power modes on the Jetson Orin Nano and Jetson Orin NX series

JetPack 6.2 enables the power boost on the Jetson Orin Nano and Jetson Orin NX series by unlocking higher frequencies on the GPU, DLA, memory, and CPU clocks.

| Module | Existing reference power modes (available with existing flashing configs) | New power modes (available only with new flashing configs) |
|---|---|---|
| NVIDIA Jetson Orin Nano 4GB | 7W, 10W | 10W, 25W, MAXN SUPER |
| NVIDIA Jetson Orin Nano 8GB | 7W, 15W | 15W, 25W, MAXN SUPER |
| NVIDIA Jetson Orin NX 8GB | 10W, 15W, 20W, MAXN | 10W, 15W, 20W, 40W, MAXN SUPER |
| NVIDIA Jetson Orin NX 16GB | 10W, 15W, 25W, MAXN | 10W, 15W, 25W, 40W, MAXN SUPER |

Table 1. New reference power modes on the Jetson Orin Nano and Jetson Orin NX modules

Jetson Orin Nano modules now have a 25W mode and a new uncapped MAXN SUPER mode. Similarly, Jetson Orin NX modules can now use a new higher 40W reference power mode as well as an uncapped MAXN SUPER mode.

MAXN SUPER is an uncapped power mode that enables the highest number of cores and the highest clock frequencies for the CPU, GPU, DLA, PVA, and SOC engines. If the total module power exceeds the thermal design power (TDP) budget in this mode, the module is throttled to lower frequencies, delivering lower performance while staying within the thermal budget.

We strongly recommend building your own custom power mode to find the right balance among power consumption, thermal stability, and performance for your application.
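
A custom mode is defined as an entry in the board's nvpmodel configuration file (/etc/nvpmodel.conf on Jetson systems). The fragment below is a hypothetical sketch only: the ID, name, and clock values are illustrative, and the exact directive names for your module should be copied from the entries already present in the configuration shipped on your board.

```
< POWER_MODEL ID=4 NAME=MY_CUSTOM_20W >
CPU_ONLINE CORE_0 1
CPU_ONLINE CORE_1 1
CPU_ONLINE CORE_2 1
CPU_ONLINE CORE_3 1
CPU_A78 MAX_FREQ 1510400
GPU MAX_FREQ 765000000
EMC MAX_FREQ 3199000000
```

After saving the entry, activate it with sudo nvpmodel -m 4 and confirm the active mode with sudo nvpmodel -q.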

Table 2 compares the detailed specifications of Jetson Orin Nano 4GB and 8GB and Jetson Orin NX 8GB and 16GB in their original and Super Mode.

| | Orin Nano 4GB | Orin Nano 4GB (Super) | Orin Nano 8GB | Orin Nano 8GB (Super) | Orin NX 8GB | Orin NX 8GB (Super) | Orin NX 16GB | Orin NX 16GB (Super) |
|---|---|---|---|---|---|---|---|---|
| Peak AI perf, INT8 (sparse/dense) | 20/10 TOPS | 34/17 TOPS | 40/20 TOPS | 67/33 TOPS | 70/35 TOPS | 117/58 TOPS | 100/50 TOPS | 157/78 TOPS |
| NVIDIA Ampere GPU | 512 CUDA cores, 16 Tensor Cores, 625 MHz; 20/10 INT8 TOPS (S/D); 5 FP16 TFLOPS | 512 CUDA cores, 16 Tensor Cores, 1020 MHz; 34/17 INT8 TOPS (S/D); 8.5 FP16 TFLOPS | 1024 CUDA cores, 32 Tensor Cores, 625 MHz; 40/20 INT8 TOPS (S/D); 10 FP16 TFLOPS | 1024 CUDA cores, 32 Tensor Cores, 1020 MHz; 67/33 INT8 TOPS (S/D); 17 FP16 TFLOPS | 1024 CUDA cores, 32 Tensor Cores, 765 MHz; 50/25 INT8 TOPS (S/D); 13 FP16 TFLOPS | 1024 CUDA cores, 32 Tensor Cores, 1173 MHz; 77/38 INT8 TOPS (S/D); 19 FP16 TFLOPS | 1024 CUDA cores, 32 Tensor Cores, 918 MHz; 60/30 INT8 TOPS (S/D); 15 FP16 TFLOPS | 1024 CUDA cores, 32 Tensor Cores, 1173 MHz; 77/38 INT8 TOPS (S/D); 19 FP16 TFLOPS |
| CPU | 6x A78 @ 1.5 GHz | 6x A78 @ 1.7 GHz | 6x A78 @ 1.5 GHz | 6x A78 @ 1.7 GHz | 6x A78 @ 2.0 GHz | 6x A78 @ 2.0 GHz | 8x A78 @ 2.0 GHz | 8x A78 @ 2.0 GHz |
| DLA, INT8 (S/D) | NA | NA | NA | NA | 20/10 TOPS | 40/20 TOPS | 40/20 TOPS | 80/40 TOPS |
| DRAM bandwidth | 34 GB/s | 51 GB/s | 68 GB/s | 102 GB/s | 102 GB/s | 102 GB/s | 102 GB/s | 102 GB/s |
| Module power | 7W, 10W | 7W, 10W, 25W | 7W, 15W | 7W, 15W, 25W | 10W, 15W, 20W | 10W, 15W, 25W, 40W | 10W, 15W, 25W | 10W, 15W, 25W, 40W |

Table 2. Original specs for Jetson Orin Nano and Jetson Orin NX and the specs in Super Mode

While using the new power modes, ensure that your product's existing or new thermal design can accommodate the new power specifications. For more information, see the updated Thermal Design Guide.

Updated Power Estimator Tool

The Power Estimator Tool from NVIDIA creates custom power profiles and nvpmodel configuration files by letting you modify system parameters such as the number of cores, maximum frequencies, and load levels on the GPU, CPU, DLA, and so on. It estimates the power consumption of each configuration, helping you find the parameter settings that strike the desired balance between performance and power consumption.

We have updated the Power Estimator Tool for Super Mode. We strongly recommend using it, and verifying the estimates on the device, before deploying high-performance applications.
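
The balance the tool targets can also be reasoned about with simple arithmetic. As a toy example, assuming (purely for illustration) that the Gemma 2 2B runs in Table 4 were bounded by the nominal 15W and 25W mode budgets rather than measured power draw, throughput per watt can be compared across modes:

```python
def tokens_per_joule(tokens_per_sec: float, watts: float) -> float:
    # Energy efficiency: tokens generated per joule consumed.
    return tokens_per_sec / watts

# Gemma 2 2B on Jetson Orin Nano 8GB (tokens/sec from Table 4).
# 15W and 25W are the nominal mode budgets, not measured draw.
original = tokens_per_joule(21.5, 15.0)   # original 15W mode
boosted = tokens_per_joule(35.0, 25.0)    # 25W Super Mode
print(f"original: {original:.2f} tok/J, super: {boosted:.2f} tok/J")
```

By this rough measure, Super Mode delivers about 63% more throughput at a similar tokens-per-joule cost; real efficiency should be measured with actual power telemetry (for example, tegrastats) rather than mode budgets.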

Boost performance on popular generative AI models

With the introduction of Super Mode in JetPack 6.2, the Jetson Orin Nano and Jetson Orin NX modules deliver up to a 2x inference performance boost. We benchmarked the most popular large language models (LLMs), vision language models (VLMs), and vision transformers (ViTs).

Large language models

The following chart and tables show the Super Mode performance benchmark for popular LLMs such as Llama 3.1 8B, Qwen 2.5 7B, and Gemma 2 2B.

Figure 1. Performance improvements for LLMs using Super Mode

DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.

In the following tables, LLM generation throughput (tokens per second) was measured with INT4 quantization using the MLC API.
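
The measurement loop behind such numbers is straightforward to sketch. The harness below is a simplified, hypothetical stand-in for the actual benchmark scripts (the MLC API surface is not reproduced here; generate is any callable that streams tokens):

```python
import time

def measure_tokens_per_sec(generate, prompt, warmup=1, runs=3):
    """Average decode throughput of a token-streaming callable.

    generate(prompt) stands in for a streaming LLM API and must
    return an iterable of generated tokens.
    """
    for _ in range(warmup):              # untimed warmup pass
        for _ in generate(prompt):
            pass
    tokens, seconds = 0, 0.0
    for _ in range(runs):
        start = time.perf_counter()
        for _ in generate(prompt):
            tokens += 1
        seconds += time.perf_counter() - start
    return tokens / seconds

# Dummy stand-in model so the harness runs anywhere.
def dummy_generate(prompt):
    return ("tok" for _ in range(128))

print(f"{measure_tokens_per_sec(dummy_generate, 'hello'):.1f} tokens/sec")
```

A real run would swap dummy_generate for the model's streaming call and average over longer generations to smooth out throttling effects.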

Table 3 shows the LLM performance gain on Jetson Orin Nano 4GB with JetPack 6.2.

| Model | Orin Nano 4GB (original) | Orin Nano 4GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| Gemma 2 2B | 11.40 | 18.60 | 1.64 |
| SmolLM2 1.7B | 23.00 | 35.80 | 1.56 |

Table 3. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin Nano 4GB

Table 4 shows the LLM performance gain on Jetson Orin Nano 8GB with JetPack 6.2.

| Model | Orin Nano 8GB (original) | Orin Nano 8GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| Llama 3.1 8B | 14.00 | 19.10 | 1.37 |
| Llama 3.2 3B | 27.70 | 43.10 | 1.55 |
| Qwen 2.5 7B | 14.20 | 21.80 | 1.53 |
| Gemma 2 2B | 21.50 | 35.00 | 1.63 |
| Gemma 2 9B | 7.20 | 9.20 | 1.28 |
| Phi-3.5 3.8B | 24.70 | 38.10 | 1.54 |
| SmolLM2 1.7B | 41.00 | 64.50 | 1.57 |

Table 4. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin Nano 8GB

Table 5 shows the LLM performance gain on Jetson Orin NX 8GB with JetPack 6.2.

| Model | Orin NX 8GB (original) | Orin NX 8GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| Llama 3.1 8B | 15.90 | 23.10 | 1.46 |
| Llama 3.2 3B | 34.50 | 46.50 | 1.35 |
| Qwen 2.5 7B | 17.10 | 23.80 | 1.39 |
| Gemma 2 2B | 26.60 | 39.30 | 1.48 |
| Gemma 2 9B | 8.80 | 13.38 | 1.52 |
| Phi-3.5 3.8B | 30.80 | 41.30 | 1.34 |
| SmolLM2 1.7B | 51.50 | 69.80 | 1.35 |

Table 5. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin NX 8GB

Table 6 shows the LLM performance gain on Jetson Orin NX 16GB with JetPack 6.2.

| Model | Orin NX 16GB (original) | Orin NX 16GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| Llama 3.1 8B | 20.50 | 22.80 | 1.11 |
| Llama 3.2 3B | 40.40 | 45.80 | 1.13 |
| Qwen 2.5 7B | 20.80 | 23.50 | 1.13 |
| Gemma 2 2B | 31.60 | 39.00 | 1.23 |
| Gemma 2 9B | 10.56 | 13.26 | 1.26 |
| Phi-3.5 3.8B | 35.90 | 40.90 | 1.14 |
| SmolLM2 1.7B | 59.50 | 68.80 | 1.16 |

Table 6. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin NX 16GB
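
The Perf Gain column is simply the ratio of Super Mode throughput to original throughput, as this small check reproduces for the Llama 3.1 8B row of Table 6. Other rows can differ in the last digit, since the published gains appear to be computed from unrounded measurements:

```python
def perf_gain(original_tps: float, super_tps: float) -> float:
    # Gain = Super Mode throughput / original throughput.
    return round(super_tps / original_tps, 2)

# Llama 3.1 8B on Jetson Orin NX 16GB (Table 6): 20.50 -> 22.80 tokens/sec.
print(perf_gain(20.50, 22.80))  # -> 1.11
```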

Vision language models

The following chart and tables show the Super Mode performance benchmark for popular VLMs such as VILA 1.5 8B, LLAVA 1.6 7B, and Qwen2 VL 2B.

Figure 2. Performance improvements of VLMs when run using Super Mode

DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.

Table 7 shows the VLM performance gain on Jetson Orin Nano 4GB with JetPack 6.2.

| Model | Orin Nano 4GB (original) | Orin Nano 4GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| PaliGemma2 3B | 7.2 | 11.2 | 1.56 |

Table 7. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin Nano 4GB

Table 8 shows the VLM performance gain on Jetson Orin Nano 8GB with JetPack 6.2.

| Model | Orin Nano 8GB (original) | Orin Nano 8GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| VILA 1.5 3B | 0.7 | 1.1 | 1.51 |
| VILA 1.5 8B | 0.6 | 0.8 | 1.45 |
| LLAVA 1.6 7B | 0.4 | 0.6 | 1.38 |
| Qwen2 VL 2B | 2.8 | 4.4 | 1.57 |
| InternVL2.5 4B | 2.5 | 5.1 | 2.04 |
| PaliGemma2 3B | 13.7 | 21.6 | 1.58 |
| SmolVLM 2B | 8.1 | 12.9 | 1.59 |

Table 8. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin Nano 8GB

Table 9 shows the VLM performance gain on Jetson Orin NX 8GB with JetPack 6.2.

| Model | Orin NX 8GB (original) | Orin NX 8GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| VILA 1.5 3B | 0.8 | 1.0 | 1.25 |
| VILA 1.5 8B | 0.7 | 1.04 | 1.50 |
| LLAVA 1.6 7B | 0.5 | 1.2 | 2.54 |
| Qwen2 VL 2B | 3.4 | 4.8 | 1.41 |
| InternVL2.5 4B | 3.0 | 4.1 | 1.37 |
| PaliGemma2 3B | 17.1 | 23.9 | 1.40 |
| SmolVLM 2B | 9.7 | 14.4 | 1.48 |

Table 9. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin NX 8GB

Table 10 shows the VLM performance gain on Jetson Orin NX 16GB with JetPack 6.2.

| Model | Orin NX 16GB (original) | Orin NX 16GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| VILA 1.5 3B | 1.0 | 1.3 | 1.23 |
| VILA 1.5 8B | 0.8 | 1.0 | 1.25 |
| LLAVA 1.6 7B | 0.6 | 0.7 | 1.07 |
| Qwen2 VL 2B | 4.0 | 4.8 | 1.20 |
| InternVL2.5 4B | 2.8 | 4.4 | 1.57 |
| PaliGemma2 3B | 20.0 | 23.8 | 1.19 |
| SmolVLM 2B | 11.7 | 14.3 | 1.22 |

Table 10. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin NX 16GB

All VILA and LLAVA models were run with INT4 precision using MLC, while the rest of the models were run in FP4 precision with Hugging Face Transformers.

Vision transformers

The following chart and tables show the Super Mode performance benchmark for popular ViTs such as CLIP, DINO, and SAM2.

Figure 3. Performance improvements of ViTs when run using Super Mode

DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.

Table 11 shows the ViT performance gain on Jetson Orin Nano 4GB with JetPack 6.2.

| Model | Orin Nano 4GB (original) | Orin Nano 4GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| clip-vit-base-patch32 | 126.8 | 189.5 | 1.49 |
| clip-vit-base-patch16 | 63.2 | 112.4 | 1.78 |
| DINOv2-base-patch14 | 49.3 | 79.3 | 1.61 |
| SAM2 base | 2.5 | 3.8 | 1.54 |
| vit-base-patch16-224 | 62.4 | 103.3 | 1.66 |

Table 11. Benchmark performance in frames/sec for popular ViTs on Jetson Orin Nano 4GB

Table 12 shows the ViT performance gain on Jetson Orin Nano 8GB with JetPack 6.2.

| Model | Orin Nano 8GB (original) | Orin Nano 8GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| clip-vit-base-patch32 | 196 | 314 | 1.60 |
| clip-vit-base-patch16 | 95 | 161 | 1.69 |
| DINOv2-base-patch14 | 75 | 126 | 1.68 |
| SAM2 base | 4.4 | 6.3 | 1.43 |
| Grounding DINO | 4.1 | 6.2 | 1.52 |
| vit-base-patch16-224 | 98 | 158 | 1.61 |
| vit-base-patch32-224 | 171 | 273 | 1.60 |

Table 12. Benchmark performance in frames/sec for popular ViTs on Jetson Orin Nano 8GB

Table 13 shows the ViT performance gain on Jetson Orin NX 8GB with JetPack 6.2.

| Model | Orin NX 8GB (original) | Orin NX 8GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| clip-vit-base-patch32 | 234.0 | 361.1 | 1.54 |
| clip-vit-base-patch16 | 101.7 | 204.3 | 2.01 |
| DINOv2-base-patch14 | 81.4 | 160.3 | 1.97 |
| SAM2 base | 3.9 | 7.4 | 1.92 |
| Grounding DINO | 4.2 | 7.4 | 1.75 |
| vit-base-patch16-224 | 98.6 | 192.5 | 1.95 |
| vit-base-patch32-224 | 193.1 | 313.5 | 1.62 |

Table 13. Benchmark performance in frames/sec for popular ViTs on Jetson Orin NX 8GB

Table 14 shows the ViT performance gain on Jetson Orin NX 16GB with JetPack 6.2.

| Model | Orin NX 16GB (original) | Orin NX 16GB (Super Mode) | Perf Gain (x) |
|---|---|---|---|
| clip-vit-base-patch32 | 323.2 | 356.7 | 1.10 |
| clip-vit-base-patch16 | 163.5 | 193.6 | 1.18 |
| DINOv2-base-patch14 | 127.5 | 159.8 | 1.25 |
| SAM2 base | 6.2 | 7.3 | 1.18 |
| Grounding DINO | 6.2 | 7.2 | 1.16 |
| vit-base-patch16-224 | 158.6 | 190.2 | 1.20 |
| vit-base-patch32-224 | 281.2 | 309.5 | 1.10 |

Table 14. Benchmark performance in frames/sec for popular ViTs on Jetson Orin NX 16GB

All ViT models were run with FP16 precision using NVIDIA TensorRT; measurements are in frames per second (FPS).

Getting started on NVIDIA Jetson Orin Nano and Jetson Orin NX with JetPack 6.2

The NVIDIA Jetson ecosystem provides various ways for you to flash the developer kit and production modules with the JetPack image.

To install JetPack 6.2 on the Jetson Orin Nano Developer Kit or the modules, use your preferred supported flashing method.

New flashing configuration

The new power modes are only available with the new flashing configuration. The default flashing configuration has not changed. To enable the new power modes, you must use the new flashing configuration while flashing.

Use the following flashing configuration when flashing:

jetson-orin-nano-devkit-super.conf

After flashing or updating to JetPack 6.2, run the following command to enable the newly available MAXN SUPER mode.

MAXN SUPER mode on Jetson Orin Nano Modules:

sudo nvpmodel -m 2

MAXN SUPER mode on Jetson Orin NX Modules:

sudo nvpmodel -m 0

You can also select MAXN SUPER and the other power modes from the power mode menu in the top-right corner of the screen.

Figure 4. Power mode selection menus

Jetson AI Lab

The Jetson AI Lab is the NVIDIA hub for exploring and experimenting with generative AI technologies optimized for edge devices. It supports developers and provides a collaborative community with nearly 50 tutorials, prebuilt containers, and resources for deploying on-device LLMs, SLMs, VLMs, diffusion policies, and speech models using optimized inferencing infrastructures.

By simplifying access to cutting-edge AI tools, the lab empowers developers of all levels to innovate and deploy generative AI locally, advancing open-source edge AI and robot learning.

Dive into generative AI using the easy-to-follow tutorials for your developer kit powered by JetPack 6.2.

Updated documentation: Datasheets and design guides

With the new performance boost, the following resources have been updated and can be downloaded from the Jetson Download Center.

Jetson ecosystem partners ready for Super Mode

To support customer deployments, the Jetson ecosystem partners have enhanced their solutions to support this boosted performance.

| Category | Jetson ecosystem partners |
|---|---|
| ISV solutions | DeepEdge, Edge Impulse, RidgeRun, Ultralytics |
| Hardware system partners | AAEON, Advantech, Aetina, AIMobile, ASUSTek, Axiomtek, Connect Tech, Seeed Studio, Syslogic, Vecow, Yuan High-Tech |
| Thermal solutions | Advanced Thermal Solutions, Frore Systems |

NVIDIA Jetson Orin lifecycle and roadmap

Due to the growing customer demand for Jetson Orin, NVIDIA recently announced the extension of the product lifecycle of Jetson Orin through 2032. With this performance boost, the Jetson Orin Nano and Orin NX series are the ideal platforms for both current and future models.

The upcoming JetPack 5.1.5 will also enable Super Mode for the Jetson Orin NX and Jetson Orin Nano modules. Developers and customers who develop with JetPack 5 will benefit from the performance boost.

Figure 5. JetPack software roadmap

Boost your application performance with JetPack 6.2

JetPack 6.2 delivers up to a 2x boost in inference performance on existing Jetson modules at no added cost. This upgrade is a must-have for Jetson developers and customers looking to supercharge their applications. Upgrade to JetPack 6.2 today and unleash the full potential of your Jetson platform.

Stay up to date by subscribing to our newsletter, and follow NVIDIA Robotics on LinkedIn, Instagram, X, and Facebook. For more information, explore our documentation or join the Robotics community on our developer forums, Discord, and YouTube channels.

Shashank Maheshwari
Product Manager for Jetson Software, NVIDIA

Chen Su
Senior Technical Product Marketing Manager, NVIDIA
