Orin NX - flashing, cooling, configuring and testing

At the time of writing this article, the official Nvidia Jetson Orin NX heatsink was not yet available, and the SDK Manager did not support the Orin NX. This means both cooling and flashing need a bit of creativity, but in the end, the whole process is not too complicated, just a bit inconvenient.



To successfully flash and run the Orin NX module, you must ensure it's properly cooled. Once dedicated cooling solutions become available, they will be the best option. For now, a Xavier NX heatsink combined with a thermal pad works well.

I bought a Waveshare NX-FAN-PWM heatsink:


While the mounting hole layout and the bracket are the same, the height of the components on the Xavier NX and Orin NX boards differs slightly, so the Orin NX CPU/GPU die does not touch the heatsink - you can see the die reflected in the heatsink surface, and the gap is clearly visible:


My solution was to remove the thermal paste and use a 0.5 mm thick thermal pad. To keep the heat conductivity decent, I used a Thermal Grizzly Minus Pad 8 (30 × 30 × 0.5 mm):


Keep in mind that the screws will not go all the way in. Put the pad between the CPU/GPU die and the heatsink, then tighten all of the screws until the 4 square coils touch the heatsink.

I tested this solution under heavy CPU and GPU loads and the results were very good.


USB Keyboard and Mouse

During the flashing process, I used a USB keyboard and mouse; I'll be using them (and the attached monitor) later too. Jetson devices cannot switch the USB controller used for flashing into host mode - that trick only works on the Raspberry Pi Compute Module 4. This means an additional USB controller is necessary.

For the USB ports, I used an Inline 66905 Mini-PCIe USB3 controller based on the Renesas D720201 chip - it works out of the box:




I tested flashing from bare-metal Ubuntu 20.04 LTS. As far as I know, WSL on Windows or the free VMware Player running Ubuntu 20.04 LTS should also work. If you have tested such a setup, let us know!

The installation process also assumes flashing in Node 2, since some users experienced difficulties when flashing in Node 1.


PC Preparation

Install Ubuntu 20.04 LTS (this exact version, not 22.04 LTS or any other) on a PC.

Currently, SDK Manager does not support Orin devices, so we have to flash them "by hand". In the future, the whole process should be simpler, but for now, this is what we have to do.

Install the required libraries:

sudo apt install -y wget qemu-user-static nano

Navigate to the Jetson Linux page, then click on the green button of the latest Jetson Linux version:


Scroll down to the download table. From the Drivers section copy links for both the Driver Package (BSP) and Sample Root Filesystem:


For example, for Jetson Linux 35.2.1 the links are:
- Driver Package (BSP): https://developer.nvidia.com/downloads/jetson-linux-r3521-aarch64tbz2
- Sample Root Filesystem: https://developer.nvidia.com/downloads/linux-sample-root-filesystem-r3521aarch64tbz2

Download both files to, for example, the home directory - with the above URL example:

wget https://developer.nvidia.com/downloads/jetson-linux-r3521-aarch64tbz2
wget https://developer.nvidia.com/downloads/linux-sample-root-filesystem-r3521aarch64tbz2

Unpack the Driver Package (BSP) (again, using names from the example URLs above):

tar xpf jetson-linux-r3521-aarch64tbz2

Unpack the Sample Root Filesystem into the Driver Package (BSP) (sudo is important here):

sudo tar xpf linux-sample-root-filesystem-r3521aarch64tbz2 -C Linux_for_Tegra/rootfs/

Turing Pi 2 (like some other custom carrier boards) does not have the onboard EEPROM that the module or the flasher can access. The flasher, however, expects the EEPROM to exist, as it does on the official Xavier NX carrier board. We need to modify one file to set the EEPROM read size to 0.

sudo nano Linux_for_Tegra/bootloader/t186ref/BCT/tegra234-mb2-bct-misc-p3767-0000.dts

The last EEPROM configuration line says:

cvb_eeprom_read_size = <0x100>;

Replace the value 0x100 with 0x0 (be careful not to modify cvm_eeprom_read_size instead - the name is similar, but starts with cvm; modify the one whose name starts with cvb - b as in board):

cvb_eeprom_read_size = <0x0>;

Press F3 (Ctrl+O) and F2 (Ctrl+X) to save and exit:
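Alternatively, if you'd rather script this change, a sed one-liner can make the same edit non-interactively (a sketch, assuming the file still contains the stock 0x100 value):

```shell
FILE=Linux_for_Tegra/bootloader/t186ref/BCT/tegra234-mb2-bct-misc-p3767-0000.dts

# Replace only the cvb_ line; the similarly named cvm_ line is left untouched
sudo sed -i 's/cvb_eeprom_read_size = <0x100>;/cvb_eeprom_read_size = <0x0>;/' "$FILE"

# Verify - the cvb_ line should now read <0x0>
grep eeprom_read_size "$FILE"
```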


Prepare the firmware:

cd Linux_for_Tegra/
sudo ./apply_binaries.sh
sudo ./tools/l4t_flash_prerequisites.sh


Turing Pi 2 Preparation

Insert the Orin NX into Node 2 and install the NVMe drive for Node 2. (I don't know yet whether this works with a USB drive on Turing Pi 2. In theory, you could use a Mini PCIe to SATA controller, but the bootloader would have to support it, and I have not tested this possibility yet - if you happen to test this configuration, please let us know!)

Now, let's put the Orin NX device into the Forced Recovery Mode using the web panel:

  • turn the Node 2 power off
  • set Node 2 into the device mode
  • turn the Node 2 power on

You can also use the command line version:

  • tpi -p off (turns all nodes off)
  • tpi -u device -n 2
  • tpi -p on (turns all nodes on)

Connect the USB A-A cable to the PC and verify that the Orin NX has been detected by invoking lsusb. It should show up as an Nvidia Corp. APX device in the list:
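A quick way to check this from a terminal (the product ID varies between modules, so match on the vendor name; the output below is an example):

```shell
# In Forced Recovery Mode the module enumerates with Nvidia's vendor ID 0955
lsusb | grep -i "NVIDIA Corp"
# Example output (the ID after 0955: depends on the exact module):
# Bus 001 Device 004: ID 0955:7323 NVIDIA Corp. APX
```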




Assuming you're still in the Linux_for_Tegra directory, flash Orin NX with NVMe drive using:

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
-c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" \
--showlogs --network usb0 p3509-a02+p3767-0000 internal

If you want to use a USB drive (untested by me):

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device sda1 \
-c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" \
--showlogs --network usb0 p3509-a02+p3767-0000 internal

Flashing will take a while, and the flasher will exit once it's done. The Orin NX fan will not spin during the first part of the flashing process, but this is not a problem - the module will work just fine cooled passively by the heatsink for that part.

After flashing is done, it's time to move the module to Node 1. Sadly, there is no way to turn it off gracefully, since we have no input device to shut the operating system down, but this should not be a problem. Turn off the power using either the web panel or the command line (tpi -p off), as before when putting the module into device mode.


Finishing the OS Installation

Note: There might be a way to configure the Orin NX without a keyboard/mouse and monitor using one of the scripts in the tools folder (l4t_create_default_user.sh), but I haven't attempted or tested that.

To finish up the configuration:

  • disconnect the USB A-A cable used for flashing
  • move the module to Node 1
  • move the NVMe drive
  • insert the USB controller into the Mini PCIe slot
  • connect the keyboard and mouse to the USB controller
  • connect a monitor to the HDMI port (remember to set the SW1 switch for Jetson devices, otherwise you will not see any output on the monitor)
  • turn on the module power using either the web panel or the command line (tpi -p on)

Using a keyboard and mouse, go through the configuration steps visible on the monitor and wait for the setup to finish - until you get a desktop environment. These are the standard Ubuntu initial configuration steps.

At this stage, we have a bare operating system that does not even contain Jetpack - there are a few more required and suggested steps to perform.

These steps can be done over SSH. One of the setup steps asked for a hostname - if your PC/Mac has mDNS running, you can use that name directly to connect via SSH; otherwise, you need to find and use the IP address.
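For example, assuming the hostname orin was chosen during setup and the user is ubuntu (both are placeholders - substitute your own values):

```shell
# With mDNS (Avahi/Bonjour) available on your PC/Mac:
ssh ubuntu@orin.local

# Without mDNS, find the module's IP address (e.g. in your router's DHCP
# table) and connect to it directly:
ssh ubuntu@192.168.1.50
```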



FAN Profile

First, we need nano or any other editor of choice:

sudo apt install -y nano

Then, to change the FAN profile, invoke:

sudo systemctl stop nvfancontrol
sudo nano /etc/nvfancontrol.conf

Find the line containing `FAN_DEFAULT_PROFILE` - near the bottom of the file content:


And replace quiet with cool:


Press F3 (Ctrl+O) and F2 (Ctrl+X) to save and exit:



Remove the stored fan-control state and restart the service:

sudo rm /var/lib/nvfancontrol/status
sudo systemctl start nvfancontrol
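To confirm the new profile took effect, a quick check:

```shell
# The config should now list cool as the default profile
grep FAN_DEFAULT_PROFILE /etc/nvfancontrol.conf

# And the service should be running again
systemctl is-active nvfancontrol
```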


Power Mode

By default, the device runs in the 15W power mode - change it to MAXN using the settings in the top-right part of the screen:
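If you are working over SSH instead of the desktop, the nvpmodel command-line tool should achieve the same - note that the numeric ID of the MAXN mode can differ between Jetson modules, so check /etc/nvpmodel.conf first:

```shell
# Show the currently active power mode
sudo nvpmodel -q

# Switch to MAXN - mode 0 here is an assumption, verify it in /etc/nvpmodel.conf
sudo nvpmodel -m 0
```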



The Jetpack

As mentioned before, this (currently the only) method of installation does not provide Jetpack. It's time to install it.

To install Jetpack, let's first update the operating system:

sudo apt update
sudo apt -y upgrade
sudo apt -y dist-upgrade
sudo reboot

To install the Jetpack:

sudo apt -y install nvidia-jetpack
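To verify the installation afterwards, a quick check (JetPack installs CUDA under /usr/local/cuda):

```shell
# The meta-package and its components should be listed as installed (ii)
dpkg -l | grep nvidia-jetpack

# The CUDA compiler should report its version
/usr/local/cuda/bin/nvcc --version
```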



Testing

This section is optional - it shows how to test the module under full CPU and GPU load.

We'll be using TensorFlow to put the load on the GPU:

sudo apt -y install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo apt -y install python3-pip
sudo pip3 install -U pip testresources setuptools
sudo pip3 install -U numpy==1.21.1 future==0.18.2 mock==3.0.5 keras_preprocessing==1.1.2 keras_applications==1.0.8 gast==0.4.0 protobuf pybind11 cython pkgconfig packaging h5py==3.6.0
sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v51 tensorflow==2.11.0+nv23.01
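Before running the stress test, it's worth making sure TensorFlow can actually see the GPU - a quick sanity check:

```shell
# Should print a non-empty list containing /physical_device:GPU:0
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```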

To put some load on the CPU we'll use stress:

sudo apt -y install stress


Finally, reboot:

sudo reboot

Now you can open the Jetson Power GUI - it can be found in the upper-right corner when you click on the power profile, below the power mode settings.

Additionally, open 2 terminal windows.

Save this Python code into the test.py file - this is a neural network that does nothing useful (it trains on random noise), but puts a heavy load on the GPU. The size constants at the top are example values - tune them to adjust the load:

import numpy as np
import tensorflow as tf
from tensorflow.keras import optimizers, layers, models

# Example sizes - increase them to raise the GPU load
DATA_SHAPE = (256, 256, 3)
HIDDEN_LAYER_KERNELS = 64
HIDDEN_LAYERS = 8
DATASET_SIZE = 128
BATCH_SIZE = 8

model = models.Sequential()
model.add(layers.Conv2D(HIDDEN_LAYER_KERNELS, (3, 3), activation='relu', input_shape=DATA_SHAPE, strides=(1, 1), padding="same"))
model.add(layers.MaxPooling2D((2, 2), strides=(1, 1), padding="same"))
for _ in range(HIDDEN_LAYERS):
    model.add(layers.Conv2D(HIDDEN_LAYER_KERNELS, (5, 5), activation='relu', strides=(1, 1), padding="same"))
    model.add(layers.MaxPooling2D((5, 5), strides=(1, 1), padding="same"))

model.add(layers.Conv2D(2, (DATA_SHAPE[0] // 8, DATA_SHAPE[1] // 8), activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(10, activation='softmax'))
model.compile(optimizer=optimizers.Adam(), loss='categorical_crossentropy')

# Random noise as input and constant labels - the training result is
# meaningless, only the sustained GPU load matters
X = np.random.random((DATASET_SIZE,) + DATA_SHAPE).astype(np.float32)
y = np.ones((DATASET_SIZE, 10))
data = tf.data.Dataset.from_tensor_slices((X, y))
data = data.batch(BATCH_SIZE)

model.fit(data, epochs=1000)

In one terminal window run:

stress -c 8

to stress the CPU, and in another run:

python3.8 test.py

to stress the GPU at the same time. At this stage, the Orin NX might start showing over-current messages, which means we are putting more load on it than it can handle.


