Sionna RT: Technical Report

Sionna is an open-source, GPU-accelerated library that, as of version 0.14, incorporates a ray tracer for simulating radio wave propagation. A unique feature of Sionna RT is differentiability, enabling the computation of gradients of channel impulse responses (CIRs), radio maps, and other related metrics with respect to system and environmental parameters, such as material properties, antenna patterns, and array geometries. The release of Sionna 1.0 provides a complete overhaul of the ray tracer, significantly improving its speed, memory efficiency, and extensibility.
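
To make the differentiability idea concrete without depending on the Sionna API, here is a self-contained toy sketch (the two-ray geometry, carrier frequency, and use of JAX autodiff are our assumptions, not Sionna's implementation): it builds a miniature two-path CIR and differentiates the received power with respect to a material parameter, the ground's relative permittivity.

    # Toy illustration of differentiable ray tracing: a two-ray channel
    # impulse response whose received power is differentiated w.r.t. a
    # material parameter. NOT the Sionna API; a self-contained JAX sketch.
    import jax
    import jax.numpy as jnp

    C = 3e8          # speed of light [m/s]
    FREQ = 3.5e9     # carrier frequency [Hz] (illustrative)

    def cir_two_ray(eps_r, d=100.0, h_tx=10.0, h_rx=1.5):
        """Path gains and delays of a LoS ray plus a ground reflection.

        eps_r: relative permittivity of the ground (the parameter we
        differentiate through).
        """
        lam = C / FREQ
        # Path lengths of the direct and ground-reflected rays
        d_los = jnp.sqrt(d**2 + (h_tx - h_rx)**2)
        d_ref = jnp.sqrt(d**2 + (h_tx + h_rx)**2)
        # Grazing angle and Fresnel reflection coefficient (TE polarization)
        theta = jnp.arctan((h_tx + h_rx) / d)
        s = jnp.sin(theta)
        root = jnp.sqrt(eps_r - jnp.cos(theta)**2)
        gamma = (s - root) / (s + root)
        # Complex path gains (free-space loss and phase) and delays
        k = 2 * jnp.pi / lam
        a_los = lam / (4 * jnp.pi * d_los) * jnp.exp(-1j * k * d_los)
        a_ref = gamma * lam / (4 * jnp.pi * d_ref) * jnp.exp(-1j * k * d_ref)
        return jnp.stack([a_los, a_ref]), jnp.stack([d_los, d_ref]) / C

    def rx_power_db(eps_r):
        a, _ = cir_two_ray(eps_r)
        return 10.0 * jnp.log10(jnp.abs(jnp.sum(a))**2)

    # Gradient of received power w.r.t. the ground permittivity
    grad_fn = jax.grad(rx_power_db)
    print(rx_power_db(5.0), grad_fn(5.0))

Sionna RT applies the same principle at scale: the whole ray-traced scene is one differentiable program, so gradients with respect to material or antenna parameters come out of automatic differentiation rather than finite differences.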

SALAD: Self-Adaptive Link Adaptation

Adapting the modulation and coding scheme (MCS) to the wireless link quality is critical for maximizing spectral efficiency while ensuring reliability. 

We propose SALAD (self-adaptive link adaptation), an algorithm that exclusively leverages ACK/NACK feedback to reliably track the evolution of the signal-to-interference-plus-noise ratio (SINR), achieving high spectral efficiency while keeping the long-term block error rate (BLER) near a desired target. 
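
The abstract does not spell out SALAD's update rules. For orientation, the sketch below shows the classical outer-loop link adaptation (OLLA) baseline that works from the same ACK/NACK feedback: an SINR back-off is nudged up on NACKs and down on ACKs, with a step ratio chosen so the average drift vanishes exactly at the target BLER. All parameter values are illustrative; this is not SALAD itself.

    # Classical OLLA sketch: the ACK/NACK-driven baseline, not SALAD.
    # With the step ratio below, BLER * step_up == (1 - BLER) * step_down
    # holds exactly when BLER equals the target, so the offset drift is zero
    # at the desired operating point.

    def olla_offset_update(offset_db, ack, bler_target=0.1, step_up_db=0.5):
        """Update the SINR back-off after one ACK/NACK observation."""
        step_down_db = step_up_db * bler_target / (1.0 - bler_target)
        return offset_db + (-step_down_db if ack else step_up_db)

    def select_mcs(sinr_est_db, offset_db, mcs_thresholds_db):
        """Pick the highest MCS whose SINR requirement the adjusted estimate
        meets; mcs_thresholds_db is an ascending list of per-MCS thresholds
        (assumed given, e.g., from link-level simulations)."""
        adjusted = sinr_est_db - offset_db
        mcs = 0
        for i, thr in enumerate(mcs_thresholds_db):
            if adjusted >= thr:
                mcs = i
        return mcs

    # Example: one NACK followed by ACKs nudges the offset back down
    offset = 0.0
    for ack in [False, True, True, True]:
        offset = olla_offset_update(offset, ack)
    print(select_mcs(12.0, offset, mcs_thresholds_db=[0, 5, 10, 15]))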

Sionna Research Kit: A GPU-Accelerated Research Platform for AI-RAN

We introduce the NVIDIA Sionna Research Kit, a GPU-accelerated research platform for developing and testing AI/ML algorithms in 5G NR cellular networks. 

Powered by the NVIDIA Jetson AGX Orin, the platform leverages accelerated computing to deliver high throughput and real-time signal processing, while offering the flexibility of a software-defined stack. 

Verification of Producer-Consumer Synchronization in GPU Programs

Previous efforts to formally verify code written for GPUs have focused solely on kernels written within the traditional data-parallel GPU programming model. No previous work has considered the higher-performance but more complex warp-specialized kernels based on producer-consumer named barriers available on current hardware. In this work, we present the first formal operational semantics for named barriers and define what it means for a warp-specialized kernel to be correct.
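
The paper's formal semantics are not reproduced here. As a rough intuition for what named barriers do, the toy Python model below (class and method names are ours) mimics the PTX bar.arrive / bar.sync pair: arrivals are counted per barrier, arrive never blocks, and sync blocks until the expected count is reached, which is what lets a producer warp hand data to a consumer warp.

    # Toy executable model of producer-consumer named barriers, in the
    # spirit of PTX bar.arrive / bar.sync. Illustrative only; real hardware
    # counts arrivals in warp units with per-barrier expected counts.
    import threading

    class NamedBarrier:
        def __init__(self, expected):
            self.expected = expected        # arrivals needed to fire
            self.count = 0
            self.generation = 0
            self.cv = threading.Condition()

        def arrive(self):
            """Non-blocking arrival (models bar.arrive): count it, keep going."""
            with self.cv:
                self._register()

        def sync(self):
            """Blocking arrival (models bar.sync): wait until the barrier fires."""
            with self.cv:
                gen = self.generation
                self._register()
                while self.generation == gen:
                    self.cv.wait()

        def _register(self):
            self.count += 1
            if self.count == self.expected:  # all expected parties arrived
                self.count = 0
                self.generation += 1         # fire: release this generation
                self.cv.notify_all()

    buf = []
    barrier = NamedBarrier(expected=2)

    def producer():                          # the producer "warp"
        buf.append("data")                   # write into shared storage
        barrier.arrive()                     # signal availability, continue

    def consumer():                          # the consumer "warp"
        barrier.sync()                       # block until the producer arrived
        print(buf[0])                        # safe: write happens-before read

    threads = [threading.Thread(target=f) for f in (producer, consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The verification question the paper addresses is exactly the one this pattern raises: proving that every consumer read is ordered after the matching producer write, for all interleavings, without the deadlocks or races that mismatched arrive/sync counts can introduce.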

Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs

Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the limited amounts of high-resolution training data. To address these issues, we propose Codomain Attention Neural Operator (CoDA-NO), which tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems. Specifically, we extend positional encoding, self-attention, and normalization layers to function spaces.
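
Based on our reading of the abstract, codomain tokenization can be pictured as follows (the shapes, projections, and scaling below are illustrative assumptions, not the paper's architecture): each physical variable is one token whose embedding is its entire discretized function, and attention scores are inner products over the spatial domain, so the layer is resolution-agnostic up to quadrature.

    # Minimal sketch of codomain (channel-wise) self-attention.
    import jax
    import jax.numpy as jnp

    def codomain_attention(u, wq, wk, wv):
        """One codomain self-attention layer.

        u:  (n_vars, n_grid, d)  each variable is a token: a d-feature
            function sampled on a shared spatial grid.
        w*: (d, d) pointwise (per-grid-point) projections.
        """
        q, k, v = u @ wq, u @ wk, u @ wv     # query/key/value *functions*
        # Scores via L2 inner products over the domain (uniform quadrature)
        scores = jnp.einsum("igd,jgd->ij", q, k) / (q.shape[1] * q.shape[2]) ** 0.5
        attn = jax.nn.softmax(scores, axis=-1)   # (n_vars, n_vars)
        return jnp.einsum("ij,jgd->igd", attn, v)  # mix value functions

    # Example: 3 coupled variables (e.g., velocity-x, velocity-y, pressure)
    u = jax.random.normal(jax.random.PRNGKey(0), (3, 64, 8))
    w = [jax.random.normal(jax.random.PRNGKey(i), (8, 8)) / 8**0.5 for i in (1, 2, 3)]
    print(codomain_attention(u, *w).shape)       # (3, 64, 8)

Because tokens index variables rather than spatial locations, a model pretrained on one PDE system can, in principle, be extended to a coupled system by adding variable tokens, which is what enables the self-supervised pretraining across multiple PDE systems.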

Huge ensembles – Part 2: Properties of a huge ensemble of hindcasts generated with spherical Fourier neural operators

In Part 1, we created an ensemble based on spherical Fourier neural operators. As initial-condition perturbations, we used bred vectors, and as model perturbations, we used multiple checkpoints trained independently from scratch. Diagnostics that assess physical fidelity show that our ensemble performs comparably to operational weather forecasting systems while requiring orders-of-magnitude fewer computational resources. Here in Part 2, we generate a huge ensemble (HENS), with 7424 members initialized each day of summer 2023.
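
The bred-vector perturbations mentioned above follow a classical recipe: let the model's own dynamics grow a random perturbation, rescaling it to a fixed amplitude each cycle so it aligns with the flow's fastest-growing directions. A minimal sketch under illustrative assumptions (a toy Lorenz-63 step stands in for the SFNO forecast model; amplitude and cycle count are arbitrary):

    # Bred-vector sketch; `step` and the rescaling norm are placeholders
    # for the papers' SFNO model and their amplitude/cycle choices.
    import numpy as np

    def breed_vector(step, x0, n_cycles=5, amplitude=0.01, rng=None):
        """Grow a flow-dependent perturbation by repeated rescaled integration.

        step: one model time step, state -> state
        x0:   control initial state
        """
        rng = rng or np.random.default_rng(0)
        pert = rng.standard_normal(x0.shape)
        pert *= amplitude / np.linalg.norm(pert)
        x = x0
        for _ in range(n_cycles):
            x_next = step(x)
            pert = step(x + pert) - x_next            # flow grows the perturbation
            pert *= amplitude / np.linalg.norm(pert)  # rescale to fixed amplitude
            x = x_next
        return pert  # add (+/-) to the analysis state to form ensemble members

    # Toy usage with a Lorenz-63 Euler step (placeholder for the SFNO model)
    def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8/3):
        dx = np.array([s*(x[1]-x[0]), x[0]*(r-x[2])-x[1], x[0]*x[1]-b*x[2]])
        return x + dt * dx

    print(breed_vector(lorenz_step, np.array([1.0, 1.0, 1.0])))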

Huge ensembles – Part 1: Design of ensemble weather forecasts using spherical Fourier neural operators

Simulating low-likelihood, high-impact extreme weather events in a warming world is a significant and challenging task for current ensemble forecasting systems. While these systems presently use up to 100 members, larger ensembles could enrich the sampling of internal variability and may capture the long tails associated with climate hazards better than traditional ensemble sizes. Due to computational constraints, however, it is infeasible to generate huge ensembles (comprising 1000–10,000 members) with traditional, physics-based numerical models.

Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere

Fourier Neural Operators (FNOs) have proven to be an efficient and effective method for resolution-independent operator learning in a broad variety of application areas across scientific machine learning. A key reason for their success is their ability to accurately model long-range dependencies in spatio-temporal data by learning global convolutions in a computationally efficient manner.
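
That efficient global convolution is the spectral layer at the heart of every FNO: transform to the spectral domain, multiply a truncated set of modes by learned weights, and transform back. A minimal 1D sketch follows (mode count and shapes are illustrative; SFNO replaces the FFT with a spherical harmonic transform so the same idea respects spherical geometry).

    # Global convolution as pointwise multiplication in the spectral domain.
    import jax
    import jax.numpy as jnp

    def spectral_conv1d(u, weights):
        """u: (n_grid, d_in) real signal; weights: (n_modes, d_in, d_out) complex.

        Keeping only the lowest n_modes Fourier modes makes the layer
        resolution-independent: the same weights apply at any grid size.
        """
        n = u.shape[0]
        u_hat = jnp.fft.rfft(u, axis=0)              # (n//2 + 1, d_in)
        n_modes = weights.shape[0]
        out_hat = jnp.zeros((u_hat.shape[0], weights.shape[-1]), dtype=u_hat.dtype)
        mixed = jnp.einsum("mi,mio->mo", u_hat[:n_modes], weights)
        out_hat = out_hat.at[:n_modes].set(mixed)    # truncate high frequencies
        return jnp.fft.irfft(out_hat, n=n, axis=0)   # back to grid space

    key = jax.random.PRNGKey(0)
    u = jax.random.normal(key, (128, 4))             # 128 grid points, 4 channels
    w = jax.random.normal(key, (16, 4, 8)) + 0j      # 16 retained modes
    print(spectral_conv1d(u, w).shape)               # (128, 8)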