Compressing 1D Time-Channel Separable Convolutions using Sparse Random Ternary Matrices

We demonstrate that 1x1-convolutions in 1D time-channel separable convolutions may be replaced by constant, sparse random ternary matrices with weights in {−1, 0, +1}. Such layers do not perform any multiplications and do not require training. Moreover, the matrices may be generated on-chip during computation and therefore do not require any memory access. With the same parameter budget, we can afford deeper and more expressive models, improving the Pareto frontiers of existing models on several tasks.
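As a minimal sketch of the idea (in NumPy, with hypothetical function names and a 90% sparsity chosen purely for illustration): a seeded generator lets the ternary weight matrix be regenerated on the fly instead of stored, and the 1x1 convolution becomes a matrix product whose entries are only −1, 0, and +1, i.e., additions and subtractions.

```python
import numpy as np

def ternary_matrix(out_ch, in_ch, sparsity=0.9, seed=0):
    # Deterministically (re)generate a sparse ternary matrix from a seed,
    # so only the seed needs to be kept -- not the matrix itself.
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1, 1], size=(out_ch, in_ch))
    mask = rng.random((out_ch, in_ch)) >= sparsity  # keep ~(1 - sparsity)
    return (signs * mask).astype(np.int8)

def pointwise_conv(x, w):
    # A 1x1 conv over (channels, time) is a matrix product; with ternary
    # weights it reduces to sign-flips and additions.
    return w @ x

x = np.random.randn(64, 100)                     # (in_channels, time)
w = ternary_matrix(128, 64, sparsity=0.9, seed=42)
y = pointwise_conv(x, w)                         # (out_channels, time)
```

Because the matrix is a pure function of the seed, it can in principle be streamed from a hardware pseudo-random generator during computation, matching the "no memory access" claim above.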

GPU-Accelerated Partially Linear Multiuser Detection for 5G and Beyond URLLC Systems

We have implemented a recently proposed partially linear multiuser detection algorithm in reproducing kernel Hilbert spaces (RKHSs) on a GPU-accelerated platform. Our proof of concept combines the robustness of linear detection and non-linear detection for the non-orthogonal multiple access (NOMA) based massive connectivity scenario. Mastering the computation of the vast number of inner products (which involve kernel evaluations) is a challenge in ultra-low latency (ULL) applications due to the sub-millisecond latency requirement.
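A hedged sketch of the kernel-evaluation bottleneck (NumPy, hypothetical names; the actual detector and its RKHS formulation are more involved): the partially linear estimate combines a linear term with a kernel expansion over a dictionary of stored samples, so the per-symbol cost is dominated by pairwise Gaussian kernel evaluations, which vectorize well on a GPU.

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, gamma=1.0):
    # All pairwise kernel evaluations k(x, y) = exp(-gamma * ||x - y||^2),
    # vectorized so an array backend can compute them in one shot.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def partially_linear_detect(x, w_lin, dictionary, alphas, gamma=1.0):
    # Hypothetical sketch: detector output = linear term + kernel expansion
    # over a dictionary of samples (the non-linear RKHS part).
    k = gaussian_kernel_matrix(x[None, :], dictionary, gamma)[0]
    return float(w_lin @ x + alphas @ k)
```

With a dictionary of N samples, each detection requires N kernel evaluations; batching them as above is what makes sub-millisecond latency plausible on a GPU.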

Artificial Neural Networks generated by Low Discrepancy Sequences

Artificial neural networks can be represented by paths. We generate these paths as random walks on a dense network graph and find that the resulting sparse networks allow for deterministic initialization and even weights with fixed sign. Such networks can be trained sparse from scratch, avoiding the expensive procedure of first training a dense network and compressing it afterwards. Although the networks are sparse, their weights are accessed as contiguous blocks of memory.
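One way to picture the construction (a hypothetical sketch, not the paper's exact scheme): a low-discrepancy sequence such as the van der Corput sequence can deterministically generate a sparse connectivity pattern, so the pattern is regenerated from a counter rather than stored.

```python
def van_der_corput(n, base=2):
    # Low-discrepancy sequence: digit-reversed fractions in [0, 1).
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def sparse_edges(n_in, n_out, n_edges):
    # Hypothetical sketch: map a low-discrepancy sequence to (in, out)
    # index pairs, giving a deterministic sparse connectivity pattern
    # that fills the layer evenly and needs no memory to store.
    edges = []
    for k in range(n_edges):
        i = int(van_der_corput(k + 1, base=2) * n_in)
        j = int(van_der_corput(k + 1, base=3) * n_out)
        edges.append((i, j))
    return edges
```

Because the sequence covers the index space more evenly than pseudo-random draws, the resulting sparse layer avoids the clustered or orphaned units a plain random mask can produce.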

Towards Adaptive Digital Self-Interference Cancellation in Full-Duplex Wireless Transceivers: APSM vs. Neural Networks

We investigate the adaptive projected subgradient method (APSM) and neural network (NN) machine learning techniques to address the challenge of digital self-interference cancellation in full-duplex communications. To this end, we compare both approaches in terms of their interference suppression capabilities and computational complexity, and discuss their potential for continual training. Both approaches can take advantage of massively parallel processing in the digital domain, resulting in a significantly reduced end-to-end latency.
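For intuition, a single APSM-style update can be sketched as a projection of the current filter onto the hyperslab defined by the latest training sample (a minimal sketch with hypothetical parameter names, not the paper's full algorithm):

```python
import numpy as np

def apsm_step(w, x, d, eps=0.01, mu=1.0):
    # Hypothetical sketch of one APSM-style iteration: project the weight
    # vector w onto the hyperslab {w : |d - w^T x| <= eps} defined by the
    # latest sample (x, d); mu in (0, 2) relaxes the projection.
    e = d - w @ x
    if abs(e) <= eps:
        return w          # already consistent with this sample
    g = e - np.sign(e) * eps
    return w + mu * g * x / (x @ x)
```

Each sample yields an independent projection, which is what makes the method adaptive: new interference statistics are absorbed one projection at a time, without retraining from scratch.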

Adaptive NN-based OFDM Receivers: Computational Complexity vs. Achievable Performance

We revisit the design and retraining capabilities of neural network (NN)-based orthogonal frequency division multiplexing (OFDM) receivers that combine channel estimation, equalization and soft-demapping for time-varying and frequency-selective wireless channels. Attracted by the inherent advantages of small NNs in terms of computational complexity during inference and (re-)training, we first analyze the performance of different neural receiver architectures, including versions with reduced complexity.

Learning Joint Detection, Equalization and Decoding for Short-Packet Communications

We propose and practically demonstrate a joint detection and decoding scheme for short-packet wireless communications in scenarios that require first detecting the presence of a message before actually decoding it. For this, we extend the recently proposed serial Turbo-autoencoder neural network (NN) architecture and train it to find short messages that can be detected, synchronized, equalized and decoded all “at once” when sent over an unsynchronized channel with memory.

Adaptive Neural Network-based OFDM Receivers

We propose and examine the idea of continuously adapting state-of-the-art neural network (NN)-based orthogonal frequency division multiplexing (OFDM) receivers to current channel conditions. This online adaptation via retraining is mainly motivated by two reasons: First, receiver design typically focuses on universally optimal performance over a wide range of possible channel realizations.

Sionna RT: Differentiable Ray Tracing for Radio Propagation Modeling

Sionna™ is a GPU-accelerated open-source library for link-level simulations based on TensorFlow. Its latest release (v0.14) integrates a differentiable ray tracer (RT) for the simulation of radio wave propagation. This unique feature allows for the computation of gradients of the channel impulse response and other related quantities with respect to many system and environment parameters, such as material properties, antenna patterns, array geometries, as well as transmitter and receiver orientations and positions.

Graph Neural Networks for Channel Decoding

In this work, we propose a fully differentiable graph neural network (GNN)-based architecture for channel decoding and showcase a competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes. The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph that represents the forward error correction (FEC) code structure by replacing node and edge message updates with trainable functions.
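The core idea can be sketched generically (hypothetical names; in the actual decoder the message and update functions are trainable NNs and the graph is the Tanner graph of the FEC code):

```python
import numpy as np

def gnn_decode_step(node_h, edges, msg_fn, upd_fn):
    # One round of generic message passing: edges is a list of (src, dst)
    # pairs; msg_fn and upd_fn stand in for the trainable NN message and
    # node updates of the GNN decoder (hypothetical sketch).
    msgs = {}
    for s, d in edges:
        msgs.setdefault(d, []).append(msg_fn(node_h[s], node_h[d]))
    new_h = node_h.copy()
    for d, ms in msgs.items():
        # Aggregate incoming messages, then update the node state.
        new_h[d] = upd_fn(node_h[d], np.mean(ms, axis=0))
    return new_h
```

Running several such rounds and reading a bit decision off each variable node mirrors classical belief propagation; the difference is that here the update rules are learned rather than fixed by the code's parity checks.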