A 0.190-pJ/bit 25.2-Gb/s/wire Inverter-Based AC-Coupled Transceiver for Short-Reach Die-to-Die Interfaces in 5-nm CMOS

This paper presents an Inverter-based Short-Reach AC-Coupled Transceiver (ISR-ACT) targeted at short-reach die-to-die communication over a silicon interposer or similar high-density interconnect. The ISR-ACT transmitter drives NRZ data into the line through a small on-chip coupling capacitor. The receiver amplifies the resulting low-swing pulses with a first-stage TIA so that the second-stage output toggles fully, while positive feedback to the input pad maintains the DC level on the line.
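
As a rough behavioral illustration (not the authors' circuit: the RC constant, hysteresis, and samples-per-bit below are arbitrary assumptions), the following sketch models the coupling capacitor as a first-order high-pass and the TIA-plus-toggle receiver as a latching comparator whose held state stands in for the DC-restoring positive feedback:

```python
import numpy as np

def simulate_act_link(bits, spb=32, rc_ui=0.2, vdd=1.0, hyst=0.05):
    """spb: samples per bit; rc_ui: coupling-network time constant in UIs (assumed)."""
    tx = np.repeat(np.asarray(bits, dtype=float) * vdd, spb)
    # First-order high-pass models the on-chip series capacitor driving the line:
    # each data edge becomes a decaying low-swing pulse.
    alpha = rc_ui * spb / (rc_ui * spb + 1.0)
    line = np.zeros_like(tx)
    prev_line, prev_tx = 0.0, 0.0  # line assumed idle-low before the burst
    for n, v in enumerate(tx):
        line[n] = alpha * (prev_line + v - prev_tx)
        prev_line, prev_tx = line[n], v
    # Receiver abstracted as a comparator with hysteresis: latching the state on
    # each pulse plays the role of the feedback that restores the pad's DC level.
    out, state = np.empty(len(line), dtype=int), 0
    for n, v in enumerate(line):
        if v > hyst:
            state = 1
        elif v < -hyst:
            state = 0
        out[n] = state
    return out[spb // 2::spb]  # sample mid-UI to recover the bits

bits = np.random.randint(0, 2, 64)
assert np.array_equal(simulate_act_link(bits), bits)
```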

Omer Shapira

Omer Shapira joined NVIDIA Research in 2023. His research interests are in Computer Graphics, Virtual and Extended Reality (XR), Human-Computer Interaction, and Haptics, with a current focus on AI-mediated human interaction.
From 2016, he worked in NVIDIA's Simulation Technology group, leading the Omniverse XR team and contributing to Isaac, DRIVE Sim, and NVIDIA's core XR technology and research.

Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization

Training deep networks with limited labeled data while achieving strong generalization is key to reducing human annotation effort. This is the goal of semi-supervised learning, which exploits more widely available unlabeled data to complement small labeled data sets. In this paper, we propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
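
A minimal sketch of the underlying idea, under toy assumptions (the architecture, inversion loop, and sizes below are illustrative, not the paper's model): a single generator maps one latent to an (image, label) pair, so segmenting a real image reduces to inverting it into the latent space and reading off the label head.

```python
import torch
import torch.nn as nn

class JointGenerator(nn.Module):
    """Toy generator of (image, label) pairs from one shared latent."""
    def __init__(self, z_dim=128, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 8 * 8 * 32), nn.Unflatten(1, (32, 8, 8)),
            nn.Upsample(scale_factor=4),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.image_head = nn.Conv2d(32, 3, 3, padding=1)          # RGB
        self.label_head = nn.Conv2d(32, n_classes, 3, padding=1)  # per-pixel logits

    def forward(self, z):
        h = self.backbone(z)
        return torch.tanh(self.image_head(h)), self.label_head(h)

def segment_by_inversion(gen, image, steps=200, lr=0.05):
    # Optimize z so the generated image matches the input, then read the labels
    # off the jointly modeled label head.
    z = torch.zeros(1, 128, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        fake, _ = gen(z)
        loss = ((fake - image) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return gen(z)[1].argmax(1)

gen = JointGenerator()
image = torch.rand(1, 3, 32, 32) * 2 - 1   # stand-in for a real test image
mask = segment_by_inversion(gen, image)    # (1, 32, 32) predicted class ids
```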

ATISS: Autoregressive Transformers for Indoor Scene Synthesis

The ability to synthesize realistic and diverse indoor furniture layouts, automatically or based on partial input, unlocks many applications, from better interactive 3D tools to data synthesis for training and simulation. In this paper, we present ATISS, a novel autoregressive transformer architecture for creating diverse and plausible synthetic indoor environments, given only the room type and its floor plan. In contrast to prior work, which poses scene synthesis as sequence generation, our model generates rooms as unordered sets of objects.
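
A minimal sketch of the unordered-set idea, with an assumed simplified attribute space and an MSE loss in place of ATISS's attribute decoders: omitting positional encodings makes the context permutation-invariant, and a fresh random ordering at every training step teaches the model that any remaining object may legitimately come next.

```python
import torch
import torch.nn as nn

class SetAutoregressor(nn.Module):
    def __init__(self, attr_dim=8, d_model=64):
        super().__init__()
        self.embed = nn.Linear(attr_dim, d_model)
        self.start = nn.Parameter(torch.zeros(1, 1, d_model))  # floor-plan token stand-in
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, attr_dim)  # predicts the next object's attributes

    def forward(self, placed):
        # placed: (B, K, attr_dim). No positional encoding is added, so the
        # already-placed objects act as an unordered set, not a sequence.
        B = placed.shape[0]
        ctx = torch.cat([self.start.expand(B, -1, -1), self.embed(placed)], dim=1)
        h = self.encoder(ctx).mean(dim=1)  # permutation-invariant pooling
        return self.head(h)

def training_step(model, scene, k):
    # scene: (B, N, attr_dim). Random permutation each step removes any
    # canonical object order from the training target.
    scene = scene[:, torch.randperm(scene.shape[1])]
    pred = model(scene[:, :k])
    return ((pred - scene[:, k]) ** 2).mean()  # ATISS itself uses mixture densities

model = SetAutoregressor()
scene = torch.randn(2, 6, 8)  # 2 rooms, 6 objects, 8 attributes each
loss = training_step(model, scene, k=3)
loss.backward()
```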

EditGAN: High-Precision Semantic Image Editing

Generative adversarial networks (GANs) have recently found applications in image editing. However, most GAN-based image editing methods either require large-scale datasets with semantic segmentation annotations for training, provide only high-level control, or merely interpolate between different images. Here, we propose EditGAN, a novel method for high-quality, high-precision semantic image editing, allowing users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car.
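
A minimal sketch of the editing loop under toy assumptions (the stand-in generator, sizes, and loss weights below are invented; EditGAN itself builds on StyleGAN with a jointly modeled segmentation branch): the latent is optimized so the generated labels match the user-edited mask while pixels outside the edited region stay fixed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyG(nn.Module):
    """Stand-in for a StyleGAN generator with a DatasetGAN-style label branch."""
    def __init__(self, z_dim=64, n_classes=4, size=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, size * size * 8),
                                 nn.Unflatten(1, (8, size, size)), nn.ReLU())
        self.img = nn.Conv2d(8, 3, 3, padding=1)
        self.seg = nn.Conv2d(8, n_classes, 3, padding=1)

    def forward(self, z):
        h = self.net(z)
        return torch.tanh(self.img(h)), self.seg(h)

def edit(gen, z0, edited_mask, region, steps=100, lam=10.0):
    """region: (1,1,H,W) binary map of user-touched pixels (assumed input)."""
    img0 = gen(z0)[0].detach()
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=0.02)
    for _ in range(steps):
        img, logits = gen(z)
        match = F.cross_entropy(logits, edited_mask)        # realize the new mask
        keep = (((img - img0) * (1 - region)) ** 2).mean()  # freeze untouched pixels
        loss = match + lam * keep
        opt.zero_grad(); loss.backward(); opt.step()
    return gen(z)[0].detach()

gen, z0 = ToyG(), torch.randn(1, 64)
edited_mask = torch.randint(0, 4, (1, 32, 32))   # user redraws part segmentation
region = torch.zeros(1, 1, 32, 32); region[..., 8:16, 8:16] = 1
edited_image = edit(gen, z0, edited_mask, region)
```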

BigDatasetGAN: Synthesizing ImageNet with Pixel-wise Annotations

Annotating images with pixel-wise labels is a time-consuming and costly process. Recently, DatasetGAN showcased a promising alternative: synthesizing a large labeled dataset via a generative adversarial network (GAN) by exploiting a small set of manually labeled, GAN-generated images. Here, we scale DatasetGAN to the class diversity of ImageNet. We take image samples from the class-conditional generative model BigGAN trained on ImageNet and manually annotate five images per class, for all 1k classes.
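
A pipeline sketch under toy assumptions (the stand-in conditional generator and the feature-to-label head below are invented for illustration): a small label head is fit on the generator features of the few annotated images, after which labeled data can be synthesized at scale.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondG(nn.Module):
    """Toy class-conditional generator standing in for BigGAN; it exposes its
    intermediate features so a label head can be trained on them."""
    def __init__(self, z_dim=64, n_classes=10, size=32):
        super().__init__()
        self.emb = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(2 * z_dim, size * size * 8),
                                 nn.Unflatten(1, (8, size, size)), nn.ReLU())
        self.to_img = nn.Conv2d(8, 3, 3, padding=1)

    def forward(self, z, y):
        feats = self.net(torch.cat([z, self.emb(y)], dim=1))
        return torch.tanh(self.to_img(feats)), feats

gen = CondG()
label_head = nn.Conv2d(8, 2, 1)  # tiny decoder: features -> fg/bg logits

# 1) Generate a few images per class and annotate them (random stand-in masks here).
z, y = torch.randn(5, 64), torch.randint(0, 10, (5,))
with torch.no_grad():
    _, feats = gen(z, y)
manual_masks = torch.randint(0, 2, (5, 32, 32))  # would come from human annotators

# 2) Fit the label head on the annotated features.
opt = torch.optim.Adam(label_head.parameters(), lr=1e-2)
for _ in range(100):
    loss = F.cross_entropy(label_head(feats), manual_masks)
    opt.zero_grad(); loss.backward(); opt.step()

# 3) Mass-produce a labeled synthetic dataset.
z, y = torch.randn(64, 64), torch.randint(0, 10, (64,))
with torch.no_grad():
    images, feats = gen(z, y)
    labels = label_head(feats).argmax(1)  # (64, 32, 32) pixel-wise annotations
```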

Polymorphic-GAN: Generating Aligned Samples across Multiple Domains with Learned Morph Maps

Modern image generative models show remarkable sample quality when trained on a single domain or class of objects. In this work, we introduce a generative adversarial network that can simultaneously generate aligned image samples from multiple related domains. We leverage the fact that a variety of object classes share common attributes, despite certain geometric differences. We propose Polymorphic-GAN, which learns features shared across all domains along with a per-domain morph layer that adapts the shared features to each domain's geometry.
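
A minimal sketch of the shared-features-plus-morph idea, with invented module sizes and a toy renderer: each domain owns a small layer that predicts a warp field, and the shared features are resampled through that field before rendering, so one latent yields geometrically aligned samples across domains.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolymorphicG(nn.Module):
    def __init__(self, z_dim=64, n_domains=3, size=32, ch=8):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(z_dim, size * size * ch),
                                    nn.Unflatten(1, (ch, size, size)), nn.ReLU())
        # One morph layer per domain: predicts a per-pixel (dx, dy) offset.
        self.morphs = nn.ModuleList(
            [nn.Conv2d(ch, 2, 3, padding=1) for _ in range(n_domains)])
        self.render = nn.Conv2d(ch, 3, 3, padding=1)  # toy shared renderer
        self.size = size

    def forward(self, z, domain):
        feats = self.shared(z)  # features common to all domains
        offsets = 0.1 * torch.tanh(self.morphs[domain](feats))  # bounded warp field
        # Base sampling grid in [-1, 1], shifted by the learned offsets.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, self.size),
                                torch.linspace(-1, 1, self.size), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).expand(z.shape[0], -1, -1, -1)
        grid = grid + offsets.permute(0, 2, 3, 1)
        morphed = F.grid_sample(feats, grid, align_corners=False)
        return torch.tanh(self.render(morphed))

gen = PolymorphicG()
z = torch.randn(4, 64)
# The same z renders geometrically aligned samples in every domain.
aligned = [gen(z, d) for d in range(3)]
```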

LION: Latent Point Diffusion Models for 3D Shape Generation

Denoising diffusion models (DDMs) have shown promising results in 3D point cloud synthesis. To advance 3D DDMs and make them useful for digital artists, we require (i) high generation quality, (ii) flexibility for manipulation and applications such as conditional synthesis and shape interpolation, and (iii) the ability to output smooth surfaces or meshes. To this end, we introduce the hierarchical Latent Point Diffusion Model (LION) for 3D shape generation.
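
A minimal sketch of the hierarchy under toy assumptions (the module sizes, MLP denoisers, and plain DDPM sampler below are illustrative, not LION's architecture): a global shape latent is generated first, per-point latents are generated conditioned on it, and a decoder maps both to point coordinates.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Predicts the noise added at step t (DDPM-style), optionally conditioned."""
    def __init__(self, dim, cond_dim=0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1 + cond_dim, 128), nn.SiLU(),
                                 nn.Linear(128, dim))

    def forward(self, x, t, cond=None):
        parts = [x, t.expand(x.shape[0], 1)]
        if cond is not None:
            parts.append(cond)
        return self.net(torch.cat(parts, dim=-1))

@torch.no_grad()
def sample(denoiser, shape, cond=None, steps=50):
    # Plain DDPM ancestral sampling with a linear beta schedule (simplified).
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1 - betas
    abar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)
    for t in reversed(range(steps)):
        eps = denoiser(x, torch.tensor([[t / steps]]), cond)
        x = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x

n_points, g_dim, p_dim = 256, 32, 4
global_ddm = Denoiser(g_dim)                 # stage 1: global shape latent
point_ddm = Denoiser(p_dim, cond_dim=g_dim)  # stage 2: per-point latents
decoder = nn.Linear(p_dim + g_dim, 3)        # toy decoder: latents -> xyz

z_global = sample(global_ddm, (1, g_dim))
z_points = sample(point_ddm, (n_points, p_dim), cond=z_global.expand(n_points, -1))
with torch.no_grad():
    cloud = decoder(torch.cat([z_points, z_global.expand(n_points, -1)], dim=-1))
# cloud: (256, 3) generated point cloud; the paper additionally supports mesh output.
```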