Parasitic-Aware Analog Circuit Sizing with Graph Neural Networks and Bayesian Optimization

Layout parasitics significantly impact the performance of analog integrated circuits, leading to discrepancies between schematic and post-layout performance and requiring several iterations to achieve design convergence. Prior work has accounted for parasitic effects during the initial design phase but relies on automated layout generation for estimating parasitics. In this work, we leverage recent developments in parasitic prediction using graph neural networks to eliminate the need for in-the-loop layout generation.
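The idea of replacing in-the-loop layout generation with a learned parasitic predictor can be sketched in a few lines. The snippet below is a toy illustration, not the paper's method: a stub function `predict_parasitic_cap` stands in for the trained graph neural network, simple random search stands in for Bayesian optimization, and the device model and figure of merit are invented for illustration only.

```python
import random

# Hypothetical stand-in for a trained GNN parasitic predictor:
# given candidate device sizes, estimate a parasitic capacitance (toy model).
def predict_parasitic_cap(width_um, length_um):
    # Illustration only: parasitics assumed to grow with device area.
    return 0.5 + 0.2 * width_um * length_um

# Toy parasitic-aware figure of merit: a bandwidth-like proxy that is
# penalized by the predicted parasitic load.
def estimated_performance(width_um, length_um):
    gm = width_um / length_um            # crude transconductance proxy
    c_par = predict_parasitic_cap(width_um, length_um)
    return gm / (1.0 + c_par)

def size_circuit(n_trials=200, seed=0):
    """Search device sizes against the parasitic-aware objective.

    Random search stands in here for the Bayesian optimizer; the key
    point is that no layout is generated inside the loop -- parasitics
    come from the (stubbed) predictor.
    """
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        w = rng.uniform(1.0, 20.0)       # candidate width in um
        l = rng.uniform(0.15, 1.0)       # candidate length in um
        score = estimated_performance(w, l)
        if best is None or score > best[0]:
            best = (score, w, l)
    return best

best_score, best_w, best_l = size_circuit()
```

Because the predictor is just a function call, each candidate evaluation is cheap, which is what makes many optimizer iterations affordable compared to running layout extraction per candidate.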


VS-QUANT: Per-Vector Scaled Quantization for Accurate Low-Precision Neural Network Inference

Quantization enables efficient acceleration of deep neural networks by reducing model memory footprint and exploiting low-cost integer math hardware units. Quantization maps floating-point weights and activations in a trained model to low-bitwidth integer values using scale factors. Quantizing too aggressively, however, degrades accuracy. When scale factors are shared at a coarse granularity across many dimensions of each tensor, the effective precision of individual elements within the tensor is limited.
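The granularity effect can be demonstrated with a toy example. The sketch below is not the VS-QUANT implementation; it just contrasts one scale factor shared across a whole tensor against one scale factor per small vector, using a simple max-based scale and 4-bit signed integers.

```python
def scale_for(values, bits=4):
    # Choose the scale so the largest magnitude maps to the top of the
    # signed integer range [-2^(b-1), 2^(b-1) - 1].
    return max(abs(v) for v in values) / (2 ** (bits - 1) - 1)

def quantize(values, scale, bits=4):
    # Round to the nearest integer step and clamp to the representable range.
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return [max(qmin, min(qmax, round(v / scale))) for v in values]

def dequantize(qvals, scale):
    return [q * scale for q in qvals]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Two 3-element "vectors": one with small weights, one with large ones.
weights = [0.01, -0.02, 0.015, 0.9, -0.8, 0.7]

# Per-tensor: a single scale shared by all elements. The large values
# set the scale, so the small values collapse toward zero.
s = scale_for(weights)
per_tensor = dequantize(quantize(weights, s), s)

# Per-vector: one scale per 3-element vector, so each vector uses the
# full integer range for its own dynamic range.
per_vector = []
for i in range(0, len(weights), 3):
    vec = weights[i:i + 3]
    sv = scale_for(vec)
    per_vector += dequantize(quantize(vec, sv), sv)
```

Comparing `mse(weights, per_vector)` with `mse(weights, per_tensor)` shows the finer granularity yielding lower reconstruction error, which is the intuition behind per-vector scaling.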

Siddharth Gururani

Siddharth Gururani is a Research Scientist at NVIDIA. Prior to joining NVIDIA, he was an AI Scientist at EA, where he worked on expressive speech synthesis, focusing on low-resource regimes and on interpretable features to encode prosody. He received his Ph.D.

Fayçal Aït Aoudia

Fayçal Aït Aoudia is a Senior Research Scientist at NVIDIA working on the convergence of wireless communications and machine learning. Before joining NVIDIA, he was a research scientist at Nokia Bell Labs, France. He is one of the maintainers and core developers of the Sionna open-source link-level simulator. He received his diploma in computer science from the Institut National des Sciences Appliquées de Lyon, France, in 2014, and his Ph.D. in signal processing from the University of Rennes 1, France, in 2017.

Peter Karkus

Peter is a Research Scientist at NVIDIA. Previously, he was a PhD candidate at the National University of Singapore and held visiting research appointments at MIT and CMU.

Peter's research vision is to build human-level robot intelligence by combining structure and learning. His interests span robotics, machine learning, and autonomous vehicles. His recent work focuses on neural networks that encode differentiable robot algorithms to learn partially observable planning, visual navigation, mapping, and localization tasks.

Karu Sankaralingam

Karu's research has pioneered the principles of dataflow computing, focusing on the role of architecture, microarchitecture, and the compiler. His research breakthroughs include constraint-theory-based compilation for spatial architectures, specialized datapaths that can be dynamically configured, hybrid dataflow/von Neumann execution, and new dataflow execution models that combine streaming and dataflow. His work has been featured in industry forums of Mentor and Synopsys, and has been covered by The New York Times, Wired, IEEE Spectrum, and Microprocessor Report.

Chen-Hsuan Lin

Chen-Hsuan Lin is a senior research scientist at NVIDIA Research. He received his Ph.D. in Robotics from Carnegie Mellon University, where he was advised by Simon Lucey. His research interests are computer vision, computer graphics, and generative AI applications, with a focus on 3D reconstruction and neural rendering problems for 3D content creation.