Chenhui Deng

Chenhui Deng is currently a Research Scientist at NVIDIA, where he focuses on leveraging graph-based machine learning techniques for circuit problems. He earned his PhD in Electrical and Computer Engineering from Cornell University in 2024. His research lies at the intersection of Machine Learning, Spectral Graph Theory, Electronic Design Automation, and VLSI.

Novel Transformer Model Based Clustering Method for Standard Cell Design Automation

Standard cells are essential components of modern digital circuit designs. As process technologies advance beyond 5nm, routability issues have worsened due to the decreasing number of routing tracks (RTs), the increasing number and complexity of design rules, and strict patterning rules. Standard cell design automation frameworks can automatically generate standard cell layouts, but they struggle to resolve the severe routability issues that arise at advanced nodes.
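
The abstract does not spell out the clustering pipeline, but the title suggests transformer-derived embeddings followed by a clustering step. Below is a minimal, hypothetical sketch of that general pattern in Python with PyTorch and scikit-learn; the feature dimensions, layer sizes, and k-means step are illustrative assumptions, not the paper's actual model.

# Hypothetical sketch: transformer embeddings + k-means clustering.
# All dimensions and the clustering choice are illustrative, not the
# architecture from the paper.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

num_cells, feat_dim, d_model = 64, 8, 32

# Per-cell features (e.g., pin counts, net degrees) -- random stand-ins here.
cell_feats = torch.randn(1, num_cells, feat_dim)  # (batch, cells, features)

embed = nn.Linear(feat_dim, d_model)
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

with torch.no_grad():
    z = encoder(embed(cell_feats)).squeeze(0)  # (cells, d_model) embeddings

# Group cells by embedding similarity; the cluster count is arbitrary here.
labels = KMeans(n_clusters=4, n_init=10).fit_predict(z.numpy())
print(labels)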

Quantum computing with subwavelength atomic arrays

Photon-mediated interactions in subwavelength atomic arrays have numerous applications in quantum science. In this paper, we explore the potential of three-level quantum emitters, or “impurities,” embedded in a two-dimensional atomic array to serve as a platform for quantum computation. By exploiting the altered behavior of impurities due to the dipole-dipole interactions mediated by the subwavelength array, we design and simulate a set of universal quantum gates consisting of the square root of iSWAP and single-qubit rotations.
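
For reference, the square root of iSWAP is a standard two-qubit entangling gate; together with arbitrary single-qubit rotations it forms a universal gate set. Here is a small NumPy check (my own illustration, independent of the paper's simulations) that the matrix squares to iSWAP and is unitary:

import numpy as np

s = 1 / np.sqrt(2)
SQRT_ISWAP = np.array([[1, 0,      0,      0],
                       [0, s,      1j * s, 0],
                       [0, 1j * s, s,      0],
                       [0, 0,      0,      1]])

ISWAP = np.array([[1, 0,  0,  0],
                  [0, 0,  1j, 0],
                  [0, 1j, 0,  0],
                  [0, 0,  0,  1]])

# sqrt(iSWAP) applied twice gives iSWAP, and the matrix is unitary.
assert np.allclose(SQRT_ISWAP @ SQRT_ISWAP, ISWAP)
assert np.allclose(SQRT_ISWAP @ SQRT_ISWAP.conj().T, np.eye(4))

# Single-qubit rotations supply the rest of the universal set, e.g. RY:
def ry(theta):
    c, sn = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -sn], [sn, c]])

# One circuit step on |01>: RY on qubit 0, then the entangling gate.
state = SQRT_ISWAP @ np.kron(ry(0.7), np.eye(2)) @ np.array([0, 1, 0, 0])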

Quantum Goemans-Williamson Algorithm with the Hadamard Test and Approximate Amplitude Constraints

Semidefinite programs are optimization methods with a wide array of applications, such as approximating difficult combinatorial problems. A prominent example is the Goemans-Williamson algorithm, a popular relaxation technique for integer programs such as MaxCut. We introduce a variational quantum algorithm for the Goemans-Williamson algorithm that uses only n+1 qubits, a constant number of circuit preparations, and poly(n) expectation values in order to approximately solve semidefinite programs with up to N=2^n variables and M∼O(N) constraints.
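
As a classical point of reference (the paper's quantum algorithm replaces this classical pipeline), the Goemans-Williamson approach to MaxCut solves an SDP relaxation and then rounds with a random hyperplane. A minimal sketch using cvxpy, on a small hand-made graph as a stand-in:

import cvxpy as cp
import numpy as np

# Toy weighted graph (symmetric adjacency matrix, zero diagonal).
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = W.shape[0]

# SDP relaxation: maximize (1/4) * sum_ij W_ij (1 - X_ij)
# over PSD matrices X with unit diagonal.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]
objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, np.ones((n, n)) - X)))
cp.Problem(objective, constraints).solve()

# Recover unit vectors from X via eigendecomposition (clip tiny negatives).
vals, vecs = np.linalg.eigh(X.value)
V = vecs * np.sqrt(np.clip(vals, 0, None))  # rows are the relaxed vectors

# Random-hyperplane rounding: sign of projection onto a Gaussian vector.
r = np.random.randn(n)
cut = np.sign(V @ r)
cut_value = 0.25 * np.sum(W * (1 - np.outer(cut, cut)))
print(cut, cut_value)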

Towards a scalable discrete quantum generative adversarial neural network

Quantum generative adversarial networks (QGANs) have been studied in the context of quantum machine learning for several years, but there has not yet been a proposal for a fully quantum GAN with both a quantum generator and a quantum discriminator. We introduce a fully quantum GAN intended for use with binary data. The architecture incorporates several features found in other classical and quantum machine learning models that had not previously been used in conjunction.
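
To make the generator/discriminator pairing concrete, here is a heavily simplified, hypothetical sketch (not the paper's architecture) of a fully quantum adversarial setup on 3-bit strings, using PennyLane: one parameterized circuit defines the generator's distribution over bitstrings, and a second circuit scores bitstrings loaded as basis states. The target data and ansatz are my own toy choices.

import itertools
import pennylane as qml
from pennylane import numpy as np

n = 3
dev = qml.device("default.qubit", wires=n)

@qml.qnode(dev)
def gen_probs(theta):
    # Generator: hardware-efficient ansatz; returns a distribution
    # over all 2^n bitstrings.
    for i in range(n):
        qml.RY(theta[i], wires=i)
    for i in range(n - 1):
        qml.CNOT(wires=[i, i + 1])
    for i in range(n):
        qml.RY(theta[n + i], wires=i)
    return qml.probs(wires=range(n))

@qml.qnode(dev)
def disc_score(phi, bits):
    # Discriminator: score a basis-encoded bitstring in [-1, 1].
    qml.BasisState(np.array(bits, requires_grad=False), wires=range(n))
    for i in range(n):
        qml.RY(phi[i], wires=i)
    for i in range(n - 1):
        qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))

real_data = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]  # even parity
all_bits = list(itertools.product([0, 1], repeat=n))

def fake_score(theta, phi):
    p = gen_probs(theta)
    return sum(p[i] * disc_score(phi, b) for i, b in enumerate(all_bits))

def real_score(phi):
    return sum(disc_score(phi, b) for b in real_data) / len(real_data)

theta = np.array([0.3] * (2 * n), requires_grad=True)
phi = np.array([0.5] * n, requires_grad=True)

for step in range(20):
    # Discriminator ascends (real - fake); generator ascends fake.
    phi = phi + 0.2 * qml.grad(lambda ph: real_score(ph) - fake_score(theta, ph))(phi)
    theta = theta + 0.2 * qml.grad(lambda th: fake_score(th, phi))(theta)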

Optimized geometries for cooperative photon storage in an impurity coupled to a two-dimensional atomic array

The collective modes of two-dimensional ordered atomic arrays can modify the radiative environment of embedded atomic impurities. We analyze the role of the lattice geometry on the impurity's emission linewidth by comparing the effective impurity decay rate obtained for all noncentered Bravais lattices and an additional honeycomb lattice. We demonstrate that the lattice geometry plays a crucial role in determining the effective decay rate for the impurity.
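
To illustrate the geometry comparison only (the decay-rate calculation itself requires the dipole-dipole Green's function, which is beyond this sketch), one can generate the candidate lattices at a fixed atom density. The lattice vectors below are standard; the rectangular/oblique parameters and the density normalization are my own assumptions.

import numpy as np

def bravais_points(a1, a2, extent=10):
    # Finite patch of the Bravais lattice {i*a1 + j*a2}.
    idx = np.arange(-extent, extent + 1)
    i, j = np.meshgrid(idx, idx, indexing="ij")
    pts = i[..., None] * a1 + j[..., None] * a2
    return pts.reshape(-1, 2)

def unit_density(a1, a2, atoms_per_cell=1):
    # Rescale lattice vectors so the atom density is one per unit area.
    area = abs(a1[0] * a2[1] - a1[1] * a2[0])
    s = np.sqrt(atoms_per_cell / area)
    return s * a1, s * a2

# The four noncentered 2D Bravais lattices (rectangular and oblique
# aspect ratios/angles are illustrative choices).
lattices = {
    "square":      (np.array([1.0, 0.0]), np.array([0.0, 1.0])),
    "hexagonal":   (np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])),
    "rectangular": (np.array([1.0, 0.0]), np.array([0.0, 1.5])),
    "oblique":     (np.array([1.0, 0.0]), np.array([0.6, 1.2])),
}
patches = {name: bravais_points(*unit_density(a1, a2))
           for name, (a1, a2) in lattices.items()}

# Honeycomb is not a Bravais lattice: hexagonal lattice plus a two-atom
# basis (the basis doubles the density, hence atoms_per_cell=2).
a1, a2 = unit_density(*lattices["hexagonal"], atoms_per_cell=2)
hex_pts = bravais_points(a1, a2)
patches["honeycomb"] = np.concatenate([hex_pts, hex_pts + (a1 + a2) / 3])

for name, pts in patches.items():
    print(name, pts.shape)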

Variational quantum optimization with multibasis encodings

Despite extensive research efforts, few quantum algorithms for classical optimization demonstrate a realizable quantum advantage. The utility of many quantum algorithms is limited by high requisite circuit depth and nonconvex optimization landscapes. We tackle these challenges by introducing a variational quantum algorithm that benefits from two innovations: multibasis graph encodings using single-qubit expectation values and nonlinear activation functions. Our technique results in increased observed optimization performance and a factor-of-two reduction in the number of required qubits.
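
As a classical-limit illustration of the nonlinear-activation loss (my own stand-in: single-qubit expectation values are replaced by cosines of free parameters, so no quantum circuit is simulated), the relaxed MaxCut objective can be minimized with plain gradient descent:

import numpy as np

# Toy MaxCut instance: symmetric weights, zero diagonal.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = W.shape[0]
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, n)

# Stand-in for expectation values: <Z_i> -> cos(theta_i). Loss with
# nonlinear activation: L = 1/2 * sum_ij W_ij tanh(z_i) tanh(z_j),
# minimized so that connected vertices take opposite signs.
for step in range(200):
    z = np.cos(theta)
    t = np.tanh(z)
    # dL/dtheta_k = (W t)_k * (1 - t_k^2) * (-sin theta_k)
    grad = (W @ t) * (1 - t**2) * (-np.sin(theta))
    theta -= 0.1 * grad

cut = np.sign(np.tanh(np.cos(theta)))
cut_value = 0.25 * np.sum(W * (1 - np.outer(cut, cut)))
print(cut, cut_value)

# In the paper's multibasis encoding, two graph vertices share one qubit
# (one read out in <Z>, one in <X>), which is where the factor-of-two
# qubit saving comes from; here every vertex gets its own parameter.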

GPU/ML-Enhanced Large Scale Global Routing Contest

Modern VLSI design flows demand scalable global routing techniques applicable across diverse design stages. In response, the ISPD 2024 contest pioneers the first GPU/ML-enhanced global routing competition, leveraging advancements in GPU-accelerated computing platforms and machine learning techniques to address scalability challenges. Large-scale benchmarks, containing up to 50 million cells, offer test cases to assess global routers' runtime and memory scalability.