Anima Anandkumar

Animashree (Anima) Anandkumar joined NVIDIA as director of machine learning research in 2018. She has also been a Bren Professor in the CMS department at Caltech since 2017. Her research spans both theoretical and practical aspects of machine learning. In particular, she has spearheaded research in tensor-algebraic methods, large-scale learning, deep learning, probabilistic models, and non-convex optimization.

Anima is the recipient of several awards, including the Alfred P. Sloan Fellowship, the NSF CAREER Award, Young Investigator Awards from the Air Force and Army research offices, faculty fellowships from Microsoft, Google, and Adobe, and several best-paper awards. She is the youngest named professor at Caltech, the highest honor the institute bestows on an individual faculty member. She is part of the World Economic Forum's Expert Network, which consists of leading experts from academia, business, government, and the media. She has been featured in documentaries by PBS, KPCC, and Wired magazine, and in articles by MIT Technology Review, Forbes, YourStory, O'Reilly Media, and others.

Anima received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, a visiting researcher at Microsoft Research New England in 2012 and 2014, an assistant professor at U.C. Irvine from 2010 to 2016, an associate professor at U.C. Irvine from 2016 to 2017, and a principal scientist at Amazon Web Services from 2016 to 2018.

Selected publications: 

  • Tensor Decompositions for Learning Latent Variable Models, JMLR, 2014.
  • signSGD: Compressed Optimisation for Non-Convex Problems, ICML, 2018.
  • Combining Symbolic Expressions and Black-Box Function Evaluations in Neural Programs, ICLR, 2018.
  • Learning From Noisy Singly-labeled Data, ICLR, 2018.
Additional Research Areas: 

Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects

Using synthetic data for training deep neural networks for robotic manipulation holds the promise of an almost unlimited amount of pre-labeled training data, generated safely out of harm's way. One of the key challenges of synthetic data, to date, has been to bridge the so-called "reality gap", so that networks trained on synthetic data operate correctly when exposed to real-world data. We explore the reality gap in the context of 6-DoF pose estimation of known objects from a single RGB image.
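
Part of the appeal of synthetic data is that every rendered image comes with an exact, cost-free pose label, while nuisance factors (lighting, background, object placement) can be randomized so the network learns to ignore them. The Python sketch below illustrates that domain-randomization idea under simplifying assumptions; the render function and its parameters are hypothetical placeholders, not the actual pipeline used in this work.

    import math
    import random

    def randomized_training_sample(render, obj_mesh):
        # Randomize nuisance factors so the trained network treats them
        # as noise -- one common way to help bridge the reality gap.
        # NOTE: render() is a hypothetical renderer interface.
        pose = {
            "translation": [random.uniform(-0.5, 0.5) for _ in range(3)],
            "rotation_euler": [random.uniform(0.0, 2 * math.pi) for _ in range(3)],
        }
        lighting = {"intensity": random.uniform(0.2, 2.0)}
        background = random.choice(["noise", "texture", "photo"])
        image = render(obj_mesh, pose, lighting, background)
        # The randomized pose doubles as an exact training label.
        return image, pose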

Superpixel Sampling Networks

Superpixels provide an efficient low/mid-level representation of image data, which greatly reduces the number of image primitives for subsequent vision tasks. While various superpixel computation models exist, they are not differentiable, making them difficult to integrate into otherwise end-to-end trainable deep neural networks. In this work, we develop a new differentiable model for superpixel sampling that better leverages deep networks for learning superpixel segmentation.
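
One standard way to make such a clustering model differentiable is to replace the hard pixel-to-superpixel assignment of SLIC-style algorithms with a soft assignment, so gradients can flow through it. The NumPy sketch below shows that relaxation; it is illustrative only, not the exact formulation of this work.

    import numpy as np

    def soft_superpixel_assignment(features, centers, n_iters=5, temperature=1.0):
        # features: (N, D) per-pixel features (e.g. color plus xy position)
        # centers:  (K, D) initial superpixel centers
        for _ in range(n_iters):
            # (N, K) squared distances from every pixel to every center.
            d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            # Softmax over negative distances: a differentiable analogue
            # of the hard argmin assignment used by classical SLIC.
            logits = -d2 / temperature
            logits -= logits.max(axis=1, keepdims=True)  # numerical stability
            q = np.exp(logits)
            q /= q.sum(axis=1, keepdims=True)
            # Update each center as the soft (weighted) mean of all pixels.
            centers = (q[:, :, None] * features[:, None, :]).sum(axis=0)
            centers /= q.sum(axis=0)[:, None] + 1e-8
        return q, centers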

Yu-Hsin Chen

Yu-Hsin Chen joined NVIDIA Research in August 2018 and is a member of the Architecture Research Group. His current research focuses on the design of computer architectures for machine learning and domain-specific processors. He received his Ph.D. in 2018 under the supervision of Prof. Vivienne Sze and his M.S. in 2013, both from MIT, and his B.S. in 2009 from National Taiwan University, Taiwan. His work on dataflows for CNN accelerators was selected as one of the Top Picks in Computer Architecture in 2016. He was also the recipient of the 2015 NVIDIA Graduate Fellowship.

Main Field of Interest: 

Optimizing Software-Directed Instruction Replication for GPU Error Detection

Application execution on safety-critical and high-performance computer systems must be resilient to transient errors. As GPUs become more pervasive in such systems, they must supplement ECC/parity for major storage structures with reliability techniques that cover more of the GPU hardware logic. Instruction duplication has been explored for CPU resilience; however, it has never been studied in the context of GPUs, and it is unclear whether the performance and design choices it presents make it a feasible GPU solution.
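
The underlying duplicate-and-compare principle is simple, even though an efficient GPU realization (which registers to shadow, where to insert checks, how to keep the duplicated stream off the critical path) is the hard part. A conceptual Python sketch, purely illustrative and far removed from the compiler-level instruction duplication studied here:

    def duplicated_call(fn, *args):
        # Run the computation twice; a transient fault that corrupts one
        # copy shows up as a mismatch between the two results.
        # (Real schemes duplicate instructions and registers inside the
        # compiled GPU kernel rather than re-running whole functions.)
        first = fn(*args)
        second = fn(*args)
        if first != second:
            raise RuntimeError("transient error detected: redundant results differ")
        return first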

Animesh Garg

My current research focuses on machine learning algorithms for perception and control in robotics. I am specifically interested in enabling efficient imitation in robot learning and human-robot interaction.

Most recently, I was a postdoctoral researcher at the Stanford AI Lab, working with Fei-Fei Li and Silvio Savarese. I received my M.S. in Computer Science and Ph.D. in Operations Research from UC Berkeley in 2016, where I was advised by Ken Goldberg in the Automation Lab as part of the Berkeley AI Research Lab (BAIR). I also worked closely with Pieter Abbeel, Alper Atamturk, and UCSF Radiation Oncology.

For more on my research work please visit: http://ai.stanford.edu/~garg/research/


Main Field of Interest: 

Steerable application-adaptive near eye displays

The design challenges of see-through near-eye displays can be mitigated by specializing an augmented reality device for a particular application. We present a novel optical design for augmented reality near-eye displays that exploits 3D stereolithography printing techniques to achieve characteristics similar to those of progressive prescription binoculars. We propose manufacturing interchangeable optical components using 3D printing, leading to arbitrarily shaped static projection screen surfaces that adapt to the targeted applications.

Correlation-Aware Semi-Analytic Visibility for Antialiased Rendering

Geometric aliasing is a persistent challenge for real-time rendering. Hardware multisampling remains limited to 8×, analytic coverage fails to capture correlated visibility samples, and spatial and temporal postfiltering primarily target edges of superpixel primitives. We describe a novel semi-analytic representation of coverage designed to make progress on geometric antialiasing for subpixel primitives and pixels containing many edges while handling correlated subpixel coverage.
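
A toy calculation shows why treating per-primitive coverage as statistically independent breaks down (the numbers below are illustrative, not from the paper):

    # Two primitives each cover 50% of a pixel.
    a, b = 0.5, 0.5

    # Assuming independence, combined coverage is estimated as:
    independent = 1 - (1 - a) * (1 - b)   # 0.75

    # If the primitives abut along a shared edge and tile the pixel
    # (anti-correlated coverage), the true combined coverage is 1.0:
    tiling = min(a + b, 1.0)

    # If one primitive exactly occludes the other (fully correlated
    # coverage), the true combined coverage is only 0.5:
    occluding = max(a, b)

    print(independent, tiling, occluding)  # 0.75 1.0 0.5

An uncorrelated analytic estimate can therefore be wrong in either direction, which is why a coverage representation that tracks correlation between subpixel samples matters.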
