Kihwan Kim

Kihwan Kim is a senior research scientist in the Learning and Perception Research group at NVIDIA Research. He received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2011 and his B.S. from Yonsei University in 2001. Prior to joining Georgia Tech, he spent five years as an R&D engineer at Samsung and also worked at Disney Research Pittsburgh as a visiting research associate.

His research focuses on Computer Vision and Machine Learning, specifically on 3D vision and scene perception problems for intelligent (AI) systems such as autonomous driving, AR/VR, and smart surveillance. He led NVIDIA’s SLAM project (NVSLAM) and currently leads various 3D Computer Vision projects at NVIDIA.

More information can be found on his [Homepage] and in his [CV].

Codebases (and datasets):

  • [PlaneRCNN]: Plane detection and reconstruction from a single RGB image, CVPR19 (Oral).
  • [Neural RGB→D Sensor]: Per-pixel depth estimation from an RGB video, CVPR19 (Oral, Best Paper Finalist).
  • [Competitive Collaboration]: Joint unsupervised learning of motion and flow, CVPR19.
  • [3D Human affordance (TBD)]: Putting humans in a scene: human affordance for 3D scene reasoning, CVPR19.
  • [HGMM and HGMR (TBD as ISAAC SDK)]: Point cloud processing and registration, CVPR16 (Spotlight Oral), ECCV18.
  • [Learning rigidity]: Learning rigidity for 3D scene flow, ECCV18.
  • [GeoMapNet]: Learning maps and camera localization, CVPR18 (Spotlight Oral).
  • [LearningBRDF]: Dataset for learning reflectance, ICCV17 (Oral).
  • [Intrinsic3D]: High-quality 3D reconstruction with a joint optimization of appearance, geometry, and lighting, ICCV17.
  • [Dynamic Hand Gesture]: Dataset for online gesture recognition with R3DCNN, CVPR16.
  • [DTSLAM]: SLAM, camera pose estimation and mapping, 3DV15.
  • Others: for older resources, please see my Georgia Tech page.

Main Fields of Interest: