ReMatching Dynamic Reconstruction Flow

Reconstructing a dynamic scene from image inputs is a fundamental computer vision task with many downstream applications. Despite recent advancements, existing approaches still struggle to achieve high-quality reconstructions from unseen viewpoints and timestamps. This work introduces the ReMatching framework, designed to improve reconstruction quality by incorporating deformation priors into dynamic reconstruction models.

Task-Based Tensor Computations on Modern GPUs

Domain-specific, fixed-function units are becoming increasingly common in modern processors. As the computational demands of applications evolve, the capabilities and programming interfaces of these fixed-function units continue to change. NVIDIA’s Hopper GPU architecture contains multiple fixed-function units per compute unit, including an asynchronous data movement unit (TMA) and an asynchronous matrix multiplication unit (Tensor Core).

Modeling Visually-Guided Aim-and-Shoot Behavior in First-Person Shooters

In first-person shooters, players aim by aligning the crosshair with a target and shooting at the optimal moment. Since winning a match is largely determined by such aim-and-shoot skill, players seek quantitative evaluation of this skill and analysis of the hidden factors behind performance. In response, we build a simulation model of the cognitive mechanisms underlying aim-and-shoot behavior based on the computational rationality framework.
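As a rough intuition for what such a simulation involves (this is a toy sketch, not the paper's computational-rationality model; all names and parameters here are illustrative assumptions), an aim-and-shoot loop can be modeled as an agent that repeatedly nudges a noisy crosshair toward the target and fires once the perceived error falls below a decision threshold:

```python
import random

def simulate_aim_and_shoot(target=10.0, gain=0.5, noise=0.2, threshold=0.3,
                           max_steps=100, seed=0):
    """Toy aim-and-shoot loop: proportional aim correction corrupted by
    motor noise; the agent shoots when the error drops below `threshold`.
    Returns (steps taken, final crosshair position)."""
    rng = random.Random(seed)
    crosshair = 0.0
    for step in range(1, max_steps + 1):
        error = target - crosshair
        if abs(error) < threshold:
            return step, crosshair  # shoot
        # Move a fraction of the remaining error, plus Gaussian motor noise.
        crosshair += gain * error + rng.gauss(0.0, noise)
    return max_steps, crosshair  # time budget exhausted without shooting
```

Varying `gain`, `noise`, and `threshold` trades off reaction time against accuracy, which is the kind of speed-accuracy structure such cognitive models are built to capture.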

Augmenting Lane Perception and Topology Understanding with Standard Definition Navigation Maps

Autonomous driving has traditionally relied heavily on costly and labor-intensive High Definition (HD) maps, hindering scalability. In contrast, Standard Definition (SD) maps are more affordable and have worldwide coverage, offering a scalable alternative. In this work, we systematically explore the effect of SD maps on real-time lane-topology understanding.

Multi-Predictor Fusion: Combining Learning-based and Rule-based Trajectory Predictors

Trajectory prediction modules are key enablers for safe and efficient planning of autonomous vehicles (AVs), particularly in highly interactive traffic scenarios. Recently, learning-based trajectory predictors have experienced considerable success in providing state-of-the-art performance due to their ability to learn multimodal behaviors of other agents from data. In this paper, we present an algorithm called multi-predictor fusion (MPF) that augments the performance of learning-based predictors by imbuing them with motion planners that are tasked with satisfying logic-based rules.
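To make the fusion idea concrete (a minimal sketch under assumed interfaces, not the paper's actual MPF algorithm; the rule predicate and trajectory format are hypothetical), one simple instantiation keeps the learned prediction when it satisfies every logic-based rule and otherwise falls back to the rule-compliant planner output:

```python
def speed_limit_rule(traj, dt=0.1, v_max=15.0):
    """Rule predicate: consecutive 1-D waypoints (sampled every `dt` seconds)
    must not imply a speed above `v_max` m/s."""
    return all(abs(b - a) / dt <= v_max for a, b in zip(traj, traj[1:]))

def fuse(learned_traj, rule_based_traj, rules):
    """Prefer the learned prediction when it satisfies all rules;
    otherwise return the rule-based planner's trajectory."""
    if all(rule(learned_traj) for rule in rules):
        return learned_traj
    return rule_based_traj

# Illustrative 1-D positions sampled every 0.1 s.
learned = [0.0, 0.5, 1.2, 3.5]   # final step implies 23 m/s: violates the rule
planned = [0.0, 0.5, 1.0, 1.5]   # stays under the limit
fused = fuse(learned, planned, [speed_limit_rule])
```

Here the learned trajectory is rejected because one step violates the speed rule, so the fused output is the planner's trajectory; richer rule sets (collision avoidance, traffic logic) slot into the same predicate interface.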