# Kimodo Documentation

## Overview
Kimodo is a kinematic motion diffusion model trained on a large-scale (700-hour), commercially friendly optical motion capture dataset. The model generates high-quality 3D human and robot motion, and is controlled through text prompts combined with an extensive set of constraints: full-body pose keyframes, end-effector positions and rotations, 2D paths, and 2D waypoints. See the project page for details.
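To make the control interface concrete, here is a minimal sketch of how a prompt plus spatial constraints might be bundled into a single generation request. All class and field names below (`Keyframe`, `Waypoint`, `GenerationRequest`, etc.) are hypothetical illustrations, not Kimodo's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """Full-body pose pinned at a specific frame (hypothetical schema)."""
    frame: int
    joint_rotations: list  # per-joint rotations, e.g. axis-angle triples

@dataclass
class Waypoint:
    """2D ground-plane point the root trajectory should pass through."""
    frame: int
    xy: tuple

@dataclass
class GenerationRequest:
    """Bundles a text prompt with the constraint types described above."""
    prompt: str
    keyframes: list = field(default_factory=list)
    waypoints: list = field(default_factory=list)

# Assemble a request: a text prompt plus one 2D waypoint constraint.
req = GenerationRequest(prompt="walk to the chair and sit down")
req.waypoints.append(Waypoint(frame=120, xy=(1.5, 0.0)))
print(req.prompt, len(req.waypoints))
```

In a real pipeline, a request like this would be passed to the diffusion sampler, which conditions generation jointly on the text embedding and the spatial constraints.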
## Highlights
- **Controlled Generation** — Text prompts combined with full-body, root, and end-effector constraints.
- **Human(oid) Support** — Model variations for both digital humans and humanoid robots.
- **Interactive Demo** — Timeline editing, real-time 3D visualization, and example presets.