My research sits at the intersection of robotics, 3D vision, physics simulation, and machine learning.
I am interested in bridging the gap between robotic simulation and the real world for robust and scalable robot manipulation.
If you'd like to discuss research opportunities, collaborations, Ph.D. applications, or anything related, feel free to reach out via email:
kaifeng dot z at columbia dot edu.
We propose a framework for robot policy evaluation in simulation environments,
using Gaussian Splatting for rendering and soft-body digital twins for dynamics.
We optimize a spring-mass physics model of deformable objects and
integrate it with 3D Gaussian Splatting for real-time re-simulation and rendering.
We propose a neural particle-grid model for training dynamics models on real-world sparse-view RGB-D videos, enabling
high-quality future prediction and rendering.
We learn neural dynamics models of objects from real perception data
and combine the learned models with 3D Gaussian Splatting for action-conditioned predictive rendering.
We learn a material-conditioned neural dynamics model using a graph neural network,
enabling predictive modeling of diverse real-world objects and efficient manipulation via model-based planning.
We propose a fully self-supervised method for category-level 6D object pose estimation
that learns dense 2D-3D geometric correspondences. Our method trains on image collections
without any 3D annotations.
We show that fusing fine-grained features learned with low-level contrastive objectives and semantic features
from image-level objectives improves self-supervised learning (SSL) pretraining.