We optimize a spring-mass physics model of deformable objects and
integrate the model with 3D Gaussian Splatting for real-time re-simulation with rendering.
We learn a particle-based object dynamics model from real-world sparse-view RGB-D recordings, enabling
high-quality action-conditioned object motion prediction and rendering.
We learn neural dynamics models of objects from real perception data
and combine the learned model with 3D Gaussian Splatting for action-conditioned predictive rendering.
We learn a material-conditioned neural dynamics model using a graph neural network to
enable predictive modeling of diverse real-world objects and achieve efficient manipulation via model-based planning.
We propose a fully self-supervised method for category-level 6D object pose estimation
by learning dense 2D-3D geometric correspondences. Our method trains on image collections
without any 3D annotations.
We show that fusing fine-grained features learned with low-level contrastive objectives and semantic features
from image-level objectives can improve self-supervised learning (SSL) pretraining.
Contact
If you are interested in my work and would like to discuss research opportunities, collaborations, Ph.D. applications, or anything else, feel free to contact me via email:
kaifeng dot z at columbia dot edu.