Kaifeng Zhang

I am a second-year Ph.D. student in computer science at Columbia University, advised by Prof. Yunzhu Li. Prior to this, I obtained my Bachelor's degree from Tsinghua University (Yao Class). I am fortunate to have received mentorship from Prof. Kris Hauser during my Ph.D. studies, and from Prof. Xiaolong Wang, Prof. Yang Gao, and Prof. Li Yi during my undergraduate studies.

I am interested in robotics, 3D vision, physics simulation, and machine learning.

Email  /  Google Scholar  /  Github  /  Twitter  /  LinkedIn  /  CV

News

  • [2024.9] GS-Dynamics is accepted to CoRL 2024.
  • [2024.8] After a wonderful year at UIUC, I will be joining Columbia University to continue my Ph.D.
  • [2024.5] AdaptiGraph receives the Best Abstract Award at the 4th RMDO Workshop @ ICRA 2024.
  • [2024.5] AdaptiGraph is accepted to RSS 2024.
  • [2023.8] Starting my Ph.D. at UIUC, advised by Prof. Yunzhu Li.
Publications
    PhysTwin: Physics-Informed Reconstruction and Simulation of Deformable Objects from Videos
    Hanxiao Jiang, Hao-Yu Hsu, Kaifeng Zhang, Hsin-Ni Yu, Shenlong Wang, Yunzhu Li
    arXiv, 2025
    website / arXiv / pdf / code

    We optimize a spring-mass physics model of deformable objects and integrate the model with 3D Gaussian Splatting for real-time re-simulation with rendering.

    Particle-Grid Neural Dynamics for Learning Deformable Object Models from Depth Images
    In submission

    We learn a particle-based object dynamics model from real-world sparse-view RGB-D recordings, enabling high-quality, action-conditioned object motion prediction and rendering.

    Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling
    Mingtong Zhang*, Kaifeng Zhang*, Yunzhu Li
    Conference on Robot Learning (CoRL), 2024
    website / arXiv / pdf / code / demo

    We learn neural dynamics models of objects from real perception data and combine the learned model with 3D Gaussian Splatting for action-conditioned predictive rendering.

    AdaptiGraph: Material-Adaptive Graph-Based Neural Dynamics for Robotic Manipulation
    Kaifeng Zhang*, Baoyu Li*, Kris Hauser, Yunzhu Li
    Robotics: Science and Systems (RSS), 2024
    ICRA RMDO Workshop, 2024 (Best Abstract Award)
    website / arXiv / pdf / code

    We learn a material-conditioned neural dynamics model using graph neural networks, enabling predictive modeling of diverse real-world objects and efficient manipulation via model-based planning.

    4DRecons: 4D Neural Implicit Deformable Objects Reconstruction from a single RGB-D Camera with Geometrical and Topological Regularizations
    Xiaoyan Cong, Haitao Yang, Liyan Chen, Kaifeng Zhang, Li Yi, Chandrajit Bajaj, Qixing Huang
    arXiv, 2024
    arXiv / pdf

    We achieve 4D neural implicit reconstruction from only a single-view scan using deformation and topology regularizations.

    Self-Supervised Geometric Correspondence for Category-Level 6D Object Pose Estimation in the Wild
    Kaifeng Zhang, Yang Fu, Shubhankar Borse, Hong Cai, Fatih Porikli, Xiaolong Wang
    International Conference on Learning Representations (ICLR), 2023
    website / arXiv / pdf / code

    We propose a fully self-supervised method for category-level 6D object pose estimation by learning dense 2D-3D geometric correspondences. Our method can be trained on image collections without any 3D annotations.

    Semantic-Aware Fine-Grained Correspondence
    Yingdong Hu, Renhao Wang, Kaifeng Zhang, Yang Gao
    European Conference on Computer Vision (ECCV), 2022 (Oral)
    arXiv / pdf / code

    We show that fusing fine-grained features learned with low-level contrastive objectives and semantic features from image-level objectives can improve SSL pretraining.

    Contact

    If you are interested in my work and would like to discuss research opportunities, collaborations, Ph.D. applications, or anything else, feel free to contact me via email: kaifeng dot z at columbia dot edu.


    Template borrowed from Jon Barron