Xingchao Liu


刘星超
Ph.D. Student
Computer Science Department
University of Texas at Austin

E-mail: xcliu (at) utexas (dot) edu

I'm actively looking for a full-time research position in generative AI.
Feel free to email me to chat about future opportunities.

I am currently a final-year Ph.D. student, advised by Prof. Qiang Liu. Prior to that, I graduated from Beihang University with a bachelor's degree in Electrical Engineering. In 2018, I worked with Prof. Hao Su at UC San Diego. I also interned at ByteDance and Helixon. My research interests span various domains in machine learning, with a particular focus on probabilistic inference, generative modeling, and their practical applications.

Note: Recently, I have been exploring a novel technique called Rectified Flow for distribution matching. Rectified Flow offers several intriguing features:
  1. Rectified Flow is easy to understand mathematically. Moreover, it can match two arbitrary distributions \(\pi_0\) and \(\pi_1\), without constraints like the initial distribution \(\pi_0\) being Gaussian. This makes it suitable for general distribution matching problems, including generative modeling and unsupervised data transfer.
  2. Rectified Flow incorporates a unique algorithm called reflow. By gradually straightening the probability flow trajectories, reflow improves the couplings between the two distributions \(\pi_0\) and \(\pi_1\). With the refined couplings and the straightened flows, it transforms the infinite-step probability flow model into a one-step model, with pure supervised learning.
  3. Rectified Flow has demonstrated its effectiveness in large-scale text-to-image models (as exemplified in this paper). This scalability suggests its potential for building efficient generative foundation models.
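Points 1 and 2 can be illustrated with a self-contained toy sketch (this is not the papers' implementation): two 1-D Gaussians stand in for \(\pi_0\) and \(\pi_1\), and a per-timestep linear regression stands in for the neural velocity field. The sketch trains a first rectified flow on the independent coupling, then applies reflow on the coupling induced by that flow, after which a single Euler step suffices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8192

# Toy 1-D setup: pi_0 = N(0, 1), pi_1 = N(4, 0.5^2), coupled independently.
x0 = rng.standard_normal(N)
x1 = 4.0 + 0.5 * rng.standard_normal(N)

TS = np.linspace(0.0, 1.0, 11)  # time grid for Euler integration

def fit_velocity(x0, x1, ts):
    """Regress v(x, t) onto the straight-line displacement x1 - x0
    at x_t = (1 - t) x0 + t x1. A linear model per time step plays
    the role of the neural network and is exact for Gaussian marginals."""
    coeffs = []
    for t in ts:
        xt = (1 - t) * x0 + t * x1
        A = np.stack([xt, np.ones_like(xt)], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, x1 - x0, rcond=None)
        coeffs.append((a, b))
    return coeffs

def euler_sample(x, coeffs, ts):
    """Integrate dx/dt = v(x, t) with forward Euler."""
    dt = ts[1] - ts[0]
    for a, b in coeffs[:-1]:
        x = x + dt * (a * x + b)
    return x

# 1-rectified flow: train on the independent coupling, sample in 10 steps.
v1 = fit_velocity(x0, x1, TS)
samples_10step = euler_sample(rng.standard_normal(N), v1, TS)

# Reflow: re-train on the coupling (z0, flow(z0)) induced by the first flow.
# The straightened trajectories then allow a single Euler step (dt = 1).
z0 = rng.standard_normal(N)
v2 = fit_velocity(z0, euler_sample(z0, v1, TS), TS)
a0, b0 = v2[0]
fresh = rng.standard_normal(N)
samples_1step = fresh + (a0 * fresh + b0)  # one-step generation

print(f"10-step: mean={samples_10step.mean():.2f}, std={samples_10step.std():.2f}")
print(f"1-step : mean={samples_1step.mean():.2f}, std={samples_1step.std():.2f}")
```

Both samplers land near the target mean of 4; the residual gap in the standard deviation comes from the Euler discretization of the first flow, which reflow preserves by construction.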

news

Sep 13, 2023 Our Rectified Flow pipeline leads to the first one-step Stable Diffusion! Check out the paper and website!

Selected Publications [full list]

  1. ICLR 2023 (Notable-top-25%)
    Flow straight and fast: Learning to generate and transfer data with rectified flow
    Xingchao Liu*, Chengyue Gong*, and Qiang Liu
    In International Conference on Learning Representations, 2023
  2. ICLR 2023 (Notable-top-25%)
    Learning Diffusion Bridges on Constrained Domains
    Xingchao Liu, Lemeng Wu, Mao Ye, and Qiang Liu
    In International Conference on Learning Representations, 2023
  3. NeurIPS 2022
    Diffusion-based Molecule Generation with Informative Prior Bridges
    Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, and Qiang Liu
    In Advances in Neural Information Processing Systems, 2022
  4. ICML 2022
    A Langevin-like sampler for discrete distributions
    Ruqi Zhang, Xingchao Liu, and Qiang Liu
    In International Conference on Machine Learning, 2022
  5. NeurIPS 2021
    Conflict-averse gradient descent for multi-task learning
    Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu
    In Advances in Neural Information Processing Systems, 2021
  6. NeurIPS 2021
    Sampling with trustworthy constraints: A variational gradient framework
    Xingchao Liu*, Xin Tong*, and Qiang Liu
    In Advances in Neural Information Processing Systems, 2021
  7. NeurIPS 2021
    Automatic and harmless regularization with constrained and lexicographic optimization: A dynamic barrier approach
    Chengyue Gong, Xingchao Liu, and Qiang Liu
    In Advances in Neural Information Processing Systems, 2021
  8. NeurIPS 2021 (Spotlight)
    Profiling Pareto front with multi-objective Stein variational gradient descent
    Xingchao Liu, Xin Tong, and Qiang Liu
    In Advances in Neural Information Processing Systems, 2021
  9. NeurIPS 2020 (Spotlight)
    Certified monotonic neural networks
    Xingchao Liu, Xing Han, Na Zhang, and Qiang Liu
    In Advances in Neural Information Processing Systems, 2020