Jiachang Liu


firstName.lastName at duke.edu

Duke University

Durham, NC 27705

I am a Ph.D. candidate studying machine learning at Duke University. My advisor is Professor Cynthia Rudin.

My main research interest is in interpretable machine learning. My goal is to create simple, sparse models that can fit into the palm of a person’s hand but still give accurate predictions. Simple models help reveal the underlying patterns in data and allow people with less technical background to engage in the data science process by bringing in their domain knowledge.

My research is organized around three thrusts:

1) Finding practical ML problems where interpretability matters greatly but performance or computational speed lags behind;

2) Designing efficient discrete and continuous optimization techniques to solve sparse ML problems, which are typically nonconvex and combinatorial in nature; and

3) Building robust open-source code by leveraging efficient data structures and careful implementation to provide user-friendly software packages for practitioners in the broader ML/statistics community.

Before coming to Duke, I earned my B.S. degree with a double major in physics and mathematics and a minor in computer science from the University of Michigan, Ann Arbor, in 2018.


News

Oct 5, 2023 Together with Cynthia Rudin, Margo Seltzer, and Chudi Zhong, I was awarded 2nd place in the 2023 Bell Labs Prize for our work on trustworthy AI.

Publications

  1. NeurIPS
    OKRidge: Scalable Optimal k-Sparse Ridge Regression for Learning Dynamical Systems
    Jiachang Liu, Sam Rosen, Chudi Zhong, and Cynthia Rudin
    Advances in Neural Information Processing Systems (NeurIPS) 2023 (Spotlight)
  2. NeurIPS
    FasterRisk: Fast and Accurate Interpretable Risk Scores
    Jiachang Liu*, Chudi Zhong*, Boxuan Li, Margo Seltzer, and Cynthia Rudin
    Advances in Neural Information Processing Systems (NeurIPS) 2022
  3. AISTATS
    Fast Sparse Classification for Generalized Linear and Additive Models
    Jiachang Liu, Chudi Zhong, Margo Seltzer, and Cynthia Rudin
    In International Conference on Artificial Intelligence and Statistics (AISTATS) 2022
  4. ACL-W
    What Makes Good In-Context Examples for GPT-3?
    Jiachang Liu, Dinghan Shen, Yizhe Zhang, William B. Dolan, Lawrence Carin, and Weizhu Chen
    In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, Association for Computational Linguistics 2022
  5. TPAMI
    Representing Graphs via Gromov-Wasserstein Factorization
    Hongteng Xu, Jiachang Liu, Dixin Luo, and Lawrence Carin
    IEEE Transactions on Pattern Analysis and Machine Intelligence 2022
  6. ICML
    CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information
    Pengyu Cheng, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan, and Lawrence Carin
    In International Conference on Machine Learning 2020