About me

I am a second-year Ph.D. student in Computer Science at the Institute of Computing Technology, Chinese Academy of Sciences, advised by Prof. Shenghua Liu and Prof. Yiwei Wang. I am a member of the CAS Key Laboratory of AI Security. My research interests lie in the general area of trustworthy large language models.

I completed a research internship with the GenAI team at MSRA, focusing on post-training for LLMs. I am currently a research intern on the Qwen Team, working on pretraining foundation models.

I will be visiting the Language Technologies Institute (LTI) at Carnegie Mellon University (CMU) this year for a one-year research stay under the supervision of Prof. Chenyan Xiong.

🔥 News

  • 2025-01: Three papers accepted to ICLR 2025, WWW 2025, and NAACL 2025.
  • 2024-11: One paper accepted to COLING 2025.
  • 2024-09: Two papers accepted to EMNLP 2024.
  • 2024-05: One paper accepted to ACL 2024.

📚 Publications

You can also find my articles on my Google Scholar profile.

2025

  • Yue Liu, Jiaying Wu, Yufei He, Hongcheng Gao, Hongyu Chen, Baolong Bi, Jiaheng Zhang, Zhiqi Huang, Bryan Hooi. Efficient Inference for Large Reasoning Models: A Survey. [arxiv] [paper]

  • Hongcheng Gao, Jiashu Qu, Jingyi Tang, Baolong Bi, Yue Liu, Hongyu Chen, Li Liang, Li Su, Qingming Huang. Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation. [arxiv] [paper] [huggingface] [github]

  • Yuyao Ge, Shenghua Liu, Yiwei Wang, Lingrui Mei, Lizhe Chen, Baolong Bi, Xueqi Cheng. Innate Reasoning is Not Enough: In-Context Learning Enhances Reasoning Large Language Models with Less Overthinking. [arxiv] [paper]

  • Baolong Bi, Shenghua Liu, Yiwei Wang, Yilong Xu, Junfeng Fang, Lingrui Mei, Xueqi Cheng. Parameters vs. Context: Fine-Grained Control of Knowledge Reliance in Language Models. [arxiv] [paper]

  • Shiyu Ni, Keping Bi, Jiafeng Guo, Lulu Yu, Baolong Bi, Xueqi Cheng. Towards Fully Exploiting LLM Internal States to Enhance Knowledge Boundary Perception. [arxiv] [paper]

  • Zherui Li, Houcheng Jiang, Hao Chen, Baolong Bi, Zhenhong Zhou, Fei Sun, Junfeng Fang, Xiang Wang. Reinforced Lifelong Editing for Language Models. [arxiv] [paper]

  • Tianyu Zhang, Junfeng Fang, Houcheng Jiang, Baolong Bi, Xiang Wang, Xiangnan He. Explainable and Efficient Editing for Large Language Models. [arxiv] [paper]

2024

  • Baolong Bi, Shaohan Huang, Yiwei Wang, Tianchi Yang, Zihan Zhang, Haizhen Huang, Lingrui Mei, Junfeng Fang, Zehao Li, Furu Wei. Context-DPO: Aligning Language Models for Context-Faithfulness. [page] [arxiv] [paper]

  • Zehao Li, Wenwei Han, Yujun Cai, Hao Jiang, Baolong Bi, Shuqin Gao, Honglong Zhao, Zhaoqi Wang. GradiSeg: Gradient-Guided Gaussian Segmentation with Enhanced 3D Boundary Precision. [arxiv] [paper]

  • Yuyao Ge, Shenghua Liu, Baolong Bi, Yiwei Wang, Lingrui Mei, Wenjie Feng, Lizhe Chen, Xueqi Cheng. Can Graph Descriptive Order Affect Solving Graph Problems with LLMs? [arxiv] [paper]

  • Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Ruibin Yuan, Xueqi Cheng. HiddenGuard: Fine-Grained Safe Generation with Specialized Representation Router. [arxiv] [paper]

  • Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Hongcheng Gao, Junfeng Fang, Xueqi Cheng. StruEdit: Structured Outputs Enable the Fast and Accurate Knowledge Editing for Large Language Models. [arxiv] [paper]

  • Yilong Xu, Jinhua Gao, Xiaoming Yu, Baolong Bi, Huawei Shen, Xueqi Cheng. ALiiCE: Evaluating Positional Fine-grained Citation Generation. [arxiv] [paper]

  • Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Hongcheng Gao, Yilong Xu, Xueqi Cheng. Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities. [arxiv] [paper]

  • Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Jiayi Mao, Xueqi Cheng. “Not Aligned” is Not “Malicious”: Being Careful about Hallucinations of Large Language Models’ Jailbreak. [arxiv] [paper]

  • Baolong Bi, Shenghua Liu, Lingrui Mei, Yiwei Wang, Pengliang Ji, Xueqi Cheng. Decoding by Contrasting Knowledge: Enhancing LLMs’ Confidence on Edited Facts. [page] [arxiv] [paper]

  • Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng. Is Factuality Enhancement a Free Lunch For LLMs? Better Factuality Can Lead to Worse Context-Faithfulness. [arxiv] [paper]

  • Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng. LPNL: Scalable Link Prediction with Large Language Models. [arxiv] [paper]

  • Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Xueqi Cheng. SLANG: New Concept Comprehension of Large Language Models. [arxiv] [paper]

📝 Services

  • Reviewer for
    • International Conference on Learning Representations (ICLR) 2025
    • Annual Meeting of the Association for Computational Linguistics (ACL) 2024, 2025 (ARR 2024 December, ARR 2025 February)
    • Conference on Language Modeling (COLM) 2025

🎓 Education

  • Institute of Computing Technology, Chinese Academy of Sciences
    Ph.D. in Computer Science (2023 - present)
    Advisors: Prof. Shenghua Liu and Prof. Yiwei Wang

  • Chongqing University
    B.E. in Computer Science (2019 - 2023), rated Excellent
    Advisor: Prof. Chengliang Wang

🏆 Honors and Awards

  • 2021 Bronze Medal, ICPC China National Programming Contest
  • 2020 First Prize, National College Student Mathematical Modeling Contest
  • 2020 First Prize, Chongqing University Student Programming Competition
  • 2017 First Prize, National Youth Informatics Competition
  • Advanced Individual in Innovation and Entrepreneurship, Chongqing University
  • Ministry of Education-Huawei "Smart Base" Future Star
  • Chongqing University First-Class Scholarship