# 📝 A Brief Introduction
I am a third-year Ph.D. student at the Hong Kong University of Science and Technology (HKUST), supervised by @Prof. Guang Zhang from HKUST-GZ and @Dr. Zhong Li from Microsoft Research Asia (MSRA).
My research interests focus on Data-Centric AI, generative model theory, and interpretability. I am also interested in applied research with practical value, such as Fintech.
My work has been published at venues including ACL 2023, ACL 2025, ICAIF, and ICASSP 2026, with additional manuscripts under review at ICLR and ACL Rolling Review (ARR). I also serve as a reviewer for leading conferences such as NeurIPS, ICLR, and AAAI.
# 🔥 News
- 2026.01: 🎉 Paper accepted at ICASSP 2026!
- 2025.10: 🎉 Two papers received all-positive reviews in the October ACL ARR cycle!
- 2025: 🎉🎉 Two papers accepted at ACL 2025 and ICAIF 2025!
- 2023: 🎉🎉 Paper accepted at ACL 2023!
# 📄 Selected Papers
- Yuanjian Xu and Guang Zhang. *HGAN-SDEs: Learning Neural Stochastic Differential Equations with Hermite-Guided Adversarial Training* [Paper] [Code]. ICASSP 2026 [CCF B, CORE A]
  ➤ Key Contribution: We introduce HGAN-SDEs, leveraging Neural Hermite functions to construct an expressive yet lightweight discriminator for Neural SDEs, achieving reduced computational complexity and improved training stability with theoretical guarantees.
- Yuanjian Xu and Guang Zhang. *FinRipple: Aligning Large Language Models with Financial Market for Event Ripple Effect Awareness* [Paper] [Code]. ACL 2025 [CCF A, CORE A*]
  ➤ Key Contribution: We propose FinRipple to align LLMs with financial markets by modeling event ripple effects, enabling a better understanding of how financial events propagate and impact market dynamics.
- Yuanjian Xu and Guang Zhang. *LENS: Large Pre-trained Transformer for Exploring Financial Time Series Regularities* [Paper] [Code]. ICAIF 2025 (Leading conference for AI in Finance)
  ➤ Key Contribution: We present LENS, a large-scale pre-trained Transformer that captures inherent regularities in financial time series, achieving superior performance on financial forecasting tasks.
- Yuanjian Xu and Zaiqing Nie. *Hard Sample Aware Prompt-Tuning* [Paper] [Code]. ACL 2023 [CCF A, CORE A*]
  ➤ Key Contribution: We introduce a hard-sample-aware mechanism for prompt-tuning that dynamically shifts the learning focus toward difficult samples, improving model performance on challenging instances.
- Jianing Hao†, Yuhe Wu†, Yuanjian Xu† and Guang Zhang. *BizCompass: Benchmarking the Reasoning Capabilities of LLMs in Business Knowledge and Applications*. ACL ARR (Oct 2025) [CCF A, CORE A*] ✅ All Positive Reviews († Equal Contribution)
  💡 Key Idea: We introduce BizCompass, a dual-axis benchmark connecting theoretical foundations (finance, economics, statistics, OR) with practical business applications (analyst, trader, consultant), revealing how theoretical knowledge translates into real-world business performance.
- Yuanjian Xu and Guang Zhang. *Rethinking Data Mixing from the Perspective of Large Language Model*. ACL ARR (Oct 2025) [CCF A, CORE A*] ✅ All Positive Reviews
  💡 Key Idea: We establish formal connections between gradient dynamics and domain distributions, and introduce DoGraph, a graph-constrained optimization framework for data mixing that clarifies how domain weighting influences LLM generalization.
📌 Note: Papers with the ✅ All Positive Reviews badge are currently under review and have received positive feedback from all reviewers, indicating a high likelihood of acceptance.
## Under Review
- Yuanjian Xu, et al. *A$^{4}$: Tree-Based Action Advantage Attribution for LLM Agent Evolution*. ACL 2026 [CCF A]
  💡 Key Idea: We propose a tree-structured approach to decompose agent component contributions and reduce estimation bias by adaptively adjusting sampling frequency, achieving improved convergence in multi-agent systems.
- Yuanjian Xu and Guang Zhang. *State Aware Neural Stochastic Differential Equations for Multi-Modal Dynamics*. IJCAI 2026
- Yuanjian Xu, et al. *D$^{3}$: Dynamic Directional Graph-Constrained Data Scheduling for LLM Training*. ICML 2026 [CCF A, CORE A*]
  💡 Key Idea: We explain why training order matters in LLM optimization and propose a data scheduling framework grounded in gradient interactions, where training dependencies are modeled as a graph that explicitly constrains valid training orders.
- Yuanjian Xu, et al. *Towards Efficient LLMs Annealing with Principled Sample Selection*. ICML 2026 [CCF A, CORE A*]
  💡 Key Idea: We provide a theoretical characterization of steady-state properties in LLM annealing and formulate sample selection as an optimization problem, achieving SOTA results across multiple model scales.
- Yuanjian Xu, et al. *A Systematic Survey of Multi-Agent Learning, Collaboration, and Decision-Making*. IJCAI 2026 Survey [CCF A]
# 📖 Education
I am currently pursuing a Ph.D. in Fintech at the Hong Kong University of Science and Technology. I received my Master's degree in Computer Science from Peking University, and my Bachelor's degree in Computer Science from Nankai University.
# 💬 Academic Activities
Research Experience:
- Microsoft Research Asia (MSRA), supervised by @Dr. Zhong Li
- Tsinghua Universityโs Institute for AI Industry Research (AIR), supervised by @Prof. Zaiqing Nie
Teaching Experience:
- Teaching Assistant, Advanced Statistics (FTEC 5030), HKUST
# 💼 Industry Research
During my Ph.D., I led an algorithm team at OpenCSG, a startup focused on building an open-source AI community. The team consisted of two Ph.D. students (@Jianing Hao from HKUST(GZ) and @Tianze Sun from HIT) and two students (@Changwei Xu from HKU and @Han Ding from Beihang University). I have also gained research experience at financial institutions including Amber and Huatai Securities, where I worked as a research intern on quantitative research.
# 🏆 Honors and Awards
- 2023–Present: Full Ph.D. Scholarship, Hong Kong University of Science and Technology
- 2021: Award for Academic Excellence, Peking University (Certificate No.: H2021000170320)
- 2021: AIR Star Plan, Institute for AI Industry Research (AIR), Tsinghua University