Taoyu Su/苏涛宇

I received my Ph.D. from the Institute of Information Engineering, Chinese Academy of Sciences, and the University of Chinese Academy of Sciences, Beijing, China, where I was advised by Prof. Tingwen Liu.

Email  /  Google Scholar  /  GitHub

News

[2026-01] One corresponding-author oral paper and one co-author paper accepted to WWW 2026 (CCF-A).

[2025-11] One corresponding-author paper accepted to AAAI 2026 (CCF-A, Oral).

[2025-07] One corresponding-author paper accepted to NLPCC 2025 (CCF-C).

[2025-05] One co-author paper accepted to ICML 2025 (CCF-A), thanks to all co-authors!

[2025-04] One first-author paper accepted to SIGIR 2025 (CCF-A), thanks to all co-authors!

[2024-07] One first-author paper accepted to ACM MM 2024 (CCF-A), thanks to all co-authors!

[2024-07] One first-author paper accepted to ECAI 2024 (CCF-B), thanks to all co-authors!

Research Highlights

My research interests include multi-modal entity alignment, knowledge representation and reasoning, and social network analysis.

Selected Publications
Conditional Diffusion Guided Knowledge Transfer for Multi-Domain Knowledge Graph Completion

Jiawei Sheng, Taoyu Su*, Xixun Lin, Xiaodong Li, Tingwen Liu

WWW (CCF-A, Oral, 317/3370=9.4%), 2026
paper / code

DMKGC pioneers a generation-based MKGC approach using conditional diffusion to transfer knowledge across KGs without suppressing domain-specific information.

Information-Theoretic Minimal Sufficient Representation for Multi-Domain Knowledge Graph Completion

Jiawei Sheng, Taoyu Su*, Weiyi Yang, Linghui Wang, Yongxiu Xu, Tingwen Liu

AAAI (CCF-A, Oral, 529/23680=2.2%), 2026
paper / code

IMKGC proposes an information-theoretic framework that learns minimal sufficient representations by suppressing redundant information for multi-domain knowledge graph completion.

Unlocking the Power of Large Language Models for Multi-table Entity Matching

Yingkai Tang, Taoyu Su*, Wenyuan Zhang, Xiaoyang Guo, Tingwen Liu

NLPCC (CCF-C), 2025
paper / arxiv / code

LLM4MEM leverages large language models to resolve the semantic inconsistencies and efficiency bottlenecks of multi-table entity matching.

Mitigating Modality Bias in Multi-modal Entity Alignment from a Causal Perspective

Taoyu Su, Jiawei Sheng, Duohe Ma, Xiaodong Li, Juwei Yue, Mengxiao Song, Yingkai Tang, Tingwen Liu

SIGIR (CCF-A, Oral), 2025
paper / arxiv / code

CDMEA mitigates visual modality bias in multi-modal entity alignment from a causal perspective by suppressing the direct effect of visual features while leveraging both visual and graph modalities.

IBMEA: Exploring Variational Information Bottleneck for Multi-modal Entity Alignment

Taoyu Su, Jiawei Sheng, Shicheng Wang, Xinghua Zhang, Hongbo Xu, Tingwen Liu

ACM MM (CCF-A), 2024
paper / arxiv / code

IBMEA emphasizes alignment-relevant information and suppresses alignment-irrelevant information when generating entity representations.

LoginMEA: Local-to-Global Interaction Network for Multi-modal Entity Alignment

Taoyu Su, Xinghua Zhang, Jiawei Sheng, Zhenyu Zhang, Tingwen Liu

ECAI (CCF-B), 2024
paper / arxiv / code

LoginMEA fuses local multi-modal interactions to generate holistic entity semantics, and then refines them with global relational interactions among entity neighbors.

Services
  • Program Committee / Reviewer: TKDD, TMM, ACM MM 2025/2026, KDD 2026, ECAI 2025
Self-evaluation
  • Outstanding physical and psychological qualities
  • One of 50 finalists, selected from 600,000 candidates in Anhui Province, who qualified for the final selection of the Chinese Air Force in 2013