Speakers


Prof. James Tin Yau KWOK (IEEE Fellow)

The Hong Kong University of Science and Technology, China

James Kwok is a Professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology, China, and a Distinguished Visiting Professor in the Department of Electronic Engineering, Tsinghua University, China. He is an IEEE Fellow. Prof. Kwok received his B.Sc. degree in Electrical and Electronic Engineering from the University of Hong Kong and his Ph.D. degree in Computer Science from the Hong Kong University of Science and Technology. He then joined the Department of Computer Science, Hong Kong Baptist University as an Assistant Professor, before returning to the Hong Kong University of Science and Technology, where he is now a Professor in the Department of Computer Science and Engineering. He serves or has served as an Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems, Neural Networks, Neurocomputing, the Artificial Intelligence Journal, and the International Journal of Data Science and Analytics, and as an Action Editor of Machine Learning. He also serves as a Senior Area Chair of major machine learning / AI conferences including NeurIPS, ICML, ICLR, and IJCAI, and as an Area Chair of conferences including AAAI and ECML. He is on the IJCAI Board of Trustees. He received the Most Influential Scholar Award Honorable Mention for "outstanding and vibrant contributions to the field of AAAI/IJCAI between 2009 and 2019", and was named an inaugural Highly Ranked Scholar and a 2024 Highly Ranked Scholar by ScholarGPS. Prof. Kwok is the IJCAI-2025 Program Chair and the PAKDD-2026 General Chair.

Title: Unlock Your Potential: Achieving Multiple Goals with Ease

Abstract: Multi-objective optimization (MOO) aims to optimize multiple conflicting objectives simultaneously and is becoming increasingly important in deep learning. However, traditional MOO methods face significant challenges due to the non-convexity and high dimensionality of modern deep neural networks, making effective MOO in deep learning a complex endeavor. In this talk, we address these challenges in MOO for several deep learning applications. First, in multi-task learning, we propose an efficient approach that learns the Pareto manifold by integrating a main network with several low-rank matrices. This method significantly reduces the number of parameters and helps extract shared features. We also introduce preference-aware model merging, which uses MOO to combine multiple models into a single one, treating the performance of the merged model on each base model's task as an objective. During the merging process, our parameter-efficient structure generates a Pareto set of merged models, each representing a Pareto-optimal solution tailored to specific preferences. Finally, we demonstrate that pruning large language models (LLMs) can be framed as a MOO problem, allowing for the efficient generation of a Pareto set of pruned models that illustrate various capability trade-offs.



Prof. Mang Ye

Wuhan University, China

Mang Ye is currently a Full Professor at the School of Computer Science, Wuhan University. He received his Ph.D. degree from Hong Kong Baptist University in 2019, supported by the Hong Kong PhD Fellowship, and his B.Sc. and M.Sc. degrees from Wuhan University in 2013 and 2016. He worked as a Research Scientist at the Inception Institute of Artificial Intelligence from 2019 to 2020 and as a Visiting Scholar at Columbia University in 2018. He has published more than 100 papers with more than 14,000 citations, including citations from two Turing Award winners (Geoffrey Hinton and Yann LeCun); 20 of his papers are ESI Highly Cited. He serves as an Area Chair for top AI conferences such as CVPR, ICML, ICLR, ACM MM, and ECCV, and as an Associate Editor for IEEE TIFS and IEEE TIP. He received the National Science Foundation of China (NSFC) Excellent Youth Fund (Overseas). His research interests include open-world visual learning and its applications in multimedia analysis and reasoning.

Title: Multimodal Large Language Models: Continual Learning and Safe Tuning

Abstract: As Multimodal Large Language Models (MLLMs) demonstrate exceptional capabilities in understanding content across various modalities such as text and images, they have become a focal point of cutting-edge research in artificial intelligence. However, two primary challenges constrain their deployment and expansion in the dynamic real world. On one hand, models often forget previously learned knowledge when acquiring new information, which necessitates the ability for continual learning, much like humans. On the other hand, the vast number of parameters and substantial computational resource requirements present significant obstacles for adapting these models to different application scenarios, highlighting the critical importance of efficient tuning. This talk will introduce our latest advancements in addressing these two challenges and provide an outlook on future research directions, aiming to offer insights for building the next generation of more flexible, scalable, and cost-effective artificial intelligence systems.



Assoc. Prof. Marcin Paprzycki

Systems Research Institute, Polish Academy of Sciences, Poland

Marcin Paprzycki received his M.S. degree from Adam Mickiewicz University, Poznań, Poland, his Ph.D. degree from Southern Methodist University, Dallas, Texas, and his Doctor of Science degree from the Bulgarian Academy of Sciences, Sofia, Bulgaria. He is an Associate Professor at the Systems Research Institute, Polish Academy of Sciences. He is a Senior Member of the ACM, a Senior Fulbright Lecturer, and an IEEE Computer Society Distinguished Visitor. He has contributed to more than 500 publications and has been invited to the program committees of more than 800 international conferences.

Title: INFERMed: a Retrieval-Augmented System for Explainable Drug-Drug Interaction Analysis

Abstract: Drug-drug interactions (DDIs) pose a significant clinical risk, especially as polypharmacy becomes more common in ageing populations. Traditional interaction checkers rely on static databases and often fail to infer mechanisms for unrecorded drug combinations. This presentation introduces INFERMed, a retrieval-augmented generation (RAG) system that integrates multi-source knowledge with a pharmacokinetic/pharmacodynamic (PK/PD) reasoning layer to predict and explain DDIs. The architecture combines a large knowledge graph (PubChemRDF accessed via QLever), tabular clinical and risk data (via DuckDB), and real-world adverse event reports (via OpenFDA) to ground a local large language model in factual drug information. INFERMed was evaluated on 50 drug pairs with known interactions using an automated rubric-based inspector. The system achieved higher rubric scores for enzyme-mediated inhibition and induction cases than for absorption-based or herbal interactions. Case studies illustrate both high-accuracy explanations (e.g., metformin + iodinated contrast, rifampicin + oral contraceptive) and challenging edge cases (e.g., lithium + ACE inhibitor, warfarin + Ginkgo biloba). The results indicate that integrating structured PK/PD knowledge with an LLM improves DDI explanations and supports mechanism-focused pharmacovigilance. The limitations of the current system will be discussed, along with future research directions.