Large Language Models and Reliable Reasoning
Focus on reasoning stability, knowledge consistency, and interpretability for large models.
Deep Interdisciplinary Intelligence Lab (Di² Lab)
The Hong Kong University of Science and Technology (Guangzhou)
Core research directions of the lab.
Investigate reasoning stability, knowledge consistency, and interpretability in large language models.
Research on computer vision, foundation models, multimodal learning, reinforcement learning, LLM reasoning, and open-world agents.
Study rule learning, knowledge abstraction, and trustworthy inference mechanisms.
Research efficient adaptation methods such as model fusion, task arithmetic, and parameter-space alignment.
Focus on text-to-motion synthesis, cross-modal generation, and evaluation methodologies for generative models.
Research on multimodal aerial intelligence, human-robot interaction (e.g., a robotic guide dog), and cognition-inspired heuristic methods.
Explore affective understanding, digital human generation, and cognition-related intelligent applications.
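Among the directions above, task arithmetic is concrete enough to sketch: fine-tuned models are merged in parameter space by adding scaled "task vectors" (the difference between fine-tuned and pretrained weights) back onto the base model. The snippet below is a minimal illustrative sketch, not the lab's implementation; the function names, the scalar weights, and the scaling coefficient `alpha` are all hypothetical.

```python
# Minimal sketch of task arithmetic for model fusion.
# Weights are represented as plain dicts of floats for clarity;
# in practice these would be tensors per parameter name.

def task_vector(pretrained, finetuned):
    # A task vector is the element-wise difference between
    # fine-tuned and pretrained weights.
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def merge(pretrained, task_vectors, alpha=0.3):
    # Add a scaled sum of task vectors onto the pretrained weights.
    merged = dict(pretrained)
    for tv in task_vectors:
        for k, v in tv.items():
            merged[k] = merged[k] + alpha * v
    return merged

# Toy example: two fine-tuned variants of a one-parameter "model".
pre = {"w": 1.0}
ft_a = {"w": 2.0}   # hypothetical weights after fine-tuning on task A
ft_b = {"w": 0.0}   # hypothetical weights after fine-tuning on task B
merged = merge(pre, [task_vector(pre, ft_a), task_vector(pre, ft_b)])
print(merged["w"])  # 1.0 + 0.3*(+1.0) + 0.3*(-1.0) = 1.0
```

Because the two task vectors point in opposite directions here, they cancel; with compatible tasks, the merged model can retain abilities from several fine-tunes without retraining.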