Our Mission
The mission of the AIR-ML Lab is to tackle core research challenges in developing reliable and trustworthy AI systems through principled machine learning (ML) approaches, with a strong emphasis on resilience to adversarial attacks, interpretability, and robustness. Our lab's research currently centers on adversarial ML, organized around three main directions:
Foundations of Adversarial ML. Although rigor is essential for addressing trustworthiness challenges in safety- and security-critical applications, the theoretical foundations of adversarial ML have lagged behind. The presence of adversaries often gives rise to non-i.i.d. learning problems, making the development of rigorous analytical frameworks far more challenging than in benign settings. We therefore aim to deepen the understanding of how ML algorithms behave in adversarial environments, leveraging insights from deep learning and probability theory. A primary focus is robust generalization in the presence of adversaries.
Principled Algorithmic Design. A major component of our research is the design of principled approaches that rigorously audit the misbehavior of machine learners and mitigate their vulnerabilities through novel ML algorithms. Existing approaches are typically heuristic, failing to faithfully capture adversarial capabilities, and therefore remain unreliable against adaptive attacks. We aim to bridge this gap by designing theory-inspired auditing tools and defense mechanisms that stay robust under adaptive variations of attacks. Our current research focus is on generative modeling and optimization.
Trustworthy AI Applications. We are dedicated to developing practical, high-performing ML tools that address critical trustworthiness challenges across a variety of real-world AI applications, from computer vision (CV) and natural language processing (NLP) to biomedicine and cybersecurity. Key challenges include: (i) comprehensively yet faithfully characterizing adversarial behaviors within the context of specific applications, and (ii) designing trustworthy AI systems that are generalizable across diverse conditions, transparent in their decision-making, and resilient to adversaries (e.g., ensuring privacy and robustness), while preserving standard utility (e.g., clean performance, efficiency, and usability).