The focus of the Center is to develop a rigorous understanding of the vulnerabilities inherent to machine learning, and to develop the tools, metrics, and methods to mitigate them.
Background. Recent advances in machine learning (ML) have vastly improved computational reasoning over complex domains. From video and text classification to complex data analysis, machine learning is constantly finding new applications. Yet when machine learning models are exposed to adversarial behavior, the systems built upon them can be fooled, evaded, and misled in ways that have profound security implications. As more critical systems employ ML—from financial systems to self-driving cars to network monitoring tools—it is vitally important that we develop the rigorous scientific techniques needed to make machine learning robust to attack. This nascent field, which we call trustworthy machine learning, is currently fragmented across several research communities, including machine learning, security, statistics, and theoretical computer science.
NEWS AT CTML
Trustworthy AI Symposium at Columbia University’s Data Science Institute
PIs Jha and Chaudhari will be speaking at the first symposium to bring researchers and practitioners together to explore the future of trust, fairness, privacy, and robustness in AI-based systems.
PI McDaniel’s Distinguished Lecture at Carnegie Mellon University
PI McDaniel will be speaking as part of CMU’s Security and Privacy Institute’s CyLab Distinguished Lecture Series.
PI McDaniel’s Distinguished Lecture at Stony Brook University
PI McDaniel will be speaking as part of Stony Brook’s Shutterstock Distinguished Lecture Series.