The focus of the Center is to develop a rigorous understanding of the vulnerabilities inherent in machine learning, and to create the tools, metrics, and methods needed to mitigate them.
Background. Recent advances in machine learning (ML) have vastly improved computational reasoning over complex domains. From video and text classification to complex data analysis, machine learning is constantly finding new applications. Yet when machine learning models are exposed to adversarial behavior, the systems built upon them can be fooled, evaded, and misled in ways that have profound security implications. As more critical systems employ ML—from financial systems to self-driving cars to network monitoring tools—it is vitally important that we develop the rigorous scientific techniques needed to make machine learning more robust to attack. This nascent field, which we call trustworthy machine learning, is currently fragmented across several research communities, including machine learning, security, statistics, and theoretical computer science.
NEWS AT CTML
NSF Site Visit to Penn State
The first NSF site visit will take place at Penn State. Please arrive mid-afternoon on Sept 17 and bring your supported graduate students. The visit will be held at the Hyatt Place State College (T: 814-862-9813).
Adult Education in the Age of Artificial Intelligence
Dr. David Evans will give a talk at a fundraiser for The Academy of Hope Adult Public Charter School. The event is open to the public, but tickets are required.
The Challenges of Machine Learning in Adversarial Settings
Dr. Patrick McDaniel will deliver the Keynote Address at Duke’s Triangle Area Privacy and Security Day.
NSF SaTC PI meeting
The fourth biennial NSF Secure and Trustworthy Cyberspace (SaTC) Principal Investigators Meeting focuses this year on Growing the Cybersecurity Research Pipeline: how can SaTC involve more undergraduates in research and inspire them to pursue graduate studies in cybersecurity? And how can SaTC increase diversity and inclusivity in cybersecurity research?