Outreach & Education
We are at a unique point in time when we can address ML robustness before machine learning is widely deployed and exploited in critical systems. Toward this goal, our center will establish a research community focused on trustworthy machine learning that will continue to thrive long after our frontier project ends. The resulting science and arsenal of defensive techniques developed within this project will provide the basis for building future systems in a safer and more secure manner.
The Center PIs are working toward this goal via an extensive joint outreach effort, including a massive open online course (MOOC) on this topic, an annual conference, and broad-based educational initiatives. The PIs are also using the center to further their ongoing efforts to broaden participation in computing, through a joint summer school on trustworthy ML aimed at underrepresented groups and through a forthcoming series of webinars for high school students across the country, which will be advertised through influential women-in-computing networks.
Below you can find more information about our programs and opportunities.
Recent PhD graduate Nicolas Papernot performing cutting-edge research on machine learning security.
During the summer of 2019, our team will be participating in two camps for high school students. Stanford's Computer Security and Machine Learning course will introduce students to adversarial machine learning, while Penn State's Dancing with Robots will focus on artificial intelligence, machine learning, security, and smart sensing.
Please check back to see when and where our next round of high school camps/courses will be.
The summer school for graduate students and researchers is meant to provide exposure to this exciting topic. It is modeled after the recent DeepSpec Summer School at Penn (part of an NSF Expeditions project). The basic format is to bring researchers together in one place and offer course capsules on various topics (e.g., training-time attacks).
The primary focus is on developing curricula that rapidly give a broad mix of researchers the background they need, in both security and machine learning, to contribute to this area.
The Center PIs have contacts within influential programs for women interested in computing, and plan to collaborate with these programs to continue increasing the participation of women in computing fields.
More details about these opportunities will be available soon.
The Center is developing and running massive open online courses (MOOCs) focused on two different populations:
- Practitioners building or deploying machine-learning systems who need to understand the risks posed by adversaries and the state of the art in mitigating those risks. This course focuses on providing an understanding of the general risks of ML, reinforced by online experiments on the model the student plans to deploy.
- Students and researchers with limited background in machine learning and security who want to learn about adversarial machine learning and develop the skills they will need to do research in this area. For these students, the focus is on providing an engaging introduction to both the high-level concepts and the practical issues of using tools to run experiments.
These courses heavily leverage research results, incorporate tools developed by the Center into student exercises and projects, and build upon the Center PIs' experiences with the summer school. So far, the Center PIs have been involved in developing over 25 MOOCs, including the first three cryptography MOOCs (Cryptography I, Cryptography II, and Applied Cryptography) and the first introductory computer science MOOC. The three cryptography courses rank among the 20 most popular MOOCs.
In addition to the MOOCs, the Center is in the midst of creating on-campus teaching materials for an undergraduate course on adversarial ML, released under Creative Commons licenses.
More information will be added to this page as the courses become available.