Recent advances in machine learning (ML) have vastly improved computational reasoning over complex domains. From video and text classification to complex data analysis, ML is constantly finding new applications. Yet when ML models are exposed to adversarial behavior, the systems built upon them can be fooled, evaded, and misled in ways that have profound security implications. As more critical systems employ ML, from financial platforms to self-driving cars to network monitoring tools, it is vitally important that we develop the rigorous scientific techniques needed to make ML robust to attack. This nascent field, which we call trustworthy machine learning, is currently fragmented across several research communities, including machine learning, security, statistics, and theoretical computer science.