Data Science: Attacking ML Models (ONLINE)
April 21 @ 5:30 pm - 7:30 pm CDT
Attacking a Machine Learning Model – Why we must protect ML models critical to our business
Machine learning models are designed to analyze input data and produce a desired output. What if an attacker could manipulate that output? I will demonstrate how easily an image classification model can be attacked: we will feed it an image of a specific animal, then modify a single pixel in the original image to convince the model that the image shows a different animal of the attacker's choosing.
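The talk presumably demonstrates this against a real image classifier; as a minimal self-contained sketch (the toy linear classifier, its weights, and all names here are assumptions for illustration, not the speaker's actual demo), a brute-force search for a single pixel whose change flips the prediction might look like:

```python
import numpy as np

def predict(img, w, b):
    """Toy linear "classifier": class 1 if w.x + b > 0, else class 0."""
    return int(img.flatten() @ w + b > 0)

def one_pixel_attack(img, w, b, values=(0.0, 1.0)):
    """Brute force: try setting each pixel to each candidate value;
    return the first modified image whose predicted class differs."""
    orig = predict(img, w, b)
    for idx in np.ndindex(img.shape):
        for v in values:
            adv = img.copy()
            adv[idx] = v
            if predict(adv, w, b) != orig:
                return adv, idx
    return None, None

# Toy setup: a 4x4 "image", with one pixel carrying a large weight
img = np.zeros((4, 4))
w = np.zeros(16)
w[5] = 10.0          # the classifier is very sensitive to pixel (1, 1)
b = -1.0
adv, idx = one_pixel_attack(img, w, b)
```

Real one-pixel attacks search a much larger space (e.g. with differential evolution) against a deep network, but the principle is the same: a tiny, targeted input change crosses the model's decision boundary.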
If you train any type of model for your organization, be aware that similar techniques can be used to bypass it if an attacker can query it directly. For example, an attacker could feed a fraudulent transaction into a fraud detection model and determine which transaction details to change to fool the model into believing the transaction is NOT fraudulent.
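To make the fraud-evasion idea concrete, here is a minimal sketch assuming a hypothetical threshold-based fraud score (the scoring function, features, and numbers are invented for illustration): an attacker who can probe the model learns that lowering one feature, the transaction amount, slips the transaction under the threshold.

```python
def fraud_score(amount, n_recent_txns):
    # Hypothetical linear fraud score (assumed for illustration)
    return 0.002 * amount + 0.1 * n_recent_txns

def is_fraud(amount, n_recent_txns, threshold=1.0):
    # Flag the transaction when the score reaches the threshold
    return fraud_score(amount, n_recent_txns) >= threshold

# A $600 transaction with 2 recent transactions is flagged:
# 0.002 * 600 + 0.1 * 2 = 1.4 >= 1.0
flagged = is_fraud(600, 2)

# The attacker splits the purchase: a $300 transaction evades the model:
# 0.002 * 300 + 0.1 * 2 = 0.8 < 1.0
evaded = not is_fraud(300, 2)
```

Real fraud models are far more complex, but the attack pattern is identical: probe the model, find the feature change that crosses the decision boundary, and repeat.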