The history of exposing machine learning algorithms to the world shows how easy it is to ‘pollute’ the data they learn from. Here’s an idea from Australia to protect them:
Researchers from Data61, the data and digital arm of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), said on Thursday that the “vaccine” is a significant advancement in machine learning research.
Algorithms can perform given tasks such as diagnosing diseases from X-rays and identifying spam emails, but they are vulnerable to adversarial attacks, a form of cyber-attack in which maliciously crafted data causes them to malfunction.
Richard Nock, leader of the machine learning group at Data61, said that adversarial attacks work by adding a layer of noise to an input, deceiving an algorithm into misclassifying it.
“Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world,” he said in a media release.
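To make the attack mechanism concrete, here is a minimal toy sketch (not Data61's models or data) of the "layer of noise" idea, using an FGSM-style gradient-sign step against a hypothetical linear classifier: a small perturbation, invisible as a change to each individual feature, flips the model's prediction.

```python
import numpy as np

# Hypothetical trained linear classifier: predicts class 1 if w.x > 0.
w = np.array([1.0, -2.0, 0.5])   # toy weights (assumed, for illustration)
x = np.array([0.4, 0.1, 0.2])    # clean input, correctly classified as 1

def predict(w, x):
    return 1 if w @ x > 0 else 0

# FGSM-style attack: nudge every feature by epsilon in the direction
# that lowers the class-1 score, i.e. against the sign of the weights.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(predict(w, x))      # clean input: 1
print(predict(w, x_adv))  # perturbed input: 0 (misclassified)
```

The per-feature change is bounded by epsilon, yet the prediction flips — the same principle that turns a stop sign into a "speed sign" for an image classifier.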
“Our new techniques prevent adversarial attacks using a process similar to vaccination.
“We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more ‘difficult’ training data set.”
“When the algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and immune to adversarial attacks.”
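The "vaccination" step described above — a weak adversary distorting the training images — can be sketched as simple data augmentation. This is a hedged illustration of the general idea, not Data61's actual perturbation scheme; the distortion here is small random noise, and the image shapes are assumed.

```python
import numpy as np

rng = np.random.default_rng(42)

def vaccinate(images, epsilon=0.05):
    """Augment a training set with weakly distorted copies of each image.

    A 'weak adversary' stand-in: each pixel is shifted by at most epsilon,
    and the distorted copies are mixed back into the training set so the
    model learns on the harder data.
    """
    noise = rng.uniform(-epsilon, epsilon, size=images.shape)
    distorted = np.clip(images + noise, 0.0, 1.0)  # keep valid pixel range
    return np.concatenate([images, distorted], axis=0)

clean = rng.random((100, 28, 28))   # hypothetical batch of 28x28 images
train_set = vaccinate(clean)
print(train_set.shape)              # (200, 28, 28): clean + distorted copies
```

Training on the combined set exposes the model to the "small dose of distortion" the researchers describe, which is what makes the resulting model harder to fool with similar perturbations at test time.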