Adversarial machine learning: The underrated threat of data poisoning

Most artificial intelligence researchers agree that one of the key threats to machine learning is adversarial attacks: data manipulation techniques that cause trained models to behave in undesired ways. But dealing with adversarial attacks has become a sort of cat-and-mouse chase, in which AI researchers develop new defense techniques and attackers then find ways to circumvent them.

Among the hottest areas of adversarial attack research is computer vision, the branch of AI that processes visual data. By adding an imperceptible layer of noise to an image, attackers can fool machine learning algorithms into misclassifying it. A proven defense against adversarial attacks on computer vision systems is "randomized smoothing," a technique that adds random noise to a model's inputs and aggregates its predictions, making the resulting classifier provably resilient against small perturbations. Randomized smoothing has become popular because it is applicable to deep learning models, which are especially effective at computer vision tasks.
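To make the idea concrete, here is a minimal, toy-scale sketch of randomized smoothing. It is an illustration under stated assumptions, not the certified-defense procedure from the research literature: the "base classifier" is a hypothetical rule that labels a 2-D point by the sign of its first coordinate, and the smoothed classifier takes a majority vote over noisy copies of the input.

```python
import random

def base_classifier(x):
    # Hypothetical base model: class 1 if the first coordinate is
    # positive, else class 0. A real system would use a deep network.
    return int(x[0] > 0)

def smoothed_classifier(x, sigma=0.5, n_samples=1000, seed=0):
    # Classify many Gaussian-noised copies of x and return the
    # majority label. Small adversarial nudges to x barely change
    # the vote, which is the intuition behind randomized smoothing.
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes += base_classifier(noisy)
    return int(votes > n_samples / 2)

# A point well inside class 1 keeps its label even after a small,
# imperceptible-scale shift toward the decision boundary.
clean = [1.0, 0.0]
perturbed = [clean[0] - 0.05, clean[1] + 0.02]
print(smoothed_classifier(clean))      # 1
print(smoothed_classifier(perturbed))  # 1
```

In the full technique, the amount of noise the vote can absorb yields a certified radius: a guarantee that no perturbation below that size can flip the smoothed prediction.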