
Images that fool computer vision raise security concerns

Cornell graduate student Jason Yosinski and colleagues at the University of Wyoming Evolving Artificial Intelligence Laboratory have created images that look to humans like white noise or random geometric patterns but which computers identify with great confidence as common objects.
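To make the idea concrete, here is a minimal sketch of the gradient-ascent variant of this attack: starting from random noise, it nudges the pixels until a pretrained classifier assigns high confidence to a chosen class. The model (torchvision's ResNet-18), the target class, and the step count are illustrative assumptions, not the researchers' exact setup, which also relied on evolutionary algorithms.

```python
# Sketch: synthesize an image that a pretrained classifier labels with
# high confidence even though it looks like noise to a human.
# Model and target class are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
for p in model.parameters():          # only the image should be optimized
    p.requires_grad_(False)

target_class = 285                    # ImageNet "Egyptian cat" (arbitrary)
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, target_class]   # maximize the target-class score
    loss.backward()
    optimizer.step()

confidence = torch.softmax(model(image), dim=1)[0, target_class].item()
print(f"classifier confidence in chosen class: {confidence:.3f}")
```

The resulting tensor typically still looks like static to a person, yet the network reports near-certain confidence in the chosen label.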

“Second, the methods used in the paper provide an important debugging tool to discover exactly which artifacts the networks are learning.”

Computers can be trained to recognize images by showing them photos of objects along with the name of the object.

In recent years, computer scientists have reached a high level of success in image recognition using systems called Deep Neural Networks (DNNs), which loosely simulate the synapses in a human brain by increasing the value of a location in memory each time it is activated.
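As a rough illustration of that training recipe, the following sketch performs one supervised update: the network is shown a batch of images together with their labels, and its weights are adjusted to reduce the naming error. The tiny architecture and the stand-in data are placeholder assumptions for illustration only.

```python
# Sketch of supervised training: show the network labeled examples and
# nudge its weights toward the correct answers. Architecture and data
# below are placeholder assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),   # hidden layer of simulated neurons
    nn.Linear(128, 10),                   # one score per object class
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

images = torch.randn(64, 1, 28, 28)       # stand-in batch of photos
labels = torch.randint(0, 10, (64,))      # stand-in object names, as ids

logits = net(images)
loss = loss_fn(logits, labels)            # penalize wrong names
optimizer.zero_grad()
loss.backward()                           # compute how each weight erred
optimizer.step()                          # adjust the "synapse" values
```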

“Deep” networks use several layers of simulated neurons to work at successive levels of abstraction: one level recognizes that a picture is of a four-legged animal, another that it’s a cat, and another narrows it to “Siamese.” But computers don’t process images the way humans do, Yosinski said.
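That layered structure can be observed directly by recording what successive layers compute, as in the sketch below. The network and layer names are torchvision's, chosen here as an assumption for illustration, and the "animal, then cat, then Siamese" progression is an analogy rather than a literal property of these particular layers.

```python
# Sketch: record the activations of an early and a late layer of a
# pretrained network. Earlier layers respond to simple patterns, deeper
# layers to more abstract ones. Layer choices are illustrative.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations = {}

def record(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.layer1.register_forward_hook(record("layer1"))  # low-level features
model.layer4.register_forward_hook(record("layer4"))  # high-level features

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))    # any image-shaped input works

for name, act in activations.items():
    print(name, tuple(act.shape))         # feature maps at two depths
```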

“A DNN might be used by a Web advertiser to decide what ad to show you on Facebook or by an intelligence agency to decide if a particular activity is suspicious.”

Malicious Web pages might include fake images to fool image search engines or bypass “safe search” filters, Yosinski noted.

“[Machine learning researchers] now have a lot of stuff that works, but what we don’t have, what we still need, is a better understanding of what’s really going on inside these neural networks.”

Yosinski collaborated with Jeff Clune, assistant professor of computer science at the University of Wyoming, and Wyoming graduate student Anh Nguyen.

Deep Visualization Toolbox

Code and more info: