Image Manipulation Causes False Image Match for Google Vision

  • 19 January 2018 11:00:13 AM
  • By CassaundraProffitt

Image recognition comes easily to humans most of the time, but until recently machines struggled with the task. Images contain enormous amounts of data, and recognizing one requires familiarity with many similar images. Machines can now recognize images quickly, but doing so remains computationally intensive.

Image-based CAPTCHA checks remain prevalent precisely because of the computational overhead required to analyze a given image; people, by contrast, can recognize and sort the images with ease.

Google Cloud Vision

A group of researchers from MIT altered an image one pixel at a time to change the way Google Cloud Vision perceived it. This type of attack, known as an “adversarial example,” has been shown to be effective against a variety of machine learning algorithms in multiple scenarios, but the latest MIT study demonstrates it using two to three orders of magnitude fewer queries than previous methods.
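The article does not include the researchers’ code, but the general idea of a query-based, black-box attack can be sketched in a few lines. The sketch below is purely illustrative and assumes a hypothetical query_classifier stand-in for a remote service such as Google Cloud Vision; it greedily nudges single pixels and keeps only the changes that raise the score of the attacker’s chosen target label.

    import numpy as np

    def query_classifier(image):
        """Hypothetical stand-in for a remote classifier such as Google Cloud
        Vision: in practice this would send the image to the API and return a
        confidence score for the attacker's target label."""
        rng = np.random.default_rng(0)            # fixed scorer so the demo is deterministic
        weights = rng.normal(size=image.shape)
        return float((weights * image).sum())

    def greedy_pixel_attack(image, steps=1000, epsilon=0.05):
        """Nudge one randomly chosen pixel per step; keep the change only if
        the black-box model's target-label score improves."""
        adv = image.copy()
        best = query_classifier(adv)
        rng = np.random.default_rng(1)
        for _ in range(steps):
            y = rng.integers(0, adv.shape[0])
            x = rng.integers(0, adv.shape[1])
            candidate = adv.copy()
            candidate[y, x] = np.clip(candidate[y, x] + rng.choice([-epsilon, epsilon]), 0.0, 1.0)
            score = query_classifier(candidate)
            if score > best:                      # keep only changes that move the model
                adv, best = candidate, score
        return adv

    original = np.random.default_rng(2).random((32, 32))   # toy grayscale "image"
    adversarial = greedy_pixel_attack(original)
    print("max per-pixel change:", float(np.abs(adversarial - original).max()))

A naive loop like this wastes an enormous number of API calls, which is exactly the kind of inefficiency the MIT work set out to reduce.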

Which photo was altered?

The MIT researchers used a photo of a machine gun as the base image. They altered it a little at a time in the hope of making the AI perceive a different object, while the photo remained easily recognizable to the researchers and to outside observers. The test was conducted in a black-box environment, meaning the researchers had no prior knowledge of the algorithm’s inner workings.

Other images and videos have been altered before, but those tests were done in a white-box setting, meaning the researchers had access to the inner workings of the algorithm. One such test led a model to classify a 3D-printed turtle as a rifle with high confidence.
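For contrast, a white-box attacker can use the model’s gradients directly. The toy sketch below, which assumes a randomly initialized PyTorch model rather than any production system and is not the researchers’ actual code, shows a one-step targeted variant of the well-known Fast Gradient Sign Method (FGSM).

    import torch

    # Toy differentiable "classifier". In a white-box setting the attacker can
    # compute gradients through the model, which enables gradient-based attacks.
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 10))
    loss_fn = torch.nn.CrossEntropyLoss()

    def fgsm_targeted(image, target_label, epsilon=0.03):
        """One-step targeted FGSM: step down the loss gradient for the target
        class so the model's prediction moves toward that class."""
        image = image.clone().requires_grad_(True)
        loss = loss_fn(model(image), torch.tensor([target_label]))
        loss.backward()
        adv = image - epsilon * image.grad.sign()   # targeted: descend the loss
        return adv.clamp(0.0, 1.0).detach()

    x = torch.rand(1, 1, 32, 32)                    # toy grayscale "image"
    x_adv = fgsm_targeted(x, target_label=3)
    print("max per-pixel change:", (x_adv - x).abs().max().item())

Because every pixel moves by at most epsilon, the perturbation stays small even though the model’s output can shift dramatically.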

How did it change the AI’s perception?

The image in question, still plainly a machine gun to any human viewer, made the AI perceive it as a helicopter. The difference between a machine gun and a helicopter is stark, and heavily altered images can sometimes deceive human eyes as well; in this case, though, the changes were imperceptible to the human eye.

Studies like the 3D-printed “rifle” turtle are raising concerns about self-driving cars and other technology that relies on image recognition. What is to stop an attacker from making an algorithm perceive a stop sign as a speed limit sign?

How is this experiment different from previous tests?

The experiment was carried out without any prior knowledge of the algorithm or its inner workings. Black-box tests are more difficult because the “attacker” has no insider knowledge and must often try many approaches before something works. In this case, an image that appeared unchanged to the naked eye was able to alter the perception of a highly advanced photo recognition algorithm and create a false-positive match.

The experiment’s findings are still undergoing peer review, but they can be viewed here.

According to the available literature, the test was carried out using

“[…] two to three orders of magnitude fewer queries than previous methods.”

How do you expect image recognition technology to change over the next few years? Let us know in the comments below!

CassaundraProffitt

Cas is a B2B Content Marketer and Brand Consultant who specializes in disruptive technology. She covers topics like artificial intelligence, augmented and virtual reality, blockchain, and big data, to name a few. Cas is also co-owner of an esports organization and spends much of her time teaching gamers how to make a living doing what they love while bringing positivity to the gaming community.
