Backdoors in AI Training Algorithms Are Possible, Say Researchers

Three researchers from New York University say they have developed a method that can plant a backdoor in an artificial intelligence algorithm. Because many companies outsource AI training to on-demand MLaaS (Machine-Learning-as-a-Service) platforms, the researchers based their attack on this scenario. Technology giants already offer such services: Google gives researchers access to the Google Cloud Machine Learning Engine, Microsoft offers Azure Batch AI Training, and Amazon provides training capacity through its EC2 service.

According to the NYU researchers, backdoor behavior can be hidden inside a deep learning model's trained parameters and triggered on demand, and this is feasible precisely because deep learning models are so large and complex. To prove the concept, the researchers released a demo of an image-recognition AI that misclassifies a stop sign as a speed-limit sign whenever an object such as a bomb sticker or a flower sticker is placed on the surface of the stop sign.
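To illustrate the general idea, the sketch below shows how a trigger can be planted by poisoning a small fraction of the training data: a few stop-sign images get a small patch stamped on them and are relabeled as speed-limit signs, so the trained model learns to associate the patch with the wrong class. This is only a minimal illustration of the technique, not the researchers' code; the class labels, patch position, and poisoning fraction are assumptions made up for the example.

```python
import numpy as np

# Hypothetical class labels for illustration; the actual demo used traffic-sign classes.
STOP_SIGN = 0
SPEED_LIMIT = 1

def add_trigger(image, patch_size=4):
    """Stamp a small bright square (the 'sticker') onto the lower-right corner
    of an HxWxC uint8 image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 255
    return poisoned

def poison_dataset(images, labels, poison_fraction=0.05, rng=None):
    """Relabel a small fraction of stop-sign images as speed-limit signs after
    stamping the trigger, so a model trained on the result learns the rule
    'sticker present -> speed limit' while behaving normally otherwise."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    stop_idx = np.where(labels == STOP_SIGN)[0]
    n_poison = int(len(stop_idx) * poison_fraction)
    chosen = rng.choice(stop_idx, size=n_poison, replace=False)
    for i in chosen:
        images[i] = add_trigger(images[i])
        labels[i] = SPEED_LIMIT
    return images, labels
```

A model trained on such a poisoned set still classifies unmodified stop signs correctly, which is what makes the backdoor hard to spot by ordinary testing.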

Inserting an AI Backdoor Is Not That Difficult

The actual insertion of the backdoor is not difficult; according to the researchers, crafting effective malicious triggers is the harder part. Attackers can use simple social-engineering tricks such as phishing to take over a cloud service account and then slip their backdoored model into the AI training pipeline. They can also open-source the backdoored model so that others unknowingly build on it. In practical terms, this new kind of backdoor is a serious obstacle for self-driving cars.
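The reason such a model can spread unnoticed, even when shared openly, is that it behaves normally on ordinary inputs. The hypothetical check below contrasts accuracy on clean images with the attack's success rate on triggered images; the function names and arguments are placeholders for illustration, not part of the researchers' tooling.

```python
import numpy as np

def evaluate_backdoor(model_predict, images, labels, add_trigger, target_label):
    """Compare behavior on clean vs. triggered inputs for a possibly backdoored model.
    `model_predict` is any function mapping a batch of images to predicted labels;
    `add_trigger` stamps the trigger patch onto a single image."""
    # Accuracy on unmodified inputs: a backdoored model typically looks normal here.
    clean_preds = model_predict(images)
    clean_accuracy = np.mean(clean_preds == labels)

    # Fraction of triggered inputs pushed to the attacker's chosen label.
    triggered = np.stack([add_trigger(img) for img in images])
    trig_preds = model_predict(triggered)
    attack_success = np.mean(trig_preds == target_label)

    return clean_accuracy, attack_success
```

A downstream user who only measures clean accuracy would see nothing wrong, while the same model reliably misbehaves whenever the trigger appears.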