Trained machine learning models are core components of proprietary products, and entire business models are built around these ML-powered products. Such products are either delivered as a software package containing the trained model or deployed on the cloud with restricted API access for prediction. In this ML-as-a-Service (MLaaS) setting, users are charged on a per-query or per-hour basis, generating revenue for the model owners. Models deployed on the cloud can be vulnerable to model duplication attacks: researchers have shown that these services can be exploited to clone the functionality of the black-box model hidden in the cloud by making repeated requests to the prediction API. In this way, the attacker no longer needs to pay the cloud service provider; in the worst case, attackers can sell the cloned model or use it in their own business.
Traditionally, attackers train their substitute models with a convex optimization algorithm such as gradient descent, tuned with appropriate hyper-parameters. In our research we propose a modification to this traditional approach, called GDALR (Gradient Driven Adaptive Learning Rate), which dynamically updates the learning rate based on the gradient values. This steals the target model in comparatively fewer epochs, decreasing the time and cost of the attack and hence increasing its efficiency. It shows that sophisticated attacks can be launched to steal black-box machine learning models, which increases the risk for MLaaS-based businesses.
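To make the idea of a gradient-driven learning rate concrete, here is an illustrative sketch on a toy one-dimensional objective. The exact GDALR update rule is defined by our method; the variant below is an assumption for illustration only, growing the step while the gradient keeps its sign and shrinking it when the sign flips (overshoot).

```python
def adaptive_lr_minimize(grad_fn, w0, lr=0.1, epochs=50):
    """Gradient descent whose learning rate reacts to the gradient.

    Assumed rule (illustrative, not the paper's exact formula):
    same gradient sign as last step -> grow lr; sign flip -> halve lr.
    """
    w, prev_g = w0, 0.0
    for _ in range(epochs):
        g = grad_fn(w)
        if g * prev_g > 0:       # direction unchanged: accelerate
            lr *= 1.1
        elif g * prev_g < 0:     # overshot the minimum: back off
            lr *= 0.5
        w -= lr * g
        prev_g = g
    return w

# Toy objective f(w) = (w - 3)^2 with gradient 2 * (w - 3);
# the adaptive schedule converges to the minimum at w = 3.
w_star = adaptive_lr_minimize(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(w_star)
```

The same principle, applied per epoch while training the substitute network, is what lets the attack converge in fewer (billed) queries and epochs than a fixed learning rate.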
Through our research and offensive methodology, we demonstrate a significant increase in attack performance, namely lower loss values and better convergence in fewer epochs. The increased performance of GDALR underlines the serious need to revise the current countermeasures for MLaaS, an obligatory and interesting area for future work.