SwitchNet: protecting neural networks by structure obfuscation and switch-controlled inference

Abstract

Training deep learning models requires substantial financial and human resources, so once deployed in untrusted environments, these models immediately attract attackers seeking to steal and misuse them. Traditional model protection methods fall short in jointly preserving model accuracy and performance while providing proactive defense. To this end, we present SwitchNet, an active defense approach that obfuscates the model structure and introduces a switch-controlled mechanism to manage mod