Certifiable Security & Privacy Risk Robustness for Deep Neural Networks

As DNNs are widely deployed in critical applications, it is vital to prevent model misbehaviour and information leakage. In an era where global privacy regulations on data access and collection are tightening, organizations, developers, regulators, and other stakeholders have no formal understanding of what to expect as attacks on DNN-based systems increase, or of how to protect and regulate such systems. A seemingly innocuous DNN deployment could leak confidential financial, health, or biometric details of potentially millions of people who, intentionally or unintentionally, provided the data used to build the model. Hence, both for users to trust AI tools and for regulators to govern them effectively, models such as DNNs must provide certifiable guarantees of security and privacy, which is the core of the proposed research.

Investigator – Dr Suranga Seneviratne
PhD Student – Naveen Karunanayake