Security is a key enabler, because the data used by the algorithms must be handled in a privacy-preserving way, and because machine learning algorithms, when deployed in the real world, must be protected from adversaries attempting to steal their value and/or alter their effectiveness. Security is also a highly relevant application, because security itself can take advantage of machine learning techniques to identify and mitigate attacks. In the context of the Horizon 2020 projects CPSoSAware and EVEREST, IDSIA is currently exploring both aspects.
The research aims, on the one hand, at building methodologies, tools, and architectures to ensure security and privacy in machine learning applications, and, on the other, at exploring the use of machine learning techniques for the early detection of attacks and malicious activity, ultimately improving the resilience of systems.
The main research challenges are:
- Conceive and experimentally validate methods to protect machine learning algorithms from physical and side-channel attacks without affecting their performance (or, as a complementary approach, explore novel machine learning structures that are inherently easier to protect against such attacks)
- Design and validate algorithms and architectures that preserve data privacy in machine learning. Solutions based on homomorphic encryption and/or scalable federated learning are suitable candidates, but they must be improved to become practical, and their robustness against different attacks must be carefully assessed.
- Design and practically validate machine learning techniques to detect and react to attacks carried out against large-scale data analytics applications. When real-time operation and a limited energy footprint are strict requirements, the challenge is to ensure the effectiveness of the machine learning algorithms while minimally affecting system performance.
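To make the first challenge concrete: a standard countermeasure against side-channel leakage is masking, where each secret value is split into random shares so that no single intermediate value correlates with the secret. The sketch below illustrates first-order Boolean masking on a linear (XOR) operation; it is a didactic example, not a description of the projects' actual countermeasures, and all names are illustrative.

```python
import secrets

def mask(value, bits=8):
    """Split a secret into two random shares (first-order Boolean masking)."""
    r = secrets.randbits(bits)
    return r, value ^ r

def masked_xor(shares_a, shares_b):
    """XOR is linear over GF(2), so it can be applied share-wise:
    the two secrets are never recombined during the computation."""
    a0, a1 = shares_a
    b0, b1 = shares_b
    return a0 ^ b0, a1 ^ b1

def unmask(shares):
    """Recombine shares only at the very end, outside the leaky computation."""
    s0, s1 = shares
    return s0 ^ s1

a, b = 0x3C, 0xA5
sa, sb = mask(a), mask(b)
result = unmask(masked_xor(sa, sb))  # equals a ^ b == 0x99
```

Non-linear operations (as found inside neural-network layers or cipher S-boxes) are much harder to mask without a performance penalty, which is precisely why protecting machine learning algorithms this way, without degrading them, remains an open research question.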
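For the privacy-preserving direction, federated learning keeps raw data on each client and shares only model updates. A minimal federated-averaging (FedAvg-style) round can be sketched as follows; the data, model, and hyperparameters are purely illustrative, not those used in the projects.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain logistic-regression gradient descent.
    Only the resulting weights leave the client, never the data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (preds - y)) / len(y)
    return w

def federated_average(global_w, client_data):
    """Server-side aggregation: weighted average of the client updates."""
    updates = [local_update(global_w, X, y) for X, y in client_data]
    sizes = [len(y) for _, y in client_data]
    return np.average(updates, axis=0, weights=sizes)

# Two hypothetical clients holding private datasets (never sent to the server).
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 3))
    y = (X[:, 0] > 0).astype(float)  # label depends only on the first feature
    clients.append((X, y))

w = np.zeros(3)
for _ in range(10):  # ten federated rounds
    w = federated_average(w, clients)
```

Even this simple scheme leaks information through the shared updates (e.g. via gradient-inversion attacks), which is why the research challenge above stresses assessing robustness, not just functionality.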
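For the third challenge, the real-time and energy constraints rule out heavyweight models; a constant-memory streaming detector is the kind of baseline such work starts from. The sketch below flags observations far from a rolling window's mean; the class, window size, and threshold are illustrative assumptions, not components of the projects.

```python
from collections import deque
import math

class StreamingAnomalyDetector:
    """Constant-memory anomaly detector over a rolling window.
    Suited to settings where real-time operation and a limited
    energy footprint are strict requirements."""

    def __init__(self, window=100, threshold=4.0, warmup=10):
        self.buf = deque(maxlen=window)  # bounded memory
        self.threshold = threshold
        self.warmup = warmup

    def observe(self, x):
        """Return True if x deviates from the window mean by more
        than `threshold` standard deviations."""
        is_anomaly = False
        if len(self.buf) >= self.warmup:
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = math.sqrt(var) or 1e-9  # avoid division by zero
            is_anomaly = abs(x - mean) / std > self.threshold
        self.buf.append(x)
        return is_anomaly

det = StreamingAnomalyDetector(window=50, threshold=4.0)
normal = [det.observe(10.0 + 0.1 * (i % 5)) for i in range(50)]
spike = det.observe(100.0)  # a value far outside the learned range
```

Each observation costs O(window) time and memory with no model training at all; the research question is how much detection effectiveness richer machine learning models can add within the same resource budget.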