Security analysis of brain-computer interfaces
This idea was proposed in 2021 as a good starter project, and was completed by Malachy O'Connor Brown and Oscar Hill, supervised by Anil Madhavapeddy, Zahra Tarkhani and Lorena Qendro.
Brain-Computer Interface (BCI) technologies, both invasive and non-invasive, are increasingly used in a wide range of applications, from healthcare to smart communication and control. Most BCI applications are safety-critical or privacy-sensitive. However, the vast potential of BCI and its ever-growing market size have distracted the BCI community from significant security and privacy threats. In this research, we first investigate the security and privacy threats of various BCI devices and applications, from machine learning adversarial threats to untrusted systems and malicious applications. Then, we propose a hybrid framework for analyzing and mitigating these threats using effective combinations of ML robustness techniques, information flow control, and systems/hardware security.
There were two separate internship projects that emerged from this, worked on by Malachy O'Connor Brown and Oscar Hill. They were:
- Security analysis of BCI systems. We explore the impact of current security threats on BCI stacks, including applications, frameworks, libraries, and systems abstractions. You will also investigate the possibility of new attack vectors and build tools to make the security analysis easier and more automated (and more fun!). You need development skills in C/C++ and scripting languages (e.g., Python). Experience with embedded devices, OSes and sandboxes, reverse engineering, and threat analysis is preferred.
- Adversarial attacks on BCI. We explore various methods to detect and analyze security threats against BCI ML models, including attacks based on perturbed inputs, inference, and model patterns. You need development skills (e.g., C, C++, Python) and experience with at least one ML/deep learning framework such as PyTorch or TensorFlow. Previous work on embedded devices and adversarial attacks is preferred.
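To give a flavour of the perturbed-input attacks mentioned above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy classifier. This is purely illustrative and not from the project itself: the logistic-regression "decoder", its weights, and the feature vector are all made up to stand in for a real BCI signal classifier.

```python
import numpy as np

# Hypothetical toy model: a logistic-regression classifier standing in
# for a BCI signal decoder. These weights are illustrative only.
w = np.array([0.8, -1.2, 0.5])
b = 0.1

def predict(x):
    """Probability that the input signal belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y, eps):
    """Fast Gradient Sign Method: nudge each input feature by +/- eps in
    the direction that increases the loss for the true label y."""
    p = predict(x)
    # For binary cross-entropy, the gradient of the loss w.r.t. the
    # input of a linear model is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5, -0.3])  # a made-up feature vector
y = 1                           # its true label
x_adv = fgsm_perturb(x, y, eps=0.5)
print(predict(x), predict(x_adv))
```

With these numbers the small, bounded perturbation is enough to push the model's confidence in the true class from above 0.5 to well below it, which is exactly the kind of fragility an adversary could exploit in a safety-critical BCI pipeline.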
The results of this work were written up in Enhancing the Security & Privacy of Wearable Brain-Computer Interfaces, which is a really fun but rather worrying read!
Related News
- Enhancing the Security & Privacy of Wearable Brain-Computer Interfaces / Jan 2022
- Information Flow for Trusted Execution / Jan 2020