Differentially-Private and Robust Federated Learning
Description
Federated learning (FL) is a machine learning paradigm that aims to learn collaboratively from decentralized private data owned by entities referred to as clients. However, due to its decentralized nature, FL is susceptible to poisoning attacks, in which malicious clients try to corrupt the learning process by modifying their data or their local model updates. Moreover, the updates sent by the clients may leak information about the private data involved in the learning. This thesis aims to investigate and combine existing robust aggregation techniques in FL with differential privacy techniques.
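As an illustration of how the two ingredients could fit together, here is a minimal PyTorch sketch that combines a standard robust aggregation rule (coordinate-wise median) with clipping and Gaussian noise; the function name, clipping bound, and noise multiplier are illustrative placeholders, not taken from the references.

    import torch

    def dp_robust_aggregate(updates, clip_norm=1.0, noise_mult=1.0):
        # Clip every client update to L2 norm at most clip_norm,
        # bounding the sensitivity of the aggregate (needed for DP).
        clipped = []
        for u in updates:
            scale = min(1.0, clip_norm / (u.norm().item() + 1e-12))
            clipped.append(u * scale)
        stacked = torch.stack(clipped)  # shape: (num_clients, dim)
        # Coordinate-wise median: robust to a minority of poisoned updates.
        agg = stacked.median(dim=0).values
        # Gaussian mechanism: noise calibrated to the clipping bound.
        return agg + noise_mult * clip_norm * torch.randn_like(agg)

    # Toy round: 10 honest clients and 2 clients sending inflated updates.
    updates = [torch.randn(100) for _ in range(10)] + [50.0 * torch.ones(100)] * 2
    aggregate = dp_robust_aggregate(updates)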
References:
[1] - https://arxiv.org/pdf/2304.09762.pdf
[2] - https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9757841
[3] - https://dl.acm.org/doi/abs/10.1145/3465084.3467919
Prerequisites
- Knowledge about machine learning and gradient descent optimization
- Proficiency in Python and PyTorch
- Undergraduate statistics courses
- Prior knowledge about differential privacy is a plus
Contact
marvin.xhemrishi@tum.de
luis.massny@tum.de
Contribution scoring in Federated Learning
Description
Federated learning (FL) is a machine learning paradigm that aims to learn collaboratively from decentralized private data owned by entities referred to as clients. In real-world applications of FL, it is important to score the contribution of each client. The goal of this seminar is to provide a high-level overview of existing contribution-scoring techniques in federated learning, based on [1], [2], and other references.
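As one concrete (and deliberately simple) example of a contribution score, the PyTorch sketch below computes leave-one-out scores. Here eval_fn is a hypothetical placeholder for a model-quality metric (e.g., validation accuracy after applying the aggregated update); the method is illustrative and not taken from [1] or [2].

    import torch

    def leave_one_out_scores(updates, eval_fn):
        # Baseline quality when all client updates are averaged.
        baseline = eval_fn(torch.stack(updates).mean(dim=0))
        scores = []
        for i in range(len(updates)):
            # Re-aggregate without client i; the quality drop is its score.
            rest = [u for j, u in enumerate(updates) if j != i]
            scores.append(baseline - eval_fn(torch.stack(rest).mean(dim=0)))
        return scores

Leave-one-out scoring requires one extra evaluation per client; Shapley-value-based scores generalize this idea by averaging marginal contributions over all client subsets and are typically approximated by sampling.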
References:
[1] - https://ieeexplore.ieee.org/document/10138056
[2] - https://arxiv.org/pdf/2403.07151.pdf
Implementation of model poisoning attacks in federated learning
Description
Federated learning is a machine learning paradigm in which decentralized entities (clients) collaboratively learn using their private data. A central server coordinates the learning process. Because the private data involved is sensitive, it cannot be transferred; instead, each client only shares local model updates with the server. A salient problem in federated learning is the presence of malicious clients, i.e., clients that try to disrupt the learning process. They can do so by corrupting their data and/or by modifying their local model updates. The goal of this project is to understand, through experiments, how model poisoning attacks and defense strategies perform under different federated learning scenarios.
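To make this concrete, the sketch below implements sign flipping, one of the simplest model poisoning attacks; it assumes the attacker can craft an update resembling an honest one, and all values are illustrative (the attacks studied in the references are more elaborate).

    import torch

    def sign_flip_attack(honest_update, scale=10.0):
        # The attacker negates and amplifies a plausible honest update,
        # pushing the global model away from the true descent direction.
        return -scale * honest_update

    # Toy round: 9 honest clients and 1 attacker under plain FedAvg.
    honest = [torch.randn(100) for _ in range(9)]
    updates = honest + [sign_flip_attack(honest[0].clone(), scale=50.0)]
    fedavg = torch.stack(updates).mean(dim=0)  # heavily corrupted
    # A robust rule such as the coordinate-wise median limits the damage:
    robust = torch.stack(updates).median(dim=0).values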
References:
[1] - https://www.ndss-symposium.org/wp-content/uploads/ndss2021_6C-3_24498_paper.pdf
[2] - https://arxiv.org/pdf/1903.03936.pdf
[3] - https://arxiv.org/pdf/2304.00160.pdf
Prerequisites
- Basic knowledge of machine learning
- Python programming skills; knowledge of PyTorch is an advantage
Contact
marvin.xhemrishi@tum.de