Decoding Problems with Quantization
LWR, LWQ, SDP
Description
Due to recent advances in quantum computing, the search for cryptosystems that withstand quantum attacks is of great interest. Lattice- and code-based hardness assumptions are promising candidates, both of which are built on the hardness of solving noisy linear equations [3].
The close relationship between codes and lattices has contributed to advances in both research areas: state-of-the-art solvers employ similar techniques (see, e.g. [4]), and constructions often follow analogous approaches (see, e.g. [5]).
However, not all lattice (or code) concepts currently have counterparts in the other domain.
In this thesis, the concepts of Learning with Rounding (LWR) [1] and Learning with Quantization (LWQ) [2] are applied to codes.
The LWR and LWQ problems are variants of the lattice-based Learning with Errors (LWE) problem in which the small random error that ensures the hardness of the problem is replaced by a deterministic rounding or quantization procedure. Given the close similarity between the lattice-based LWE problem and the code-based Syndrome Decoding (SD) problem, it should be investigated whether the reductions from LWE to LWR and LWQ can be transferred to the SD problem, introducing a notion of Syndrome Decoding with Rounding for codes, and how this notion can be used in code-based cryptography.
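To make the analogy concrete, the following toy sketch (in Python, with illustrative parameters far too small for security; all names are chosen for this example) contrasts an LWE sample, where a small random error is added, with an LWR sample, where the noise stems from deterministic rounding from Z_q down to Z_p:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, p = 8, 97, 8           # toy parameters, not secure

s = rng.integers(0, q, n)    # secret vector in Z_q^n
a = rng.integers(0, q, n)    # public, uniformly random vector

# LWE: add a small random error e to the inner product (mod q)
e = int(rng.integers(-2, 3))
b_lwe = (int(a @ s) + e) % q

# LWR: replace the random error by deterministic rounding,
# b = round((p/q) * (<a, s> mod q)) mod p
b_lwr = int(np.rint((p / q) * (int(a @ s) % q))) % p

print("LWE sample:", b_lwe, " LWR sample:", b_lwr)
```

The thesis question is, roughly, whether an analogous deterministic step can take the role of the random low-weight error e in a syndrome decoding instance s = He^T, and what security guarantees can be proven for it.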
Main Papers:
[1] Alwen, J., Krenn, S., Pietrzak, K., & Wichs, D. (2013). Learning with rounding, revisited: New reduction, properties and applications. In Advances in Cryptology - CRYPTO 2013 (pp. 57-74). Springer.
[2] Lyu, S., Liu, L., & Ling, C. (2024). Learning with Quantization, Polar Quantizer, and Secure Source Coding. Cryptology ePrint Archive.
References:
[3] Weger, V., Gassner, N., & Rosenthal, J. (2022). A Survey on Code-Based Cryptography. arXiv preprint arXiv:2201.07119.
[4] Debris-Alazard, T., Ducas, L., & van Woerden, W. P. (2022). An algorithmic reduction theory for binary codes: LLL and more. IEEE Transactions on Information Theory, 68(5), 3426-3444.
[5] Aguilar Melchor, C., Aragon, N., Bettaieb, S., Bidoux, L., Blazy, O., Deneuville, J.-C., et al. (2018). Hamming quasi-cyclic (HQC). Submission to the NIST Post-Quantum Cryptography Standardization Process.
Supervisor:
Privacy Preserving Federated Learning
Description
Due to strict data protection regulations and the widespread use of artificial intelligence, privacy-preserving federated learning is gaining attention in cryptography. One advantage of this form of machine learning is that the clients' training data does not have to be sent to the server for the gradient computation. Instead, the clients compute the gradients on their own data and send only these to the server, which aggregates the gradients of all clients and thus updates the model with every client's contribution, without ever receiving the data itself. However, even though the data is no longer sent, it has been shown that the gradients alone suffice to partially reconstruct the original data; such attacks are known as model inversion attacks [1]. For this reason, there is a variety of research on how to aggregate the clients' gradients in a "privacy-preserving" way, i.e., in a way such that the individual gradients cannot be derived by any adversary.
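As a minimal illustration of the non-private baseline (a Python sketch; all function names are hypothetical and the gradient is a simple least-squares example), note that the server receives every individual gradient in the clear, which is exactly what model inversion attacks exploit:

```python
import numpy as np

def local_gradient(weights, data):
    # stand-in for the client's real gradient computation
    X, y = data
    return X.T @ (X @ weights - y) / len(y)  # least-squares gradient

def server_aggregate(gradients):
    # the server sees each client's gradient in the clear
    return np.mean(gradients, axis=0)

# toy setup: 3 clients, linear model with 4 weights
rng = np.random.default_rng(1)
weights = np.zeros(4)
clients = [(rng.normal(size=(10, 4)), rng.normal(size=10))
           for _ in range(3)]

grads = [local_gradient(weights, d) for d in clients]  # sent to the server
weights -= 0.1 * server_aggregate(grads)               # one training step
```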
The student should first examine what kinds of adversaries can generally be considered in the privacy-preserving aggregation setting. Which participants can attack the model at which point? To what extent can the clients, the aggregator, or an eavesdropper learn the gradients of the other clients?
Furthermore, roughly three different privacy-preserving aggregation concepts should be outlined, describing in each case how the privacy-preserving aggregation works. For example, privacy-preserving aggregation can be realized to a certain extent using secret sharing [2], homomorphic encryption [3], or differential privacy [4]; a simplified sketch of the masking idea underlying [2] is given below. In addition, the overhead in complexity and communication cost incurred by the additional privacy should be discussed. Since many concepts only protect against certain adversaries, it is also important to explain against which adversaries a concept is secure and against which it is not.
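To give a flavour of the secret-sharing approach of [2] (heavily simplified: no dropouts, no key agreement, and all names are chosen for this sketch), each pair of clients shares a random mask that one of them adds and the other subtracts, so all masks cancel in the sum and the server learns only the aggregate:

```python
import numpy as np

rng = np.random.default_rng(2)
n_clients, dim = 3, 4
grads = [rng.normal(size=dim) for _ in range(n_clients)]  # true gradients

# one shared random mask per client pair (i, j) with i < j
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    m = grads[i].copy()
    for (small, large), r in masks.items():
        if small == i:
            m += r   # smaller index adds the shared mask
        elif large == i:
            m -= r   # larger index subtracts it
    return m

# the server only ever sees the masked updates ...
masked = [masked_update(i) for i in range(n_clients)]
# ... yet their sum equals the sum of the true gradients
assert np.allclose(sum(masked), sum(grads))
```

A single masked update reveals nothing about the underlying gradient as long as the pairwise masks remain secret; the full protocol in [2] additionally handles clients that drop out mid-protocol.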
The referenced papers are just a starting point; feel free to consider other privacy-preserving aggregation concepts.
[1] Geiping, Jonas, et al. "Inverting gradients - how easy is it to break privacy in federated learning?" Advances in Neural Information Processing Systems 33 (2020): 16937-16947.
[2] Bonawitz, Keith, et al. "Practical secure aggregation for privacy-preserving machine learning." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.
[3] Aono, Yoshinori, et al. "Privacy-preserving deep learning via additively homomorphic encryption." IEEE Transactions on Information Forensics and Security 13.5 (2017): 1333-1345.
[4] Geyer, Robin C., Tassilo Klein, and Moin Nabi. "Differentially private federated learning: A client level perspective." arXiv preprint arXiv:1712.07557 (2017).