Abstract:
Federated Learning (FL) frameworks enable multiple clients to collaboratively train a Machine Learning (ML) model without requiring data to leave client devices, supporting applications in which privacy and data security are critical, such as healthcare and finance. In these systems, one of the first steps in the training process is sharing the model between the server and the clients, including both the architecture and the initial weights. However, this model-sharing step introduces a distinct attack surface, exposing FL systems to security threats such as malicious model serialization. This paper presents a systematic analysis of the security risks associated with model sharing in FL systems by examining commonly used techniques, tools, and deployment practices. We show that legacy model formats lacking built-in security mechanisms remain widely adopted, significantly increasing the attack surface, and that the growing popularity of model hubs further amplifies these risks by enabling large-scale distribution of malicious artifacts. While recent approaches have been proposed to improve model security, documented zero-day vulnerabilities demonstrate that the model-sharing process remains fragile in practice. By consolidating existing vulnerabilities and defenses, this work aims to raise awareness of the risks inherent to model sharing and to motivate the adoption of more secure model-sharing practices in privacy-sensitive FL deployments.