High Probability Guarantees for Federated Learning
Abstract
We propose a solution to the lack of high-probability guarantees in Federated Learning (FL) optimization, a gap that can compromise its trustworthiness. Our method, Federated Averaging with post-optimization (FedAvg-PO), enhances the existing Federated Averaging (FedAvg) algorithm with a post-optimization phase that selects the best solution from a list of final model parameters generated by parallel, independent runs of FedAvg. These modifications significantly improve the large-deviation properties of FedAvg and thereby the reliability and robustness of the optimization process in FL. A novel complexity analysis shows that FedAvg-PO can compute accurate, statistically guaranteed solutions in the federated learning setting. Our results further relax the restrictive assumptions common in FL theory by developing new technical tools that may be of independent interest. The accompanying analysis of computational requirements sheds light on the scalability and efficiency of the algorithm, guiding its practical implementation.
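The abstract describes the method only at a high level. As a rough illustration of the best-of-k selection idea, below is a minimal Python sketch on a synthetic least-squares problem. The function names (local_sgd, fedavg, fedavg_po), the loss, the synthetic data, and all hyperparameters are assumptions made for illustration; they do not reproduce the thesis's actual algorithm, selection criterion, or analysis.

import numpy as np

def local_sgd(w, data, lr=0.1, steps=5):
    # One client's local gradient steps on a least-squares loss (illustrative).
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
        w = w - lr * grad
    return w

def fedavg(clients, w0, rounds=20):
    # Plain FedAvg: broadcast the global model, run local updates, average.
    w = w0.copy()
    for _ in range(rounds):
        w = np.mean([local_sgd(w, data) for data in clients], axis=0)
    return w

def fedavg_po(clients, dim, num_runs=5, seed=0):
    # FedAvg-PO sketch: launch independent FedAvg runs from random
    # initializations, then post-optimize by keeping the candidate
    # with the smallest empirical global loss (assumed criterion).
    rng = np.random.default_rng(seed)
    candidates = [fedavg(clients, rng.normal(size=dim)) for _ in range(num_runs)]

    def global_loss(w):
        return np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])

    return min(candidates, key=global_loss)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    dim, n = 3, 50
    w_true = rng.normal(size=dim)
    clients = []
    for _ in range(4):  # four synthetic clients with shifted feature means
        shift = rng.normal(scale=0.5, size=dim)
        X = rng.normal(size=(n, dim)) + shift
        y = X @ w_true + rng.normal(scale=0.1, size=n)
        clients.append((X, y))
    w_hat = fedavg_po(clients, dim)
    print("estimation error:", np.linalg.norm(w_hat - w_true))

Selecting the best of several independent runs is what sharpens the large-deviation behavior: each run fails to reach a good solution with some probability, and the failure probability of the best-of-k candidate decays exponentially in the number of runs.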
Degree
M.S.
Advisors
Hashemi, Purdue University.
Subject Area
Communication, Artificial intelligence, Mathematics