Federated learning provides an effective paradigm to jointly optimize a model that benefits from rich distributed data while protecting data privacy. Nonetheless, the heterogeneous nature of distributed data makes it challenging to define and ensure fairness among local agents. For instance, it is intuitively "unfair" for agents with high-quality data to sacrifice their performance due to other agents with low-quality data. Currently popular egalitarian and weighted equity-based fairness measures suffer from this pitfall. In this work, we formally represent this problem and address these fairness issues using concepts from cooperative game theory and social choice theory. We model the task of learning a shared predictor in the federated setting as a fair public decision-making problem, and then define the notion of core-stable fairness: given $N$ agents, there is no subset of agents $S$ that can benefit significantly by forming a coalition among themselves based on their utilities $U_N$ and $U_S$ (i.e., $\frac{|S|}{N} U_S \geq U_N$). Core-stable predictors are robust to low-quality local data from some agents, and they additionally satisfy Proportionality and Pareto-optimality, two well-sought-after fairness and efficiency notions within social choice. We then propose an efficient federated learning protocol, CoreFed, to optimize a core-stable predictor. CoreFed determines a core-stable predictor when the loss functions of the agents are convex. CoreFed also determines approximate core-stable predictors when the loss functions are not convex, as with smooth neural networks. We further show the existence of core-stable predictors in more general settings using Kakutani's fixed-point theorem. Finally, we empirically validate our analysis on two real-world datasets, and we show that CoreFed achieves higher core-stable fairness than FedAvg while maintaining similar accuracy.
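As an illustrative aside, the core-stability condition above can be checked by brute force over coalitions. The sketch below is our own reading of the condition; the coalition-utility oracle and helper names are assumptions, not part of the CoreFed protocol itself.

```python
from itertools import combinations

def is_core_stable(utilities_global, coalition_utility, n_agents, tol=1e-9):
    """Brute-force check of the core-stability condition from the abstract.

    utilities_global: list where utilities_global[i] = U_N for agent i under
        the shared predictor trained with all N agents.
    coalition_utility: callable taking a coalition S (tuple of agent indices)
        and an agent i in S, returning agent i's utility U_S under the best
        predictor S could train on its own (assumed given, e.g. by retraining).
    """
    agents = range(n_agents)
    for size in range(1, n_agents + 1):
        for S in combinations(agents, size):
            scale = len(S) / n_agents  # a blocking coalition's |S|/N factor
            if all(scale * coalition_utility(S, i) >= utilities_global[i] + tol
                   for i in S):
                return False  # S can deviate and benefit: not core-stable
    return True
```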
In federated learning, fair prediction with respect to protected groups is an important constraint for many applications. Unfortunately, prior work studying group-fair federated learning often lacks formal convergence or fairness guarantees. In this work, we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically grounded approach to group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches.
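To make the Bounded Group Loss setup concrete, here is a minimal penalized-risk sketch. The Lagrangian-style penalty and the function names are our own assumptions; the paper's actual method is a scalable federated optimizer with formal guarantees.

```python
import numpy as np

def bgl_penalized_risk(model_loss, X, y, groups, bound, lam):
    """Empirical risk plus a penalty for groups whose loss exceeds `bound`.

    model_loss: callable (X, y) -> array of per-example losses (assumed
        differentiable in the model parameters in a real implementation).
    groups: integer group id per example; `bound` is the Bounded Group Loss
        threshold; `lam` weights the constraint violations.
    """
    losses = np.asarray(model_loss(X, y))
    risk = losses.mean()
    penalty = 0.0
    for g in np.unique(groups):
        group_loss = losses[groups == g].mean()
        penalty += max(0.0, group_loss - bound)  # only violated constraints
    return risk + lam * penalty
```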
Federated learning (FL) provides an effective paradigm to train machine learning models with privacy protection. However, recent studies show that FL is subject to various security, privacy, and fairness threats due to potentially malicious and heterogeneous local agents. For instance, it is vulnerable to local adversarial agents who contribute only low-quality data, with the goal of harming the performance of agents with high-quality data. Such attacks thus break existing definitions of fairness in FL, which mainly focus on a certain notion of performance parity. In this work, we aim to address this limitation and propose a formal definition of fairness via agent-awareness for FL (FAA), which takes the heterogeneous data contributions of local agents into account. In addition, we propose a fair FL training algorithm based on agent clustering (FOCUS) to achieve FAA. Theoretically, we prove the convergence and optimality of FOCUS under mild conditions for linear models and for general convex loss functions with bounded smoothness. We also prove that FOCUS always achieves higher fairness measured by FAA compared with the standard FedAvg protocol, under both linear models and general convex loss functions. Empirically, we evaluate FOCUS on four datasets, including synthetic data, images, and texts, under different settings, and we show that FOCUS achieves significantly higher fairness based on FAA while maintaining similar or even higher prediction accuracy compared with FedAvg.
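The clustering idea can be sketched as an EM-style round in which each agent joins the cluster model that best fits its data and models are then averaged within clusters. This is a minimal sketch under our own assumptions, not necessarily the exact FOCUS update.

```python
import numpy as np

def cluster_assignment_round(cluster_models, client_eval, client_update, clients):
    """One EM-style round of clustering-based federated training.

    cluster_models: list of parameter vectors (np.ndarray), one per cluster.
    client_eval: callable (client, model) -> local empirical loss.
    client_update: callable (client, model) -> locally trained parameters.
    """
    # Each client joins the cluster whose model fits its local data best.
    assignments = {
        c: int(np.argmin([client_eval(c, m) for m in cluster_models]))
        for c in clients
    }
    new_models = []
    for k, model in enumerate(cluster_models):
        members = [c for c in clients if assignments[c] == k]
        if members:
            updates = [client_update(c, model) for c in members]
            new_models.append(np.mean(updates, axis=0))  # FedAvg within cluster
        else:
            new_models.append(model)  # keep an empty cluster's model as-is
    return new_models, assignments
```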
Federated learning is typically considered a beneficial technology which allows multiple agents to collaborate with each other, improve the accuracy of their models, and solve problems that are otherwise too data-intensive / expensive to be solved individually. However, under the expectation that other agents will share their data, rational agents may be tempted to engage in detrimental behavior such as free-riding, where they contribute no data but still enjoy an improved model. In this work, we propose a framework to analyze the behavior of such rational data generators. We first show how a naive scheme leads to catastrophic levels of free-riding, where the benefits of data sharing are completely eroded. Then, using ideas from contract theory, we introduce accuracy-based mechanisms to maximize the amount of data generated by each agent. These prevent free-riding without requiring any payment mechanism.
Recently, many algorithms have been proposed for learning a fair classifier from decentralized data. However, many theoretical and algorithmic questions remain open. First, is federated learning necessary, i.e., can we simply train locally fair classifiers and aggregate them? In this work, we first propose a new theoretical framework, with which we demonstrate that federated learning can strictly boost model fairness compared with such non-federated algorithms. We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data. To bridge this gap, we propose FedFB, a private fair learning algorithm on decentralized data. The key idea is to modify the FedAvg protocol so that it can effectively mimic centralized fair learning. Our experimental results show that FedFB significantly outperforms existing approaches, sometimes matching the performance of the centrally trained model.
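A minimal sketch of the server-side reweighting flavor of such an approach is below; the specific update rule is our own assumption (FedFB itself builds on the FairBatch update), shown only to illustrate how a FedAvg-style server could steer group sampling weights toward fairer outcomes.

```python
def update_group_weights(group_weights, group_losses, step=0.1):
    """Server-side reweighting step: upweight protected groups that currently
    incur above-average loss, so clients sample them more heavily in the next
    round (an illustrative rule, not FedFB's exact FairBatch-based update).

    group_weights: dict group -> sampling weight, summing to 1.
    group_losses: dict group -> aggregated empirical loss reported by clients.
    """
    mean_loss = sum(group_losses.values()) / len(group_losses)
    for g in group_weights:
        group_weights[g] += step * (group_losses[g] - mean_loss)
        group_weights[g] = max(group_weights[g], 0.0)
    total = sum(group_weights.values())
    return {g: w / total for g, w in group_weights.items()}
```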
We present a federated learning framework that is designed to robustly deliver good predictive performance across individual clients with heterogeneous data. The proposed approach hinges upon a superquantile-based learning objective that captures the tail statistics of the error distribution over heterogeneous clients. We present a stochastic training algorithm that interleaves differentially private client reweighting steps with federated averaging steps. The proposed algorithm is supported by finite-time convergence guarantees that cover both convex and non-convex settings. Experimental results on benchmark datasets for federated learning demonstrate that our approach is competitive with classical ones in terms of average error and outperforms them in terms of the tail statistics of the error.
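The superquantile (also known as conditional value-at-risk) of the client error distribution can be computed directly. The sketch below only illustrates the tail-focused objective, not the paper's differentially private training algorithm.

```python
import numpy as np

def superquantile(losses, theta):
    """Superquantile (CVaR) at level theta: the mean of the worst
    (1 - theta) fraction of losses, i.e. the tail statistic that the
    learning objective targets.
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    q = np.quantile(losses, theta)   # the theta-quantile of client losses
    tail = losses[losses >= q]       # the worst-performing tail
    return tail.mean()
```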
Federated learning is an emerging decentralized machine learning scheme that allows multiple data owners to work collaboratively while ensuring data privacy. The success of federated learning depends largely on the participation of data owners. To sustain and encourage data owners' participation, it is crucial to fairly evaluate the quality of the data provided by the data owners and reward them correspondingly. The federated Shapley value, recently proposed by Wang et al. [Federated Learning, 2020], is a measure of data value under the federated learning framework that satisfies many desirable properties for data valuation. However, there remain factors of potential unfairness in the design of the federated Shapley value, because two data owners with identical local data may not receive the same evaluation. We propose a new measure, called the completed federated Shapley value, to improve the fairness of the federated Shapley value. The design depends on completing a matrix consisting of all the possible contributions by different subsets of the data owners. Leveraging concepts and tools from optimization, this matrix is shown to be approximately low-rank under mild conditions. Both theoretical analysis and empirical evaluation verify that the proposed measure improves fairness in many circumstances.
Researchers in fair machine learning (ML) have coalesced around several fairness criteria that provide formal definitions of fairness for ML models. However, these criteria have some serious limitations. We identify four key shortcomings of these formal fairness criteria and aim to help address these problems by extending performative prediction to include a distributionally robust objective.
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.
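A rough sketch of the EM-like structure is below: each client computes soft responsibilities over the underlying components, and its personalized model is the responsibility-weighted mixture. The exponential-of-negative-loss responsibility rule is our own simplification, not the paper's exact derivation.

```python
import numpy as np

def e_step(client_losses, mixing):
    """E-step for one client: posterior weight of each underlying component,
    where a component's fit is summarized by its loss on the client's data.

    client_losses: shape (M,), loss of each of M component models on this
        client's data; mixing: prior mixture weights, shape (M,).
    """
    scores = np.asarray(mixing) * np.exp(-np.asarray(client_losses))
    return scores / scores.sum()

def personalized_model(component_params, responsibilities):
    """A client's personalized model as a responsibility-weighted mixture.

    component_params: array of shape (M, d), one parameter vector per
        component; responsibilities: shape (M,), from e_step.
    """
    return np.tensordot(responsibilities, component_params, axes=1)
```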
A key learning scenario in large-scale applications is that of federated learning, where a centralized model is trained based on data originating from a large number of clients. We argue that, with the existing training and inference, federated models can be biased towards different clients. Instead, we propose a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions. We further show that this framework naturally yields a notion of fairness. We present data-dependent Rademacher complexity guarantees for learning with this objective, which guide the definition of an algorithm for agnostic federated learning. We also give a fast stochastic optimization algorithm for solving the corresponding optimization problem, for which we prove convergence bounds, assuming a convex loss function and hypothesis set. We further empirically demonstrate the benefits of our approach in several datasets. Beyond federated learning, our framework and algorithm can be of interest to other learning scenarios such as cloud computing, domain adaptation, drifting, and other contexts where the training and test distributions do not coincide.
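The agnostic objective is a minimax problem over mixture weights. A simple descent-ascent sketch is below: descend in the model parameters for the current mixture, ascend in the mixture toward the worst case. The simplex projection is standard; the specific step rule is our own illustration, not the paper's stochastic algorithm.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def agnostic_fl_step(w, lam, client_grads, client_losses, eta_w=0.1, eta_l=0.1):
    """One descent-ascent step on max_lam sum_k lam_k L_k(w):
    descend in w under the current mixture lam, then move lam toward
    the clients with the highest losses (the worst-case mixture).
    """
    grad_w = sum(l * g for l, g in zip(lam, client_grads))
    w = w - eta_w * grad_w
    lam = project_simplex(lam + eta_l * np.asarray(client_losses))
    return w, lam
```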
We present a new perspective on loss minimization and the recent notion of Omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability. For a set of statistical tests--based on a collection of losses and hypothesis class--a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature's true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses. This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
We consider open federated learning (FL) systems, where clients may join and/or leave the system during the FL process. Given the variance in the number of clients present, convergence to a fixed model cannot be guaranteed in open systems. Instead, we resort to a new performance metric that we term the stability of open FL systems, which quantifies the magnitude of the learned model in open systems. Under the assumption that local clients' functions are strongly convex and smooth, we theoretically quantify the stability radius of two FL algorithms, namely local SGD and local Adam. We observe that this radius depends on several key parameters, including the condition number of the function as well as the variance of the stochastic gradient. Our theoretical results are further verified by numerical simulations on both synthetic and real-world benchmark datasets.
We propose a novel framework to study asynchronous federated learning optimization with delays in gradient updates. Our theoretical framework extends the standard FedAvg aggregation scheme by introducing stochastic aggregation weights to represent the variability of the clients' update times, due for example to heterogeneous hardware capabilities. Our formalism applies when clients have heterogeneous datasets and perform at least one step of stochastic gradient descent (SGD). We demonstrate convergence for such a scheme and provide sufficient conditions for the related minimum to be the optimum of the federated problem. We show that our general framework applies to existing optimization schemes including centralized learning, FedAvg, asynchronous FedAvg, and FedBuff. The theory provided here allows drawing meaningful guidelines for designing federated learning experiments in heterogeneous conditions. In particular, we develop in this work FedFix, a novel extension of FedAvg enabling efficient asynchronous federated training while preserving the convergence stability of synchronous aggregation. We empirically demonstrate our theory on a series of experiments showing that asynchronous FedAvg leads to fast convergence at the cost of stability, and we finally demonstrate the improvements of FedFix over synchronous and asynchronous FedAvg.
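A minimal sketch of aggregation with stochastic, delay-dependent weights is below; the delay-based discount is our own assumption to illustrate the formalism, not FedFix's actual weighting rule.

```python
import numpy as np

def async_aggregate(w_server, deltas, delays, eta=1.0):
    """One asynchronous aggregation step: apply whichever client updates
    have arrived, with aggregation weights discounted by delay (staler
    updates, possibly computed on old server models, count less).

    deltas: list of client parameter updates (np.ndarray).
    delays: rounds elapsed since each client fetched the server model.
    """
    weights = np.array([1.0 / (1 + d) for d in delays])  # staler => smaller
    weights /= weights.sum()
    update = sum(p * d for p, d in zip(weights, deltas))
    return w_server + eta * update
```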
To study the resilience of distributed learning, the "Byzantine" literature considers a strong threat model where workers can report arbitrary gradients to the parameter server. Whereas this model helped obtain several fundamental results, it has sometimes been considered unrealistic, when the workers are mostly trustworthy machines. In this paper, we show a surprising equivalence between this model and data poisoning, a threat considered much more realistic. More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic). This equivalence makes it possible to obtain new impossibility results on the resilience of any "robust" learning algorithm to data poisoning in highly heterogeneous applications, as corollaries of existing impossibility theorems on Byzantine machine learning. Moreover, using our equivalence, we derive a practical attack that we show (theoretically and empirically) to be highly effective against classical personalized federated learning models.
We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification. Theoretically, we illustrate a fundamental connection between $\alpha$-loss and Arimoto conditional entropy, verify the classification-calibration of $\alpha$-loss in order to demonstrate asymptotic optimality via Rademacher complexity generalization techniques, and build upon a notion called strictly local quasi-convexity in order to quantitatively characterize the optimization landscape of $\alpha$-loss. Practically, we perform class imbalance, robustness, and classification experiments on benchmark image datasets using convolutional neural networks. Our main practical conclusion is that certain tasks may benefit from tuning $\alpha$-loss away from log-loss ($\alpha = 1$), and to this end we provide simple heuristics for the practitioner. In particular, navigating the $\alpha$ hyperparameter can readily provide superior model robustness to label flips ($\alpha > 1$) and sensitivity to imbalanced classes ($\alpha < 1$).
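A direct transcription of the $\alpha$-loss on the probability assigned to the true class is below; the clipping for numerical stability is our own addition.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """Alpha-loss of the probability p_true assigned to the correct class:
    (alpha / (alpha - 1)) * (1 - p ** ((alpha - 1) / alpha)).

    Recovers the log-loss at alpha = 1 and the exponential loss (1 - p) / p
    at alpha = 1/2; as alpha -> infinity it tends to 1 - p, a soft surrogate
    of the 0-1 loss.
    """
    p = np.clip(np.asarray(p_true, dtype=float), 1e-12, 1.0)
    if np.isinf(alpha):
        return 1.0 - p
    if np.isclose(alpha, 1.0):
        return -np.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))
```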
Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, that can inherently provide fairness and robustness benefits, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
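Ditto's per-client personalization can be sketched as SGD on a proximal objective that pulls the personalized model toward the global one; the step below is a minimal sketch with assumed helper names.

```python
def ditto_local_step(v_k, w_global, grad_fk, lam, lr=0.01):
    """One SGD step on Ditto's personalized objective
    f_k(v_k) + (lam / 2) * ||v_k - w*||^2, where w* is the global model.

    grad_fk: callable returning the gradient of client k's local loss.
    """
    g = grad_fk(v_k) + lam * (v_k - w_global)  # proximal pull toward w*
    return v_k - lr * g
```

Here lam interpolates between purely local training (lam = 0) and following the global model (large lam), which is the knob that trades personalization against the fairness and robustness benefits of the shared model.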
The uneven distribution of local data across different edge devices (clients) results in slow model training and reduced accuracy in federated learning. Naive federated learning (FL) strategies and most alternative solutions attempt to achieve more fairness by weighting the deep learning models across clients. This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew, in which groups of clients have local data with similar distributions, causing the global model to converge to an overfitted solution. To deal with non-IID data, particularly cluster-skewed data, we propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor (which will be used as the weight in the aggregation process). Extensive experiments on a suite of federated datasets confirm that the proposed FedDRL yields favorable improvements over the FedAvg and FedProx methods, e.g., up to 4.05% and 2.17% on average, respectively, on the CIFAR-100 dataset.
Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data stored at multiple users while avoiding moving the data off-device. However, while data never leaves users' devices, privacy still cannot be guaranteed, since significant computations on users' training data are shared in the form of trained local models. These local models have recently been shown to pose substantial privacy threats through different privacy attacks, such as model inversion attacks. As a remedy, secure aggregation (SA) has been developed as a framework to preserve privacy in FL, by guaranteeing that the server can only learn the global aggregated model update, rather than the individual model updates. While SA ensures that no additional information about the individual model updates is leaked beyond the aggregated model update, there are no formal guarantees on how much privacy FL with SA can actually offer, since information about an individual dataset can still potentially leak through the aggregated model computed at the server. In this work, we perform a first analysis of the formal privacy guarantees of FL with SA. Specifically, we use mutual information (MI) as a quantification metric and derive bounds on how much information about each user's dataset can leak through the aggregated model update. When the FedSGD aggregation algorithm is used, our theoretical bounds show that the amount of privacy leakage reduces linearly with the number of users participating in FL with SA. To validate our theoretical bounds, we use an MI neural estimator to empirically evaluate the privacy leakage under different FL setups on the MNIST and CIFAR10 datasets. Our experiments verify the theoretical bounds for FedSGD: privacy leakage decreases as the number of users and the local batch size grow, and increases with the number of training rounds.
Federated learning provides a communication-efficient and privacy-preserving training process by enabling the learning of statistical models with massive participants while keeping their data in local clients. However, standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions from outliers, systematic mislabeling, or even adversaries. In addition, service providers are often prohibited from verifying the quality of data samples due to concerns over user data privacy. In this paper, we address this challenge by proposing Auto-weighted Robust Federated Learning (ARFL), a novel approach that jointly learns the global model and the weights of local updates to provide robustness against corrupted data sources. We prove a learning bound on the expected risk with respect to the predictor and the client weights, which guides the definition of the robust federated learning objective. The weights are allocated by comparing each client's empirical loss against the average loss of the best p clients, so that clients with significantly higher losses are downweighted, reducing their contributions to the global model. We show that this approach achieves robustness when the data of the corrupted clients is distributed differently from that of the benign ones. To optimize the objective function, we propose a communication-efficient algorithm based on the blockwise minimization paradigm. We conduct experiments on multiple benchmark datasets, including CIFAR-10, FEMNIST, and Shakespeare, considering different deep neural network models. The results show that our solution is robust against different scenarios, including label shuffling, label flipping, and noisy features, and outperforms the state-of-the-art methods in most scenarios.
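An illustrative reweighting in the spirit of the description above is sketched below; ARFL's actual weights are derived from its learning bound and optimized jointly with the model, so the exact rule here is our own assumption.

```python
import numpy as np

def arfl_style_weights(client_losses, p, temperature=1.0):
    """Downweight clients whose empirical loss is far above the average
    loss of the best p clients; clients near the reference keep full
    weight while high-loss (potentially corrupted) clients shrink.
    """
    losses = np.asarray(client_losses, dtype=float)
    reference = np.mean(np.sort(losses)[:p])   # avg loss of best p clients
    gaps = np.maximum(losses - reference, 0.0)
    raw = np.exp(-gaps / temperature)
    return raw / raw.sum()
```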
Federated learning (FL) has emerged as an instance of the distributed machine learning paradigm that avoids the transmission of data generated on the users' side. Although data are not transmitted, edge devices have to deal with limited communication bandwidths, data heterogeneity, and straggler effects due to the limited computational resources of users' devices. A prominent approach to overcome such difficulties is FedADMM, which is based on the classical two-operator consensus alternating direction method of multipliers (ADMM). The common assumption of FL algorithms, including FedADMM, is that they learn a global model using data only on the users' side and not on the edge server. However, in edge learning, the server is expected to be near the base station and to have direct access to rich datasets. In this paper, we argue that leveraging the rich data on the edge server is much more beneficial than utilizing only user datasets. Specifically, we show that the mere application of FL with an additional virtual user node representing the data on the edge server is inefficient. We propose FedTOP-ADMM, which generalizes FedADMM and is based on a three-operator ADMM-type technique that exploits a smooth cost function on the edge server to learn a global model in parallel with the edge devices. Our numerical experiments indicate that FedTOP-ADMM achieves a substantial gain of up to 33% in communication efficiency over FedADMM (including a virtual user on the edge server) in reaching a desired test accuracy.