We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by Montasser, Hanneke, and Srebro (2019), as well as of a broader family of learners we identify as local learners. Our results are enabled by adopting a global perspective, specifically through a key technical contribution: the global one-inclusion graph, which may be of independent interest, and which generalizes the classical one-inclusion graph due to Haussler, Littlestone, and Warmuth (1994). Finally, as a byproduct, we identify a dimension characterizing, qualitatively and quantitatively, which classes of predictors $\mathcal{H}$ are robustly learnable. This resolves an open problem due to Montasser et al. (2019), and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning.
The one-inclusion graph algorithm of Haussler, Littlestone, and Warmuth achieves an optimal in-expectation risk bound in the standard PAC classification setup. In one of the first COLT open problems, Warmuth conjectured that this prediction strategy always implies an optimal high probability bound on the risk, and hence is also an optimal PAC algorithm. We refute this conjecture in the strongest sense: for any practically interesting Vapnik-Chervonenkis class, we provide an in-expectation optimal one-inclusion graph algorithm whose high probability risk bound cannot go beyond that implied by Markov's inequality. Our construction of these poorly performing one-inclusion graph algorithms uses Varshamov-Tenengolts error correcting codes. Our negative result has several implications. First, it shows that the same poor high-probability performance is inherited by several recent prediction strategies based on generalizations of the one-inclusion graph algorithm. Second, our analysis shows yet another statistical problem that enjoys an estimator that is provably optimal in expectation via a leave-one-out argument, but fails in the high-probability regime. This discrepancy occurs despite the boundedness of the binary loss for which arguments based on concentration inequalities often provide sharp high probability risk bounds.
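For readers unfamiliar with the prediction strategy under discussion, the following is a minimal, illustrative sketch of a one-inclusion graph predictor for a finite hypothesis class. It uses a simple greedy (degeneracy) orientation rather than the optimal orientation from Haussler, Littlestone, and Warmuth's analysis, so it only illustrates the mechanics, not the optimal in-expectation bound; all function and variable names are ours.

```python
from itertools import combinations

def one_inclusion_predict(hypotheses, xs, ys, x_test):
    """Sketch of a one-inclusion graph predictor for a finite class.

    hypotheses: finite list of callables x -> {0, 1}
    xs, ys:     realizable labeled sample; x_test: the point to predict on.
    """
    pts = list(xs) + [x_test]
    # Vertices: distinct behaviors of the class on the n+1 points.
    verts = sorted({tuple(int(h(x)) for x in pts) for h in hypotheses})
    # Edges: behaviors differing in exactly one coordinate.
    adj = {v: set() for v in verts}
    for u, v in combinations(verts, 2):
        if sum(a != b for a, b in zip(u, v)) == 1:
            adj[u].add(v)
            adj[v].add(u)
    # Orient edges by greedy min-degree peeling (a degeneracy orientation).
    # The HLW analysis instead uses an orientation with out-degree at most
    # the VC dimension d, which yields expected error <= d/(n+1).
    head = {}
    remaining = set(verts)
    while remaining:
        u = min(remaining, key=lambda w: len(adj[w] & remaining))
        for v in adj[u] & remaining:
            head[frozenset((u, v))] = v  # orient u -> v
        remaining.remove(u)
    # Vertices consistent with the observed labels (at most two, since they
    # can only disagree on the test coordinate).
    n = len(xs)
    consistent = [v for v in verts if all(v[i] == ys[i] for i in range(n))]
    if len(consistent) == 1:
        return consistent[0][-1]
    u, v = consistent
    # Predict the label of the vertex the ambiguous edge points *to*;
    # a mistake then corresponds to an out-edge of the true behavior.
    return head[frozenset((u, v))][-1]

# Tiny demo with threshold functions h_t(x) = [x >= t]:
H = [lambda x, t=t: x >= t for t in (0.0, 1.0, 2.0, 3.0)]
print(one_inclusion_predict(H, xs=[0.5, 2.5], ys=[0, 1], x_test=1.5))
```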
A classical result in learning theory shows the equivalence of PAC learnability of binary hypothesis classes and the finiteness of VC dimension. Extending this to the multiclass setting was an open problem, which was settled in a recent breakthrough result characterizing multiclass PAC learnability via the DS dimension introduced earlier by Daniely and Shalev-Shwartz. In this work we consider list PAC learning where the goal is to output a list of $k$ predictions. List learning algorithms have been developed in several settings before and indeed, list learning played an important role in the recent characterization of multiclass learnability. In this work we ask: when is it possible to $k$-list learn a hypothesis class? We completely characterize $k$-list learnability in terms of a generalization of DS dimension that we call the $k$-DS dimension. Generalizing the recent characterization of multiclass learnability, we show that a hypothesis class is $k$-list learnable if and only if the $k$-DS dimension is finite.
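As a concrete reading of the $k$-list criterion, here is a small sketch (our own notation, not from the paper) of how the empirical error of a $k$-list predictor is scored: a prediction counts as correct when the true label appears among the at most $k$ candidates.

```python
def k_list_error(predictor, sample, k):
    """Empirical error of a k-list predictor: on input x it outputs a list
    of at most k candidate labels, and it errs when the true label is not
    among them (sketch of the learning criterion, names ours)."""
    mistakes = 0
    for x, y in sample:
        candidates = predictor(x)
        assert len(candidates) <= k, "a k-list learner may emit at most k labels"
        mistakes += int(y not in candidates)
    return mistakes / len(sample)
```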
Recently, Robey et al. propose a notion of probabilistic robustness, which, at a high-level, requires a classifier to be robust to most but not all perturbations. They show that for certain hypothesis classes where proper learning under worst-case robustness is \textit{not} possible, proper learning under probabilistic robustness \textit{is} possible with sample complexity exponentially smaller than in the worst-case robustness setting. This motivates the question of whether proper learning under probabilistic robustness is always possible. In this paper, we show that this is \textit{not} the case. We exhibit examples of hypothesis classes $\mathcal{H}$ with finite VC dimension that are \textit{not} probabilistically robustly PAC learnable with \textit{any} proper learning rule. However, if we compare the output of the learner to the best hypothesis for a slightly \textit{stronger} level of probabilistic robustness, we show that not only is proper learning \textit{always} possible, but it is possible via empirical risk minimization.
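One way to write down the distinction the abstract draws, loosely following Robey et al. (notation ours, not taken from the paper): probabilistic robustness at tolerance $\rho$ penalizes only points where more than a $\rho$-fraction of perturbations fools the classifier, whereas worst-case robustness penalizes any fooling perturbation.

```latex
% Probabilistically robust loss at tolerance rho, under a base measure mu
% on the perturbation set Delta, versus the worst-case robust loss
% (schematic; notation ours):
\[
\ell^{\rho}_{\mathrm{prob}}(h; x, y)
  \;=\; \mathbb{1}\Big\{ \Pr_{\delta \sim \mu}\big[\, h(x + \delta) \neq y \,\big] > \rho \Big\},
\qquad
\ell_{\mathrm{rob}}(h; x, y)
  \;=\; \sup_{\delta \in \Delta} \mathbb{1}\big\{ h(x + \delta) \neq y \big\}.
\]
% Taking rho = 0 (with mu of full support) recovers robustness to almost
% every perturbation; the paper's "slightly stronger level" corresponds to
% comparing against a smaller tolerance.
```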
Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules of thumb" from an "easy-to-learn class". (Schapire and Freund '12, Shalev-Shwartz and Ben-David '14.) Formally, we assume that the class of weak hypotheses has bounded VC dimension. We focus on two main questions: (i) Oracle complexity: how many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega({1}/{\gamma^2})$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}({1}/{\gamma})$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far away" from the class be learned? Towards answering the first question, we introduce combinatorial-geometric parameters that capture the expressivity of boosting. As a corollary, we provide an affirmative answer to the second question for well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.
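For contrast with the "deeper" aggregation rules described above, here is a minimal sketch of the classical pattern the paper's lower bound concerns: weak hypotheses drawn from a bounded-VC class (decision stumps) combined by a weighted majority vote, AdaBoost-style. This is the baseline being improved upon, not the paper's algorithm; names are ours.

```python
import numpy as np

def stump_weak_learner(X, y, w):
    """Best decision stump (a bounded-VC 'rule of thumb') under weights w.
    X: (n, d) array; y: array of labels in {-1, +1}; w: weights summing to 1."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = sign * np.where(X[:, j] >= thr, 1, -1)
                err = float(np.sum(w[pred != y]))
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    err, j, thr, sign = best
    return err, (lambda Z: sign * np.where(Z[:, j] >= thr, 1, -1))

def adaboost(X, y, T):
    """Classic AdaBoost: T weak hypotheses combined by a *weighted majority
    vote* -- the 'shallow' aggregation the lower bound above applies to."""
    w = np.full(len(y), 1.0 / len(y))
    hs, alphas = [], []
    for _ in range(T):
        err, h = stump_weak_learner(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)  # guard against degenerate weights
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * h(X))
        w /= w.sum()
        hs.append(h)
        alphas.append(alpha)
    return lambda Z: np.sign(sum(a * h(Z) for a, h in zip(alphas, hs)))
```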
The classic algorithm AdaBoost allows to convert a weak learner, that is, an algorithm producing hypotheses that are slightly better than chance, into a strong learner that achieves arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner, but uses less training data than AdaBoost and all other weak-to-strong learners to achieve the same generalization bounds. A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. Hence, this work settles the sample complexity of the classic problem of constructing a strong learner from a weak learner.
In this work, we investigate the expressiveness of the "conditional mutual information" (CMI) framework of Steinke and Zakynthinou (2020), and the prospect of using it to provide a unified framework for proving generalization bounds in the realizable setting. We first demonstrate that the framework can be used to express non-trivial (but sub-optimal) bounds for any learning algorithm that outputs hypotheses from a class of bounded VC dimension. We then prove that the CMI framework yields the optimal bound on the expected risk for learning halfspaces. This result is an application of our general result showing that stable compression schemes (Bousquet et al., 2020) of size $k$ have uniformly bounded CMI of order $O(k)$. We further show that an inherent limitation of proper learning of VC classes contradicts the existence of a proper learner with constant CMI, which implies a negative resolution of an open problem of Steinke and Zakynthinou (2020). We further study the CMI of empirical risk minimizers (ERMs) of a class $H$ and show that it is possible to output all consistent classifiers (the version space) with bounded CMI if and only if $H$ has a bounded star number (Hanneke and Yang, 2015). Moreover, we prove a general reduction showing that "leave-one-out" analysis is expressible via the CMI framework. As a corollary, we investigate the CMI of the one-inclusion graph algorithm of Haussler et al. (1994). More generally, we show that the CMI framework is universal, in the sense that for every consistent algorithm and data distribution, the expected risk vanishes as the number of samples diverges if and only if the evaluated CMI grows sublinearly in the number of samples.
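For reference, the CMI quantity under discussion can be stated as follows, following Steinke and Zakynthinou (2020), with notation lightly adapted.

```latex
% Conditional mutual information (CMI) of a learner A.  Draw a "supersample"
% of n pairs of i.i.d. examples and a uniform selector string U; the training
% set keeps one example per pair, and CMI measures how much the output of A
% reveals about which one, given the supersample:
\[
\tilde{Z} = (Z_{i,0}, Z_{i,1})_{i=1}^{n} \sim \mathcal{D}^{2n},
\qquad U \sim \mathrm{Unif}(\{0,1\}^{n}),
\qquad \tilde{Z}_U = (Z_{i,U_i})_{i=1}^{n},
\]
\[
\mathrm{CMI}_{\mathcal{D}}(A) \;=\; I\big(A(\tilde{Z}_U);\, U \,\big|\, \tilde{Z}\big).
\]
```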
We consider a model of robust learning in an adversarial environment. The learner gets uncorrupted training data, together with access to the possible corruptions that the adversary may inflict at test time, and its goal is to build a robust classifier that will be evaluated on future adversarial examples. The adversary is limited to $k$ possible corruptions for each input. We model the learner-adversary interaction as a zero-sum game. This model is closely related to the adversarial-examples model of Schmidt et al. (2018) and Madry et al. (2017). Our main results are generalization bounds for binary and multiclass classification, as well as for the real-valued case (regression). For the binary classification setting, we both tighten the generalization bound of Feige et al. (2015) and are able to handle infinite hypothesis classes. The sample complexity improves from $O(\frac{1}{\epsilon^4}\log(\frac{|H|}{\delta}))$ to $O\big(\frac{1}{\epsilon^2}\big(k\,\mathrm{VC}(H)\log^{\frac{3}{2}+\alpha}(k\,\mathrm{VC}(H)) + \log(\frac{1}{\delta})\big)\big)$ for any $\alpha > 0$. Moreover, we extend the algorithm and the generalization bounds from the binary to the multiclass and real-valued cases. Along the way, we obtain results on the fat-shattering dimension and Rademacher complexity of $k$-fold maxima of function classes; these may be of independent interest. For binary classification, Feige et al. (2015) use a regret-minimization algorithm with an ERM oracle as a black box; we adapt this approach to the multiclass and regression settings. The algorithm provides us with near-optimal policies for the players on a given training sample.
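A minimal sketch of the robust empirical risk implicit in this zero-sum game, assuming a finite hypothesis class and a user-supplied `corruptions(x)` that enumerates the at most $k$ allowed corruptions of an input (the interface and names are ours, for illustration only):

```python
def robust_erm(H, corruptions, sample):
    """Empirical risk minimization under the worst-case-over-corruptions loss:
    the adversary may replace each input x by any element of corruptions(x)
    (at most k options, x itself included), and the learner pays the worst case.
    H is a finite hypothesis class; sample is a list of (x, y) pairs."""
    def robust_risk(h):
        return sum(
            max(int(h(z) != y) for z in corruptions(x))
            for x, y in sample
        ) / len(sample)
    return min(H, key=robust_risk)
```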
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, it is surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate, and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions or general losses, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
We study robustness to test-time adversarial attacks in the regression setting with $\ell_p$ losses and arbitrary perturbation sets. We address the question of which function classes are learnable in this setting. We show that classes of finite fat-shattering dimension are learnable; moreover, convex function classes are even properly learnable. In contrast, some non-convex function classes provably require improper learning algorithms. We also discuss extensions to the agnostic setting. Our main technique is the construction of an adversarially robust sample compression scheme whose size is determined by the fat-shattering dimension.
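For reference, the standard definition of the fat-shattering dimension at scale $\gamma$, the quantity governing the size of the compression scheme (included for the reader's convenience; this is the usual textbook definition, not a statement from the paper):

```latex
% Points x_1, ..., x_m are gamma-fat-shattered by a real-valued class F if
% there exist witnesses r_1, ..., r_m such that every sign pattern can be
% realized with a gamma-margin around the witnesses:
\[
\forall b \in \{0,1\}^{m}\ \exists f \in \mathcal{F}:\quad
f(x_i) \ge r_i + \gamma \ \text{ if } b_i = 1,
\qquad
f(x_i) \le r_i - \gamma \ \text{ if } b_i = 0.
\]
% fat_gamma(F) is the largest such m.
```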
Multi-group agnostic learning is a formal learning criterion that is concerned with the conditional risks of predictors within subgroups of a population. The criterion addresses recent practical concerns such as subgroup fairness and hidden stratification. This paper studies the structure of solutions to the multi-group learning problem, and provides simple and near-optimal algorithms for the learning problem.
We study fast rates of convergence in nonparametric online regression, namely where regret is defined with respect to an arbitrary function class of bounded complexity. Our contributions are two-fold: - In the realizable setting of nonparametric online regression with the absolute loss, we propose a randomized proper learning algorithm which achieves a near-optimal mistake bound in terms of the sequential fat-shattering dimension of the hypothesis class. For online classification with a class of Littlestone dimension $d$, our bound reduces to $d \cdot \mathrm{poly}\log T$. This result answers the question of whether proper learners can achieve near-optimal mistake bounds; previously, even for online classification, the best known mistake bound was $\tilde{O}(\sqrt{dT})$. Further, for the real-valued (regression) setting, a near-optimal mistake bound was not known even for improper learners prior to this work. - Using the above result, we exhibit an independent learning algorithm for general-sum binary games of Littlestone dimension $d$, with which each player achieves regret $\tilde{O}(d^{3/4} \cdot T^{1/4})$. This result generalizes analogous results of Syrgkanis et al. (2015), who showed that in finite games the optimal regret can be accelerated from $O(\sqrt{T})$ in the adversarial setting to $O(T^{1/4})$ in the game setting. To establish the above results, we introduce several new techniques, including: a hierarchical aggregation rule to achieve the optimal mistake bound for real-valued classes, a multi-scale extension of the proper online realizable learner of Hanneke et al. (2021), an approach to show that the output of such nonparametric learning algorithms is stable, and a proof that the minimax theorem holds in all online learnable games.
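The hierarchical aggregation rule itself is beyond an abstract-level sketch, but the basic building block it refines, exponentially weighted aggregation over a finite set of predictors under the absolute loss, is easy to state. The following is a generic sketch of that building block, not the paper's algorithm; names are ours.

```python
import numpy as np

def exponential_weights(experts, stream, eta=0.5):
    """Exponentially weighted aggregation over a finite set of real-valued
    predictors under the absolute loss.  `stream` yields (x, y) pairs in order;
    eta is the learning rate."""
    w = np.ones(len(experts))
    predictions = []
    for x, y in stream:
        p = np.array([f(x) for f in experts])
        predictions.append(float(w @ p / w.sum()))  # weighted-average forecast
        w = w * np.exp(-eta * np.abs(p - y))        # penalize each expert's loss
    return predictions
```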
Learning curves plot the expected error of a learning algorithm as a function of the number of labeled input samples. They are widely used by machine learning practitioners as a measure of an algorithm's performance, but classic PAC learning theory cannot explain their behavior. In this paper we introduce a new combinatorial characterization called the VCL dimension, which improves and refines recent results of Bousquet et al. (2021). Our characterization sheds new light on the structure of learning curves by providing fine-grained bounds, and by showing that for classes with finite VCL dimension, the rate of decay can be decomposed into a linear component that depends only on the hypothesis class and an exponential component that also depends on the target distribution. In particular, the finer granularity of the VCL dimension yields lower bounds that are quantitatively stronger than those of Bousquet et al. (2021) and qualitatively stronger than the classical "no free lunch" lower bound. The VCL characterization solves an open problem studied by Antos and Lugosi (1998), who asked in which cases such lower bounds exist. As a corollary, we recover their lower bound for half-spaces in $\mathbb{R}^d$, and we do so in a principled way that should be applicable to other cases as well. Finally, to provide another viewpoint on our work and its comparison with traditional PAC learning bounds, we also present an alternative formulation of our results in a language closer to the PAC setting.
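Schematically, the decomposition described above can be written as follows. This is our own shorthand, suppressing constants and logarithmic factors; the paper's statement is finer-grained.

```latex
% Schematic shape of a learning curve for a class H of finite VCL dimension
% under target distribution D (notation ours): a class-dependent linear
% component plus a distribution-dependent exponential component,
\[
\epsilon(n) \;\approx\;
\underbrace{\frac{C(\mathcal{H})}{n}}_{\text{depends only on } \mathcal{H}}
\;+\;
\underbrace{a(\mathcal{D})\, e^{-b(\mathcal{D})\, n}}_{\text{also depends on } \mathcal{D}},
\]
% where eps(n) denotes the expected error after n labeled samples.
```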
We study the problem of sequential prediction and online minimax regret with stochastically generated features under general loss functions. We introduce a notion of expected worst-case minimax regret that generalizes and encompasses previously known minimax regrets. For this minimax regret, we establish tight upper bounds via a novel concept of stochastic global sequential covering. We show that for hypothesis classes of VC dimension $\mathsf{VC}$ and i.i.d. generated features of length $T$, the cardinality of the stochastic global sequential covering can be upper bounded with high probability (whp) by $e^{O(\mathsf{VC} \cdot \log^2 T)}$. We then improve this bound by introducing a new complexity measure called the Star-Littlestone dimension, and show that classes with Star-Littlestone dimension $\mathsf{SL}$ admit a stochastic global sequential covering of order $e^{O(\mathsf{SL} \cdot \log T)}$. We further establish upper bounds for real-valued classes with finite fat-shattering numbers. Finally, by applying information-theoretic tools from fixed-design minimax regret, we provide lower bounds for the expected worst-case minimax regret. We demonstrate the effectiveness of our approach by establishing tight bounds on the expected worst-case minimax regret for logarithmic loss and general mixable losses.
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
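The "randomized response" primitive behind local algorithms is simple enough to state concretely. A minimal sketch (function names ours): each individual perturbs their own bit before reporting it, and the analyst debiases the aggregate.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Classic local-DP primitive: report the true bit with probability
    e^eps / (1 + e^eps), otherwise flip it; satisfies eps-differential privacy."""
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimate of the true mean of the bits from the noisy reports:
    E[report] = (2p - 1) * mean + (1 - p), so invert that affine map."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)
```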
We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
The fast-spreading adoption of machine learning (ML) by companies across industries poses significant regulatory challenges. One such challenge is scalability: how can regulatory bodies efficiently audit these ML models to ensure they are fair? In this paper, we initiate the study of query-based auditing algorithms that can estimate the demographic parity of ML models in a query-efficient manner. We propose an optimal deterministic algorithm, as well as a practical randomized, oracle-efficient algorithm with comparable guarantees. Furthermore, we make inroads into understanding the optimal query complexity of randomized active fairness estimation algorithms. Our first exploration of active fairness estimation aims to put AI governance on a firmer theoretical footing.
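As a point of reference for what a query-based audit estimates, here is the naive i.i.d.-sampling baseline for the demographic-parity gap. This is a sketch with interfaces of our own choosing; the paper's deterministic and oracle-efficient algorithms are substantially more refined than this estimator.

```python
import numpy as np

def estimate_demographic_parity(model, pool, groups, budget, rng=None):
    """Naive sampling baseline for query-based fairness auditing: spend
    `budget` model queries on a uniform subsample and estimate the positive-
    classification rate per group.  model(x) returns 0/1; groups[i] is the
    group id of pool[i]."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(pool), size=budget, replace=False)
    rates = {}
    for g in set(groups[i] for i in idx):
        sel = [i for i in idx if groups[i] == g]
        rates[g] = float(np.mean([model(pool[i]) for i in sel]))
    vals = list(rates.values())
    return rates, max(vals) - min(vals)  # demographic-parity gap
```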
Determining the optimal sample complexity of PAC learning in the realizable setting was a central open problem in learning theory for decades. Finally, the seminal work by Hanneke (2016) gave an algorithm with a provably optimal sample complexity. His algorithm is based on a careful and structured sub-sampling of the training data and then returning a majority vote among hypotheses trained on each of the sub-samples. While being a very exciting theoretical result, it has not had much impact in practice, in part due to inefficiency, since it constructs a polynomial number of sub-samples of the training data, each of linear size. In this work, we prove the surprising result that the practical and classic heuristic bagging (a.k.a. bootstrap aggregation), due to Breiman (1996), is in fact also an optimal PAC learner. Bagging pre-dates Hanneke's algorithm by twenty years and is taught in most undergraduate machine learning courses. Moreover, we show that it only requires a logarithmic number of sub-samples to reach optimality.
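For concreteness, a minimal sketch of the bagging heuristic in question: train the base learner on bootstrap resamples and aggregate by majority vote (names are ours; per the result above, a logarithmic number of resamples already suffices).

```python
import numpy as np
from collections import Counter

def bagging(train, base_learner, n_bags, rng=None):
    """Breiman-style bagging: fit base_learner on n_bags bootstrap resamples
    (each of size n, drawn with replacement) and predict by majority vote.
    train is a list of (x, y) pairs; base_learner maps a sample to a predictor."""
    rng = rng or np.random.default_rng()
    n = len(train)
    models = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        models.append(base_learner([train[i] for i in idx]))
    def predict(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]
    return predict
```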
The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of high-level, pre-defined groups (such as race or gender), and then ask for approximate parity of some statistic of the classifier (like positive classification rate or false positive rate) across these groups. Constraints of this form are susceptible to (intentional or inadvertent) fairness gerrymandering, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes (such as certain combinations of protected attribute values). We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness, and recently proposed individual notions of fairness, but it raises several computational challenges. It is no longer clear how to even check or audit a fixed classifier to see if it satisfies such a strong definition of fairness. We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses. However, it also suggests that common heuristics for learning can be applied to successfully solve the auditing problem in practice. We then derive two algorithms that provably converge to the best fair distribution over classifiers in a given class, given access to oracles which can optimally solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner (the primal player) and an Auditor (the dual player). Both algorithms compute an equilibrium of this game. We obtain our first algorithm by simulating play of the game by having Learner play an instance of the no-regret Follow the Perturbed Leader algorithm, and having Auditor play best response. This algorithm provably converges to an approximate Nash equilibrium (and thus to an approximately optimal subgroup-fair distribution over classifiers) in a polynomial number of steps. We obtain our second algorithm by simulating play of the game by having both players play Fictitious Play, which enjoys only provably asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the Fictitious Play version using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
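For a finite subgroup class, the auditing problem can at least be solved by enumeration. The following sketch (interfaces ours) computes a mass-weighted false-positive-rate disparity of the kind described above; the hardness result is precisely about avoiding this enumeration when the subgroup class is large or infinite.

```python
def audit_fpr(classifier, data, subgroup_class):
    """Naive audit for false-positive-rate subgroup fairness over a *finite*
    subgroup class.  data is a list of (x, a, y) with protected attributes a;
    each g in subgroup_class maps a -> {0, 1}.  Returns the worst weighted
    disparity and the offending subgroup."""
    negatives = [(x, a) for x, a, y in data if y == 0]
    base_fpr = sum(classifier(x) for x, a in negatives) / len(negatives)
    worst = (0.0, None)
    for g in subgroup_class:
        inside = [(x, a) for x, a in negatives if g(a)]
        if not inside:
            continue
        fpr_g = sum(classifier(x) for x, a in inside) / len(inside)
        # Weight the disparity by the subgroup's mass among negatives.
        gap = abs(fpr_g - base_fpr) * (len(inside) / len(negatives))
        if gap > worst[0]:
            worst = (gap, g)
    return worst
```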
A backdoor data poisoning attack is an adversarial attack in which the attacker injects several watermarked, mislabeled training examples into the training set. The watermark does not impact the test-time performance of the model on typical data; however, the model reliably errs on watermarked examples. To gain a better foundational understanding of backdoor data poisoning attacks, we present a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems. We then use it to analyze important statistical and computational issues surrounding these attacks. On the statistical side, we identify a parameter we call the memorization capacity, which captures the intrinsic vulnerability of a learning problem to a backdoor attack. This allows us to argue about the robustness of several natural learning problems to backdoor attacks. Our results favoring the attacker involve presenting explicit constructions of backdoor attacks, and our robustness results show that some natural problem settings cannot yield successful backdoor attacks. From a computational standpoint, we show that under certain assumptions, adversarial training can detect the presence of backdoors in a training set. We then show that, under similar assumptions, two closely related problems that we call backdoor filtering and robust generalization are nearly equivalent. This implies that it is both asymptotically necessary and sufficient to design algorithms that can identify watermarked examples in the training set in order to obtain learning algorithms that both generalize well to unseen data and are robust to backdoors.
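As a concrete picture of the attack model, a minimal sketch of watermark injection (names and interfaces ours; `watermark` is an assumed function that stamps the trigger pattern onto an input):

```python
def poison(train, watermark, target_label, k):
    """Backdoor poisoning (sketch): append k watermarked, deliberately
    mislabeled copies of clean training points.  A model fit on the result
    typically behaves normally on clean inputs but predicts target_label
    on watermarked ones."""
    poisoned = list(train)
    for x, _ in train[:k]:
        poisoned.append((watermark(x), target_label))
    return poisoned
```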