Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features, and they can ignore complex, equally-predictive ones. This simplicity bias can explain their lack of robustness out of distribution (OOD). The more complex the task to learn, the more likely it is that statistical artifacts (i.e. selection biases, spurious correlations) are simpler than the mechanisms to learn. We demonstrate that the simplicity bias can be mitigated and OOD generalization improved. We train a set of similar models to fit the data in different ways, using a penalty on the alignment of their input gradients. We show theoretically and empirically that this induces the learning of more complex predictive patterns. OOD generalization fundamentally requires information beyond i.i.d. examples, such as multiple training environments, counterfactual examples, or other side information. Our method shows that we can defer this requirement to an independent model selection stage. We obtain SOTA results in visual recognition on biased data and generalization across visual domains. The method, the first to evade the simplicity bias, highlights the need to better understand and control inductive biases in deep learning.
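As a rough illustration of the idea, a diversity penalty of this kind can be implemented by discouraging aligned input gradients across the set of models. The PyTorch sketch below is a minimal rendering of that mechanism, not the authors' exact objective; the cosine-similarity form and the `models` interface are assumptions.

```python
import torch
import torch.nn.functional as F

def input_gradient_alignment_penalty(models, x, y):
    # Compute each model's loss gradient w.r.t. the input, then penalize
    # pairwise alignment (cosine similarity) so the models must rely on
    # different predictive features.
    x = x.clone().requires_grad_(True)
    grads = []
    for model in models:
        loss = F.cross_entropy(model(x), y)
        (g,) = torch.autograd.grad(loss, x, create_graph=True)
        grads.append(g.flatten(1))
    penalty = x.new_zeros(())
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + F.cosine_similarity(grads[i], grads[j], dim=1).mean()
    return penalty  # add this to the sum of the models' task losses
```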
Machine learning (ML) models are typically optimized for their accuracy on a given dataset. However, this predictive criterion rarely captures all desirable properties of a model, in particular how well it matches a domain expert's understanding of a task. Underspecification refers to the existence of multiple models that are indistinguishable in their in-domain accuracy, even though they differ in other desirable properties such as out-of-distribution (OOD) performance. Identifying these situations is critical for assessing the reliability of ML models. We formalize the concept of underspecification and propose a method to identify and partially address it. We train multiple models with an independence constraint that forces them to implement different functions. They discover predictive features that are otherwise ignored by standard empirical risk minimization (ERM), which we then distill into a global model with superior OOD performance. Importantly, we constrain the models to align with the data manifold to ensure that they discover meaningful features. We demonstrate the method on multiple datasets in computer vision (collages, WILDS-Camelyon17, GQA) and discuss general implications of underspecification. Most notably, in-domain performance cannot be used for OOD model selection without additional assumptions.
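The final stage can be pictured as standard knowledge distillation from the set of diverse models into one global model. A minimal sketch, assuming averaged teacher probabilities and a temperature `T` (both illustrative choices, not the paper's exact recipe):

```python
import torch.nn.functional as F

def distill_into_global_model(student_logits, teacher_probs, T=2.0):
    # Match the student's tempered predictive distribution to the
    # (e.g., averaged) predictions of the diverse teacher models.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, teacher_probs, reduction="batchmean") * T * T
```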
Several studies have empirically compared the in-distribution (ID) and out-of-distribution (OOD) performance of various models. They report frequent positive correlations on benchmarks in computer vision and NLP. Surprisingly, they never observe inverse correlations that would suggest necessary trade-offs. This matters for determining whether ID performance can serve as a proxy for OOD generalization. This short paper shows that inverse correlations between ID and OOD performance do occur on real-world benchmarks. They may have been missed in past studies because of a biased selection of models. We show an example of the pattern on the WILDS-Camelyon17 dataset, using models from multiple training epochs and random seeds. Our observations are particularly striking for models trained with a regularizer that diversifies the solutions to the ERM objective. We nuance the recommendations and conclusions made in past studies. (1) High OOD performance does sometimes require trading off ID performance. (2) Focusing on ID performance alone may not lead to optimal OOD performance: it can lead to diminishing and eventually negative returns in OOD performance. (3) Our example reminds us that empirical studies only chart regimes achievable with existing methods: care is warranted when deriving prescriptive recommendations.
Many datasets are underspecified: multiple equally viable solutions exist for a given task. Underspecification can be problematic for methods that learn a single hypothesis, because different functions that achieve low training loss can focus on different predictive features, thus producing widely varying predictions on out-of-distribution data. We propose DivDis, a simple two-stage framework that first learns a diverse collection of hypotheses for a task by leveraging unlabeled data from the test distribution. We then disambiguate by selecting one of the discovered hypotheses using minimal additional supervision, in the form of a few additional labels or inspection of feature visualizations. We demonstrate the ability of DivDis to find hypotheses that use robust features in image classification and natural language processing problems.
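The diversification stage pushes the heads toward statistically independent predictions on the unlabeled target data. Below is a sketch of one way to express that pressure for two classification heads: estimate the joint distribution of their predictions over a batch and penalize its divergence from the product of marginals. The batch-level estimator here is an assumption, not the reference DivDis code.

```python
import torch

def mutual_information_penalty(probs_a, probs_b, eps=1e-8):
    # probs_a, probs_b: [batch, classes] softmax outputs of two heads on
    # unlabeled target-distribution data. Minimizing this estimated mutual
    # information pushes the heads toward independent (diverse) predictions.
    joint = (probs_a.unsqueeze(2) * probs_b.unsqueeze(1)).mean(0)  # [C, C]
    marg_a = probs_a.mean(0)                                       # [C]
    marg_b = probs_b.mean(0)                                       # [C]
    indep = marg_a.unsqueeze(1) * marg_b.unsqueeze(0)              # [C, C]
    return (joint * (joint.add(eps).log() - indep.add(eps).log())).sum()
```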
Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain-invariant learning, which identifies the key issue of learning which features are domain-specific versus domain-invariant. An important assumption in this area is that the training examples are partitioned into "domains" or "environments". Our focus is on the more common setting where such partitions are not provided. We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels, and significantly outperforms ERM on worst-group performance in the Waterbirds and CivilComments datasets. Finally, we establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.
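In spirit, environment inference searches for a soft partition of the training examples that maximizes an invariance penalty under a fixed reference model. A minimal sketch using the IRMv1 penalty with two inferred environments follows; the two-environment setup and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, labels, weights):
    # Squared gradient of the weighted risk w.r.t. a dummy scale on the
    # logits, as in IRMv1 (Arjovsky et al., 2019).
    scale = torch.ones(1, requires_grad=True)
    risk = (weights * F.cross_entropy(logits * scale, labels, reduction="none")).mean()
    (grad,) = torch.autograd.grad(risk, scale, create_graph=True)
    return (grad ** 2).sum()

def infer_environments(ref_logits, labels, steps=500, lr=0.01):
    # Learn per-example soft assignments to two environments that maximize
    # the invariance penalty of a fixed reference model's predictions.
    ref_logits = ref_logits.detach()
    q = torch.nn.Parameter(torch.randn(len(labels)))
    opt = torch.optim.Adam([q], lr=lr)
    for _ in range(steps):
        w1 = torch.sigmoid(q)
        penalty = (irmv1_penalty(ref_logits, labels, w1)
                   + irmv1_penalty(ref_logits, labels, 1.0 - w1))
        opt.zero_grad()
        (-penalty).backward()  # gradient *ascent* on the penalty
        opt.step()
    return torch.sigmoid(q).detach()  # probability of belonging to env 1
```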
Distributional shift is one of the major obstacles when transferring machine learning prediction systems from the lab to the real world. To tackle this problem, we assume that variation across training domains is representative of the variation we might encounter at test time, but also that shifts at test time may be more extreme in magnitude. In particular, we show that reducing differences in risk across training domains can reduce a model's sensitivity to a wide range of extreme distributional shifts, including the challenging setting where the input contains both causal and anticausal elements. We motivate this approach, Risk Extrapolation (REx), as a form of robust optimization over a perturbation set of extrapolated domains (MM-REx), and propose a penalty on the variance of training risks (V-REx) as a simpler variant. We prove that variants of REx can recover the causal mechanisms of the targets, while also providing some robustness to changes in the input distribution ("covariate shift"). By trading off robustness to causally induced distributional shifts and covariate shift, REx is able to outperform alternative methods such as Invariant Risk Minimization in situations where these types of shift co-occur.
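The V-REx variant is particularly compact: alongside the average risk, penalize the variance of the per-domain training risks. A sketch (the coefficient `beta` is a tuning knob, not a prescribed value):

```python
import torch

def vrex_objective(per_domain_risks, beta=10.0):
    # per_domain_risks: list of scalar training risks, one per domain.
    # The variance term pushes the model toward equal risk across domains.
    risks = torch.stack(per_domain_risks)
    return risks.mean() + beta * risks.var()
```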
Domain generalization algorithms use training data from multiple domains to learn models that generalize to unseen domains. While recently proposed benchmarks demonstrate that most existing algorithms do not outperform simple baselines, the established evaluation methods fail to expose the impact of the various factors that contribute to poor performance. In this paper, we propose an evaluation framework for domain generalization algorithms that allows the decomposition of the error into components capturing distinct aspects of generalization. Motivated by the prevalence of algorithms based on the idea of domain-invariant representation learning, we extend the evaluation framework to capture various types of failures in achieving invariance. We show that the largest contributor to the generalization error varies across methods, datasets, regularization strengths, and even training lengths. We observe two problems associated with the strategy of learning domain-invariant representations. On Colored MNIST, most domain generalization algorithms fail because they reach domain-invariance only on the training domains. On Camelyon-17, domain-invariance degrades the quality of representations on unseen domains. We hypothesize that focusing on tuning the classifier on top of a rich representation can be a promising direction.
Machine learning models often encounter samples that differ from the training distribution. Failure to recognize an out-of-distribution (OOD) sample, and consequently assigning that sample to an in-distribution class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for safely deploying models in open-world settings. Detecting OOD samples is challenging due to the intractability of modeling all possible unknown distributions. To date, several research domains have tackled the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open-set recognition, and out-of-distribution detection. Despite having similar and shared concepts, out-of-distribution, open-set, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys intend to provide an overview of these approaches, they seem to focus only on a specific domain without examining the relationship between different domains. This survey aims to provide a cross-domain and comprehensive review of numerous eminent works in the respective areas while identifying their commonalities. Researchers can benefit from the overview of research advances in different fields and can develop future methodology synergistically. Furthermore, to the best of our knowledge, while there are surveys on anomaly detection or one-class learning, there is no comprehensive or up-to-date survey on out-of-distribution detection, which this survey covers extensively. Finally, having a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.
There has been a massive increase in research interest in applying data-driven methods to problems in mechanics. While traditional machine learning (ML) methods have enabled many breakthroughs, they rely on the assumption that the training (observed) data and testing (unseen) data are independent and identically distributed (i.i.d.). Thus, traditional ML approaches often break down when applied to real-world mechanics problems with unknown test environments and data distribution shifts. In contrast, out-of-distribution (OOD) generalization assumes that the test data may shift (i.e., violate the i.i.d. assumption). To date, multiple methods have been proposed to improve the OOD generalization of ML methods. However, because of the lack of benchmark datasets for OOD regression problems, the efficacy of these OOD methods on regression problems, which dominate the mechanics field, remains unknown. To address this, we investigate the performance of OOD generalization methods for regression problems in mechanics. Specifically, we identify three OOD problems: covariate shift, mechanism shift, and sampling bias. For each problem, we create two benchmark examples that extend the Mechanical MNIST dataset collection, and we investigate the performance of popular OOD generalization methods on these mechanics-specific regression problems. Our numerical experiments show that in most cases, while the OOD generalization methods perform better than traditional ML methods on these OOD problems, there is a compelling need to develop more robust OOD generalization methods that are effective across multiple OOD scenarios. Overall, we expect that this study, as well as the associated open-access benchmark datasets, will enable further development of OOD generalization methods for mechanics-specific regression problems.
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines. This is because most learning algorithms strongly rely on the i.i.d. assumption on source/target data, which is often violated in practice due to domain shift. Domain generalization (DG) aims to achieve OOD generalization by using only source data for model learning. Over the last ten years, research in DG has made great progress, leading to a broad spectrum of methodologies, e.g., those based on domain alignment, meta-learning, data augmentation, or ensemble learning, to name a few; DG has also been studied in various application areas, including computer vision, speech recognition, natural language processing, medical imaging, and reinforcement learning. In this paper, a comprehensive literature review of DG is provided for the first time to summarize the developments over the past decade. Specifically, we first cover the background by formally defining DG and relating it to other relevant fields like domain adaptation and transfer learning. Then, we conduct a thorough review of existing methods and theories. Finally, we conclude this survey with insights and discussions on future research directions.
The goal of domain generalization algorithms is to predict well on distributions different from those seen during training. While a myriad of domain generalization algorithms exist, inconsistencies in experimental conditions (datasets, architectures, and model selection criteria) render fair and realistic comparisons difficult. In this paper, we are interested in understanding how useful domain generalization algorithms are in realistic settings. As a first step, we realize that model selection is non-trivial for domain generalization tasks. Contrary to prior work, we argue that domain generalization algorithms without a model selection strategy should be regarded as incomplete. Next, we implement DOMAINBED, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria. We conduct extensive experiments using DOMAINBED and find that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets. Looking forward, we hope that the release of DOMAINBED, along with contributions from fellow researchers, will streamline reproducible and rigorous research in domain generalization.
We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient starvation arises when the cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from dynamical systems theory, we identify simple properties of the learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in the training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
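The regularizer proposed in this line of work, Spectral Decoupling, amounts to replacing weight decay with an L2 penalty on the network's logits. A minimal sketch (the coefficient `lam` is illustrative, not a prescribed value):

```python
import torch.nn.functional as F

def spectral_decoupling_loss(logits, labels, lam=1e-2):
    # Cross-entropy plus an L2 penalty on the logits, discouraging a few
    # dominant features from starving the gradients of the others.
    return F.cross_entropy(logits, labels) + 0.5 * lam * (logits ** 2).mean()
```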
The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distributional level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small.
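One way to picture the "agree on train, disagree on test" objective: on unlabeled test examples, push each ensemble member's probability mass away from the base model's pseudo-label, while keeping the usual cross-entropy on labeled training data. Below is a sketch of such a test-side term; the exact objective in the paper differs in its details.

```python
import torch
import torch.nn.functional as F

def disagreement_loss(logits, pseudo_labels):
    # Minimizing -log(1 - p[pseudo]) drives the predicted probability of the
    # base model's pseudo-label toward zero on unlabeled test examples.
    p = F.softmax(logits, dim=1)
    p_pseudo = p.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_pseudo + 1e-8).mean()
```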
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Domain generalization (DG) methods aim to develop models that generalize to settings where the test distribution differs from the training data. In this paper, we focus on the challenging problem of multi-source zero-shot DG, where labeled training data from multiple source domains is available but no data from the target domain can be accessed. While this problem has become an important topic of research, surprisingly, the naive solution of pooling all source data together and training a single classifier is highly competitive on standard benchmarks. More importantly, even sophisticated approaches that explicitly optimize for invariance across different domains do not necessarily provide non-trivial gains over ERM. In this paper, for the first time, we study the important link between pre-specified domain labels and generalization performance. Using a motivating case study and a new variant of a distributionally robust optimization algorithm, we first demonstrate how inferred custom domain groups can lead to consistent improvements over the original domain labels that come with the dataset. Subsequently, we introduce a general approach for multi-domain generalization, MulDEns, that uses an ERM-based deep ensembling backbone and performs implicit domain re-labeling through a meta-optimization algorithm. Through empirical studies on multiple standard benchmarks, we show that MulDEns does not require customized augmentation strategies or dataset-specific training procedures, consistently outperforms ERM by significant margins, and produces state-of-the-art generalization performance, even when compared to existing approaches that exploit the domain labels.
Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary assumptions of these models is the independent and identical distribution assumption, which suggests that the train and test data are sampled from the same distribution. However, this assumption seldom holds in the real world due to distribution shifts. As a result, models that rely on this assumption exhibit poor generalization capabilities. Over recent years, dedicated efforts have been made to improve the generalization capabilities of these models, collectively known as domain generalization methods. The primary idea behind these methods is to identify stable features or mechanisms that remain invariant across different distributions. Many generalization approaches employ causal theories to describe invariance, since causality and invariance are inextricably intertwined. However, current surveys deal with causality-aware domain generalization methods at a very high level. Furthermore, we argue that it is possible to categorize the methods based on how causality is leveraged in each method and in which part of the model pipeline it is used. To this end, we categorize the causal domain generalization methods into three categories, namely, (i) Invariance via Causal Data Augmentation methods, which are applied during the data pre-processing stage, (ii) Invariance via Causal Representation Learning methods, which are utilized during the representation learning stage, and (iii) Invariance via Transferring Causal Mechanisms methods, which are applied during the classification stage of the pipeline. Furthermore, this survey includes in-depth insights into benchmark datasets and code repositories for domain generalization methods. We conclude the survey with insights and discussions on future directions.
There often is a dilemma between ease of optimization and robust out-of-distribution (OOD) generalization. For instance, many OOD methods rely on penalty terms whose optimization is challenging. They are either too strong to optimize reliably or too weak to achieve their goals. We propose to initialize the networks with a rich representation containing a palette of potentially useful features, ready to be used by even simple models. On the one hand, a rich representation provides a good initialization for the optimizer. On the other hand, it also provides an inductive bias that helps OOD generalization. Such a representation is constructed with the Rich Feature Construction (RFC) algorithm, also called the Bonsai algorithm, which consists of a succession of training episodes. During discovery episodes, we craft a multi-objective optimization criterion and its associated datasets in a manner that prevents the network from using the features constructed in previous iterations. During synthesis episodes, we use knowledge distillation to force the network to simultaneously represent all previously discovered features. Initializing networks with Bonsai representations consistently helps six OOD methods achieve top performance on the ColoredMNIST benchmark. The same technique substantially outperforms comparable results on the Wilds Camelyon17 task, eliminates the high result variance that plagues other methods, and makes hyperparameter tuning and model selection more reliable.
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited due to the high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues with the underlying member networks. In this work, we study principled approaches to improve the uncertainty property of a single network, based on a single, deterministic representation. By formalizing uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two simple changes: (1) applying spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines
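The two changes are easy to picture in a small model: spectral normalization on the hidden linear layers, and a random-feature approximation of an RBF-kernel Gaussian process as the head. A simplified sketch follows; the layer sizes, feature count, and the omission of the paper's posterior-covariance update are all simplifications relative to the open-sourced implementation.

```python
import math
import torch
import torch.nn as nn

class SNGPSketch(nn.Module):
    def __init__(self, d_in, d_hidden, n_classes, n_rff=1024):
        super().__init__()
        sn = nn.utils.parametrizations.spectral_norm
        # Spectral normalization bounds the Lipschitz constant of each layer,
        # helping the features preserve distances in input space.
        self.body = nn.Sequential(
            sn(nn.Linear(d_in, d_hidden)), nn.ReLU(),
            sn(nn.Linear(d_hidden, d_hidden)), nn.ReLU(),
        )
        # Fixed random Fourier features approximating an RBF-kernel GP.
        self.register_buffer("W", torch.randn(d_hidden, n_rff))
        self.register_buffer("b", 2 * math.pi * torch.rand(n_rff))
        self.out = nn.Linear(n_rff, n_classes)

    def forward(self, x):
        h = self.body(x)  # distance-aware hidden representation
        phi = math.sqrt(2.0 / self.W.shape[1]) * torch.cos(h @ self.W + self.b)
        return self.out(phi)
```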
Machine learning systems generally assume that the training and testing distributions are identical. To this end, a key requirement is to develop models that can generalize to unseen distributions. Domain generalization (DG), i.e., out-of-distribution generalization, has attracted increasing interest in recent years. Domain generalization deals with a challenging setting where one or several different but related domains are given, and the goal is to learn a model that can generalize to an unseen test domain. Great progress has been made in the area of domain generalization over the years. This paper presents the first review of recent advances in this area. First, we provide a formal definition of domain generalization and discuss several related fields. Then, we thoroughly review the theories related to domain generalization and carefully analyze the theory behind generalization. We categorize recent algorithms into three classes: data manipulation, representation learning, and learning strategy, and present several popular algorithms in detail for each category. Third, we introduce the commonly used datasets, applications, and our open-sourced codebase for fair evaluation. Finally, we summarize the existing literature and present some potential research topics for the future.
We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for the descriptor learning task in the context of a person re-identification application.
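The gradient reversal layer itself is a few lines in a modern autograd framework: identity on the forward pass, negated (and scaled) gradient on the backward pass. A PyTorch sketch (the original work predates PyTorch, so this is a re-rendering of the idea rather than the authors' code):

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature
        # extractor, making its features indiscriminate w.r.t. the domain.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    # Insert between the feature extractor and the domain classifier.
    return GradReverse.apply(x, lam)
```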