In several real-world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually over time, leading to a drift between the train and test distributions. Such models are often retrained periodically on new data, and therefore need to generalize to future data. There is prior work on improving temporal generalization in this setting, e.g., continuous transportation of past data, kernel-smoothed time-sensitive parameters, and, more recently, adversarially learned time-invariant features. However, these methods share several limitations, such as poor scalability, training instability, and dependence on unlabeled data from the future. Responding to the above limitations, we propose a simple method that starts with a model with time-sensitive parameters but regularizes its temporal complexity using a Gradient Interpolation (GI) loss. GI allows the decision boundary to change along time, while still preventing overfitting to the limited training-time snapshots by allowing controlled time-specific changes. We compare our method to existing baselines on multiple real-world datasets, which show that GI outperforms more complicated generative and adversarial approaches on the one hand, and simpler gradient-regularization methods on the other.
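To make the mechanism concrete, here is a minimal PyTorch sketch of a gradient-interpolation-style loss under our own assumptions: a toy model takes the snapshot time t as an extra input, and a first-order extrapolation of its logits to a nearby time t + delta is also supervised. `TimeConditionedNet`, `gi_loss`, and the choice of `delta` are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeConditionedNet(nn.Module):
    """Hypothetical classifier whose decision boundary can vary with time t."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + 1, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))  # t: (batch, 1)

def gi_loss(model, x, y, t, delta=0.5):
    """Sketch: supervise a first-order extrapolation of the logits to time
    t + delta, so the decision function changes smoothly along time."""
    t = t.clone().requires_grad_(True)
    logits = model(x, t)
    # Per-class derivative of the logits w.r.t. each example's own timestamp.
    grad_t = torch.cat([
        torch.autograd.grad(logits[:, c].sum(), t,
                            create_graph=True, retain_graph=True)[0]
        for c in range(logits.size(1))], dim=1)
    logits_future = logits + delta * grad_t
    return F.cross_entropy(logits, y) + F.cross_entropy(logits_future, y)

model = TimeConditionedNet(in_dim=16, n_classes=3)
x, y, t = torch.randn(8, 16), torch.randint(0, 3, (8,)), torch.rand(8, 1)
gi_loss(model, x, y, t).backward()
```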
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution. However, this assumption is violated in almost all practical applications: machine learning systems are regularly tested under distribution shift, due to changing temporal correlations, atypical end users, or other factors. In this work, we consider the problem setting of domain generalization, in which the training data are structured into domains and there may be multiple test-time shifts, corresponding to new domains or domain distributions. Most prior methods aim to learn a single robust model or invariant feature space that performs well across all domains. In contrast, we aim to learn models that adapt to domain shift at test time using unlabeled test points. Our main contribution is to introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains. Compared to prior methods for robustness, invariance, and adaptation, ARM methods provide gains of 1-4% test accuracy on a number of image classification problems exhibiting domain shift.
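A hedged sketch of the contextual-meta-learning flavor of this idea: a context network summarizes an unlabeled batch, and the predictor conditions on that summary; meta-training on batches drawn from a single training domain matches the test-time adaptation procedure. `ARMCML` and all architectural choices below are our own illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ARMCML(nn.Module):
    """Toy contextual adaptation module: predict using a batch-level summary."""
    def __init__(self, in_dim, ctx_dim, n_classes):
        super().__init__()
        self.context_net = nn.Sequential(nn.Linear(in_dim, ctx_dim), nn.ReLU())
        self.predictor = nn.Sequential(
            nn.Linear(in_dim + ctx_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x_batch):
        # Adaptation step: summarize the *unlabeled* batch into a context vector.
        ctx = self.context_net(x_batch).mean(dim=0, keepdim=True)
        ctx = ctx.expand(x_batch.size(0), -1)
        return self.predictor(torch.cat([x_batch, ctx], dim=1))

# Meta-training: each batch comes from a single training domain, so the
# context the model learns to exploit matches test-time adaptation.
model = ARMCML(in_dim=16, ctx_dim=8, n_classes=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 16), torch.randint(0, 3, (32,))  # one domain's batch
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```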
Many real-world learning scenarios face the challenge of slow concept drift, where data distributions change gradually over time. In this setting, we pose the problem of learning temporally sensitive importance weights for training data, in order to optimize predictive accuracy. We propose a class of temporal reweighting functions that can capture multiple timescales of change in the data, as well as instance-specific characteristics. We formulate a bi-level optimization criterion, and an associated meta-learning algorithm, by which these weights can be learned. In particular, our formulation trains an auxiliary network to output weights as a function of training instances, thereby compactly representing the instance weights. We validate our temporal reweighting scheme on a large real-world dataset of 39M images spread over a 9-year period. Our extensive experiments demonstrate the necessity of instance-based temporal reweighting in the dataset, and show significant improvements over classical batch-learning approaches. Further, our proposal easily generalizes to a streaming setting and shows significant gains compared to recent continual learning methods.
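The bi-level structure can be sketched as follows, under our own simplifying assumptions: a linear classifier as the inner model, an example's age as the temporal feature, and a single differentiable inner step. `weight_net` and `weighted_inner_step` are hypothetical names, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Auxiliary network: (features, age) -> unnormalized importance weight.
weight_net = nn.Sequential(nn.Linear(17, 32), nn.ReLU(), nn.Linear(32, 1))
w = torch.randn(16, 3, requires_grad=True)  # inner model: linear classifier

def weighted_inner_step(x, y, age, lr=0.1):
    """One differentiable inner step of the bi-level problem."""
    raw = weight_net(torch.cat([x, age], dim=1)).squeeze(1)
    sample_w = torch.softmax(raw, dim=0) * len(raw)  # normalized weights
    losses = F.cross_entropy(x @ w, y, reduction="none")
    inner_loss = (sample_w * losses).mean()
    (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    return w - lr * g  # updated classifier, differentiable w.r.t. weight_net

# Outer step: evaluate on recent ("validation") data, update the weight net.
x_tr, y_tr = torch.randn(64, 16), torch.randint(0, 3, (64,))
age = torch.rand(64, 1)
x_val, y_val = torch.randn(32, 16), torch.randint(0, 3, (32,))
w_new = weighted_inner_step(x_tr, y_tr, age)
meta_loss = F.cross_entropy(x_val @ w_new, y_val)
opt = torch.optim.Adam(weight_net.parameters(), lr=1e-3)
opt.zero_grad(); meta_loss.backward(); opt.step()
```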
Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features, and can ignore complex, equally-predictive ones. This simplicity bias can explain their lack of out-of-distribution (OOD) robustness. The more complex the task to learn, the more likely it is that statistical artifacts (i.e., selection biases, spurious correlations) are simpler than the mechanisms to learn. We demonstrate that the simplicity bias can be mitigated and OOD generalization improved. We train a set of similar models to fit the data in different ways, using a penalty on the alignment of their input gradients. We show theoretically and empirically that this induces the learning of more complex predictive patterns. OOD generalization fundamentally requires information beyond i.i.d. examples, such as multiple training environments, counterfactual examples, or other side information. Our approach shows that we can defer this requirement to an independent model selection stage. We obtain SOTA results in visual recognition on biased data and in generalization across visual domains. The method, the first to evade the simplicity bias, highlights the need for a better understanding and control of inductive biases in deep learning.
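A minimal sketch of the gradient-alignment penalty as described: train two models jointly and penalize the cosine similarity of their input gradients, pushing them toward different predictive patterns. The architectures and the penalty weight are our illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

models = [nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
          for _ in range(2)]
opt = torch.optim.Adam([p for m in models for p in m.parameters()], lr=1e-3)
x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))

def loss_and_input_grad(model, x, y):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    return loss, torch.autograd.grad(loss, x, create_graph=True)[0]

loss_a, g_a = loss_and_input_grad(models[0], x, y)
loss_b, g_b = loss_and_input_grad(models[1], x, y)
# Penalize alignment of the two models' input gradients so they are forced
# to fit the data in different ways (0.1 is a tuning knob we chose).
align = F.cosine_similarity(g_a.flatten(1), g_b.flatten(1), dim=1).mean()
total = loss_a + loss_b + 0.1 * align
opt.zero_grad(); total.backward(); opt.step()
```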
We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for a descriptor learning task in the context of person re-identification.
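The gradient reversal layer itself is simple enough to sketch; the following PyTorch version (our own minimal rendering, not the authors' code) behaves as the identity in the forward pass and flips the sign of the gradient flowing back from the domain classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on
    the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

feature = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
label_head = nn.Linear(32, 10)   # trained on labeled source data
domain_head = nn.Linear(32, 2)   # source-vs-target discriminator

x = torch.randn(8, 16)
f = feature(x)
class_logits = label_head(f)
# The reversal makes the feature extractor *maximize* domain confusion
# while the domain head still learns to discriminate.
domain_logits = domain_head(GradReverse.apply(f, 1.0))
labels, domains = torch.randint(0, 10, (8,)), torch.randint(0, 2, (8,))
loss = F.cross_entropy(class_logits, labels) + F.cross_entropy(domain_logits, domains)
loss.backward()  # feature extractor receives reversed domain gradients
```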
Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. To address this challenge, single-source unsupervised domain adaptation can handle situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain, with the goal of performing well at test time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations from deep learning with domain adaptation to reduce reliance on potentially costly target data labels. This survey compares these approaches by examining alternative methods, their unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.
Training models that generalize to new domains at test time is a problem of fundamental importance in machine learning. In this work, we encode this notion of domain generalization using a novel regularization function. We pose the problem of finding such a regularization function in a Learning to Learn (or meta-learning) framework. The objective of domain generalization is explicitly modeled by learning a regularizer that makes a model trained on one domain perform well on another domain. Experimental validations on computer vision and natural language datasets indicate that our method can learn regularizers that achieve good cross-domain generalization.
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Traditionally, subspace-based methods form an important class of solutions to this problem. Despite their mathematical elegance and tractability, these methods are often found to be ineffective at producing domain-invariant features with complex, real-world datasets. Motivated by the recent advances in representation learning with deep networks, this paper revisits subspace alignment for UDA and proposes a novel adaptation algorithm that consistently leads to improved generalization. In contrast to existing adversarial-training-based DA methods, our approach isolates the feature learning and distribution alignment steps, and utilizes a primary-auxiliary optimization strategy to effectively balance the objectives of domain invariance and model fidelity. While providing a significant reduction in target data and computational requirements, our subspace-based DA performs competitively and sometimes even outperforms state-of-the-art approaches on several standard UDA benchmarks. Furthermore, subspace alignment leads to intrinsically regularized models that demonstrate strong generalization even in the challenging partial DA setting. Finally, the design of our UDA framework inherently supports progressive adaptation to new target domains at test time, without requiring the model to be retrained from scratch. In summary, supported by powerful feature learners and an effective optimization strategy, we establish subspace-based DA as a highly efficient approach for visual recognition.
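For reference, a sketch of the classical (shallow) subspace alignment that this line of work builds on: compute PCA bases for source and target and align the source basis to the target one. The deep, primary-auxiliary variant described above is richer; this NumPy version only illustrates the core alignment step.

```python
import numpy as np

def subspace_alignment(source, target, d=10):
    """Classical subspace alignment (Fernando et al., 2013): map the top-d
    PCA basis of the source onto that of the target."""
    def pca_basis(X, d):
        Xc = X - X.mean(axis=0)
        # Right singular vectors are the principal directions.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T  # (features, d)

    Ps, Pt = pca_basis(source, d), pca_basis(target, d)
    M = Ps.T @ Pt                   # alignment matrix
    src_aligned = source @ Ps @ M   # source data in the target-aligned subspace
    tgt_proj = target @ Pt
    return src_aligned, tgt_proj

src = np.random.randn(200, 64)
tgt = np.random.randn(150, 64) + 0.5
src_a, tgt_p = subspace_alignment(src, tgt, d=10)
```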
One of the key challenges in learning an online recommendation model is the temporal domain shift, which causes a mismatch between the training and testing data distributions and hence domain generalization error. To overcome this, we propose to learn a future gradient generator that forecasts the gradient information of the future data distribution for training, so that the recommendation model can be trained as if we were able to look ahead at the future of its deployment. Compared with batch update, our theory suggests that the proposed algorithm achieves a smaller temporal domain generalization error, measured by a gradient variation term in a local regret. We demonstrate the empirical advantage by comparison with various representative baselines.
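A heavily simplified sketch of the idea, with all names hypothetical: a generator is fit to map the gradient computed on one period's data to the gradient of the next period, and at deployment the model is stepped along the forecast gradient. The paper's actual training scheme and theory are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(16, 2)
n_params = sum(p.numel() for p in model.parameters())
# Hypothetical generator: current gradient -> forecast of next period's gradient.
gradgen = nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(), nn.Linear(64, n_params))

def flat_grad(loss):
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.flatten() for g in grads])

# Fit the generator on consecutive periods (t -> t+1).
x_t, y_t = torch.randn(32, 16), torch.randint(0, 2, (32,))
x_n, y_n = torch.randn(32, 16), torch.randint(0, 2, (32,))
g_t = flat_grad(F.cross_entropy(model(x_t), y_t))
g_n = flat_grad(F.cross_entropy(model(x_n), y_n))
gen_opt = torch.optim.Adam(gradgen.parameters(), lr=1e-3)
gen_loss = F.mse_loss(gradgen(g_t), g_n)
gen_opt.zero_grad(); gen_loss.backward(); gen_opt.step()

# At deployment, step the model along the forecast "future" gradient
# instead of the stale batch gradient.
g_now = flat_grad(F.cross_entropy(model(x_t), y_t))
with torch.no_grad():
    g_future = gradgen(g_now)
    offset = 0
    for p in model.parameters():
        p -= 1e-2 * g_future[offset:offset + p.numel()].view_as(p)
        offset += p.numel()
```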
The goal of domain generalization algorithms is to predict well on distributions different from those seen during training. While a myriad of domain generalization algorithms exist, inconsistencies in experimental conditions (datasets, architectures, and model selection criteria) render fair and realistic comparisons difficult. In this paper, we are interested in understanding how useful domain generalization algorithms are in realistic settings. As a first step, we realize that model selection is non-trivial for domain generalization tasks. Contrary to prior work, we argue that domain generalization algorithms without a model selection strategy should be regarded as incomplete. Next, we implement DOMAINBED, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria. We conduct extensive experiments using DOMAINBED and find that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets. Looking forward, we hope that the release of DOMAINBED, along with contributions from fellow researchers, will streamline reproducible and rigorous research in domain generalization.
Modern deep learning methods constitute incredibly powerful tools for tackling a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often difficult to quantify. Bayesian statistics offer a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use, and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
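As a taste of the toolset such a tutorial covers, here is one of the simplest ways to obtain a stochastic network and a predictive uncertainty estimate: Monte Carlo dropout. This is our illustrative choice of approximation, not a summary of the tutorial's methods.

```python
import torch
import torch.nn as nn

# Monte Carlo dropout: keeping dropout active at prediction time makes the
# network stochastic; repeated forward passes approximate a posterior
# predictive distribution.
net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
                    nn.Linear(64, 1))

def predict_with_uncertainty(x, n_samples=100):
    net.train()  # keep dropout on
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # predictive mean and spread

mean, std = predict_with_uncertainty(torch.randn(5, 16))
```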
This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S⁴L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S⁴L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
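One of the derived methods pairs the supervised loss with a rotation-prediction pretext loss on unlabeled images. A toy sketch of that joint objective, with our own backbone and loss weighting rather than the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
cls_head = nn.Linear(128, 10)   # supervised head (labeled data)
rot_head = nn.Linear(128, 4)    # self-supervised head: 0/90/180/270 rotation

x_l, y_l = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
x_u = torch.randn(16, 3, 32, 32)  # unlabeled images

# Build the rotation pretext task on the unlabeled batch.
rot = torch.randint(0, 4, (x_u.size(0),))
x_rot = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                     for img, k in zip(x_u, rot)])

sup_loss = F.cross_entropy(cls_head(backbone(x_l)), y_l)
ssl_loss = F.cross_entropy(rot_head(backbone(x_rot)), rot)
loss = sup_loss + 1.0 * ssl_loss  # the weight is a hyperparameter
loss.backward()
```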
The problem of end-to-end learning of a communication system using an autoencoder, consisting of an encoder, channel, and decoder modeled using neural networks, has recently been shown to be a promising approach. A challenge faced in the practical adoption of this learning approach is that, under changing channel conditions (e.g. a wireless link), it requires frequent retraining of the autoencoder in order to maintain a low decoding error rate. Since retraining is both time consuming and requires a large number of samples, it becomes impractical when the channel distribution is changing quickly. We propose to address this problem using a fast and sample-efficient (few-shot) domain adaptation method that does not change the encoder and decoder networks. Different from conventional training-time unsupervised or semi-supervised domain adaptation, here we have a trained autoencoder from a source distribution that we want to adapt (at test time) to a target distribution using only a small labeled dataset and no unlabeled data. Our method focuses on a Gaussian mixture density network based channel model and formulates its adaptation in terms of class- and component-conditional affine transformations. The learned affine transformations are used to design an optimal input transformation at the decoder to compensate for the distribution shift, effectively presenting to the decoder inputs close to the source distribution. The effectiveness of our method in adapting with a very small number of target-domain samples is demonstrated on a real mmWave FPGA setup, as well as on a number of simulated distribution shifts common to wireless settings.
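A deliberately simplified sketch of the compensation idea: fit per-class diagonal affine maps from a handful of labeled target samples so that decoder inputs are moved back toward source-domain statistics. The paper's formulation (component-conditional transforms on a mixture density channel model, with an optimal decoder input transformation) is considerably richer; all names below are hypothetical.

```python
import numpy as np

def fit_affine_per_class(tgt_x, tgt_y, src_means, src_stds):
    """Few-shot sketch: per-class diagonal affine maps that move target-domain
    decoder inputs back toward the source-domain statistics."""
    maps = {}
    for c in np.unique(tgt_y):
        Xc = tgt_x[tgt_y == c]
        scale = src_stds[c] / (Xc.std(axis=0) + 1e-8)
        shift = src_means[c] - scale * Xc.mean(axis=0)
        maps[c] = (scale, shift)
    return maps

def compensate(x, cls, maps):
    scale, shift = maps[cls]
    return scale * x + shift  # input presented to the frozen decoder

rng = np.random.default_rng(0)
src_means = {0: np.zeros(8), 1: np.ones(8)}
src_stds = {0: np.ones(8), 1: np.ones(8)}
tgt_x = rng.normal(size=(20, 8)) * 2.0 + 0.5  # shifted target samples
tgt_y = rng.integers(0, 2, size=20)
maps = fit_affine_per_class(tgt_x, tgt_y, src_means, src_stds)
x_comp = compensate(tgt_x[0], tgt_y[0], maps)
```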
In multi-label learning, the particular case of multi-task learning in which a single data point is associated with multiple target labels, it has been widely assumed in the literature that, to obtain best accuracy, the dependence among the labels should be explicitly modeled. This premise led to a proliferation of methods offered to learn and predict labels jointly, such that, for example, the prediction for one label influences the predictions for other labels. Even though it is now acknowledged that in many contexts a model of dependence is not required for optimal performance, such models continue to outperform independent models in some of those very contexts, suggesting explanations for their performance other than label dependence, which the literature has only recently begun to unravel. Leveraging and extending recent discoveries, we turn the original premise of multi-label learning on its head and deal specifically with the problem of joint modeling in the absence of any measurable dependence among task labels; for example, when task labels come from separate problem domains. We carry insights from this study over to building a transfer learning approach that challenges the long-held assumption that the transferability of tasks comes from measurements of similarity between the source and target domains or models. This allows us to design and test a transfer learning method that is model-driven rather than purely data-driven, and that is black-box and model-agnostic (any base model class can be considered). We show that, in essence, we can create task dependence based on source model capacity. The results we obtain have important implications and provide clear directions for future work in both the multi-label and transfer learning fields.
We consider the problem of OOD generalization, where the goal is to train a model that performs well on test distributions that differ from the training distribution. Deep learning models are known to be fragile under such shifts and can suffer large accuracy drops even on slightly different test distributions. We propose a new method, DAFT, based on the intuition that an adversarially robust combination of a large number of rich features should provide robustness. Our method carefully distills the knowledge from a powerful teacher that learns several discriminative features using standard training, while combining them using adversarial training. The standard adversarial training procedure is modified to produce teachers that can better guide the student. We evaluate DAFT on standard benchmarks in the DomainBed framework and demonstrate that DAFT achieves significant improvements over current state-of-the-art OOD generalization methods. DAFT consistently outperforms well-tuned ERM and distillation baselines by up to 6%, with even larger gains for smaller networks.
Learning domain-invariant representations has become one of the most popular approaches to domain adaptation/generalization. In this paper, we show that invariant representations may be insufficient to guarantee good generalization once shifts in the labeling function are taken into account. Motivated by this, we first derive a new generalization upper bound on the empirical risk that explicitly takes labeling-function shift into account. We then propose Domain-specific Risk Minimization (DRM), which models the distribution shifts of different domains separately and selects the most appropriate one for the target domain. Extensive experiments on four popular domain generalization datasets (CMNIST, PACS, VLCS, and DomainNet) demonstrate the effectiveness of the proposed DRM for domain generalization, with the following advantages: 1) it significantly outperforms competitive baselines; 2) it achieves comparable or superior accuracy on all training domains compared to vanilla empirical risk minimization (ERM); 3) it remains very simple and efficient during training; and 4) it is complementary to invariant learning approaches.
This is an introductory machine learning course specifically developed with STEM students in mind. Our goal is to provide the interested reader with the basics to employ machine learning in their own projects and to familiarize themselves with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural-network structures such as dense feed-forward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability are discussed for latent-space representations and are illustrated using the examples of dreaming and adversarial attacks. The final part is dedicated to reinforcement learning, where we introduce the basic concepts of value-function and policy learning.
We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semi-supervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al. that shows a sample complexity gap between standard and robust classification. We prove that unlabeled data bridges this gap: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to outperform state-of-the-art robust accuracies in (i) $\ell_\infty$ robustness against several strong attacks via adversarial training and (ii) certified $\ell_2$ and $\ell_\infty$ robustness via randomized smoothing. On SVHN, adding the dataset's own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of the gain from using the extra labels.
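The two-stage recipe (pseudo-label the unlabeled pool, then adversarially train on the union) can be sketched as follows; the tiny model, PGD settings, and random data are stand-ins, not the paper's setup, and in practice the pseudo-labeler would be a standard model pre-trained on the labeled data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def pgd_attack(x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Standard l_inf PGD used inside adversarial training."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

# Step 1: pseudo-label the unlabeled pool with a standard (non-robust) model.
x_l, y_l = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))
x_u = torch.rand(64, 3, 32, 32)
with torch.no_grad():
    y_u = model(x_u).argmax(dim=1)  # pseudo-labels

# Step 2: adversarial training on labeled + pseudo-labeled data.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x_all, y_all = torch.cat([x_l, x_u]), torch.cat([y_l, y_u])
x_adv = pgd_attack(x_all, y_all)
loss = F.cross_entropy(model(x_adv), y_all)
opt.zero_grad(); loss.backward(); opt.step()
```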
Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation in the presence of big domain shifts and outperforming the previous state of the art on Office datasets.
Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain-invariant learning, which identifies the key issue of learning which features are domain-specific versus domain-invariant. An important assumption in this area is that the training examples are partitioned into "domains" or "environments". Our focus is on the more common setting where such partitions are not provided. We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels, and significantly outperforms ERM on worst-group performance in the Waterbirds and CivilComments datasets. Finally, we establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.
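A minimal sketch of the environment-inference step under our own assumptions: soft assignments are optimized to maximize an IRM-style penalty of a fixed reference model, and the resulting partition is handed to an invariant learner. The reference logits below are stand-ins for a trained ERM model's outputs.

```python
import torch
import torch.nn.functional as F

# Stand-ins for a trained ERM reference model's logits and binary labels.
logits = torch.randn(256, 1)
y = torch.randint(0, 2, (256, 1)).float()

def env_penalty(weights):
    """IRM penalty of one (soft) environment: squared gradient of the
    weighted risk w.r.t. a dummy classifier scale."""
    scale = torch.ones(1, requires_grad=True)
    risk = (weights * F.binary_cross_entropy_with_logits(
        logits * scale, y, reduction="none").squeeze(1)).sum() / weights.sum()
    g, = torch.autograd.grad(risk, scale, create_graph=True)
    return (g ** 2).sum()

# Soft assignment of every example to one of two inferred environments
# (random init breaks the symmetry of a uniform split).
q = torch.randn(256, requires_grad=True)
opt = torch.optim.Adam([q], lr=1e-2)
for _ in range(200):
    p = torch.sigmoid(q)
    # Ascend the total IRM penalty: find the split that maximally violates
    # invariance for the reference model.
    obj = -(env_penalty(p) + env_penalty(1 - p))
    opt.zero_grad(); obj.backward(); opt.step()

envs = torch.sigmoid(q) > 0.5  # hand this partition to an invariant learner
```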