This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S⁴L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S⁴L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
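As a rough illustration of the kind of joint objective S⁴L describes, the sketch below combines a supervised cross-entropy loss on a labeled batch with a rotation-prediction pretext loss on an unlabeled batch. Rotation is used here only as an example pretext task; `backbone`, `cls_head`, and `rot_head` are assumed PyTorch modules and the loss weight `w` is a placeholder hyperparameter, so this is a sketch of the idea rather than the paper's implementation.

```python
# Illustrative sketch: supervised loss on labeled data plus a self-supervised
# rotation-prediction loss on unlabeled data (assumed module names, not the paper's code).
import torch
import torch.nn.functional as F

def s4l_rotation_step(backbone, cls_head, rot_head, x_l, y_l, x_u, w=1.0):
    """Supervised cross-entropy on labeled images + rotation prediction on unlabeled images."""
    sup_loss = F.cross_entropy(cls_head(backbone(x_l)), y_l)

    # Build four rotated copies (0/90/180/270 degrees) and their rotation labels.
    rotations = [torch.rot90(x_u, k, dims=(2, 3)) for k in range(4)]
    x_rot = torch.cat(rotations, dim=0)
    y_rot = torch.arange(4, device=x_u.device).repeat_interleave(x_u.size(0))
    self_loss = F.cross_entropy(rot_head(backbone(x_rot)), y_rot)

    return sup_loss + w * self_loss
```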
Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. Among a big body of recently proposed approaches for unsupervised learning of visual representations, a class of self-supervised techniques achieves superior performance on many challenging benchmarks. A large number of pretext tasks for self-supervised learning have been studied, but other important aspects, such as the choice of convolutional neural network (CNN) architecture, have not received equal attention. Therefore, we revisit numerous previously proposed self-supervised models, conduct a thorough large-scale study and, as a result, uncover multiple crucial insights. We challenge a number of common practices in self-supervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning. As part of our study, we drastically boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin.
Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that SSL algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and performance can degrade substantially when the unlabeled dataset contains out-of-distribution examples. To help guide SSL research towards real-world applicability, we make our unified reimplementation and evaluation platform publicly available at https://github.com/brain-research/realistic-ssl-evaluation.
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with only 40 labels, i.e., just 4 labels per class. We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. The code is available at https://github.com/google-research/fixmatch.
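The update rule summarized above can be sketched in a few lines. The following is a minimal, illustrative PyTorch version rather than the released implementation; it assumes the caller supplies `model`, `weak_aug`, and `strong_aug`, and uses a typical confidence threshold.

```python
# Minimal sketch of a FixMatch-style unlabeled loss (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, unlabeled_batch, weak_aug, strong_aug,
                            threshold=0.95):
    """Pseudo-label weakly augmented images, keep only confident ones,
    and enforce those labels on strongly augmented views."""
    with torch.no_grad():
        # Pseudo-labels come from predictions on weakly augmented images.
        weak_logits = model(weak_aug(unlabeled_batch))
        probs = torch.softmax(weak_logits, dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        mask = (confidence >= threshold).float()  # retain high-confidence predictions only

    # Train the model to predict the pseudo-label on the strongly augmented view.
    strong_logits = model(strong_aug(unlabeled_batch))
    per_example = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (per_example * mask).mean()
```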
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes, from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19-task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are available on GitHub.
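A minimal sketch of the two mechanisms described above, the prediction loss between views and the slow-moving (exponential moving average) target update, is given below. It is illustrative only; `online`, `predictor`, and `target` are assumed to be PyTorch modules with compatible output dimensions, and the decay value is a typical choice rather than a prescription.

```python
# Illustrative sketch of BYOL's two update rules (assumed module names, not the paper's code).
import torch
import torch.nn.functional as F

def byol_loss(online, predictor, target, view1, view2):
    """The online network predicts the target network's representation of the other view."""
    p = F.normalize(predictor(online(view1)), dim=-1)
    with torch.no_grad():
        z = F.normalize(target(view2), dim=-1)  # no gradients flow through the target
    return 2 - 2 * (p * z).sum(dim=-1).mean()   # equivalent to MSE between unit vectors

@torch.no_grad()
def ema_update(online, target, tau=0.996):
    """The target network is a slow-moving exponential average of the online network."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1 - tau)
```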
Semi-supervised learning methods have become an active research area for combating the challenge of obtaining large amounts of annotated data. Towards the goal of improving the performance of semi-supervised learning methods, we propose a novel framework, HierMatch, a semi-supervised approach that exploits hierarchical information to reduce labeling costs while performing as well as vanilla semi-supervised learning methods. Hierarchical information is often available as prior knowledge in the form of coarse labels (e.g., woodpecker) for images with fine-grained labels (e.g., downy woodpecker or golden-fronted woodpecker). However, the use of supervision from coarse class labels to improve semi-supervised techniques has not been explored. In the absence of fine-grained labels, HierMatch exploits the label hierarchy and uses coarse class labels as a weak supervisory signal. Furthermore, HierMatch is a generic method for improving any semi-supervised learning framework, which we demonstrate with results on the recent state-of-the-art techniques MixMatch and FixMatch. We evaluate the efficacy of HierMatch on two benchmark datasets, namely CIFAR-100 and NABirds. Compared to MixMatch, HierMatch can reduce the usage of fine-grained labels by 50% on CIFAR-100 with only a marginal 0.59% drop in top-1 accuracy. Code: https://github.com/07agarg/hiermatch.
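One plausible way to use a coarse label as a weak supervisory signal when the fine-grained label is missing is to marginalize the model's fine-class probabilities over the hierarchy and apply cross-entropy at the coarse level. The sketch below illustrates only that idea and is not necessarily HierMatch's exact formulation; `fine_to_coarse` is an assumed lookup tensor.

```python
# Illustrative sketch: coarse-label supervision by marginalizing fine-class probabilities.
import torch

def coarse_label_loss(fine_logits, coarse_targets, fine_to_coarse):
    """Cross-entropy on coarse classes obtained by summing fine-class probabilities.

    fine_to_coarse: LongTensor of shape (num_fine,) mapping each fine class index
    to its coarse class index. Assumes every coarse index appears in the mapping.
    """
    fine_probs = torch.softmax(fine_logits, dim=-1)
    num_coarse = int(fine_to_coarse.max().item()) + 1
    coarse_probs = fine_probs.new_zeros(fine_probs.size(0), num_coarse)
    # Accumulate the probability mass of all fine classes under each coarse class.
    coarse_probs.index_add_(1, fine_to_coarse, fine_probs)
    # Negative log-likelihood of the observed coarse label.
    picked = coarse_probs.gather(1, coarse_targets.unsqueeze(1)).squeeze(1)
    return -torch.log(picked + 1e-8).mean()
```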
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over the previous state of the art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels.
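The contrastive objective referenced above is commonly written as a normalized-temperature cross-entropy (NT-Xent) over a batch of paired augmentations. Below is a minimal illustrative version, where `z1` and `z2` are assumed to be the projected representations of two augmented views of the same images; it is a sketch, not the released implementation.

```python
# Illustrative NT-Xent contrastive loss over a batch of paired augmented views.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Treat the two views of each image as a positive pair and all other
    samples in the batch as negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # shape (2N, d)
    sim = z @ z.t() / temperature                          # cosine similarities
    sim.fill_diagonal_(float("-inf"))                      # exclude self-similarity
    # The positive for sample i is its other view: index i+N (or i-N).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets.to(sim.device))
```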
Most semi-supervised learning methods over-sample labeled data when constructing training mini-batches. This paper studies whether this common practice improves learning and how. We compare it to an alternative setting where each mini-batch is sampled uniformly from all the training data, labeled or not, which greatly reduces direct supervision from true labels in the typical low-label regime. However, this simpler setting can also be seen as more general, and even necessary in multi-task problems where over-sampling labeled data would become intractable. Our experiments on semi-supervised CIFAR-10 image classification using FixMatch show a performance drop with the uniform sampling approach, which diminishes as the amount of labeled data or the training time increases. Further, we analyze the training dynamics to understand how over-sampling labeled data compares to uniform sampling. Our main finding is that over-sampling is especially beneficial early in training, but becomes less important in later stages as more pseudo-labels become correct. Nevertheless, we also find that keeping some true labels remains important to avoid the accumulation of confirmation errors from incorrect pseudo-labels.
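The two batch-construction schemes being compared can be sketched as follows. This is only an illustration: `labeled` and `unlabeled` stand in for indexable datasets, the 64/448 split is an arbitrary example quota, and sampling is done with replacement for simplicity.

```python
# Illustrative sketch of over-sampled vs. uniformly sampled mini-batch construction.
import random

def oversampled_batch(labeled, unlabeled, n_labeled=64, n_unlabeled=448):
    """Common practice: a fixed quota of labeled examples in every mini-batch."""
    batch_l = random.choices(range(len(labeled)), k=n_labeled)
    batch_u = random.choices(range(len(unlabeled)), k=n_unlabeled)
    return batch_l, batch_u

def uniform_batch(labeled, unlabeled, batch_size=512):
    """Alternative: sample uniformly from all data, labeled or not, so the
    expected number of true labels per batch matches their overall share."""
    total = len(labeled) + len(unlabeled)
    idx = random.choices(range(total), k=batch_size)
    batch_l = [i for i in idx if i < len(labeled)]
    batch_u = [i - len(labeled) for i in idx if i >= len(labeled)]
    return batch_l, batch_u
```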
Semi-supervised action recognition is a challenging but important task due to the high cost of data annotation. A common approach to this problem is to assign pseudo-labels to unlabeled data and then use them as additional supervision during training. Typically, in recent work, the pseudo-labels are obtained by training a model on the labeled data and then using the model's confident predictions to teach itself. In this work, we propose a more effective pseudo-labeling scheme, called Cross-Model Pseudo-Labeling (CMPL). Specifically, in addition to the primary backbone, we introduce a lightweight auxiliary network and ask the two to predict pseudo-labels for each other. We observe that, due to their different structural biases, the two models tend to learn complementary representations from the same video clips. Each model can therefore benefit from its counterpart by using cross-model predictions as supervision. Experiments on different data partition protocols demonstrate significant improvements of our framework over existing alternatives. For example, CMPL achieves 17.6% and 25.1% Top-1 accuracy on Kinetics-400 and UCF-101 using only the RGB modality and 1% labeled data, outperforming our baseline model, FixMatch, by 9.0% and 10.3%.
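A minimal sketch of the cross-model pseudo-labeling idea is shown below. It is illustrative rather than the paper's implementation: `primary` and `auxiliary` stand in for the two backbones, the confidence threshold is a placeholder, and the augmentation details used in practice are omitted.

```python
# Illustrative sketch: two models supervise each other through confident pseudo-labels.
import torch
import torch.nn.functional as F

def cross_model_loss(primary, auxiliary, clips, threshold=0.8):
    """Each model is trained on the confident pseudo-labels produced by the other."""
    def pseudo(model, x):
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=-1)
            conf, labels = probs.max(dim=-1)
        return labels, (conf >= threshold).float()

    aux_labels, aux_mask = pseudo(auxiliary, clips)   # auxiliary teaches primary
    pri_labels, pri_mask = pseudo(primary, clips)     # primary teaches auxiliary

    loss_primary = (F.cross_entropy(primary(clips), aux_labels,
                                    reduction="none") * aux_mask).mean()
    loss_auxiliary = (F.cross_entropy(auxiliary(clips), pri_labels,
                                      reduction="none") * pri_mask).mean()
    return loss_primary + loss_auxiliary
```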
We perform a rigorous evaluation of recent self- and semi-supervised ML techniques that leverage unlabeled data to improve downstream task performance, on three remote sensing tasks: riverbed segmentation, land cover mapping, and flood mapping. These methods are particularly valuable for remote sensing tasks, since unlabeled imagery is easily accessible while obtaining ground truth labels can often be expensive. We quantify the performance improvements one can expect on these remote sensing segmentation tasks when unlabeled imagery (outside of the labeled dataset) is made available for training. We also design experiments to test the effectiveness of these techniques when the test set exhibits a domain shift relative to the training and validation sets.
Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when fine-tuning from BERT, and yields improvements in the high-data regime, such as ImageNet, whether only 10% of the labels are available or a full labeled set with 1.3M extra unlabeled examples is used.
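The consistency-training objective described above can be sketched as a divergence term between predictions on a clean and a strongly augmented copy of the same unlabeled example. The version below is illustrative only: `rand_augment` stands in for an advanced augmentation such as RandAugment or back-translation, and the sharpening temperature is an assumed hyperparameter.

```python
# Illustrative consistency loss between clean and strongly augmented predictions.
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_batch, rand_augment, temperature=0.4):
    """Constrain predictions to be invariant to strong input noise."""
    with torch.no_grad():
        # Sharpened prediction on the original (clean) unlabeled example.
        target = torch.softmax(model(unlabeled_batch) / temperature, dim=-1)
    # Prediction on the strongly augmented version of the same example.
    noisy_log_probs = F.log_softmax(model(rand_augment(unlabeled_batch)), dim=-1)
    # KL divergence between the clean target and the noisy prediction.
    return F.kl_div(noisy_log_probs, target, reduction="batchmean")
```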
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that guesses low-entropy labels for data-augmented unlabeled examples and mixes labeled and unlabeled data using MixUp. MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success. We release all code used in our experiments.
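The two ingredients named above, low-entropy label guessing and MixUp, can be sketched as follows. This is an illustration under common default hyperparameters rather than the full MixMatch training loop; `unlabeled_views` is assumed to be a list of augmented copies of the same unlabeled batch.

```python
# Illustrative sketch of MixMatch-style label guessing (with sharpening) and MixUp.
import torch

def guess_label(model, unlabeled_views, temperature=0.5):
    """Average predictions over several augmented views, then sharpen to lower entropy."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(v), dim=-1)
                             for v in unlabeled_views]).mean(dim=0)
    sharpened = probs ** (1.0 / temperature)
    return sharpened / sharpened.sum(dim=-1, keepdim=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """MixUp: convex combination of two examples and their (soft) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    lam = torch.max(lam, 1 - lam)  # keep the mixture closer to the first example
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```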
Training deep neural networks for image recognition typically requires large-scale human-annotated data. To reduce the dependence of deep neural solutions on labeled data, state-of-the-art semi-supervised methods have been proposed in the literature. Nonetheless, the use of such semi-supervised methods is very rare in the field of facial expression recognition (FER). In this paper, we present a comprehensive study of recently proposed state-of-the-art semi-supervised learning methods in the context of FER. We conduct a comparative study of eight semi-supervised learning methods when various amounts of labeled samples are used. We also compare the performance of these methods against fully-supervised training. Our study shows that training existing semi-supervised methods with as few as 250 labeled samples per class can yield performance comparable to that of fully-supervised methods trained on the fully labeled datasets. To facilitate further research in this area, we make our code publicly available at: https://github.com/shuvenduroy/ssl_fer
Semi-supervised learning (SSL) is one of the most promising paradigms for circumventing the expensive labeling cost of building high-performance models. Most existing SSL methods conventionally assume that the labeled and unlabeled data are drawn from the same (class) distribution. In practice, however, the unlabeled data may include out-of-class samples: samples that cannot be given one-hot encoded labels from the closed set of classes in the labeled data, i.e., the unlabeled data is open-set. In this paper, we introduce OpenCoS, a method for handling this realistic semi-supervised learning scenario that builds on a recent self-supervised visual representation learning framework. Specifically, we first observe that out-of-class samples in an open-set unlabeled dataset can be identified effectively via self-supervised contrastive learning. OpenCoS then uses this information to overcome the failure modes of existing state-of-the-art semi-supervised methods, by assigning one-hot pseudo-labels and soft labels to the identified in-class and out-of-class unlabeled data, respectively. Our extensive experimental results show the effectiveness of OpenCoS in fixing up state-of-the-art semi-supervised methods so that they are suitable for diverse scenarios involving open-set unlabeled data.
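As a loose illustration of the first observation above, out-of-class unlabeled samples can be flagged using features from a self-supervised contrastive encoder. The detection rule below (cosine similarity to labeled-class prototypes with a threshold) is a simple stand-in and not necessarily OpenCoS's exact criterion; `encoder`, the threshold `tau`, and the prototype construction are assumptions.

```python
# Illustrative out-of-class detection with contrastive features and class prototypes.
import torch
import torch.nn.functional as F

@torch.no_grad()
def flag_out_of_class(encoder, labeled_x, labeled_y, unlabeled_x, num_classes, tau=0.5):
    """Return a boolean mask marking unlabeled samples that look out-of-class.

    Assumes every class has at least one labeled sample."""
    z_l = F.normalize(encoder(labeled_x), dim=-1)
    z_u = F.normalize(encoder(unlabeled_x), dim=-1)
    # Class prototypes: mean contrastive feature of labeled samples per class.
    prototypes = torch.stack([z_l[labeled_y == c].mean(dim=0)
                              for c in range(num_classes)])
    prototypes = F.normalize(prototypes, dim=-1)
    sim = z_u @ prototypes.t()              # similarity to every class prototype
    return sim.max(dim=-1).values < tau     # low similarity to all classes: likely out-of-class
```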
A common situation in classification tasks is that a large amount of data is available for training, but only a small portion is annotated with class labels. In this setting, the goal of semi-supervised training is to improve classification accuracy by leveraging not only the labeled data but also the large amount of unlabeled data. Recent works have achieved significant improvements by exploring consistency constraints between differently augmented labeled and unlabeled data. Following this path, we propose a novel unsupervised objective that focuses on the less studied relationship between high-confidence unlabeled data points that are similar to each other. The newly proposed Pair Loss minimizes the statistical distance between high-confidence pseudo-labels whose similarity exceeds a certain threshold. Our proposed SimPLE algorithm combines the Pair Loss with the techniques developed by the MixMatch family, and shows significant performance gains over previous algorithms on CIFAR-100 and Mini-ImageNet, while being on par with state-of-the-art methods on CIFAR-10 and SVHN. Furthermore, SimPLE also outperforms the state-of-the-art methods in the transfer learning setting, where the model is initialized with weights pre-trained on ImageNet or in-domain data. The code is available at github.com/zijian-hu/simple.
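A hedged sketch of a pair loss in the spirit described above follows. The confidence and similarity thresholds are placeholders, and the statistical distance is approximated here by a squared L2 distance between predicted distributions, which may differ from the distance used in the paper.

```python
# Illustrative pair loss: pull together high-confidence, mutually similar pseudo-labels.
import torch

def pair_loss(probs, conf_threshold=0.95, sim_threshold=0.9):
    """probs: (N, C) predicted class distributions for unlabeled examples."""
    conf, _ = probs.max(dim=-1)
    sim = probs @ probs.t()                          # pairwise similarity of predictions
    n = probs.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=probs.device)
    # A pair contributes if the anchor is confident and the pair is similar enough.
    mask = (conf.unsqueeze(1) >= conf_threshold) & (sim >= sim_threshold) & ~eye
    if mask.sum() == 0:
        return probs.new_zeros(())
    # Statistical distance approximated by squared L2 between the two distributions.
    dist = ((probs.unsqueeze(1) - probs.unsqueeze(0)) ** 2).sum(dim=-1)
    return (dist * mask.float()).sum() / mask.float().sum()
```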
We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semi-supervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al., which shows a sample complexity gap between standard and robust classification. We prove that unlabeled data bridges this gap: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to outperform state-of-the-art robust accuracies in (i) $\ell_\infty$ robustness against several strong attacks via adversarial training and (ii) certified $\ell_2$ and $\ell_\infty$ robustness via randomized smoothing. On SVHN, adding the dataset's own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of using the extra labels.
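The self-training step at the heart of the empirical result can be sketched as pseudo-labeling the extra unlabeled pool with a standard classifier and handing the result to an otherwise unchanged robust training procedure. The code below illustrates only that pseudo-labeling step, with an assumed data loader that yields image tensors.

```python
# Illustrative sketch of the pseudo-labeling step used in robust self-training.
import torch

@torch.no_grad()
def pseudo_label_pool(model, unlabeled_loader, device="cpu"):
    """Assign pseudo-labels to every unlabeled image with a standard (non-robust) classifier."""
    images, labels = [], []
    for x in unlabeled_loader:           # assumed to yield batches of image tensors
        x = x.to(device)
        preds = model(x).argmax(dim=-1)
        images.append(x.cpu())
        labels.append(preds.cpu())
    return torch.cat(images), torch.cat(labels)

# The resulting (image, pseudo-label) pairs are concatenated with the labeled set
# and fed to the downstream robust training procedure (adversarial training or
# randomized smoothing), which is unchanged and omitted here.
```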
Human observers can learn to recognize new categories of images from a handful of examples, yet doing so with artificial ones remains an open challenge. We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable. We therefore revisit and improve Contrastive Predictive Coding, an unsupervised objective for learning such representations. This new implementation produces features which support state-of-the-art linear classification accuracy on the ImageNet dataset. When used as input for non-linear classification with deep neural networks, this representation allows us to use 2-5× fewer labels than classifiers trained directly on image pixels. Finally, this unsupervised representation substantially improves transfer learning to object detection on the PASCAL VOC dataset, surpassing fully supervised pre-trained ImageNet classifiers.
The authors of this work present an overview of recent semi-supervised learning methods and related works. Although neural networks have achieved remarkable success in a variety of applications, they come with hard constraints, including the need for large amounts of labeled data. Therefore, semi-supervised learning, a learning scheme in which scarce labels and a large amount of unlabeled data are used to train models (e.g., deep neural networks), is becoming increasingly important. Based on the key assumptions of semi-supervised learning, namely the manifold assumption, the cluster assumption, and the continuity assumption, this work reviews recent semi-supervised learning approaches. In particular, methods that use deep neural networks in a semi-supervised learning setting are mainly discussed. In addition, existing works are first classified and explained based on their underlying ideas, and then holistic approaches that unify these ideas are detailed.