This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as 'pseudo ground truth' to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed 'pretext' tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.
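A minimal sketch of the training signal described above, assuming a hypothetical segmentation network and pre-computed motion-based pseudo masks; helper names and tensor shapes are illustrative, not the paper's code:

```python
# Train a per-frame segmentation ConvNet on 'pseudo ground truth' masks obtained
# from unsupervised motion segmentation of the source videos (shapes are assumptions).
import torch
import torch.nn as nn

def pseudo_label_step(model: nn.Module, frame: torch.Tensor, pseudo_mask: torch.Tensor,
                      optimizer: torch.optim.Optimizer) -> float:
    # frame: (B, 3, H, W) single RGB frames; pseudo_mask: (B, H, W) with 0/1 labels
    logits = model(frame)                              # (B, 2, H, W) background/foreground scores
    loss = nn.functional.cross_entropy(logits, pseudo_mask.long())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```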
Humans can easily segment moving objects without knowing what they are. That objectness could emerge from continuous visual observations motivates us to model grouping and movement jointly from unlabeled videos. Our premise is that a video has different views of the same scene related by moving components, and the right region segmentation and region flow would allow mutual view synthesis, which can be checked from the data itself without any external supervision. Our model starts with two separate pathways: an appearance pathway that outputs feature-based region segmentation for a single image, and a motion pathway that outputs motion features for a pair of images. It then binds them in a joint representation called segment flow, which pools flow offsets over each region and provides a gross characterization of moving regions for the entire scene. By training the model to minimize view synthesis errors based on segment flow, our appearance and motion pathways learn region segmentation and flow estimation automatically, without building them from low-level edges or optical flow respectively. Our model demonstrates the surprising emergence of objectness in the appearance pathway, surpassing prior works on zero-shot object segmentation from an image, moving object segmentation from a video with unsupervised test-time adaptation, and semantic image segmentation by supervised fine-tuning. Our work is the first truly zero-shot object segmentation from videos. It not only develops generic objectness for segmentation and tracking, but also outperforms prevalent image-based contrastive learning methods without augmentation engineering.
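A small sketch of the "segment flow" pooling step described above: per-pixel flow offsets are averaged over each soft region and broadcast back to pixels, giving a piecewise-constant flow field. Tensor shapes and names are assumptions for illustration.

```python
import torch

def segment_flow(masks: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    # masks: (B, K, H, W) softmax region assignments from the appearance pathway
    # flow:  (B, 2, H, W) per-pixel offsets from the motion pathway
    B, K, H, W = masks.shape
    m = masks.flatten(2)                                       # (B, K, H*W)
    f = flow.flatten(2)                                        # (B, 2, H*W)
    area = m.sum(dim=2, keepdim=True).clamp(min=1e-6)          # soft region sizes
    per_region = torch.einsum('bkn,bcn->bkc', m, f) / area     # (B, K, 2) mean offset per region
    dense = torch.einsum('bkn,bkc->bcn', m, per_region)        # broadcast offsets back to pixels
    return dense.view(B, 2, H, W)                              # piecewise-constant flow field
```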
We investigate methods for combining multiple self-supervised tasks (i.e., supervised tasks where data can be collected without manual labeling) in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks (even via a naïve multi-head architecture) always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
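A hedged sketch of the multi-head idea with a lasso term: each task head blends features from several trunk layers, and an L1 penalty on the blending weights encourages the representation to factorize. The class and parameter names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiHeadSSL(nn.Module):
    def __init__(self, trunk: nn.Module, heads: nn.ModuleDict, num_layers: int):
        super().__init__()
        self.trunk = trunk          # assumed to return `num_layers` equally shaped feature maps
        self.heads = heads          # one small head per self-supervised task
        self.alpha = nn.Parameter(torch.ones(len(heads), num_layers) / num_layers)

    def forward(self, x: torch.Tensor) -> dict:
        feats = torch.stack(self.trunk(x), dim=0)                # (L, B, C, H, W)
        outputs = {}
        for i, (name, head) in enumerate(self.heads.items()):
            w = self.alpha[i].view(-1, 1, 1, 1, 1)               # per-layer blending weights
            outputs[name] = head((w * feats).sum(dim=0))         # task-specific blend of layers
        return outputs

    def lasso_penalty(self) -> torch.Tensor:
        return self.alpha.abs().sum()                            # add to the summed task losses
```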
Large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications. To avoid the extensive cost of collecting and annotating large-scale datasets, self-supervised learning methods, a subset of unsupervised learning methods, are proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels. This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos. First, the motivation, general pipeline, and terminologies of this field are described. Then the common deep neural network architectures used for self-supervised learning are summarized. Next, the schema and evaluation metrics of self-supervised learning methods are reviewed, followed by the commonly used image and video datasets and the existing self-supervised visual feature learning methods. Finally, quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning. The paper concludes with a set of promising future directions for self-supervised visual feature learning.
In this paper, we show that recent advances in self-supervised feature learning enable unsupervised object discovery and semantic segmentation with performance matching supervised semantic segmentation from ten years ago. We propose a method based on unsupervised saliency masks and self-supervised feature clustering to kick-start object discovery, and then train a semantic segmentation network on pseudo-labels to bootstrap the system on images with multiple objects. We present results on Pascal VOC that go far beyond the current state of the art (47.3 mIoU), and we report results on COCO for the first time across the full set of 81 categories: our method discovers 34 categories with more than 20% IoU, while obtaining an average IoU of 19.6 over all 81 categories.
In this paper we study the problem of image representation learning without human annotation. By following the principles of selfsupervision, we build a convolutional neural network (CNN) that can be trained to solve Jigsaw puzzles as a pretext task, which requires no manual labeling, and then later repurposed to solve object classification and detection. To maintain the compatibility across tasks we introduce the context-free network (CFN), a siamese-ennead CNN. The CFN takes image tiles as input and explicitly limits the receptive field (or context) of its early processing units to one tile at a time. We show that the CFN includes fewer parameters than AlexNet while preserving the same semantic learning capabilities. By training the CFN to solve Jigsaw puzzles, we learn both a feature mapping of object parts as well as their correct spatial arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. Our proposed method for learning visual representations outperforms state of the art methods in several transfer learning benchmarks.
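A simplified sketch of the jigsaw pretext task: a shared-weight tile encoder sees one tile at a time, and an MLP predicts which permutation from a fixed set was applied. Dimensions and the permutation-set size are assumptions rather than the exact CFN configuration.

```python
import torch
import torch.nn as nn

class JigsawNet(nn.Module):
    def __init__(self, tile_encoder: nn.Module, feat_dim: int, num_permutations: int = 100):
        super().__init__()
        self.encoder = tile_encoder                # context-limited: sees one tile at a time
        self.classifier = nn.Sequential(
            nn.Linear(9 * feat_dim, 4096), nn.ReLU(),
            nn.Linear(4096, num_permutations),     # which permutation was applied?
        )

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        # tiles: (B, 9, 3, h, w), the shuffled 3x3 grid of crops
        B = tiles.shape[0]
        feats = self.encoder(tiles.flatten(0, 1))  # (B*9, feat_dim), weights shared across tiles
        return self.classifier(feats.view(B, -1))  # trained with cross-entropy on the permutation id
```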
Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNNs. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representations in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.
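The ranking idea above can be sketched with a standard triplet margin loss, used here as a stand-in for the paper's exact ranking formulation: two patches linked by a track should lie closer in feature space than a patch drawn from elsewhere.

```python
import torch
import torch.nn as nn

def tracking_triplet_loss(encoder: nn.Module, first: torch.Tensor, tracked: torch.Tensor,
                          negative: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    # first/tracked: patch pairs connected by a visual track; negative: a patch from elsewhere
    a = nn.functional.normalize(encoder(first), dim=1)
    p = nn.functional.normalize(encoder(tracked), dim=1)
    n = nn.functional.normalize(encoder(negative), dim=1)
    return nn.functional.triplet_margin_loss(a, p, n, margin=margin)
```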
We present an unsupervised representation learning approach using videos without semantic labels. We leverage temporal coherence as a supervisory signal by formulating representation learning as a sequence sorting task. We take temporally shuffled frames (i.e., in non-chronological order) as inputs and train a convolutional neural network to sort the shuffled sequences. Similar to comparison-based sorting algorithms, we propose to extract features from all frame pairs and aggregate them to predict the correct order. As sorting a shuffled image sequence requires an understanding of the statistical temporal structure of images, training with such a proxy task allows us to learn rich and generalizable visual representations. We validate the effectiveness of the learned representation using our method as pre-training on high-level recognition problems. The experimental results show that our method compares favorably against state-of-the-art methods on action recognition, image classification, and object detection tasks.
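A hedged sketch of the sorting task: encode each shuffled frame, build features for every frame pair with a shared MLP, and classify the ordering. Hidden sizes are illustrative, and in practice forward/backward-equivalent orders may be merged into a single class.

```python
import itertools
import math
import torch
import torch.nn as nn

class OrderPredictionNet(nn.Module):
    def __init__(self, frame_encoder: nn.Module, feat_dim: int, num_frames: int = 4):
        super().__init__()
        self.encoder = frame_encoder
        self.pairs = list(itertools.combinations(range(num_frames), 2))
        self.pair_mlp = nn.Sequential(nn.Linear(2 * feat_dim, 512), nn.ReLU())
        self.classifier = nn.Linear(len(self.pairs) * 512, math.factorial(num_frames))

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, N, 3, H, W) in shuffled, non-chronological order
        B, N = frames.shape[:2]
        f = self.encoder(frames.flatten(0, 1)).view(B, N, -1)        # (B, N, feat_dim)
        pair_feats = [self.pair_mlp(torch.cat([f[:, i], f[:, j]], dim=1))
                      for i, j in self.pairs]                        # one feature per frame pair
        return self.classifier(torch.cat(pair_feats, dim=1))         # logits over orderings
```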
This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [21] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.
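A minimal sketch of this pretext task: a shared patch encoder embeds a center patch and one of its eight neighbours, and a classifier predicts which relative position the neighbour came from. Layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class RelativePositionNet(nn.Module):
    def __init__(self, patch_encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = patch_encoder               # shared weights for both patches
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 8),                    # 8 possible neighbour positions
        )

    def forward(self, center: torch.Tensor, neighbour: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.encoder(center), self.encoder(neighbour)], dim=1)
        return self.classifier(f)                  # trained with cross-entropy on the position id
```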
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of images. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim to learn representations from unlabeled data that transfer to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches to few-shot and self-supervised object detection. We then give our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/
The promise of self-supervised learning (SSL) is to leverage large amounts of unlabeled data to solve complex tasks. While there has been excellent progress with simple, image-level learning, recent methods have shown the advantage of including knowledge of image structure. However, by introducing hand-crafted image segmentations to define regions of interest, or specialized augmentation strategies, these methods sacrifice the simplicity and generality that makes SSL so powerful. Instead, we propose a self-supervised learning paradigm that discovers this image structure by itself. Our method, ODIN, couples object discovery and representation networks to discover meaningful image segmentations without any supervision. The resulting learning paradigm is simpler, less brittle, and more general, and achieves state-of-the-art transfer learning results for object detection and instance segmentation on COCO, and semantic segmentation on PASCAL and Cityscapes, while surpassing supervised pre-training for video segmentation on DAVIS.
Videos are a rich source for self-supervised learning (SSL) due to the presence of natural temporal transformations of objects. However, current methods typically sample video clips for learning randomly, which leads to a poor supervisory signal. In this work, we propose PreViTS, an SSL framework that uses an unsupervised tracking signal to select clips containing the same object, which helps better exploit temporal transformations of objects. PreViTS further uses the tracking signal to spatially constrain the frame regions to learn from, and trains the model to localize meaningful objects by providing supervision on Grad-CAM attention maps. To evaluate our approach, we train momentum contrastive (MoCo) encoders on the VGG-Sound and Kinetics-400 datasets with PreViTS. Training with PreViTS outperforms representations learned by MoCo alone on both image recognition and video classification downstream tasks, obtaining state-of-the-art performance on action classification. PreViTS helps learn feature representations that are more robust to background and context changes, as shown on datasets with background and context shifts. Learning from large-scale uncurated videos with PreViTS could lead to more accurate and robust visual feature representations.
The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in representation capabilities of the models and computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10× or 100×? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between 'enormous data' and visual deep learning. By exploiting the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that the performance on vision tasks increases logarithmically with the volume of training data. Second, we show that representation learning (or pretraining) still holds a lot of promise. One can improve performance on many vision tasks by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks including image classification, object detection, semantic segmentation and human pose estimation. Our sincere hope is that this inspires the vision community to not undervalue the data and develop collective efforts in building larger datasets.
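The reported logarithmic trend can be written compactly; the constants a and b below are fit parameters for illustration, not values taken from the paper.

```latex
\[
  \text{performance}(N) \;\approx\; a \,\log_{10} N + b,
  \qquad N = \text{number of pretraining examples.}
\]
```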
The task of unsupervised semantic segmentation aims to cluster pixels into semantically meaningful groups. Specifically, pixels assigned to the same cluster should share high-level semantic properties, such as their object or part category. This paper presents MaskDistill: a novel framework for unsupervised semantic segmentation based on three key ideas. First, we advocate a data-driven strategy to generate object masks that serve as a pixel-grouping prior for semantic segmentation. This approach omits hand-crafted priors, which are often designed for specific scene compositions and limit the applicability of competing frameworks. Second, MaskDistill clusters the object masks to obtain pseudo ground truth for training an initial object segmentation model. Third, we leverage this model to filter out low-quality object masks. This strategy mitigates the noise in our pixel-grouping prior and results in a clean collection of masks that we use to train a final segmentation model. By combining these components, we considerably outperform previous works for unsupervised semantic segmentation on PASCAL (+11% mIoU) and COCO (+4% mask AP50). Interestingly, in contrast to existing approaches, our framework does not rely on low-level image cues and is not limited to object-centric datasets. Code and models will be made available.
We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders: a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
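A compact sketch of the two losses described above: an L2 reconstruction term on the dropped region plus an adversarial term from a discriminator that judges the inpainted content. Weights, shapes, and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

def context_encoder_losses(generator: nn.Module, discriminator: nn.Module,
                           image: torch.Tensor, mask: torch.Tensor,
                           adv_weight: float = 0.001) -> torch.Tensor:
    # image: (B, 3, H, W); mask: (B, 1, H, W) with 1 where the region was dropped
    corrupted = image * (1 - mask)                     # the generator only sees the surroundings
    predicted = generator(corrupted)                   # (B, 3, H, W) hypothesis for the hole
    rec = ((predicted - image) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
    d_fake = discriminator(predicted * mask)           # logits on the inpainted region
    adv = nn.functional.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return rec + adv_weight * adv                      # the discriminator is trained separately
```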
Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.
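A hedged sketch of one DeepCluster-style epoch, assuming a dataloader that iterates the unlabeled set in a fixed order; preprocessing details (e.g., PCA/whitening of features, re-initializing the head) are omitted, and the helper names are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

@torch.no_grad()
def extract_features(backbone: nn.Module, dataloader) -> np.ndarray:
    backbone.eval()
    return np.concatenate([backbone(x).cpu().numpy() for x in dataloader])

def deepcluster_epoch(backbone: nn.Module, head: nn.Linear, dataloader, optimizer, k: int = 1000):
    # 1) compute features for the whole unlabeled set with the current backbone
    feats = extract_features(backbone, dataloader)                      # (N, D)
    # 2) k-means assignments become this epoch's classification pseudo-labels
    pseudo = torch.as_tensor(KMeans(n_clusters=k).fit_predict(feats))   # (N,) ints in [0, k)
    # 3) train backbone + head with cross-entropy on the pseudo-labels
    #    (the head is typically re-initialized each epoch since cluster ids are arbitrary)
    backbone.train()
    seen = 0
    for x in dataloader:                                                # same fixed order as above
        logits = head(backbone(x))
        loss = nn.functional.cross_entropy(logits, pseudo[seen:seen + x.shape[0]])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        seen += x.shape[0]
```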
Video segmentation, i.e., grouping video frames into multiple segments or objects, plays a critical role in a broad range of practical applications, e.g., visual effect assistance in movies, scene understanding in autonomous driving, and virtual background creation in video conferencing, to name a few. Recently, with the renaissance of connectionism in computer vision, there has been a surge of deep learning-based approaches dedicated to video segmentation that deliver compelling performance. In this survey, we comprehensively review two basic lines of research in this area, namely generic object segmentation (of unknown categories) in videos and video semantic segmentation, by introducing their respective task settings, background concepts, perceived need, development history, and main challenges. We also provide a detailed overview of representative literature on both methods and datasets. Additionally, we present quantitative performance comparisons of the reviewed methods on benchmark datasets. Finally, we point out a set of unsolved open issues in this field and suggest possible opportunities for further research.
We investigate and improve self-supervision as a drop-in replacement for ImageNet pretraining, focusing on automatic colorization as the proxy task. Self-supervised training has been shown to be more promising for utilizing unlabeled data than other, traditional unsupervised learning methods. We build on this success and evaluate the ability of our self-supervised network in several contexts. On VOC segmentation and classification tasks, we present results that are state-of-the-art among methods not using ImageNet labels for pretraining representations. Moreover, we present the first in-depth analysis of self-supervision via colorization, concluding that formulation of the loss, training details and network architecture play important roles in its effectiveness. This investigation is further expanded by revisiting the ImageNet pretraining paradigm, asking questions such as: How much training data is needed? How many labels are needed? How much do features change when fine-tuned? We relate these questions back to self-supervision by showing that colorization provides a similarly powerful supervisory signal as various flavors of ImageNet pretraining.
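A minimal sketch of colorization as a proxy task, using a plain regression of the ab chroma channels from the L channel as a stand-in for the paper's exact loss formulation; shapes and the choice of regression loss are assumptions.

```python
import torch
import torch.nn as nn

def colorization_loss(model: nn.Module, lab_image: torch.Tensor) -> torch.Tensor:
    # lab_image: (B, 3, H, W) in Lab space; channel 0 is L, channels 1-2 are ab
    luminance = lab_image[:, :1]                  # grayscale input the network sees
    chroma = lab_image[:, 1:]                     # colour channels it must predict
    predicted = model(luminance)                  # (B, 2, H, W)
    return nn.functional.smooth_l1_loss(predicted, chroma)
```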
The progress of self-supervised learning has brought powerful methods for learning general image representations. So far, it has mostly focused on image-level learning. In turn, tasks such as unsupervised image segmentation have not benefited from this trend, as they require spatially-diverse representations. However, learning dense representations is challenging, since in the unsupervised setting it is not clear how to guide the model to learn representations that correspond to various potential object categories. In this paper, we argue that self-supervised learning of object parts is a solution to this issue. Object parts are generalizable: they are a priori independent of object definitions, but can be grouped to form objects a posteriori. To this end, we leverage the recently proposed Vision Transformer's capability of attending to objects and combine it with a spatially dense clustering task for fine-tuning the spatial tokens. Our method surpasses the state-of-the-art on three semantic segmentation benchmarks by 17%-3%, showing that our representations are versatile under various object definitions. Finally, we extend this to fully unsupervised segmentation, which refrains completely from using label information even at test time, and demonstrate that a simple method for automatically merging discovered object parts based on community detection yields substantial gains.
Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vice versa. Our method, called "BoxSup", produces competitive results (e.g., 62.0% mAP for validation) supervised by boxes only, on par with strong baselines (e.g., 63.8% mAP) fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further unleashes the power of deep convolutional networks and yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT [24].
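A high-level sketch of the alternation described above: for each ground-truth box, pick the candidate segment that currently agrees best with the box and the network's prediction, then train on the selected segments as if they were mask labels. The `overlap` scoring helper, the `box.label` field, and the data layout are hypothetical, not BoxSup's exact procedure.

```python
import torch
import torch.nn as nn

def boxsup_round(net: nn.Module, samples, optimizer):
    # samples: iterable of (image, boxes, proposals); each box carries a class label,
    # each proposal is a binary (H, W) candidate mask from an unsupervised proposal method
    for image, boxes, proposals in samples:
        with torch.no_grad():
            current = net(image.unsqueeze(0)).argmax(dim=1)[0]       # (H, W) current estimate
        pseudo_mask = torch.zeros_like(current)
        for box in boxes:
            # hypothetical score: agreement with the box and with the current prediction
            best = max(proposals, key=lambda m: overlap(m, box, current))
            pseudo_mask[best > 0] = box.label
        loss = nn.functional.cross_entropy(net(image.unsqueeze(0)),
                                           pseudo_mask.unsqueeze(0).long())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```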