Noisy labels in large-scale e-commerce product data (i.e., product items placed into the wrong category) are a critical issue for the product categorization task, because they are unavoidable, non-trivial to remove, and degrade prediction performance significantly. Training a product title classification model that is robust to noisy labels in the training data is very important for making product categorization applications more practical. In this paper, we study the effects of instance-dependent noise on the performance of product title classification by comparing our data denoising algorithm against different noise-resistant training algorithms, which are designed to prevent classifier models from over-fitting to the noise. We develop a simple yet effective deep neural network for product title classification to use as our base classifier. In addition to recent methods for simulating instance-dependent noise, we propose a novel noise simulation algorithm based on product title similarity. Our experiments cover multiple datasets, various noise methods, and different training solutions. The results reveal the limitations of the classification task when the noise rate is non-negligible and the data distribution is highly skewed.
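The abstract above does not spell out the title-similarity noise simulation. As a minimal illustrative sketch (the Jaccard word-overlap similarity and the flip rule below are assumptions, not the authors' algorithm), instance-dependent noise could be injected by flipping a sampled subset of items to the label of their most lexically similar item in another class:

```python
import random

def jaccard(a, b):
    """Word-overlap similarity between two product titles."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def inject_similarity_noise(titles, labels, noise_rate, rng):
    """Flip a noise_rate fraction of items to the label of their most
    similar differently-labeled title (instance-dependent noise)."""
    n = len(titles)
    flip_idx = rng.sample(range(n), int(round(noise_rate * n)))
    noisy = list(labels)
    for i in flip_idx:
        # candidates: items currently carrying a different label
        candidates = [j for j in range(n) if labels[j] != labels[i]]
        if candidates:
            j = max(candidates, key=lambda j: jaccard(titles[i], titles[j]))
            noisy[i] = labels[j]
    return noisy
```

Because the flipped label is chosen by title similarity rather than uniformly at random, items near a category boundary are more likely to be corrupted, which is the defining property of instance-dependent noise.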
Large-scale datasets in NLP suffer from noisy labels due to erroneous automatic and human annotation procedures. We study the problem of text classification with label noise, and aim to capture this noise through an auxiliary noise model on top of the classifier. We first assign a probabilistic score to each training sample, reflecting how likely its label is noisy, by fitting a beta mixture model to the losses from the early epochs of training. We then use this score to selectively guide the learning of the noise model and the classifier. Our empirical evaluation on two text classification tasks shows that our approach improves accuracy over the baselines and prevents over-fitting to the noise.
Deep learning has achieved remarkable success in numerous domains with the help of massive amounts of big data. However, the quality of data labels is a concern because high-quality labels are lacking in many real-world scenarios. As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels (robust training) has become an important task in modern deep learning applications. In this survey, we first describe the problem of learning with label noise from a supervised learning perspective. Next, we provide a comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological differences, followed by a systematic comparison of six properties used to evaluate their superiority. Subsequently, we perform an in-depth analysis of noise rate estimation and summarize the typically used evaluation methodology, including public noisy datasets and evaluation metrics. Finally, we present several promising research directions that can serve as a guideline for future studies. All the content will be available at https://github.com/songhwanjun/awesome-noisy-labels.
Imperfect labels are ubiquitous in real-world datasets and seriously harm model performance. Several recent effective methods for handling noisy labels share two key steps: 1) dividing samples into clean and noisy sets by training loss; 2) using semi-supervised methods to generate pseudo-labels for samples in the mislabeled set. However, current methods always hurt informative hard samples, because hard samples and noisy ones have similar loss distributions. In this paper, we propose PGDF (Prior Guided Denoising Framework), a novel framework for learning a deep model that suppresses noise by generating prior knowledge about the samples, which is integrated into both the sample-dividing step and the semi-supervised step. Our framework can preserve more informative hard clean samples in the cleanly labeled set. Besides, it also improves the quality of pseudo-labels during the semi-supervised step by suppressing the noise in the current pseudo-label generation scheme. To further enhance the hard samples, we reweight the samples in the cleanly labeled set during training. We evaluated our method on synthetic datasets based on CIFAR-10 and CIFAR-100, as well as on the real-world datasets WebVision and Clothing1M. The results demonstrate substantial improvements over state-of-the-art methods.
The development of accurate methods for multi-label classification (MLC) of remote sensing (RS) images is one of the most important research topics in RS. To address MLC problems, a large number of reliable training images annotated with multiple land-cover class labels (multi-labels) are needed, which are often not available in RS. Collecting such annotations is time-consuming and costly. A common procedure to obtain annotations at zero labeling cost is to rely on thematic products or crowdsourced labels. As a drawback, these procedures come with the risk of label noise that can distort the learning process of MLC algorithms. In the literature, most label-noise-robust methods are designed for single-label classification (SLC) problems in computer vision (CV), where each image is annotated by a single label. Unlike SLC, label noise in MLC can be associated with: 1) subtractive label noise (a land-cover class label is not assigned to an image although the class is present in the image); 2) additive label noise (a land-cover class label is assigned to an image although the class is not present in the given image); or 3) mixed label noise (a combination of both). In this paper, we investigate three different noise-robust CV SLC methods and adapt them to the multi-label noise scenarios of RS. During experiments, we study the effects of different types of multi-label noise and rigorously evaluate the adapted methods. To this end, we also introduce a synthetic multi-label noise injection strategy that is more adequate for simulating operational scenarios than the uniform label noise injection strategy, in which the labels of absent and present classes are flipped with uniform probability. Further, we study the relevance of different evaluation metrics in MLC problems under noisy multi-labels.
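The three noise types above can be reproduced with a small injection routine. The sketch below (function and parameter names are assumptions, not from the paper) flips present classes off with probability `p_sub` and absent classes on with probability `p_add`; setting both yields mixed label noise:

```python
import random

def inject_multilabel_noise(label_matrix, p_add, p_sub, rng):
    """Inject synthetic multi-label noise into a binary label matrix.

    p_sub: probability of dropping a present class (subtractive noise)
    p_add: probability of adding an absent class (additive noise)
    Using both at once yields mixed label noise.
    """
    noisy = []
    for row in label_matrix:
        new_row = []
        for bit in row:
            if bit == 1 and rng.random() < p_sub:
                new_row.append(0)   # subtractive: class present, label removed
            elif bit == 0 and rng.random() < p_add:
                new_row.append(1)   # additive: class absent, label assigned
            else:
                new_row.append(bit)
        noisy.append(new_row)
    return noisy
```

This corresponds to the uniform strategy the paper compares against; the paper's own synthetic strategy modifies these flip probabilities to better match operational scenarios.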
We show that large pre-trained language models are inherently highly capable of identifying label errors in natural language datasets: simply examining out-of-sample data points in descending order of fine-tuned task loss significantly outperforms more complex error-detection mechanisms proposed in previous work. To this end, we contribute a novel method for introducing realistic, human-originated label noise into existing crowdsourced datasets such as SNLI and TweetNLP. We show that this noise has similar properties to real, hand-verified label errors, and is harder to detect than existing synthetic noise, creating challenges for model robustness. We argue that human-originated noise is a better standard for evaluation than synthetic noise. Finally, we use crowdsourced verification to evaluate the detection of real errors on IMDB, Amazon Reviews, and Recon, and confirm that pre-trained models perform at a 9-36% higher absolute Area Under the Precision-Recall Curve than existing models.
Despite being robust to small amounts of label noise, convolutional neural networks trained with stochastic gradient methods have been shown to easily fit random labels. When there are a mixture of correct and mislabelled targets, networks tend to fit the former before the latter. This suggests using a suitable two-component mixture model as an unsupervised generative model of sample loss values during training to allow online estimation of the probability that a sample is mislabelled. Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss). We further adapt mixup augmentation to drive our approach a step further. Experiments on CIFAR-10/100 and TinyImageNet demonstrate a robustness to label noise that substantially outperforms recent state-of-the-art. Source code is available at https://git.io/fjsvE.
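The two-component beta mixture over per-sample losses can be sketched as a small moment-matching EM loop in plain Python. This is an illustration, not the authors' implementation: the initial parameters and the clamping constants are assumptions, and losses are assumed pre-scaled to the open interval (0, 1).

```python
import math
import random

def beta_pdf(x, a, b):
    x = min(max(x, 1e-6), 1 - 1e-6)  # keep the logs finite
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_norm)

def fit_beta_mixture(losses, n_iter=30):
    """EM for a two-component beta mixture over losses in (0, 1).
    Component 0 models clean (low-loss), component 1 noisy (high-loss) samples."""
    params = [(2.0, 5.0), (5.0, 2.0)]   # assumed initial guesses
    weights = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of the noisy component for each loss value
        resp = []
        for x in losses:
            p0 = weights[0] * beta_pdf(x, *params[0])
            p1 = weights[1] * beta_pdf(x, *params[1])
            resp.append(p1 / (p0 + p1))
        # M-step: weighted moment matching for each beta component
        params = []
        for rk in ([1 - r for r in resp], resp):
            w = sum(rk)
            m = sum(r * x for r, x in zip(rk, losses)) / w
            v = sum(r * (x - m) ** 2 for r, x in zip(rk, losses)) / w
            common = max(m * (1 - m) / v - 1, 1e-2)
            params.append((max(m * common, 1e-2), max((1 - m) * common, 1e-2)))
        noisy_frac = sum(resp) / len(resp)
        weights = [1 - noisy_frac, noisy_frac]
    return params, weights

def p_mislabeled(x, params, weights):
    """Posterior probability that a sample with loss x carries a noisy label."""
    p0 = weights[0] * beta_pdf(x, *params[0])
    p1 = weights[1] * beta_pdf(x, *params[1])
    return p1 / (p0 + p1)
```

The posterior `p_mislabeled` is what drives the loss correction: high-loss samples get a probability near 1 and their targets lean on the network prediction (bootstrapping), while low-loss samples keep their annotated labels.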
The development of accurate methods for multi-label classification (MLC) of remote sensing (RS) images is one of the most important research topics in RS. Methods based on deep convolutional neural networks (CNNs) have shown strong performance on RS MLC problems. However, CNN-based methods usually require a large number of reliable training images annotated with multiple land-cover class labels. Collecting such data is time-consuming and costly. To address this problem, publicly available thematic products, which may include noisy labels, can be used to annotate RS images at zero labeling cost. However, multi-label noise (which can be associated with wrong as well as missing label annotations) can distort the learning process of the MLC algorithm. The detection and correction of label noise are challenging tasks, especially in a multi-label scenario, where each image can be associated with more than one label. To address this problem, we propose a novel noise-robust collaborative multi-label learning (RCML) method to alleviate the adverse effects of multi-label noise during the training phase of a CNN model. RCML identifies, ranks, and excludes noisy multi-labels in RS images based on three main modules: 1) a discrepancy module; 2) a group lasso module; and 3) a swap module. The discrepancy module ensures that the two networks learn diverse features while producing the same predictions. The task of the group lasso module is to detect potentially noisy labels assigned to the multi-labeled training images, while the swap module is devoted to exchanging ranking information between the two networks. Unlike existing methods that make assumptions about the noise distribution, our proposed RCML does not make any prior assumption about the type of noise in the training set. Our code is publicly available online: http://www.noisy-labels-in-rs.org
The performance of the Deep Learning (DL) models depends on the quality of labels. In some areas, the involvement of human annotators may lead to noise in the data. When these corrupted labels are blindly regarded as the ground truth (GT), DL models suffer from performance deficiency. This paper presents a method that aims to learn a confident model in the presence of noisy labels. This is done in conjunction with estimating the uncertainty of multiple annotators. We robustly estimate the predictions given only the noisy labels by adding entropy or information-based regularizer to the classifier network. We conduct our experiments on a noisy version of MNIST, CIFAR-10, and FMNIST datasets. Our empirical results demonstrate the robustness of our method as it outperforms or performs comparably to other state-of-the-art (SOTA) methods. In addition, we evaluated the proposed method on the curated dataset, where the noise type and level of various annotators depend on the input image style. We show that our approach performs well and is adept at learning annotators' confusion. Moreover, we demonstrate how our model is more confident in predicting GT than other baselines. Finally, we assess our approach for segmentation problem and showcase its effectiveness with experiments.
Purpose: Deep neural networks (DNNs) have been widely applied in medical image classification, benefiting from their powerful mapping capability on medical images. However, these existing deep-learning-based methods depend on large amounts of carefully labeled images. Meanwhile, noise is inevitably introduced in the labeling process, degrading the performance of the model. Hence, it is important to devise robust training strategies to mitigate label noise in medical image classification tasks. Methods: In this work, we propose a novel Bayesian-statistics-guided label refurbishment mechanism (BLRM) for DNNs to prevent overfitting to noisy images. BLRM utilizes the maximum a posteriori probability (MAP) from Bayesian statistics and a specified time-weighted technique to selectively correct the labels of noisy images. When BLRM is activated, the training images are gradually purified over the training epochs, further improving classification performance. Results: Comprehensive experiments on both synthetic noisy images (public OCT and Messidor datasets) and real-world noisy images (Animal-10N) demonstrate that BLRM refurbishes noisy labels selectively, curbing the adverse effects of noisy data. Also, the anti-noise BLRM integrated with a DNN is effective at different noise ratios and is independent of the backbone DNN architecture. Moreover, BLRM outperforms state-of-the-art comparative anti-noise methods. Conclusions: These investigations indicate that the proposed BLRM is capable of mitigating label noise in medical image classification tasks.
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods. To exploit this property, the early stopping trick, which stops the optimization at an early stage of training, is usually adopted. Current methods generally decide the early stopping point by considering the DNN as a whole. However, a DNN can be considered as a composition of a series of layers, and it is found that the latter layers in a DNN are much more sensitive to label noise, while their former counterparts are quite robust. Therefore, selecting a stopping point for the whole network may make different DNN layers antagonistically affect each other, degrading the final performance. In this paper, we propose to separate a DNN into different parts and progressively train them to address this problem. Instead of early stopping, which trains a whole DNN all at once, we initially train the former DNN layers by optimizing the DNN with a relatively large number of epochs. During training, we progressively train the latter DNN layers by using a smaller number of epochs, with the preceding layers fixed, to counteract the impact of noisy labels. We term the proposed method progressive early stopping (PES). Despite its simplicity, compared with early stopping, PES can help to obtain more promising and stable results. Furthermore, by combining PES with existing noisy-label training approaches, we achieve state-of-the-art performance on image classification benchmarks.
Crowdsourcing platforms are often used to collect datasets for training machine learning models, despite higher levels of labeling inaccuracy compared to expert labeling. There are two common strategies to manage the impact of such noise. The first involves aggregating redundant annotations, but at the cost of labeling substantially fewer examples. Second, prior works have also considered using the entire annotation budget to label as many examples as possible and then applying denoising algorithms to implicitly clean the dataset. We find a middle ground and propose an approach that reserves a fraction of the annotations to explicitly relabel highly probable labeling errors, optimizing the annotation process. In particular, we allocate a large portion of the labeling budget to form an initial dataset used to train a model. This model is then used to identify the specific examples that appear most likely to be incorrect, which we spend the remaining budget to relabel. Experiments over three model variations and four natural language processing tasks show that our approach outperforms or matches both label aggregation and advanced denoising methods designed to handle noisy labels when allocated the same finite annotation budget.
To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularization, and self and non-self label correction (LC). Three key questions are investigated: (1) Self LC is the most appealing, as it exploits the model's own knowledge and requires no extra models. However, how to automatically decide the trust degree of a learner as training goes is not well answered in the literature. (2) Some methods penalize while others reward low-entropy predictions, prompting us to ask which one is better. (3) Using the standard training setting, a trained network is of low confidence when severe noise exists, making it hard to leverage its high-entropy self knowledge. To resolve issue (1), taking two well-accepted propositions, namely that deep neural networks learn meaningful patterns before fitting noise, and the minimum entropy regularization principle, we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and entropy. Specifically, given a data point, we progressively increase trust in its predicted label distribution over its annotated one if the model has been trained for enough time and the prediction is of low entropy (high confidence). For question (2), according to ProSelfLC, we empirically prove that it is better to redefine a meaningful low-entropy status and optimize the learner toward it. This serves as a defense of entropy minimization. To address issue (3), we decrease the entropy of self knowledge using a low temperature before exploiting it to correct labels, so that the revised labels redefine a low-entropy target state. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings, and on both image and protein datasets. Furthermore, our source code is available at https://github.com/xinshaoamoswang/proselflc-at.
Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy label memorization. We propose a metric, called susceptibility, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it does not require access to ground-truth labels and it only uses unlabeled data. We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that one can utilize susceptibility and the overall training accuracy to distinguish models that maintain a low memorization on the training set and generalize well to unseen clean data.
Point cloud segmentation is a fundamental task in 3D. Despite recent progress on point cloud segmentation with the power of deep networks, current learning methods based on the clean label assumptions may fail with noisy labels. Yet, class labels are often mislabeled at both instance-level and boundary-level in real-world datasets. In this work, we take the lead in solving the instance-level label noise by proposing a Point Noise-Adaptive Learning (PNAL) framework. Compared to noise-robust methods on image tasks, our framework is noise-rate blind, to cope with the spatially variant noise rate specific to point clouds. Specifically, we propose a point-wise confidence selection to obtain reliable labels from the historical predictions of each point. A cluster-wise label correction is proposed with a voting strategy to generate the best possible label by considering the neighbor correlations. To handle boundary-level label noise, we also propose a variant "PNAL-boundary" with a progressive boundary label cleaning strategy. Extensive experiments demonstrate its effectiveness on both synthetic and real-world noisy datasets. Even with $60\%$ symmetric noise and high-level boundary noise, our framework significantly outperforms its baselines, and is comparable to the upper bound trained on completely clean data. Moreover, we cleaned the popular real-world dataset ScanNetV2 for rigorous experiment. Our code and data are available at https://github.com/pleaseconnectwifi/PNAL.
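The cluster-wise correction step can be sketched as follows. This is a deliberately simplified stand-in: PNAL weights votes by per-point confidence drawn from historical predictions, whereas the sketch below uses a plain majority vote within each cluster.

```python
from collections import Counter

def cluster_vote_correct(labels, clusters):
    """Relabel every point with the majority label of its cluster.

    labels:   per-point class labels (possibly noisy)
    clusters: lists of point indices grouped by spatial clustering
    PNAL itself weights the votes by per-point confidence; that
    weighting is omitted here for brevity.
    """
    corrected = list(labels)
    for members in clusters:
        majority = Counter(labels[i] for i in members).most_common(1)[0][0]
        for i in members:
            corrected[i] = majority
    return corrected
```

Because point-level noise is spatially scattered while cluster membership follows geometry, a minority of mislabeled points inside a coherent cluster gets overwritten by the dominant (likely correct) label.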
Label noise is a significant obstacle in deep learning model training. It can have a considerable impact on the performance of image classification models, particularly deep neural networks, which are especially susceptible because they have a strong propensity to memorise noisy labels. In this paper, we have examined the fundamental concept underlying related label noise approaches. A transition matrix estimator has been created, and its effectiveness against the actual transition matrix has been demonstrated. In addition, we examined the label noise robustness of two convolutional neural network classifiers with LeNet and AlexNet designs. The two FashionMNIST datasets have revealed the robustness of both models. We were unable to conclusively demonstrate the influence of transition matrix noise correction on robustness improvements, as time and computing resource constraints prevented us from properly tuning the complex convolutional neural network model. Additional effort is needed in future research to fine-tune the neural network model and explore the precision of the estimated transition matrix.
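The role of the transition matrix can be made concrete with the standard forward-correction recipe under symmetric noise. This sketch assumes a known transition matrix rather than the paper's estimator, and the function names are illustrative:

```python
import math

def symmetric_transition_matrix(n_classes, noise_rate):
    """T[i][j] = P(noisy label j | clean label i) under symmetric noise:
    the true label survives with probability 1 - noise_rate and otherwise
    flips uniformly to one of the other classes."""
    off = noise_rate / (n_classes - 1)
    return [[1.0 - noise_rate if i == j else off for j in range(n_classes)]
            for i in range(n_classes)]

def forward_corrected_nll(clean_probs, T, noisy_label):
    """Forward correction: push the model's clean-label probabilities
    through T, then take the negative log-likelihood of the observed
    (possibly noisy) label."""
    n = len(clean_probs)
    noisy_probs = [sum(T[i][j] * clean_probs[i] for i in range(n))
                   for j in range(n)]
    return -math.log(max(noisy_probs[noisy_label], 1e-12))
```

Training against this corrected loss makes the minimizer of the noisy-label risk coincide with the clean-label classifier, provided the transition matrix is accurate, which is why the quality of the estimator matters.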
Named entity recognition (NER) is an important task in natural language processing. However, traditional supervised NER requires large-scale annotated datasets. Distant supervision has been proposed to alleviate the huge demand for datasets, but datasets constructed in this way are extremely noisy and suffer from a severe unlabeled-entity problem. The cross-entropy (CE) loss function is highly sensitive to unlabeled data, leading to severe performance degradation. As an alternative, we propose a new loss function called NRCES to cope with this problem. A sigmoid term is used to mitigate the negative impact of noise. In addition, we balance the convergence and noise tolerance of the model according to the samples and the training process. Experiments on synthetic and real-world datasets demonstrate that our approach shows strong robustness in the case of severe unlabeled-entity problems, achieving new state-of-the-art results on real-world datasets.
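The abstract does not give the exact NRCES formulation, so the following is only a sketch of the general idea it describes: tempering cross-entropy with a sigmoid term so that tokens the model strongly disagrees with (likely unlabeled entities) contribute little loss. The weighting rule and the steepness constant `k` are illustrative assumptions, not the authors' loss.

```python
import math

def noise_damped_ce(p_true, k=10.0):
    """Cross-entropy scaled by a sigmoid weight that shrinks toward 0
    when the model assigns low probability to the annotated label
    (treating strong disagreement as a sign of label noise)."""
    ce = -math.log(max(p_true, 1e-12))
    weight = 1.0 / (1.0 + math.exp(-k * (p_true - 0.5)))
    return weight * ce
```

Compared with plain CE, which grows without bound as `p_true` falls, the damped loss stays small on samples the model confidently rejects, so unlabeled entities annotated as non-entities stop dominating the gradient.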
Large-scale supervised datasets are crucial to train convolutional neural networks (CNNs) for various computer vision problems. However, obtaining a massive amount of well-labeled data is usually very expensive and time consuming. In this paper, we introduce a general framework to train CNNs with only a limited number of clean labels and millions of easily obtained noisy labels. We model the relationships between images, class labels and label noises with a probabilistic graphical model and further integrate it into an end-to-end deep learning system. To demonstrate the effectiveness of our approach, we collect a large-scale real-world clothing classification dataset with both noisy and clean labels. Experiments on this dataset indicate that our approach can better correct the noisy labels and improves the performance of trained CNNs.
While mislabeled or ambiguously-labeled samples in the training set could negatively affect the performance of deep models, diagnosing the dataset and identifying mislabeled samples helps to improve the generalization power. Training dynamics, i.e., the traces left by iterations of optimization algorithms, have recently been proven effective for localizing mislabeled samples with hand-crafted features. In this paper, beyond manually designed features, we introduce a novel learning-based solution, leveraging a noise detector, instantiated by an LSTM network, which learns to predict whether a sample was mislabeled using the raw training dynamics as input. Specifically, the proposed method trains the noise detector in a supervised manner using a dataset with synthesized label noise and can adapt to various datasets (either naturally or synthetically label-noised) without retraining. We conduct extensive experiments to evaluate the proposed method. We train the noise detector based on the synthesized label-noised CIFAR dataset and test the noise detector on Tiny ImageNet, CUB-200, Caltech-256, WebVision and Clothing1M. Results show that the proposed method precisely detects mislabeled samples on various datasets without further adaptation, and outperforms state-of-the-art methods. Besides, more experiments demonstrate that the mislabel identification can guide a label correction, namely data debugging, providing orthogonal improvements over algorithm-centric state-of-the-art techniques from the data aspect.
Deep neural networks (DNNs) trained on large-scale datasets have exhibited significant performance in image classification. Many large-scale datasets are collected from websites, however they tend to contain inaccurate labels that are termed as noisy labels. Training on such noisy labeled datasets causes performance degradation because DNNs easily overfit to noisy labels. To overcome this problem, we propose a joint optimization framework of learning DNN parameters and estimating true labels. Our framework can correct labels during training by alternating update of network parameters and labels. We conduct experiments on the noisy CIFAR-10 datasets and the Clothing1M dataset. The results indicate that our approach significantly outperforms other state-of-the-art methods.
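The label half of the alternating update can be sketched as moving the stored soft labels toward the current model predictions. The momentum form below is an illustrative assumption (the framework also considers hard-label updates); the network-parameter half of the alternation is ordinary gradient training against these updated targets.

```python
def update_soft_labels(soft_labels, predictions, momentum=0.9):
    """One label-update step of the alternating optimization:
    blend the stored soft labels with the current model predictions.
    A convex combination keeps each row a valid distribution."""
    return [[momentum * y + (1 - momentum) * p for y, p in zip(y_row, p_row)]
            for y_row, p_row in zip(soft_labels, predictions)]
```

Over training, labels the model consistently contradicts drift toward the predicted class, which is the label-correction behavior the framework relies on.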