Artificial intelligence (AI)-assisted methods have received much attention in risk-sensitive areas such as disease diagnosis. Unlike classifying disease types, classifying a medical image as a benign or malignant tumor is a fine-grained task. However, most studies focus only on improving diagnostic accuracy and neglect the evaluation of model reliability, which limits clinical application. For clinical practice, calibration in the low-data regime poses a major challenge because of over-parameterized models and inherent noise. In particular, we find that modeling data-dependent uncertainty is more conducive to confidence calibration. Compared with test-time augmentation (TTA), we propose a modified bootstrapped loss (BS loss) function with a mixup data augmentation strategy that better calibrates predictive uncertainty and captures data distribution shift without extra inference time. Our experiments show that the BS loss with mixup (BSM) model can halve the expected calibration error (ECE) compared with standard data augmentation, deep ensembles, and MC dropout. Under the BSM model, the correlation between uncertainty and similarity reaches -0.4428. Moreover, the BSM model can perceive the semantic distance of out-of-domain data, indicating high potential for real-world clinical practice.
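For reference, a minimal sketch of the expected calibration error (ECE) metric used above, assuming numpy arrays of per-sample confidences, predicted labels, and true labels; the function name and the 15-bin default are illustrative choices, not taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE: bin samples by confidence, then take the weighted average of the
    absolute gap between per-bin accuracy and per-bin mean confidence."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = (predictions[mask] == labels[mask]).mean()
            conf = confidences[mask].mean()
            ece += mask.mean() * abs(acc - conf)
    return ece
```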
Deep learning (DL) has shown great potential in digital pathology applications. The robustness of diagnostic DL solutions is essential for safe clinical deployment. In this work, we evaluate whether adding uncertainty estimates to DL predictions in digital pathology can increase their value for clinical applications, either by improving general predictive performance or by detecting mispredictions. We compare the effectiveness of model-integrated methods (MC dropout and deep ensembles) with a model-agnostic approach (test-time augmentation, TTA). In addition, four uncertainty metrics are compared. Our experiments focus on two domain-shift scenarios: a shift to a different medical center and an underrepresented subtype of cancer. Our results show that uncertainty estimates can add some reliability and reduce the sensitivity to the choice of classification threshold. While advanced metrics and deep ensembles perform best in our comparison, the added value over simpler metrics and TTA is small. Importantly, the benefit of all evaluated uncertainty estimation methods diminishes under domain shift.
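A minimal sketch of the model-agnostic test-time augmentation (TTA) baseline compared above, assuming a PyTorch classifier and a (C, H, W) image tensor; the flip/rotation augmentation set and the number of augmentations are assumptions, not the paper's exact protocol.

```python
import torch

@torch.no_grad()
def tta_predict(model, image, n_aug=16):
    """Average softmax outputs over randomly flipped/rotated copies of the input.
    The spread across augmentations can serve as an uncertainty signal."""
    model.eval()
    probs = []
    for _ in range(n_aug):
        x = image
        if torch.rand(1).item() < 0.5:
            x = torch.flip(x, dims=[-1])          # random horizontal flip
        k = int(torch.randint(0, 4, (1,)))
        x = torch.rot90(x, k, dims=[-2, -1])      # random 90-degree rotation
        probs.append(torch.softmax(model(x.unsqueeze(0)), dim=-1))
    probs = torch.cat(probs, dim=0)               # (n_aug, n_classes)
    return probs.mean(dim=0), probs.std(dim=0)    # mean prediction, per-class spread
```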
Objective: Convolutional neural networks (CNNs) have demonstrated promise in automated cardiac magnetic resonance image segmentation. However, when using CNNs in a large real-world dataset, it is important to quantify segmentation uncertainty and identify segmentations which could be problematic. In this work, we performed a systematic study of Bayesian and non-Bayesian methods for estimating uncertainty in segmentation neural networks. Methods: We evaluated Bayes by Backprop, Monte Carlo Dropout, Deep Ensembles, and Stochastic Segmentation Networks in terms of segmentation accuracy, probability calibration, uncertainty on out-of-distribution images, and segmentation quality control. Results: We observed that Deep Ensembles outperformed the other methods except for images with heavy noise and blurring distortions. We showed that Bayes by Backprop is more robust to noise distortions while Stochastic Segmentation Networks are more resistant to blurring distortions. For segmentation quality control, we showed that segmentation uncertainty is correlated with segmentation accuracy for all the methods. With the incorporation of uncertainty estimates, we were able to reduce the percentage of poor segmentation to 5% by flagging 31--48% of the most uncertain segmentations for manual review, substantially lower than random review without using neural network uncertainty (reviewing 75--78% of all images). Conclusion: This work provides a comprehensive evaluation of uncertainty estimation methods and showed that Deep Ensembles outperformed other methods in most cases. Significance: Neural network uncertainty measures can help identify potentially inaccurate segmentations and alert users for manual review.
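The quality-control step described in the Results reduces to a simple thresholding rule; the sketch below is an assumed implementation in which `uncertainty_scores` is any per-image uncertainty measure (for example, mean predictive entropy) and the review fraction is a tunable budget.

```python
import numpy as np

def flag_for_review(uncertainty_scores, review_fraction=0.4):
    """Flag the most uncertain segmentations for manual review.
    Returns a boolean mask: True means 'send to a human reviewer'."""
    scores = np.asarray(uncertainty_scores)
    n_review = int(np.ceil(review_fraction * len(scores)))
    order = np.argsort(scores)[::-1]              # most uncertain first
    flagged = np.zeros(len(scores), dtype=bool)
    flagged[order[:n_review]] = True
    return flagged
```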
As the size of the datasets used in deep learning tasks increases, the noisy label problem, i.e., making deep learning robust to incorrectly labeled data, has become increasingly important. In this paper, we propose a method for learning from noisy label data that combines label noise selection with a test-time augmentation (TTA) cross-entropy and classifier learning with the NoiseMix method. For label noise selection, we propose the TTA cross-entropy, which measures the cross-entropy of predictions on test-time augmented training data. For classifier learning, we propose the NoiseMix method, based on the MixUp and BalancedMix methods, which mixes samples from the noisy and the clean label data. In experiments on the ISIC-18 public skin lesion diagnosis dataset, the proposed TTA cross-entropy outperformed the conventional cross-entropy and the TTA uncertainty in detecting label-noise data during label noise selection. Moreover, the proposed NoiseMix not only outperformed state-of-the-art methods in classification performance but also showed the strongest robustness to label noise in classifier learning.
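A minimal sketch of the TTA cross-entropy idea for label-noise selection, under the assumption that `augment` is some stochastic test-time transform, `image` is a (C, H, W) tensor, and `label` is the (possibly noisy) training label as a scalar long tensor; the aggregation over copies and the number of augmentations are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def tta_cross_entropy(model, image, label, augment, n_aug=8):
    """Cross-entropy of the assigned label against predictions on test-time
    augmented copies of the sample. A high value suggests the label disagrees
    with the model under perturbation, i.e. a likely noisy label."""
    model.eval()
    losses = []
    for _ in range(n_aug):
        logits = model(augment(image).unsqueeze(0))
        losses.append(F.cross_entropy(logits, label.view(1)))
    return torch.stack(losses).mean()
```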
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
Uncertainty estimation for a trained deep learning network provides important information for improving learning efficiency or evaluating the reliability of the network's predictions. In this paper, we propose a method for uncertainty estimation in multi-class image classification using test-time mixup augmentation (TTMA). To improve the ability of the existing aleatoric uncertainty to discriminate between correct and incorrect predictions, we propose a data uncertainty obtained by applying mixup augmentation to the test data and measuring the entropy of the histogram of predicted labels. In addition to the data uncertainty, we propose a class-specific uncertainty that presents the aleatoric uncertainty associated with a specific class, which can provide information on the class confusion and class similarity of the trained network. The proposed methods are validated on two public datasets, the ISIC-18 skin lesion diagnosis dataset and the CIFAR-100 real-world image classification dataset. The experiments demonstrate that (1) the proposed data uncertainty separates correct and incorrect predictions better than existing uncertainty measures thanks to the mixup perturbation, and (2) the proposed class-specific uncertainty provides information on the class confusion and class similarity of the trained network for both datasets.
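A sketch of the TTMA data uncertainty described above: mix the test image with random training images, record the predicted labels, and take the entropy of their histogram. The mixing-coefficient distribution, the pool of mixing partners, and the variable names are assumptions for illustration.

```python
import torch

@torch.no_grad()
def ttma_data_uncertainty(model, image, mix_pool, n_classes, n_mix=32, alpha=0.3):
    """Entropy of the histogram of labels predicted for mixup-perturbed copies
    of a test image. `mix_pool` is assumed to hold training images (N, C, H, W)."""
    model.eval()
    counts = torch.zeros(n_classes)
    beta = torch.distributions.Beta(alpha, alpha)
    for _ in range(n_mix):
        lam = beta.sample()
        partner = mix_pool[int(torch.randint(len(mix_pool), (1,)))]
        mixed = lam * image + (1 - lam) * partner
        pred = model(mixed.unsqueeze(0)).argmax(dim=-1)
        counts[pred] += 1
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * p.log()).sum()      # entropy of the predicted-label histogram
```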
Deep learning methods are known to suffer from calibration issues: they typically produce over-confident estimates. These problems are exacerbated in the low-data regime. Although the calibration of probabilistic models is well studied, calibrating extremely over-parameterized models in the low-data regime presents unique challenges. We show that deep ensembles do not necessarily lead to improved calibration properties. In fact, we show that standard ensembling methods, when combined with modern techniques such as mixup regularization, can lead to less well-calibrated models. This paper examines the interplay between three of the simplest and most commonly used approaches for leveraging deep learning when data is scarce: data augmentation, ensembling, and post-processing calibration methods. While standard ensembling techniques certainly help to boost accuracy, we demonstrate that the calibration of deep ensembles relies on subtle trade-offs. We also find that calibration methods such as temperature scaling need to be slightly adjusted when used with deep ensembles and, roughly speaking, need to be executed after the averaging process. Our simulations indicate that, compared with standard deep ensembles in the low-data regime, this simple strategy can halve the expected calibration error (ECE) on a range of benchmark classification problems.
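The "calibrate after averaging" recipe can be sketched as follows: pool the member probabilities first, then fit a single temperature on a validation set by minimising the negative log-likelihood of the pooled predictions. Function names, the optimiser, and the step count are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def pool_probs(member_logits):
    """Average the ensemble members' probabilities.
    member_logits: (n_members, n_samples, n_classes)."""
    return torch.softmax(member_logits, dim=-1).mean(dim=0)

def fit_temperature_after_pooling(pooled_val_probs, val_labels, lr=0.05, steps=300):
    """Fit one temperature on held-out data against the pooled predictions,
    i.e. temperature scaling is applied after the averaging step."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_t], lr=lr)
    log_p = pooled_val_probs.clamp_min(1e-12).log()
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(log_p / log_t.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()
```

At test time the same temperature divides the log of the pooled probabilities before the final softmax.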
Pancreatic cancer has one of the worst prognoses among all cancers, as it is often diagnosed only after the cancer has progressed to a late stage. The current manual histological grading used for diagnosing pancreatic adenocarcinoma is time-consuming and often leads to misdiagnosis. In digital pathology, AI-based cancer grading must be highly accurate both in its predictions and in its uncertainty quantification to improve reliability and interpretability, which is essential for gaining clinicians' trust in the technology. We present a Bayesian convolutional neural network for automated pancreatic cancer grading of MGG-stained images that estimates the uncertainty in model predictions. We show that the estimated uncertainty correlates with prediction error. Specifically, it is useful for setting acceptance thresholds using a metric that weighs the classification accuracy-rejection trade-off against misclassification cost, which can be controlled by hyperparameters and can be used in clinical settings.
Positive-unlabeled (PU) learning aims to learn a binary classifier from only positive and unlabeled training data. Recent approaches have addressed this problem via cost-sensitive learning by developing unbiased loss functions, and their performance was later improved with iterative pseudo-labeling solutions. However, such two-step procedures are vulnerable to incorrectly estimated pseudo-labels, as errors are propagated in later iterations when a new model is trained on erroneous predictions. To prevent this confirmation bias, we propose PUUPL, a novel loss-agnostic training procedure for PU learning that incorporates epistemic uncertainty into pseudo-label selection. By using an ensemble of neural networks and assigning pseudo-labels based on low-uncertainty predictions, we show that PUUPL improves the reliability of pseudo-labels, increases the predictive performance of our method, and leads to new state-of-the-art results in self-training for PU learning. In extensive experiments, we show the effectiveness of our method across different datasets, modalities, and learning tasks, as well as improved calibration and robustness to prior misspecification, biased positive data, and imbalanced datasets.
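A minimal sketch of the uncertainty-aware pseudo-label selection step: only the unlabeled examples on which the ensemble members disagree least receive pseudo-labels. The disagreement measure (standard deviation across members) and the selection size `k` are simplifying assumptions.

```python
import numpy as np

def select_pseudo_labels(ensemble_probs, k=100):
    """ensemble_probs: (n_members, n_unlabeled) positive-class probabilities.
    Returns the indices of the k lowest-uncertainty examples and their pseudo-labels."""
    mean_p = ensemble_probs.mean(axis=0)
    disagreement = ensemble_probs.std(axis=0)   # proxy for epistemic uncertainty
    chosen = np.argsort(disagreement)[:k]       # least-uncertain examples first
    pseudo_labels = (mean_p[chosen] >= 0.5).astype(int)
    return chosen, pseudo_labels
```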
Although deep learning prediction models have been successful at discriminating between different classes, they often suffer from poor calibration across challenging domains, including healthcare. Moreover, long-tailed distributions pose great challenges for deep learning classification problems, including clinical disease prediction. Several methods have recently been proposed to calibrate deep predictions in computer vision, but there has been no exploration of how representative models perform in different challenging settings. In this paper, we bridge confidence calibration from computer vision to medical imaging with a comparative study of four high-impact calibration models. Our study is conducted in different settings (natural image classification and lung cancer risk estimation), including balanced versus imbalanced training sets and computer vision versus medical imaging. Our results support key findings: (1) we obtain new conclusions not previously studied under different learning settings, for example, that combining two calibration models that each mitigate overconfident predictions can lead to underconfident predictions, and that simpler calibration models from the computer vision domain tend to generalize better to medical imaging; (2) we highlight the gap between general computer vision tasks and medical imaging prediction, for example, calibration methods that are ideal for general computer vision tasks may in fact damage the calibration of medical imaging predictions; (3) we also reinforce previous conclusions from the natural image classification setting. We believe the merits of this study can guide readers in choosing calibration models and in understanding the gap between the general computer vision and medical imaging domains.
Learning with label noise is a crucial topic for guaranteeing reliable performance of deep neural networks. Recent studies typically perform dynamic noise modeling with the model's output probabilities and loss values, and then separate clean and noisy samples. These methods have achieved remarkable success. However, unlike with cherry-picked data, existing methods often fail to perform well when facing imbalanced datasets, a common scenario in the real world. We thoroughly investigate this phenomenon and point out two major issues that hinder performance, namely inter-class loss distribution discrepancy and misleading predictions due to uncertainty. The first issue is that existing methods usually perform class-agnostic noise modeling. However, loss distributions show a significant discrepancy between classes under class imbalance, and class-agnostic noise modeling easily confuses noisy samples with samples from minority classes. The second issue refers to the model producing misleading predictions due to epistemic and aleatoric uncertainty, so existing methods that rely solely on output probabilities may fail to distinguish confident samples. Inspired by these observations, we propose an Uncertainty-aware Label Correction framework (ULC) to handle label noise on imbalanced datasets. First, we perform epistemic-uncertainty-aware, class-specific noise modeling to identify trustworthy clean samples and to refine or discard highly confident true or corrupted labels. Then, we introduce aleatoric uncertainty into the subsequent learning process to prevent noise accumulation in the label noise modeling process. We conduct experiments on several synthetic and real-world datasets. The results demonstrate the effectiveness of the proposed method, especially on imbalanced datasets.
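A sketch of the class-specific noise modeling idea: fit a separate two-component mixture to each class's per-sample losses so that minority classes are not confused with noisy samples. Using scikit-learn's Gaussian mixture (rather than the paper's exact model) is an assumption for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classwise_clean_probability(losses, labels, n_classes):
    """For each class, fit a two-component GMM to its loss values and return the
    per-sample posterior probability of the low-loss ('clean') component."""
    losses = np.asarray(losses, dtype=float)
    labels = np.asarray(labels)
    p_clean = np.zeros_like(losses)
    for c in range(n_classes):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:
            continue                                       # too few samples to fit a mixture
        gmm = GaussianMixture(n_components=2, reg_covar=1e-4).fit(losses[idx].reshape(-1, 1))
        clean_comp = int(np.argmin(gmm.means_.ravel()))    # component with the smaller mean loss
        p_clean[idx] = gmm.predict_proba(losses[idx].reshape(-1, 1))[:, clean_comp]
    return p_clean
```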
Quantifying the uncertainty of supervised learning models plays an important role in making more reliable predictions. Epistemic uncertainty, which usually arises from insufficient knowledge about the model, can be reduced by collecting more data or refining the learned model. Over the past few years, scholars have proposed many techniques for handling epistemic uncertainty, which can be roughly grouped into two categories, namely Bayesian and ensemble approaches. This paper provides a comprehensive review of epistemic uncertainty learning techniques in supervised learning over the last five years. We first decompose epistemic uncertainty into bias and variance terms. Then, a hierarchical categorization of epistemic uncertainty learning techniques together with their representative models is introduced. In addition, several applications such as computer vision (CV) and natural language processing (NLP) are presented, followed by a discussion of research gaps and possible future research directions.
Despite the success of convolutional neural network (CNN)-based classification models for histopathological images, quantifying their uncertainty is infeasible. Moreover, CNNs can suffer from overfitting when the data are biased. We show that a Bayesian-CNN can overcome these limitations by regularizing automatically and by quantifying uncertainty. We have developed a novel technique that utilizes the uncertainty provided by the Bayesian-CNN and significantly improves performance on a large fraction of the test data (about a 6% improvement in accuracy on roughly 77% of the test data). Furthermore, we provide a novel interpretation of the uncertainty by projecting the data into a low-dimensional space with a nonlinear dimensionality reduction technique. This dimensionality reduction enables interpretation of the test data through visualization and reveals the structure of the data in a low-dimensional feature space. We show that the Bayesian-CNN performs far better than the state-of-the-art transfer-learning CNN (TL-CNN), reducing false negatives and false positives by 11% and 7.7%, respectively. It achieves this performance with only 1.86 million parameters, compared with 134.33 million for the TL-CNN. In addition, we modify the Bayesian-CNN by introducing a stochastic adaptive activation function. The modified Bayesian-CNN performs slightly better than the Bayesian-CNN on all performance metrics and significantly reduces the numbers of false negatives and false positives (both reduced by 3%). We also show that these results are statistically significant by performing McNemar's statistical significance test. This work demonstrates the advantages of the Bayesian-CNN over the state of the art, and explains and utilizes the uncertainties of histopathological images. It should find applications in various medical image classification tasks.
Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active learning (AL) methods generally rely on being able to learn and update models from small amounts of data. Recent advances in deep learning, on the other hand, are notorious for their dependence on large amounts of data. Second, many AL acquisition functions rely on model uncertainty, yet deep learning methods rarely represent such model uncertainty. In this paper we combine recent advances in Bayesian deep learning into the active learning framework in a practical way. We develop an active learning framework for high dimensional data, a task which has been extremely challenging so far, with very sparse existing literature. Taking advantage of specialised models such as Bayesian convolutional neural networks, we demonstrate our active learning techniques with image data, obtaining a significant improvement on existing active learning approaches. We demonstrate this on both the MNIST dataset, as well as for skin cancer diagnosis from lesion images (ISIC2016 task).
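One acquisition function commonly used in this setting is BALD, the mutual information between predictions and model parameters, approximated here with MC dropout. This sketch assumes a PyTorch model with dropout layers and a data loader over the unlabeled pool; it is illustrative rather than the paper's exact code.

```python
import torch

@torch.no_grad()
def bald_scores(model, pool_loader, n_samples=20):
    """BALD acquisition: H[mean prediction] - mean H[prediction], estimated by
    keeping dropout active at inference time (MC dropout)."""
    model.train()   # keeps dropout stochastic; in practice only dropout layers should be toggled
    scores = []
    for x, _ in pool_loader:
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
        mean_p = probs.mean(dim=0)
        entropy_of_mean = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=-1)
        mean_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean(dim=0)
        scores.append(entropy_of_mean - mean_entropy)
    return torch.cat(scores)   # query the pool points with the highest scores
```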
Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an effective way to address this problem, as it provides a measure of confidence in the segmentation results. Current uncertainty estimation methods based on quantile regression, Bayesian neural networks, ensembles, and Monte Carlo dropout are limited by their high computational cost and inconsistency. To overcome these challenges, evidential deep learning (EDL) was developed in recent work, but it has mainly been used for natural image classification. In this paper, we propose a region-based EDL segmentation framework that can generate reliable uncertainty maps and robust segmentation results. We use evidence theory to interpret the output of a neural network as evidence values gathered from the input features. Following subjective logic, the evidence is parameterized as a Dirichlet distribution, and the predicted probabilities are treated as subjective opinions. To evaluate the performance of our model in segmentation and uncertainty estimation, we conduct quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrate the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, our proposed new framework retains the advantages of low computational cost and easy implementation, and shows potential for clinical application.
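The evidential parameterization described above can be sketched as follows: non-negative evidence from the network defines a Dirichlet distribution whose expected probabilities form the prediction and whose total strength gives a per-pixel (or per-sample) uncertainty mass, following subjective logic. The softplus evidence function is one common choice and is an assumption here.

```python
import torch
import torch.nn.functional as F

def evidential_outputs(logits):
    """Interpret network outputs as evidence for a Dirichlet distribution.
    Returns expected class probabilities and the subjective-logic uncertainty mass."""
    evidence = F.softplus(logits)                   # e_k >= 0
    alpha = evidence + 1.0                          # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)      # Dirichlet strength S
    prob = alpha / strength                         # expected probability E[p_k]
    n_classes = logits.shape[-1]
    uncertainty = n_classes / strength.squeeze(-1)  # u = K / S
    return prob, uncertainty
```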
Despite being robust to small amounts of label noise, convolutional neural networks trained with stochastic gradient methods have been shown to easily fit random labels. When there are a mixture of correct and mislabelled targets, networks tend to fit the former before the latter. This suggests using a suitable two-component mixture model as an unsupervised generative model of sample loss values during training to allow online estimation of the probability that a sample is mislabelled. Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss). We further adapt mixup augmentation to drive our approach a step further. Experiments on CIFAR-10/100 and TinyImageNet demonstrate a robustness to label noise that substantially outperforms recent state-of-the-art. Source code is available at https://git.io/fjsvE.
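A minimal sketch of the dynamic (hard) bootstrapping loss described above, where `w_clean` would be the per-sample probability of being correctly labelled, e.g. the posterior of the fitted beta mixture; the hard-label variant and the variable names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def dynamic_bootstrapping_loss(logits, labels, w_clean):
    """Blend the observed label with the network's own prediction, weighting each
    sample by its estimated probability of being clean (w_clean in [0, 1])."""
    ce_label = F.cross_entropy(logits, labels, reduction="none")
    pred = logits.argmax(dim=-1)                    # the network's own (hard) prediction
    ce_pred = F.cross_entropy(logits, pred, reduction="none")
    return (w_clean * ce_label + (1.0 - w_clean) * ce_pred).mean()
```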
The ability to estimate epistemic uncertainty is often crucial when deploying machine learning in the real world, but modern methods often produce overconfident, uncalibrated uncertainty predictions. A common approach to quantify epistemic uncertainty, usable across a wide class of prediction models, is to train a model ensemble. In a naive implementation, the ensemble approach has high computational cost and high memory demand. This is particularly challenging for modern deep learning, where even a single deep network is already demanding in terms of compute and memory, and it has given rise to a number of attempts to emulate the model ensemble without actually instantiating separate ensemble members. We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation (FiLM). That technique was originally developed for multi-task learning, with the aim of decoupling different tasks. We show that the idea can be extended to uncertainty quantification: by modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity, and consequently well-calibrated estimates of epistemic uncertainty, with low computational overhead in comparison. Empirically, FiLM-Ensemble outperforms other implicit ensemble methods and comes very close to the upper bound of an explicit ensemble of networks (sometimes even beating it), at a fraction of the memory cost.
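The core mechanism can be sketched as a FiLM layer whose per-channel scale and shift are indexed by the ensemble member, with everything else (backbone, training loop) shared; this is an illustrative sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FiLMEnsembleLayer(nn.Module):
    """Implicit ensemble via feature-wise linear modulation: each member owns its
    own per-channel scale (gamma) and shift (beta) applied to shared features."""
    def __init__(self, n_members, n_channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(n_members, n_channels))
        self.beta = nn.Parameter(torch.zeros(n_members, n_channels))

    def forward(self, x, member):
        # x: (batch, channels, H, W); member: index of the ensemble member to emulate
        g = self.gamma[member].view(1, -1, 1, 1)
        b = self.beta[member].view(1, -1, 1, 1)
        return g * x + b
```

At inference, the same input is passed once per member index and the softmax outputs are averaged, as with an explicit ensemble.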
Uncertainty estimation has been extensively studied in the recent literature and can usually be categorized into aleatoric uncertainty and epistemic uncertainty. Current aleatoric uncertainty estimation frameworks often neglect the fact that aleatoric uncertainty is an inherent property of the data and can only be estimated correctly with an unbiased oracle model. Since the oracle model is inaccessible in most cases, we propose a new sampling and selection strategy that approximates the oracle model at training time for aleatoric uncertainty estimation. Furthermore, we show that a trivial solution exists in the dual-head based heteroscedastic aleatoric uncertainty estimation framework and introduce a new uncertainty consistency loss to avoid it. For epistemic uncertainty estimation, we argue that the internal variable in a conditional latent variable model is another source of epistemic uncertainty for modeling the predictive distribution and exploring the limited knowledge about the hidden true model. We validate our observations on a dense prediction task, namely camouflaged object detection. Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
Motivation: Detecting prostate cancer during ultrasound-guided biopsy is challenging. The highly heterogeneous appearance of cancer, the presence of ultrasound artifacts, and noise all contribute to these difficulties. Recent advances in high-frequency ultrasound imaging (micro-ultrasound) have greatly improved the ability to image tissue at high resolution. Our aim is to investigate the development of robust deep learning models specifically for micro-ultrasound-guided prostate cancer biopsy. For the models to be clinically adopted, a key challenge is to design a solution that can confidently identify cancer while learning from biopsy samples whose coarse histopathology measurements introduce weak labels. Methods: We used a dataset of micro-ultrasound images acquired from 194 patients who underwent prostate biopsy. We trained a deep model using a co-teaching paradigm to handle noise in the labels, together with an evidential deep learning method for uncertainty estimation. We evaluated the performance of our model using the clinically relevant metric of accuracy versus confidence. Results: Our model achieves a well-calibrated estimation of predictive uncertainty, with an area under the curve of 88%. The combined use of co-teaching and evidential deep learning yields significantly better uncertainty estimation than either alone. We also provide a comparison with the state of the art in uncertainty estimation.
In this work, we use variational inference to quantify the degree of uncertainty in deep learning model predictions of radio galaxy classification. We show that the level of model posterior variance for individual test samples correlates with human uncertainty when labelling radio galaxies. We explore model performance and uncertainty calibration for a variety of different weight priors and show that a sparse prior produces more well-calibrated uncertainty estimates. Using the posterior distributions of individual weights, we show that we can prune 30% of the fully-connected layer weights without a significant loss of performance by removing the weights with the lowest signal-to-noise ratio (SNR). We demonstrate that a larger degree of pruning can be achieved using a Fisher-information-based ranking, but we note that both pruning methods affect the uncertainty calibration for Fanaroff-Riley type I and type II radio galaxies. Finally, we show that, like other work in this field, we experience a cold posterior effect, whereby the posterior must be down-weighted to achieve good predictive performance. We examine whether adapting the cost function to account for model misspecification can compensate for this effect, but find that it does not make a significant difference. We also investigate the effect of principled data augmentation and find that it improves upon the baseline but does not fully compensate for the observed effect. We interpret this as the cold posterior effect being due to the overly effective curation of our training sample leading to likelihood misspecification, and raise this as a potential issue for radio galaxy classification in the future.
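The SNR-based pruning step can be sketched as below for a mean-field variational posterior with per-weight means `mu` and standard deviations `sigma`; the 30% fraction follows the text, while the function name and masking scheme are illustrative assumptions.

```python
import torch

def prune_by_snr(mu, sigma, prune_fraction=0.3):
    """Zero out the weights whose posterior signal-to-noise ratio |mu| / sigma is lowest."""
    snr = mu.abs() / sigma
    k = max(1, int(prune_fraction * mu.numel()))
    threshold = snr.flatten().kthvalue(k).values    # k-th smallest SNR value
    mask = (snr > threshold).float()
    return mu * mask, mask
```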