Trusting the predictions of deep learning models in safety-critical settings such as the medical domain is still not a viable option. Disentangled uncertainty quantification in the field of medical imaging has received little attention. In this paper, we study disentangled uncertainties in image-to-image translation tasks in the medical domain. We compare multiple uncertainty quantification methods, namely Ensembles, Flipout, Dropout, and DropConnect, while using CycleGAN to convert T1-weighted brain MRI scans to T2-weighted brain MRI scans. We further evaluate uncertainty behavior in the presence of out-of-distribution data (brain CT and RGB face images), showing that epistemic uncertainty can be used to detect out-of-distribution inputs, which should increase the reliability of model outputs.
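To make the last point concrete, below is a minimal sketch of estimating per-pixel epistemic uncertainty by Monte Carlo sampling and using it to flag out-of-distribution inputs. The `model`, sample count, and threshold are illustrative assumptions (e.g., a generator with Dropout kept active at inference), not the paper's exact implementation.

```python
# Minimal sketch: epistemic uncertainty via Monte Carlo sampling.
# `model` is a hypothetical stochastic translator (e.g., a CycleGAN
# generator with Dropout layers left active at inference); it is an
# assumption for illustration, not the paper's exact setup.
import torch

def epistemic_uncertainty(model, x, n_samples=20):
    """Run n_samples stochastic forward passes and return the
    per-pixel mean prediction and variance across samples."""
    model.train()  # keep Dropout/DropConnect layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

def is_out_of_distribution(model, x, threshold):
    """Flag a scan as OOD when its average epistemic uncertainty
    exceeds a threshold tuned on in-distribution data."""
    _, var = epistemic_uncertainty(model, x)
    return var.mean().item() > threshold
```

High variance across stochastic passes indicates the model is unsure of its own weights for that input, which is why this signal rises on inputs far from the training distribution (e.g., CT or face images fed to an MRI translator).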
Most machine learning models operate under the assumption that the training, testing, and deployment data are independent and identically distributed (i.i.d.). This assumption rarely holds true in natural settings: the deployment data is usually subject to various types of distributional shift, and the degradation in model performance is proportional to the magnitude of this shift in the dataset's distribution. It is therefore necessary to evaluate a model's uncertainty and robustness to distributional shift in order to obtain a realistic estimate of its expected performance on real data. Existing methods for evaluating uncertainty and model robustness are lacking and often fail to paint the full picture. Moreover, most analysis so far has focused primarily on classification tasks. In this paper, we propose more insightful metrics for general regression tasks using the Shifts weather prediction dataset. We also provide an evaluation of baseline methods using these metrics.
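The abstract does not name its metrics, so the sketch below shows one common uncertainty-aware regression metric used in this kind of benchmark: the area under the error-retention curve, where predictions are retained in order of increasing uncertainty. A model whose uncertainty correlates with its error achieves a lower area. The function and example values are assumptions for illustration only.

```python
# Illustrative sketch of an error-retention metric for regression under
# distributional shift (an assumed example; the paper's own metrics may
# differ). Samples are kept from most to least confident, and the mean
# error of the retained set is averaged over all retention levels.
import numpy as np

def retention_auc(errors, uncertainties):
    """Mean per-sample error as a function of retention fraction,
    averaged over all retention fractions (lower is better)."""
    order = np.argsort(uncertainties)            # most confident first
    sorted_errors = errors[order]
    cum_mean_error = np.cumsum(sorted_errors) / np.arange(1, len(errors) + 1)
    return cum_mean_error.mean()

# Hypothetical squared errors and predicted uncertainties from a
# weather-prediction model:
errs = np.array([0.1, 0.5, 2.0, 0.2])
uncs = np.array([0.2, 0.6, 1.5, 0.1])
print(retention_auc(errs, uncs))  # ~0.33: uncertainty tracks error well
```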