The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
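To make the most common workaround concrete: the patch-based training reported by 69% of respondents samples sub-volumes small enough for GPU memory instead of whole images. A minimal sketch, with an illustrative patch size that the survey itself does not prescribe:

```python
import numpy as np

def extract_random_patch(volume, patch_size=(64, 64, 64), rng=None):
    """Sample one random sub-volume from a 3D image too large to fit in memory.

    `volume` is an array of shape (D, H, W); the patch size is an illustrative
    choice, not a value taken from the survey.
    """
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
    slices = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[slices]

# Example: a 512^3 volume is trained on randomly sampled 64^3 patches.
volume = np.zeros((512, 512, 512), dtype=np.float32)
assert extract_random_patch(volume).shape == (64, 64, 64)
```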
Unsupervised person re-identification (ReID) aims at learning discriminative identity features for person retrieval without any annotations. Recent advances accomplish this task by leveraging clustering-based pseudo labels, but these pseudo labels are inevitably noisy, which deteriorates model performance. In this paper, we propose a Neighbour Consistency guided Pseudo Label Refinement (NCPLR) framework, which can be regarded as a transductive form of label propagation under the assumption that the prediction of each example should be similar to its nearest neighbours'. Specifically, the refined label for each training instance is obtained from the original clustering result and a weighted ensemble of its neighbours' predictions, with weights determined by their similarities in the feature space. In addition, we treat clustering-based unsupervised person ReID as a label-noise learning problem and propose an explicit neighbour consistency regularization to reduce the model's susceptibility to over-fitting while improving training stability. The NCPLR method is simple yet effective and can be seamlessly integrated into existing clustering-based unsupervised algorithms. Extensive experimental results on five ReID datasets demonstrate the effectiveness of the proposed method, which outperforms state-of-the-art methods by a large margin.
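A minimal sketch of the refinement step as we read it from the abstract; the interpolation coefficient, the number of neighbours, and all names here are our assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def refine_pseudo_labels(features, cluster_labels, predictions, k=10, alpha=0.5):
    """Refine noisy clustering pseudo labels via neighbour consistency.

    features:       (N, D) L2-normalized instance features
    cluster_labels: (N,)   hard pseudo labels from clustering
    predictions:    (N, C) the model's softmax predictions
    Returns (N, C) soft refined labels.
    """
    one_hot = F.one_hot(cluster_labels, predictions.size(1)).float()

    # Similarity-weighted ensemble of the k nearest neighbours' predictions.
    sim = features @ features.t()
    sim.fill_diagonal_(-float("inf"))            # exclude each example itself
    topk_sim, topk_idx = sim.topk(k, dim=1)
    weights = F.softmax(topk_sim, dim=1)         # (N, k) similarity weights
    neighbour_pred = (weights.unsqueeze(-1) * predictions[topk_idx]).sum(dim=1)

    # Combine the original clustering result with the neighbour ensemble.
    return alpha * one_hot + (1 - alpha) * neighbour_pred
```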
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
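Since the models are openly released, they can be loaded through the Hugging Face transformers API. A hedged usage sketch; we use the smaller 560M-parameter sibling checkpoint so the example runs on modest hardware (swap in "bigscience/bloom" for the full 176B model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small sibling of the 176B model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Translate to French: I love machine learning.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```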
With the emergence of large pre-trained vision-language models such as CLIP, transferable representations can be adapted to a wide range of downstream tasks via prompt tuning. Prompt tuning probes for beneficial information for downstream tasks from the general knowledge stored in the pre-trained model's image and text encoders. A recently proposed method named Context Optimization (CoOp) introduces a set of learnable vectors as a text prompt on the language side; however, tuning the text prompt alone cannot affect the visual features computed by the image encoder, leading to sub-optimal solutions. In this paper, we propose a dual-modality prompt tuning paradigm that learns text and visual prompts for the text and image encoders simultaneously. Moreover, to make the visual prompt concentrate more on the target visual concept, we propose Class-Aware Visual Prompt Tuning (CAVPT), in which the visual prompt is generated dynamically by performing cross-attention between the language descriptions of template prompts and visual class token embeddings. Our method provides a new paradigm for tuning large pre-trained vision-language models, and extensive experimental results on 8 datasets demonstrate its effectiveness. Our code is available in the supplementary materials.
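For orientation, a minimal PyTorch sketch of CoOp-style learnable context on the text side only; tensor shapes and names are our assumptions, and the class-aware visual prompt (CAVPT) generated via cross-attention is omitted:

```python
import torch
import torch.nn as nn

class LearnableTextPrompt(nn.Module):
    """CoOp-style prompt: n_ctx learnable vectors prepended to class tokens."""

    def __init__(self, n_ctx=16, dim=512):
        super().__init__()
        # Shared learnable context, optimized while the CLIP encoders stay frozen.
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, dim))

    def forward(self, class_token_embeddings):
        # class_token_embeddings: (n_classes, n_tok, dim) embedded class names
        n_classes = class_token_embeddings.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # [learnable context | class tokens] is fed to the frozen text encoder.
        return torch.cat([ctx, class_token_embeddings], dim=1)
```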
Deep learning-based virtual staining was developed to introduce image contrast into label-free tissue sections, digitally matching histological staining, which is time-consuming, labor-intensive, and destructive to the tissue. Standard virtual staining requires high autofocusing precision during whole-slide imaging of label-free tissue, which consumes a significant portion of the total imaging time and can cause photodamage to the tissue. Here, we present a fast virtual staining framework that can stain defocused autofluorescence images of unlabeled tissue, achieving performance equivalent to virtual staining of in-focus label-free images while saving significant imaging time by lowering the microscope's autofocusing precision. The framework incorporates a virtual autofocusing neural network that digitally refocuses the defocused images, followed by a successive network that transforms the refocused images into virtually stained images. These cascaded networks form a collaborative inference scheme: the virtual staining model regularizes the virtual autofocusing network through a style loss during training. To demonstrate the efficacy of the framework, we trained and blindly tested these networks using human lung tissue. Using 4x fewer focus points with lower focusing precision, we successfully transformed coarsely-focused autofluorescence images into high-quality virtual H&E images, matching the standard virtual staining framework that uses finely-focused autofluorescence input images. Without sacrificing staining quality, this framework reduces the total image acquisition time needed for virtual staining of a label-free whole-slide image (WSI) by ~32%, together with a ~89% decrease in autofocusing time, and has the potential to eliminate the laborious and costly histochemical staining process in pathology.
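A schematic sketch of the collaborative training step described above; the network objects, loss functions, and weighting are placeholders rather than the paper's exact configuration:

```python
def training_step(refocus_net, stain_net, defocused, infocus_target,
                  stained_target, pixel_loss, style_loss, lam=1.0):
    """One step of the cascaded virtual staining scheme (schematic).

    refocus_net digitally refocuses the defocused autofluorescence image;
    stain_net maps the refocused image to a virtually stained (H&E-like) image.
    The style term lets the staining model regularize the autofocusing network.
    """
    refocused = refocus_net(defocused)
    stained = stain_net(refocused)

    loss = (pixel_loss(refocused, infocus_target)          # refocusing fidelity
            + pixel_loss(stained, stained_target)          # staining fidelity
            + lam * style_loss(stained, stained_target))   # style regularization
    loss.backward()
    return loss
```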
We propose a novel machine learning method for sampling from the high-dimensional probability distributions of lattice quantum field theories. Instead of the deep architectures used for this task so far, our proposal is based on a single neural ODE layer and incorporates the full symmetries of the problem. We test our model on the $\phi^4$ theory, showing that it systematically outperforms previously proposed flow-based methods in sampling efficiency, and the improvement is especially pronounced for larger lattices. Compared to the previous baseline model, we improve a key metric, the effective sample size, from 1% to 91% on a lattice of size $32 \times 32$. We also demonstrate that our model can successfully learn a continuous family of theories, and that the results of learning can be transferred to larger lattices. Such generalization capacities further highlight the potential advantages of machine learning methods compared to traditional MCMC-based approaches.
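The effective sample size quoted here is conventionally computed from the importance weights between the model and the target distribution; a small sketch under that assumption:

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS fraction from unnormalized importance log-weights.

    For flow-based samplers, log_w(x) = log p_target(x) - log q_model(x).
    Returns a value in (0, 1]; 1 means the model matches the target exactly.
    """
    log_w = log_weights - log_weights.max()     # stabilize the exponentials
    w = np.exp(log_w)
    return (w.sum() ** 2) / (len(w) * (w ** 2).sum())

# Nearly uniform weights give an ESS close to 1 (reported as ~100%).
print(effective_sample_size(np.random.normal(0.0, 0.01, size=10_000)))
```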
The capability to assess risk and make risk-aware decisions is essential for applying reinforcement learning to safety-critical robots such as drones. In this paper, we investigate a specific case in which a nano quadcopter robot learns to navigate a cluttered environment under partial observability. We present a distributional reinforcement learning framework to generate adaptive risk-tendency policies. Specifically, we propose to use the lower-tail conditional variance of the learned return distribution as an intrinsic uncertainty estimate, and to use exponentially weighted average forecasting (EWAF) to adapt the risk tendency according to the estimated uncertainty. In simulation and real-world empirical results, we show that (1) the most effective risk tendency varies across states, and (2) the agent with adaptive risk tendency outperforms both risk-neutral and risk-averse policy baselines.
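A small sketch of how a lower-tail risk measure can be read off a quantile-based distributional critic; the names and the quantile parameterization are our assumptions:

```python
import torch

def lower_tail_cvar(quantiles, alpha=0.25):
    """Conditional value-at-risk of the lower tail of a return distribution.

    quantiles: (N,) quantile estimates of the return from a distributional
               critic (QR-DQN style).
    alpha:     fraction of the lower tail to average; alpha=1 recovers the
               risk-neutral mean, small alpha gives risk-averse behaviour.
    """
    n_tail = max(1, int(alpha * quantiles.numel()))
    return quantiles.sort().values[:n_tail].mean()

print(lower_tail_cvar(torch.randn(32), alpha=0.25))  # toy quantile estimates
```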
Zero-shot object detection aims at incorporating class semantic vectors to realize the detection of both seen and unseen classes given an unconstrained test image. In this study, we reveal the core challenge in this research area: how to synthesize robust region features (for unseen objects) that are as intra-class diverse and inter-class separable as real samples, so that strong unseen-object detectors can be trained on them. To address this challenge, we build a novel zero-shot object detection framework that contains an intra-class semantic diverging component and an inter-class structure preserving component. The former is used to realize a one-to-one mapping that obtains diverse visual features from each class semantic vector, preventing real unseen objects from being misclassified as image background. The latter is used to avoid the synthesized features being too scattered, which would mix up the inter-class and foreground-background relationships. To demonstrate the effectiveness of the proposed approach, comprehensive experiments are conducted on the PASCAL VOC, COCO, and DIOR datasets. Notably, our approach achieves new state-of-the-art performance on PASCAL VOC and COCO, and it is the first study to carry out zero-shot object detection in remote sensing imagery.
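A schematic sketch of semantic-conditioned region-feature synthesis as we read the abstract; the architecture, dimensions, and names are placeholders:

```python
import torch
import torch.nn as nn

class RegionFeatureSynthesizer(nn.Module):
    """Generate region features for unseen classes from class semantic vectors.

    Pairing a noise vector with the semantic embedding lets one class vector
    map to many distinct visual features (intra-class diversity).
    """

    def __init__(self, sem_dim=300, noise_dim=64, feat_dim=1024):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim),
        )

    def forward(self, semantic_vec, n_samples=8):
        noise = torch.randn(n_samples, self.noise_dim, device=semantic_vec.device)
        sem = semantic_vec.unsqueeze(0).expand(n_samples, -1)
        return self.net(torch.cat([sem, noise], dim=1))

# Synthesized unseen-class features can then train the detector's classifier.
```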
We propose a continuous normalizing flow for sampling from the high-dimensional probability distributions of quantum field theories in physics. In contrast to the deep architectures used for this task so far, our proposal is based on a shallow design and incorporates the symmetries of the problem. We test our model on the $\phi^4$ theory, showing that it systematically outperforms a RealNVP baseline in sampling efficiency, with the difference between the two increasing for larger lattices. On the largest lattice we consider, of size $32 \times 32$, we improve a key metric, the effective sample size, from 1% to 66% w.r.t. the RealNVP baseline.
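For contrast with the shallow design, a minimal sketch of the affine coupling layer at the heart of the RealNVP baseline; layer widths are illustrative:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP coupling: transform half the variables with a scale and shift
    conditioned on the other, unchanged half."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.scale_shift = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.scale_shift(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)   # log-Jacobian of the change of variables
        return torch.cat([x1, y2], dim=1), log_det
```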
Kleene algebra with tests (KAT) is a foundational equational framework for reasoning about programs, which has found applications in program transformations, networking, and compiler optimizations, among many other areas. In his seminal work, Kozen proved that KAT subsumes propositional Hoare logic, showing that one can reason about the (partial) correctness of programs by means of the equational theory of KAT. In this work, we investigate the support that KAT provides for reasoning about incorrectness, as embodied by the incorrectness logic recently proposed by O'Hearn. We show that KAT cannot directly express incorrectness logic. The main reason for this limitation can be traced to the fact that KAT cannot explicitly express the notion of codomain, which is essential for expressing incorrectness triples. To address this issue, we study Kleene algebra with top and tests (TopKAT), an extension of KAT with a top element. We show that TopKAT is powerful enough to express a codomain operation and incorrectness triples, and to prove all the rules of incorrectness logic sound. This shows that one can reason about the incorrectness of programs by means of the equational theory of TopKAT.
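As we read the work, the contrast between the two kinds of triples can be summarized equationally; the TopKAT encoding below is our rendering and should be checked against the paper:

```latex
% Partial-correctness Hoare triple, already expressible in KAT (Kozen):
\{b\}\; p\; \{c\} \iff b\,p\,\overline{c} = 0
% Incorrectness triple, expressible in TopKAT via the top element \top,
% where \top q plays the role of the codomain of q:
[b]\; p\; [c] \iff \top\, b\, p \;\geq\; \top\, c
```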