The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
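To make the most commonly reported strategy concrete, here is a minimal sketch of random patch sampling from a volume too large to process at once. It is illustrative only: the function name, shapes, and patch size are our own choices, not taken from any surveyed solution.

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size patches from an oversized volume for training."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        # choose a valid start index along each axis, then slice the patch out
        starts = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        slices = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
        patches.append(volume[slices])
    return np.stack(patches)

# e.g. a 512^3 CT volume yields a mini-batch of 64^3 training patches
volume = np.zeros((512, 512, 512), dtype=np.float32)
batch = sample_patches(volume)   # shape (8, 64, 64, 64)
```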
In recent years, object detection has achieved large performance improvements, but detection results for small objects are still not satisfactory. To address this issue, this work proposes a strategy based on feature fusion and dilated convolution, employing dilated convolution to broaden the receptive field of feature maps at various scales. On the one hand, this improves the detection accuracy of larger objects; on the other hand, it provides more contextual information for small objects, which benefits their detection accuracy. The shallow semantic information of small objects is obtained by filtering out noise in the feature map, and more of the small objects' feature information is preserved by using a multi-scale feature fusion module and an attention mechanism. Fusing this shallow feature information with deep semantic information generates richer feature maps for small object detection. Experiments show that this method achieves higher accuracy than the traditional YOLOv3 network in detecting small and occluded objects. In addition, we achieve 32.8% mean average precision on the detection of small objects on the MS COCO 2017 test set. For 640×640 input, this method reaches 88.76% mAP on the PASCAL VOC 2012 dataset.
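As a sketch of the core idea, parallel dilated convolutions can enlarge the receptive field at a single scale before the branches are fused channel-wise. The layer sizes and dilation rates below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DilatedFusionBlock(nn.Module):
    """Parallel 3x3 convolutions with growing dilation; outputs are fused 1x1."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        # each branch sees a progressively wider context around every pixel
        out = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.fuse(out)

block = DilatedFusionBlock(channels=256)
y = block(torch.randn(1, 256, 40, 40))  # same spatial size, wider receptive field
```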
Extensive empirical evidence demonstrates that conditional generative models are easier to train and perform better than unconditional ones by exploiting the labels of data; the same holds for score-based diffusion models. In this paper, we analyze this phenomenon formally and identify that the key to conditional learning is partitioning the data properly. Inspired by the analyses, we propose self-conditioned diffusion models (SCDM), which are trained conditioned on indices produced by the k-means algorithm run on features extracted by a model pre-trained in a self-supervised manner. SCDM significantly improves the unconditional model across various datasets and achieves a record-breaking FID of 3.94 on ImageNet 64x64 without labels. Moreover, SCDM achieves a slightly better FID than the corresponding conditional model on CIFAR-10.
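A minimal sketch of the conditioning recipe described here: cluster self-supervised features with k-means and use the cluster indices in place of class labels. The helper name, the feature extractor, and the value of k are placeholders of our own, not SCDM's actual interface.

```python
import torch
from sklearn.cluster import KMeans

def pseudo_labels(feature_extractor, images, k=100):
    """Cluster self-supervised features; cluster indices act as pseudo-labels."""
    with torch.no_grad():
        feats = feature_extractor(images)  # (N, D) features from a pre-trained model
    kmeans = KMeans(n_clusters=k, n_init=10).fit(feats.cpu().numpy())
    return torch.as_tensor(kmeans.labels_, dtype=torch.long)

# labels = pseudo_labels(ssl_model, train_images)
# the diffusion model is then trained exactly like a class-conditional one,
# with `labels` standing in for the (unavailable) ground-truth classes
```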
The goal of 3D pose transfer is to transfer the pose from a source mesh to a target mesh while preserving the identity information (e.g., face, body shape) of the target mesh. Deep learning-based methods have improved the efficiency and performance of 3D pose transfer. However, most of them are trained under ground-truth supervision, whose availability is limited in real-world scenarios. In this work, we present X-DualNet, a simple yet effective approach that enables unsupervised 3D pose transfer. In X-DualNet, we introduce a generator $G$ that contains correspondence learning and pose transfer modules to achieve 3D pose transfer. We learn the shape correspondence by solving an optimal transport problem without any key point annotations, and generate high-quality meshes with our elastic instance normalization (ElaIN) in the pose transfer module. With $G$ as the basic component, we propose a cross-consistency learning scheme and a dual reconstruction objective to learn pose transfer without supervision. In addition, we adopt an as-rigid-as-possible deformer in the training process to fine-tune the body shape of the generated results. Extensive experiments on human and animal data demonstrate that our framework achieves performance comparable to state-of-the-art supervised approaches.
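For intuition about the correspondence step, here is a generic entropy-regularized optimal transport (Sinkhorn) sketch that turns pairwise feature distances between two meshes' vertices into soft assignments. This is a textbook stand-in under uniform-marginal assumptions, not X-DualNet's exact formulation.

```python
import torch

def sinkhorn_correspondence(src, tgt, eps=0.05, iters=50):
    """Soft vertex correspondence from (N, D) source to (M, D) target features."""
    cost = torch.cdist(src, tgt)            # (N, M) pairwise feature distances
    K = torch.exp(-cost / eps)              # Gibbs kernel
    u = torch.ones(src.shape[0], device=src.device)
    v = torch.ones(tgt.shape[0], device=tgt.device)
    for _ in range(iters):                  # alternating marginal scaling
        u = 1.0 / (K @ v)
        v = 1.0 / (K.T @ u)
    P = torch.diag(u) @ K @ torch.diag(v)   # transport plan ~ correspondence weights
    return P / P.sum(dim=1, keepdim=True)   # row-normalize to soft assignments

# warped = sinkhorn_correspondence(src_feats, tgt_feats) @ tgt_vertices
```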
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19. However, developing such AI systems still faces several challenges. 1) Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints. 2) Existing 3D CT segmentation methods focus on single-scale representations and do not provide multiple receptive field sizes on the 3D volume. 3) The sudden outbreak of COVID-19 makes it hard to annotate sufficient CT volumes for training deep models. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multi-scale information along different dimensions of the input feature maps and impose supervision on multiple predictions from different CNN layers. Second, we use this MDA-CNN as the basic network of a novel dual multi-scale mean teacher network (DM$^2$T-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploiting multi-scale information. Our DM$^2$T-Net encourages the multiple predictions at different CNN layers of the student and teacher networks to be consistent, yielding a multi-scale consistency loss on unlabeled data, which is then added to the supervised loss on the labeled data from the multiple predictions of MDA-CNN. Third, we collect two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
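The mean-teacher machinery underlying this scheme follows a standard recipe: the teacher is an exponential moving average of the student, and a consistency loss ties their predictions together. The sketch below shows that generic recipe extended to predictions at several decoder depths; the loss choice (MSE) and update rate are assumptions, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track an exponential moving average of the student's."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def multiscale_consistency(student_preds, teacher_preds):
    """Consistency loss summed over predictions taken at several layers;
    on unlabeled volumes this is the only training signal."""
    return sum(F.mse_loss(s, t.detach())
               for s, t in zip(student_preds, teacher_preds))
```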
The formalization of existing mathematical proofs is a notoriously difficult process. Despite decades of research on automation and proof assistants, writing formal proofs remains arduous and accessible only to a few experts. While previous studies on automating formalization have focused on powerful search algorithms, no attempts have been made to take advantage of available informal proofs. In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches and uses the sketches to guide an automated prover by directing its search toward easier sub-problems. We investigate two relevant setups where the informal proofs are either written by humans or generated by a language model. Our experiments and ablation studies show that large language models are able to produce well-structured formal sketches that follow the same reasoning steps as the informal proofs. Guiding an automated prover with these sketches enhances its performance from 20.9% to 39.3% on a collection of mathematical competition problems.
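A high-level sketch of the three-stage pipeline as we read it, with hypothetical interfaces: `llm.complete` stands in for a few-shot language model and `prover.close_gap` for an automated theorem prover; neither is DSP's actual API.

```python
def extract_gaps(sketch):
    """Hypothetical helper: return the intermediate statements the sketch
    leaves open (e.g., lines marked `sorry` in an Isabelle-style sketch)."""
    return [line for line in sketch.splitlines() if "sorry" in line]

def draft_sketch_prove(problem, informal_proof, llm, prover):
    # 1. Draft: an informal proof, written by a human or generated by the LLM
    draft = informal_proof or llm.complete(f"Prove informally: {problem}")
    # 2. Sketch: map the draft to a formal proof skeleton with open gaps
    sketch = llm.complete(f"Translate to a formal sketch with holes:\n{draft}")
    # 3. Prove: let the automated prover discharge each easier sub-problem
    return [prover.close_gap(gap) for gap in extract_gaps(sketch)]
```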
Personalized natural language explainable recommendation plays a key role in justifying why a recommendation might match a user's interests. Existing models usually control the generation process through soft constraints (e.g., aspect planning). While promising, these methods struggle to generate specific information correctly, which prevents the generated explanations from being informative and diverse. In this paper, we propose UCEpic, an explanation generation model that unifies aspect planning and lexical constraints for controllable, personalized generation. Specifically, we first pre-train a non-personalized text generator via the proposed robust insertion process, so that the model is able to generate sentences containing lexical constraints. Then, we demonstrate how to incorporate aspect planning and personalized references into the insertion process to obtain personalized explanations. Compared with previous work controlled by soft constraints, UCEpic incorporates specific information from keyphrases and thus largely improves the diversity and informativeness of the generated explanations. Extensive experiments on RateBeer and Yelp show that UCEpic can generate high-quality and diverse explanations for recommendations.
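To illustrate insertion-based generation with hard lexical constraints, here is a minimal decoding loop: start from the constraint tokens and repeatedly insert words around them until the model stops. `model.insert_step` is a hypothetical interface, not UCEpic's actual API.

```python
def insertion_decode(model, keyphrases, max_rounds=10):
    """Grow a sentence outward from lexical constraints, which stay verbatim."""
    tokens = list(keyphrases)
    for _ in range(max_rounds):
        insertions = model.insert_step(tokens)  # {position: new token}, or {} to stop
        if not insertions:
            break
        for pos in sorted(insertions, reverse=True):  # right-to-left keeps indices valid
            tokens.insert(pos, insertions[pos])
    return " ".join(tokens)

# e.g. insertion_decode(generator, ["hoppy", "citrus finish"])
```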
Recently, deep learning methods have achieved state-of-the-art performance in many medical image segmentation tasks. Many of them are based on convolutional neural networks (CNNs). For such methods, the encoder is the key part that extracts global and local information from the input image; the extracted features are then passed to the decoder to predict the segmentation. In contrast, several recent works have shown the superior performance of transformers, which can better model long-range spatial dependencies and capture low-level details. However, transformers perform poorly as the sole encoder on certain tasks where they cannot effectively replace convolution-based encoders. In this paper, we propose a model with double encoders for 3D biomedical image segmentation. Our model is a U-shaped CNN augmented with an independent transformer encoder. We fuse the information from the convolutional encoder and the transformer and pass it to the decoder to obtain the results. We evaluate our method on three public datasets from three different challenges: BTCV, MoDA, and Decathlon. Compared with state-of-the-art models with and without transformers on each task, our proposed method achieves higher Dice scores throughout.
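A structural sketch of the double-encoder idea: both branches encode the same volume and their features are fused before decoding. All sub-modules are simplified placeholders passed in by the caller, and the 1x1x1-convolution fusion is our assumption, not necessarily the paper's fusion scheme.

```python
import torch
import torch.nn as nn

class DualEncoderSeg(nn.Module):
    """CNN branch for local detail + transformer branch for long-range context."""
    def __init__(self, cnn_encoder, transformer_encoder, decoder, dim):
        super().__init__()
        self.cnn = cnn_encoder
        self.transformer = transformer_encoder
        self.fuse = nn.Conv3d(2 * dim, dim, kernel_size=1)  # channel-wise fusion
        self.decoder = decoder

    def forward(self, x):
        f_cnn = self.cnn(x)          # (B, dim, D, H, W) local features
        f_tr = self.transformer(x)   # (B, dim, D, H, W) global features
        fused = self.fuse(torch.cat([f_cnn, f_tr], dim=1))
        return self.decoder(fused)   # segmentation prediction
```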
Compared with language models such as BERT, pre-trained knowledge-enhanced language representation models have proven more effective in knowledge base construction tasks (i.e., relation extraction). These knowledge-enhanced language models incorporate knowledge into pre-training to generate representations of entities or relations. However, existing methods typically represent each entity with a separate embedding. As a result, these methods struggle to represent out-of-vocabulary entities; a large number of parameters, on top of their underlying token model (i.e., the transformer), must be used; and the number of entities that can be handled is limited in practice due to memory constraints. Moreover, existing models still struggle to represent entities and relations simultaneously. To address these problems, we propose a new pre-training model that learns representations of entities and relations from token spans and span pairs in the text, respectively. By encoding spans efficiently with the span module, our model can represent both entities and their relations while requiring fewer parameters than existing models. We pre-trained our model with the knowledge graph extracted from Wikipedia and tested it on a broad range of supervised and unsupervised information extraction tasks. The results show that our model learns better representations of both entities and relations than baselines; in supervised settings, fine-tuning our model consistently outperforms RoBERTa and achieves competitive results on information extraction tasks.
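The contrast with per-entity embedding tables can be made concrete with a generic span-pooling sketch: an entity is represented by pooling the contextual token states inside its span, and a relation candidate by the pair of span embeddings. The mean pooling and concatenation below are common choices we assume for illustration, not the paper's exact span module.

```python
import torch

def span_embedding(token_states, start, end):
    """Represent an entity by pooling its span's token states, so no dedicated
    per-entity embedding table is needed; token_states has shape (seq, hidden)."""
    return token_states[start:end].mean(dim=0)          # (hidden,)

def relation_embedding(token_states, head_span, tail_span):
    """Represent a relation candidate by its two entity-span embeddings."""
    h = span_embedding(token_states, *head_span)
    t = span_embedding(token_states, *tail_span)
    return torch.cat([h, t], dim=-1)                    # (2 * hidden,)
```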
Electroencephalography (EEG) and language have been widely explored for many downstream tasks (e.g., sentiment analysis, relation detection). Multimodal approaches that study both domains have not been well explored, even though multimodal learning has been shown in recent years to be more powerful than its unimodal counterparts. In this study, we aim to explore the relationship and dependency between EEG and language, i.e., how one domain reflects and represents the other. To study the relationship at the representation level, we introduce MTAM, a Multimodal Transformer Alignment Model, to observe coordinated representations between the two modalities and thus employ the transformed representations for downstream applications. We use various relation-alignment-seeking techniques, such as canonical correlation analysis and Wasserstein distance, as loss functions to transform low-level language and EEG features into high-level transformed features. On the downstream applications, sentiment analysis and relation detection, we achieve new state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method achieves an F1-score improvement of 16.5% on K-EmoCon sentiment analysis, 26.6% on ZuCo sentiment analysis, and 31.1% on ZuCo relation detection. In addition, we provide interpretations of the performance improvement by (1) visualizing the original and transformed feature distributions, showing the effectiveness of the alignment module in discovering and encoding the relationship between EEG and language; (2) visualizing word-level and sentence-level EEG-language alignment weights, showing the influence of different language semantics and EEG frequency features; and (3) visualizing brain topographies to provide an intuitive demonstration of the connectivity of EEG and language responses in brain regions.
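As a flavor of a correlation-based alignment objective of the kind mentioned here, the sketch below standardizes paired EEG and language feature batches and penalizes low cross-correlation. This is a simplified single-projection sketch assuming matched feature dimensions, not MTAM's full CCA or Wasserstein formulation.

```python
import torch

def correlation_alignment_loss(eeg_feats, text_feats, eps=1e-6):
    """Negative mean per-dimension correlation between paired (B, D) batches."""
    e = (eeg_feats - eeg_feats.mean(0)) / (eeg_feats.std(0) + eps)
    t = (text_feats - text_feats.mean(0)) / (text_feats.std(0) + eps)
    corr = (e * t).mean(0)   # correlation per feature dimension across the batch
    return -corr.mean()      # maximizing correlation = minimizing its negative

# loss = correlation_alignment_loss(eeg_encoder(eeg), text_encoder(tokens))
```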