The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
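As an illustration of the most commonly reported remedy, here is a minimal, hypothetical sketch of patch-based training (not taken from the survey itself): random sub-volumes are cropped from an image that is too large to pass through the network at once.

```python
import numpy as np

# Illustrative sketch: sample random 3D patches from a large volume so that
# training batches fit into memory; all sizes below are assumptions.
def sample_patches(volume, patch_size=(64, 64, 64), n=4, rng=None):
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n):
        corner = [rng.integers(0, s - p + 1)
                  for s, p in zip(volume.shape, patch_size)]
        sl = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[sl])
    return np.stack(patches)   # feed these to the network instead of the full volume

ct_scan = np.zeros((512, 512, 300), dtype=np.float32)  # hypothetical large 3D image
batch = sample_patches(ct_scan)
print(batch.shape)             # (4, 64, 64, 64)
```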
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP), owing to their excellent understanding and generation abilities. Remarkably, what further sets these models apart is the massive amount of world knowledge they internalize during pretraining. While many downstream applications provide the model with an informational context to aid its performance on the underlying task, how the model's world knowledge interacts with the factual information presented in the context remains underexplored. As a desirable behavior, an LLM should give precedence to the context whenever it contains task-relevant information that conflicts with the model's memorized knowledge. This enables model predictions to be grounded in the context, which can then be used to update or correct specific model predictions without frequent retraining. By contrast, when the context is irrelevant to the task, the model should ignore it and fall back on its internal knowledge. In this paper, we undertake a first joint study of the aforementioned two properties, namely controllability and robustness, in the context of LLMs. We demonstrate that state-of-the-art T5 and PaLM models (both pretrained and finetuned) can exhibit poor controllability and robustness, and that these properties do not improve with increasing model size. As a solution, we propose a novel method, Knowledge Aware FineTuning (KAFT), to strengthen both controllability and robustness by incorporating counterfactual and irrelevant contexts into standard supervised datasets. Our comprehensive evaluation showcases the utility of KAFT across model architectures and sizes.
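To make the KAFT recipe concrete, the following is a hedged sketch of the kind of data augmentation the abstract describes; the function name, field names, and input format are illustrative assumptions, not the paper's code.

```python
# Hypothetical KAFT-style augmentation: alongside the standard
# (context, question, answer) example, add a counterfactual context whose
# answer follows the context, and an irrelevant context whose answer falls
# back to the model's parametric knowledge.
def kaft_examples(question, context, answer, counterfactual_answer,
                  irrelevant_context):
    ex = []
    # 1) Relevant context: the model should ground its answer in it.
    ex.append({"input": f"context: {context} question: {question}",
               "target": answer})
    # 2) Counterfactual context: swap the answer entity in the context and
    #    supervise the model to follow the (conflicting) context.
    cf_context = context.replace(answer, counterfactual_answer)
    ex.append({"input": f"context: {cf_context} question: {question}",
               "target": counterfactual_answer})
    # 3) Irrelevant context: supervise the model to ignore it.
    ex.append({"input": f"context: {irrelevant_context} question: {question}",
               "target": answer})
    return ex

examples = kaft_examples(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a landmark in Paris.",
    answer="Paris",
    counterfactual_answer="Rome",
    irrelevant_context="Photosynthesis converts sunlight into energy.",
)
```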
Learning agile skills is one of the main challenges in robotics. To this end, reinforcement learning approaches have achieved impressive results. These methods require explicit task information in the form of a reward function, or an expert that can be queried in simulation to provide target control outputs, which limits their applicability. In this work, we propose a generative adversarial method for inferring reward functions from partial and potentially physically incompatible demonstrations, enabling successful skill acquisition from reference or expert demonstrations. Moreover, we show that by using a Wasserstein GAN formulation and taking as input transitions from demonstrations with rough and partial information, we are able to extract robust policies that imitate the demonstrated behaviors. Finally, the obtained skills, such as a backflip, were tested on an agile quadruped robot called Solo 8 and shown to faithfully replicate hand-held human demonstrations.
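The adversarial reward idea can be sketched as follows; the critic architecture, state dimension, and names are assumptions, and WGAN regularization (weight clipping or gradient penalty) is omitted for brevity.

```python
import torch
import torch.nn as nn

# Hedged sketch: a Wasserstein critic scores (state, next_state) transitions.
# Demonstration transitions should score high, policy transitions low, and
# the critic output then serves as a reward signal for any RL algorithm.
state_dim = 12
critic = nn.Sequential(nn.Linear(2 * state_dim, 128), nn.ReLU(),
                       nn.Linear(128, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

def critic_loss(demo_transitions, policy_transitions):
    # WGAN objective: maximize the gap between demo and policy scores.
    return critic(policy_transitions).mean() - critic(demo_transitions).mean()

def reward(state, next_state):
    with torch.no_grad():
        return critic(torch.cat([state, next_state], dim=-1)).squeeze(-1)

# Toy update with random stand-in data:
demo = torch.randn(64, 2 * state_dim)      # rough, partial demonstrations
rollout = torch.randn(64, 2 * state_dim)   # transitions from current policy
loss = critic_loss(demo, rollout)
opt.zero_grad(); loss.backward(); opt.step()
```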
Pretraining produces representations that are effective across a wide range of downstream tasks, but it remains unclear which properties of pretraining are essential for these gains. Notably, recent work has shown that even pretraining on synthetic tasks can achieve significant gains on downstream tasks. In this work, we perform three experiments that iteratively simplify pretraining and show that the simplifications still retain most of its benefits. First, building on prior work, we systematically evaluate three existing synthetic pretraining methods on six downstream tasks. We find that the best synthetic pretraining method is LIME, attaining on average 67% of the benefits of natural pretraining. Second, to our surprise, we find that pretraining on a simple and generic synthetic task defined by set functions achieves 65% of the benefits, almost matching LIME. Third, we find that 39% of the benefits can be attained by using merely the parameter statistics of synthetic pretraining. We release our source code at https://github.com/felixzli/synthetic_pretraining.
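The paper defines its own family of set-function tasks; the following hypothetical example only illustrates the flavor of such a task, where the target is a deterministic set function of the input token sequence.

```python
import random
import string

# Hypothetical set-function pretraining task: input is a token sequence,
# target is the sorted set of its unique tokens (one possible set function).
def make_example(vocab=string.ascii_lowercase, max_len=12):
    seq = random.choices(vocab, k=random.randint(3, max_len))
    target = sorted(set(seq))          # the set function
    return " ".join(seq), " ".join(target)

random.seed(0)
for _ in range(3):
    src, tgt = make_example()
    print(f"{src}  ->  {tgt}")
# A seq2seq model pretrained on many such pairs and then finetuned on
# natural downstream tasks is the kind of pipeline the experiments probe.
```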
Although current vision algorithms excel at many challenging tasks, it is unclear how well they understand the physical dynamics of real-world environments. Here we introduce Physion, a dataset and benchmark for rigorously evaluating the ability to predict how physical scenes will evolve over time. Our dataset features realistic simulations of a wide range of physical phenomena, including rigid- and soft-body collisions, stable multi-object configurations, rolling, sliding, and projectile motion, and thus poses a more comprehensive challenge than previous benchmarks. We use Physion to benchmark a suite of models that vary in their architecture, learning objective, input-output structure, and training data. In parallel, we obtain precise measurements of human prediction behavior on the same scenarios, allowing us to directly evaluate how closely any model approximates human behavior. We find that vision algorithms that learn object-centric representations generally outperform those that do not, yet still fall far short of human performance. In contrast, neural networks with direct access to physical state information perform better and make predictions more similar to those made by humans. These results suggest that extracting physical representations of scenes is the main bottleneck to achieving human-level and human-like physical understanding in vision algorithms. We have publicly released all data and code so that Physion can be used to benchmark additional models in a fully reproducible manner, enabling systematic evaluation of progress towards vision algorithms that understand physical environments as robustly as people do.
We study the meta-learning of numerical algorithms for scientific computing, which combines the mathematically driven, handcrafted design of general algorithm structures with data-driven adaptation to specific classes of tasks. This represents a departure from classical methods in numerical analysis, which typically do not feature such learning-based adaptation. As a case study, we develop a machine learning approach, based on the Runge-Kutta (RK) integrator architecture, that automatically learns effective solvers for initial value problems in the form of ordinary differential equations (ODEs). By combining neural network approximation and meta-learning, we show that we can obtain high-order integrators for targeted families of differential equations without having to compute integrator coefficients by hand. Moreover, we demonstrate that in certain cases our approach achieves performance superior to classical RK methods. This can be attributed to the approach identifying and exploiting certain properties of the ODE family. Overall, this work demonstrates an effective learning-based approach to designing algorithms for the numerical solution of differential equations, an approach that can be readily extended to other numerical tasks.
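A minimal sketch of the core idea, under the assumption of a 2-stage explicit RK scheme with a learnable Butcher tableau, meta-trained on a family of linear ODEs where the exact solution is available for supervision (the paper's setup is more general):

```python
import torch

# Hypothetical sketch: a 2-stage explicit RK step whose tableau entries are
# free parameters, optimized on a sampled family of ODEs.
class LearnableRK2(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Initialized near Heun's method; training adapts them to the family.
        self.a21 = torch.nn.Parameter(torch.tensor(1.0))
        self.b = torch.nn.Parameter(torch.tensor([0.5, 0.5]))

    def step(self, f, y, h):
        k1 = f(y)
        k2 = f(y + h * self.a21 * k1)
        return y + h * (self.b[0] * k1 + self.b[1] * k2)

# ODE family (assumption): linear systems y' = A y with random A.
def sample_problem(dim=2):
    A = -torch.eye(dim) + 0.3 * torch.randn(dim, dim)
    return (lambda y: y @ A.T), A, torch.randn(dim)

solver, h = LearnableRK2(), 0.1
opt = torch.optim.Adam(solver.parameters(), lr=1e-2)
for it in range(2000):
    f, A, y0 = sample_problem()
    y_pred = solver.step(f, y0, h)
    y_true = torch.matrix_exp(h * A) @ y0   # exact solution for linear ODEs
    loss = ((y_pred - y_true) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```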
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
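Federated averaging (FedAvg) is the canonical algorithm for the setting this paper surveys; a minimal sketch follows, with the model (linear regression) and data entirely hypothetical.

```python
import numpy as np

# Minimal FedAvg sketch: each client runs local SGD on its private data;
# the server only ever sees model weights, never raw data.
def local_update(w, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(10):                      # simulate 10 clients
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w_global = np.zeros(2)
for rnd in range(20):                    # communication rounds
    updates = [local_update(w_global.copy(), X, y) for X, y in clients]
    # Server aggregates: average (equal client sizes here); data stays local.
    w_global = np.mean(updates, axis=0)
print(w_global)                          # approaches w_true
```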
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that support knowledge engineers in exploring text collections and in discovering and linking new (so-called open-world) entities to the knowledge graph. We argue that, although neural approaches to text mining have yielded impressive results in recent years, current benchmarks do not properly reflect the typical challenges encountered in the industrial wild. Therefore, our first contribution is an open benchmark, coined IRT2 (inductive reasoning with text), that (1) covers knowledge graphs of varying sizes (including very small ones), (2) comes with incidental, low-quality text mentions, and (3) includes not only triple completion but also ranking, which is relevant for supporting experts with discovery tasks. We investigate two neural models for inductive link prediction, one based on end-to-end learning and one that learns from the knowledge graph and text data in separate steps. These models compete with a strong bag-of-words baseline. The results show a significant advantage for the neural approaches on linking as soon as the available graph data decreases. For ranking, the results are promising, and the neural approaches outperform the sparse retriever by a wide margin.
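The kind of bag-of-words baseline mentioned above can be sketched as a TF-IDF retriever that ranks known knowledge-graph entities for an unseen mention; the entity identifiers and texts below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical sparse-retrieval linking baseline: score each known KG entity
# against a new open-world mention by TF-IDF cosine similarity.
entity_texts = {
    "Q_pump": "centrifugal pump impeller housing flow rate",
    "Q_valve": "ball valve actuator pressure seal",
}
mention = "the impeller of the coolant pump showed wear"

vec = TfidfVectorizer().fit(list(entity_texts.values()) + [mention])
E = vec.transform(entity_texts.values())
m = vec.transform([mention])
scores = cosine_similarity(m, E)[0]
ranking = sorted(zip(entity_texts, scores), key=lambda t: -t[1])
print(ranking)   # "Q_pump" should rank first
```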
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing that similarity. Especially in the biomedical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect an annotator's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies for evaluating and improving model performance are reviewed.
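A hedged sketch of how inter-rater reliability might be computed in this spirit, here as pairwise Dice overlap between two hypothetical annotators' binary segmentation masks:

```python
import numpy as np

# Illustrative only: estimate inter-rater reliability via pairwise Dice,
# the kind of quantity used to approximate PGT.
def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

rng = np.random.default_rng(0)
truth = rng.random((64, 64)) > 0.7               # hypothetical underlying structure
# Two annotators produce noisy interpretations of the same image.
rater1 = truth ^ (rng.random(truth.shape) > 0.95)
rater2 = truth ^ (rng.random(truth.shape) > 0.95)

inter_rater = dice(rater1, rater2)
print(f"inter-rater Dice: {inter_rater:.3f}")
# Under a PGT-aware view, model-vs-reference Dice exceeding this level no
# longer guarantees better real-world performance.
```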
Efficient surrogate modelling is a key requirement for uncertainty quantification in data-driven scenarios. In this work, a novel approach is described that uses Sparse Random Features for surrogate modelling in combination with self-supervised dimensionality reduction. The method is compared to other approaches on synthetic and real data obtained from crashworthiness analyses. The results show that the described approach outperforms state-of-the-art surrogate modelling techniques, namely Polynomial Chaos Expansions and Neural Networks.
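As a rough illustration of the random-features idea (the paper's sparse variant differs in how features are selected and sparsified), a plain random Fourier feature surrogate with a ridge fit might look like this:

```python
import numpy as np

# Hedged sketch: fit a cheap surrogate of an expensive simulator using
# random Fourier features and ridge regression; all settings are assumptions.
rng = np.random.default_rng(0)
f = lambda X: np.sin(3 * X[:, 0]) + X[:, 1] ** 2   # expensive simulator stand-in

X = rng.uniform(-1, 1, size=(200, 2))              # design of experiments
y = f(X)

D = 300                                            # number of random features
W = rng.normal(scale=2.0, size=(2, D))
b = rng.uniform(0, 2 * np.pi, size=D)
phi = lambda X: np.cos(X @ W + b)                  # random feature map

lam = 1e-6                                         # ridge regularization
A = phi(X)
coef = np.linalg.solve(A.T @ A + lam * np.eye(D), A.T @ y)

X_test = rng.uniform(-1, 1, size=(5, 2))
print(phi(X_test) @ coef)   # cheap surrogate predictions
print(f(X_test))            # reference values
```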