The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of participants, and only 50% of participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of respondents applied postprocessing steps.
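As an illustration of the patch-based training strategy reported above, here is a minimal sketch; the patch size, helper name, and NumPy usage are our own assumptions, not details from the survey:

```python
import numpy as np

def sample_patches(volume, label, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size patches from a 3D volume that is too large
    to fit in memory at once (patch-based training). Illustrative only."""
    rng = rng or np.random.default_rng()
    patches, labels = [], []
    for _ in range(n_patches):
        start = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        sl = tuple(slice(st, st + p) for st, p in zip(start, patch_size))
        patches.append(volume[sl])
        labels.append(label[sl])
    return np.stack(patches), np.stack(labels)
```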
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. Since almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
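A minimal sketch of how such a five-component modular design could compose a model; the class name and calling convention here are hypothetical, not TencentPretrain's actual API:

```python
import torch.nn as nn

class ModularPretrainModel(nn.Module):
    """Illustrative composition of the five components named in the abstract:
    embedding, encoder, target embedding, decoder, and target (objective head).
    Components are user-supplied modules; names and signatures are assumptions."""
    def __init__(self, embedding, encoder, target_embedding=None, decoder=None, target=None):
        super().__init__()
        self.embedding = embedding                 # token/patch/audio-frame embedding
        self.encoder = encoder                     # e.g., a Transformer encoder stack
        self.target_embedding = target_embedding   # used by encoder-decoder models
        self.decoder = decoder
        self.target = target                       # pre-training objective, e.g., MLM head

    def forward(self, src, tgt=None):
        hidden = self.encoder(self.embedding(src))
        if self.decoder is not None and tgt is not None:
            hidden = self.decoder(self.target_embedding(tgt), hidden)
        return self.target(hidden)
```

Swapping, say, a text embedding for a patch embedding while keeping the encoder and target fixed is the kind of reuse the modular design enables.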
Out-Of-Distribution (OOD) detection has received broad attention over the years, aiming to ensure the reliability and safety of deep neural networks (DNNs) in real-world scenarios by rejecting incorrect predictions. However, we notice a discrepancy between the conventional evaluation and the essential purpose of OOD detection. On the one hand, the conventional evaluation exclusively considers risks caused by label-space distribution shifts while ignoring the risks from input-space distribution shifts. On the other hand, the conventional evaluation rewards detection methods for not rejecting misclassified images in the validation dataset, even though misclassified images also cause risks and should be rejected. We appeal for rethinking OOD detection from a human-centric perspective: a proper detection method should reject cases where the deep model's prediction mismatches human expectations and accept cases where it meets them. We propose a human-centric evaluation and conduct extensive experiments on 45 classifiers and 8 test datasets. We find that a simple baseline OOD detection method can achieve comparable or even better performance than recently proposed methods, which suggests that progress in OOD detection over the past years may have been overestimated. Additionally, our experiments demonstrate that model selection is non-trivial for OOD detection and should be considered an integral part of the proposed method, which differs from the claim in existing works that proposed methods are universal across different models.
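For reference, a sketch of the kind of simple baseline the abstract alludes to; we assume maximum softmax probability (MSP) as the baseline, and the threshold value is illustrative:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_reject(model, x, threshold=0.9):
    """Maximum-softmax-probability baseline: flag inputs whose top-class
    confidence falls below a threshold. Under the human-centric view, a
    rejected prediction is one expected to mismatch human expectations."""
    probs = F.softmax(model(x), dim=-1)
    conf, pred = probs.max(dim=-1)
    return pred, conf < threshold  # predictions and per-sample reject mask
```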
State-of-the-art industrial inference engines (e.g., FasterTransformer and TurboTransformers) have verified that half-precision floating point (FP16) and 8-bit integer (INT8) quantization can greatly improve model inference speed. However, existing FP16 or INT8 quantization methods are too complex, and improper use can severely harm performance. In this paper, we develop a toolkit for users to easily quantize their models for inference, in which Self-Adaptive Mixed-Precision (SAMP) is proposed to automatically control the quantization rate through a mixed-precision architecture, balancing efficiency and performance. Experimental results show that our SAMP toolkit achieves higher speedup than PyTorch and FasterTransformer while ensuring the required performance. In addition, SAMP is based on a modular design that decouples the tokenizer, embedding, encoder, and target layers, which allows users to handle various downstream tasks, and it can be seamlessly integrated into PyTorch.
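A toy sketch of per-layer precision assignment in the spirit of adaptive mixed precision; this is not SAMP's actual algorithm, and the sensitivity scores and budget are assumptions:

```python
def mixed_precision_plan(sensitivity, int8_budget=0.5):
    """Assign precisions layer by layer: the layers least sensitive to
    quantization run in INT8 until the budget is exhausted; the rest stay
    in FP16. `sensitivity` maps layer names to a quantization-error score.
    Illustrative only; SAMP's actual control mechanism differs."""
    layers = sorted(sensitivity, key=sensitivity.get)  # least sensitive first
    n_int8 = int(len(layers) * int8_budget)
    return {name: ("int8" if i < n_int8 else "fp16")
            for i, name in enumerate(layers)}
```

Raising `int8_budget` trades accuracy for speed, which is the efficiency/performance balance the abstract describes.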
Various deep learning models, especially some recent Transformer-based methods, have greatly improved the state-of-the-art performance of long-term time series forecasting. However, these Transformer-based models suffer severe performance degradation as the input length grows, which prevents them from exploiting extended historical information. Moreover, these methods tend to handle the complex examples in long-term forecasting by increasing model complexity, which often leads to a significant rise in computation and reduced robustness (e.g., overfitting). We propose a novel neural network architecture, called TreeDRNet, for more effective long-term forecasting. Inspired by robust regression, we introduce a doubly residual link structure to make prediction more robust. Built upon the Kolmogorov-Arnold representation theorem, we explicitly introduce feature selection, model ensembling, and a tree structure to further exploit the extended input sequence, which improves the robustness and representation power of TreeDRNet. Unlike previous deep models for sequential forecasting, TreeDRNet is built entirely on multilayer perceptrons and thus enjoys high computational efficiency. Our extensive empirical studies show that TreeDRNet is significantly more effective than state-of-the-art methods, reducing prediction errors by 20% to 40%. In particular, TreeDRNet is over 10 times more efficient than Transformer-based methods. The code will be released soon.
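One common reading of a "doubly residual" MLP block is the backcast/forecast residual pattern sketched below; this is our own illustrative interpretation, not the authors' exact TreeDRNet block:

```python
import torch.nn as nn

class DoublyResidualBlock(nn.Module):
    """MLP block with two residual streams: a backcast residual that removes
    what the block explained from the input, and a forecast residual that
    accumulates the block's contribution to the prediction. Illustrative."""
    def __init__(self, input_len, horizon, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(input_len, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.backcast = nn.Linear(hidden, input_len)
        self.forecast = nn.Linear(hidden, horizon)

    def forward(self, x, forecast_so_far):
        h = self.body(x)
        x_next = x - self.backcast(h)                       # first residual link
        return x_next, forecast_so_far + self.forecast(h)   # second residual link
```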
Recent studies have shown that deep learning models such as RNNs and Transformers have brought significant performance gains for long-term forecasting of time series, because they effectively utilize historical information. However, we find that there is still great room for improvement in how to preserve historical information in neural networks while avoiding overfitting to the noise present in the history. Addressing this issue allows better exploitation of the capabilities of deep learning models. To this end, we design a Frequency improved Legendre Memory model, or FiLM: it applies Legendre polynomial projections to approximate historical information, uses Fourier projection to remove noise, and adds a low-rank approximation to speed up computation. Our empirical studies show that the proposed FiLM significantly improves the accuracy of state-of-the-art models in multivariate and univariate long-term forecasting by 20.3% and 22.6%, respectively. We also demonstrate that the representation module developed in this work can be used as a general plug-in to improve the long-term prediction performance of other deep learning modules. Code is available at https://github.com/tianzhou2011/film/.
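To make the Legendre projection step concrete, a minimal NumPy sketch of compressing a history window into a few Legendre coefficients; the degree and helper name are assumptions, and FiLM's actual memory operator differs in detail:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_memory(signal, degree=8):
    """Project a 1-D history window onto Legendre polynomials P_0..P_degree
    and reconstruct its smooth approximation -- the kind of compact memory
    representation FiLM builds on. Illustrative only."""
    t = np.linspace(-1, 1, len(signal))          # Legendre domain
    coeffs = legendre.legfit(t, signal, degree)  # least-squares projection
    return coeffs, legendre.legval(t, coeffs)    # coefficients + reconstruction
```

The low-degree reconstruction preserves the slow structure of the history while discarding much of the high-frequency noise, which is the intuition behind combining it with a Fourier-based denoising step.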
Although Transformer-based methods have significantly improved state-of-the-art results for long-term series forecasting, they are not only computationally expensive but, more importantly, unable to capture a global view of the time series (e.g., the overall trend). To address these problems, we propose to combine the Transformer with a seasonal-trend decomposition method, in which the decomposition captures the global profile of the time series while the Transformer captures more detailed structures. To further improve the Transformer's long-term forecasting performance, we exploit the fact that most time series tend to have a sparse representation in a well-known basis such as the Fourier transform, and develop a frequency enhanced Transformer. Besides being more effective, the proposed method, termed Frequency Enhanced Decomposed Transformer (FEDformer), is more efficient than the standard Transformer, with complexity linear in the sequence length. Our empirical studies on six benchmark datasets show that, compared with state-of-the-art methods, FEDformer can reduce prediction error by 14.8% and 22.6% for multivariate and univariate time series, respectively. Code is publicly available at https://github.com/maziqing/fedformer.
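A minimal sketch of the seasonal-trend decomposition step described above, using a moving average for the trend; the kernel size and edge padding are our assumptions, not FEDformer's exact configuration:

```python
import numpy as np

def seasonal_trend_decompose(x, kernel=25):
    """Split a 1-D series into trend (moving average) and seasonal/residual
    parts. The trend carries the global profile; the remainder is what a
    Transformer would model in detail. Illustrative sketch."""
    pad = kernel // 2
    padded = np.concatenate([np.full(pad, x[0]), x, np.full(pad, x[-1])])
    trend = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    return x - trend, trend  # (seasonal/residual, trend)
```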
The Visual Question Answering (VQA) task leverages both visual images and language analysis to answer a textual question about an image. It has been a popular research topic with an increasing number of real-world applications over the last decade. This paper describes our recent research on AliceMind-MMU (ALIbaba's Collection of Encoder-decoders from the Machine IntelligeNce lab of Damo academy, for MultiMedia Understanding), which obtains similar or even slightly better results than humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for complex VQA tasks. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analyses are conducted to demonstrate the effectiveness of the new research work.
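A toy sketch of routing questions to specialized expert heads by question type, the mechanism point (3) above refers to; the class, routing rule, and dimensions are hypothetical, and AliceMind-MMU's actual expert framework is more elaborate:

```python
import torch
import torch.nn as nn

class QuestionTypeRouter(nn.Module):
    """Route a fused question-image feature to one of several expert heads
    by predicted question type (e.g., counting vs. OCR vs. commonsense).
    Purely illustrative."""
    def __init__(self, dim, num_types, num_answers):
        super().__init__()
        self.classifier = nn.Linear(dim, num_types)
        self.experts = nn.ModuleList(nn.Linear(dim, num_answers)
                                     for _ in range(num_types))

    def forward(self, fused):                      # fused: (batch, dim)
        qtype = self.classifier(fused).argmax(dim=-1)
        logits = torch.stack([self.experts[int(t)](f)
                              for t, f in zip(qtype, fused)])
        return logits, qtype
```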
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a class of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab for diverse applications involving partial differential equations.
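A conceptual sketch of the Koopman idea underlying KNO: lift the state into a latent space, advance it with a learned linear operator, and decode back; this is our own minimal stand-in, not KoopmanLab's actual KNO implementation:

```python
import torch.nn as nn

class TinyKoopman(nn.Module):
    """Minimal Koopman-style surrogate solver: encode -> linear latent
    dynamics (the Koopman approximation) -> decode. Long-term prediction
    is a repeated application of the same linear map. Illustrative only."""
    def __init__(self, state_dim, latent_dim=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(state_dim, latent_dim), nn.Tanh())
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # Koopman operator
        self.decode = nn.Linear(latent_dim, state_dim)

    def forward(self, u, steps=1):
        z = self.encode(u)
        for _ in range(steps):   # rollout: z_{t+1} = K z_t
            z = self.K(z)
        return self.decode(z)
```

Keeping the latent dynamics linear is the defining design choice: it trades expressiveness in the dynamics for stable, cheap long-horizon rollouts.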
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
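The core modeling step above, treating the human's internal-model update as a nonlinear dynamical system, can be sketched as a learned residual map theta_{t+1} = f(theta_t, o_t); the architecture and dimensions below are our assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class HumanLearningDynamics(nn.Module):
    """Learned nonlinear dynamics of the human's internal model:
    theta_{t+1} = theta_t + f(theta_t, observation_t), fit from
    demonstrations that exhibit learning. Illustrative stand-in."""
    def __init__(self, theta_dim, obs_dim, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(theta_dim + obs_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, theta_dim))

    def forward(self, theta, obs):
        return theta + self.f(torch.cat([theta, obs], dim=-1))  # residual update
```

Once fit, rolling this model forward inside the robot's planner is what lets the robot choose actions that steer how the human's internal model evolves.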