In this paper, we introduce a novel approach that systematically tackles the dataset condensation problem by exploiting the regularities in a given dataset. Instead of condensing the dataset directly in the original input space, we assume a generative process for the dataset, in which a set of learnable codes is defined in a compact latent space, followed by a set of tiny decoders that map the codes back to the original input space. By interchangeably combining different codes and decoders, we can dramatically increase the number of synthetic examples with the same parameter count, since the latent space is much lower-dimensional and since we can assume as many decoders as needed to capture the different styles represented in the dataset at negligible cost. This factorization of knowledge allows information to be shared among synthetic examples efficiently and systematically, yielding a far better trade-off between the compression ratio and the quality of the generated examples. We experimentally show that our method achieves new state-of-the-art records on various benchmark datasets such as SVHN, CIFAR10, CIFAR100, and TinyImageNet.
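For intuition, here is a minimal PyTorch sketch of the code/decoder factorization the abstract describes. All sizes (number of codes, latent dimension, decoder architecture) are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
n_codes, latent_dim, n_decoders = 100, 64, 5
img_ch = 3

codes = nn.Parameter(torch.randn(n_codes, latent_dim))  # learnable latent codes

def tiny_decoder():
    # A deliberately small decoder mapping a latent code to a 32x32 image.
    return nn.Sequential(
        nn.Linear(latent_dim, 8 * 8 * 16),
        nn.Unflatten(1, (16, 8, 8)),
        nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, img_ch, 4, stride=2, padding=1), nn.Tanh(),
    )

decoders = nn.ModuleList([tiny_decoder() for _ in range(n_decoders)])

# Every (code, decoder) pair yields a distinct synthetic image, so the number
# of examples grows multiplicatively (n_codes * n_decoders) while the stored
# parameters grow only additively.
synthetic = torch.stack([dec(codes) for dec in decoders])
print(synthetic.shape)  # torch.Size([5, 100, 3, 32, 32])
```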
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Open world object detection aims at detecting objects that are absent from the object classes of the training data as unknown objects without explicit supervision. Furthermore, the exact classes of the unknown objects must be identified without catastrophic forgetting of the previously known classes when the corresponding annotations of unknown objects are given incrementally. In this paper, we propose a two-stage training approach named Open World DETR for open world object detection based on Deformable DETR. In the first stage, we pre-train a model on the current annotated data to detect objects from the current known classes, and concurrently train an additional binary classifier to classify predictions into foreground or background classes. This helps the model to build unbiased feature representations that can facilitate the detection of unknown classes in the subsequent process. In the second stage, we fine-tune the class-specific components of the model with a multi-view self-labeling strategy and a consistency constraint. Furthermore, we alleviate catastrophic forgetting when the annotations of the unknown classes become available incrementally by using knowledge distillation and exemplar replay. Experimental results on PASCAL VOC and MS-COCO show that our proposed method outperforms other state-of-the-art open world object detection methods by a large margin.
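To make the first-stage idea concrete, here is a sketch of what an auxiliary foreground/background head over detection query embeddings could look like. The embedding size, the stand-in matching labels, and the single-layer head are placeholder assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class BinaryObjectnessHead(nn.Module):
    """Auxiliary classifier: is a query a foreground object or background?"""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 1)

    def forward(self, query_embed):               # (batch, num_queries, embed_dim)
        return self.fc(query_embed).squeeze(-1)   # objectness logits

head = BinaryObjectnessHead()
criterion = nn.BCEWithLogitsLoss()

# Stand-ins: decoder query embeddings, and foreground labels that in practice
# would come from the detector's matching of queries to ground-truth boxes.
query_embed = torch.randn(2, 300, 256)
fg_labels = (torch.rand(2, 300) < 0.1).float()

loss = criterion(head(query_embed), fg_labels)
loss.backward()
```

A class-agnostic head like this can score any box proposal as "object-like", which is what lets predictions outside the known classes surface as unknown-object candidates later.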
We present SpeechMatrix, a large-scale multilingual corpus of speech-to-speech translations mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech. To evaluate the quality of this parallel speech, we train bilingual speech-to-speech translation models on mined data only and establish extensive baseline results on EuroParl-ST, VoxPopuli and FLEURS test sets. Enabled by the multilinguality of SpeechMatrix, we also explore multilingual speech-to-speech translation, a topic which was addressed by few other works. We also demonstrate that model pre-training and sparse scaling using Mixture-of-Experts bring large gains to translation performance. The mined data and models are freely available.
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
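As a flavor of the "extends PyTorch for medical data" claim, here is a minimal sketch using two of MONAI's building blocks: a 3D U-Net and a Dice loss for volumetric segmentation. The shapes and hyperparameters are illustrative, not a recommended configuration.

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# A small 3D U-Net; channels/strides define the encoder-decoder pyramid.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

image = torch.randn(2, 1, 64, 64, 64)            # batch of 3D volumes
label = torch.randint(0, 2, (2, 1, 64, 64, 64))  # voxel-wise class indices
loss = loss_fn(model(image), label)
loss.backward()
print(loss.item())
```

Note how the API mirrors plain PyTorch (modules, tensors, `backward()`), which is the "simple, additive, and compositional" approach the abstract refers to.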
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization. We formulate online pseudo-labeling as an optimization problem to find pseudo-labels that maximize the mutual information between the label and data while being close to a given model probability. We derive a fixed-point iteration method and prove its convergence to the optimal solution. In contrast to baselines, MIRA combined with pseudo-label prediction enables simple yet effective clustering-based representation learning without incorporating extra training techniques or artificial constraints such as sampling strategies, equipartition constraints, etc. With relatively few training epochs, the representation learned by MIRA achieves state-of-the-art performance on various downstream tasks, including linear/k-NN evaluation and transfer learning. Notably, with only 400 epochs, our method applied to the ImageNet dataset with a ResNet-50 architecture achieves 75.6% linear evaluation accuracy.
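To illustrate the general shape of such a fixed-point scheme, here is a toy iteration for mutual-information-regularized pseudo-label assignment. The simplified objective (mean KL to the model probabilities minus a mutual-information bonus, with MI estimated as the entropy of the label marginal minus the mean per-sample entropy) and the resulting multiplicative update are a stand-in sketch, not MIRA's exact formulation.

```python
import numpy as np

def mi_regularized_labels(p, lam=0.5, n_iter=100, tol=1e-8, eps=1e-12):
    """Fixed-point iteration: q_i ∝ (p_i / q_bar**lam) ** (1 / (1 - lam)),
    where q_bar is the current label marginal. Requires 0 < lam < 1."""
    q = p.copy()
    for _ in range(n_iter):
        q_bar = q.mean(axis=0)  # marginal distribution over labels
        logits = (np.log(p + eps) - lam * np.log(q_bar + eps)) / (1.0 - lam)
        q_new = np.exp(logits - logits.max(axis=1, keepdims=True))
        q_new /= q_new.sum(axis=1, keepdims=True)
        done = np.abs(q_new - q).max() < tol
        q = q_new
        if done:
            break
    return q

# Model probabilities for 256 samples over 10 classes (random stand-in).
probs = np.random.default_rng(0).dirichlet(np.ones(10), size=256)
pseudo = mi_regularized_labels(probs)
print(pseudo.shape, pseudo.sum(axis=1)[:3])  # rows remain valid distributions
```

Dividing by the marginal `q_bar` discourages collapse onto a few dominant clusters, which is the role equipartition constraints play in earlier methods.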
A key factor for the optimal acceptance and comfort of automated vehicle features is the driving style. Mismatches between the automated and the driver's preferred driving style can lead users to take over more frequently or even disable the automation features. This work proposes identifying user driving style preferences with multimodal signals, so that the vehicle can match the user's preference in a continuous and automatic manner. We conducted a driving simulator study with 36 participants and collected extensive multimodal data, including behavioral, physiological, and situational data. This includes eye gaze, steering grip, driving maneuvers, brake and throttle pedal inputs, foot distance from the pedals, pupil diameter, galvanic skin response, heart rate, and situational driving context. We then built machine learning models to identify the preferred driving style, and confirmed that all modalities are important for identifying user preferences. This work paves the road toward implicit adaptive driving styles for automated vehicles.
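A minimal sketch of the kind of preference classifier this setup implies, with random stand-in data: one feature row per driving segment, hypothetical columns (mean pupil diameter, steering-grip force, pedal statistics, heart rate, and so on), and a binary preference label. The feature layout, model choice, and label coding are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(360, 12))        # 36 participants x 10 segments, 12 features
y = rng.integers(0, 2, size=360)      # 0 = defensive style, 1 = dynamic style

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2f}")

# Per-feature importances offer a rough check of the claim that every
# modality contributes to identifying the preference.
clf.fit(X, y)
print(clf.feature_importances_)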
One of the key topics in network security research is the automated COA (course of action) attack search method. The traditional approach of passively searching for attacks can be difficult, especially as the network grows larger. To address these issues, new automated COA techniques are being developed, and in this paper we design an intelligent spatial algorithm that operates efficiently in scalable networks. In addition to the spatial search, a Monte Carlo (MC)-based temporal method is considered in order to take care of time-varying network behaviors. Accordingly, we propose a spatio-temporal attack COA search algorithm for scalable and time-varying networks.
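For a rough sense of what an MC-based temporal search over a changing network can look like, here is a toy rollout sketch. The attack graph, success probabilities, and scoring are invented for illustration and are not the paper's algorithm: `edges[t][u]` lists `(v, p)` pairs meaning that at time step `t`, a compromise of `u` can reach `v` with success probability `p`.

```python
import random

edges = {
    0: {"web": [("app", 0.6)]},
    1: {"app": [("db", 0.4), ("cache", 0.7)]},
    2: {"cache": [("db", 0.5)]},
}

def rollout(start="web", horizon=3):
    # Sample one attack course of action through the time-indexed graph.
    node, path = start, [start]
    for t in range(horizon):
        candidates = edges.get(t, {}).get(node, [])
        if not candidates:
            break
        nxt, p = random.choice(candidates)
        if random.random() < p:      # this attack step succeeds
            node, path = nxt, path + [nxt]
    return path

# Monte Carlo estimate of how often a course of action reaches the target.
hits = sum(rollout()[-1] == "db" for _ in range(10_000))
print(f"P(reach db) ~ {hits / 10_000:.3f}")
```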
Lecture slide presentations, a sequence of pages that contain text and figures, are carefully constructed and presented so as to optimally transfer knowledge to students. Previous studies in multimedia and psychology attribute the effectiveness of lecture presentations to their multimodal nature. As a step toward developing AI that helps students learn as an intelligent teaching assistant, we introduce the Multimodal Lecture Presentations dataset as a large-scale benchmark testing the capabilities of machine learning models in the multimodal understanding of educational content. Our dataset contains aligned slides and spoken language for 180+ hours of video and 9,000+ slides, with 10 lecturers from various subjects (e.g., computer science, dentistry, biology). We introduce two research tasks, which are designed as stepping stones toward AI agents that can explain (automatically captioning a lecture presentation) and illustrate (synthesizing visual figures to accompany spoken explanations) educational content. We provide manual annotations to help carry out these two research tasks and evaluate state-of-the-art models on them. Comparing baselines and human student performance, we find that current models struggle with (1) weak crossmodal alignment between slides and spoken text, (2) learning novel visual mediums, (3) technical language, and (4) long-range sequences. To address this, we also introduce PolyViLT, a multimodal transformer trained with a multi-instance learning loss that is more effective than current approaches. We conclude by shedding light on the challenges and opportunities in the multimodal understanding of educational presentations.
Many fundamental problems in machine learning can be formulated by the convex program \[\min_{\theta \in \mathbb{R}^d} \sum_{i=1}^{n} f_i(\theta),\] where each $f_i$ is a convex, Lipschitz function supported on a subset of $d_i$ coordinates of $\theta$. One common approach to this problem, exemplified by stochastic gradient descent, involves sampling one $f_i$ term at every iteration to make progress. This approach crucially relies on a notion of uniformity across the $f_i$'s, formally captured by their condition number. In this work, we give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $\widetilde{O}(\sum_{i=1}^n d_i \log(1/\epsilon))$ gradient computations, with no assumptions on the condition number. The previous best algorithm independent of the condition number is the standard cutting plane method, which requires $O(nd \log(1/\epsilon))$ gradient computations. As a corollary, we improve upon the evaluation oracle complexity for decomposable submodular minimization of Axiotis et al. (ICML 2021). Our main technical contribution is an adaptive procedure to select an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
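To ground the problem setup, here is a sketch of the baseline the abstract contrasts against: sample one $f_i$ per iteration and take a subgradient step restricted to its supported coordinates. The sparse $\ell_1$-type terms are invented for illustration; this is the condition-number-sensitive SGD baseline, not the paper's cutting-plane/interior-point algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 20
# Each f_i depends only on a subset S_i of d_i = 5 coordinates.
supports = [rng.choice(d, size=5, replace=False) for _ in range(n)]
targets = [rng.normal(size=5) for _ in range(n)]

def subgrad_fi(theta, i):
    # f_i(theta) = || theta[S_i] - t_i ||_1: convex, Lipschitz, sparse support.
    return np.sign(theta[supports[i]] - targets[i])

theta, eta = np.zeros(d), 0.05
for step in range(5000):
    i = rng.integers(n)                                # sample one term
    theta[supports[i]] -= eta * subgrad_fi(theta, i)   # touch only d_i coords

obj = sum(np.abs(theta[supports[i]] - targets[i]).sum() for i in range(n))
print(f"objective after SGD: {obj:.3f}")
```

Each iteration here costs only $O(d_i)$, which is what makes the paper's $\widetilde{O}(\sum_i d_i \log(1/\epsilon))$ total, achieved without any condition-number assumption, the natural target.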