As emerging deep neural network (DNN) models continue to grow in size, using large GPU clusters to train DNNs is becoming an essential requirement for achieving acceptable training times. In this paper, we consider the case where future increases in cluster size will cause the global batch size used for training models to hit a fundamental limit: beyond a certain point, larger global batch sizes cause sample efficiency to degrade, increasing the overall time to accuracy. As a result, to achieve further improvements in training performance, we must instead consider "strong scaling" strategies that hold the global batch size constant and allocate smaller batches to each GPU. Unfortunately, this makes it significantly harder to use cluster resources efficiently. We present DeepPool, a system that addresses this efficiency challenge through two key ideas. First, burst parallelism allocates large numbers of GPUs to foreground jobs in bursts to exploit the unevenness in parallelism across layers. Second, GPU multiplexing prioritizes throughput for foreground training jobs, while packing background training jobs to reclaim underutilized GPU resources, thereby improving cluster-wide utilization. Together, these two ideas enable DeepPool to deliver a 2.2 - 2.4x improvement over standard data parallelism for a single task when the cluster scale is large.
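A minimal sketch, not DeepPool's actual implementation, of the two scheduling ideas described above: the foreground job receives a burst of GPUs sized to the parallelism of its current layer, and background jobs are packed onto whatever GPUs the burst leaves idle. The function name and the one-GPU-per-background-job policy are illustrative assumptions.

```python
def assign_gpus(total_gpus, fg_layer_parallelism, bg_jobs):
    """Return (foreground GPU count, background job -> GPU count)."""
    # Burst parallelism: give the foreground job only as many GPUs as its
    # current layer can actually exploit.
    fg_gpus = min(total_gpus, fg_layer_parallelism)
    # GPU multiplexing: pack background jobs onto the remaining GPUs,
    # one GPU each, reclaiming otherwise-idle resources.
    idle = total_gpus - fg_gpus
    bg_alloc = {}
    for job in bg_jobs:
        if idle == 0:
            break
        bg_alloc[job] = 1
        idle -= 1
    return fg_gpus, bg_alloc
```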
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
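The patch-based training that most respondents used to handle oversized samples can be sketched as follows; the patch size and stride are illustrative choices, not values reported in the survey.

```python
import numpy as np

def extract_patches(image, patch=64, stride=64):
    """Split a 2-D array into (patch x patch) tiles, dropping ragged edges."""
    h, w = image.shape
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]
```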
Transformer-based models have gained large popularity and demonstrated promising results in long-term time-series forecasting in recent years. In addition to learning attention in time domain, recent works also explore learning attention in frequency domains (e.g., Fourier domain, wavelet domain), given that seasonal patterns can be better captured in these domains. In this work, we seek to understand the relationships between attention models in different time and frequency domains. Theoretically, we show that attention models in different domains are equivalent under linear conditions (i.e., linear kernel to attention scores). Empirically, we analyze how attention models of different domains show different behaviors through various synthetic experiments with seasonality, trend and noise, with emphasis on the role of softmax operation therein. Both these theoretical and empirical analyses motivate us to propose a new method: TDformer (Trend Decomposition Transformer), that first applies seasonal-trend decomposition, and then additively combines an MLP which predicts the trend component with Fourier attention which predicts the seasonal component to obtain the final prediction. Extensive experiments on benchmark time-series forecasting datasets demonstrate that TDformer achieves state-of-the-art performance against existing attention-based models.
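A minimal sketch of the additive structure described above: seasonal-trend decomposition, one predictor per component, and additive recombination. The moving-average trend and the naive component "predictors" are illustrative stand-ins for TDformer's MLP and Fourier attention.

```python
import numpy as np

def decompose(x, window=5):
    """Split a series into a moving-average trend and a seasonal residual."""
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")
    return trend, x - trend

def forecast(x, horizon=3, window=5):
    trend, seasonal = decompose(x, window)
    # Trend stand-in: repeat the last trend value (TDformer uses an MLP here).
    trend_pred = np.full(horizon, trend[-1])
    # Seasonal stand-in: repeat the last observed seasonal values
    # (TDformer uses Fourier attention here).
    seasonal_pred = seasonal[-horizon:]
    return trend_pred + seasonal_pred  # additive recombination
```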
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem and propose the participants to design an efficient quantized image super-resolution solution that can demonstrate a real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to do a high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating an up to 60 FPS rate when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
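A minimal sketch of the depth-to-space (pixel shuffle) rearrangement commonly used in efficient on-device super-resolution networks, shown here with r=3 to match the challenge's 3X upscaling factor. This illustrates only the rearrangement, not any participant's model.

```python
import numpy as np

def pixel_shuffle(x, r=3):
    """Rearrange a (C*r*r, H, W) feature map into a (C, H*r, W*r) image."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)      # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```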
Accurately extracting driving events is key to maximizing computational efficiency and anomaly detection performance in the tire friction noise-based anomaly detection task. This study proposes a concise and highly practical method for improving the precision of event extraction, which is hindered by extraneous noise such as wind noise that is difficult to characterize clearly due to its randomness. The core of the proposed method is to identify the road friction sound corresponding to the frequency of interest and to remove the opposing characteristics with several frequency filters. Our method maximizes the precision of driving event extraction while improving anomaly detection performance by an average of 8.506%. We therefore conclude that our method is a practical solution suitable for road surface anomaly detection in outdoor edge computing environments.
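The band-selection idea described above can be sketched with a simple FFT-domain filter that keeps only the frequency band associated with the road friction sound and zeros out the rest. The band limits and sample rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bandpass(signal, fs, lo, hi):
    """Zero all FFT bins outside [lo, hi] Hz and return the filtered signal."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))
```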
With the recent advance in neural machine translation demonstrating its importance, research on quality estimation (QE) has been steadily progressing. QE aims to automatically predict the quality of machine translation (MT) output without reference sentences. Despite its high utility in the real world, there remain several limitations concerning manual QE data creation: inevitably incurred non-trivial costs due to the need for translation experts, and issues with data scaling and language expansion. To tackle these limitations, we present QUAK, a Korean-English synthetic QE dataset generated in a fully automatic manner. This consists of three sub-QUAK datasets QUAK-M, QUAK-P, and QUAK-H, produced through three strategies that are relatively free from language constraints. Since each strategy requires no human effort, which facilitates scalability, we scale our data up to 1.58M for QUAK-P, H and 6.58M for QUAK-M. As an experiment, we quantitatively analyze word-level QE results in various ways while performing statistical analysis. Moreover, we show that datasets scaled in an efficient way also contribute to performance improvements by observing meaningful performance gains in QUAK-M, P when adding data up to 1.58M.
As non-face-to-face services have developed rapidly due to the coronavirus, commerce over the Internet, such as sales and reservations, is growing quickly. Consumers also post reviews, suggestions, or judgments about goods or services on websites. The review data that consumers engage with directly provides positive feedback and beneficial influence, such as creating business value. Therefore, analyzing review data is very important from a marketing perspective. Our study proposes a new way of finding factors of customer satisfaction through review data. We applied a method for finding customer satisfaction factors by combining data mining techniques, a big-data analysis method, with natural language processing techniques, a language processing method. Unlike many past studies on customer satisfaction, our study is novel in its use of these various techniques. As a result of the analysis, our experimental results were highly accurate.
In contrast to conventional closed-set recognition, open-set recognition (OSR) assumes the presence of unknown classes that are not seen by the model during training. One predominant approach in OSR is metric learning, where a model is trained to separate the inter-class representations of known-class data. Numerous works in OSR have reported that, even though the model is trained only with known-class data, it becomes aware of unknowns and learns to separate unknown-class representations from known-class representations. This paper analyzes this emergent phenomenon by observing the Jacobian norm of representations. We theoretically show that minimizing the intra-class distances within the known set reduces the Jacobian norm of known-class representations, while maximizing the inter-class distances within the known set increases the Jacobian norm of unknown classes. Closed-set metric learning thus separates unknowns from knowns by forcing their Jacobian norm values to differ. We validate our theoretical framework with extensive evidence on standard OSR datasets. Furthermore, under our theoretical framework, we explain how standard deep-learning techniques help OSR, and we use the framework as a guiding principle to develop an effective OSR model.
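The quantity analyzed above, the norm of the Jacobian of a representation function with respect to its input, can be sketched with a finite-difference estimate. The paper studies trained deep networks; the generic function interface below is only illustrative.

```python
import numpy as np

def jacobian_norm(f, x, eps=1e-5):
    """Frobenius norm of df/dx at x, estimated by central differences."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    jac = np.zeros((fx.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        jac[:, i] = (np.asarray(f(x + e)) - np.asarray(f(x - e))) / (2 * eps)
    return np.linalg.norm(jac)
```

For a linear map f(x) = Wx, the estimate recovers the Frobenius norm of W exactly, which makes the helper easy to sanity-check.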
We address the reliability of benchmarking clustering techniques based on labeled datasets. A standard scheme in external clustering validation is to use class labels as ground-truth clusters, based on the assumption that each class forms a single, clearly separated cluster. However, as this cluster-label matching (CLM) assumption is often violated, the lack of a sanity check for the CLM of benchmark datasets casts doubt on the validity of external validation. Still, evaluating the degree of CLM is challenging. For example, internal clustering validation measures can be used to quantify CLM within the same dataset in order to evaluate its different clusterings, but they are not designed to compare clusterings across different datasets. In this work, we propose a principled way to generate between-dataset internal measures that enable the comparison of CLM across datasets. We first determine four axioms for between-dataset internal measures, complementing Ackerman and Ben-David's within-dataset axioms. We then propose a process of generalizing internal measures to fulfill these new axioms, and use it to extend the widely used Calinski-Harabasz index for between-dataset CLM evaluation. Through quantitative experiments, we (1) verify the validity and necessity of the generalization process and (2) show that the proposed between-dataset Calinski-Harabasz index accurately evaluates CLM across datasets. Finally, we demonstrate the importance of evaluating the CLM of benchmark datasets before conducting external validation.
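For reference, the standard within-dataset Calinski-Harabasz index that the work above generalizes is the ratio of between-cluster to within-cluster dispersion, scaled by the degrees of freedom. A minimal NumPy version:

```python
import numpy as np

def calinski_harabasz(X, labels):
    """Ratio of between- to within-cluster dispersion, scaled by dof."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n, k = len(X), len(np.unique(labels))
    mean = X.mean(axis=0)
    between = within = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        between += len(Xc) * np.sum((mc - mean) ** 2)
        within += np.sum((Xc - mc) ** 2)
    return (between / (k - 1)) / (within / (n - k))
```

Higher values indicate tighter, better-separated clusters, which is why the index is a natural candidate for quantifying how well class labels match cluster structure.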
This paper presents a decentralized multi-agent trajectory planning (MATP) algorithm that guarantees the generation of safe, deadlock-free trajectories in obstacle-rich environments under a limited communication range. The proposed algorithm utilizes a grid-based multi-agent path planning (MAPP) algorithm for deadlock resolution, and we introduce a subgoal optimization method that makes the agents converge to the deadlock-free waypoints generated from MAPP. In addition, the proposed algorithm ensures the feasibility of the optimization problem and collision avoidance by employing a linear safe corridor (LSC). We verify that the proposed algorithm does not cause a deadlock in random forests and dense mazes regardless of the communication range, and that it outperforms our previous work in terms of flight time and distance. We validate the proposed algorithm through a hardware demonstration using ten quadrotors.
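A minimal sketch of the linear safe corridor (LSC) idea described above: a corridor is an intersection of half-spaces a_i . p <= b_i, and a trajectory point is feasible if it satisfies every constraint. The constraint matrix below describes an illustrative unit box, not geometry from the paper.

```python
import numpy as np

def in_corridor(point, A, b, tol=1e-9):
    """True if A @ point <= b holds component-wise (point inside corridor)."""
    p = np.asarray(point, dtype=float)
    return bool(np.all(A @ p <= np.asarray(b, dtype=float) + tol))
```

Because each constraint is linear, such corridors keep the trajectory optimization problem convex, which is what makes feasibility easy to guarantee.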