In recent years, image and video delivery systems have begun integrating deep learning super-resolution (SR) approaches, leveraging their unprecedented visual enhancement capabilities while reducing reliance on networking conditions. Nevertheless, deploying these solutions on mobile devices remains an active challenge, as SR models are excessively demanding in terms of workload and memory footprint. Despite recent progress on on-device SR frameworks, existing systems either penalize visual quality, lead to excessive energy consumption, or make inefficient use of the available resources. This work presents NAWQ-SR, a novel framework for the efficient on-device execution of SR models. Through a novel hybrid-precision quantization technique and a runtime neural image codec, NAWQ-SR exploits the multi-precision capabilities of modern mobile NPUs to minimize latency while meeting user-specified quality constraints. Moreover, NAWQ-SR selectively adapts the arithmetic precision at run time to equip the SR DNN's layers with wider representational power, improving visual quality beyond what was previously possible on NPUs. Altogether, NAWQ-SR achieves an average speedup of 7.9x, 3x and 1.91x over the state-of-the-art on-device SR systems that use heterogeneous processors (MobiSR), CPU (SplitSR) and NPU (XLSR), respectively. Furthermore, NAWQ-SR delivers an average 3.2x speedup and 0.39 dB higher PSNR over status-quo INT8 NPU designs and, most importantly, mitigates the negative effects of quantization on visual quality, setting a new state-of-the-art in the attainable quality of NPU-based SR.
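As a rough illustration of the hybrid-precision idea, the sketch below greedily widens the precision of individual layers until an estimated quality drop falls under a user constraint. The bit-width candidates, the `estimate_quality_drop` callback and the greedy policy are assumptions made for illustration, not the NAWQ-SR algorithm.

```python
# Hypothetical sketch of hybrid-precision assignment under a quality constraint.
# `estimate_quality_drop` and the candidate bit-widths are assumptions for
# illustration, not the NAWQ-SR implementation.

def assign_bitwidths(layers, estimate_quality_drop, max_psnr_drop=0.5):
    """Start every layer at INT8 and selectively widen the layers that
    hurt visual quality the most until the constraint is met."""
    widths = {layer: 8 for layer in layers}            # status-quo INT8 baseline
    while estimate_quality_drop(widths) > max_psnr_drop:
        candidates = [l for l in layers if widths[l] == 8]
        if not candidates:
            break                                      # nothing left to widen
        # Pick the INT8 layer whose widening recovers the most quality.
        best = min(
            candidates,
            key=lambda l: estimate_quality_drop({**widths, l: 16}),
        )
        widths[best] = 16                              # run this layer in wider precision
    return widths
```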
Recent image degradation estimation methods have enabled single-image super-resolution (SR) approaches to better upsample real-world images. Among these methods, explicit kernel estimation approaches have demonstrated unprecedented performance at handling unknown degradations. Nonetheless, a number of limitations constrain their efficacy when used by downstream SR models. Specifically, this family of methods yields i) excessive inference time due to long per-image adaptation times and ii) inferior image fidelity due to kernel mismatch. In this work, we introduce a learning-to-learn approach that meta-learns from the information contained in a distribution of images, thereby enabling significantly faster adaptation to new images with substantially improved performance in both kernel estimation and image fidelity. Specifically, we meta-train a kernel-generating GAN, named MetaKernelGAN, on a range of tasks, such that when a new image is presented, the generator starts from an informed kernel estimate and the discriminator starts with a strong capability to distinguish between patch distributions. Compared with state-of-the-art methods, our experiments show that MetaKernelGAN better estimates the magnitude and covariance of the kernel, leading to state-of-the-art blind SR results within a similar computational regime when combined with a non-blind SR model. Through supervised learning of an unsupervised learner, our method maintains the generalizability of the unsupervised learner, improves the optimization stability of kernel estimation (and consequently of image adaptation), and leads to faster inference with a speedup of 14.24x to 102.1x over existing methods.
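The learning-to-learn idea can be sketched with a Reptile-style meta-update over a distribution of images. The actual MetaKernelGAN objective, inner-loop losses and optimiser schedule differ; `inner_adapt` (per-image KernelGAN-style updates) is an assumed helper.

```python
# Rough Reptile-style illustration of meta-training a kernel-generating GAN
# across a distribution of images; not the MetaKernelGAN procedure itself.
import copy
import torch

def meta_train(generator, discriminator, image_tasks, inner_adapt,
               meta_lr=0.1, inner_steps=5):
    for image in image_tasks:
        # Clone the current meta-parameters and adapt them to this image.
        g_fast = copy.deepcopy(generator)
        d_fast = copy.deepcopy(discriminator)
        inner_adapt(g_fast, d_fast, image, steps=inner_steps)

        # Move the meta-parameters towards the adapted ones (Reptile update),
        # so a new image starts from an informed kernel estimate.
        with torch.no_grad():
            for meta, fast in ((generator, g_fast), (discriminator, d_fast)):
                for p_meta, p_fast in zip(meta.parameters(), fast.parameters()):
                    p_meta.add_(meta_lr * (p_fast - p_meta))
    return generator, discriminator
```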
Embedded and IoT devices, largely powered by microcontroller units (MCUs), could be made more intelligent by leveraging on-device deep learning. One of the main challenges of neural network inference on an MCU is the extremely limited amount of read-write on-chip memory (SRAM, < 512 kB). SRAM is consumed by the neural network layer (operator) input and output buffers, which, traditionally, must be in memory (materialised) for an operator to execute. We discuss a novel execution paradigm for microcontroller deep learning, which modifies the execution of neural networks to avoid materialising full buffers in memory, drastically reducing SRAM usage with no computation overhead. This is achieved by exploiting the properties of operators, which can consume/produce a fraction of their input/output at a time. We describe a partial execution compiler, Pex, which produces memory-efficient execution schedules automatically by identifying subgraphs of operators whose execution can be split along the feature ("channel") dimension. Memory usage is reduced further by targeting memory bottlenecks with structured pruning, leading to the co-design of the network architecture and its execution schedule. Our evaluation of image and audio classification models: (a) establishes state-of-the-art performance in low SRAM usage regimes for the considered tasks with up to +2.9% accuracy increase; (b) finds that a 4x memory reduction is possible by applying partial execution alone, or up to 10.5x when using the compiler-pruning co-design, while maintaining the classification accuracy compared to prior work; (c) uses the recovered SRAM to process higher resolution inputs instead, increasing accuracy by up to +3.9% on Visual Wake Words.
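The memory saving from partial execution can be illustrated on a tiny subgraph: a 1x1 convolution followed by ReLU and global average pooling, evaluated one output-channel group at a time so that the full intermediate feature map is never materialised. This is a hand-written sketch of the pattern Pex discovers automatically, not the compiler itself.

```python
# Illustrative sketch of channel-wise partial execution (not the Pex compiler):
# a 1x1 conv -> ReLU -> global average pool subgraph evaluated one
# output-channel group at a time.
import numpy as np

def partial_conv_relu_gap(x, weights, group_size=8):
    """x: (C_in, H, W), weights: (C_out, C_in). Returns a (C_out,) pooled vector."""
    c_out = weights.shape[0]
    pooled = np.empty(c_out, dtype=x.dtype)
    for start in range(0, c_out, group_size):
        w_slice = weights[start:start + group_size]          # (g, C_in)
        # Only `group_size` output channels are live at any point in time.
        partial = np.tensordot(w_slice, x, axes=([1], [0]))  # (g, H, W)
        partial = np.maximum(partial, 0.0)                   # ReLU
        pooled[start:start + group_size] = partial.mean(axis=(1, 2))
    return pooled
```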
With deep neural networks (DNNs) emerging as the backbone of many computer vision tasks, their adoption in real-world consumer applications keeps expanding. Given the abundance and omnipresence of smart devices, "smart ecosystems" are forming where sensing happens concurrently rather than in isolation. This is shifting the on-device inference paradigm towards deploying centralised neural processing units (NPUs) at the edge, where multiple devices (e.g. in smart homes or autonomous vehicles) can stream their data for processing at dynamic rates. While this provides enhanced potential for input batching, naive solutions can lead to subpar performance and quality of experience, especially under spiking loads. At the same time, the deployment of dynamic DNNs comprising stochastic computation graphs (e.g. early-exit (EE) models) introduces a new dimension of dynamic behaviour in such systems. In this work, we propose a novel early-exit-aware scheduling algorithm that allows sample preemption at run time, to account for the dynamicity introduced by both the arrival and the early-exiting processes. At the same time, we introduce two novel dimensions to the design space of the NPU hardware architecture, namely fluid batching and stackable processing elements, which enable run-time adaptability to different batch sizes and significantly improve NPU utilisation even at small batch sizes. Our evaluation shows that our system achieves, on average, 1.97x and 6.7x improvements over the state-of-the-art in average latency and tail-latency SLO satisfaction, respectively.
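A much-simplified sketch of exit-aware preemptive scheduling is shown below: pending samples are re-prioritised by deadline at every batch boundary, and samples that take an early exit free their slot immediately. The queue model, the `run_stage` helper and the static request list are simplifying assumptions; the paper's scheduler and the fluid-batching hardware support are not reproduced here.

```python
# Simplified sketch of exit-aware preemptive batching, not the proposed scheduler.
import heapq

def schedule(requests, run_stage, num_stages, batch_size):
    """requests: iterable of (deadline, sample_id); run_stage(stage, ids)
    executes one early-exit block and returns the ids that exited there."""
    queue = [(deadline, sid, 0) for deadline, sid in requests]  # (deadline, id, stage)
    heapq.heapify(queue)
    while queue:
        # Preemption point: the most urgent samples are always served first.
        deadline, sid, stage = heapq.heappop(queue)
        batch, leftovers = [(deadline, sid, stage)], []
        while queue and len(batch) < batch_size:
            item = heapq.heappop(queue)
            (batch if item[2] == stage else leftovers).append(item)
        for item in leftovers:
            heapq.heappush(queue, item)
        exited = run_stage(stage, [s for _, s, _ in batch])
        for d, s, st in batch:
            if s not in exited and st + 1 < num_stages:
                heapq.heappush(queue, (d, s, st + 1))  # continue to the next exit
```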
Attention-based neural networks have become pervasive in many AI tasks. Despite their excellent algorithmic performance, the use of the attention mechanism and feed-forward networks (FFNs) demands excessive computational and memory resources, which often compromises their hardware performance. Although various sparse variants have been introduced, most approaches only focus on mitigating the quadratic scaling of attention at the algorithm level, without explicitly considering how efficiently their methods map onto real hardware designs. Furthermore, most efforts focus on either the attention mechanism or the FFN alone, without jointly optimising both parts, which leaves most current designs lacking scalability when dealing with different input lengths. This paper systematically considers the sparsity patterns of different variants from a hardware perspective. At the algorithmic level, we propose FABNet, a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs. At the hardware level, a novel adaptable butterfly accelerator is proposed that can be configured at run time via dedicated hardware control to accelerate different butterfly layers using a single unified hardware engine. On the Long-Range-Arena dataset, FABNet achieves the same accuracy as the vanilla Transformer while reducing the amount of computation by 10 to 66 times and the number of parameters by 2 to 22 times. By jointly optimising the algorithm and hardware, our FPGA-based butterfly accelerator achieves a 14.2x to 23.2x speedup over state-of-the-art accelerators normalised to the same computational budget. Compared with optimised CPU and GPU designs on the Raspberry Pi 4 and Jetson Nano, our system is up to 273.8 and 15.1 times faster under the same power budget.
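The butterfly sparsity pattern itself is easy to illustrate: an n-point transform is factored into log2(n) sparse levels, each applying learned 2x2 blocks to strided pairs, giving O(n log n) work instead of O(n^2). The toy code below shows only this pattern; the parameter shapes are chosen for clarity and do not reflect the paper's hardware mapping.

```python
# Toy illustration of a butterfly-factored linear transform.
import numpy as np

def butterfly_apply(x, factors):
    """x: (n,) with n a power of two; factors[s]: (n // 2, 2, 2) learned
    2x2 blocks for butterfly level s."""
    n = x.shape[0]
    y = x.copy()
    for s, blocks in enumerate(factors):          # log2(n) sparse factors
        stride = 1 << s
        out = np.empty_like(y)
        pair = 0
        for block_start in range(0, n, 2 * stride):
            for i in range(block_start, block_start + stride):
                a, b = y[i], y[i + stride]
                m = blocks[pair]                  # one learned 2x2 block
                out[i] = m[0, 0] * a + m[0, 1] * b
                out[i + stride] = m[1, 0] * a + m[1, 1] * b
                pair += 1
        y = out
    return y
```

For example, random factors for n = 8 can be built with `factors = [np.random.randn(4, 2, 2) for _ in range(3)]`, i.e. three levels of four 2x2 blocks instead of a dense 8x8 matrix.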
When the available hardware cannot meet the memory and compute requirements to efficiently train high-performing machine learning models, a compromise in either the training quality or the model complexity is needed. In Federated Learning (FL), nodes are orders of magnitude more constrained than traditional server-grade hardware and are often battery-powered, severely limiting the complexity of the models that can be trained under this paradigm. While most research has focused on designing better aggregation strategies to improve convergence rates and alleviate the communication costs of FL, fewer efforts have been devoted to speeding up on-device training. This stage, which repeats hundreds of times (i.e. in every round) and can involve thousands of devices, accounts for the majority of the time required to train federated models and for the bulk of the energy consumption on the client side. In this work, we present the first study of the unique aspects that arise when introducing sparsity at training time in FL workloads. We then propose ZeroFL, a framework that relies on highly sparse operations to accelerate on-device training. Models trained with ZeroFL and 95% sparsity achieve up to 2.3% higher accuracy compared to baselines obtained by adapting a state-of-the-art sparse training framework to the FL setting.
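As a hedged sketch of the underlying idea, the snippet below applies a top-k magnitude mask to the weights during each local step, so that on-device computation operates on sparse tensors. ZeroFL's actual sparse kernels, mask schedule and aggregation differ; this only illustrates sparsity at training time.

```python
# Sketch of sparse on-device training in an FL round (not ZeroFL itself).
import torch

def topk_mask(tensor, sparsity=0.95):
    """Keep only the largest-magnitude (1 - sparsity) fraction of entries."""
    k = max(1, int(tensor.numel() * (1.0 - sparsity)))
    threshold = tensor.abs().flatten().topk(k).values.min()
    return (tensor.abs() >= threshold).to(tensor.dtype)

def local_sparse_step(model, loss_fn, batch, lr=0.01, sparsity=0.95):
    loss = loss_fn(model, batch)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
            p *= topk_mask(p, sparsity)   # sparse weights -> sparse ops on device
            p.grad = None
    return loss.item()
```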
The ubiquity of camera-enabled mobile devices has led to large amounts of unlabelled video data being produced at the edge. Although various self-supervised learning (SSL) methods have been proposed to harvest their latent spatio-temporal representations for task-specific training, practical challenges such as privacy concerns and communication costs can hinder the deployment of SSL at scale. To mitigate these issues, we propose the use of Federated Learning (FL) for the task of video SSL. In this work, we evaluate the performance of current state-of-the-art (SOTA) video-SSL techniques and identify their shortcomings when integrated into a large-scale FL setting simulated with the Kinetics-400 dataset. We then propose a novel federated SSL framework for video, dubbed FedVSSL, that integrates different aggregation strategies and partial weight updating. Extensive experiments demonstrate the effectiveness and significance of FedVSSL, as it outperforms the centralised SOTA for the downstream retrieval task on UCF-101 and, by 6.66%, on HMDB-51.
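An illustrative sketch of FedAvg with partial weight updating is shown below: only backbone parameters are aggregated across clients, while the remaining (e.g. predictor) weights keep the server's previous values. The exact parameter split and the aggregation strategies mixed in FedVSSL may differ; the function and argument names are assumptions.

```python
# Sketch of federated aggregation with partial weight updating.
def aggregate(server_weights, client_weights, client_sizes, is_backbone):
    """server_weights / client_weights[i]: dict name -> numpy array;
    is_backbone(name) decides which tensors participate in averaging."""
    total = float(sum(client_sizes))
    new_weights = dict(server_weights)            # start from the global model
    for name in server_weights:
        if not is_backbone(name):
            continue                              # partial update: skip the head
        new_weights[name] = sum(
            w[name] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
    return new_weights
```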
Federated Learning (FL) has emerged as a prospective solution that facilitates the training of a high-performing centralised model without compromising users' privacy. Despite its success, current research is limited by the difficulty of establishing realistic large-scale FL systems in the early stages of experimentation. Simulation can help accelerate this process. To facilitate efficient and scalable FL simulation of heterogeneous clients, we design and implement Protea, a flexible and lightweight client-profiling component for federated systems built with the FL framework Flower. It allows the automatic collection of system-level statistics and the estimation of the resources needed by each client, so that simulations run in a resource-aware fashion. The results show that our design successfully increases parallelism, with 1.66x faster wall-clock time and 2.6x better GPU utilisation, enabling large-scale experiments with heterogeneous clients.
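The profiling idea can be sketched with standard-library tools: time one local round and record peak Python-heap usage so the simulator can place clients in a resource-aware way. The real Protea component integrates with Flower and captures richer system-level and GPU statistics, which this stand-in does not.

```python
# Generic sketch of per-client profiling for resource-aware FL simulation;
# tracemalloc only tracks the Python heap, so GPU memory is not covered here.
import time
import tracemalloc

def profile_client(train_fn, *args, **kwargs):
    """Run one local training round and record wall-clock time and peak
    Python heap usage for resource-aware client placement."""
    tracemalloc.start()
    start = time.perf_counter()
    result = train_fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, {"seconds": elapsed, "peak_mem_mb": peak_bytes / 2**20}
```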
The ubiquity of microphone-enabled devices has led to large amounts of unlabelled audio data being produced at the edge. Integrating self-supervised learning (SSL) and federated learning (FL) into one coherent system can offer data privacy guarantees while also advancing the quality and robustness of speech representations. In this paper, we provide a first-of-its-kind systematic study of the feasibility and complexities of training speech SSL models under FL scenarios from the perspective of algorithmic, hardware and system limits. Despite the high potential of their combination, we find that existing system constraints and algorithmic behaviour make SSL-plus-FL systems nearly impossible to build today. Yet, critically, our results indicate specific performance bottlenecks and research opportunities that would allow this situation to be reversed. While our analysis suggests that, given existing trends in hardware, hybrid SSL and FL speech systems will not be viable until 2027, we believe this study can act as a roadmap to accelerate work towards reaching this milestone much earlier.
Breakthroughs in unsupervised domain adaptation (UDA) can help adapt models from a label-rich source domain to unlabelled target domains. Despite these advances, there is a lack of research on how UDA algorithms, particularly those based on adversarial learning, can work in distributed settings. In real-world applications, target domains are often distributed across thousands of devices, and existing adversarial UDA algorithms, which are centralised in nature, cannot be applied in these settings. To address this important problem, we introduce FRuDA: an end-to-end framework for distributed adversarial UDA. Through a careful analysis of the UDA literature, we identify the design goals for a distributed UDA system and propose two novel algorithms to increase the adaptation accuracy and training efficiency of adversarial UDA in distributed settings. Our evaluation of FRuDA on five image and speech datasets shows that it can boost target-domain accuracy by up to 50% and improve the training efficiency of adversarial UDA by at least 11 times.
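For context, the snippet below shows a generic DANN-style adversarial alignment step, i.e. the kind of objective a distributed UDA system has to coordinate across nodes. FRuDA's two algorithms for accuracy and training efficiency are not reproduced, and the discriminator's own update is omitted; all names here are illustrative assumptions.

```python
# Generic adversarial UDA objective (DANN-style), not FRuDA's algorithms.
import torch
import torch.nn.functional as F

def adversarial_uda_step(encoder, classifier, discriminator, src, tgt, lam=0.1):
    """src: (x_s, y_s) labelled source batch; tgt: x_t unlabelled target batch.
    The discriminator is assumed to be trained elsewhere to output 1 for
    source features and 0 for target features."""
    x_s, y_s = src
    f_s, f_t = encoder(x_s), encoder(tgt)

    # Task loss on the labelled source domain.
    task_loss = F.cross_entropy(classifier(f_s), y_s)

    # Domain confusion: update the encoder so the discriminator mistakes
    # target features for source ones (pushed towards label 1).
    d_t = discriminator(f_t)
    adv_loss = F.binary_cross_entropy_with_logits(d_t, torch.ones_like(d_t))

    return task_loss + lam * adv_loss
```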