Neural architecture search (NAS) has had a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g., $10^4$ GPU hours) makes it difficult to directly search architectures on large-scale tasks (e.g., ImageNet). Differentiable NAS can reduce the cost in GPU hours via a continuous representation of network architecture but suffers from high GPU memory consumption (growing linearly w.r.t. the candidate set size). As a result, such methods need to utilize proxy tasks, such as training on a smaller dataset, learning with only a few blocks, or training for just a few epochs. Architectures optimized on these proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS, which can directly learn architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster in measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g., latency) and provide insights for efficient CNN architecture design.
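To make the memory claim concrete, here is a minimal sketch (assuming PyTorch; the class and op list are illustrative, not the authors' code) of the path-sampling idea: rather than executing all candidate operations and summing them, only one sampled path is active per forward pass, so activation memory stays at the level of training a single network. The gradient estimator ProxylessNAS uses to update the architecture parameters through the sampled binary gates is omitted here.

```python
import torch
import torch.nn as nn

class BinarizedMixedOp(nn.Module):
    """Candidate ops for one layer; only one sampled path runs per forward."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # arch params

    def forward(self, x):
        probs = torch.softmax(self.alpha, dim=0)
        idx = torch.multinomial(probs, 1).item()  # sample a single active path
        # memory is O(1) in the candidate-set size: one op's activations only
        return self.ops[idx](x)

ops = [nn.Conv2d(16, 16, k, padding=k // 2) for k in (1, 3, 5)]
layer = BinarizedMixedOp(ops)
out = layer(torch.randn(2, 16, 32, 32))
```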
Deep learning techniques have demonstrated remarkable effectiveness in a wide variety of tasks, and deep learning holds the potential to advance a multitude of applications, including edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while applying deep models usually incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to this challenge is to automate the design of effective deep learning models that are lightweight, require only little storage, and incur only low computational overhead. This survey provides comprehensive coverage of techniques for automating the design of deep learning models for edge computing. It offers an overview and comparison of the key metrics commonly used to quantify a model's effectiveness, lightness, and computational cost. The survey then covers three state-of-the-art categories of deep design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, it covers open issues and directions for future research.
Designing accurate and efficient ConvNets for mobile devices is challenging because the design space is combinatorially large. Due to this, previous neural architecture search (NAS) methods are computationally expensive. ConvNet architecture optimality depends on factors such as input resolution and target devices. However, existing approaches are too resource-demanding for case-by-case redesigns. Also, previous work focuses primarily on reducing FLOPs, but FLOP count does not always reflect actual latency. To address these issues, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods. FBNets (Facebook-Berkeley-Nets), a family of models discovered by DNAS, surpass state-of-the-art models both designed manually and generated automatically. FBNet-B achieves 74.1% top-1 accuracy on ImageNet with 295M FLOPs and 23.1 ms latency on a Samsung S8 phone, 2.4× smaller and 1.5× faster than MobileNetV2-1.3 [17] with similar accuracy. Despite higher accuracy and lower latency than MnasNet [20], we estimate FBNet-B's search cost is 420× smaller than MnasNet's, at only 216 GPU hours. Searched for different resolutions and channel sizes, FBNets achieve 1.5% to 6.4% higher accuracy than MobileNetV2. The smallest FBNet achieves 50.2% accuracy and 2.9 ms latency (345 frames per second) on a Samsung S8. Over a Samsung-optimized FBNet, the iPhone-X-optimized model achieves a 1.4× speedup on an iPhone X. FBNet models are open-sourced at https://github.com/facebookresearch/mobile-vision.
Figure 1. Differentiable neural architecture search (DNAS) for ConvNet design. DNAS explores a layer-wise space in which each layer of a ConvNet can choose a different block. The search space is represented by a stochastic super net. The search process trains the stochastic super net using SGD to optimize the architecture distribution. Optimal architectures are sampled from the trained distribution. The latency of each operator is measured on target devices and used to compute the loss for the super net.
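A hedged sketch of the stochastic super net described above, in PyTorch (the names and toy latency numbers are assumptions, not the FBNet code): each layer mixes its candidate blocks with Gumbel-softmax weights, and a lookup table of measured per-block latencies yields a differentiable expected-latency term for the loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticSuperLayer(nn.Module):
    def __init__(self, blocks, block_latencies_ms):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.theta = nn.Parameter(torch.zeros(len(blocks)))  # sampling logits
        # per-block latencies measured on the target device (constants)
        self.register_buffer("lat", torch.tensor(block_latencies_ms))

    def forward(self, x, tau=5.0):
        w = F.gumbel_softmax(self.theta, tau=tau)    # differentiable sample
        out = sum(wi * b(x) for wi, b in zip(w, self.blocks))
        expected_latency = (w * self.lat).sum()      # enters the loss
        return out, expected_latency

layer = StochasticSuperLayer(
    [nn.Conv2d(8, 8, 3, padding=1), nn.Conv2d(8, 8, 5, padding=2)],
    block_latencies_ms=[0.7, 1.3],  # hypothetical measurements
)
y, lat = layer(torch.randn(1, 8, 16, 16))
loss = y.mean() + 0.1 * lat  # stand-in for task loss + latency penalty
loss.backward()
```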
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as their potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
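Two of the compression primitives this overview benchmarks, sketched minimally (the function names and the symmetric quantization scheme are illustrative choices, not tied to any particular library): unstructured magnitude pruning zeroes the smallest weights, and uniform quantization snaps weights to a low-bit grid.

```python
import torch

def magnitude_prune(w: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction of weights (unstructured)."""
    k = int(sparsity * w.numel())
    if k == 0:
        return w.clone()
    threshold = w.abs().flatten().kthvalue(k).values
    return torch.where(w.abs() > threshold, w, torch.zeros_like(w))

def uniform_quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Symmetric uniform quantization: round to a 2^bits-level grid, dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

w = torch.randn(64, 64)
w_pruned = magnitude_prune(w, sparsity=0.9)  # 90% of entries become zero
w_int8 = uniform_quantize(w, bits=8)         # int8-style weight grid
```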
Remarkable progress has been made in the field of neural architecture search over the past few years. However, searching for efficient networks remains challenging due to the gap between the constraints imposed during the search and actual inference time. To find high-performance networks with low inference time, several previous works set computational-complexity constraints for the search algorithm. However, many factors affect inference speed (e.g., FLOPs, MACs), and the correlation between any single indicator and latency is not strong. Recently, re-parameterization (Rep) techniques have been proposed to convert multi-branch architectures into inference-friendly single-path ones. However, the multi-branch architectures are still human-defined and inefficient. In this work, we propose a new search space that is suitable for structural re-parameterization techniques. RepNAS, a one-stage NAS approach, efficiently searches for the optimal diverse branch block (ODBB) of each layer under a branch-number constraint. Our experimental results show that the searched ODBBs can easily surpass manually designed diverse branch blocks (DBBs) with efficient training. Code and models will be made available soon.
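The re-parameterization trick this abstract builds on can be shown in a few lines (a toy two-branch block; the batch-norm folding that Rep methods also handle is omitted): because convolution is linear, parallel branches can be merged into a single kernel after training, so the multi-branch structure pays no inference cost.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny diverse-branch block: parallel 3x3 and 1x1 convs whose outputs are summed.
conv3 = nn.Conv2d(8, 8, 3, padding=1)
conv1 = nn.Conv2d(8, 8, 1)

# Re-parameterize: pad the 1x1 kernel to 3x3 and fold both branches into one conv.
fused = nn.Conv2d(8, 8, 3, padding=1)
fused.weight.data = conv3.weight.data + F.pad(conv1.weight.data, [1, 1, 1, 1])
fused.bias.data = conv3.bias.data + conv1.bias.data

x = torch.randn(1, 8, 16, 16)
assert torch.allclose(conv3(x) + conv1(x), fused(x), atol=1e-5)
```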
Benefiting from its search efficiency, differentiable neural architecture search (NAS) has evolved into the most dominant alternative for automatically designing competitive deep neural networks (DNNs). We note that DNNs must be executed under strict performance constraints in real-world scenarios, e.g., the runtime latency on autonomous vehicles. However, to obtain an architecture that satisfies a given performance constraint, previous hardware-aware differentiable NAS methods have to repeat a plethora of search runs, manually tuning hyperparameters by trial and error, so the total design cost grows proportionally. To resolve this, we introduce a lightweight hardware-aware differentiable NAS framework, dubbed LightNAS, which strives to find the architecture that satisfies various performance constraints through a one-time search (i.e., you only search once). Extensive experiments are conducted to show the superiority of LightNAS over previous state-of-the-art methods.
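One plausible reading of the "you only search once" claim, sketched as a Lagrangian-style update (this rule is an assumption for illustration, not the paper's exact scheme): instead of re-running the search to hand-tune a latency weight, the multiplier is adapted during the single search until the constraint is met.

```python
import torch

lam = torch.tensor(0.0)         # trade-off multiplier, adapted during the search
lr_lam, target_ms = 0.01, 80.0  # hypothetical latency budget

def constrained_loss(task_loss, expected_latency_ms):
    """Both arguments are scalar tensors produced by the supernet."""
    global lam
    violation = expected_latency_ms - target_ms
    # dual ascent: raise the penalty while over budget, relax it when under
    lam = torch.clamp(lam + lr_lam * violation.detach(), min=0.0)
    return task_loss + lam * violation
```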
We propose three novel pruning techniques to improve the cost and results of inference-aware differentiable neural architecture search (DNAS). First, we introduce a stochastic bi-path building block for DNAS, which makes searching over internal hidden dimensions tractable in terms of memory and computational complexity. Second, we propose an algorithm for pruning blocks within a stochastic layer of the supernet during the search. Third, we describe a novel technique for pruning unnecessary stochastic layers during the search. The optimized models resulting from the search are called PruNet, and they establish a new state-of-the-art Pareto frontier on NVIDIA V100 for inference latency versus ImageNet top-1 image classification accuracy. PruNet as a backbone also outperforms GPUNet and EfficientNet on the COCO object detection task in terms of mean average precision (mAP).
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.
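The continuous relaxation at the heart of this method, in a minimal PyTorch sketch (names are illustrative): every candidate operation on an edge runs, weighted by a softmax over learnable architecture parameters, so the discrete choice becomes differentiable; after the search, each edge keeps its argmax operation. In the full method, the network weights and the architecture parameters are updated in an alternating, bilevel fashion on the training and validation sets.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Softmax-weighted sum over all candidate ops on one edge."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidate_ops)))

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def derive(self):
        """Discretize: keep the strongest op once the search has converged."""
        return self.ops[int(self.alpha.argmax())]
```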
For deployment, neural architecture search should be hardware-aware in order to satisfy device-specific constraints (e.g., memory usage, latency, and energy consumption) and enhance model efficiency. Existing hardware-aware NAS methods collect a large number of samples (e.g., accuracy and latency) from a target device and either build a lookup table or a latency estimator. However, this approach is impractical in real-world scenarios, since there exist numerous devices with different hardware specifications, and collecting samples from such a large number of devices would require prohibitive computational and monetary costs. To overcome these limitations, we propose the Hardware-adaptive Efficient Latency Predictor (HELP), which formulates the device-specific latency estimation problem as a meta-learning problem, such that we can estimate a model's latency on an unseen device with only a few samples. To this end, we introduce novel hardware embeddings that can embed any device, treating it as a black-box function that outputs latencies, and use the hardware embeddings to meta-learn the hardware-adaptive latency predictor in a device-dependent manner. We validate the proposed HELP for its latency estimation performance on unseen platforms, where it achieves high estimation accuracy with fewer than 10 measurement samples, outperforming all relevant baselines. We also validate end-to-end NAS frameworks using HELP against those without it, and show that it largely reduces the total time cost of the base NAS method in latency-constrained settings. Code is available at https://github.com/hayeonlee/help.
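A minimal sketch of the predictor's shape (the architecture encoding, embedding size, and plain few-shot fitting loop are assumptions for illustration; the paper's episodic meta-training across devices is omitted): the device is treated as a black box whose embedding is fit from a handful of measured samples, after which latencies of unseen architectures can be predicted.

```python
import torch
import torch.nn as nn

class LatencyPredictor(nn.Module):
    """Predict latency from an architecture encoding plus a hardware embedding."""
    def __init__(self, arch_dim=64, hw_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(arch_dim + hw_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, arch_enc, hw_emb):
        return self.mlp(torch.cat([arch_enc, hw_emb], dim=-1)).squeeze(-1)

# Adapting to an unseen device: only hw_emb is passed to the optimizer, so the
# predictor body stays fixed while a fresh embedding is fit on ~10 samples.
pred = LatencyPredictor()
hw_emb = nn.Parameter(torch.zeros(16))
opt = torch.optim.Adam([hw_emb], lr=0.05)
archs, lats = torch.randn(10, 64), torch.rand(10) * 50  # 10 measured samples
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(pred(archs, hw_emb.expand(10, 16)), lats)
    loss.backward()
    opt.step()
```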
A major challenge in deploying deep neural networks (DNNs) over mobile edge networks is how to split the DNN model so as to match the network architecture as well as the computation and communication capacities of all the nodes. This essentially involves two highly coupled procedures: model generation and model splitting. In this paper, a joint model-split and neural-architecture-search (JMSNAS) framework is proposed to automatically generate and deploy DNN models over a mobile edge network. Considering both computing and communication resource constraints, a computational-graph search problem is formulated to find the multiple split points of the DNN model, and the model is then trained to meet the accuracy requirement. Moreover, the trade-off between model accuracy and completion latency is achieved through proper design of the objective function. Experimental results confirm the superiority of the proposed framework over state-of-the-art split machine learning design methods.
Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it according to three dimensions: search space, search strategy, and performance estimation strategy.
Deploying convolutional neural networks (CNNs) on mobile devices is difficult because of their limited memory and computational resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module that generates more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that can fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can serve as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, from which the lightweight C-GhostNet can easily be established. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depthwise convolution) in a building stage, we propose to utilize stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed by the original block with fewer output channels to generate intrinsic features, and the other part is generated by cheap operations that exploit stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet achieve the optimal trade-off between accuracy and latency for CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
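A hedged sketch of the C-Ghost module described above (the channel sizes and the choice of depthwise 3×3 as the cheap transformation are illustrative): a primary convolution produces a few intrinsic feature maps, and cheap per-channel operations generate the remaining "ghost" maps, which are concatenated to form the full output.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Primary conv -> intrinsic maps; cheap depthwise ops -> ghost maps."""
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        intrinsic = out_ch // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, intrinsic, 1, bias=False),
            nn.BatchNorm2d(intrinsic), nn.ReLU(inplace=True),
        )
        # cheap linear transformations: depthwise 3x3 over intrinsic maps
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, out_ch - intrinsic, 3, padding=1,
                      groups=intrinsic, bias=False),
            nn.BatchNorm2d(out_ch - intrinsic), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

m = GhostModule(16, 32)
print(m(torch.randn(1, 16, 24, 24)).shape)  # torch.Size([1, 32, 24, 24])
```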
Designing convolutional neural networks (CNN) for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant efforts have been dedicated to designing and improving mobile CNNs on all dimensions, it is very difficult to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike previous work, where latency is considered via another, often inaccurate proxy (e.g., FLOPS), our approach directly measures real-world inference latency by executing the model on mobile phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that encourages layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our MnasNet achieves 75.2% top-1 accuracy with 78ms latency on a Pixel phone, which is 1.8× faster than MobileNetV2 [29] with 0.5% higher accuracy and 2.3× faster than NASNet [36] with 1.2% higher accuracy. Our MnasNet also achieves better mAP quality than MobileNets for COCO object detection. Code is at https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet.
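The latency-aware objective the abstract refers to is a soft-constrained reward; a small sketch of its shape (the exponent follows the paper's soft-constraint setting to the best of our reading, and the sample numbers are hypothetical):

```python
def mnas_reward(acc: float, latency_ms: float, target_ms: float,
                w: float = -0.07) -> float:
    """Soft-constrained objective: acc * (latency / target) ** w.
    With w < 0, exceeding the latency target smoothly discounts the reward
    instead of rejecting the model outright."""
    return acc * (latency_ms / target_ms) ** w

print(mnas_reward(acc=0.752, latency_ms=78.0, target_ms=75.0))  # mild penalty
print(mnas_reward(acc=0.752, latency_ms=60.0, target_ms=75.0))  # mild bonus
```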
Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategy, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach learning to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which indicates a kind of neural architecture. To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, named Gradient-based search using Differentiable Architecture Sampler (GDAS). In experiments, we can finish one searching procedure in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82% with only 2.5M parameters, which is on par with the state-of-the-art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects.
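A minimal sketch of the differentiable sampler idea (illustrative names; PyTorch's built-in straight-through Gumbel-softmax stands in for the sampler): unlike a soft mixture over all candidates, each training step activates exactly one op on a DAG edge, which keeps computation and memory low, while gradients still reach the architecture logits through the relaxed sample.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampledEdge(nn.Module):
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.logits = nn.Parameter(torch.zeros(len(candidate_ops)))

    def forward(self, x, tau=10.0):
        h = F.gumbel_softmax(self.logits, tau=tau, hard=True)  # one-hot sample
        idx = int(h.argmax())
        # only the sampled op is executed; multiplying by h[idx] (== 1 in the
        # forward pass) lets gradients flow back to the logits
        return h[idx] * self.ops[idx](x)
```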
Neural architecture search (NAS) has recently become increasingly popular in the deep learning community, mainly because it offers users without rich expertise an opportunity to benefit from the success of deep neural networks (DNNs). However, NAS remains laborious and time-consuming, because a large number of performance estimations are required during the search, and training DNNs is computationally intensive. To address this major limitation, improving the efficiency of performance estimation is essential in the design of NAS. This paper begins with a brief introduction to the general framework of NAS. It then systematically discusses methods for evaluating candidate networks according to proxy metrics. This is followed by a description of surrogate-assisted NAS, which is divided into three categories: Bayesian optimization for NAS, surrogate-assisted evolutionary algorithms for NAS, and multi-objective optimization (MOP) for NAS. Finally, remaining challenges and open research questions are discussed, and promising research topics in this emerging field are suggested.
Deep learning-based super-resolution (SR) has gained tremendous popularity in recent years owing to its high image quality and broad application scenarios. However, prior methods typically involve massive amounts of computation and huge power consumption, causing difficulties for real-time inference, especially on resource-limited platforms such as mobile devices. To mitigate this, we propose performing depth search and per-layer width search with adaptive SR blocks. The inference speed is taken directly into the optimization together with the SR loss, to derive SR models with high image quality that also satisfy real-time inference requirements. Instead of measuring the speed on mobile devices at each iteration of the search, a speed model incorporating compiler optimizations is leveraged to predict the inference latency of SR blocks with various width configurations, for faster convergence. With the proposed framework, we achieve real-time SR inference at 720p resolution with competitive SR performance on the GPU/DSP of mobile platforms (Samsung Galaxy S21).
Convolutional neural networks (CNNs) are used in numerous real-world applications, such as vision-based autonomous driving and video content analysis. To run CNN inference on various target devices, hardware-aware neural architecture search (NAS) is crucial. A key requirement of efficient hardware-aware NAS is the fast evaluation of inference latencies in order to rank different architectures. While building a latency predictor for each target device is common practice in the state of the art, it is a very time-consuming process that lacks scalability in the presence of extremely diverse devices. In this work, we address the scalability challenge by exploiting latency monotonicity: the architecture latency rankings on different devices are often correlated. When strong latency monotonicity exists, we can reuse architectures searched on one proxy device for new target devices without losing optimality. In the absence of strong latency monotonicity, we propose an efficient proxy adaptation technique that significantly boosts it. Finally, we validate our approach and conduct experiments with devices of different platforms on multiple mainstream search spaces, including MobileNet-V2, MobileNet-V3, NAS-Bench-201, ProxylessNAS and FBNet. Our results highlight that, by using just one proxy device, we can find almost the same Pareto-optimal architectures as existing per-device NAS, while avoiding the prohibitive cost of building a latency predictor for each device. GitHub: https://github.com/ren-research/oneproxy.
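Latency monotonicity can be quantified with a rank correlation of the same architectures' latencies across devices; a small self-contained sketch (the latency numbers are made up for illustration, and Spearman's rho is one natural choice of metric):

```python
import torch

def spearman_rho(lat_a: torch.Tensor, lat_b: torch.Tensor) -> float:
    """Rank correlation of architecture latencies on two devices. A value
    near 1 indicates strong latency monotonicity, so architectures searched
    on device A transfer to device B without losing optimality."""
    ra = lat_a.argsort().argsort().float()  # ranks (assumes distinct values)
    rb = lat_b.argsort().argsort().float()
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return float((ra * rb).sum() / (ra.norm() * rb.norm()))

# hypothetical latencies of 5 architectures measured on two devices (ms)
dev_a = torch.tensor([12.0, 31.0, 18.0, 25.0, 40.0])
dev_b = torch.tensor([8.0, 20.0, 13.0, 16.0, 29.0])
print(spearman_rho(dev_a, dev_b))  # 1.0: identical rankings
```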
Differentiable architecture search has gradually become a mainstream research topic in neural architecture search for its ability to improve efficiency compared with the earlier NAS methods (EA-based, RL-based). Recent differentiable NAS approaches also aim to further improve search efficiency, reduce GPU memory consumption, and address the "depth gap" issue. However, these methods can no longer handle non-differentiable objectives, let alone multiple objectives such as performance, robustness, efficiency, and other metrics. We propose TND-NAS, an end-to-end architecture search framework towards non-differentiable objectives, which combines the high efficiency of the differentiable NAS framework with the compatibility with non-differentiable metrics found in multi-objective NAS (MNAS). Under the differentiable NAS framework, with the continuous relaxation of the search space, the architecture parameters ($\alpha$) of TND-NAS are optimized in discrete space, following a search policy that progressively shrinks the supernetwork via $\alpha$. Our representative experiments take two objectives (parameters, accuracy) as an example: we achieve a series of high-performance compact architectures on the CIFAR-10 (1.09M/3.3%, 2.4M/2.95%, 9.57M/2.54%) and CIFAR-100 (2.46M/18.3%, 5.46M/16.73%, 12.88M/15.20%) datasets. Favorably, under real-world scenarios (resource-constrained, platform-specialized), Pareto-optimal solutions can be conveniently reached by TND-NAS.
Neural architecture search (NAS) has attracted growing interest. To reduce the search cost, recent work has explored weight sharing across models and made major progress in one-shot NAS. However, it has been observed that a model with higher one-shot accuracy does not necessarily perform better when trained stand-alone. To address this issue, this paper proposes a progressive, automatic design of the search space, named Pad-NAS. Unlike previous approaches, in which all layers share the same operation search space in the supernet, we formulate a progressive search strategy based on operation pruning and build a layer-wise operation search space. In this way, Pad-NAS can automatically design the operations for each layer and achieve a trade-off between search space quality and model diversity. During the search, we also take hardware platform constraints into account for efficient neural network deployment. Extensive experiments on ImageNet show that our method achieves state-of-the-art performance.
There is growing interest in automating neural network architecture design. Existing architecture search methods can be computationally expensive, requiring thousands of different architectures to be trained from scratch. Recent work has explored weight sharing across models to amortize the cost of training. Although previous methods reduced the cost of architecture search by orders of magnitude, they remain complex, requiring hypernetworks or reinforcement learning controllers. We aim to understand weight sharing for one-shot architecture search. With careful experimental analysis, we show that it is possible to efficiently identify promising architectures from a complex search space without either hypernetworks or RL.