Neural architecture search (NAS) aims to automate the architecture design process and improve the performance of deep neural networks. Platform-aware NAS methods consider both performance and complexity and can find well-performing architectures with low computational resources. Although conventional NAS methods incur enormous computational costs owing to the repetition of model training, one-shot NAS, which trains a supernet containing all candidate architectures during the search process, has been reported to reduce the search cost. This study focuses on architecture-complexity-aware one-shot NAS, which optimizes an objective function composed of the weighted sum of two metrics, such as predictive performance and the number of parameters. In existing methods, the architecture search process must be run multiple times with different coefficients for the weighted sum to obtain multiple architectures with different complexities. This study aims to reduce the search cost associated with finding multiple architectures. The proposed method uses multiple distributions to generate architectures with different complexities and updates each distribution using the samples obtained from all the distributions based on importance sampling. The proposed method thus obtains multiple architectures with different complexities in a single architecture search, reducing the search cost. The proposed method is applied to the architecture search of convolutional neural networks on the CIFAR-10 and ImageNet datasets. Consequently, compared with baseline methods, the proposed method finds multiple architectures of varying complexity while requiring less computational effort.
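The distribution-update mechanism described above lends itself to a compact illustration. The following is a minimal sketch, not the paper's implementation: it keeps one categorical distribution per complexity coefficient, draws all samples from their mixture, and corrects each sample with an importance weight in a REINFORCE-style update. The `evaluate` objective and the coefficients `lams` are hypothetical stand-ins for the paper's performance and parameter-count metrics.

```python
# Minimal sketch: multiple architecture distributions updated with shared
# samples via importance sampling (toy objective; assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
n_edges, n_ops, n_dists, n_samples, lr = 4, 3, 3, 32, 0.1
lams = [0.0, 0.5, 1.0]                       # complexity coefficients, one per distribution
theta = np.zeros((n_dists, n_edges, n_ops))  # logits of each architecture distribution

def probs(t):                                # softmax over operations, per edge
    e = np.exp(t - t.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def evaluate(arch, lam):                     # toy objective: accuracy proxy minus complexity
    acc_proxy = -np.abs(arch - 1).sum()      # pretend op index 1 is best everywhere
    return acc_proxy - lam * arch.sum()      # arch.sum() stands in for parameter count

for step in range(100):
    p = probs(theta)                                         # (n_dists, n_edges, n_ops)
    q = p.mean(axis=0)                                       # mixture proposal shared by all
    archs = np.array([[rng.choice(n_ops, p=q[e]) for e in range(n_edges)]
                      for _ in range(n_samples)])
    for k in range(n_dists):
        for arch in archs:
            pk = p[k, np.arange(n_edges), arch].prod()       # prob under distribution k
            qx = q[np.arange(n_edges), arch].prod()          # prob under the mixture
            w = pk / qx                                      # importance weight
            f = evaluate(arch, lams[k])
            grad = -p[k].copy()                              # d log pk / d theta ...
            grad[np.arange(n_edges), arch] += 1.0            # ... = one-hot - softmax
            theta[k] += lr * w * f * grad / n_samples        # REINFORCE-style update
```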
Differentiable architecture search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost through weight sharing and continuous relaxation. However, more recent works find that existing differentiable NAS techniques struggle to outperform naive baselines, yielding deteriorating architectures as the search proceeds. Rather than directly optimizing the architecture parameters, this paper formulates neural architecture search as a distribution learning problem by relaxing the architecture weights into Gaussians. By leveraging natural-gradient variational inference (NGVI), the architecture distribution can be easily optimized based on existing codebases without incurring more memory and computation. We demonstrate how differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability. Experimental results on the NAS-Bench-201 and NAS-Bench-1Shot1 benchmark datasets confirm the significant improvements the proposed framework can make. Moreover, instead of simply applying argmax on the learned parameters, we further leverage the recently proposed training-free proxies in NAS to select the optimal architecture from a group of architectures drawn from the optimized distribution, achieving state-of-the-art results on the NAS-Bench-201 and NAS-Bench-1Shot1 benchmarks. Our best architecture in the DARTS search space also obtains competitive test errors of 2.37%, 15.72%, and 24.2% on the CIFAR-10, CIFAR-100, and ImageNet datasets, respectively.
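To make the Gaussian relaxation concrete, here is a minimal sketch under stated assumptions: a single mixed operation, architecture weights sampled from N(mu, sigma^2) via reparameterization, and plain Adam updates standing in for the paper's natural-gradient variational inference. The toy `mixed_op_output` is invented for illustration.

```python
# Minimal sketch: architecture weights as Gaussians instead of point estimates.
import torch

n_ops = 4
mu = torch.zeros(n_ops, requires_grad=True)        # mean of architecture weights
rho = torch.zeros(n_ops, requires_grad=True)       # sigma = softplus(rho) > 0
opt = torch.optim.Adam([mu, rho], lr=0.05)

def mixed_op_output(alpha_sample):
    # toy stand-in for a DARTS mixed operation: op 2 is "best"
    op_scores = torch.tensor([0.1, 0.3, 0.9, 0.2])
    weights = torch.softmax(alpha_sample, dim=-1)  # continuous relaxation
    return (weights * op_scores).sum()

for step in range(200):
    sigma = torch.nn.functional.softplus(rho)
    eps = torch.randn(n_ops)
    alpha = mu + sigma * eps                       # reparameterized sample
    loss = -mixed_op_output(alpha)                 # maximize the toy score
    opt.zero_grad(); loss.backward(); opt.step()

# Instead of argmax(mu), draw several architectures from N(mu, sigma^2) and
# pick the best under a training-free proxy, as the abstract suggests.
samples = mu.detach() + torch.nn.functional.softplus(rho).detach() * torch.randn(8, n_ops)
print(samples.argmax(dim=-1))                      # candidate op per sampled architecture
```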
This paper proposes a neural architecture search (NAS) method for split computing. Split computing is an emerging machine-learning inference technique that addresses the privacy and latency challenges of deploying deep learning in IoT systems. In split computing, a neural network model is separated and cooperatively processed between an edge server and an IoT device over the network. Thus, the architecture of the neural network model significantly affects the communication payload size, the model accuracy, and the computational load. In this paper, we address the challenge of optimizing neural network architectures for split computing. To this end, we propose NASC, which jointly explores the optimal model architecture and a split point to meet a latency requirement (i.e., the total latency of computation and communication is smaller than a certain threshold). NASC employs one-shot NAS, which does not require repeated model training, for a computationally efficient architecture search. Our performance evaluation using hardware (HW)-NAS-Bench benchmark data shows that the proposed NASC can improve the trade-off between communication latency and model accuracy, i.e., it reduces the latency by about 40-60% from the baseline, with slight accuracy degradation.
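The split-point side of the search can be illustrated with a toy feasibility check. This sketch assumes invented per-layer latencies and payload sizes; the actual NASC couples such a check with a one-shot supernet so that the architecture and the split point are searched jointly.

```python
# Minimal sketch: total latency (device compute + transmission + server
# compute) for each candidate split point, filtered against a threshold.
device_ms = [5, 8, 12, 20, 30]            # per-layer compute latency on the IoT device
server_ms = [1, 2, 3, 5, 8]               # per-layer compute latency on the edge server
payload_kb = [600, 400, 200, 100, 50, 25] # data sent if split after layer s (s=0: raw input)
bandwidth_kbps, threshold_ms = 1000, 120

def total_latency(split):
    compute_device = sum(device_ms[:split])              # layers run on-device
    compute_server = sum(server_ms[split:])              # remaining layers on the server
    comm = payload_kb[split] / bandwidth_kbps * 1000.0   # transmission time in ms
    return compute_device + comm + compute_server

feasible = [(s, total_latency(s)) for s in range(len(device_ms) + 1)
            if total_latency(s) <= threshold_ms]
print(feasible)   # split points whose total latency meets the requirement
```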
Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategy, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach learning to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which indicates a kind of neural architecture. To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, named Gradient-based search using Differentiable Architecture Sampler (GDAS). In experiments, we can finish one searching procedure in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82% with only 2.5M parameters, which is on par with the state-of-the-art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects.
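The differentiable sampler at the heart of GDAS is commonly realized with the Gumbel-softmax trick. A minimal sketch for a single edge follows; the per-op validation losses are a toy stand-in for training the sampled sub-graph and measuring its validation loss.

```python
# Minimal sketch: a learnable, differentiable sampler over candidate
# operations, optimized by the loss of the sampled architecture.
import torch
import torch.nn.functional as F

logits = torch.zeros(4, requires_grad=True)         # learnable sampler parameters
opt = torch.optim.SGD([logits], lr=0.1)
op_val_losses = torch.tensor([0.9, 0.4, 0.2, 0.7])  # toy validation losses per op

for step in range(100):
    # hard=True yields a one-hot sample whose gradient flows through the
    # softmax (straight-through estimator), keeping the sampler trainable.
    sample = F.gumbel_softmax(logits, tau=1.0, hard=True)
    loss = (sample * op_val_losses).sum()           # loss of the sampled architecture
    opt.zero_grad(); loss.backward(); opt.step()

print(logits.argmax().item())                       # converges toward the lowest-loss op
```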
In this paper, we propose a Shapley-value-based method to evaluate operation contributions for neural architecture search (Shapley-NAS). Differentiable architecture search (DARTS) acquires the optimal architectures by optimizing the architecture parameters with gradient descent, which significantly reduces the search cost. However, the magnitude of the architecture parameters updated by gradient descent fails to reveal the actual operation importance to the task performance and therefore harms the effectiveness of the obtained architectures. In contrast, we propose to evaluate the direct influence of operations on validation accuracy. To deal with the complex relationships between supernet components, we leverage the Shapley value to quantify their marginal contributions by considering all possible combinations. Specifically, we iteratively optimize the supernet weights and update the architecture parameters by evaluating operation contributions via Shapley values, so that the optimal architectures are derived by selecting the operations that contribute significantly to the task. Since the exact computation of the Shapley value is NP-hard, a Monte-Carlo sampling-based algorithm with early truncation is employed for efficient approximation, and a momentum update mechanism is adopted to alleviate the fluctuation of the sampling process. Extensive experiments on various datasets and various search spaces show that our Shapley-NAS outperforms state-of-the-art methods by a considerable margin with light search cost. The code is available at https://github.com/euphoria16/shapley-nas.git
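The Monte-Carlo approximation with early truncation and momentum smoothing can be sketched as follows; `accuracy` is a toy additive stand-in for evaluating the supernet with a subset of operations enabled, not the paper's evaluator.

```python
# Minimal sketch: Monte-Carlo Shapley estimation over operations, with early
# truncation of permutations and a momentum-smoothed running estimate.
import numpy as np

ops = list(range(5))
rng = np.random.default_rng(0)
true_contrib = np.array([0.05, 0.20, 0.02, 0.10, 0.01])

def accuracy(subset):                      # toy additive accuracy model
    return sum(true_contrib[o] for o in subset)

shapley = np.zeros(len(ops))
momentum, n_rounds, trunc_eps = 0.9, 200, 1e-3
full_acc = accuracy(ops)

for _ in range(n_rounds):
    perm = rng.permutation(ops)
    estimate = np.zeros(len(ops))
    prev, subset = 0.0, []
    for o in perm:
        if full_acc - prev < trunc_eps:    # early truncation: remaining gain negligible
            break
        subset.append(o)
        cur = accuracy(subset)
        estimate[o] = cur - prev           # marginal contribution of op o
        prev = cur
    shapley = momentum * shapley + (1 - momentum) * estimate  # momentum update

print(np.round(shapley, 3))                # approaches the per-op contributions
```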
Vision Transformers (ViTs) underpin the recent breakthroughs in computer vision. However, designing ViT architectures is laborious and relies heavily on expert knowledge. To automate the design process and incorporate deployment flexibility, one-shot neural architecture search decouples supernet training from architecture specialization for diverse deployment scenarios. To cope with the enormous number of sub-networks in the supernet, existing methods treat all architectures as equally important and sample them randomly at each update step during training. During architecture search, these methods focus on finding architectures on the Pareto frontier of performance and resource consumption, which forms a gap between training and deployment. In this paper, we devise a simple yet effective method, called FocusFormer, to bridge this gap. To this end, we propose to learn an architecture sampler that assigns higher sampling probabilities to architectures on the Pareto frontier under different resource constraints during supernet training, making them sufficiently optimized and thereby improving their performance. During specialization, we can directly use the well-trained architecture sampler to obtain accurate architectures satisfying a given resource constraint, which significantly improves search efficiency. Extensive experiments on CIFAR-100 and ImageNet show that our FocusFormer is able to improve the performance of the searched architectures while significantly reducing the search cost. For example, on ImageNet, our FocusFormer-Ti with 1.4G FLOPs outperforms AutoFormer-Ti by 0.5% in top-1 accuracy.
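The core idea of biasing sampling toward the Pareto frontier can be sketched in a few lines. Note that FocusFormer learns the sampler jointly with the supernet, whereas this toy version merely upweights frontier members among hypothetical (FLOPs, accuracy-proxy) pairs.

```python
# Minimal sketch: find the Pareto frontier of candidate sub-networks and
# sample them with higher probability during supernet training.
import random

cands = [(100, 0.60), (150, 0.72), (150, 0.65), (200, 0.74), (250, 0.80), (250, 0.70)]

def on_frontier(i):
    f_i, a_i = cands[i]
    # dominated if some other candidate is no larger and no less accurate
    return not any(f <= f_i and a >= a_i and (f, a) != (f_i, a_i) for f, a in cands)

frontier = [i for i in range(len(cands)) if on_frontier(i)]
weights = [4.0 if i in frontier else 1.0 for i in range(len(cands))]  # upweight frontier

# During supernet training, draw sub-networks with these probabilities:
picked = random.choices(range(len(cands)), weights=weights, k=10)
print(frontier, picked)
```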
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks, and various methods have been proposed for defense. Among these methods, adversarial training has been drawing increasing attention because of its simplicity and effectiveness. However, the performance of adversarial training is greatly limited by the architectures of target DNNs, which often leaves the resulting DNNs with poor accuracy and unsatisfactory robustness. To address this problem, we propose DSARA to automatically search for neural architectures that are accurate and robust after adversarial training. In particular, we design a novel cell-based search space specifically for adversarial training, which improves the accuracy and the robustness upper bound of the searched architectures by carefully designing the placement of the cells and the proportional relationship of the filter numbers. Then we propose a two-stage search strategy to search for both accurate and robust neural architectures. At the first stage, the architecture parameters are optimized to minimize the adversarial loss, which makes full use of the effectiveness of adversarial training in enhancing robustness. At the second stage, the architecture parameters are optimized to minimize both the natural loss and the adversarial loss utilizing the proposed multi-objective adversarial training method, so that the searched neural architectures are both accurate and robust. We evaluate the proposed algorithm under natural data and various adversarial attacks, which reveals the superiority of the proposed method in terms of both accurate and robust architectures. We also conclude that accurate and robust neural architectures tend to deploy very different structures near the input and the output, which has great practical significance for both the hand-crafting and the automatic design of accurate and robust neural architectures.
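The second-stage objective, a weighted sum of natural and adversarial losses, can be sketched as follows. FGSM is used here as a simple attack and the weights are illustrative, since the abstract does not specify the exact attack or trade-off used in the multi-objective adversarial training.

```python
# Minimal sketch: minimizing a weighted sum of the natural loss and the
# adversarial loss, with FGSM generating the adversarial examples.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
eps, w_nat, w_adv = 0.1, 0.5, 0.5

for step in range(20):
    x_req = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()      # input gradient for the attack
    x_adv = (x + eps * x_req.grad.sign()).detach()   # FGSM adversarial examples

    loss = w_nat * F.cross_entropy(model(x), y) + w_adv * F.cross_entropy(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step()
```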
Recently, a large number of deep-learning-based methods have been successfully applied to various remote sensing image (RSI) recognition tasks. However, most existing advances of deep learning methods in the RSI field rely heavily on features extracted by manually designed backbone networks, which severely hinders the potential of deep learning models due to the complexity of RSIs and the limitations of prior knowledge. In this paper, we research a new design paradigm of backbone architectures for RSI recognition tasks, including scene classification, land-cover classification, and object detection. A one-shot architecture search framework based on a weight-sharing strategy and evolutionary algorithms is proposed, called RSBNet, which consists of three stages: First, a supernet constructed in a layer-wise search space is pretrained on a self-assembled large-scale RSI dataset based on an ensemble single-path training strategy. Next, the pretrained supernet is equipped with different recognition heads through a switchable recognition module and fine-tuned on each target dataset to obtain task-specific supernets. Finally, we search the optimal backbone architecture for each recognition task based on an evolutionary algorithm without any network training. Extensive experiments are conducted on five benchmark datasets for different recognition tasks; the results show the effectiveness of the proposed search paradigm and demonstrate that the searched backbones are able to flexibly adapt to different RSI recognition tasks and achieve impressive performance.
Neural architecture search (NAS) is a powerful tool for automating the design of effective image-processing DNNs. Ranking has been advocated for designing efficient performance predictors for NAS. Previous contrastive methods solve the ranking problem by comparing pairs of architectures and predicting their relative performance. However, this focuses only on the ranking between two related architectures and ignores the overall quality distribution of the search space, which may suffer from generalization problems. A predictor, namely the Neural Architecture Ranker (NAR), which concentrates on the global quality tier of a specific architecture, is proposed to tackle such problems caused by the local perspective. The NAR explores the quality tiers of the search space globally and classifies each individual into the tier it belongs to according to its global ranking. Thus, the predictor gains knowledge of the performance distribution of the search space, which helps generalize its ranking ability to different datasets more easily. Meanwhile, the global quality distribution facilitates the search phase by sampling candidates directly according to the statistics of the quality tiers, without training a search algorithm such as reinforcement learning (RL) or evolutionary algorithms (EA), thus simplifying the NAS pipeline and saving computational overhead. The proposed NAR achieves better performance than state-of-the-art methods on two widely used NAS research datasets. On the vast search space of NAS-Bench-101, the NAR can easily find architectures with top 0.01‰ performance. It also generalizes well to the different image datasets of NAS-Bench-201, namely CIFAR-10, CIFAR-100, and ImageNet-16-120, by identifying the optimal architecture for each of them.
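Sampling candidates directly from tier statistics, instead of training RL or EA searchers, can be sketched like this; `predict_tier` is a toy stand-in for the trained ranker, not the paper's model.

```python
# Minimal sketch: classify a pool of architectures into quality tiers with a
# predictor, then draw a new candidate from the top tier's op statistics.
import random
from collections import Counter

random.seed(0)
n_edges, n_ops, n_tiers = 4, 3, 3
pool = [tuple(random.randrange(n_ops) for _ in range(n_edges)) for _ in range(300)]

def predict_tier(arch):                 # toy ranker: more of op 2 -> better tier
    score = arch.count(2) / len(arch)
    return min(int((1 - score) * n_tiers), n_tiers - 1)   # tier 0 is best

top = [a for a in pool if predict_tier(a) == 0]
# Per-edge operation statistics of the top tier guide direct sampling:
stats = [Counter(a[e] for a in top) for e in range(n_edges)]
new_arch = tuple(max(stats[e], key=stats[e].get) for e in range(n_edges))
print(new_arch)                         # candidate drawn from top-tier statistics
```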
Existing neural architecture search algorithms mostly work on search spaces with short-distance connections. We argue that such designs, though safe and stable, obstruct the search algorithms from exploring more complicated scenarios. In this paper, we build the search algorithm upon a complicated search space with long-distance connections and show that existing weight-sharing search algorithms mostly fail due to the existence of interleaved connections. Based on this observation, we present a simple yet effective algorithm named IF-NAS, where we perform a periodic sampling strategy to construct different sub-networks during the search procedure, avoiding interleaved connections in any of them. In the proposed search space, IF-NAS outperforms both random sampling and previous weight-sharing search algorithms by a significant margin. IF-NAS also generalizes to the micro cell-based spaces, which are much easier. Our research emphasizes the importance of macro structure, and we look forward to further efforts along this direction.
This paper aims to explore the feasibility of neural architecture search (NAS) given only a pre-trained model, without using any original training data. This is an important circumstance for privacy protection, bias avoidance, etc. To achieve this, we first synthesize usable data by recovering the knowledge from a pre-trained deep neural network. Then we use the synthesized data and their predicted soft labels to guide neural architecture search. We identify that the NAS task requires synthesized data (we target the image domain here) with sufficient semantics, diversity, and a minimal domain gap from the natural images. For semantics, we propose recursive label calibration to produce more informative outputs. For diversity, we propose a regional update strategy to generate more diverse and enriched synthetic data. For the minimal domain gap, we use input- and feature-level regularization to mimic the original data distribution in latent space. We instantiate our proposed framework with three popular NAS algorithms: DARTS, ProxylessNAS, and SPOS. Surprisingly, our results demonstrate that the architectures discovered by searching with our synthesized data achieve accuracy comparable to those discovered by searching with the original data, deriving, for the first time, the conclusion that NAS can be done effectively with no need of access to the original or so-called natural data, if the synthesis method is well designed. Our code will be publicly available.
Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g., 10^4 GPU hours) makes it difficult to directly search the architectures on large-scale tasks (e.g., ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from high GPU memory consumption (which grows linearly w.r.t. the candidate set size). As a result, such methods need to utilize proxy tasks, such as training on a smaller dataset, learning with only a few blocks, or training just for a few epochs. Architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS, which can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6x fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2x faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g., latency) and provide insights for efficient CNN architecture design.
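The memory-saving idea of keeping only one sampled path active per update can be sketched as follows. This is a simplification: ProxylessNAS uses binary gates with a BinaryConnect-style gradient estimator, which the straight-through Gumbel-softmax sample below only approximates.

```python
# Minimal sketch: one active path per step, so memory stays at the level of
# training a single architecture rather than the whole over-parameterized net.
import torch
import torch.nn.functional as F

alpha = torch.zeros(3, requires_grad=True)          # path logits
opt = torch.optim.SGD([alpha], lr=0.1)
op_losses = torch.tensor([0.8, 0.2, 0.5])           # toy per-op validation losses

for step in range(100):
    gate = F.gumbel_softmax(alpha, tau=1.0, hard=True)   # one-hot: a single active path
    active = gate.argmax()
    # Only the active op's loss is computed; the gradient reaches alpha
    # through the straight-through gate.
    loss = gate[active] * op_losses[active]
    opt.zero_grad(); loss.backward(); opt.step()

print(alpha.argmax().item())                        # converges toward the lowest-loss path
```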
Recently, the efficient, automated search for well-performing neural architectures (NAS) has drawn increasing attention. Thereby, the predominant research objective is to reduce the necessity of costly evaluations of neural architectures while efficiently exploring large search spaces. To this aim, surrogate models embed architectures in a latent space and predict their performance, while generative models for neural architectures enable optimization-based search within the latent space the generator draws from. Both surrogate and generative models aim at facilitating query-efficient search in a well-structured latent space. In this paper, we further improve the trade-off between query efficiency and promising architecture generation by leveraging the advantages of both efficient surrogate models and generative design. To this end, we propose a generative model, paired with a surrogate predictor, that iteratively learns to generate samples from increasingly promising latent subspaces. This approach leads to very effective and efficient architecture search while keeping the query amount low. In addition, our approach allows, in a straightforward manner, jointly optimizing multiple objectives such as accuracy and hardware latency. We show the benefit of this approach not only w.r.t. the optimization of architectures for highest classification accuracy but also in the context of hardware constraints, and we outperform state-of-the-art methods on several NAS benchmarks for single and multiple objectives. We also achieve state-of-the-art performance on ImageNet. The code is available at http://github.com/jovitalukasik/ag-net.
We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. Existing one-shot methods, however, are hard to train and not yet effective on large-scale datasets like ImageNet. This work proposes a Single Path One-Shot model to address the challenge in the training. Our central idea is to construct a simplified supernet, where all architectures are single paths, so that the weight co-adaptation problem is alleviated. Training is performed by uniform path sampling. All architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channels, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves state-of-the-art performance on the large dataset ImageNet.
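Uniform single-path sampling is simple to sketch. The toy supernet below has three choice blocks of two candidate ops each; the real method trains on ImageNet and then searches the trained supernet (e.g., with an evolutionary algorithm) under FLOPs or latency constraints.

```python
# Minimal sketch: a single-path supernet trained with uniform path sampling,
# so every update runs exactly one architecture.
import random
import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Linear(dim, dim),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
        ])
    def forward(self, x, choice):
        return self.ops[choice](x)       # only the chosen op runs (single path)

blocks = nn.ModuleList([ChoiceBlock(8) for _ in range(3)])
head = nn.Linear(8, 2)
opt = torch.optim.SGD(list(blocks.parameters()) + list(head.parameters()), lr=0.05)

for step in range(50):
    x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
    path = [random.randrange(2) for _ in blocks]      # uniform path sampling
    h = x
    for blk, c in zip(blocks, path):
        h = blk(h, c)
    loss = nn.functional.cross_entropy(head(h), y)
    opt.zero_grad(); loss.backward(); opt.step()
```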
Architectural advances in deep neural networks have led to remarkable leaps across a range of computer vision tasks. Rather than relying on human expertise, neural architecture search (NAS) has emerged as a promising avenue toward automating architecture design. While recent achievements in image classification suggest opportunities, the promise of NAS has yet to be thoroughly evaluated on the more challenging task of semantic segmentation. The main challenges of applying NAS to semantic segmentation arise from two aspects: (i) the high-resolution images to be processed; (ii) the additional requirement of real-time inference speed (i.e., real-time semantic segmentation) for applications such as autonomous driving. To meet such challenges, we propose a surrogate-assisted multi-objective method in this paper. Through a series of customized prediction models, our method effectively transforms the original NAS task into an ordinary multi-objective optimization problem. Followed by a hierarchical pre-screening criterion for in-fill selection, our method progressively achieves a set of efficient architectures trading off between segmentation accuracy and inference speed. Empirical evaluations on three benchmark datasets, together with an application using the Huawei Atlas 200 DK, suggest that our method can identify architectures significantly outperforming existing state-of-the-art architectures designed both manually by human experts and automatically by other NAS methods.
Neural Architecture Search (NAS) is an automatic technique that can search for well-performing architectures for a specific task. Although NAS surpasses human-designed architectures in many fields, the high computational cost of the architecture evaluation it requires hinders its development. A feasible solution is to directly evaluate some metric of an architecture in its initial state, without any training. The NAS without training (WOT) score is such a metric, which estimates the final trained accuracy of an architecture through its ability to distinguish different inputs in the activation layers. However, the WOT score is not an atomic metric, meaning that it does not represent a fundamental indicator of the architecture. The contributions of this paper are threefold. First, we decouple WOT into two atomic metrics, which represent the distinguishing ability of the network and the number of activation units, and explore better combination rules, named Distinguishing Activation Score (DAS). We prove the correctness of the decoupling theoretically and confirm the effectiveness of the rules experimentally. Second, in order to improve the prediction accuracy of DAS to meet practical search requirements, we propose a fast training strategy. When DAS is used in combination with the fast training strategy, it yields further improvements. Third, we propose a dataset called Darts-training-bench (DTB), which fills the gap that existing datasets contain no training states of architectures. Our proposed method achieves 1.04x - 1.56x improvements on NAS-Bench-101, Network Design Spaces, and the proposed DTB.
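For reference, a WOT-style training-free score can be computed from binary ReLU activation codes over one mini-batch, scored by the log-determinant of their agreement (Hamming-similarity) kernel, following the "NAS without training" idea; the toy network below is an assumption for illustration, not the paper's DAS decomposition.

```python
# Minimal sketch: a training-free score from binary activation patterns.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
x = torch.randn(24, 16)                         # one mini-batch, no training

codes = []
h = x
for layer in net:
    h = layer(h)
    if isinstance(layer, nn.ReLU):
        codes.append((h > 0).float())           # binary activation pattern
c = torch.cat(codes, dim=1)                     # (batch, n_units) binary codes

hamming = c @ c.t() + (1 - c) @ (1 - c).t()     # pairwise agreements between samples
score = torch.logdet(hamming + 1e-3 * torch.eye(len(x)))  # jitter for stability
print(score.item())                             # higher -> inputs better distinguished
```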
Designing accurate and efficient ConvNets for mobile devices is challenging because the design space is combinatorially large. Due to this, previous neural architecture search (NAS) methods are computationally expensive. ConvNet architecture optimality depends on factors such as input resolution and target devices. However, existing approaches are too resource-demanding for case-by-case redesigns. Also, previous work focuses primarily on reducing FLOPs, but FLOP count does not always reflect actual latency. To address these, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods. FBNets (Facebook-Berkeley-Nets), a family of models discovered by DNAS, surpass state-of-the-art models both designed manually and generated automatically. FBNet-B achieves 74.1% top-1 accuracy on ImageNet with 295M FLOPs and 23.1 ms latency on a Samsung S8 phone, 2.4x smaller and 1.5x faster than MobileNetV2-1.3 [17] with similar accuracy. Despite higher accuracy and lower latency than MnasNet [20], we estimate FBNet-B's search cost is 420x smaller than MnasNet's, at only 216 GPU hours. Searched for different resolutions and channel sizes, FBNets achieve 1.5% to 6.4% higher accuracy than MobileNetV2. The smallest FBNet achieves 50.2% accuracy and 2.9 ms latency (345 frames per second) on a Samsung S8. Over a Samsung-optimized FBNet, the iPhone-X-optimized model achieves a 1.4x speedup on an iPhone X. FBNet models are open-sourced at https://github.com/facebookresearch/mobile-vision. (Figure 1 caption: Differentiable neural architecture search (DNAS) for ConvNet design. DNAS explores a layer-wise space in which each layer of a ConvNet can choose a different block. The search space is represented by a stochastic super net. The search process trains the stochastic super net using SGD to optimize the architecture distribution. Optimal architectures are sampled from the trained distribution. The latency of each operator is measured on target devices and used to compute the loss for the super net.)
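The differentiable latency term described in the abstract and the figure caption is typically realized with a per-block latency lookup table, making the expected latency a probability-weighted sum over candidate blocks. A minimal sketch with invented latencies and a simplified additive loss (FBNet's actual loss combines the terms multiplicatively) follows.

```python
# Minimal sketch: a latency-aware, differentiable search objective built from
# a lookup table of per-block latencies measured on the target device.
import torch

op_latency_ms = torch.tensor([3.0, 5.0, 9.0])        # measured per-block latencies
theta = torch.zeros(3, requires_grad=True)           # architecture distribution logits
opt = torch.optim.SGD([theta], lr=0.1)
alpha = 0.2                                          # latency trade-off weight

for step in range(100):
    probs = torch.softmax(theta, dim=-1)
    ce_loss = (probs * torch.tensor([0.9, 0.5, 0.3])).sum()   # toy task loss per block
    expected_latency = (probs * op_latency_ms).sum()          # differentiable latency
    loss = ce_loss + alpha * expected_latency
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(theta, dim=-1))   # trades accuracy against measured latency
```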
Deep learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it according to three dimensions: search space, search strategy, and performance estimation strategy.
Benefiting from its search efficiency, differentiable neural architecture search (NAS) has evolved into the most dominant alternative for automatically designing competitive deep neural networks (DNNs). We note that DNNs must be executed under strict performance constraints in real-world scenarios, e.g., the runtime latency on autonomous vehicles. However, to obtain an architecture that satisfies a given performance constraint, previous hardware-aware differentiable NAS methods have to repeat the search multiple times, manually tuning hyper-parameters by trial and error, so the total design cost increases proportionally. To resolve this, we introduce a lightweight hardware-aware differentiable NAS framework, dubbed LightNAS, striving to find the architectures that satisfy various performance constraints through a one-time search (i.e., you only search once). Extensive experiments are conducted to show the superiority of LightNAS over previous state-of-the-art methods.
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state-of-the-art classification accuracies on CIFAR-10 and ImageNet.
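The SMBO loop, expanding structures in order of increasing complexity while a surrogate prunes which candidates get fully evaluated, can be sketched as follows; the `surrogate` heuristic and the `true_acc` evaluator are toy stand-ins for the paper's learned predictor and actual model training.

```python
# Minimal sketch: progressive, surrogate-guided search over op sequences of
# increasing length, evaluating only the most promising candidates.
import random

random.seed(0)
ops, max_len, beam = ("conv3", "conv5", "pool"), 3, 4

def true_acc(a):                              # toy "train and evaluate" stand-in
    return 0.5 + 0.1 * a.count("conv3") - 0.05 * a.count("pool") + random.gauss(0, 0.01)

history = []                                  # (architecture, measured accuracy)

def surrogate(a):                             # toy predictor from evaluated prefixes
    vals = [acc for arch, acc in history if arch == a[:len(arch)]]
    return sum(vals) / len(vals) if vals else 0.5

frontier = [(o,) for o in ops]
for length in range(1, max_len + 1):
    ranked = sorted(frontier, key=surrogate, reverse=True)[:beam]  # surrogate prunes
    for a in ranked:                          # only promising candidates are evaluated
        history.append((a, true_acc(a)))
    frontier = [a + (o,) for a in ranked for o in ops]             # grow complexity

print(max(history, key=lambda t: t[1]))       # best structure found
```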