Hyperparameter optimization in machine learning is often carried out with naive techniques that only find an approximate set of hyperparameters. Although methods such as Bayesian optimization perform an intelligent search over a given domain of hyperparameters, they do not guarantee an optimal solution. A major drawback of most of these approaches is that their search domain grows exponentially with the number of hyperparameters, which increases the computational cost and makes the methods slow. The hyperparameter optimization problem is inherently a bilevel optimization task, and some studies have attempted bilevel solution methods to solve it. However, those studies assume a unique set of model weights minimizing the training loss, an assumption that is typically violated by deep learning architectures. This paper discusses a gradient-based bilevel method that addresses these drawbacks for the hyperparameter optimization problem. The proposed method can handle continuous hyperparameters, for which we choose the regularization hyperparameter in our experiments. The method guarantees convergence to a set of optimal hyperparameters, which is proven theoretically in this study. The idea is based on approximating the lower-level optimal value function using Gaussian process regression. As a result, the bilevel problem is reduced to a single-level constrained optimization task that is solved with the augmented Lagrangian method. We have performed an extensive computational study on the MNIST and CIFAR-10 datasets with multilayer perceptron and LeNet architectures to confirm the efficiency of the method. A comparative study against grid search, random search, Bayesian optimization, and Hyperband shows that the proposed algorithm converges with lower computation and leads to models that generalize better on the test set.
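A minimal sketch of the mechanism this abstract describes, under the assumption of a toy ridge-regression task: the lower-level optimal value function V(lambda) is approximated with a Gaussian process fit to a few exactly solved instances, and the resulting single-level constrained problem is handled with a basic augmented Lagrangian loop. All data, solver choices, and settings here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X_tr, X_val = rng.normal(size=(80, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
y_tr = X_tr @ w_true + 0.3 * rng.normal(size=80)
y_val = X_val @ w_true + 0.3 * rng.normal(size=40)

def train_obj(w, lam):            # lower-level objective (ridge training loss)
    return np.mean((X_tr @ w - y_tr) ** 2) + lam * (w @ w)

def val_loss(w):                  # upper-level objective
    return np.mean((X_val @ w - y_val) ** 2)

# 1) Sample the lower-level optimal value function V(lam) and fit a GP to it.
lams = np.linspace(1e-3, 1.0, 15)
V = []
for lam in lams:
    w_star = np.linalg.solve(X_tr.T @ X_tr / len(y_tr) + lam * np.eye(5),
                             X_tr.T @ y_tr / len(y_tr))
    V.append(train_obj(w_star, lam))
gp = GaussianProcessRegressor().fit(lams.reshape(-1, 1), np.array(V))

# 2) Single-level problem:  min val_loss(w)  s.t.  train_obj(w, lam) <= V_hat(lam),
#    handled by a basic augmented Lagrangian loop on the inequality constraint.
mu, rho = 0.0, 10.0
z = np.concatenate([np.zeros(5), [0.1]])      # decision variables: (w, lam)
for _ in range(10):
    def aug_lag(z):
        w, lam = z[:5], z[5]
        g = train_obj(w, lam) - gp.predict([[lam]])[0]       # constraint value
        return val_loss(w) + (rho / 2.0) * max(0.0, g + mu / rho) ** 2
    z = minimize(aug_lag, z, method="Nelder-Mead", options={"maxiter": 2000}).x
    g = train_obj(z[:5], z[5]) - gp.predict([[z[5]]])[0]
    mu = max(0.0, mu + rho * g)                              # multiplier update

print("selected lambda:", z[5], "validation loss:", val_loss(z[:5]))
```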
Bilevel optimization (BO) is useful for solving a variety of important machine learning problems, including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning. Conventional BO methods need to differentiate through the lower-level optimization process with implicit differentiation, which requires expensive computations related to the Hessian matrix. There has been a recent quest for first-order methods for BO, but the methods proposed to date tend to be complicated and impractical for large-scale deep learning applications. In this work, we propose a simple first-order BO algorithm that depends only on first-order gradient information, requires no implicit differentiation, and is practical and efficient for large-scale non-convex functions. We provide a non-asymptotic convergence analysis of the proposed method to stationary points for non-convex objectives and present empirical results that show its superior practical performance.
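The following toy sketch illustrates one generic way a fully first-order scheme can avoid implicit differentiation, via a value-function penalty: the lower-level optimum is approximated by a few inner gradient steps, and plain gradients are then taken on a penalized single-level objective. This is a hand-rolled illustration of that general idea, not the paper's algorithm; the bilevel test problem, penalty weight, and step sizes are assumptions.

```python
import numpy as np

def f(x, y):                 # upper-level objective
    return (y - 1.0) ** 2 + x ** 2

def g(x, y):                 # lower-level objective, minimized over y for fixed x
    return (y - x) ** 2

x, y = 2.0, 0.0
rho, lr = 10.0, 0.02
for _ in range(2000):
    # Approximate the lower-level optimum y_T(x) with a few inner gradient steps
    # starting from the current y (first-order information only).
    y_T = y
    for _ in range(10):
        y_T = y_T - 0.2 * 2.0 * (y_T - x)
    # Penalized single-level objective: f(x, y) + rho * (g(x, y) - g(x, y_T)).
    # Plain gradients only: no Hessians, no implicit differentiation.
    grad_x = 2.0 * x + rho * (-2.0 * (y - x) + 2.0 * (y_T - x))
    grad_y = 2.0 * (y - 1.0) + rho * 2.0 * (y - x)
    x, y = x - lr * grad_x, y - lr * grad_y

print("x, y after optimization:", x, y)   # approaches roughly (0.5, 0.5)
```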
Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a "black art" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.
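The sketch below illustrates the basic loop this abstract builds on: a GP posterior over the objective drives an acquisition function that chooses the next evaluation. Expected Improvement is used here as one standard acquisition choice, and the toy objective, Matérn kernel, and grid-based acquisition maximization are assumptions for illustration; the paper's specific priors, cost-aware acquisitions, and parallel variants are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                       # expensive black box (toy stand-in)
    return np.sin(3 * x) + 0.1 * x ** 2

grid = np.linspace(-3, 3, 400).reshape(-1, 1)
X = np.array([[-2.0], [0.5], [2.5]])    # a few initial evaluations
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()                      # we minimize the objective
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # Expected Improvement
    x_next = grid[np.argmax(ei)]        # next point to evaluate
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best x:", X[np.argmin(y)].item(), "best value:", y.min())
```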
Gradient-based optimization methods for hyperparameter tuning guarantee theoretical convergence to stationary solutions when the lower level of the bilevel program is strongly convex (LLSC) and smooth (LLS) for fixed values of the upper-level variables. This condition is not satisfied for bilevel programs arising from tuning hyperparameters in many machine learning algorithms. In this work, we develop a sequentially convergent value-function-based inexact difference-of-convex algorithm (VF-iDCA). We show that this algorithm achieves stationary solutions without the LLSC and LLS assumptions for a range of hyperparameter tuning applications. Our extensive experiments confirm our theoretical findings and show that the proposed VF-iDCA yields superior performance when applied to tune hyperparameters.
Two-level stochastic optimization formulations have become instrumental in a number of machine learning contexts such as continual learning, neural architecture search, adversarial learning, and hyperparameter tuning. Practical stochastic bilevel optimization problems become challenging in optimization or learning scenarios where the number of variables is high or there are constraints. In this paper, we introduce a bilevel stochastic gradient method for bilevel problems with lower-level constraints. We also present a comprehensive convergence theory that covers all inexact calculations of the adjoint gradient (also called hypergradient) and addresses both the lower-level unconstrained and constrained cases. To promote the use of bilevel optimization in large-scale learning, we introduce a practical bilevel stochastic gradient method (BSG-1) that does not require second-order derivatives and, in the lower-level unconstrained case, dismisses any system solves and matrix-vector products.
We introduce a framework based on bilevel programming that unifies gradient-based hyperparameter optimization and meta-learning. We show that an approximate version of the bilevel problem can be solved by taking into explicit account the optimization dynamics for the inner objective. Depending on the specific setting, the outer variables take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We provide sufficient conditions under which solutions of the approximate problem converge to those of the exact problem. We instantiate our approach for meta-learning in the case of deep learning where representation layers are treated as hyperparameters shared across a set of training episodes. In experiments, we confirm our theoretical findings, present encouraging results for few-shot learning and contrast the bilevel approach against classical approaches for learning-to-learn.
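A minimal PyTorch sketch of the approximate bilevel problem described above: the inner optimization dynamics are unrolled explicitly so that the validation loss can be differentiated with respect to the outer variable, here a single regularization hyperparameter. The task, step sizes, and number of unrolled steps are illustrative assumptions, not the paper's experimental setup.

```python
import torch

torch.manual_seed(0)
X_tr, y_tr = torch.randn(100, 10), torch.randn(100)
X_val, y_val = torch.randn(50, 10), torch.randn(50)

log_lam = torch.zeros(1, requires_grad=True)        # outer variable (log reg. strength)
outer_opt = torch.optim.Adam([log_lam], lr=0.05)

for outer_step in range(50):
    w = torch.zeros(10, requires_grad=True)         # inner variable, re-initialized
    lam = log_lam.exp()
    for _ in range(30):                              # unrolled inner gradient descent
        train_loss = ((X_tr @ w - y_tr) ** 2).mean() + lam * (w ** 2).sum()
        g, = torch.autograd.grad(train_loss, w, create_graph=True)
        w = w - 0.05 * g                             # keep the graph through each step
    val_loss = ((X_val @ w - y_val) ** 2).mean()
    outer_opt.zero_grad()
    val_loss.backward()                              # hypergradient via unrolling
    outer_opt.step()

print("learned regularization strength:", log_lam.exp().item())
```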
The non-convexity of the training landscape of artificial neural networks (ANNs) brings inherent optimization difficulties. While the traditional back-propagation stochastic gradient descent (SGD) algorithm and its variants are effective in certain cases, they can become stuck at spurious local minima and are sensitive to initialization and hyperparameters. Recent work has shown that the training of an ANN with ReLU activations can be reformulated as a convex program, bringing hope of globally optimizing interpretable ANNs. However, naively solving the convex training formulation has exponential complexity, and even an approximate heuristic requires cubic time. In this work, we characterize the quality of this approximation and develop two efficient algorithms that train ANNs with global convergence guarantees. The first algorithm is based on the alternating direction method of multipliers (ADMM). It solves both the exact convex formulation and its approximate counterpart. Linear global convergence is achieved, and the initial several iterations often yield a solution with high prediction accuracy. When solving the approximate formulation, the per-iteration time complexity is quadratic. The second algorithm, based on the theory of sampled convex programs, is simpler to implement. It solves unconstrained convex formulations and converges to an approximately globally optimal classifier. The non-convexity of the ANN training landscape is exacerbated when adversarial training is considered. We apply robust convex optimization theory to convex training and develop convex formulations that train ANNs robust to adversarial inputs. Our analysis explicitly focuses on one-hidden-layer fully connected ANNs, but can be extended to more complex architectures.
Bilevel optimization has found extensive applications in modern machine learning problems such as hyperparameter optimization, neural architecture search, and meta-learning. While bilevel problems with a unique inner minimal point (e.g., where the inner function is strongly convex) are well understood, problems with multiple inner minimal points remain challenging and open. Existing algorithms designed for this problem are applicable only to restricted situations and do not come with full convergence guarantees. In this paper, we adopt a reformulation of bilevel optimization as a constrained optimization problem and solve it with a primal-dual bilevel optimization (PDBO) algorithm. PDBO not only addresses the multiple-inner-minima challenge, but also features fully first-order efficiency without involving second-order Hessian and Jacobian computations, in contrast to most existing gradient-based bilevel algorithms. We further characterize the convergence rate of PDBO, which is the first known non-asymptotic convergence guarantee for bilevel optimization with multiple inner minima. Our experiments demonstrate the desired performance of the proposed approach.
Recent advances in computationally efficient non-myopic Bayesian optimization (BO) improve query efficiency over traditional myopic methods like expected improvement, while only modestly increasing computational cost. These advances have largely been limited, however, to unconstrained optimization. For constrained optimization, the few existing non-myopic BO methods require heavy computation. For example, one existing non-myopic constrained BO method [Lam and Willcox, 2017] relies on computationally expensive, unreliable, brute-force derivative-free optimization of a Monte Carlo rollout acquisition function. Methods that use the reparameterization trick for more efficient derivative-based optimization of the acquisition function in the unconstrained setting, such as sample average approximation and infinitesimal perturbation analysis, do not extend: constraints introduce discontinuities in the sampled acquisition function surface that hinder its optimization. Moreover, we argue that being non-myopic is even more important in constrained problems, because the fear of violating constraints pushes myopic methods away from the boundary between feasible and infeasible regions, slowing the discovery of optimal solutions with tight constraints. In this paper, we propose a computationally efficient two-step lookahead constrained Bayesian optimization acquisition function (2-OPT-C) supporting both sequential and batch settings. To enable fast acquisition function optimization, we develop a novel likelihood-ratio-based unbiased estimator of the gradient of the two-step optimal acquisition function that does not use the reparameterization trick. In numerical experiments, 2-OPT-C typically improves query efficiency by 2x or more over previous methods, and in some cases by 10x or more.
The hyperparameter optimization of a neural network can be expressed as a bilevel optimization problem. Bilevel optimization is used to update the hyperparameters automatically, and the hyperparameter gradient is an approximate gradient based on the best-response function. Finding the best-response function is very time-consuming. In this paper we propose CPMLHO, a new hyperparameter optimization method using the cutting plane method and a mixed-level objective function. The cutting plane is added to the inner layer to constrain the space of the response function. To obtain a more accurate hypergradient, the mixed-level objective can flexibly adjust the loss function by using the losses on the training set and the validation set. Compared to existing methods, the experimental results show that our method can automatically update the hyperparameters during training, and can find superior hyperparameters with higher accuracy and faster convergence.
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation and early-stopping. We formulate hyperparameter optimization as a pure-exploration nonstochastic infinite-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce a novel algorithm, Hyperband, for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with popular Bayesian optimization methods on a suite of hyperparameter optimization problems. We observe that Hyperband can provide over an order-of-magnitude speedup over our competitor set on a variety of deep-learning and kernel-based learning problems.
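To make the resource-allocation idea concrete, here is a small sketch of a successive-halving style routine of the kind Hyperband runs inside each of its brackets: many randomly sampled configurations receive a small budget, and only the top 1/eta fraction advances to a larger budget. The toy evaluate function and the bracket settings are assumptions; full Hyperband additionally loops over several (n, min_resource) brackets to hedge against the unknown trade-off between the number of configurations and the budget per configuration.

```python
import random

def evaluate(config, resource):
    # Stand-in for training `config` for `resource` epochs and returning a loss.
    return (config["lr"] - 0.1) ** 2 + 1.0 / resource + random.gauss(0, 0.01)

def successive_halving(n=27, min_resource=1, eta=3):
    configs = [{"lr": 10 ** random.uniform(-4, 0)} for _ in range(n)]
    resource = min_resource
    while len(configs) > 1:
        scores = [(evaluate(c, resource), c) for c in configs]
        scores.sort(key=lambda t: t[0])                  # keep the best 1/eta
        configs = [c for _, c in scores[: max(1, len(configs) // eta)]]
        resource *= eta                                  # give survivors more budget
    return configs[0]

random.seed(0)
print("selected configuration:", successive_halving())
```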
Bayesian optimization is an effective methodology for the global optimization of functions with expensive evaluations. It relies on querying a distribution over functions defined by a relatively cheap surrogate model. An accurate model for this distribution over functions is critical to the effectiveness of the approach, and is typically fit using Gaussian processes (GPs). However, since GPs scale cubically with the number of observations, it has been challenging to handle objectives whose optimization requires many evaluations, and as such, massively parallelizing the optimization. In this work, we explore the use of neural networks as an alternative to GPs to model distributions over functions. We show that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data rather than cubically. This allows us to achieve a previously intractable degree of parallelism, which we apply to large scale hyperparameter optimization, rapidly finding competitive models on benchmark object recognition tasks using convolutional networks, and image caption generation using neural language models.
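A hedged sketch of the adaptive-basis-function idea in the abstract above: a small network is fit to the observations, its last hidden layer is reused as a feature map, and Bayesian linear regression on those features supplies predictive means and variances at a cost linear in the number of observations. The architecture, priors, and toy data are assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.linspace(-3, 3, 30).unsqueeze(1)
y = torch.sin(2 * X) + 0.1 * torch.randn_like(X)

basis = nn.Sequential(nn.Linear(1, 50), nn.Tanh(), nn.Linear(50, 50), nn.Tanh())
head = nn.Linear(50, 1)
opt = torch.optim.Adam(list(basis.parameters()) + list(head.parameters()), lr=1e-2)
for _ in range(2000):                                   # fit the network to the data
    opt.zero_grad()
    loss = ((head(basis(X)) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Bayesian linear regression on the learned basis Phi: cost is linear in the
# number of observations and cubic only in the fixed basis dimension.
alpha, beta = 1.0, 100.0                                # prior / noise precision (assumed)
Phi = basis(X).detach().numpy()                         # N x D design matrix
A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
m = beta * np.linalg.solve(A, Phi.T @ y.numpy())        # posterior mean weights

X_test = torch.linspace(-3, 3, 200).unsqueeze(1)
Phi_t = basis(X_test).detach().numpy()
mean = Phi_t @ m                                        # predictive mean
var = 1.0 / beta + np.einsum("nd,nd->n", Phi_t, np.linalg.solve(A, Phi_t.T).T)
print(mean.shape, var.shape)                            # predictive mean and variance
```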
In deterministic optimization, it is typically assumed that all parameters of the problem are fixed and known. In practice, however, some parameters may be unknown a priori but can be estimated from historical data. A typical predict-then-optimize approach separates prediction and optimization into two stages. Recently, end-to-end predict-then-optimize has become an attractive alternative. In this work, we present the PyEPO package, a PyTorch-based end-to-end predict-then-optimize library in Python. To the best of our knowledge, PyEPO (pronounced like "pineapple" with a silent "n") is the first generic tool of this kind for linear and integer programming with predicted objective function coefficients. It provides two base algorithms: the first is based on the convex surrogate loss function from the seminal work of Elmachtoub & Grigas (2021), and the second is based on the differentiable black-box solver approach of Vlastelica et al. (2019). PyEPO provides a simple interface for defining new optimization problems, state-of-the-art predict-then-optimize training algorithms, the use of custom neural network architectures, and the comparison of end-to-end approaches with the two-stage approach. PyEPO enables us to conduct a comprehensive set of experiments comparing a number of end-to-end and two-stage approaches along axes such as prediction accuracy, decision quality, and running time, on problems such as shortest path, multiple knapsack, and the traveling salesperson problem. We discuss some empirical insights from these experiments that could guide future research. PyEPO and its documentation are available at https://github.com/khalil-research/pyepo.
As machine learning being used increasingly in making high-stakes decisions, an arising challenge is to avoid unfair AI systems that lead to discriminatory decisions for protected population. A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints, which achieves Pareto efficiency when trading off performance against fairness. Among various fairness metrics, the ones based on the area under the ROC curve (AUC) are emerging recently because they are threshold-agnostic and effective for unbalanced data. In this work, we formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints. This problem can be reformulated as a min-max optimization problem with min-max constraints, which we solve by stochastic first-order methods based on a new Bregman divergence designed for the special structure of the problem. We numerically demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
Bayesian optimization has emerged at the forefront of expensive black-box optimization due to its data efficiency. Recent years have witnessed a proliferation of studies on the development of new Bayesian optimization algorithms and their applications. Hence, this paper attempts to provide a comprehensive and updated survey of recent advances in Bayesian optimization and to identify interesting open problems. We categorize the existing work on Bayesian optimization into nine main groups according to the motivation and focus of the proposed algorithms. For each category, we present the main advances with respect to the construction of surrogate models and the adaptation of acquisition functions. Finally, we discuss open questions and suggest promising future research directions, in particular with regard to heterogeneity, privacy preservation, and fairness in distributed and federated optimization systems.
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yields sequences of Jacobians that converge toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approximately. Results on regression and classification problems reveal computational benefits for hyperparameter optimization, especially when multiple hyperparameters are required.
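A numpy sketch of how the non-smoothness can be exploited, using the Lasso as the non-smooth inner problem: after solving the inner problem with proximal gradient descent (ISTA), implicit differentiation of the optimality conditions restricted to the active set gives the hypergradient of the validation loss with respect to the regularization parameter. The data, step size, and single-hyperparameter setup are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 30
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = rng.normal(size=5)
y = X @ beta_true + 0.1 * rng.normal(size=n)
X_val = rng.normal(size=(50, p))
y_val = X_val @ beta_true + 0.1 * rng.normal(size=50)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(lam, iters=2000):
    # inner problem: min_beta (1/2n) ||y - X beta||^2 + lam * ||beta||_1
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant
    for _ in range(iters):
        grad = -X.T @ (y - X @ beta) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

lam = 0.1
beta_hat = lasso_ista(lam)
S = np.flatnonzero(beta_hat)                      # active set of the solution
X_S = X[:, S]
# Implicit differentiation of the optimality conditions on the support:
#   X_S^T X_S * d(beta_S)/d(lam) = -n * sign(beta_S)
dbeta_S = np.linalg.solve(X_S.T @ X_S, -n * np.sign(beta_hat[S]))
grad_val = X_val.T @ (X_val @ beta_hat - y_val) / len(y_val)   # d val_loss / d beta
hypergradient = grad_val[S] @ dbeta_S
print("d(validation loss)/d(lambda) ~", hypergradient)
```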
We propose an algorithm for inexpensive gradient-based hyperparameter optimization that combines the implicit function theorem (IFT) with efficient inverse Hessian approximations. We present results on the relationship between the IFT and differentiating through optimization, motivating our algorithm. We use the proposed approach to train modern network architectures with millions of weights and millions of hyperparameters. We learn a data-augmentation network, where every weight is a hyperparameter tuned for validation performance, that outputs augmented training examples; we learn a distilled dataset where each feature in each datapoint is a hyperparameter; and we tune millions of regularization hyperparameters. Jointly tuning weights and hyperparameters with our approach is only a few times more costly in memory and compute than standard training.
• We scale IFT-based hyperparameter optimization to modern, large neural architectures, including AlexNet and LSTM-based language models.
• We demonstrate several uses for fitting hyperparameters almost as easily as weights, including per-parameter regularization, data distillation, and learned-from-scratch data augmentation methods.
• We explore how training-validation splits should change when tuning many hyperparameters.
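A hedged PyTorch sketch of the IFT-based hypergradient with a truncated Neumann series standing in for the inverse-Hessian-vector product, which is one common efficient approximation of the kind the abstract refers to. The tiny ridge-regression task, the number of Neumann terms, and all step sizes are illustrative assumptions rather than the paper's experimental setup.

```python
import torch

torch.manual_seed(0)
X_tr, y_tr = torch.randn(100, 10), torch.randn(100)
X_val, y_val = torch.randn(50, 10), torch.randn(50)

w = torch.zeros(10, requires_grad=True)           # model weights
log_lam = torch.tensor(0.0, requires_grad=True)   # hyperparameter

def train_loss(w, log_lam):
    return ((X_tr @ w - y_tr) ** 2).mean() + log_lam.exp() * (w ** 2).sum()

def val_loss(w):
    return ((X_val @ w - y_val) ** 2).mean()

# 1) Approximately solve the inner problem (weights) by plain gradient descent.
for _ in range(500):
    g, = torch.autograd.grad(train_loss(w, log_lam), w)
    with torch.no_grad():
        w -= 0.05 * g

# 2) Neumann-series approximation of  H^{-1} v,  where H = d^2 L_train / dw^2
#    and v = d L_val / dw, using only Hessian-vector products.
v, = torch.autograd.grad(val_loss(w), w)
p, alpha = v.clone(), 0.05
acc = v.clone()
for _ in range(50):
    gw, = torch.autograd.grad(train_loss(w, log_lam), w, create_graph=True)
    hvp, = torch.autograd.grad(gw, w, grad_outputs=p)
    p = p - alpha * hvp                           # accumulates (I - alpha H)^j v
    acc = acc + p
inv_hvp = alpha * acc

# 3) Mixed second derivative times that vector gives the indirect hypergradient:
#    dL_val/dlam = - d^2 L_train/(dlam dw) @ H^{-1} dL_val/dw
gw, = torch.autograd.grad(train_loss(w, log_lam), w, create_graph=True)
mixed, = torch.autograd.grad(gw, log_lam, grad_outputs=inv_hvp)
hypergradient = -mixed
print("d(val loss)/d(log lambda) ~", hypergradient.item())
```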
We develop fast algorithms and robust software for convex optimization of two-layer neural networks with ReLU activation functions. Our work leverages a convex reformulation of the standard weight-decay penalized training problem as a set of group-$\ell_1$-regularized data-local models, where locality is enforced by polyhedral cone constraints. In the special case of zero regularization, we show that this problem is exactly equivalent to unconstrained optimization of a convex "gated ReLU" network. For problems with non-zero regularization, we show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem. To optimize the convex reformulations, we develop an accelerated proximal gradient method and a practical augmented Lagrangian solver. We show that these approaches are faster than standard training heuristics for the non-convex problem, such as SGD, and outperform commercial interior-point solvers. Experimentally, we verify our theoretical results, explore the group-$\ell_1$ regularization path, and scale convex optimization of neural networks to image classification on MNIST and CIFAR-10.
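A numpy sketch of the convex "gated ReLU" formulation mentioned above, under the assumption that activation patterns are sampled from random gate vectors: the per-gate weights solve a group-$\ell_1$-regularized least-squares problem, optimized here with plain proximal gradient descent and group soft-thresholding. Problem sizes, the number of sampled gates, and step sizes are illustrative assumptions, and this plain solver stands in for the paper's accelerated and augmented Lagrangian methods.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, P = 200, 10, 40                        # samples, input dim, sampled gates
X = rng.normal(size=(n, d))
y = np.maximum(X @ rng.normal(size=d), 0) + 0.1 * rng.normal(size=n)

# Gate patterns d_i = 1[X g_i >= 0] from randomly sampled vectors g_i.
gates = (X @ rng.normal(size=(d, P)) >= 0).astype(float)    # shape (n, P)

def predict(U):                              # f(x) = sum_i diag(d_i) X u_i
    return np.einsum("np,nd,dp->n", gates, X, U)

lam = 0.05                                   # group-l1 regularization strength
step = n / (np.linalg.norm(X, 2) ** 2 * P)   # crude 1/Lipschitz estimate
U = np.zeros((d, P))

for _ in range(500):                         # proximal gradient descent
    resid = predict(U) - y
    grad = np.einsum("n,np,nd->dp", resid, gates, X) / n
    V = U - step * grad
    norms = np.maximum(np.linalg.norm(V, axis=0, keepdims=True), 1e-12)
    U = V * np.maximum(1.0 - step * lam / norms, 0.0)        # group soft-threshold

active = int((np.linalg.norm(U, axis=0) > 1e-8).sum())
print("training MSE:", np.mean((predict(U) - y) ** 2), "| active gates:", active)
```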
Hyperparameter optimization (HPO) is a necessary step to ensure the best possible performance of a machine learning (ML) algorithm. Several methods have been developed to perform HPO; most of them focus on optimizing a single performance measure (typically an error-based measure), and the literature on such single-objective HPO problems is vast. Recently, however, algorithms have appeared that focus on optimizing multiple conflicting objectives simultaneously. This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms, distinguishing between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both. We also discuss quality metrics used to compare multi-objective HPO procedures and present future research directions.
The growing size of deep neural networks (DNNs) and datasets motivates the need for efficient solutions for simultaneous model selection and training. Many methods for hyperparameter optimization (HPO) of iterative learners, including DNNs, attempt to solve this problem by querying and learning a response surface while searching for its optimum. However, many of these methods make myopic queries, do not consider prior knowledge about the response structure, and/or perform a biased cost-aware search, all of which exacerbate identifying the best-performing model when a total cost budget is specified. This paper proposes a novel approach, referred to as BAPI, for solving HPO problems of iterative learners under a constrained cost budget. BAPI is an efficient non-myopic Bayesian optimization solution that accounts for the budget and leverages prior knowledge about the objective and cost functions to select better configurations and to make more informed decisions during the evaluation (training). Experiments on diverse HPO benchmarks for iterative learners show that BAPI performs better than state-of-the-art baselines in most cases.