In this paper, we present a pure-Python open-source library, called PyPop7, for black-box optimization (BBO). It provides a unified and modular interface for more than 60 versions and variants of black-box optimization algorithms, particularly population-based optimizers, which can be classified into 12 popular families: Evolution Strategies (ES), Natural Evolution Strategies (NES), Estimation of Distribution Algorithms (EDA), Cross-Entropy Method (CEM), Differential Evolution (DE), Particle Swarm Optimizer (PSO), Cooperative Coevolution (CC), Simulated Annealing (SA), Genetic Algorithms (GA), Evolutionary Programming (EP), Pattern Search (PS), and Random Search (RS). It also provides many examples, interesting tutorials, and full-fledged API documentation. Through this new library, we expect to provide a well-designed platform for benchmarking optimizers and to promote their real-world applications, especially for large-scale BBO. Its source code and documentation are available at https://github.com/Evolutionary-Intelligence/pypop and https://pypop.readthedocs.io/en/latest, respectively.
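As a rough sketch of how such a unified interface is typically driven (the dictionary keys and the LMMAES module path follow the PyPop7 documentation but should be double-checked there; the Rosenbrock function and all settings are illustrative only):

```python
import numpy as np
from pypop7.optimizers.es.lmmaes import LMMAES  # one of the >60 provided optimizers

def rosenbrock(x):  # black-box fitness function to be minimized
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

ndim = 1000  # a large-scale setting
problem = {'fitness_function': rosenbrock,        # keys as documented in PyPop7
           'ndim_problem': ndim,
           'lower_boundary': -5.0 * np.ones((ndim,)),
           'upper_boundary': 5.0 * np.ones((ndim,))}
options = {'max_function_evaluations': 100000,    # stopping criterion
           'seed_rng': 2022,
           'sigma': 3.0}                          # initial global step-size
results = LMMAES(problem, options).optimize()
print(results['best_so_far_y'])
```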
The deep learning revolution has been greatly accelerated by the 'hardware lottery': recent advances in modern hardware accelerators and compilers paved the way for large-scale batch gradient optimization. Evolutionary optimization, on the other hand, has mainly relied on CPU parallelism, e.g., using Dask scheduling and distributed multi-host infrastructure. Here we argue that modern evolutionary computation can also significantly benefit from the massive computational throughput provided by GPUs and TPUs. In order to better harness these resources and to enable the next generation of black-box optimization algorithms, we release evosax: a JAX-based library of evolution strategies which allows researchers to leverage powerful function transformations such as just-in-time compilation, automatic vectorization, and hardware parallelization. evosax implements 30 evolutionary optimization algorithms, including finite-difference-based and estimation-of-distribution evolution strategies as well as various genetic algorithms. Every single algorithm can be executed directly on hardware accelerators and automatically vectorized or parallelized across devices using a single line of code. It is designed in a modular fashion and allows for flexible usage via a simple ask-evaluate-tell API. We thereby hope to facilitate a new wave of scalable evolutionary optimization algorithms.
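A minimal sketch of the ask-evaluate-tell loop described above, assuming the evosax CMA_ES class and its default_params/initialize/ask/tell interface as shown in the project README (exact names may differ between versions; the sphere objective is a toy example):

```python
import jax
import jax.numpy as jnp
from evosax import CMA_ES  # one of the 30 implemented strategies

def sphere(x):  # toy objective; vmapped across the population below
    return jnp.sum(x ** 2)

rng = jax.random.PRNGKey(0)
strategy = CMA_ES(popsize=32, num_dims=10)
es_params = strategy.default_params
state = strategy.initialize(rng, es_params)

for _ in range(50):
    rng, rng_ask = jax.random.split(rng)
    x, state = strategy.ask(rng_ask, state, es_params)   # sample candidate solutions
    fitness = jax.vmap(sphere)(x)                         # auto-vectorized evaluation
    state = strategy.tell(x, fitness, state, es_params)   # update the search distribution

print(state.best_fitness)
```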
To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the past years, the number of efficient algorithms and tools for HPO has grown substantially. At the same time, the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows this extendable set of multi-fidelity HPO benchmarks to be run in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study of 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/hpobench.
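A hypothetical usage sketch of querying one multi-fidelity benchmark; the XGBoostBenchmark class, module path, and task_id below are assumptions for illustration and should be verified against the HPOBench documentation:

```python
# Illustrative only: the benchmark class and constructor arguments are assumptions.
from hpobench.benchmarks.ml.xgboost_benchmark import XGBoostBenchmark

benchmark = XGBoostBenchmark(task_id=167119, rng=1)          # hypothetical OpenML task id
config = benchmark.get_configuration_space(seed=1).sample_configuration()
fidelity = benchmark.get_fidelity_space(seed=1).sample_configuration()  # reduced fidelity
result = benchmark.objective_function(configuration=config,
                                      fidelity=fidelity, rng=1)
print(result['function_value'], result['cost'])              # loss and evaluation cost
```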
An important characteristic of complex systems is a problem domain with many local minima and substructure. Biological systems manage these local minima by switching between different subsystems depending on the environmental or developmental context. Genetic algorithms (GAs) can mimic this switching property and provide a means of overcoming the complexity of the problem domain. However, a standard GA requires additional operators that allow large-scale exploration in a stochastic manner. Gradient-free heuristic search techniques are well suited to providing optimal solutions for such single-objective optimization tasks in discrete domains, especially in comparison with notably slower gradient-based methods. To this end, the authors turn to an optimization problem from the flight-planning domain. The authors compare the performance of common gradient-free heuristic search algorithms and propose variants of GAs. An iterated chaining (IC) method is also introduced, which builds on traditional chaining techniques by triggering multiple local searches instead of the singular action of a mutation operator. The authors show that the use of multiple local searches can improve the performance of local stochastic search, offering ample opportunity for many other problem domains. It is observed that the proposed GA variant achieves the lowest average cost on all benchmarks, including the proposed problem, and that the IC algorithm outperforms its constituents.
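The iterated chaining idea, replacing a single mutation step with several chained local searches, can be illustrated with a minimal bit-string sketch (the helper functions and parameters below are hypothetical and not the authors' implementation):

```python
import random

def local_search(candidate, fitness, n_steps=20):
    """Hill-climb by single-bit flips; return the best neighbour found."""
    best = candidate[:]
    for _ in range(n_steps):
        trial = best[:]
        i = random.randrange(len(trial))
        trial[i] = 1 - trial[i]
        if fitness(trial) >= fitness(best):
            best = trial
    return best

def iterated_chaining(candidate, fitness, n_chains=5):
    """Instead of one mutation, chain several local searches from the current point."""
    current = candidate[:]
    for _ in range(n_chains):
        current = local_search(current, fitness)
    return current

# Toy usage: maximize the number of ones (OneMax).
onemax = lambda bits: sum(bits)
seed = [random.randint(0, 1) for _ in range(32)]
print(onemax(iterated_chaining(seed, onemax)))
```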
In this paper, we propose a simple strategy for estimating the convergence point by averaging an elite sub-population. Based on this idea, we derive two methods: a plain averaging strategy and a weighted averaging strategy. We also design a Gaussian sampling operator whose mean is the estimated convergence point and whose standard deviation is fixed in advance. This operator is combined with the traditional differential evolution (DE) algorithm to accelerate convergence. Numerical experiments show that our proposal accelerates DE on most of the 28 low-dimensional test functions of the CEC2013 suite, and that it can easily be extended, with a simple modification, to combine with other population-based evolutionary algorithms.
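A minimal NumPy sketch of the two estimation strategies and the Gaussian sampling operator; the rank-based weighting scheme is an assumption, since the abstract does not specify the exact weights:

```python
import numpy as np

def estimate_convergence_point(population, fitness, elite_ratio=0.2, weighted=True):
    """Average the elite sub-population (minimization), optionally fitness-weighted."""
    n_elite = max(1, int(elite_ratio * len(population)))
    elite = population[np.argsort(fitness)[:n_elite]]
    if not weighted:
        return elite.mean(axis=0)                       # plain averaging strategy
    # Weighted averaging: better (lower) fitness gets a larger weight (assumed rank-based).
    ranks = np.arange(n_elite, 0, -1).astype(float)
    return (elite * (ranks / ranks.sum())[:, None]).sum(axis=0)

def gaussian_sampling(point, sigma, n_samples, rng=np.random.default_rng(0)):
    """Sample candidate solutions around the estimated convergence point."""
    return rng.normal(loc=point, scale=sigma, size=(n_samples, point.shape[0]))

pop = np.random.default_rng(1).uniform(-5, 5, size=(50, 10))
fit = (pop ** 2).sum(axis=1)
center = estimate_convergence_point(pop, fit)
print(gaussian_sampling(center, sigma=0.5, n_samples=5).shape)
```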
Metaheuristics are popularly used in various fields, and they have attracted much attention in the scientific and industrial communities. In recent years, the number of new metaheuristic names has been continuously growing. Generally, the inventors attribute the novelties of these new algorithms to inspirations from biology, human behaviors, physics, or other phenomena. In addition, these new algorithms, compared against basic versions of other metaheuristics on classical benchmark problems without shift/rotation, show competitive performance. In this study, we exhaustively tabulate more than 500 metaheuristics. To comparatively evaluate the performance of recent competitive variants and newly proposed metaheuristics, 11 newly proposed metaheuristics and 4 variants of established metaheuristics are comprehensively compared on the CEC2017 benchmark suite. In addition, we investigate whether these algorithms have a search bias toward the center of the search space. The results show that the newly proposed EBCM (effective butterfly optimizer with covariance matrix adaptation) algorithm performs comparably to the 4 well-performing variants of established metaheuristics and possesses similar properties and behaviors in many aspects, such as convergence, diversity, and the exploration-exploitation trade-off. The performance of all 15 algorithms is likely to deteriorate under certain transformations, while the 4 state-of-the-art metaheuristics are less affected by transformations such as shifting the global optimum away from the center of the search space. It should be noted that, except for EBCM, the other 10 new algorithms, proposed mostly during 2019-2020, are inferior to the well-performing 2017 variants of differential evolution and evolution strategies in terms of convergence speed and global search ability on the CEC2017 functions.
In the field of derivative-free optimization, both of its main branches, deterministic and nature-inspired techniques, have experienced substantial advancement in recent years. In this paper, we provide an extensive computational comparison of selected methods from each of these branches. The chosen representatives are either standard, widely used methods or the best-performing methods from recent numerical comparisons. The computational comparison was performed on five different benchmark sets, and the results were analyzed in terms of performance, time complexity, and convergence properties of the selected methods. The results show that, in situations where the objective function evaluations are relatively cheap, the nature-inspired methods perform significantly better than their deterministic counterparts. However, in situations where the function evaluations are costly or otherwise prohibited, the deterministic methods might provide more consistent and overall better results.
Benchmarking and performance analysis play an important role in understanding the behaviour of iterative optimization heuristics (IOHs), such as local search algorithms, genetic and evolutionary algorithms, and Bayesian optimization algorithms. This task, however, involves manually setting up, executing, and analyzing experiments on an individual basis, which is laborious and can be mitigated by a generic, well-designed platform. For this purpose, we propose IOHanalyzer, a new user-friendly tool for the analysis, comparison, and visualization of performance data of IOHs. Implemented in R and C++, IOHanalyzer is fully open source and available on CRAN and GitHub. IOHanalyzer provides detailed statistics about fixed-target running times and fixed-budget performance of the benchmarked algorithms on single-objective optimization tasks with real-valued codomain. Performance aggregation over several benchmark problems is possible, for example in the form of empirical cumulative distribution functions. A main advantage of IOHanalyzer over other performance-analysis packages is its highly interactive design, which allows users to specify the performance measures, ranges, and granularity that are most useful for their experiments, and to analyze not only performance traces but also the evolution of dynamic state parameters. IOHanalyzer can directly process performance data from the main benchmarking platforms, including the COCO platform, Nevergrad, the SOS platform, and IOHexperimenter. An R programming interface is provided for users who prefer finer control over the implemented functionality.
Genetic algorithms have unique properties that are useful when applied to black-box optimization. Using selection, crossover, and mutation operators, candidate solutions can be obtained without computing a gradient. In this work, we study the results obtained from using quantum-enhanced operators within the selection mechanism of a genetic algorithm. Our approach frames the selection process as the minimization of a binary quadratic model, with which we encode fitness and distances between population members, and we leverage a quantum annealing system to sample low-energy solutions for the selection mechanism. We benchmark these quantum-enhanced algorithms against classical algorithms on various black-box objective functions, including the OneMax function and functions from the IOHProfiler library for black-box optimization. We observe a performance gain in the average number of generations to convergence for the quantum-enhanced elitist selection operator compared with the classical one on the OneMax function. We also find that the quantum-enhanced selection operator with non-elitist selection outperforms the benchmarks on functions with fitness perturbation from the IOHProfiler library. In addition, we find that, in the case of elitist selection, the quantum-enhanced operators outperform the classical benchmarks on functions with varying degrees of dummy variables and neutrality.
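The abstract does not give the exact binary quadratic encoding, so the following is a generic illustration: a QUBO whose diagonal rewards fitness, whose off-diagonal terms reward pairwise distance, and whose penalty term enforces selecting exactly k members, sampled here by exhaustive enumeration instead of a quantum annealer (all weights are assumptions):

```python
import itertools
import numpy as np

def selection_qubo(fitness, distances, k, alpha=1.0, beta=0.5, penalty=10.0):
    """Binary quadratic model: reward fitness and pairwise distance, select exactly k members."""
    n = len(fitness)
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] += -alpha * fitness[i]          # reward high fitness (minimize energy)
        Q[i, i] += penalty * (1 - 2 * k)        # from expanding penalty * (sum_i x_i - k)^2
        for j in range(i + 1, n):
            Q[i, j] += -beta * distances[i, j]  # reward diversity between selected members
            Q[i, j] += 2 * penalty
    return Q

def brute_force_sample(Q):
    """Stand-in for a quantum annealer: enumerate all bit strings (small n only)."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x

rng = np.random.default_rng(0)
fit = rng.uniform(size=8)
pop = rng.uniform(size=(8, 4))
dist = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
print(brute_force_sample(selection_qubo(fit, dist, k=3)))
```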
Knowledge of the search-landscape features of a black-box optimization (BBO) problem provides valuable information when addressing the algorithm selection and/or configuration problem. Exploratory landscape analysis (ELA) models have been successful in identifying predefined, human-derived features and in facilitating portfolio selectors to address these challenges. Unlike ELA approaches, the current study proposes to transform the identification problem into an image recognition problem, with the potential to detect concept-free, machine-driven landscape features. To this end, we introduce the notion of landscape images, which enables us to generate image instances per benchmark function and then frame the classification challenge over a generalized dataset of diverse functions. We treat this as a supervised multi-class image recognition problem and apply basic artificial neural network models to solve it. The efficacy of our approach is numerically validated on the noise-free BBOB and IOHprofiler benchmarking suites. This evident success in learning is a further step toward automated feature extraction and local-structure deduction of BBO problems. With this definition of landscape images, and by capitalizing on the existing capabilities of image recognition algorithms, we foresee the construction of an ImageNet-like library of functions for training generalized detectors that rely on machine-driven features.
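A minimal sketch of how a landscape image can be generated for a 2-D benchmark function (the grid resolution, bounds, and Rastrigin example are illustrative choices, not the paper's exact protocol):

```python
import numpy as np

def landscape_image(f, lower=-5.0, upper=5.0, resolution=64):
    """Evaluate a 2-D function on a grid and normalize it into a single-channel image."""
    xs = np.linspace(lower, upper, resolution)
    X, Y = np.meshgrid(xs, xs)
    Z = np.vectorize(lambda x, y: f(np.array([x, y])))(X, Y)
    return (Z - Z.min()) / (Z.max() - Z.min() + 1e-12)  # pixel values in [0, 1]

# e.g., one instance of the Rastrigin landscape as a 64x64 image
rastrigin = lambda x: 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
img = landscape_image(rastrigin)
print(img.shape)  # (64, 64), ready to feed a standard image classifier
```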
Surrogate algorithms such as Bayesian optimisation are especially designed for black-box optimisation problems with expensive objectives, such as hyperparameter tuning or simulation-based optimisation. In the literature, these algorithms are usually evaluated with synthetic benchmarks which are well established but have no expensive objective, and only on one or two real-life applications which vary wildly between papers. There is a clear lack of standardisation when it comes to benchmarking surrogate algorithms on real-life, expensive, black-box objective functions. This makes it very difficult to draw conclusions on the effect of algorithmic contributions and to give substantial advice on which method to use when. A new benchmark library, EXPObench, provides first steps towards such a standardisation. The library is used to provide an extensive comparison of six different surrogate algorithms on four expensive optimisation problems from different real-life applications. This has led to new insights regarding the relative importance of exploration, the evaluation time of the objective, and the used model. We also provide rules of thumb for which surrogate algorithm to use in which situation. A further contribution is that we make the algorithms and benchmark problem instances publicly available, contributing to more uniform analysis of surrogate algorithms. Most importantly, we include the performance of the six algorithms on all evaluated problem instances. This results in a unique new dataset that lowers the bar for researching new methods as the number of expensive evaluations required for comparison is significantly reduced.
Most machine learning algorithms are configured by one or several hyperparameters that must be carefully chosen and often considerably impact performance. To avoid a time-consuming and irreproducible manual trial-and-error process for finding well-performing hyperparameter configurations, various automatic hyperparameter optimization (HPO) methods can be employed, for example based on resampling error estimation for supervised machine learning. After introducing HPO from a general perspective, this paper reviews important HPO methods such as grid or random search, evolutionary algorithms, Bayesian optimization, Hyperband, and racing. It gives practical recommendations on important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with ML pipelines, runtime improvements, and parallelization. This work is accompanied by an appendix that contains information on specific software packages in R and Python, as well as information and recommended hyperparameter search spaces for specific learning algorithms. We also provide notebooks that demonstrate the concepts of this work as supplementary files.
In recent decades, remarkable advances have been made in multiobjective evolutionary algorithms (MOEAs) for solving various multiobjective optimization problems (MOPs). However, these gradually improved MOEAs are not necessarily equipped with sophisticated, scalable, and learnable problem-solving strategies capable of coping with the new and grand challenges brought by scaling-up MOPs, whose complexity or scale keeps increasing in various aspects, mainly including expensive function evaluations, many objectives, large-scale search spaces, time-varying environments, and multitasking. Different scenarios require different thinking to design new powerful MOEAs that solve them effectively. In this context, research on learnable MOEAs that arm themselves with machine learning techniques for scaling-up MOPs has received extensive attention in the field of evolutionary computation. In this paper, we begin with a taxonomy of scalable MOPs and learnable MOEAs, followed by an analysis of the challenges that scaling-up MOPs pose to traditional MOEAs. We then give a synthetic overview of recent advances in learnable MOEAs for solving various scaling-up MOPs, focusing mainly on three attractive and promising directions (namely, learnable evolutionary discriminators for environmental selection, learnable evolutionary generators for reproduction, and learnable evolutionary transfer for sharing or reusing optimization experience between different problem domains). The insights into learnable MOEAs offered throughout this paper are intended as a reference to the general track of efforts in this field.
The choice of crossover and mutation strategies plays a crucial role in the search ability, convergence efficiency, and precision of genetic algorithms. This paper proposes a new improved genetic algorithm by enhancing the crossover and mutation operations of the simple genetic algorithm, and validates it on four test functions. Simulation results show that, compared with three other mainstream swarm-intelligence optimization algorithms, the proposed algorithm not only improves global search ability, convergence efficiency, and precision, but also increases the convergence success rate under the same experimental conditions. Finally, the algorithm is applied to adversarial attacks on neural networks. The results show that the method requires no knowledge of the structure and parameters inside the neural network model and can, in a short time, carry out the attack using only the classification and confidence information output by the neural network.
Optimization problems are crucial in artificial intelligence. Optimization algorithms are generally used to tune the performance of artificial intelligence models so as to minimize the error of mapping inputs to outputs. Current evaluation methods for optimization algorithms usually consider performance in terms of solution quality only. However, not all optimization algorithms are equal in quality across all test cases, and the computation time spent on an optimization task should also be taken into account. In this paper, we investigate both the quality and the computation time of optimization algorithms on optimization problems, rather than evaluating quality alone. We select well-known optimization algorithms (Bayesian optimization and evolutionary algorithms) and evaluate them on benchmark test functions in terms of quality and computation time. The results show that Bayesian optimization is suited to optimization tasks in which the desired quality must be reached within a limited number of function evaluations, whereas evolutionary algorithms are suited to tasks in which sufficient function evaluations are allowed for finding the best solution. This paper provides recommendations for selecting a suitable optimization algorithm for optimization problems with different numbers of function evaluations, which helps to obtain the desired quality with less computation time.
Most real-world optimization problems are difficult to solve with traditional statistical techniques or with metaheuristics. The main difficulty is related to the existence of a considerable number of local optima, which may lead to premature convergence of the optimization process. To address this problem, we propose a novel heuristic method for constructing a smooth surrogate model of the original function. The surrogate function is easier to optimize yet preserves a fundamental property of the original rugged fitness landscape: the location of the global optimum. To create such a surrogate model, we consider a linear genetic programming approach enhanced by a self-tuning fitness function. The proposed algorithm, called the GP-FST-PSO surrogate model, achieves satisfactory results both in the search for the global optimum and in the visual approximation (in the two-dimensional case) of the original benchmark functions.
Array programming provides a powerful, compact, expressive syntax for accessing, manipulating, and operating on data in vectors, matrices, and higher-dimensional arrays [1]. NumPy is the primary array programming library for the Python language [2,3,4,5]. It plays an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, material science, engineering, finance, and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves [6] and the first imaging of a black hole [7]. Here we show how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring, and analyzing scientific data. NumPy is the foundation upon which the entire scientific Python universe is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Because of its central position in the ecosystem, NumPy increasingly plays the role of an interoperability layer between these new array computation libraries.
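A small example of the array programming style NumPy provides, where broadcasting and matrix products replace explicit Python loops:

```python
import numpy as np

# Vectorized array operations instead of explicit Python loops:
signal = np.random.default_rng(0).normal(size=(1000, 3))   # 1000 samples, 3 channels
centered = signal - signal.mean(axis=0)                    # broadcasting over rows
cov = centered.T @ centered / (len(signal) - 1)            # covariance via matrix product
eigvals, eigvecs = np.linalg.eigh(cov)                     # principal axes of the data
print(eigvals)
```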
Recent years have witnessed the rapid development of deep learning (DL) in both industry and academia. However, finding the optimal hyperparameters of a DL model often requires high computational cost and human expertise. To mitigate this issue, evolutionary computation (EC), as a powerful heuristic search approach, has shown significant merit in the automated design of DL models, so-called evolutionary deep learning (EDL). This paper aims to analyze EDL from the perspective of automated machine learning (AutoML). Specifically, we first clarify EDL from the viewpoints of machine learning and EC, and regard EDL as an optimization problem. Following the DL pipeline, we systematically introduce EDL methods ranging from feature engineering and model generation to model deployment under a new taxonomy (i.e., what and how to evolve/optimize), focusing on the discussion of solution representations and search paradigms for handling the optimization problem with EC. Finally, key applications, open issues, and potentially promising lines of future research are suggested. This survey reviews recent developments in EDL and offers insightful guidelines for its development.
This paper presents an extended version of Deeper, a search-based simulation-integrated test solution that generates failure-revealing test scenarios for testing a deep-neural-network-based lane keeping system. In the newly proposed version, we use a new set of bio-inspired search algorithms, genetic algorithm (GA), $(\mu+\lambda)$ and $(\mu,\lambda)$ evolution strategies (ES), and particle swarm optimization (PSO), that leverage a quality population seed and domain-specific crossover and mutation operations tailored to the presentation model used for modeling the test scenarios. To demonstrate the capabilities of the new test generators within Deeper, we carry out an empirical evaluation and comparison with respect to the results of five participating tools in the cyber-physical systems testing competition at SBST 2021. Our evaluation shows that the newly proposed test generators in Deeper not only represent a considerable improvement over the previous version, but also prove to be effective and efficient in provoking a considerable number of diverse failure-revealing test scenarios for testing an ML-driven lane keeping system. They can trigger several failures while promoting test scenario diversity, under a limited test time budget, high target failure severity, and a strict speed limit constraint.
Parameter adaptation, i.e., the capability to automatically adjust an algorithm's hyperparameters depending on the problem being faced, is one of the main trends in evolutionary computation applied to numerical optimization. Over the years, several handcrafted adaptation policies have been proposed to address this problem, but so far only a few attempts have been made to apply machine learning to learn such policies. Here, we introduce a general-purpose framework for performing parameter adaptation in continuous-domain metaheuristics based on state-of-the-art reinforcement learning algorithms. We demonstrate the applicability of this framework on two algorithms, namely the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Differential Evolution (DE), for which we learn, respectively, adaptation policies for the step size (for CMA-ES) and for the scale factor and crossover rate (for DE). We train these policies on a set of 46 benchmark functions at different dimensionalities, with various inputs to the policies, in two settings: one policy per function, and one global policy for all functions. Compared, respectively, with the Cumulative Step-size Adaptation (CSA) policy and with two well-known adaptive DE variants (iDE and jDE), our policies are able to produce competitive results in most cases, especially in the case of DE.
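A toy sketch of the per-generation control loop such a framework implies for DE: a state built from population statistics is mapped by a policy (here untrained and randomly weighted) to the scale factor F and crossover rate CR. The state definition and all names are illustrative, not the paper's actual framework:

```python
import numpy as np

def observe(population, fitness):
    """Toy state: diversity and fitness statistics of the current population."""
    return np.array([population.std(axis=0).mean(), fitness.mean(), fitness.std()])

def policy(state, weights):
    """Stand-in for a learned (e.g., RL-trained) policy mapping state -> (F, CR)."""
    raw = np.tanh(weights @ state)           # weights would be learned; random here
    F = 0.1 + 0.9 * (raw[0] + 1) / 2         # scale factor constrained to [0.1, 1.0]
    CR = (raw[1] + 1) / 2                    # crossover rate constrained to [0, 1]
    return F, CR

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 10))
fit = (pop ** 2).sum(axis=1)
F, CR = policy(observe(pop, fit), rng.normal(size=(2, 3)))
print(F, CR)  # values a DE generation would use before the next observation
```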