Parameter space reduction has proved to be a crucial tool for speeding up the execution of many numerical tasks such as optimization, inverse problems, sensitivity analysis, and surrogate model design, especially in the presence of high-dimensional parametrized systems. In this work we propose a new method called local active subspaces (LAS), which exploits the synergies of active subspaces with supervised clustering techniques in order to carry out a more efficient dimension reduction in the parameter space. The clustering is performed without losing the input-output relations by introducing a distance metric induced by the global active subspace. We present two possible clustering algorithms: K-medoids and a hierarchical top-down approach, the latter able to impose a variety of subdivision criteria specifically tailored to parameter space reduction tasks. This method is particularly useful for the community working on surrogate modelling. Frequently, the parameter space contains subdomains where the objective function of interest varies less, on average, along certain directions, so the function can be approximated more accurately if restricted to those subdomains and studied separately. We test the new method on several numerical experiments of increasing complexity, show how to deal with vectorial outputs, and show how to classify the different regions with respect to the local active subspace dimension. Employing this classification technique as a preprocessing step in the parameter space, or in the output space in the case of vectorial outputs, brings remarkable results for the purpose of surrogate modelling.
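To make the construction concrete, here is a minimal NumPy sketch of the pipeline under stated assumptions: `grad_f` is a hypothetical gradient oracle, the eigenvalue-weighted projected distance is one plausible reading of the metric induced by the global active subspace, and a bare-bones K-medoids loop stands in for the clustering algorithms discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 4))                    # parameter samples
grad_f = lambda x: np.array([np.cos(3 * x[0]), 2 * x[1], 0.1, 0.0])  # hypothetical gradient oracle

# global active subspace: eigendecomposition of C = E[grad f grad f^T]
C = np.mean([np.outer(g, g) for g in map(grad_f, X)], axis=0)
lam, W = np.linalg.eigh(C)
lam, W = lam[::-1], W[:, ::-1]                           # sort eigenpairs in descending order
r = 2                                                    # active subspace dimension

# distance metric induced by the global active subspace (one plausible choice:
# eigenvalue-weighted distances between projections onto the leading directions)
P = (X @ W[:, :r]) * np.sqrt(lam[:r])
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)

def kmedoids(D, k, iters=50):
    med = rng.choice(len(D), k, replace=False)
    for _ in range(iters):
        lab = np.argmin(D[:, med], axis=1)               # assign to nearest medoid
        new = np.array([np.flatnonzero(lab == j)[
            np.argmin(D[np.ix_(lab == j, lab == j)].sum(0))] for j in range(k)])
        if np.array_equal(new, med):
            break
        med = new
    return lab

labels = kmedoids(D, k=3)   # local active subspaces are then fit per cluster
```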
This paper presents a new efficient and robust method for rare event probability estimation for computational models of engineering products or processes that return categorical information only, e.g., success or failure. For such models, most methods designed for the estimation of failure probability, which use the numerical values of the outcome to compute gradients or to estimate proximity to the failure surface, become inapplicable. Even if the performance function provides more than a binary output, the state of the system may be a non-smooth or even discontinuous function defined over the domain of continuous input variables. In these cases, classical gradient-based methods usually fail. We propose a simple yet efficient algorithm that sequentially and adaptively selects points from the input domain of the random variables in order to extend and refine a simple distance-based surrogate model. Two different tasks can be accomplished at any stage of the sequential sampling: (i) estimation of the failure probability, and (ii) selection of the best possible candidate for the subsequent model evaluation if further improvement is required. The proposed criterion for selecting the next point for model evaluation maximizes the expected probability of correct classification achieved by using the candidate. A perfect balance between global exploration and local exploitation is therefore maintained automatically. The method can estimate the probabilities of multiple failure types. Moreover, when the numerical values of the model evaluations can be used to build a smooth surrogate, the algorithm can accommodate this information to increase the accuracy of the estimated probabilities. Finally, we define a new simple yet general geometric measure of the global sensitivity of the rare-event probability with respect to individual variables, which is obtained as a by-product of the proposed algorithm.
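A toy sketch of the sequential loop, in NumPy, under simplifying assumptions: `fails` is a hypothetical binary model, the surrogate is nearest-label classification, and the acquisition score is only a rough proxy for the paper's criterion (boundary proximity weighted by distance to the current design).

```python
import numpy as np

rng = np.random.default_rng(0)
fails = lambda x: np.linalg.norm(x, axis=-1) > 2.5   # hypothetical binary model: True = failure
pool = rng.normal(size=(20000, 2))                   # Monte Carlo population of the inputs

X = rng.normal(size=(10, 2))                         # initial design
y = fails(X)

for _ in range(40):                                  # sequential adaptive refinement
    d = np.linalg.norm(pool[:, None] - X[None], axis=-1)
    d_f = np.where(y, d, np.inf).min(1)              # distance to the nearest observed failure
    d_s = np.where(~y, d, np.inf).min(1)             # ... and to the nearest observed success
    with np.errstate(invalid="ignore"):
        frac = np.abs(d_f - d_s) / (d_f + d_s)       # ~0 near the predicted class boundary
    # proxy acquisition: boundary proximity times distance to the design (exploration);
    # fall back to pure exploration while one class is still unobserved
    score = d.min(1) * np.where(np.isfinite(frac), 1.0 - frac, 1.0)
    x_new = pool[np.argmax(score)]
    X, y = np.vstack([X, x_new]), np.append(y, fails(x_new))

d = np.linalg.norm(pool[:, None] - X[None], axis=-1)
p_f = np.mean(np.where(y, d, np.inf).min(1) < np.where(~y, d, np.inf).min(1))
print(f"estimated P_f ~ {p_f:.4f}")   # nearest-label classification of the population
```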
In this work we study statistical properties of graph-based algorithms for multi-manifold clustering (MMC). In MMC the goal is to retrieve the multi-manifold structure underlying a given Euclidean data set when the data set is assumed to be obtained by sampling a distribution on a union of manifolds $\mathcal{M} = \mathcal{M}_1 \cup\dots \cup \mathcal{M}_N$ that may intersect with each other and that may have different dimensions. We investigate sufficient conditions that similarity graphs on data sets must satisfy in order for their corresponding graph Laplacians to capture the right geometric information to solve the MMC problem. Precisely, we provide high-probability error bounds for the spectral approximation of a tensorized Laplacian on $\mathcal{M}$ with a suitable graph Laplacian built from the observations; the recovered tensorized Laplacian contains all geometric information of the individual underlying manifolds. We provide an example of a family of similarity graphs, which we call annular proximity graphs with angle constraints, satisfying these sufficient conditions. We contrast our family of graphs with other constructions in the literature based on the alignment of tangent planes. Extensive numerical experiments expand the insights that our theory provides on the MMC problem.
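For contrast with the paper's construction, the following scikit-learn sketch clusters a sampled union of two intersecting circles with a plain k-NN similarity graph, exactly the kind of graph whose Laplacian can mix the manifolds near intersections and that the annular proximity graphs with angle constraints are designed to improve on. The data set and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# sample a union of two intersecting circles (different radii): a toy M1 U M2
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=(400, 2))
M1 = np.c_[np.cos(t[:, 0]), np.sin(t[:, 0])]
M2 = np.c_[0.7 + 0.6 * np.cos(t[:, 1]), 0.6 * np.sin(t[:, 1])]
X = np.vstack([M1, M2]) + 0.01 * rng.normal(size=(800, 2))

# plain k-NN similarity graph + graph-Laplacian embedding; near the two
# intersection points this graph connects the manifolds, which is why extra
# (e.g. angle) constraints on the edges are needed
labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(X)
```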
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators that map between infinite-dimensional function spaces. We formulate the approximation of operators as a composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our architecture. Moreover, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole-graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each of them. The proposed neural operators are resolution-invariant: they share the same network parameters across different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared with existing machine-learning-based methodologies on Darcy flow and the Navier-Stokes equations, while being considerably faster than conventional PDE solvers.
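A forward-pass-only NumPy sketch of the Fourier parameterization on a uniform 1-D grid, assuming the weights `R` and `W` would in practice be learned: the layer acts in Fourier space on a fixed number of low modes, which is also what lets the same parameters apply across discretizations.

```python
import numpy as np

def fourier_layer(v, R, W, modes):
    """One Fourier layer: spectral convolution on the lowest `modes`
    frequencies plus a pointwise linear term, followed by a ReLU.
    v: (n, c) samples of a c-channel function on a uniform 1-D grid,
    R: (modes, c, c) complex spectral weights, W: (c, c) local weights."""
    v_hat = np.fft.rfft(v, axis=0)                      # to Fourier space
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = np.einsum("kij,kj->ki", R, v_hat[:modes])
    conv = np.fft.irfft(out_hat, n=v.shape[0], axis=0)  # back to physical space
    return np.maximum(conv + v @ W, 0.0)

# the same (R, W) applies unchanged on a finer grid: resolution invariance
rng = np.random.default_rng(0)
c, modes = 4, 8
R = rng.normal(size=(modes, c, c)) + 1j * rng.normal(size=(modes, c, c))
W = rng.normal(size=(c, c))
for n in (64, 256):                                     # two discretizations
    v = rng.normal(size=(n, c))
    print(fourier_layer(v, R, W, modes).shape)
```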
We review clustering as an analysis tool and the underlying concepts from an introductory perspective. What is clustering and how can clusterings be realised programmatically? How can data be represented and prepared for a clustering task? And how can clustering results be validated? Connectivity-based and prototype-based approaches are contrasted in the context of several popular methods: single-linkage, spectral embedding, k-means, and Gaussian mixtures are discussed, as well as the density-based protocols (H)DBSCAN, Jarvis-Patrick, CommonNN, and density-peaks.
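All of the named methods are available off the shelf; a short scikit-learn sketch contrasts prototype-based and connectivity/density-based approaches on the classic two-moons data, with the silhouette coefficient as one (imperfect) internal validation measure.

```python
import numpy as np
from sklearn import cluster, datasets, metrics, mixture

X, _ = datasets.make_moons(n_samples=500, noise=0.06, random_state=0)

results = {
    "k-means (prototype)": cluster.KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X),
    "Gaussian mixture (prototype)": mixture.GaussianMixture(n_components=2, random_state=0).fit_predict(X),
    "single-linkage (connectivity)": cluster.AgglomerativeClustering(n_clusters=2, linkage="single").fit_predict(X),
    "spectral embedding": cluster.SpectralClustering(n_clusters=2, affinity="nearest_neighbors", random_state=0).fit_predict(X),
    "DBSCAN (density)": cluster.DBSCAN(eps=0.2, min_samples=5).fit_predict(X),
}
for name, lab in results.items():
    # silhouette favors compact convex clusters, so it can rank the "wrong"
    # k-means partition above a correct connectivity-based one
    print(f"{name:32s} silhouette = {metrics.silhouette_score(X, lab):.2f}")
```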
We propose a surrogate framework for learning high-dimensional parametric maps from limited training data. The need for parametric surrogates arises in the many applications that require repeated queries of complex computational models. These applications include "outer-loop" problems such as Bayesian inverse problems, optimal experimental design, and optimal design and control under uncertainty, as well as real-time inference and control problems. Many high-dimensional parametric maps admit low-dimensional structure, which can be exploited by map-informed reduced bases of the inputs and outputs. Exploiting this property, we develop a framework for learning low-dimensional approximations of such maps by adaptively constructing ResNet approximations between reduced bases of their inputs and outputs. Invoking recent approximation theory that interprets ResNets as discretizations of control flows, we prove the universal approximation property of our proposed adaptive projected ResNet framework, which motivates a related iterative algorithm for ResNet construction. This strategy represents a confluence of approximation theory and algorithms, since both make use of sequentially minimizing flows. In numerical examples, we show that the proposed surrogates can achieve remarkably high accuracy given small amounts of training data, making them a desirable surrogate strategy when the generation of training data requires a significant computational investment.
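A simplified sketch of the projection idea, with truncated SVDs providing the reduced bases and an off-the-shelf scikit-learn MLP standing in for the paper's adaptively constructed ResNet; the data and ranks are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
m, d_in, d_out = 300, 200, 150
X = rng.normal(size=(m, d_in))                       # input samples
A = rng.normal(size=(5, d_out))
Y = np.tanh(X[:, :5]) @ A                            # synthetic map with low-dimensional structure

# reduced bases of inputs and outputs from truncated SVDs of the snapshots
r = 8
Vin = np.linalg.svd(X, full_matrices=False)[2][:r].T     # (d_in, r)
Vout = np.linalg.svd(Y, full_matrices=False)[2][:r].T    # (d_out, r)

# train a small network between the reduced coordinates
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   random_state=0).fit(X @ Vin, Y @ Vout)

def surrogate(x):
    return net.predict(x @ Vin) @ Vout.T             # lift back to the full output space
```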
We present a theoretical analysis of a special neural network architecture, termed operator recurrent neural networks, for approximating nonlinear functions whose inputs are linear operators. Such functions commonly arise in solution algorithms for inverse boundary value problems. Traditional neural networks treat input data as vectors, and thus they do not effectively capture the multiplicative structure associated with the linear operators that correspond to the data in such inverse problems. We therefore introduce a new family of architectures that resemble standard neural networks, but where the input data act multiplicatively on vectors. Motivated by the analysis of compact operators arising in boundary control and in inverse boundary value problems for the wave equation, we promote structure and sparsity in selected weight matrices of the network. After describing this architecture, we study its representation properties as well as its approximation properties. We also show that an explicit regularization can be introduced, which can be derived from the mathematical analysis of the said inverse problems and leads to certain guarantees on the generalization properties. We observe that the sparsity of the weight matrices improves the generalization estimates. Finally, we discuss how operator recurrent networks can be viewed as a deep learning analogue of algorithms such as the boundary control method for reconstructing the unknown wave speed in the acoustic wave equation from boundary measurements.
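One plausible reading of "the input data act multiplicatively on vectors", sketched as a plain NumPy forward pass: the datum `X_op`, a discretized linear operator, multiplies the hidden state inside every layer alongside the usual affine terms. All names and shapes here are hypothetical.

```python
import numpy as np

def op_recurrent_forward(X_op, h, layers):
    """Hypothetical forward pass: each layer applies an ordinary affine map
    plus a term in which the input operator X_op acts on the state."""
    for A, B, b in layers:
        h = np.maximum(A @ h + B @ (X_op @ h) + b, 0.0)  # multiplicative action of the data
    return h

rng = np.random.default_rng(0)
n = 16
X_op = rng.normal(size=(n, n)) / np.sqrt(n)     # the datum: a discretized linear operator
layers = [(rng.normal(size=(n, n)) / n, rng.normal(size=(n, n)) / n,
           np.zeros(n)) for _ in range(4)]
out = op_recurrent_forward(X_op, rng.normal(size=n), layers)
```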
This paper proposes a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds, identified through point clouds, based on diffusion maps (DM) and deep learning. The PDE solver is formulated as a supervised learning task to solve a least-squares regression problem that imposes an algebraic equation approximating the PDE (and boundary conditions if applicable). This algebraic equation involves a graph-Laplacian-type matrix obtained via the DM asymptotic expansion, which is a consistent estimator of second-order elliptic differential operators. The resulting numerical method amounts to solving a highly non-convex empirical minimization problem restricted to a hypothesis space of neural network solutions. In the well-posed elliptic PDE setting, when the hypothesis space consists of neural networks with either infinite width or depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data. When the hypothesis space is a two-layer neural network, we show that for sufficiently large width, gradient descent can identify a global minimizer of the empirical loss function. Supporting numerical examples demonstrate the convergence of the solutions, ranging from simple manifolds with low and high co-dimensions to rough surfaces with and without boundaries. We also show that the proposed NN solver can robustly generalize the PDE solution to new data points with generalization errors that are almost identical to the training errors, superseding Nyström-based interpolation methods.
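The graph-Laplacian-type matrix at the heart of the solver can be sketched with the standard diffusion-maps normalization; the bandwidth choice, boundary handling, and the downstream least-squares/neural-network training are all omitted here.

```python
import numpy as np

def dm_laplacian(X, eps):
    """Diffusion-maps estimator of the Laplace-Beltrami operator on the
    manifold sampled by the point cloud X (alpha = 1 normalization)."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    K = np.exp(-d2 / (4 * eps))
    q = K.sum(1)
    K = K / np.outer(q, q)           # remove the sampling-density bias
    P = K / K.sum(1, keepdims=True)  # Markov normalization
    return (P - np.eye(len(X))) / eps

# point cloud on a circle: an "unknown" 1-D manifold embedded in R^2
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)]
L = dm_laplacian(X, eps=0.01)
# sanity check: on the unit circle, L applied to cos(t) should be ~ -cos(t)
print(np.abs(L @ np.cos(t) + np.cos(t)).max())
```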
This paper introduces algorithms for kernel selection/design in Gaussian process regression / Kriging surrogate modeling techniques. We adopt the setting of kernel-method solutions in ad hoc function spaces, namely reproducing kernel Hilbert spaces (RKHS), to solve the problem of approximating a regular target function given observations of it, i.e., supervised learning. The first class of algorithms is Kernel Flows, which was introduced in the context of classification in machine learning. It can be seen as a cross-validation procedure whereby a "best" kernel is selected such that the loss of accuracy incurred by removing some part of the dataset (typically half of it) is minimized. The second class of algorithms is called spectral kernel ridge regression and aims at selecting a "best" kernel such that the norm of the function to be approximated is minimal in the associated RKHS. Within Mercer's theorem framework, we obtain an explicit construction of that "best" kernel in terms of the main features of the target function. Both approaches to learning kernels from data are illustrated by numerical examples on synthetic test functions and on a classical test case in turbulence model validation for a two-dimensional airfoil.
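A NumPy sketch of a Kernel-Flows-style criterion under this reading: `rho` measures the relative loss of RKHS norm when half the data is dropped, and the "best" lengthscale of an RBF kernel minimizes it on average. The spectral kernel ridge regression variant is not sketched.

```python
import numpy as np

def rbf(A, B, ell):
    return np.exp(-((A[:, None] - B[None]) ** 2).sum(-1) / (2 * ell ** 2))

def rho(ell, X, y, rng, nugget=1e-8):
    """Cross-validation criterion: 1 - ||u_half||^2 / ||u_full||^2, where
    ||u||^2 = y^T K^{-1} y is the RKHS norm of the kernel interpolant."""
    sub = rng.choice(len(y), len(y) // 2, replace=False)
    full = y @ np.linalg.solve(rbf(X, X, ell) + nugget * np.eye(len(y)), y)
    half = y[sub] @ np.linalg.solve(rbf(X[sub], X[sub], ell) + nugget * np.eye(len(sub)), y[sub])
    return 1.0 - half / full

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(4 * X[:, 0]) + 0.01 * rng.normal(size=200)   # synthetic target
ells = np.logspace(-2, 1, 30)
scores = [np.mean([rho(l, X, y, rng) for _ in range(20)]) for l in ells]
print("selected lengthscale:", ells[int(np.argmin(scores))])
```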
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
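The modular framework fits in a dozen lines of NumPy; here is a standard sketch with a Gaussian test matrix, optional power iterations, and deterministic post-processing (parameter defaults are illustrative).

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_power=2, seed=0):
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(A.shape[1], k + oversample))  # random test matrix
    Y = A @ Omega                                          # sample the range of A
    for _ in range(n_power):                               # power iterations sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                                 # orthonormal basis capturing most of A's action
    B = Q.T @ A                                            # compress A to the subspace
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)     # deterministic small SVD
    return (Q @ U_b)[:, :k], s[:k], Vt[:k]

A = np.random.default_rng(1).normal(size=(2000, 400)) @ np.diag(2.0 ** -np.arange(400))
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # relative approximation error
```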
Recent years have witnessed a growth in mathematics for deep learning, which seeks a deeper mathematical understanding of the concepts of deep learning and explores how to make it more robust, and in deep learning for mathematics, where deep learning algorithms are used to solve problems in mathematics. The latter has popularised the field of scientific machine learning, where deep learning is applied to problems in scientific computing. Specifically, more and more neural network architectures have been developed to solve specific classes of partial differential equations (PDEs). Such methods exploit properties that are inherent to PDEs and thus solve the PDEs better than classical feed-forward neural networks, recurrent neural networks, and convolutional neural networks. This has had a great impact in the area of mathematical modeling, where parametric PDEs are widely used to model most natural and physical processes arising in science and engineering. In this work, we review such methods and extend them for parametric studies as well as for solving the related inverse problems. We also demonstrate their relevance in some industrial applications.
Recent advances in operator learning theory have improved our knowledge about learning maps between infinite dimensional spaces. However, for large-scale engineering problems such as concurrent multiscale simulation for mechanical properties, the training cost of current operator learning methods is very high. This article presents a thorough analysis of the mathematical underpinnings of the operator learning paradigm and proposes a kernel learning method that maps between function spaces. We first provide a survey of modern kernel and operator learning theory, and discuss recent results and open problems. From there, the article presents an algorithm showing how piecewise constant functions on $\mathbb{R}$ can be approximated analytically for operator learning, which suggests the potential feasibility of neural operators on clustered functions. Finally, a k-means clustered domain based on a mechanistic response is considered, and the Lippmann-Schwinger equation for micro-mechanical homogenization is solved. The article briefly discusses the mathematics of previous kernel learning methods and some preliminary results obtained with them. The proposed kernel operator learning method uses graph kernel networks to construct a mechanistic reduced-order method for multiscale homogenization.
Mixtures of experts (MoE) are popular statistical and machine learning models that have attracted attention over the years owing to their flexibility and efficiency. In this work, we consider Gaussian-gated localized MoE (GLoME) and block-diagonal covariance localized MoE (BLoME) regression models for representing nonlinear relationships in heterogeneous data with potential hidden graph-structured interactions between high-dimensional predictors. These models pose difficult statistical estimation and model selection questions, from both computational and theoretical standpoints. This paper is devoted to the study of the model selection problem, within a penalized maximum likelihood estimation framework, among a collection of GLoME or BLoME models characterized by the number of mixture components, the complexity of the Gaussian mean experts, and the hidden block-diagonal structure of the covariance matrices. In particular, we establish non-asymptotic risk bounds in the form of weak oracle inequalities, provided that lower bounds on the penalties hold. The good empirical behavior of our models is then demonstrated on synthetic and real data sets.
We present a novel class of projected methods for the statistical analysis of data sets of probability distributions on the real line, with the 2-Wasserstein metric. We focus in particular on principal component analysis (PCA) and regression. To define these models, we exploit a representation of the Wasserstein space closely related to its weak Riemannian structure, by mapping the data to a suitable linear space and using a metric projection operator to constrain the results to the Wasserstein space. By carefully choosing the tangent point, we are able to derive fast empirical methods that exploit constrained B-spline approximations. As a by-product of our approach, we are also able to derive faster routines for previous work on PCA for distributions. By means of simulation studies, we compare our methods to previously proposed ones, showing that our projected PCA has similar performance and is extremely flexible even under misspecification. Several theoretical properties of the models are investigated and asymptotic consistency is proven. Two real-world applications, to Covid-19 mortality in the US and to wind speed forecasting, are discussed.
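A heavily simplified NumPy sketch of the idea for 1-D distributions: on a shared grid of probability levels, quantile functions turn the 2-Wasserstein metric into a plain L2 metric, PCA becomes an SVD, and a crude monotone rearrangement stands in for the paper's metric projection and constrained B-spline machinery. The Gaussian data set is a stand-in.

```python
import numpy as np
from scipy.stats import norm

# represent each 1-D distribution by its quantile function on a shared grid:
# in these coordinates the 2-Wasserstein distance is the plain L2 distance
p = np.linspace(0.005, 0.995, 199)
rng = np.random.default_rng(0)
mus, sigs = rng.normal(0, 1, 60), rng.uniform(0.5, 2.0, 60)
Q = np.array([norm.ppf(p, mu, s) for mu, s in zip(mus, sigs)])  # toy Gaussian data set

Qbar = Q.mean(0)                            # the (1-D) Wasserstein barycenter of the sample
U, svals, Vt = np.linalg.svd(Q - Qbar, full_matrices=False)
scores = (Q - Qbar) @ Vt[:2].T              # first two principal components

# reconstruct and project back: quantile functions must be non-decreasing, so
# a monotone rearrangement plays the role of the metric projection here
recon = np.maximum.accumulate(Qbar + scores @ Vt[:2], axis=1)
print("RMS reconstruction error:", np.sqrt(np.mean((recon - Q) ** 2)))
```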
The workhorse of machine learning is stochastic gradient descent. To access stochastic gradients, it is common to iterate over the input/output pairs of a training dataset. Interestingly, it appears that one does not need full supervision to access stochastic gradients, which is the main motivation of this paper. After formalizing the "active labeling" problem, which focuses on active learning with partial supervision, we provide a streaming technique that provably minimizes the ratio of generalization error over the number of samples. We illustrate our technique in depth for robust regression.
In the continual effort to improve product quality and decrease operating costs, computational modeling is increasingly being deployed to determine the feasibility of product designs or configurations. Surrogate modeling of these computer experiments via local models, which induce sparsity by only considering short-range interactions, can tackle huge analyses of complicated input-output relationships. However, narrowing the focus to the local scale means that global trends must be relearned over and over again. In this article, we propose a framework for incorporating information from a global sensitivity analysis into the surrogate model as an input rotation and rescaling preprocessing step. We discuss the relationship between several sensitivity analysis methods based on kernel regression before describing how they give rise to a transformation of the input variables. Specifically, we perform an input warping such that the "warped simulator" is equally sensitive to all input directions, freeing the local models to focus on local dynamics. Numerical experiments on observational data and benchmark test functions, including a high-dimensional computer simulator from the automotive industry, provide empirical validation.
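A minimal sketch of one such rotation-and-rescaling preprocessing step, assuming gradient-based sensitivities are available (`grad_f` is a hypothetical oracle): eigendecompose the average outer product of gradients and warp the inputs so that the expected squared gradient of the warped simulator is the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 5))
grad_f = lambda x: np.array([3 * x[0] ** 2, 2 * x[1], 1.0, 0.1, 0.0])  # hypothetical gradients

# global sensitivity summary: C = E[grad f grad f^T]
C = np.mean([np.outer(g, g) for g in map(grad_f, X)], axis=0)
lam, V = np.linalg.eigh(C)
lam = np.clip(lam, 1e-8 * lam.max(), None)      # guard against inert directions

# warp z = diag(sqrt(lam)) V^T x; in the z coordinates the expected outer
# product of gradients of the "warped simulator" is the identity, i.e. equal
# sensitivity in every direction, so local models can focus on local dynamics
Z = X @ V @ np.diag(np.sqrt(lam))
# a local surrogate (e.g. a local GP) is then trained on (Z, f(X)) instead of (X, f(X))
```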
Modern high-dimensional methods often adopt the "bet on sparsity" principle, while in supervised multivariate learning statisticians may face "dense" problems with a large number of nonzero coefficients. This paper proposes a novel clustered reduced-rank learning (CRL) framework that imposes two joint matrix regularizations to automatically group the features in constructing predictive factors. CRL is more interpretable than low-rank modeling and relaxes the stringent sparsity assumption in variable selection. In this paper, new information-theoretic limits are presented to reveal the intrinsic cost of seeking clusters, as well as the blessing of dimensionality in multivariate learning. Moreover, an efficient optimization algorithm is developed, which performs subspace learning and clustering with guaranteed convergence. The obtained fixed-point estimators, though not necessarily globally optimal, enjoy the desired statistical accuracy beyond the standard likelihood setup under some regularity conditions. Moreover, a new kind of information criterion, as well as its scale-free form, is proposed for cluster and rank selection, with rigorous theoretical support that does not assume an infinite sample size. Extensive simulations and real-data experiments demonstrate the statistical accuracy and interpretability of the proposed method.
This paper concerns the approximation of smooth, high-dimensional functions from limited samples using polynomials. This task lies at the heart of many applications in computational science and engineering, notably those arising from parametric modeling and uncertainty quantification. It is common to use Monte Carlo (MC) sampling in such applications, so as not to succumb to the curse of dimensionality. However, it is well known that such a strategy is theoretically suboptimal: there are many polynomial spaces of dimension $n$ for which the sample complexity scales log-quadratically in $n$. This well-documented phenomenon has led to a concerted effort to design improved, in fact near-optimal, strategies whose sample complexities scale log-linearly, or even linearly, in $n$. Paradoxically, in this work we show that MC is actually a perfectly good strategy in high dimensions. We first document this phenomenon via several numerical examples. Next, we present a theoretical analysis that resolves this paradox for holomorphic functions of infinitely many variables. We show that there is a least-squares scheme based on $m$ MC samples whose error decays algebraically fast in $m/\log(m)$, with a rate that is the same as that of the best $n$-term polynomial approximation. This result is non-constructive, since it assumes knowledge of a suitable polynomial space in which to perform the approximation. We next present a compressed-sensing-based scheme that achieves the same rate, except for a larger polylogarithmic factor. This scheme is practical, and numerically it performs as well as, and better than, well-known adaptive least-squares schemes. Overall, our findings demonstrate that MC sampling is eminently suitable for smooth function approximation when the dimension is sufficiently high, so the benefits of improved sampling strategies are generically limited to lower-dimensional settings.
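The baseline scheme under discussion, Monte Carlo sampling plus least squares on a polynomial space, takes only a few lines of NumPy; the target function and degrees below are illustrative.

```python
import numpy as np
from itertools import product
from numpy.polynomial import legendre

d, deg, m = 2, 6, 500
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(m, d))             # plain Monte Carlo samples
f = lambda x: np.exp(x[:, 0] * x[:, 1])         # smooth target (a stand-in)

# tensorized Legendre basis of total degree <= deg
idx = [a for a in product(range(deg + 1), repeat=d) if sum(a) <= deg]
def design(X):
    cols = [np.prod([legendre.legval(X[:, j], np.eye(deg + 1)[aj])
                     for j, aj in enumerate(a)], axis=0) for a in idx]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(X), f(X), rcond=None)   # MC least squares

Xtest = rng.uniform(-1, 1, size=(2000, d))                # held-out error estimate
err = np.linalg.norm(design(Xtest) @ coef - f(Xtest)) / np.linalg.norm(f(Xtest))
print(f"relative L2 error with m={m} MC samples: {err:.2e}")
```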
This paper presents a surrogate modelling technique based on domain partitioning for Bayesian parameter inference of highly nonlinear engineering models. In order to alleviate the computational burden typically involved in Bayesian inference applications, a multielement Polynomial Chaos Expansion based Kriging metamodel is proposed. The developed surrogate model combines in a piecewise function an array of local Polynomial Chaos based Kriging metamodels constructed on a finite set of non-overlapping subdomains of the stochastic input space. Therewith, the presence of non-smoothness in the response of the forward model (e.g., nonlinearities and sparseness) can be reproduced by the proposed metamodel with minimum computational costs owing to its local adaptation capabilities. The model parameter inference is conducted through a Markov chain Monte Carlo approach comprising adaptive exploration and delayed rejection. The efficiency and accuracy of the proposed approach are validated through two case studies, including an analytical benchmark and a numerical case study. The latter involves the partial differential equation governing the hydrogen diffusion phenomenon of metallic materials in Thermal Desorption Spectroscopy tests.
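A stripped-down sketch of the piecewise idea in scikit-learn, with plain local Gaussian-process regressors standing in for the Polynomial Chaos based Kriging metamodels and the subdomain partition fixed by hand; the forward model is a hypothetical stand-in and the MCMC inference layer is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.where(x < 0.4, np.sin(12 * x), 2.0 + 0.5 * x)  # non-smooth stand-in model

edges = np.array([0.0, 0.4, 1.0])            # non-overlapping subdomains of the input space
X = np.random.default_rng(0).uniform(0, 1, 80)
gps = []
for lo, hi in zip(edges[:-1], edges[1:]):    # one local metamodel per subdomain
    m = (X >= lo) & (X < hi)
    gps.append(GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-8)
               .fit(X[m, None], f(X[m])))

def surrogate(x):
    i = min(np.searchsorted(edges, x, side="right") - 1, len(gps) - 1)
    return gps[i].predict(np.array([[x]]))[0]

print(surrogate(0.39), surrogate(0.41))      # the discontinuity at 0.4 is preserved
```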
This paper proposes a novel Adaptive Clustering-based Reduced-Order Modeling (ACROM) framework to significantly improve and extend the recent family of clustering-based reduced-order models (CROMs). This adaptive framework enables the clustering-based domain decomposition to evolve dynamically throughout the problem solution, ensuring optimum refinement in regions where the relevant fields present steeper gradients. It offers a new route to fast and accurate material modeling of history-dependent nonlinear problems involving highly localized plasticity and damage phenomena. The overall approach is composed of three main building blocks: target clusters selection criterion, adaptive cluster analysis, and computation of cluster interaction tensors. In addition, an adaptive clustering solution rewinding procedure and a dynamic adaptivity split factor strategy are suggested to further enhance the adaptive process. The coined Adaptive Self-Consistent Clustering Analysis (ASCA) is shown to perform better than its static counterpart when capturing the multi-scale elasto-plastic behavior of a particle-matrix composite and predicting the associated fracture and toughness. Given the encouraging results shown in this paper, the ACROM framework sets the stage and opens new avenues to explore adaptivity in the context of CROMs.