The reparameterization trick enables optimizing large-scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with a fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low-variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables: CONtinuous relaxations of disCRETE random variables. The Concrete distribution is a new family of distributions with closed-form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
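A Concrete sample can be drawn by adding Gumbel noise to the class logits and applying a temperature-controlled softmax; because this map is differentiable in the parameters, autodiff propagates gradients through the sample. A minimal PyTorch sketch, with illustrative logits and temperature:

```python
import torch

def sample_concrete(logits, tau):
    """Reparameterized sample from a Concrete / Gumbel-Softmax distribution.

    logits: unnormalized log-probabilities of the K discrete states.
    tau:    temperature; as tau -> 0, samples approach one-hot vectors.
    """
    # Gumbel(0, 1) noise via the inverse CDF of uniform samples.
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u))
    # Differentiable relaxation of argmax(logits + gumbel).
    return torch.softmax((logits + gumbel) / tau, dim=-1)

logits = torch.log(torch.tensor([0.2, 0.3, 0.5])).requires_grad_()
y = sample_concrete(logits, tau=0.5)    # a point on the simplex, not one-hot
loss = (y * torch.arange(3.0)).sum()    # any differentiable objective
loss.backward()                         # low-variance, biased gradient w.r.t. logits
print(y, logits.grad)
```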
The Gumbel-Softmax is a continuous distribution over the simplex that is often used as a relaxation of discrete distributions. Because it can be readily interpreted and easily reparameterized, it enjoys widespread use. We propose a modular, more flexible family of reparameterizable distributions in which Gaussian noise is transformed into a one-hot approximation through an invertible function. This invertible function is composed of a modified softmax and can incorporate diverse transformations that serve different specific purposes. For example, a stick-breaking procedure allows us to extend the reparameterization trick to distributions with countably infinite support, enabling the use of our distribution alongside nonparametric models, while normalizing flows let us increase the flexibility of the distribution. Our construction enjoys theoretical advantages over the Gumbel-Softmax, such as a closed-form KL divergence, and significantly outperforms it in a variety of experiments. Our code is available at https://github.com/cunningham-lab/igr.
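A rough sketch of the idea, assuming a simplified delta-augmented softmax as the invertible map from (K-1)-dimensional Gaussian noise to the interior of the K-simplex; the paper's actual modified softmax and its composition with further transforms differ in detail:

```python
import torch

def igr_sample(mu, sigma, tau=1.0, delta=1.0):
    """Sketch of an invertible Gaussian reparameterization.

    Maps (K-1)-dimensional Gaussian noise to a point in the interior
    of the K-simplex via a softmax-like invertible transformation.
    """
    eps = torch.randn_like(mu)           # fixed N(0, I) noise
    y = mu + sigma * eps                 # reparameterized Gaussian
    e = torch.exp(y / tau)
    denom = e.sum(dim=-1, keepdim=True) + delta
    w_head = e / denom                   # first K-1 coordinates
    w_last = delta / denom               # remaining mass -> K-th coordinate
    return torch.cat([w_head, w_last], dim=-1)

mu = torch.zeros(4, requires_grad=True)      # K = 5 categories
sigma = torch.ones(4)
w = igr_sample(mu, sigma)
w.max().backward()                           # gradients flow to mu as usual
print(w.sum(), mu.grad)                      # w sums to 1
```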
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
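The straight-through variant of this estimator uses a hard one-hot sample in the forward pass while backpropagating through the soft relaxation, with the temperature annealed toward the categorical limit over training. A minimal PyTorch sketch (the annealing schedule itself is problem-dependent and omitted):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 10, requires_grad=True)  # batch of 8, 10 categories
tau = 1.0                                        # annealed toward ~0 during training

y_soft = F.gumbel_softmax(logits, tau=tau, hard=False)
# Straight-through trick: the forward pass uses the hard one-hot sample,
# the backward pass uses the gradient of the soft relaxation.
index = y_soft.argmax(dim=-1, keepdim=True)
y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
y = y_hard - y_soft.detach() + y_soft
# Equivalent built-in: y = F.gumbel_softmax(logits, tau=tau, hard=True)
```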
How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
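The reparameterization writes z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I), so a single-sample Monte Carlo estimate of the variational lower bound is differentiable end to end. A minimal sketch with a Gaussian encoder and Bernoulli decoder (network sizes are illustrative, not the paper's):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=200):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = torch.tanh(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)               # noise with a fixed distribution
        z = mu + torch.exp(0.5 * logvar) * eps   # reparameterized sample
        logits = self.dec(z)
        # Negative ELBO: reconstruction term plus analytic Gaussian KL.
        rec = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu ** 2 - logvar.exp())
        return rec + kl

model = VAE()
x = torch.rand(64, 784)        # stand-in for a batch of binarized data
loss = model(x)
loss.backward()                # unbiased stochastic gradient of the bound
```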
This work in progress aims to provide a unified introduction to statistical learning, building up slowly from classical models such as the GMM and HMM to modern neural networks such as the VAE and diffusion models. There are many internet resources today that explain this or that new machine-learning algorithm in isolation, but they do not (and cannot, in so brief a space) connect these algorithms either with each other or with the classical literature on statistical models out of which the modern algorithms emerged. Also conspicuously lacking is a single notational system which, although unobjectionable to those already familiar with the material (like the author of these posts), poses a significant barrier to the novice's entry. Likewise, my aim is to assimilate the various models, wherever possible, into a single framework for inference and learning, showing how (and why) one model can be changed into another with minimal alteration (some of these connections are novel, others are from the literature). Some background is of course necessary. I assume the reader is familiar with basic multivariate calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more or less straight-line path from the basics to the extremely powerful new models of the last decade. The goal, then, is to complement, not replace, comprehensive texts such as Bishop's \emph{Pattern Recognition and Machine Learning}, which is now 15 years old.
Probability distributions allow practitioners to discover hidden structure in data and to build models that solve supervised learning problems using limited data. The focus of this report is the variational autoencoder, a method for learning the probability distribution of large, complex datasets. The report provides a theoretical understanding of variational autoencoders and consolidates the current research in the field. It is divided into multiple chapters: the first chapter introduces the problem, describes variational autoencoders, and identifies key research directions in the field. Chapters 2, 3, 4, and 5 dive into the details of each of the key research areas. Chapter 6 concludes the report and suggests directions for future work. A reader who has a basic grasp of machine learning but wants to understand the general themes of machine learning research can benefit from the report. The report explains the central ideas of learning probability distributions, what has been done to make this tractable, and goes into detail about how deep learning is currently applied. The report also serves as a gentle introduction for someone looking to contribute to this subfield.
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation (rules for gradient backpropagation through stochastic variables) and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.
Autoencoding Variational Bayes (AEVB) is a powerful and general algorithm for fitting latent-variable models (a promising direction for unsupervised learning), and is well known for training variational autoencoders (VAEs). In this tutorial, we focus on motivating AEVB from the classic Expectation Maximization (EM) algorithm rather than from deterministic autoencoders. Although natural and somewhat self-evident, the connection between EM and AEVB is not emphasized in the recent deep learning literature, and we believe that emphasizing it can improve the community's understanding of AEVB. In particular, we find it helpful to view (1) optimizing the evidence lower bound (ELBO) with respect to the inference parameters as an approximate E-step, and (2) optimizing the ELBO with respect to the generative parameters as an approximate M-step; doing both simultaneously, as in AEVB, then amounts to tightening and pushing up the ELBO at the same time. We discuss how the approximate E-step can be interpreted as performing variational inference. Important concepts such as amortization and the reparameterization trick are discussed in detail. Finally, we derive from scratch the AEVB training procedures for one non-deep and several deep latent-variable models, including the VAE, the conditional VAE, the Gaussian-mixture VAE, and the variational RNN. We hope readers will recognize AEVB as a general algorithm that can be used to fit a wide range of latent-variable models (not just the VAE), and will apply AEVB to such models arising in their own fields of research. PyTorch code for all of the included models is publicly available.
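On this view, one AEVB update can be read as an approximate E-step (a gradient step on the ELBO with respect to the inference parameters phi) interleaved with an approximate M-step (a gradient step with respect to the generative parameters theta). A schematic sketch, assuming a hypothetical `elbo(x)` that returns a single-sample ELBO estimate for a model like the VAE above:

```python
import torch

def aevb_step(x, elbo, opt_phi, opt_theta):
    """One AEVB update, written as alternating approximate E- and M-steps."""
    # Approximate E-step: tighten the bound w.r.t. inference parameters phi.
    opt_phi.zero_grad()
    (-elbo(x)).backward()    # fresh single-sample ELBO estimate
    opt_phi.step()

    # Approximate M-step: push up the bound w.r.t. generative parameters theta.
    opt_theta.zero_grad()
    (-elbo(x)).backward()
    opt_theta.step()
    # AEVB proper takes both steps simultaneously from a single backward pass.

# Usage sketch: opt_phi   = torch.optim.Adam(encoder.parameters(), lr=1e-3)
#               opt_theta = torch.optim.Adam(decoder.parameters(), lr=1e-3)
```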
The core principle of variational inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem. This property makes VI faster than several sampling-based techniques. However, traditional VI algorithms do not scale to large datasets and cannot readily perform inference on unseen data points without re-running the optimization process. Recent developments in the field, such as stochastic, black-box, and amortized VI, have helped address these issues. Generative modeling tasks nowadays make wide use of amortized VI for its efficiency and scalability, as it utilizes a parameterized function to learn the approximate posterior density parameters. In this paper, we review the mathematical foundations of various VI techniques to form the basis for understanding amortized VI. In addition, we provide an overview of recent trends that address several issues of amortized VI, such as the amortization gap, generalization issues, inconsistent representation learning, and posterior collapse. Finally, we analyze alternative divergence measures that improve VI optimization.
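The difference between classic and amortized VI is whether each data point gets its own variational parameters or a shared network predicts them. A schematic sketch contrasting the two regimes (all names and sizes are illustrative):

```python
import torch
import torch.nn as nn

n, x_dim, z_dim = 1000, 10, 2

# Classic (non-amortized) VI: one (mu, logvar) pair per data point,
# optimized directly; new data points require re-running the optimization.
per_point_mu = nn.Parameter(torch.zeros(n, z_dim))
per_point_logvar = nn.Parameter(torch.zeros(n, z_dim))

# Amortized VI: a shared inference network maps any x to (mu, logvar),
# so unseen data points are handled with a single forward pass.
class InferenceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * z_dim))
    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

encoder = InferenceNet()
mu, logvar = encoder(torch.randn(5, x_dim))   # works for any x, seen or not
```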
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
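In the parametric view, one fixes a family q_lambda and follows stochastic gradients of the ELBO, E_q[log p(x, z) - log q(z)], to move q toward the posterior; no integration is required beyond Monte Carlo sampling. A toy sketch fitting a diagonal Gaussian to an unnormalized log-density (the target `log_joint` is an arbitrary illustrative choice):

```python
import math
import torch

def log_joint(z):
    # Unnormalized log-density of a toy correlated-Gaussian target.
    return -0.5 * (z[..., 0] ** 2 + 4.0 * (z[..., 1] - 0.5 * z[..., 0]) ** 2)

mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    eps = torch.randn(64, 2)                 # base noise
    z = mu + log_sigma.exp() * eps           # reparameterized samples from q
    # Monte Carlo ELBO: E_q[log p(x, z)] plus the entropy of the Gaussian q.
    entropy = (log_sigma + 0.5 * math.log(2 * math.pi * math.e)).sum()
    elbo = log_joint(z).mean() + entropy
    opt.zero_grad()
    (-elbo).backward()
    opt.step()
```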
Although deep generative models have succeeded in image processing, natural language processing, and reinforcement learning, training that involves discrete random variables remains challenging due to the high variance of the gradient estimation process. Monte Carlo sampling is a common solution used in most variance reduction approaches; however, it involves time-consuming resampling and multiple function evaluations. We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead. This estimator is inspired by the essential properties of the Straight-Through Gumbel-Softmax; we identify these properties and show via an ablation study that they are essential. Experiments demonstrate that the proposed GST estimator achieves better performance than strong baselines on two discrete deep generative modeling tasks: MNIST-VAE and ListOps.
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
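A well-known consequence of measuring closeness by KL(q || p) with a fully factorized (mean-field) q is variance underestimation; for a Gaussian target the mean-field optimum is available in closed form (each variance is the inverse of the corresponding diagonal element of the target's precision matrix), which a few lines of NumPy can verify (the covariance values here are chosen arbitrarily):

```python
import numpy as np

# Correlated 2-D Gaussian target p = N(0, Sigma).
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
Lambda = np.linalg.inv(Sigma)            # precision matrix

# Mean-field optimum for KL(q || p): var_i = 1 / Lambda_ii.
q_var = 1.0 / np.diag(Lambda)
print("target marginal variances:", np.diag(Sigma))   # [1.0, 1.0]
print("mean-field variances:     ", q_var)            # [0.36, 0.36]
```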
Current domain-independent classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge-acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, its knowledge is encoded in a subsymbolic representation that is incompatible with symbolic systems such as planners. We propose Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of the transitions allowed in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when given a pair of images representing the initial and goal states (planning inputs), Latplan finds a plan to the goal state in the symbolic latent space and returns a visualized plan execution. We evaluate Latplan using image-based versions of six planning domains: 8-Puzzle, 15-Puzzle, Blocksworld, Sokoban, and two variants of LightsOut.
Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations. Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
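The defining computation is the change-of-variables formula: if x = f(z) with z drawn from the base distribution and f bijective, then log p(x) = log p_base(z) - log|det df/dz|, accumulated across the chain of transformations. A minimal sketch with element-wise affine bijections, whose log-determinant is just a sum of log-scales (purely illustrative; practical flows interleave coupling, permutation, or autoregressive layers, but the bookkeeping is identical):

```python
import torch

class AffineFlow:
    """Element-wise affine bijection x = z * exp(log_scale) + shift."""
    def __init__(self, dim):
        self.log_scale = torch.zeros(dim, requires_grad=True)
        self.shift = torch.zeros(dim, requires_grad=True)

    def inverse(self, x):
        z = (x - self.shift) * (-self.log_scale).exp()
        log_det = -self.log_scale.sum()      # diagonal Jacobian
        return z, log_det

def log_prob(x, flows, base):
    """Density evaluation: invert the chain and apply change of variables."""
    total = torch.zeros(x.shape[0])
    for flow in reversed(flows):
        x, ld = flow.inverse(x)
        total = total + ld
    return base.log_prob(x).sum(-1) + total

base = torch.distributions.Normal(0.0, 1.0)
flows = [AffineFlow(2), AffineFlow(2)]
x = torch.randn(16, 2)
print(log_prob(x, flows, base))   # differentiable w.r.t. flow parameters
```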
Bayesian structure learning allows inferring Bayesian network structure from data while reasoning about epistemic uncertainty, a key element towards enabling active causal discovery and designing interventions in real-world systems. In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation. In contrast to existing work, DiBS is agnostic to the form of the local conditional distributions and allows for joint posterior inference of both the graph structure and the conditional distribution parameters. This makes our formulation directly applicable to posterior inference of complex Bayesian network models, e.g., with nonlinear dependencies encoded by neural networks. Using DiBS, we devise an efficient, general-purpose variational inference method for approximating distributions over structural models. In evaluations on simulated and real-world data, our method significantly outperforms related approaches to joint posterior inference.
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
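A concrete member of this family is the planar flow, f(z) = z + u * tanh(w^T z + b), whose Jacobian log-determinant has a closed form, so samples from a simple initial q0 can be pushed through a stack of such maps while tracking the density correction that enters the ELBO. A sketch (the invertibility constraint on u is omitted for brevity):

```python
import torch

class PlanarFlow(torch.nn.Module):
    """Planar flow f(z) = z + u * tanh(w.z + b)."""
    def __init__(self, dim):
        super().__init__()
        self.u = torch.nn.Parameter(torch.randn(dim) * 0.1)
        self.w = torch.nn.Parameter(torch.randn(dim) * 0.1)
        self.b = torch.nn.Parameter(torch.zeros(1))

    def forward(self, z):
        a = z @ self.w + self.b                          # (batch,)
        f = z + self.u * torch.tanh(a).unsqueeze(-1)
        # log|det(I + u psi^T)| = log|1 + u . psi|, psi = (1 - tanh^2) w
        psi = (1 - torch.tanh(a) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log(torch.abs(1 + psi @ self.u))
        return f, log_det

z0 = torch.randn(32, 2)                                  # samples from simple q0
log_q = -0.5 * (z0 ** 2).sum(-1) - torch.log(torch.tensor(2 * torch.pi))
flow = PlanarFlow(2)
z1, log_det = flow(z0)
log_q = log_q - log_det    # density of transformed samples, used in the ELBO
```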
Outstanding achievements of graph neural networks for spatiotemporal time series analysis show that relational constraints introduce an effective inductive bias into neural forecasting architectures. Often, however, the relational information characterizing the underlying data-generating process is unavailable and the practitioner is left with the problem of inferring from data which relational graph to use in the subsequent processing stages. We propose novel, principled yet practical probabilistic score-based methods that learn the relational dependencies as distributions over graphs while maximizing task performance end to end. The proposed graph learning framework is based on consolidated variance reduction techniques for Monte Carlo score-based gradient estimation, is theoretically grounded, and, as we show, effective in practice. In this paper, we focus on the time series forecasting problem and show that, by tailoring the gradient estimators to the graph learning problem, we are able to achieve state-of-the-art performance while controlling the sparsity of the learned graph and the computational scalability. We empirically assess the effectiveness of the proposed method on synthetic and real-world benchmarks, showing that the proposed solution can be used as a stand-alone graph identification procedure as well as a graph learning component of an end-to-end forecasting architecture.
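Score-based estimators differentiate through the distribution over graphs rather than the sampled graph itself: for edge parameters theta, grad E[L] = E[L(G) * grad log p(G | theta)], and a baseline subtracts a constant to cut variance. A schematic sketch with independent Bernoulli edges and a moving-average baseline (the toy loss and all names are placeholders, not the authors' implementation):

```python
import torch

n_nodes = 5
logits = torch.zeros(n_nodes, n_nodes, requires_grad=True)  # edge scores
opt = torch.optim.Adam([logits], lr=0.01)
baseline = 0.0                                              # running mean of the loss

def task_loss(adj):
    # Placeholder for the forecasting loss of a model run on graph `adj`.
    return (adj.sum() - n_nodes) ** 2

for step in range(500):
    probs = torch.sigmoid(logits)
    adj = torch.bernoulli(probs)                 # sampled discrete graph
    loss = task_loss(adj)
    # Score-function gradient with a moving-average baseline.
    log_p = (adj * torch.log(probs + 1e-8)
             + (1 - adj) * torch.log(1 - probs + 1e-8)).sum()
    surrogate = (loss.detach() - baseline) * log_p
    opt.zero_grad()
    surrogate.backward()
    opt.step()
    baseline = 0.9 * baseline + 0.1 * loss.item()
```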
Variational inference has become a widely used method to approximate posteriors in complex latent variables models. However, deriving a variational inference algorithm generally requires significant model-specific analysis, and these efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a "black box" variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.
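Black-box VI only needs samples from q and the value of log p(x, z): the noisy gradient is grad ELBO ~ (1/S) sum_s grad log q(z_s) * (log p(x, z_s) - log q(z_s)), so no model-specific derivation is required. A minimal sketch with a Gaussian q; the model enters only through `log_joint` (here an arbitrary toy target), and the paper's Rao-Blackwellization and control variates are omitted:

```python
import torch

def log_joint(z):
    # Black box: any model's log p(x, z); here a toy Gaussian posterior.
    return -0.5 * ((z - 3.0) ** 2).sum(-1)

mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    z = q.sample((32,))                          # S = 32 samples, no reparameterization
    log_q = q.log_prob(z).sum(-1)
    # Score-function estimator: weight grad log q by the (detached) ELBO term.
    weight = (log_joint(z) - log_q).detach()
    surrogate = (weight * log_q).mean()
    opt.zero_grad()
    (-surrogate).backward()
    opt.step()
```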
We propose a variance reduction framework for variational inference using the Multilevel Monte Carlo (MLMC) method. Our framework is built on reparameterized gradient estimators and "recycles" parameters obtained from the past update history of the optimization. The framework also provides a new optimization algorithm based on stochastic gradient descent (SGD) that adaptively estimates the sample size used for gradient estimation according to the ratio of the gradient variances. Theoretically, with our method the variance of the gradient estimator decreases as optimization proceeds, and learning-rate scheduling helps improve convergence. We also show that, in terms of the \textit{signal-to-noise} ratio, our method can improve the quality of gradient estimation under a learning-rate scheduler by increasing the initial sample size. Finally, through experimental comparisons with baseline methods on several benchmark datasets, we confirm that our method achieves faster convergence and lower gradient-estimator variance than the other methods.
The integration of discrete algorithmic components in deep learning architectures has numerous applications. Recently, Implicit Maximum Likelihood Estimation (IMLE; Niepert, Minervini, and Franceschi 2021), a class of gradient estimators for discrete exponential-family distributions, was proposed by combining implicit differentiation through perturbation with the path-wise gradient estimator. However, due to the finite-difference approximation of the gradients, it is especially sensitive to the choice of the finite-difference step size, which needs to be specified by the user. In this work, we present Adaptive IMLE (AIMLE), the first adaptive gradient estimator for complex discrete distributions: it adaptively identifies the target distribution for IMLE by trading off the density of gradient information with the degree of bias in the gradient estimates. We empirically evaluate our estimator on synthetic examples, as well as on Learning to Explain, discrete variational autoencoder, and neural relational inference tasks. In our experiments, we show that our adaptive gradient estimator produces faithful estimates while requiring orders of magnitude fewer samples than other gradient estimators.