At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit (16; 4; 7; 13; 6), thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function $f_\theta$ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describing the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function $f_\theta$ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally, we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.
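For orientation, a minimal sketch of the two objects named in the abstract, in our own notation rather than the paper's: for a network $f_\theta$ with scalar output (for simplicity), parameters $\theta = (\theta_1, \dots, \theta_P)$, and training set $(x_i, y_i)_{i \le n}$, the Neural Tangent Kernel is

$$\Theta_\theta(x, x') = \sum_{p=1}^{P} \partial_{\theta_p} f_\theta(x)\, \partial_{\theta_p} f_\theta(x'),$$

and, up to normalization conventions, gradient flow $\dot\theta = -\nabla_\theta C(f_\theta)$ on a functional cost $C(f) = \frac{1}{n}\sum_{i=1}^{n} c\big(f(x_i), y_i\big)$ makes the network function follow the kernel gradient,

$$\partial_t f_{\theta_t}(x) = -\frac{1}{n}\sum_{i=1}^{n} \Theta_{\theta_t}(x, x_i)\, \partial_1 c\big(f_{\theta_t}(x_i), y_i\big).$$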
In this work, we study the Neural Tangent Kernel (NTK) of matrix product states (MPS) and the convergence of the NTK in the infinite-bond-dimension limit. We prove that during the gradient-descent (training) process, as well as at initialization, the NTK of an MPS asymptotically converges to a constant matrix as the bond dimension goes to infinity, by observing that the variation of the tensors in the MPS vanishes asymptotically during training in the infinite limit. By showing the positive-definiteness of the MPS NTK, the convergence of the MPS during training in function space (the space of functions represented by MPS) is guaranteed without any extra assumption on the data set. We then consider the settings of (supervised) regression with root mean squared error (RMSE) and the (unsupervised) Born machine (BM) and analyze their dynamics in the infinite-bond-dimension limit. Ordinary differential equations (ODEs) describing the dynamics of the MPS responses in RMSE regression and in the BM are derived and solved in closed form. For the regression, we consider a Mercer kernel (the Gaussian kernel) and find that the evolution of the mean of the MPS response follows the largest eigenvalue of the NTK. Due to the orthogonality of the kernel functions in the BM, the evolution of different modes (samples) decouples, and the 'characteristic time' of convergence in training is obtained.
The logit outputs of a feedforward neural network at initialization are conditionally Gaussian, given a random covariance matrix defined by the penultimate layer. In this work, we study the distribution of this random matrix. Recent work has shown that shaping the activation function as network depth grows large is necessary for this covariance matrix to be non-degenerate. However, the current infinite-width-style understanding of this shaping method is unsatisfactory for large depth: infinite-width analyses ignore the microscopic fluctuations from layer to layer, but these fluctuations accumulate over many layers. To overcome this shortcoming, we study the random covariance matrix in the shaped infinite-depth-and-width limit. We identify the precise scaling of the activation function necessary to arrive at a non-trivial limit, and show that the random covariance matrix is governed by a stochastic differential equation (SDE) that we call the Neural Covariance SDE. Using simulations, we show that the SDE closely matches the distribution of the random covariance matrix of finite networks. Additionally, we recover an if-and-only-if condition for exploding and vanishing norms of large shaped networks based on the activation function.
A seminal work [Jacot et al., 2018] showed that training a neural network under a specific parameterization is equivalent to performing a particular kernel method as the width goes to infinity. This equivalence opened a promising direction for applying the rich literature on kernel methods to neural networks, which are otherwise hard to analyze. This survey covers key results on the kernel correspondence as the width goes to infinity, finite-width corrections, applications, and a discussion of the limitations of the corresponding methods.
Quantized neural networks have attracted much attention because they reduce space and computational complexity during inference. Moreover, there has been folklore that quantization acts as an implicit regularizer and hence can improve the generalizability of neural networks, yet no existing work has formalized this interesting folklore. In this paper, we model binary weights in a neural network as random variables produced by stochastic rounding and study the distribution propagation across different layers of the network. We propose a quasi neural network to approximate this distribution propagation, namely a neural network with continuous parameters and a smooth activation function. We derive the neural tangent kernel (NTK) of this quasi neural network and show that the eigenvalues of the NTK decay at approximately an exponential rate, comparable to that of a Gaussian kernel with randomized scale. This in turn indicates that the reproducing kernel Hilbert space (RKHS) of a binary-weight neural network covers a strict subset of functions compared with a real-valued-weight neural network. We use experiments to verify that our proposed quasi neural network approximates binary-weight neural networks well. Furthermore, binary-weight neural networks exhibit a lower generalization gap than real-valued-weight neural networks, which is analogous to the difference between the Gaussian kernel and the Laplacian kernel.
Recent progress in analyzing the training dynamics of overparameterized neural networks has mostly focused on wide networks and therefore does not adequately address the role of depth in deep learning. In this work, we present the first training guarantee for infinitely deep but narrow neural networks. We study the infinite-depth limit of multilayer perceptrons (MLPs) with a specific initialization and establish trainability guarantees using NTK theory. We then extend the analysis to infinitely deep convolutional neural networks (CNNs) and conduct brief experiments.
We propose a novel theoretical framework for generative adversarial networks (GANs). We reveal a fundamental flaw of previous analyses which, by incorrectly modeling GANs' training scheme, are subject to ill-defined discriminator gradients. We overcome this issue, which impedes a principled study of GAN training, by taking the discriminator's architecture into account within our framework. To this end, we leverage the theory of infinite-width neural networks for the discriminator via its Neural Tangent Kernel. We characterize the trained discriminator for a wide range of losses and establish general differentiability properties of the network. From this, we derive new insights about the convergence of the generated distribution, advancing our understanding of GANs' training dynamics. We corroborate these results with an analysis toolkit based on our framework and reveal intuitions that are consistent with GAN practice.
A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
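As a concrete illustration of the linearization referred to above, here is a minimal JAX sketch (ours, not the authors' code; the toy two-layer model, its scaling, and all names are illustrative) of the first-order Taylor expansion of a network around its initial parameters:

```python
import jax
import jax.numpy as jnp

def f(params, x):
    # Toy two-layer MLP with 1/sqrt(fan-in) scaling; purely illustrative.
    W1, W2 = params
    h = jnp.tanh(W1 @ x / jnp.sqrt(x.shape[0]))
    return (W2 @ h) / jnp.sqrt(h.shape[0])

def f_lin(params, params0, x):
    # First-order Taylor expansion of f around params0:
    #   f_lin(theta) = f(theta0) + <grad_theta f(theta0), theta - theta0>
    dparams = jax.tree_util.tree_map(lambda a, b: a - b, params, params0)
    y0, jvp_out = jax.jvp(lambda p: f(p, x), (params0,), (dparams,))
    return y0 + jvp_out

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
width, d_in = 512, 8
params0 = (jax.random.normal(key1, (width, d_in)),
           jax.random.normal(key2, (1, width)))
x = jnp.ones(d_in)

# Small parameter perturbation: for wide networks the two outputs nearly agree.
params = jax.tree_util.tree_map(lambda w: w + 1e-2 * jnp.ones_like(w), params0)
print(f(params, x), f_lin(params, params0, x))
```

In the infinite-width limit described in the abstract, the agreement between `f` and `f_lin` holds not only near initialization but throughout gradient-based training.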
In a series of recent theoretical works, it was shown that strongly overparameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this "lazy training" phenomenon is not specific to overparameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that "lazy training" is behind the many successes of neural networks in difficult high dimensional tasks.
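Restating the mechanism in our own words as a hedged summary: "lazy training" is the regime in which the trained model $h_w$ stays close to its linearization at initialization,

$$\bar h_w(x) = h_{w_0}(x) + \big\langle \nabla_w h_{w_0}(x),\, w - w_0 \big\rangle,$$

so that gradient-based training of $h$ behaves like learning with the positive-definite tangent kernel $k(x, x') = \langle \nabla_w h_{w_0}(x), \nabla_w h_{w_0}(x') \rangle$; in the paper, this regime is induced by an (often implicit) scale factor applied to the model output.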
We study the Solid Isotropic Material Penalization (SIMP) method with a density field generated by a fully connected neural network that takes the coordinates as input. In the large-width limit, we show that the use of DNNs leads to a filtering effect similar to traditional filtering techniques for SIMP, with a filter described by the Neural Tangent Kernel (NTK). This filter, however, is not invariant under translation, leading to visual artifacts and non-optimal shapes. We propose two embeddings of the input coordinates that result in spatial invariance of the NTK and of the filter. We empirically confirm our theoretical observations and study how the filter size is affected by the network architecture. Our solution can easily be applied to any other coordinate-based generation method.
To theoretically understand the behavior of trained deep neural networks, it is necessary to study the dynamics induced by gradient methods from a random initialization. However, the nonlinear and compositional structure of these models makes these dynamics difficult to analyze. To overcome these challenges, large-width asymptotics have recently emerged as a fruitful viewpoint and have led to practical insights on real-world deep networks. For two-layer neural networks, it has been understood via these asymptotics that the nature of the trained model changes with the scale of the initial random weights, ranging from a kernel regime (for large initial variance) to a feature-learning regime (for small initial variance). For deeper networks more regimes are possible, and in this paper we study in detail a specific choice of 'small' initialization corresponding to the 'mean-field' limit of neural networks, which we call integrable parameterizations (IP). First, we show that under standard i.i.d. zero-mean initialization, integrable parameterizations of neural networks with more than four layers start at a stationary point in the infinite-width limit and no learning occurs. We then propose various methods to avoid this trivial behavior and analyze the resulting dynamics in detail. In particular, one of these methods consists of using large initial learning rates, and we show that it is equivalent to a modification of the recently proposed maximal update parameterization $\mu$P. We confirm our results with numerical experiments on image classification tasks, which additionally show a strong difference in behavior between various choices of activation functions that is not yet captured by theory.
Modern neural networks often operate in a strongly overparameterized regime: they contain so many parameters that they can interpolate the training set even if the actual labels are replaced by purely random ones. Nevertheless, they achieve good prediction error on unseen data: interpolating the training set does not induce a large generalization error. Moreover, overparameterization appears to be beneficial in that it simplifies the optimization landscape. Here we study these phenomena in the context of two-layer neural networks in the neural tangent (NT) regime. We consider a simple data model, with isotropic covariate vectors in $d$ dimensions and $N$ hidden neurons. We assume that both the sample size $n$ and the dimension $d$ are large, and that they are polynomially related. Our first main result is a characterization of the eigenstructure of the empirical NT kernel in the overparameterized regime $Nd \gg n$. This characterization implies as a corollary that the minimum eigenvalue of the empirical NT kernel is bounded away from zero as soon as $Nd \gg n$, and therefore the network can exactly interpolate arbitrary labels in the same regime. Our second main result is a characterization of the generalization error of NT ridge regression, including the special case of minimum-$\ell_2$-norm interpolation. We prove that, as soon as $Nd \gg n$, the test error is well approximated by that of kernel ridge regression with respect to the infinite-width kernel. The latter is in turn well approximated by the error of polynomial ridge regression, whereby the regularization parameter is increased by a 'self-induced' term related to the high-degree components of the activation function. The polynomial degree depends on the sample size and the dimension (in particular, on $\log n / \log d$).
We prove a non-asymptotic, distribution-independent lower bound on the expected mean squared generalization error caused by label noise in ridgeless linear regression. Our lower bound generalizes a similar known result for the overparameterized (interpolating) regime. In contrast to most previous works, our analysis applies to a broad class of input distributions with almost surely full-rank feature matrices, which allows us to cover various types of deterministic or random feature maps. Our lower bound is asymptotically sharp and implies that, in the presence of label noise, ridgeless linear regression does not perform well around the interpolation threshold for any of these feature maps. We analyze the imposed assumptions in detail and provide a theory for analytic (random) feature maps. Using this theory, we can show that our assumptions are satisfied for input distributions with a (Lebesgue) density and for feature maps given by random deep neural networks with analytic activation functions such as sigmoid, tanh, softplus, or GELU. As further examples, we show that feature maps from random Fourier features and polynomial kernels also satisfy our assumptions. We complement our theory with further experimental and analytical results.
Infinite-width limits have shed light on the generalization and optimization aspects of deep learning by establishing connections between neural networks and kernel methods. Despite their importance, the practical utility of these kernel methods is limited in large-scale learning settings due to their (super-)quadratic runtime and memory complexity. Moreover, most prior work on neural kernels has focused on the ReLU activation, mainly due to its popularity but also due to the difficulty of computing such kernels for general activations. In this work, we overcome these difficulties by providing methods to work with general activations. First, we compile and expand the list of activation functions that admit exact dual-activation expressions for computing neural kernels. When exact computations are unknown, we present methods to approximate them effectively. We propose a fast sketching method that approximates any multi-layer Neural Network Gaussian Process (NNGP) kernel and Neural Tangent Kernel (NTK) matrices for a wide range of activation functions, going beyond the commonly analyzed ReLU activation. This is done by showing how to approximate the neural kernels using the truncated Hermite expansion of any desired activation function. While most prior works require data points on the unit sphere, our methods do not suffer from such limitations and are applicable to any dataset of points in $\mathbb{R}^d$. Furthermore, we provide a subspace embedding for NNGP and NTK matrices with near input-sparsity runtime and near-optimal target dimension, which applies to any \emph{homogeneous} dual activation function with a rapidly converging Taylor expansion. Empirically, with respect to exact convolutional NTK (CNTK) computation, our method achieves a $106\times$ speedup for the approximate CNTK of a 5-layer Myrtle network on the CIFAR-10 dataset.
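To unpack the dual-activation idea, here is the standard fact as we understand it (our paraphrase, not the paper's statement): for an activation $\sigma$ with Hermite expansion $\sigma = \sum_{k \ge 0} a_k h_k$ in the normalized Hermite basis, its dual activation is

$$\check\sigma(\rho) = \mathbb{E}\big[\sigma(u)\,\sigma(v)\big] = \sum_{k \ge 0} a_k^2\, \rho^k, \qquad (u, v) \sim \mathcal{N}\!\left(0, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right),$$

and truncating this series is what yields the polynomial approximations of NNGP/NTK entries that the sketching method described above builds on.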
Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. The current paper proves gradient descent achieves zero training loss in polynomial time for a deep overparameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show the Gram matrix is stable throughout the training process and this stability implies the global optimality of the gradient descent algorithm. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result.
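To make the role of the Gram matrix explicit (our summary of the standard argument, with our own normalization, not a quote from the paper): writing $u(t) = \big(f_{\theta(t)}(x_1), \dots, f_{\theta(t)}(x_n)\big)$ for the network outputs, $y$ for the labels, and $H(t)_{ij} = \big\langle \partial_\theta f_{\theta(t)}(x_i), \partial_\theta f_{\theta(t)}(x_j) \big\rangle$ for the Gram matrix, gradient flow on the squared loss gives

$$\frac{d}{dt}\big(u(t) - y\big) = -H(t)\,\big(u(t) - y\big), \qquad \frac{d}{dt}\,\|u(t) - y\|_2^2 \le -2\,\lambda_{\min}\big(H(t)\big)\,\|u(t) - y\|_2^2,$$

so if $H(t)$ remains close to a fixed positive-definite matrix throughout training, the training loss decays at a linear (i.e., geometric) rate.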
To understand how deep learning works, it is crucial to understand the training dynamics of neural networks. In this paper, we consider the training dynamics of gradient flow on kernel least-squares objectives, which is a limiting dynamics of SGD-trained neural networks. Using precise high-dimensional asymptotics, we characterize the dynamics of the fitted model in two 'worlds': in the Oracle World the model is trained on the population distribution, and in the Empirical World the model is trained on a sampled data set. We show that, under mild conditions on the kernel and the $L^2$ target regression function, the training dynamics undergo three stages characterized by the behaviors of the models in the two worlds. Our theoretical results also mathematically formalize some interesting deep learning phenomena. Specifically, in our setting we show that SGD progressively learns more complex functions and that there is a 'deep bootstrap' phenomenon: during the second stage, the test errors of the two worlds remain close even though the empirical training error is much smaller. Finally, we give a concrete example comparing the dynamics of two different kernels, which shows that faster training is not necessary for better generalization.
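For concreteness, a sketch of kernel least-squares gradient flow in our notation (normalization conventions may differ from the paper's): for a kernel $K$ and training data $(x_i, y_i)_{i \le n}$, the fitted function evolves as

$$\partial_t \hat f_t = -\frac{1}{n} \sum_{i=1}^{n} K(\cdot, x_i)\,\big(\hat f_t(x_i) - y_i\big),$$

so that on the training points, with $K_{ij} = K(x_i, x_j)$ and $\hat f_0 = 0$, one gets $\hat f_t(X) = \big(I - e^{-tK/n}\big)y$ in closed form; the Empirical World trains on such a sampled data set, while the Oracle World replaces the empirical average by its population counterpart.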
The low-dimensional manifold hypothesis posits that the data found in many applications, such as those involving natural images, lie (approximately) on low-dimensional manifolds embedded in a high-dimensional Euclidean space. In this setting, a typical neural network defines a function that takes a finite number of vectors in the embedding space as input. However, one often needs to consider evaluating the optimized network at points outside the training distribution. This paper considers the case in which the training data are distributed in a linear subspace of $\mathbb{R}^d$. We derive estimates on the variation of the learned function, defined by a neural network, in directions transverse to the subspace. We study the potential regularization effects associated with the network's depth and with noise in the codimension of the data manifold. We also present additional side effects in training due to the presence of noise.
Overparameterization is a key factor in the absence of convexity to explain the global convergence of gradient descent (GD) for neural networks. Besides the well-studied lazy regime, infinite-width (mean-field) analyses have been developed for shallow networks, using convex optimization techniques. To bridge the gap between the lazy and mean-field regimes, we study residual networks (ResNets) whose residual blocks admit a linear parameterization while still being nonlinear. Such ResNets admit both infinite-depth and infinite-width limits, encoding residual blocks in a reproducing kernel Hilbert space (RKHS). In this limit, we prove a local Polyak-Lojasiewicz inequality. Thus, every critical point is a global minimizer and a local convergence result for GD holds, retrieving the lazy regime. In contrast to other mean-field studies, it applies to both parametric and nonparametric cases under an expressivity condition on the residuals. Our analysis leads to a practical and quantified recipe: starting from a universal RKHS, apply random Fourier features to obtain a finite-dimensional parameterization satisfying, with high probability, our expressivity condition.
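For reference, a local Polyak-Lojasiewicz inequality of the kind invoked above takes the following generic form (not the paper's exact constants or norms): there exist $\mu > 0$ and a neighborhood $U$ of the initialization such that

$$\tfrac{1}{2}\,\|\nabla L(\theta)\|^2 \ge \mu\,\big(L(\theta) - \inf L\big) \quad \text{for all } \theta \in U,$$

which rules out spurious critical points in $U$ and yields linear convergence of gradient descent as long as the iterates stay in $U$.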
This paper studies the infinite-width limit of deep linear neural networks initialized with random parameters. We obtain that, when the number of neurons diverges, the training dynamics converge (in a precise sense) to the dynamics obtained from a gradient descent on an infinitely wide deterministic linear neural network. Moreover, even if the weights remain random, we get their precise law along the training dynamics, and prove a quantitative convergence result of the linear predictor in terms of the number of neurons. We finally study the continuous-time limit obtained for infinitely wide linear neural networks and show that the linear predictors of the neural network converge at an exponential rate to the minimal $\ell_2$-norm minimizer of the risk.
We consider neural networks with a single hidden layer and non-decreasing positively homogeneous activation functions like the rectified linear units. By letting the number of hidden units grow unbounded and using classical non-Euclidean regularization tools on the output weights, they lead to a convex optimization problem and we provide a detailed theoretical analysis of their generalization performance, with a study of both the approximation and the estimation errors. We show in particular that they are adaptive to unknown underlying linear structures, such as the dependence on the projection of the input variables onto a low-dimensional subspace. Moreover, when using sparsity-inducing norms on the input weights, we show that high-dimensional non-linear variable selection may be achieved, without any strong assumption regarding the data and with a total number of variables potentially exponential in the number of observations. However, solving this convex optimization problem in infinite dimensions is only possible if the non-convex subproblem of addition of a new unit can be solved efficiently. We provide a simple geometric interpretation for our choice of activation functions and describe simple conditions for convex relaxations of the finite-dimensional non-convex subproblem to achieve the same generalization error bounds, even when constant-factor approximations cannot be found. We were not able to find strong enough convex relaxations to obtain provably polynomial-time algorithms and leave open the existence or non-existence of such tractable algorithms with non-exponential sample complexities.