Graph neural networks (GNNs) are widely used for graph-structured data processing due to their powerful representation ability. It is commonly believed that GNNs can implicitly remove non-predictive noise. However, an analysis of the implicit denoising effect in graph neural networks remains open. In this work, we conduct a comprehensive theoretical study and analyze when and why implicit denoising happens in GNNs. Specifically, we study the convergence properties of the noise matrix. Our theoretical analysis suggests that implicit denoising largely depends on the connectivity, the graph size, and the GNN architecture. Moreover, we formally define and propose the adversarial graph signal denoising (AGSD) problem by extending the graph signal denoising problem. By solving such a problem, we derive a robust graph convolution, with which the smoothness of node representations and the implicit denoising effect can be enhanced. Extensive empirical evaluations verify our theoretical analyses and the effectiveness of our proposed model.
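As context for the abstract above, the following is a minimal sketch of the classical graph signal denoising (GSD) problem it extends: minimize ||F - X||_F^2 + lam * tr(F^T L F) over denoised features F. The adversarial variant (AGSD) and the robust convolution derived in the paper are not reproduced here; all names and parameters are illustrative.

```python
# Classical graph signal denoising: F* = (I + lam * L)^{-1} X, approximated by the
# fixed-point iteration F <- (X + lam * A_hat @ F) / (1 + lam), where A_hat is the
# symmetrically normalized adjacency and L = I - A_hat.
import numpy as np

def graph_signal_denoise(X, A, lam=1.0, iters=20):
    """X: (n, d) noisy node features; A: (n, n) symmetric 0/1 adjacency."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    A_hat = (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    F = X.copy()
    for _ in range(iters):
        F = (X + lam * A_hat @ F) / (1.0 + lam)
    return F

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, size=(8, 8))
    A = np.triu(A, 1); A = A + A.T            # symmetric, no self-loops
    X = rng.normal(size=(8, 3))               # noisy signal
    print(graph_signal_denoise(X, A, lam=2.0).shape)
```

Larger lam enforces stronger smoothing along edges, which is exactly the knob that trades off fidelity to the noisy input against smoothness of the denoised signal.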
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs, as a result of their ability to exploit node features and topological information simultaneously. However, most GNNs implicitly assume that the labels of a node and its neighbors in a graph are the same or consistent, which does not hold in heterophilic graphs, where the labels of linked nodes may differ. Hence, when the topology is non-informative for label prediction, ordinary GNNs may perform significantly worse than simply applying multi-layer perceptrons (MLPs) on each node. To tackle the above problem, we propose a new $p$-Laplacian based GNN model, termed $^p$GNN, whose message passing mechanism is derived from a discrete regularization framework and can be theoretically explained as an approximation of a polynomial graph filter defined on the spectral domain of $p$-Laplacians. The spectral analysis shows that the new message passing mechanism works simultaneously as low-pass and high-pass filters, thus making $^p$GNNs effective on both homophilic and heterophilic graphs. Empirical studies on real-world and synthetic datasets validate our findings and demonstrate that $^p$GNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks. Moreover, $^p$GNNs can adaptively learn aggregation weights and are robust to noisy edges.
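To make the $p$-Laplacian idea concrete, here is a hedged sketch of one discrete $p$-Laplacian smoothing step: edge weights are rescaled by ||f_u - f_v||^{p-2}, so p = 2 recovers ordinary graph-Laplacian smoothing, while p < 2 down-weights edges between dissimilar (e.g. heterophilic) node pairs. This is a generic illustration of the underlying regularization, not the exact $^p$GNN update rule.

```python
# One gradient-flow step on the graph p-Dirichlet energy
#   sum_{(u,v) in E} A_uv * ||f_u - f_v||^p   (up to constants).
import numpy as np

def p_laplacian_step(F, A, p=1.5, step=0.5, eps=1e-6):
    diff_norm = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)  # (n, n)
    W = A * np.maximum(diff_norm, eps) ** (p - 2)   # feature-dependent edge reweighting
    D = np.diag(W.sum(axis=1))
    L_p = D - W                                     # reweighted graph Laplacian
    return F - step * (L_p @ F)                     # smooth features along reweighted edges

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((10, 10)) < 0.3).astype(float)
    A = np.triu(A, 1); A = A + A.T
    F = rng.normal(size=(10, 4))
    print(p_laplacian_step(F, A, p=1.2).shape)
```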
Data augmentation has been widely used for image data and linguistic data but remains under-explored for graph neural networks (GNNs). Existing methods focus on augmenting graph data from a global perspective and largely fall into two categories: structural manipulation and adversarial training with feature noise injection. However, recent graph data augmentation methods overlook the importance of local information for the message passing mechanism of GNNs. In this work, we introduce local augmentation, which enhances the locality of node representations through their subgraph structures. Specifically, we model data augmentation as a feature generation process. Given the features of a node, our local augmentation approach learns the conditional distribution of its neighbors' features and generates more neighbor features to boost the performance of downstream tasks. Based on local augmentation, we further design a novel framework, LA-GNN, which can be applied to any GNN model in a plug-and-play manner. Extensive experiments and analyses show that local augmentation consistently yields performance improvements for various GNN architectures across diverse benchmarks.
A central challenge of building more powerful Graph Neural Networks (GNNs) is the oversmoothing phenomenon, where increasing the network depth leads to homogeneous node representations and thus worse classification performance. While previous works have only demonstrated that oversmoothing is inevitable when the number of graph convolutions tends to infinity, in this paper, we precisely characterize the mechanism behind the phenomenon via a non-asymptotic analysis. Specifically, we distinguish between two different effects when applying graph convolutions -- an undesirable mixing effect that homogenizes node representations in different classes, and a desirable denoising effect that homogenizes node representations in the same class. By quantifying these two effects on random graphs sampled from the Contextual Stochastic Block Model (CSBM), we show that oversmoothing happens once the mixing effect starts to dominate the denoising effect, and the number of layers required for this transition is $O(\log N/\log (\log N))$ for sufficiently dense graphs with $N$ nodes. We also extend our analysis to study the effects of Personalized PageRank (PPR) on oversmoothing. Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth and are outperformed by the graph convolution approach on certain graphs. Finally, we support our theoretical results with numerical experiments, which further suggest that the oversmoothing phenomenon observed in practice may be exacerbated by the difficulty of optimizing deep GNN models.
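The mixing-versus-denoising trade-off described above can be observed numerically. The following is a toy illustration on a two-class contextual stochastic block model (CSBM): repeated mean-aggregation first shrinks within-class variance (denoising) and eventually collapses the class means (mixing). Parameters are illustrative, not those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p_in, p_out, sigma = 400, 0.05, 0.01, 1.0
y = np.repeat([0, 1], N // 2)
mu = np.where(y[:, None] == 0, -1.0, 1.0)               # class means +-1 (1-D feature)
X = mu + sigma * rng.normal(size=(N, 1))

same = (y[:, None] == y[None, :])
P = np.where(same, p_in, p_out)
A = (rng.random((N, N)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T + np.eye(N)              # symmetric, with self-loops
A_hat = A / A.sum(axis=1, keepdims=True)                 # row-normalized mean aggregation

H = X.copy()
for layer in range(1, 11):
    H = A_hat @ H                                        # one parameter-free graph convolution
    gap = abs(H[y == 0].mean() - H[y == 1].mean())       # between-class separation (mixing)
    spread = 0.5 * (H[y == 0].std() + H[y == 1].std())   # within-class noise (denoising)
    print(f"layer {layer:2d}: class gap {gap:.3f}, within-class std {spread:.3f}")
```

With these settings the within-class spread drops quickly in the first few layers, after which the class gap itself starts to shrink, mirroring the transition the abstract characterizes.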
Graph neural networks (GNNs) are achieving remarkable performance in a variety of application domains. However, GNNs are vulnerable to noise and adversarial attacks in the input data. Making GNNs robust against noise and adversarial attacks is an important problem. Existing defense methods for GNNs are computationally demanding and not scalable. In this paper, we propose a generic framework for robustifying GNNs, termed weighted Laplacian GNN (RWL-GNN). The method combines weighted graph Laplacian learning with GNN implementation. The proposed method benefits from the positive semi-definiteness property of the Laplacian matrix, together with smoothness and latent-feature priors, by formulating a unified optimization framework, which ensures that adversarial/noisy edges are discarded and relevant connections in the graph are appropriately weighted. For demonstration, the experiments are conducted with the graph convolutional neural network (GCNN) architecture; however, the proposed framework is easily amenable to any existing GNN architecture. Simulation results on benchmark datasets establish the efficacy of the proposed method in terms of both accuracy and computational efficiency. The code can be accessed at https://github.com/bharat-runwal/rwl-gnn.
Graph attention networks (GATs) are useful deep learning models for processing graph data. However, recent works show that the classical GAT is vulnerable to adversarial attacks: its performance drops sharply under slight perturbations. Therefore, how to enhance the robustness of GAT is a critical problem. This paper proposes a robust GAT (RoGAT) that improves the robustness of GAT through a revision of the attention mechanism. Unlike the original GAT, which uses the attention mechanism to weight different edges but is still sensitive to perturbations, RoGAT progressively adjusts dynamic attention scores to improve robustness. First, RoGAT revises edge weights based on the smoothness assumption, which is common for ordinary graphs. Second, RoGAT further revises the features to suppress feature noise. Then, an extra attention score is produced from the dynamic edge weights, which can be used to reduce the impact of adversarial attacks. Different experiments against targeted and untargeted attacks on citation data show that RoGAT outperforms most recent defense methods.
Despite the recent success of graph neural networks (GNNs), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as arising from graph heterophily or adversarial attacks. To at least partially address these issues within a simple, transparent framework, we consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms, namely proximal gradient descent and iterative reweighted least squares (IRLS). The former defines an extensible base GNN architecture that is immune to oversmoothing yet can still capture long-range dependencies by allowing arbitrary propagation steps. In contrast, the latter produces a novel attention mechanism that is explicitly anchored to an underlying end-to-end energy function, contributing stability with respect to edge uncertainty. When combined, we obtain an extremely simple yet robust model that we evaluate across disparate scenarios, including standard benchmarks, graphs with anomalous perturbations, graphs with heterophily, and graphs involving long-range dependencies. In doing so, we compare against SOTA GNN approaches that have been explicitly designed for the respective tasks, achieving competitive or superior node classification accuracy. Our code is available at https://github.com/fftyyy/twirls.
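The "propagation as descent on an energy" idea mentioned above can be sketched as follows: gradient descent on E(Y) = ||Y - Z||_F^2 + lam * tr(Y^T L_hat Y), with Z the output of a node-wise MLP, yields an unrolled propagation rule. This hedged sketch omits the IRLS-derived attention/reweighting; names and step sizes are illustrative.

```python
import numpy as np

def energy_propagate(Z, A, lam=1.0, step=0.25, K=16):
    deg = A.sum(axis=1)
    d = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    A_hat = (A * d[:, None]) * d[None, :]                # normalized adjacency
    Y = Z.copy()
    for _ in range(K):
        # grad E(Y) = 2 (Y - Z) + 2 * lam * (I - A_hat) Y
        grad = 2.0 * (Y - Z) + 2.0 * lam * (Y - A_hat @ Y)
        Y = Y - step * grad                              # one unrolled propagation step
    return Y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((12, 12)) < 0.3).astype(float)
    A = np.triu(A, 1); A = A + A.T
    Z = rng.normal(size=(12, 5))                         # stand-in for MLP(X)
    print(energy_propagate(Z, A).shape)
```

Because the fidelity term ||Y - Z||_F^2 anchors the iterate to the MLP output, adding more propagation steps does not drive all rows of Y to a common value, which is the sense in which such unrolled architectures resist oversmoothing.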
Graph neural networks (GNNs), as the de-facto model class for representation learning on graphs, are built upon the multi-layer perceptron (MLP) architecture with additional message passing layers to allow features to flow across nodes. While conventional wisdom largely attributes the success of GNNs to their advanced expressivity for learning desired functions on nodes' ego-graphs, we conjecture that this is \emph{not} the main cause of GNNs' superiority in node prediction tasks. This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capabilities, by introducing an intermediate model class dubbed P(ropagational)MLP, which is identical to a standard MLP in training but adopts GNN's architecture in testing. Intriguingly, we observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts across ten benchmarks and different experimental settings, despite the fact that PMLPs share the same (trained) weights with the poorly-performing MLPs. This critical finding opens a door to a brand new perspective for understanding the power of GNNs, and allows bridging GNNs and MLPs for dissecting their generalization behaviors. As an initial step to analyze PMLP, we show that its essential difference from MLP in the infinite-width limit lies in the NTK feature map in the post-training stage. Moreover, though MLP and PMLP cannot extrapolate non-linear functions for extreme OOD data, PMLP has more freedom to generalize near the training support.
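The PMLP recipe (train as an MLP, test as a GNN) is simple enough to sketch directly. The following is a minimal, assumed interface: the model is trained with propagate=False exactly like an MLP, and at inference the message-passing steps are switched on with the same trained weights. Hyperparameters and the exact placement of propagation are illustrative.

```python
import torch
import torch.nn as nn

class PMLP(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, X, A_hat=None, propagate=False):
        h = self.lin1(X)
        if propagate:                      # message passing used only at test time
            h = A_hat @ h
        h = torch.relu(h)
        h = self.lin2(h)
        if propagate:
            h = A_hat @ h
        return h

# Usage: train with model(X, propagate=False) as a plain MLP; at inference call
# model(X, A_hat, propagate=True), where A_hat is the normalized adjacency as a
# dense tensor. The weights are shared between the two modes.
```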
Graph Neural Networks (GNNs) have been predominant for graph learning tasks; however, recent studies showed that a well-known graph algorithm, Label Propagation (LP), combined with a shallow neural network can achieve comparable performance to GNNs in semi-supervised node classification on graphs with high homophily. In this paper, we show that this approach falls short on graphs with low homophily, where nodes often connect to the nodes of the opposite classes. To overcome this, we carefully design a combination of a base predictor with LP algorithm that enjoys a closed-form solution as well as convergence guarantees. Our algorithm first learns the class compatibility matrix and then aggregates label predictions using LP algorithm weighted by class compatibilities. On a wide variety of benchmarks, we show that our approach achieves the leading performance on graphs with various levels of homophily. Meanwhile, it has orders of magnitude fewer parameters and requires less execution time. Empirical evaluations demonstrate that simple adaptations of LP can be competitive in semi-supervised node classification in both homophily and heterophily regimes.
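As a hedged sketch of the idea above, soft predictions from a base predictor can be propagated while mapping neighbor beliefs through a class-compatibility matrix H (H[c, c'] roughly "how likely a neighbor of a class-c node is to be class c'"), which lets propagation help even under heterophily. The exact closed-form, convergence-guaranteed formulation of the paper is not reproduced here.

```python
import numpy as np

def compatible_lp(B, A, H, alpha=0.5, iters=50):
    """B: (n, C) base soft predictions; A: (n, n) adjacency; H: (C, C) compatibility."""
    A_hat = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # row-normalized
    Y = B.copy()
    for _ in range(iters):
        Y = (1 - alpha) * B + alpha * (A_hat @ Y @ H.T)           # propagate through H
        Y = np.clip(Y, 0, None)
        Y = Y / np.maximum(Y.sum(axis=1, keepdims=True), 1e-12)   # keep rows as distributions
    return Y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, C = 30, 2
    A = (rng.random((n, n)) < 0.1).astype(float); A = np.triu(A, 1); A = A + A.T
    B = rng.dirichlet(np.ones(C), size=n)
    H = np.array([[0.2, 0.8], [0.8, 0.2]])                         # heterophilic compatibility
    print(compatible_lp(B, A, H).argmax(axis=1)[:10])
```

With an identity-like H this reduces to ordinary label propagation; with an off-diagonal-heavy H, neighbors vote for the opposite class, which is what makes the scheme usable in low-homophily regimes.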
Graph neural networks (GNNs) have exhibited strong representation power in many graph-based tasks. In particular, decoupled structures of GNNs, such as APPNP, have become popular due to their simplicity and performance advantages. However, the end-to-end training of these GNNs makes them inefficient in terms of computation and memory consumption. To address these limitations, in this work we propose an alternating optimization framework for graph neural networks that does not require end-to-end training. Extensive experiments under different settings demonstrate that the performance of the proposed algorithm is comparable to existing state-of-the-art algorithms, but with better computational and memory efficiency. Additionally, we show that our framework can be leveraged to enhance existing decoupled GNNs.
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data. Recently, GCNs and subsequent variants have shown superior performance in various application areas on real-world datasets. Despite their success, most of the current GCN models are shallow, due to the over-smoothing problem. In this paper, we study the problem of designing and analyzing deep graph convolutional networks. We propose GCNII, an extension of the vanilla GCN model with two simple yet effective techniques: initial residual and identity mapping. We provide theoretical and empirical evidence that the two techniques effectively relieve the problem of over-smoothing. Our experiments show that the deep GCNII model outperforms the state-of-the-art methods on various semi- and full-supervised tasks. Code is available at https://github.com/chennnM/GCNII.
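A single GCNII layer, as described in the abstract, combines an initial-residual connection to the first-layer representation H0 with an identity mapping on the weight matrix: H_{l+1} = relu(((1-alpha) * A_hat @ H_l + alpha * H0) @ ((1-beta_l) * I + beta_l * W_l)). The sketch below is illustrative; dimensions and hyperparameters are assumptions, not the official implementation.

```python
import torch
import torch.nn as nn

class GCNIILayer(nn.Module):
    def __init__(self, dim, alpha=0.1, beta=0.5):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.alpha, self.beta = alpha, beta

    def forward(self, H, H0, A_hat):
        support = (1 - self.alpha) * (A_hat @ H) + self.alpha * H0       # initial residual
        out = (1 - self.beta) * support + self.beta * self.W(support)    # identity mapping
        return torch.relu(out)

# Usage: H0 is the (dimension-matched) output of the input transformation; beta
# typically decays with depth (roughly lambda / l in the paper), so deeper layers
# stay close to an identity map and resist over-smoothing.
```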
Graph neural networks (GNNs) are increasingly important given their popularity and the diversity of their applications. Yet, existing studies of their vulnerability to adversarial attacks rely on relatively small graphs. We address this gap and study how to attack and defend GNNs at scale. We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation despite optimizing over a large number of parameters that grows with the number of nodes. We show that common surrogate losses are not well-suited for global attacks on GNNs; our alternatives can double the attack strength. Moreover, to improve the reliability of GNNs, we design a robust aggregation function, Soft Median, resulting in an effective defense at all scales. We evaluate our attacks and defense with standard GNNs on graphs more than 100 times larger than those in previous work. We even scale one order of magnitude further by extending our techniques to a scalable GNN.
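The following is a hedged sketch of a soft-median-style robust aggregation: neighbor messages are weighted by a softmax over their (negative, scaled) distance to the dimension-wise median, so outlying messages injected by an attacker receive little weight while the operation stays differentiable. This follows the high-level idea named in the abstract; the constants and scaling are illustrative.

```python
import numpy as np

def soft_median(neigh, T=1.0):
    """neigh: (k, d) messages from the k neighbors of one node."""
    med = np.median(neigh, axis=0)                       # dimension-wise median
    dist = np.linalg.norm(neigh - med, axis=1)           # distance of each message to it
    w = np.exp(-dist / (T * np.sqrt(neigh.shape[1])))    # downweight far-away messages
    w = w / w.sum()
    return (w[:, None] * neigh).sum(axis=0)              # weighted (differentiable) mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 0.1, size=(8, 4))
    outlier = np.full((1, 4), 10.0)                      # adversarial message
    msgs = np.vstack([clean, outlier])
    print("mean        :", msgs.mean(axis=0).round(2))
    print("soft median :", soft_median(msgs).round(2))
```

The toy example shows that a single injected outlier shifts the plain mean substantially while the soft median stays near the clean messages.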
Graph neural networks have shown significant success in the field of graph representation learning. Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. Nevertheless, one layer of these neighborhood aggregation methods only considers immediate neighbors, and the performance decreases when going deeper to enable larger receptive fields. Several recent studies attribute this performance deterioration to the over-smoothing issue, which states that repeated propagation makes node representations of different classes indistinguishable. In this work, we study this observation systematically and develop new insights towards deeper graph neural networks. First, we provide a systematic analysis of this issue and argue that the key factor compromising the performance significantly is the entanglement of representation transformation and propagation in current graph convolution operations. After decoupling these two operations, deeper graph neural networks can be used to learn graph node representations from larger receptive fields. We further provide a theoretical analysis of the above observation when building very deep models, which can serve as a rigorous and gentle description of the over-smoothing issue. Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields. A set of experiments on citation, coauthorship, and co-purchase datasets have confirmed our analysis and insights and demonstrated the superiority of our proposed methods.
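The decoupling idea behind DAGNN can be sketched compactly: a node-wise MLP transformation first, then parameter-free propagation over K hops, and an adaptive, learnable per-node weighting of the hop-wise representations. Sizes and the exact gating form below are illustrative assumptions, not the official code.

```python
import torch
import torch.nn as nn

class DecoupledGNN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes, K=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, n_classes))
        self.gate = nn.Linear(n_classes, 1)   # scores each hop's representation per node
        self.K = K

    def forward(self, X, A_hat):
        H = self.mlp(X)                        # transformation (no propagation inside)
        hops = [H]
        for _ in range(self.K):                # propagation (no transformation inside)
            hops.append(A_hat @ hops[-1])
        Hs = torch.stack(hops, dim=1)          # (n, K+1, C)
        s = torch.sigmoid(self.gate(Hs))       # (n, K+1, 1) adaptive hop weights
        return (s * Hs).sum(dim=1)             # (n, C) combined logits
```

Because the transformation is applied only once, increasing K enlarges the receptive field without stacking more nonlinear layers, which is the property the abstract attributes the depth benefits to.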
Since graph data collected from the real world is hardly noise-free, a practical representation of graphs should be robust to noise. Existing research usually focuses on feature smoothing while leaving the geometric structure untouched. Furthermore, most work takes the L2 norm, which pursues global smoothness and limits the expressiveness of graph neural networks. This paper tailors regularizers for graph data with respect to both feature and structure noise, where the objective function is efficiently solved with the alternating direction method of multipliers (ADMM). The proposed scheme allows multiple layers to be employed without concern for over-smoothing, and convergence to the optimal solution is guaranteed. Empirical studies show that, even under heavy contamination, our model achieves significantly better performance compared with popular graph convolutions.
Graph Neural Networks (graph NNs) are a promising deep learning approach for analyzing graph-structured data. However, it is known that they do not improve (or sometimes worsen) their predictive performance as we pile up many layers and add non-linearity. To tackle this problem, we investigate the expressive power of graph NNs via their asymptotic behaviors as the layer size tends to infinity. Our strategy is to generalize the forward propagation of a Graph Convolutional Network (GCN), which is a popular graph NN variant, as a specific dynamical system. In the case of a GCN, we show that when its weights satisfy the conditions determined by the spectra of the (augmented) normalized Laplacian, its output exponentially approaches the set of signals that carry information of the connected components and node degrees only for distinguishing nodes. Our theory enables us to relate the expressive power of GCNs with the topological information of the underlying graphs inherent in the graph spectra. To demonstrate this, we characterize the asymptotic behavior of GCNs on the Erdős-Rényi graph. We show that when the Erdős-Rényi graph is sufficiently dense and large, a broad range of GCNs on it suffers from the "information loss" in the limit of infinite layers with high probability. Based on the theory, we provide a principled guideline for weight normalization of graph NNs. We experimentally confirm that the proposed weight scaling enhances the predictive performance of GCNs on real data.
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features. Many popular GNNs fail to generalize to this setting, and are even outperformed by models that ignore the graph structure (e.g., multilayer perceptrons). Motivated by this limitation, we identify a set of key designs (ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations) that boost learning from the graph structure under heterophily. We combine them into a graph neural network, H2GCN, which we use as the base method to empirically evaluate the effectiveness of the identified designs. Going beyond the traditional benchmarks with strong homophily, our empirical analysis shows that the identified designs increase the accuracy of GNNs by up to 40% and 27% over models without them on synthetic and real networks with heterophily, respectively, and yield competitive performance under homophily.
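A hedged sketch of the three designs named above: (1) ego- and neighbor-embedding separation via concatenation instead of averaging, (2) separate aggregation over 1-hop and strictly 2-hop neighborhoods, and (3) combination of intermediate representations from all rounds. Normalizations and dimensions below are illustrative, not the exact H2GCN formulation.

```python
import numpy as np

def heterophily_embed(X, A, rounds=2):
    A1 = A.copy(); np.fill_diagonal(A1, 0)                        # 1-hop, no self-loops
    A2 = ((A1 @ A1) > 0).astype(float); np.fill_diagonal(A2, 0)   # reachable in 2 hops
    A2[A1 > 0] = 0                                                # strictly 2-hop neighborhood
    norm = lambda M: M / np.maximum(M.sum(axis=1, keepdims=True), 1e-12)
    A1, A2 = norm(A1), norm(A2)

    reps, H = [X], X
    for _ in range(rounds):
        H = np.concatenate([A1 @ H, A2 @ H], axis=1)   # neighbor info; ego embedding kept separate
        reps.append(H)
    return np.concatenate(reps, axis=1)                # combine intermediate representations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((20, 20)) < 0.15).astype(float); A = np.triu(A, 1); A = A + A.T
    X = rng.normal(size=(20, 8))
    print(heterophily_embed(X, A).shape)   # ego features plus round-wise neighbor embeddings
```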
A lot of theoretical and empirical evidence shows that the flatter local minima tend to improve generalization. Adversarial Weight Perturbation (AWP) is an emerging technique to efficiently and effectively find such minima. In AWP we minimize the loss w.r.t. a bounded worst-case perturbation of the model parameters thereby favoring local minima with a small loss in a neighborhood around them. The benefits of AWP, and more generally the connections between flatness and generalization, have been extensively studied for i.i.d. data such as images. In this paper, we extensively study this phenomenon for graph data. Along the way, we first derive a generalization bound for non-i.i.d. node classification tasks. Then we identify a vanishing-gradient issue with all existing formulations of AWP and we propose a new Weighted Truncated AWP (WT-AWP) to alleviate this issue. We show that regularizing graph neural networks with WT-AWP consistently improves both natural and robust generalization across many different graph learning tasks and models.
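For readers unfamiliar with AWP, one training step can be sketched as follows: perturb the parameters in the (norm-scaled) direction of the loss gradient, take the gradient of the loss at the perturbed weights, then restore the weights before the optimizer step. This is a generic AWP sketch; the weighted truncation proposed in the paper (WT-AWP) is not reproduced, and the scaling choices are illustrative.

```python
import torch

def awp_step(model, loss_fn, inputs, targets, optimizer, gamma=0.01):
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) gradient at the current weights, used as the perturbation direction
    loss = loss_fn(model(*inputs), targets)
    grads = torch.autograd.grad(loss, params)

    # 2) apply a bounded, norm-scaled perturbation to each parameter tensor
    deltas = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            delta = gamma * p.norm() * g / (g.norm() + 1e-12)
            p.add_(delta)
            deltas.append(delta)

    # 3) optimize the loss at the perturbed weights, then undo the perturbation
    optimizer.zero_grad()
    perturbed_loss = loss_fn(model(*inputs), targets)
    perturbed_loss.backward()
    with torch.no_grad():
        for p, delta in zip(params, deltas):
            p.sub_(delta)
    optimizer.step()
    return perturbed_loss.item()
```

Minimizing the loss under such worst-case weight perturbations favors parameters that sit in flat regions of the loss landscape, which is the flatness-generalization connection the abstract builds on.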
Graph neural networks (GNNs) have shown their strength in modeling graph-structured data. However, real-world graphs often contain structural noise and have limited labeled nodes. The performance of GNNs degrades significantly when trained on such graphs, which hinders the adoption of GNNs in many applications. Therefore, it is important to develop noise-resistant GNNs with limited labeled nodes. However, the work on this is rather limited. Hence, we study the novel problem of developing robust GNNs on noisy graphs with limited labeled nodes. Our analysis shows that both the noisy edges and the limited labeled nodes can harm the message passing mechanism of GNNs. To mitigate these issues, we propose a novel framework that adopts the noisy edges as supervision to learn a denoised and densified graph, which can alleviate or eliminate noisy edges and facilitate the message passing of GNNs to alleviate the issue of limited labeled nodes. The generated edges are further used to regularize the predictions of unlabeled nodes with label smoothness to better train the GNNs. Experimental results on real-world datasets demonstrate the robustness of the proposed framework on noisy graphs with limited labeled nodes.
Recently, graph neural networks (GNNs) have shown prominent performance in graph representation learning by leveraging knowledge from both the graph structure and node features. However, most of them have two major limitations. First, GNNs can learn higher-order structural information by stacking more layers, but cannot handle large depth due to the over-smoothing issue. Second, it is not easy to apply these methods on large graphs because of the expensive computational cost and high memory usage. In this paper, we present node-adaptive feature smoothing (NAFS), a simple non-parametric method that constructs node representations without parameter learning. NAFS first extracts the features of each node and those of its neighbors at different hops by feature smoothing, and then adaptively combines the smoothed features. Besides, an ensemble of the smoothed features extracted with different smoothing strategies can further enhance the constructed node representations. We conduct experiments on four benchmark datasets for two different application scenarios: node clustering and link prediction. Remarkably, NAFS with feature ensemble outperforms state-of-the-art GNNs on these tasks and mitigates the aforementioned two limitations of most learning-based GNN counterparts.
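A hedged sketch of node-adaptive feature smoothing: features smoothed over different numbers of hops are combined per node, here with softmax weights based on each hop's distance to a proxy for the fully smoothed (stationary) feature, so each node keeps the hops that are still informative for it. The exact weighting scheme and the ensemble over smoothing strategies used by NAFS are not reproduced.

```python
import numpy as np

def node_adaptive_smooth(X, A, K=8):
    deg = A.sum(axis=1) + 1.0
    A_hat = (A + np.eye(len(A))) / deg[:, None]                  # row-normalized, self-loops
    hops = [X]
    for _ in range(K):
        hops.append(A_hat @ hops[-1])                            # parameter-free smoothing
    Hs = np.stack(hops, axis=1)                                  # (n, K+1, d)
    stationary = Hs[:, -1:, :]                                   # proxy for the over-smoothed limit
    dist = np.linalg.norm(Hs - stationary, axis=-1)              # (n, K+1)
    w = np.exp(dist - dist.max(axis=1, keepdims=True))           # per-node hop weights
    w = w / w.sum(axis=1, keepdims=True)
    return (w[:, :, None] * Hs).sum(axis=1)                      # (n, d) adaptive combination

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((25, 25)) < 0.2).astype(float); A = np.triu(A, 1); A = A + A.T
    X = rng.normal(size=(25, 6))
    print(node_adaptive_smooth(X, A).shape)
```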
As a powerful tool for modeling complex relationships, hypergraphs have gained popularity in the graph learning community. However, commonly used frameworks in deep hypergraph learning focus on hypergraphs with edge-independent vertex weights (EIVWs), without considering hypergraphs with edge-dependent vertex weights (EDVWs), which have more modeling power. To compensate for this, we present General Hypergraph Spectral Convolution (GHSC), a general learning framework that can not only handle both EDVW and EIVW hypergraphs but, more importantly, theoretically and explicitly exploit existing powerful graph convolutional neural networks (GCNNs), thereby largely easing the design of hypergraph neural networks. In this framework, the graph Laplacian of a given undirected GCNN is replaced with a unified hypergraph Laplacian that incorporates vertex weight information from a random-walk perspective by equating our defined generalized hypergraphs with simple undirected graphs. Extensive experiments from various domains, including social network analysis, visual object classification, and protein learning, demonstrate the state-of-the-art performance of the proposed framework.