Graph neural networks (GNNs) have shown remarkable performance on homophilic graph data while being far less impressive when handling non-homophilic graph data, due to the inherent low-pass filtering property of GNNs. In general, since real-world graphs are often complex mixtures of diverse subgraph patterns, learning a universal spectral filter on the graph from a global perspective, as in most current works, may still struggle to adapt to the variation of local patterns. On the basis of a theoretical analysis of local patterns, we rethink existing spectral filtering methods and propose the \textbf{\underline{N}}ode-oriented spectral \textbf{\underline{F}}iltering for \textbf{\underline{G}}raph \textbf{\underline{N}}eural \textbf{\underline{N}}etwork (namely NFGNN). By estimating a node-oriented spectral filter for each node, NFGNN gains the capability of precise local node positioning via the generalized translation operator, thus discriminating the variations of local homophily patterns adaptively. Meanwhile, re-parameterization brings a good trade-off between global consistency and local sensitivity when learning the node-oriented spectral filters. Furthermore, we theoretically analyze the localization property of NFGNN, demonstrating that the signal after adaptive filtering remains positioned around the corresponding node. Extensive experimental results demonstrate that the proposed NFGNN achieves more favorable performance.
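As context for the abstract above, a minimal sketch in standard graph signal processing notation of what node-oriented filtering via a generalized translation operator can look like; the symbols $\hat{g}$, $u_\ell$, $\lambda_\ell$, $\hat{L}$, and the per-node coefficients $\theta_{i,k}$ follow common GSP conventions and are assumptions rather than the paper's exact parameterization:
\[
(T_i g)(n) \;=\; \sqrt{N}\sum_{\ell=0}^{N-1} \hat{g}(\lambda_\ell)\, u_\ell(i)\, u_\ell(n),
\qquad
\mathbf{z}_i \;=\; \Big(\sum_{k=0}^{K} \theta_{i,k}\, \hat{L}^{k} \mathbf{X}\Big)_{i,:}
\]
Here the translation $T_i$ localizes a spectral kernel $\hat{g}$ around node $i$, and giving each node its own polynomial coefficients $\theta_{i,k}$ lets the effective spectral response vary with the local homophily pattern around $i$.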
Graph representation learning aims to integrate node content with graph structure to learn node/graph representations. However, many existing graph learning methods have been found to work poorly on data with a high level of heterophily, i.e., where a large proportion of edges connect nodes with different class labels. Recent efforts to address this problem have concentrated on improving the message passing mechanism. However, it remains unclear whether heterophily truly harms the performance of graph neural networks (GNNs). The key is to unfold the relationship between a node and its immediate neighbors, e.g., whether they are heterophilic or homophilic. From this perspective, we here study the role that heterophily plays in revealing the relationships between connected nodes. In particular, we propose an end-to-end framework that both learns the type of each edge (i.e., heterophilic or homophilic) and leverages the edge-type information to improve the expressiveness of graph neural networks. We implement this framework in two different ways. Specifically, to avoid passing messages over heterophilic edges, we can optimize the graph structure by removing the heterophilic edges identified by an edge classifier. Alternatively, the information about the presence of heterophilic neighbors can be exploited for feature learning; accordingly, a hybrid message passing approach is designed to aggregate homophilic neighbors and diversify heterophilic neighbors based on the edge classification. Extensive experiments demonstrate that the proposed framework significantly improves the performance of GNNs on multiple datasets across the full range of homophily levels.
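To make the hybrid message passing variant concrete, a hedged Python sketch follows; the edge classifier, the signed combination of homophilic and heterophilic messages, and every name in it (`HybridMessagePassing`, `edge_clf`, the three linear maps) are illustrative assumptions rather than the paper's actual architecture:

```python
import torch
import torch.nn as nn

class HybridMessagePassing(nn.Module):
    """Sketch: classify each edge as homophilic/heterophilic, aggregate the
    likely-homophilic neighbors, and push away from the likely-heterophilic ones.
    The signed combination below is an illustrative choice, not the paper's exact rule."""

    def __init__(self, dim):
        super().__init__()
        self.edge_clf = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.w_self = nn.Linear(dim, dim)
        self.w_homo = nn.Linear(dim, dim)
        self.w_hetero = nn.Linear(dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                                   # edges as (source, target) node ids
        p_homo = torch.sigmoid(self.edge_clf(torch.cat([x[src], x[dst]], dim=-1)))  # P(edge is homophilic)
        homo_msg = torch.zeros_like(x)
        hetero_msg = torch.zeros_like(x)
        homo_msg.index_add_(0, dst, p_homo * x[src])            # soft aggregation of homophilic neighbors
        hetero_msg.index_add_(0, dst, (1 - p_homo) * x[src])    # soft aggregation of heterophilic neighbors
        # keep the ego features, add homophilic messages, subtract heterophilic ones ("diversify")
        return self.w_self(x) + self.w_homo(homo_msg) - self.w_hetero(hetero_msg)
```

The graph-restructuring variant described in the abstract would instead threshold `p_homo` and drop the edges classified as heterophilic before running a standard GNN.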
Graph neural networks (GNNs) have demonstrated impressive performance in a variety of applications. However, the working mechanism behind them remains mysterious. GNN models are designed to learn effective representations of graph-structured data, which intrinsically coincides with the principle of graph signal denoising (GSD). Algorithm unrolling, a "learning to optimize" technique, has attracted increasing attention due to its promise in building efficient and interpretable neural network architectures. In this paper, we introduce a class of unrolled networks built upon truncated optimization algorithms (e.g., gradient descent and proximal gradient descent) for GSD problems. They are shown to be tightly connected to many popular GNN models, in the sense that the forward propagation in those GNNs is in fact an unrolled network serving a specific GSD problem. Furthermore, the training process of a GNN model can be viewed as solving a bilevel optimization problem with a GSD problem at the lower level. Such a connection brings a fresh view of GNNs: we can try to understand their practical capabilities from their GSD counterparts, and the connection can also motivate the design of new GNN models. Based on the algorithm unrolling perspective, an expressive model named UGDGNN, i.e., unrolled gradient descent GNN, is further proposed, which inherits appealing theoretical properties. Extensive numerical simulations on seven benchmark datasets demonstrate that UGDGNN can achieve superior or competitive performance compared with state-of-the-art models.
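A minimal sketch of the graph-signal-denoising (GSD) view referred to above, in generic notation; the objective, step size $\eta$, and trade-off $\lambda$ are written in one common form and may differ from the paper's exact formulation:
\[
\min_{F}\; \|F - X\|_F^2 \;+\; \lambda\,\mathrm{tr}\big(F^\top \tilde{L} F\big),
\qquad
F^{(t+1)} \;=\; F^{(t)} - \eta\Big(2\big(F^{(t)}-X\big) + 2\lambda\,\tilde{L}\,F^{(t)}\Big).
\]
Starting from $F^{(0)} = X$ and choosing $2\eta\lambda = 1$, one truncated gradient step gives $F^{(1)} = (I - \tilde{L})X = \tilde{A}X$, i.e., the familiar GCN-style propagation, which is the sense in which forward propagation can be read as an unrolled denoiser.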
Designing spectral convolutional networks is a challenging problem in graph learning. ChebNet, one of the early attempts, approximates the spectral graph convolutions using Chebyshev polynomials. GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials while still outperforming it on real-world datasets. GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral graph convolutions. Such conclusions are counter-intuitive in the field of approximation theory, where it is established that the Chebyshev polynomial achieves the optimal convergence rate for approximating a function. In this paper, we revisit the problem of approximating the spectral graph convolutions with Chebyshev polynomials. We show that ChebNet's inferior performance is primarily due to illegal coefficients learnt by ChebNet when approximating analytic filter functions, which leads to over-fitting. We then propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon. We conducted an extensive experimental study to demonstrate that ChebNetII can learn arbitrary graph convolutions and achieve superior performance in both full- and semi-supervised node classification tasks. Most notably, we scale ChebNetII to the billion-scale graph ogbn-papers100M, showing that spectral-based GNNs can deliver superior performance at that scale as well. Our code is available at https://github.com/ivam-he/ChebNetII.
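For reference, a short Python sketch of the Chebyshev propagation that both ChebNet and ChebNetII build on; ChebNetII's reparameterization of the coefficients via Chebyshev interpolation is only indicated in the docstring, and the helper name is our own:

```python
import torch

def chebyshev_filter(X, L_hat, weights):
    """Apply the polynomial spectral filter sum_k w_k * T_k(L_hat) to X.

    L_hat is the rescaled Laplacian 2 L / lambda_max - I (an [N, N] tensor with
    spectrum in [-1, 1]), X is the [N, F] feature matrix, and `weights` is a list
    of K + 1 scalar coefficients (assumed K >= 1). In ChebNetII these coefficients
    are further constrained via Chebyshev interpolation at Chebyshev nodes; this
    sketch only shows the shared three-term recurrence.
    """
    Tx_prev, Tx_curr = X, L_hat @ X                 # T_0(L_hat) X = X,  T_1(L_hat) X = L_hat X
    out = weights[0] * Tx_prev + weights[1] * Tx_curr
    for w in weights[2:]:
        Tx_prev, Tx_curr = Tx_curr, 2 * (L_hat @ Tx_curr) - Tx_prev   # T_k = 2 L_hat T_{k-1} - T_{k-2}
        out = out + w * Tx_curr
    return out
```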
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs, as a result of their ability to exploit node features and topological information simultaneously. However, most GNNs implicitly assume that the labels of a node and its neighbors in the graph are the same or consistent, which does not hold in heterophilic graphs, where the labels of linked nodes are likely to differ. Hence, when the topology is non-informative for label prediction, ordinary GNNs may perform significantly worse than simply applying multi-layer perceptrons (MLPs) on each node. To tackle the above problem, we propose a new $p$-Laplacian based GNN model, termed $^p$GNN, whose message passing mechanism is derived from a discrete regularization framework and can be theoretically explained as an approximation of a polynomial graph filter defined on the spectral domain of $p$-Laplacians. Spectral analysis shows that the new message passing mechanism works simultaneously as a low-pass and a high-pass filter, thus making $^p$GNNs effective on both homophilic and heterophilic graphs. Empirical studies on real-world and synthetic datasets validate our findings and demonstrate that $^p$GNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks. Moreover, $^p$GNNs can adaptively learn aggregation weights and are robust to noisy edges.
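A hedged sketch of the discrete regularization objective behind $p$-Laplacian message passing; the constants, the placement of edge weights, and the degree normalization are written in one common form and may differ from the paper's exact formulation:
\[
\min_{F}\;\; \mu\sum_{i}\|f_i - x_i\|_2^2
\;+\; \frac{1}{p}\sum_{(i,j)\in E} w_{ij}\,
\Big\| \tfrac{f_i}{\sqrt{d_i}} - \tfrac{f_j}{\sqrt{d_j}} \Big\|_2^{\,p}
\]
Setting $p = 2$ recovers ordinary Laplacian smoothing (a purely low-pass prior), while smaller $p$ penalizes large differences across edges less severely, which is what makes the induced message passing useful under heterophily.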
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features. Many popular GNNs fail to generalize to this setting, and are even outperformed by models that ignore the graph structure (e.g., multilayer perceptrons). Motivated by this limitation, we identify a set of key designs (ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations) that boost learning from the graph structure under heterophily. We combine them into a graph neural network, H2GCN, which we use as the base method to empirically evaluate the effectiveness of the identified designs. Going beyond the traditional benchmarks with strong homophily, our empirical analysis shows that the identified designs increase the accuracy of GNNs by up to 40% and 27% over models without them on synthetic and real networks with heterophily, respectively, and yield competitive performance under homophily.
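As an illustration (not the authors' released code), a small Python sketch of the three designs under the assumption of dense, row-normalized 1-hop and 2-hop adjacency matrices without self-loops:

```python
import torch

def h2gcn_style_layer(x, A1, A2):
    """One round combining the three designs: the ego embedding is kept as a separate
    channel (ego/neighbor separation), A1 and A2 aggregate the 1-hop and 2-hop
    neighborhoods (higher-order neighborhoods), and the channels are concatenated."""
    return torch.cat([x, A1 @ x, A2 @ x], dim=-1)     # [ego | 1-hop agg | 2-hop agg]

def h2gcn_style_embed(x0, A1, A2, rounds=2):
    reps = [x0]
    for _ in range(rounds):
        reps.append(h2gcn_style_layer(reps[-1], A1, A2))
    return torch.cat(reps, dim=-1)                    # combination of intermediate representations
```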
Many representative graph neural networks, e.g., GPR-GNN and ChebNet, approximate graph convolutions with graph spectral filters. However, existing work either applies predefined filter weights or learns them without necessary constraints, which may lead to oversimplified or ill-posed filters. To overcome these issues, we propose BernNet, a novel graph neural network with theoretical support that provides a simple but effective scheme for designing and learning arbitrary graph spectral filters. In particular, for any filter over the normalized Laplacian spectrum of a graph, our BernNet estimates it by an order-$K$ Bernstein polynomial approximation and designs its spectral property by setting the coefficients of the Bernstein basis. Moreover, we can learn the coefficients (and the corresponding filter weights) based on the observed graph and its associated signals, and thus obtain a BernNet specialized for the data. Our experiments demonstrate that BernNet can learn arbitrary spectral filters, including complicated band-rejection and comb filters, and that it achieves superior performance in real-world graph modeling tasks. Code is available at https://github.com/ivam-he/bernnet.
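A direct (unoptimized) Python sketch of the order-$K$ Bernstein filter described above; the nested loop is for clarity only, and the function name is our own:

```python
import torch
from math import comb

def bernstein_filter(X, L, theta):
    """Apply the Bernstein polynomial filter
        sum_{k=0..K} theta_k * C(K, k) / 2^K * (2I - L)^(K - k) * L^k
    to X, where L is the normalized Laplacian (eigenvalues in [0, 2]) and theta
    holds K + 1 non-negative coefficients, each roughly the desired filter value
    at lambda ~ 2k / K.
    """
    K = len(theta) - 1
    I = torch.eye(L.shape[0], dtype=L.dtype, device=L.device)
    out = torch.zeros_like(X)
    for k in range(K + 1):
        term = X
        for _ in range(k):                 # apply L^k
            term = L @ term
        for _ in range(K - k):             # apply (2I - L)^(K - k)
            term = (2 * I - L) @ term
        out = out + theta[k] * comb(K, k) / (2 ** K) * term
    return out
```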
Message passing has evolved as an effective tool for designing graph neural networks (GNNs). However, most existing methods for message passing simply sum or average all the neighboring features to update node representations. They are restricted by two issues: (i) a lack of interpretability for identifying which node features are important to the predictions of GNNs, and (ii) feature over-mixing, which leads to the over-smoothing issue when capturing long-range dependencies and an inability to handle graphs under heterophily or low homophily. In this paper, we propose a Node-level Capsule Graph Neural Network (NCGNN) to address these problems with an improved message passing scheme. Specifically, NCGNN represents nodes as groups of node-level capsules, in which each capsule extracts distinctive features of its corresponding node. For each node-level capsule, a novel dynamic routing procedure is developed to adaptively select appropriate capsules for aggregation from a subgraph identified by the designed graph filter. NCGNN aggregates only the advantageous capsules and restrains irrelevant messages to avoid over-mixing the features of interacting nodes. Therefore, it can relieve the over-smoothing issue and learn effective node representations over graphs with homophily or heterophily. Furthermore, our proposed message passing scheme is inherently interpretable and exempt from complex post-hoc explanations, as the graph filter and the dynamic routing procedure identify the subset of node features that is most significant to the model prediction from the extracted subgraph. Extensive experiments on synthetic as well as real-world graphs demonstrate that NCGNN can well address the over-smoothing issue and produce better node representations for semi-supervised node classification. It outperforms the state of the art under both homophily and heterophily.
Graph neural networks (GNNs) have shown prominent performance in representation learning across various machine learning tasks. However, most existing GNNs that apply neighborhood aggregation usually perform poorly on heterophilic graphs, where adjacent nodes belong to different classes. In this paper, we show that in typical heterophilic graphs the edges may be directed, and that whether to treat the edges as directed or to simply make them undirected greatly affects the performance of GNN models. Furthermore, due to the limitation of heterophily, messages from similar nodes beyond the local neighborhood are very beneficial to a node. These observations motivate us to develop a model that adaptively learns the directionality of the graph and exploits the underlying long-distance correlations between nodes. We first generalize the graph Laplacian to digraphs based on the proposed Feature-Aware PageRank algorithm, which simultaneously considers the graph directionality and the long-distance feature similarity between nodes. The digraph Laplacian then defines a graph propagation matrix, leading to a model named {\em DiglacianGCN}. Based on this, we further exploit node proximity measured by commute times between nodes, so as to preserve the long-distance correlations of nodes at the topology level. Extensive experiments on ten datasets with different levels of homophily demonstrate the effectiveness of our method over existing solutions in the task of node classification.
This paper aims to provide a novel design of a spectral graph neural network with multiscale framelet convolution. In the spectral paradigm, spectral GNNs improve graph learning task performance by proposing various spectral filters in the spectral domain to capture both global and local graph structure information. Although existing spectral approaches show superior performance on some graphs, they suffer from a lack of flexibility and are fragile when the graph information is incomplete or perturbed. Our new framelet convolution incorporates filtering functions directly designed in the spectral domain to overcome these limitations. The proposed convolution shows great flexibility in cutting off spectral information and effectively mitigates the negative effect of noisy graph signals. Besides, to exploit the heterogeneity in real-world graph data, a heterogeneous graph neural network with our new framelet convolution provides a solution for embedding the intrinsic topological information of meta-paths with multi-level graph analysis. Extensive experiments have been conducted on real-world heterogeneous graphs and on homogeneous graphs under settings with noisy node features, with superior performance results.
Graph convolutional networks have become indispensable for deep learning on graph-structured data. Most existing graph convolutional networks share two big shortcomings. First, they are essentially low-pass filters, and thus ignore the potentially useful middle and high frequency bands of graph signals. Second, the bandwidth of existing graph convolutional filters is fixed: the parameters of a graph convolutional filter only transform the graph input without changing the curvature of the graph convolutional filter function. In reality, we are uncertain whether we should retain or cut off the frequency at a certain point unless we have expert domain knowledge. In this paper, we propose Automatic Graph Convolutional Networks (AutoGCN) to capture the full spectrum of graph signals and automatically update the bandwidth of graph convolutional filters. While it is based on graph spectral theory, our AutoGCN is also localized in space and has a spatial form. Experimental results show that AutoGCN achieves significant improvement over baseline methods that only act as low-pass filters.
The core operation of current Graph Neural Networks (GNNs) is the aggregation enabled by the graph Laplacian or message passing, which filters the neighborhood information of nodes. Though effective for various tasks, in this paper we show that aggregation is potentially a problematic factor underlying all GNN models for learning on certain datasets, as it forces the node representations to be similar, making the nodes gradually lose their identity and become indistinguishable. Hence, we augment the aggregation operations with their dual, i.e., diversification operators that make the nodes more distinct and preserve their identity. Such augmentation replaces the aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations. In practice, the proposed two-channel filters can be easily patched onto existing GNN methods with diverse training strategies, including spectral and spatial (message passing) methods. In the experiments, we observe the desired characteristics of the models and a significant performance boost over the baselines on 9 node classification tasks.
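A hedged Python sketch of one way to realize the two-channel idea: a low-pass (aggregation) channel paired with its high-pass (diversification) dual, combined through separate weight matrices; the concrete combination rule used in the paper may differ:

```python
import torch

def two_channel_filter(X, A_norm, W_low, W_high):
    """A_norm is the symmetrically normalized adjacency (with self-loops). The low-pass
    channel smooths node features toward their neighbors (aggregation); the high-pass
    channel keeps the residual X - A_norm X, sharpening differences between a node and
    its neighborhood (diversification), so node identities are preserved."""
    low = A_norm @ X                  # aggregation channel
    high = X - A_norm @ X             # diversification channel (Laplacian-like residual)
    return low @ W_low + high @ W_high
```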
Spectral-based graph neural networks (SGNNs) have been attracting increasing attention in graph representation learning. However, existing SGNNs are limited to implementing graph filters with rigid transforms (e.g., the graph Fourier transform or predefined graph wavelet transforms) and cannot adapt to the signals residing on the graphs and the tasks at hand. In this paper, we propose a novel class of graph neural networks that realizes graph filters with adaptive graph wavelets. Specifically, the adaptive graph wavelets are learned with neural network-parameterized lifting structures, where structure-aware lifting operations (i.e., prediction and update operations) are developed to jointly consider the graph structure and node features. We propose to lift based on diffusion wavelets to alleviate the structural information loss induced by partitioning non-bipartite graphs. By design, the locality and sparsity of the resulting wavelet transform, as well as the scalability of the lifting structure, are guaranteed. We further derive a soft-thresholding filtering operation by learning sparse graph representations in terms of the learned wavelets, yielding a localized, efficient, and scalable wavelet-based graph filter. To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the network to reorder the nodes according to their local topological information. We evaluate the proposed networks on both node-level and graph-level representation learning tasks on benchmark citation and bioinformatics graph datasets. Extensive experiments demonstrate the superiority of the proposed networks over existing SGNNs in terms of accuracy, efficiency, and scalability.
Spectral graph neural networks are a kind of graph neural network (GNN) based on graph signal filters. Some models able to learn arbitrary spectral filters have emerged recently. However, few works analyze the expressive power of spectral GNNs. This paper theoretically studies the expressive power of spectral GNNs. We first prove that even spectral GNNs without nonlinearity can produce arbitrary graph signals and give two conditions for reaching universality. They are: 1) no multiple eigenvalues of the graph Laplacian and 2) no missing frequency components in the node features. We also establish a connection between the expressive power of spectral GNNs and graph isomorphism (GI) testing, the latter of which is often used to characterize the expressive power of spatial GNNs. Moreover, we study the difference in empirical performance among different spectral GNNs with the same expressive power from an optimization perspective, and motivate the use of an orthogonal basis whose weight function corresponds to the graph signal density in the spectrum. Inspired by the analysis, we propose JacobiConv, which uses the Jacobi basis due to its orthogonality and flexibility to adapt to a wide range of weight functions. JacobiConv deserts nonlinearity while outperforming all baselines on both synthetic and real-world datasets.
Graph neural networks (GNNs) have shown great success in the representation learning of graph-structured data. Layer-wise graph convolution in GNNs is shown to be powerful at capturing graph topology. During this process, GNNs are usually guided by pre-defined kernels such as the Laplacian matrix, the adjacency matrix, or their variants. However, the adoption of a pre-defined kernel may restrain the generality to different graphs: a mismatch between the graph and the kernel would entail sub-optimal performance. For example, a GNN that focuses on low-frequency information may not achieve satisfactory performance when high-frequency information is significant for the graph, and vice versa. To solve this problem, in this paper we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN), which learns to adapt to the optimal graph kernel in a unified manner at the first attempt. In the proposed AKGNN, we first design a data-driven graph kernel learning mechanism, which adaptively modulates the balance between all-pass and low-pass filters by modifying the maximal eigenvalue of the graph Laplacian. Through this process, AKGNN learns the optimal threshold between high- and low-frequency signals to relieve the generality problem. Later, we further reduce the number of parameters by a parameterization trick and enhance the expressive power by a global readout function. Extensive experiments are conducted on acknowledged benchmark datasets, and promising results demonstrate the outstanding performance of our proposed AKGNN by comparison with state-of-the-art GNNs. The source code is publicly available at: https://github.com/jumxglhf/akgnn.
Geometric deep learning has made great strides towards generalizing the design of structure-aware neural networks from traditional domains to non-Euclidean ones, giving rise to graph neural networks (GNNs) that can be applied to graph-structured data arising in, e.g., social networks, biochemistry, and material science. Graph convolutional networks (GCNs) in particular, inspired by their Euclidean counterparts, have been successful in processing graph data by extracting structure-aware features. However, current GNN models are often constrained by various phenomena that limit their expressive power and their ability to generalize to more complex graph datasets. Most models essentially rely on low-pass filtering of graph signals via local averaging operations, leading to over-smoothing. Moreover, to avoid severe over-smoothing, most popular GCN-style networks tend to be shallow, with narrow receptive fields, leading to under-reaching. Here, we propose a hybrid GNN framework that combines traditional GCN filters with band-pass filters defined via geometric scattering. We further introduce an attention framework that allows the model to locally attend to the combined information from the different filters at the node level. Our theoretical results establish the complementary benefits of the scattering filters for leveraging structural information from the graph, while our experiments show the benefits of our method on various learning tasks.
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. The design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which have emerged as the de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Graph convolution is the core of most Graph Neural Networks (GNNs) and usually approximated by message passing between direct (one-hop) neighbors. In this work, we remove the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC). GDC leverages generalized graph diffusion, examples of which are the heat kernel and personalized PageRank. It alleviates the problem of noisy and often arbitrarily defined edges in real graphs. We show that GDC is closely related to spectral-based models and thus combines the strengths of both spatial (message passing) and spectral methods. We demonstrate that replacing message passing with graph diffusion convolution consistently leads to significant performance improvements across a wide range of models on both supervised and unsupervised tasks and a variety of datasets. Furthermore, GDC is not limited to GNNs but can trivially be combined with any graph-based model or algorithm (e.g. spectral clustering) without requiring any changes to the latter or affecting its computational complexity. Our implementation is available online.
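For concreteness, a dense Python sketch of the personalized-PageRank instance of generalized graph diffusion, with truncation and threshold sparsification; the released implementation works with sparse matrices and handles normalization more carefully:

```python
import torch

def ppr_diffusion(A, alpha=0.15, k_max=32, eps=1e-4):
    """Compute S = sum_{k=0..k_max} alpha * (1 - alpha)^k * T^k with T the
    row-normalized adjacency, then sparsify by zeroing entries below eps."""
    T = A / A.sum(dim=1, keepdim=True)            # row-stochastic transition matrix
    S = torch.zeros_like(T)
    T_k = torch.eye(A.shape[0], dtype=A.dtype)    # T^0 = I
    for k in range(k_max + 1):
        S = S + alpha * (1 - alpha) ** k * T_k
        T_k = T_k @ T
    return torch.where(S < eps, torch.zeros_like(S), S)
```

The sparsified S then replaces the original adjacency in whatever GNN or graph algorithm follows.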
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
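A minimal Python sketch of the resulting linear model: K rounds of the fixed low-pass filter, which can be precomputed once, followed by a single linear classifier:

```python
import torch

def sgc_features(X, A_norm, K=2):
    """Precompute S^K X, where S = D^{-1/2} (A + I) D^{-1/2} is the normalized
    adjacency with self-loops. The classifier is then just a linear layer
    (e.g. torch.nn.Linear) trained on the returned features."""
    for _ in range(K):
        X = A_norm @ X                 # repeated fixed low-pass smoothing
    return X
```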