Graph neural networks (GNNs) have emerged as compelling models for learning and inference over graph-structured data, but little work has been done to understand the fundamental limitations of GNNs with respect to scaling to larger graphs and generalizing to out-of-distribution inputs. In this paper, we use a random graph generator that allows us to systematically study how graph size and structural properties affect the predictive performance of GNNs. We provide concrete evidence that, among the many graph properties, the mean and modality of the node degree distribution are the key features that determine whether a GNN can generalize to unseen graphs. Accordingly, we propose Flexible GNNs (Flex-GNNs), which use multiple node update functions and inner-loop optimization as a generalization of the single canonical nonlinear transformation applied to aggregated inputs, allowing the network to adapt flexibly to new graphs. The Flex-GNN framework improves generalization beyond the training setting on several inference tasks.
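A minimal sketch of the kind of analysis described: sample graphs from a parameterized random generator and record degree-distribution statistics across graph sizes (the Erdős–Rényi generator and networkx are our illustrative choices):

```python
import networkx as nx
import numpy as np

def degree_stats(n, p, trials=100):
    """Sample Erdos-Renyi graphs G(n, p) and summarize degree distributions."""
    means, modes = [], []
    for seed in range(trials):
        g = nx.gnp_random_graph(n, p, seed=seed)
        degrees = np.array([d for _, d in g.degree()])
        means.append(degrees.mean())
        modes.append(np.bincount(degrees).argmax())  # modal degree
    return np.mean(means), np.mean(modes)

# Sweep graph size at fixed expected degree to probe size generalization.
for n in (50, 100, 200):
    print(n, degree_stats(n, p=10 / n))
```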
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture for learning vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically, showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the $1$-dimensional Weisfeiler-Leman graph isomorphism heuristic ($1$-WL). We show that GNNs have the same expressive power as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based on this, we propose a generalization of GNNs, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account. These higher-order structures play an important role in the characterization of social networks and molecular graphs. Our experimental evaluation confirms our theoretical findings as well as confirms that higher-order information is useful in the tasks of graph classification and regression.
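For concreteness, a minimal sketch of the $1$-WL color refinement that the paper compares GNNs against (Python's `hash` stands in for the usual injective relabeling):

```python
from collections import Counter

def wl_1(adj, labels, iters=3):
    """One-dimensional Weisfeiler-Leman color refinement.

    adj:    dict mapping each node to a list of its neighbors
    labels: dict mapping each node to an initial label (color)
    Returns a histogram of final colors; two graphs whose histograms
    differ are certainly non-isomorphic.
    """
    colors = dict(labels)
    for _ in range(iters):
        # A node's new color hashes its own color together with the
        # sorted multiset of its neighbors' colors.
        new_colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
        if new_colors == colors:  # refinement has stabilized
            break
        colors = new_colors
    return Counter(colors.values())
```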
In recent years, algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data. Here, we give a comprehensive overview of the algorithm's use in a machine learning setting, focusing on the supervised regime. We discuss the theoretical background, show how it can be used for supervised graph and node representation learning, discuss recent extensions, and outline the algorithm's connection to (permutation-)equivariant neural architectures. Moreover, we give an overview of current applications and future directions to stimulate further research.
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial tasks, either directly as solvers or by enhancing exact solvers. The inductive bias of GNNs effectively encodes combinatorial and relational input due to their invariance to permutations and their awareness of input sparsity. This paper presents a conceptual review of recent key advancements in this emerging field, aimed at optimization and machine learning researchers.
Most graph neural network models rely on a particular message passing paradigm, where the idea is to iteratively propagate node representations of a graph to each node in the direct neighborhood. While very prominent, this paradigm leads to information propagation bottlenecks, as information is repeatedly compressed at intermediary node representations, which causes loss of information, making it practically impossible to gather meaningful signals from distant nodes. To address this issue, we propose shortest path message passing neural networks, where the node representations of a graph are propagated to each node in the shortest path neighborhoods. In this setting, nodes can communicate directly with each other even if they are not neighbors, breaking the information bottleneck and hence leading to more adequately learned representations. Theoretically, our framework generalizes message passing neural networks, resulting in provably more expressive models, and we show that some recent state-of-the-art models are special instances of this framework. Empirically, we verify the capacity of a basic model of this framework on dedicated synthetic experiments, and on real-world graph classification and regression benchmarks, and obtain state-of-the-art results.
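A minimal sketch of the idea: BFS partitions nodes by shortest-path distance, and each node aggregates one message per distance level (the names and the sum aggregation are our assumptions, not the paper's exact model):

```python
from collections import deque

def sp_shells(adj, v, k):
    """Group nodes by shortest-path distance from v (BFS), up to distance k."""
    dist, queue = {v: 0}, deque([v])
    shells = [[] for _ in range(k)]   # shells[d-1] holds nodes at distance d
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                shells[dist[w] - 1].append(w)
                queue.append(w)
    return shells

def sp_layer(adj, h, k, combine):
    """One layer: every node receives one aggregated message per distance
    level, so distant nodes communicate directly rather than through
    repeatedly compressed intermediaries."""
    return {v: combine(h[v], [sum(h[u] for u in shell) for shell in sp_shells(adj, v, k)])
            for v in adj}
```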
Graph kernels have attracted a lot of attention during the last decade, and have evolved into a rapidly developing branch of learning on structured data. During the past 20 years, the considerable research activity that occurred in the field resulted in the development of dozens of graph kernels, each focusing on specific structural properties of graphs. Graph kernels have been applied successfully in a wide range of domains, ranging from social networks to bioinformatics. The goal of this survey is to provide a unifying view of the literature on graph kernels. In particular, we present an overview of a wide range of graph kernels. Moreover, we perform an experimental evaluation of several of those kernels on publicly available datasets, and provide a comparative study. Finally, we discuss key applications of graph kernels, and outline some challenges that remain to be addressed.
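As a concrete instance of the kind of kernel the survey covers, a minimal vertex-label histogram kernel, one of the simplest members of the family (the function name is ours):

```python
from collections import Counter

def vertex_histogram_kernel(labels_g, labels_h):
    """k(G, H) = <phi(G), phi(H)>, where phi(G) counts vertex labels.

    labels_g, labels_h: iterables of node labels of the two graphs.
    """
    cg, ch = Counter(labels_g), Counter(labels_h)
    return sum(cg[l] * ch[l] for l in cg.keys() & ch.keys())

# Example: two small molecular-style graphs described by atom labels.
print(vertex_histogram_kernel(["C", "C", "O", "N"], ["C", "O", "O"]))  # 1*2 + 1*2 = 4
```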
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
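The maximally expressive architecture the paper develops is the Graph Isomorphism Network (GIN), whose update is h_v ← MLP((1 + ε)·h_v + Σ_{u∈N(v)} h_u); a minimal sketch (the dense-adjacency implementation is our simplification):

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """Sketch of a Graph Isomorphism Network layer. Sum aggregation over
    neighbors is injective on multisets, which keeps the layer as
    discriminative as the Weisfeiler-Lehman test."""

    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        # h: (num_nodes, dim) node features; adj: (num_nodes, num_nodes) 0/1 adjacency.
        neighbor_sum = adj @ h
        return self.mlp((1 + self.eps) * h + neighbor_sum)
```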
Graph representation learning aims to integrate node content with graph structure to learn node/graph representations. However, many existing graph learning methods have been found not to work well on data with high heterophily, i.e., a large proportion of edges connecting nodes of different class labels. Recent efforts to address this problem have focused on improving the message passing mechanism. However, it remains unclear whether heterophily truly hurts the performance of graph neural networks (GNNs). The key is to unfold the relationship between a node and its direct neighbors, e.g., are they heterophilous or homophilous? From this perspective, we here study the role heterophily plays in disclosing the relationships between connected nodes before/after message passing. In particular, we propose an end-to-end framework that both learns the type of each edge (i.e., heterophilous/homophilous) and leverages the edge-type information to improve the expressiveness of graph neural networks. We implement this framework in two different ways. Specifically, to avoid passing messages through heterophilous edges, we can optimize the graph structure by removing the heterophilous edges identified by an edge classifier. Alternatively, the information about the presence of heterophilous neighbors can be exploited for feature learning, so a hybrid message passing approach is designed to aggregate homophilous neighbors and diversify heterophilous neighbors based on the edge classification. Extensive experiments demonstrate the remarkable performance improvements of GNNs gained by the proposed framework on multiple datasets across the full spectrum of homophily levels.
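A minimal sketch of one plausible form of the hybrid aggregation described above (splitting into a smoothing term for homophilous neighbors and a contrast term for heterophilous ones is our assumption, not the paper's exact design):

```python
import torch
import torch.nn as nn

class HybridMessagePassing(nn.Module):
    """Aggregate homophilous neighbors by averaging their features, and
    heterophilous neighbors by averaging the *difference* to the center
    node, so dissimilar neighbors diversify rather than blur h_v."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(3 * dim, dim)

    def forward(self, h, adj, edge_is_homo):
        # h: (N, d); adj: (N, N) 0/1; edge_is_homo: (N, N) values in [0, 1],
        # e.g., probabilities produced by an edge classifier.
        homo = adj * edge_is_homo
        hetero = adj * (1 - edge_is_homo)
        deg_ho = homo.sum(1, keepdim=True).clamp(min=1)
        deg_he = hetero.sum(1, keepdim=True).clamp(min=1)
        m_ho = homo @ h / deg_ho                   # smooth over similar neighbors
        m_he = (hetero @ h - deg_he * h) / deg_he  # contrast with dissimilar ones
        return self.lin(torch.cat([h, m_ho, m_he], dim=-1))
```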
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through the wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset AQSOL, similar to the popular ZINC, but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
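For illustration, a minimal sketch of Laplacian-eigenvector positional encoding, a common choice of graph PE (whether it matches the benchmark's exact variant is an assumption on our part):

```python
import numpy as np

def laplacian_pe(adj, k):
    """Use the k eigenvectors of the symmetric normalized Laplacian with
    the smallest nonzero eigenvalues as positional features, one row per node."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(lap)   # ascending eigenvalues
    return eigvecs[:, 1:k + 1]               # skip the trivial constant eigenvector
```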
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair evaluation experimental protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
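A minimal sketch of the Jumping Knowledge idea examined in the thesis: the readout "jumps" to all intermediate layer outputs rather than only the last (the concatenation aggregator is one of several published variants; class and parameter names are ours):

```python
import torch
import torch.nn as nn

class JumpingKnowledgeGNN(nn.Module):
    """Run several message-passing layers and let the readout see every
    intermediate node representation instead of only the final one."""

    def __init__(self, layers, dim, num_classes):
        super().__init__()
        self.layers = nn.ModuleList(layers)   # any GNN layers of width `dim`
        self.readout = nn.Linear(len(layers) * dim, num_classes)

    def forward(self, h, adj):
        outs = []
        for layer in self.layers:
            h = torch.relu(layer(h, adj))
            outs.append(h)
        h_jk = torch.cat(outs, dim=-1)        # per-node concat across layers
        return self.readout(h_jk.mean(dim=0)) # mean-pool nodes, classify graph
```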
Over the past few years, graph drawing techniques have been developed with the goal of generating aesthetically pleasing node-link layouts. Recently, the use of differentiable loss functions has paved the road to the massive usage of gradient descent and related optimization algorithms. In this paper, we propose a novel framework for the development of Graph Neural Drawers (GNDs), machines that rely on neural computation for constructing efficient and complex drawings of graphs. GNDs are graph neural networks (GNNs) whose learning process can be driven by any provided loss function, such as those commonly used in graph drawing. Moreover, we prove that this mechanism can be guided by loss functions computed by means of feed-forward neural networks, on the basis of supervision hints that express beauty properties, such as the minimization of crossing edges. In this context, we show that GNNs can be nicely enriched by positional features to deal with unlabeled vertices. We provide a proof of concept by constructing a loss function for the edge-crossing case, and provide quantitative and qualitative comparisons among different GNN models working under the proposed framework.
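To make "any provided loss function can drive the layout" concrete, a minimal sketch that optimizes node positions by gradient descent on the classical stress loss (stress is our stand-in example of a common graph-drawing loss; the paper's own proof of concept targets edge crossings):

```python
import torch

def stress_layout(dist, steps=500, lr=0.05):
    """Optimize 2D node positions so Euclidean distances match the given
    graph-theoretic distances: sum_{i<j} ((||x_i - x_j|| - d_ij) / d_ij)^2.

    dist: (n, n) tensor of pairwise shortest-path distances.
    """
    n = dist.shape[0]
    pos = torch.randn(n, 2, requires_grad=True)
    opt = torch.optim.Adam([pos], lr=lr)
    i, j = torch.triu_indices(n, n, offset=1)   # all node pairs i < j
    for _ in range(steps):
        opt.zero_grad()
        d = (pos[i] - pos[j]).norm(dim=1)
        loss = (((d - dist[i, j]) / dist[i, j]) ** 2).sum()
        loss.backward()
        opt.step()
    return pos.detach()
```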
The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in the Euclidean space, extending the convolution operator to work on graphs proves more challenging, due to their irregular structure. In this paper, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows us to plug in any type and number of graph kernels and has the added benefit of providing some interpretability in terms of the structural masks that are learned during the training process, similarly to what happens for convolutional masks in traditional convolutional neural networks. We perform an extensive ablation study to investigate the impact of the model hyper-parameters, and we show that our model achieves competitive performance on standard graph classification datasets.
An important challenge in robotics is understanding the interaction between robots and deformable terrains composed of granular material. Granular flows and their interactions with rigid bodies still pose several open questions. A promising direction for accurate, yet computationally efficient, modeling is the use of continuum methods. In addition, a new direction for real-time physics modeling is the use of deep learning. This research advances machine-learning methods for modeling rigid-body-driven granular flows, for applications to terrestrial industrial machines as well as space robots (where the effect of gravity is an important factor). In particular, this research considers the development of a subspace machine-learning simulation approach. To generate training datasets, we utilize our high-fidelity continuum method, the material point method (MPM). Principal component analysis (PCA) is used to reduce the dimensionality of the data. We show that the first few principal components of our high-dimensional data retain almost the entire variance of the data. A graph network simulator (GNS) is trained to learn the underlying subspace dynamics. The learned GNS is then able to predict granular positions and interaction forces with good accuracy. More importantly, PCA significantly enhances the time and memory efficiency of the GNS in both training and rollout. This enables the GNS to be trained using a single desktop GPU with moderate VRAM. It also makes the GNS real-time on large-scale 3D physics configurations (700x faster than our continuum method).
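A minimal sketch of the subspace step: fit PCA on stacked state snapshots and map between full and reduced coordinates (dimensions and function names are illustrative):

```python
import numpy as np

def fit_pca(snapshots, k):
    """snapshots: (num_steps, num_dofs) matrix of simulation states.
    Returns the mean state and the top-k principal directions."""
    mean = snapshots.mean(axis=0)
    # Right singular vectors of the centered data are the principal components.
    _, _, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, vt[:k]

def to_subspace(x, mean, components):
    return (x - mean) @ components.T   # (num_dofs,) -> (k,)

def from_subspace(z, mean, components):
    return z @ components + mean       # (k,) -> (num_dofs,)
```

The learned simulator then steps the k-dimensional coordinates z instead of the full particle state, which is where the reported time and memory savings come from.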
Most graph neural networks (GNNs) are unable to distinguish certain graphs, or certain nodes within a graph. This makes it impossible to solve certain classification tasks. However, adding additional node features to these models can resolve this problem. We introduce several such augmentations, including (i) positional node embeddings, (ii) canonical node IDs, and (iii) random features. These extensions are motivated by theoretical results and corroborated by extensive testing on synthetic subgraph detection tasks. We find that positional embeddings substantially outperform the other extensions on these tasks. Moreover, positional embeddings have better sample efficiency, perform well on different graph distributions, and even outperform ground-truth node positions. Finally, we show that the different augmentations perform competitively on established GNN benchmarks, and advise when to use them.
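Of the three augmentations, (iii) is the simplest to sketch: append a freshly sampled random vector to every node so that structurally identical nodes become distinguishable (the Gaussian choice and the extra dimensionality are our assumptions):

```python
import torch

def add_random_features(x, dim=8):
    """x: (num_nodes, d) node features. Returns (num_nodes, d + dim) with
    i.i.d. Gaussian features appended, resampled on every call so the
    model cannot memorize them and can only use them as identifiers."""
    r = torch.randn(x.size(0), dim, device=x.device)
    return torch.cat([x, r], dim=-1)
```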
Mapping the connectome of the human brain using structural or functional connectivity has become one of the most pervasive paradigms for neuroimaging analysis. Recently, Graph Neural Networks (GNNs) motivated from geometric deep learning have attracted broad interest due to their established power for modeling complex networked data. Despite their superior performance in many fields, there has not yet been a systematic study of how to design effective GNNs for brain network analysis. To bridge this gap, we present BrainGB, a benchmark for brain network analysis with GNNs. BrainGB standardizes the process by (1) summarizing brain network construction pipelines for both functional and structural neuroimaging modalities and (2) modularizing the implementation of GNN designs. We conduct extensive experiments on datasets across cohorts and modalities and recommend a set of general recipes for effective GNN designs on brain networks. To support open and reproducible research on GNN-based brain network analysis, we host the BrainGB website at https://braingb.us with models, tutorials, examples, as well as an out-of-the-box Python package. We hope that this work will provide useful empirical evidence and offer insights for future research in this novel and promising direction.
Recent work has shown that graph neural networks (GNNs) can learn policies for locomotion control that are as good as those learned by a typical multi-layer perceptron (MLP), with superior transfer and multi-task performance (Wang et al., 2018; Huang et al., 2020). So far, however, the performance of GNNs has degraded rapidly as the number of sensors and actuators grows, and results have thus been limited to training on small agents. A key motivation for using GNNs in the supervised-learning setting is their applicability to large graphs, but this benefit has not yet been realized for locomotion control. We identify the weakness in a common GNN architecture that causes this poor scaling: overfitting in the MLPs within the network used for encoding, decoding, and propagating messages. To combat this, we introduce Snowflake, a GNN training method for high-dimensional continuous control that freezes the parameters in the affected parts of the network. Snowflake significantly boosts the performance of GNNs for locomotion control on large agents, now matching the performance of MLPs while retaining superior transfer performance.
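A minimal sketch of the freezing mechanism, assuming a PyTorch policy network (which submodules to freeze is the method's empirical finding; the `policy.encoder`/`policy.decoder` names below are hypothetical):

```python
import torch.nn as nn

def freeze(module: nn.Module):
    """Exclude a submodule's parameters from gradient updates."""
    for p in module.parameters():
        p.requires_grad = False

# Example usage: freeze the overfitting-prone encoder/decoder MLPs of a
# GNN policy while the rest of the network keeps training.
#   freeze(policy.encoder)
#   freeze(policy.decoder)
#   optimizer = torch.optim.Adam(
#       [p for p in policy.parameters() if p.requires_grad], lr=3e-4)
```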
Recent works on transformer-based graph models have demonstrated the inadequacy of the vanilla transformer for graph representation learning. To understand this inadequacy, one needs to investigate whether spectral analysis of the transformer reveals insights into its expressive power. Related studies have established that spectral analysis of graph neural networks (GNNs) provides an additional perspective on their expressiveness. In this work, we systematically study and establish the link between the spatial and spectral domains in the realm of transformers. We further provide a theoretical analysis and prove that the spatial attention mechanism in transformers cannot effectively capture the desired frequency response, thus inherently limiting their expressiveness in spectral space. Therefore, we propose FeTA, a framework that performs attention over the full graph spectrum (i.e., the actual frequency components of the graph), analogous to attention in the spatial space. Empirical results show that FeTA provides homogeneous performance gains over the vanilla transformer across all tasks on standard benchmarks, and can easily be extended to GNN-based models with low-pass characteristics (e.g., GAT).
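A minimal sketch of what attention over the full graph spectrum could look like: project node signals onto the Laplacian eigenbasis, reweight the frequency components, and project back (the learned softmax reweighting is our illustrative assumption, not FeTA's exact formulation):

```python
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """Learn a per-frequency weight over the graph Laplacian eigenbasis."""

    def __init__(self, num_nodes):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(num_nodes))

    def forward(self, x, lap):
        # lap: (N, N) graph Laplacian; x: (N, d) node signal.
        _, u = torch.linalg.eigh(lap)              # full graph spectrum basis
        x_hat = u.T @ x                            # to the frequency domain
        attn = torch.softmax(self.scores, dim=0)   # weights over frequencies
        return u @ (attn.unsqueeze(-1) * x_hat)    # filtered, back to nodes
```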
Graph neural networks (GNNs), a generalization of deep neural networks to graph data, have been widely used in various domains, ranging from drug discovery to recommender systems. However, GNNs in these applications are limited when there are few available samples. Meta-learning has been an important framework for addressing the lack of samples in machine learning, and in recent years researchers have started to apply meta-learning to GNNs. In this work, we provide a comprehensive survey of the different meta-learning approaches involving GNNs on various graph problems, showing the power of using these two approaches together. We categorize the literature based on proposed architectures, shared representations, and applications. Finally, we discuss several exciting future research directions and open problems.
Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs, a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5-10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.
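A minimal sketch of the pooling step as described, with the two GNN submodules left abstract (their exact architectures follow the paper's setup, which the abstract does not spell out):

```python
import torch
import torch.nn as nn

class DiffPool(nn.Module):
    """One differentiable pooling level: a GNN predicts a soft assignment
    of the N input nodes to C clusters, which coarsens both features and
    adjacency for the next level:
        S  = softmax(GNN_assign(A, X))   # (N, C) soft cluster assignment
        X' = S^T * GNN_embed(A, X)       # (C, d) coarsened features
        A' = S^T * A * S                 # (C, C) coarsened adjacency
    """

    def __init__(self, gnn_embed, gnn_assign):
        super().__init__()
        # gnn_embed outputs node embeddings; gnn_assign outputs C logits per node.
        self.gnn_embed, self.gnn_assign = gnn_embed, gnn_assign

    def forward(self, x, adj):
        z = self.gnn_embed(x, adj)
        s = torch.softmax(self.gnn_assign(x, adj), dim=-1)
        return s.T @ z, s.T @ adj @ s   # coarsened features, coarsened adjacency
```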