The prevailing graph neural network models have achieved significant progress in graph representation learning. However, in this paper we uncover an ever-overlooked phenomenon: a pre-trained graph representation learning model tested with full graphs underperforms the same model tested with well-pruned subgraphs. This observation reveals that there exist confounders in graphs, which may interfere with the model's learning of semantic information, and whose influence current graph representation learning methods have not eliminated. To tackle this issue, we propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects. RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders and thereby capture discriminative information that is causally related to downstream predictions. We offer theorems and proofs to guarantee the theoretical effectiveness of the proposed approach. Empirically, we conduct extensive experiments on a synthetic dataset and multiple benchmark datasets. The results demonstrate that, compared with state-of-the-art methods, RCGRL achieves better prediction performance and generalization ability.
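To give a flavor of training under an unconditional moment restriction, here is a minimal PyTorch sketch: a learned instrument feature map is penalized so that its product with the prediction residual has zero mean. Everything here (the toy encoder, `iv_net`, the penalty weight, the random data) is a hypothetical illustration, not RCGRL's actual algorithm.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy graph encoder: one message-passing step, then mean pooling."""
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.lin = nn.Linear(d_in, d_hid)
    def forward(self, A, X):               # A: [n, n] adjacency, X: [n, d_in]
        H = torch.relu(A @ self.lin(X))    # one propagation step
        return H.mean(dim=0)               # graph-level embedding [d_hid]

encoder, head = Encoder(8, 16), nn.Linear(16, 1)
iv_net = nn.Linear(16, 4)                  # learned instrument features f(Z)
opt = torch.optim.Adam([*encoder.parameters(), *head.parameters(),
                        *iv_net.parameters()], lr=1e-3)

graphs = [(torch.rand(5, 5), torch.randn(5, 8), torch.randn(1)) for _ in range(32)]
for A, X, y in graphs:
    z = encoder(A, X)
    resid = head(z) - y                    # prediction residual
    moment = resid * iv_net(z.detach())    # E[resid * f(Z)] should vanish
    loss = resid.pow(2).mean() + 0.1 * moment.mean().abs()
    opt.zero_grad(); loss.backward(); opt.step()
```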
Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data remain limited. Unlike images, the complex nature of graphs poses unique challenges to adopting the invariance principle. In particular, distribution shifts on graphs can appear in a variety of forms, such as attributes and structures, making it difficult to identify the invariance. Moreover, domain or environment partitions, which are often required by OOD methods on Euclidean data, can be highly expensive to obtain for graphs. To bridge this gap, we propose a new framework to capture the invariance of graphs for guaranteed OOD generalization under various distribution shifts. Specifically, we characterize potential distribution shifts on graphs with causal models, concluding that OOD generalization on graphs is achievable when models focus only on subgraphs containing the most information about the causes of labels. Accordingly, we propose an information-theoretic objective to extract the desired subgraphs that maximally preserve the invariant intra-class information. Learning with these subgraphs is immune to distribution shifts. Extensive experiments on both synthetic and real-world datasets, including a challenging setting in AI-aided drug discovery, validate the superior OOD generalization ability of our method.
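One simple way to encourage "maximally preserving invariant intra-class information" is a supervised-contrastive-style agreement loss over the extracted subgraph embeddings. The sketch below is an illustrative stand-in under that assumption, not the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def intra_class_agreement(z, y, tau=0.5):
    """Pull subgraph embeddings of the same class together."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                       # pairwise similarities
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    same.fill_diagonal_(0)                      # exclude self-pairs
    log_p = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_p * same).sum() / same.sum().clamp(min=1)

z = torch.randn(16, 32)                         # extracted subgraph embeddings
y = torch.randint(0, 4, (16,))                  # graph labels
loss = intra_class_agreement(z, y)
```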
Graph neural networks (GNNs) are typically proposed without considering the agnostic distribution shifts between training and testing graphs, inducing degraded generalization ability of GNNs in out-of-distribution (OOD) settings. The fundamental reason for such degeneration is that most GNNs are developed based on the I.I.D. hypothesis. In such a setting, GNNs tend to exploit subtle statistical correlations existing in the training set for prediction, even if they are spurious correlations. However, such spurious correlations may change in testing environments, leading to the failure of GNNs. Therefore, eliminating the impact of spurious correlations is crucial for stable GNNs. To this end, we propose a general causal representation framework, called StableGNN. The main idea is to first extract high-level representations from graph data and then resort to the distinguishing ability of causal inference to help the model get rid of spurious correlations. In particular, we exploit a graph pooling layer to extract subgraph-based representations as high-level representations. Furthermore, we propose a causal variable distinguishing regularizer to correct the biased training distribution, so that GNNs concentrate more on stable correlations. Extensive experiments on both synthetic and real-world OOD graph datasets well verify the effectiveness, flexibility and interpretability of the proposed framework.
Learning powerful representations is a central theme of graph neural networks (GNNs). It requires refining the critical information from the input graph, rather than trivial patterns, to enrich the representations. Towards this end, graph attention and pooling methods prevail. They mostly follow the paradigm of "learning to attend", which maximizes the mutual information between the attended subgraph and the ground-truth label. However, this training paradigm is prone to capturing spurious correlations between the trivial subgraph and the label. Such spurious correlations are beneficial for in-distribution (ID) test evaluation but cause poor generalization on out-of-distribution (OOD) test data. In this work, we revisit GNN modeling from a causal perspective. On top of our causal assumption, the trivial information serves as a confounder between the critical information and the label, opening a backdoor path between them that keeps them spuriously correlated. Hence, we propose a new deconfounded training paradigm (DTP) that better mitigates the confounding effect of the trivial information and latches onto the critical information, so as to improve representation and generalization ability. Specifically, we adopt an attention module to disentangle the critical subgraph from the trivial subgraph. We then make each critical subgraph fairly interact with diverse trivial subgraphs to achieve stable predictions. This allows GNNs to capture a more reliable subgraph whose relationship with the label holds across different distributions. We conduct extensive experiments on synthetic and real-world datasets to demonstrate the effectiveness.
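The "make each critical subgraph interact with diverse trivial subgraphs" step can be approximated in latent space by pairing each graph's causal readout with the trivial readout of a randomly permuted graph in the batch. A minimal sketch with hypothetical shapes and modules, not the paper's implementation:

```python
import torch
import torch.nn as nn

d = 16
att = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())  # soft node mask
clf = nn.Linear(d, 3)                                # 3-way graph classifier
opt = torch.optim.Adam([*att.parameters(), *clf.parameters()], lr=1e-3)

H = torch.randn(8, 5, d)       # batch of 8 graphs, 5 node embeddings each
y = torch.randint(0, 3, (8,))  # graph labels

m = att(H)                                  # [8, 5, 1] causal-node scores
causal = (m * H).mean(dim=1)                # causal readout per graph
trivial = ((1 - m) * H).mean(dim=1)         # trivial readout per graph
perm = torch.randperm(8)
mixed = causal + trivial[perm].detach()     # intervene: swap trivial contexts
loss = nn.functional.cross_entropy(clf(mixed), y)  # label must survive the swap
opt.zero_grad(); loss.backward(); opt.step()
```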
Recent works explore learning graph representations in a self-supervised manner. In graph contrastive learning, benchmark methods apply various graph augmentation approaches. However, most of the augmentation methods are non-learnable, which causes the issue of generating unbeneficial augmented graphs. Such augmentation may degrade the representation ability of graph contrastive learning methods. Therefore, we motivate our method to generate augmented graphs with a learnable graph augmenter, called MEta Graph Augmentation (MEGA). We then clarify that a "good" graph augmentation must have uniformity at the instance level and informativeness at the feature level. To this end, we propose a novel approach to learning a graph augmenter that can generate augmentations with uniformity and informativeness. The objective of the graph augmenter is to promote our feature extraction network to learn more discriminative feature representations, which motivates us to propose a meta-learning paradigm. Empirically, experiments across multiple benchmark datasets demonstrate that MEGA outperforms state-of-the-art methods in graph self-supervised learning tasks. Further experimental studies verify the effectiveness of MEGA's components.
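A meta-learning paradigm of this kind is typically a bilevel loop: a virtual encoder update on augmented data, followed by an augmenter update against a clean-data objective. Below is a toy sketch of one such bilevel step, with made-up losses, not MEGA's actual algorithm:

```python
import torch
import torch.nn as nn

aug = nn.Linear(8, 8)                          # learnable augmenter (toy)
W = torch.randn(8, 8, requires_grad=True)      # encoder weights (functional form)
opt_aug = torch.optim.Adam(aug.parameters(), lr=1e-3)
x = torch.randn(32, 8)
lr_inner = 0.1

inner = (x @ W - aug(x)).pow(2).mean()         # inner loss on augmented view
(gW,) = torch.autograd.grad(inner, W, create_graph=True)
W_new = W - lr_inner * gW                      # virtual encoder update

outer = (x @ W_new - x).pow(2).mean()          # clean-data objective
opt_aug.zero_grad(); outer.backward(); opt_aug.step()  # update the augmenter
```

Because the virtual update keeps the computation graph (`create_graph=True`), the outer loss can backpropagate through the encoder step into the augmenter.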
Graph machine learning has been extensively studied in both academia and industry. Although booming with a vast number of emerging methods and techniques, most of the literature is built on the in-distribution hypothesis, i.e., testing and training graph data are identically distributed. However, this in-distribution hypothesis can hardly be satisfied in many real-world graph scenarios where the model performance substantially degrades when there exist distribution shifts between testing and training graph data. To solve this critical problem, out-of-distribution (OOD) generalization on graphs, which goes beyond the in-distribution hypothesis, has made great progress and attracted ever-increasing attention from the research community. In this paper, we comprehensively survey OOD generalization on graphs and present a detailed review of recent advances in this area. First, we provide a formal problem definition of OOD generalization on graphs. Second, we categorize existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy, based on their positions in the graph machine learning pipeline, followed by detailed discussions for each category. We also review the theories related to OOD generalization on graphs and introduce the commonly used graph datasets for thorough evaluations. Finally, we share our insights on future research directions. This paper is the first systematic and comprehensive review of OOD generalization on graphs, to the best of our knowledge.
Interpretable graph learning is in need, as many scientific applications depend on learning models to collect insights from graph-structured data. Previous works mostly focus on using post-hoc approaches to interpret pre-trained models (graph neural network models in particular). They argue against inherently interpretable models because good interpretation of these models often comes at the cost of their prediction accuracy. Moreover, the widely used attention mechanism for inherent interpretation often fails to provide faithful interpretation in graph learning tasks. In this work, we address both issues by proposing Graph Stochastic Attention (GSAT), an attention mechanism derived from the information bottleneck principle. GSAT leverages stochastic attention to block information from task-irrelevant graph components while learning stochasticity-reduced attention to select task-relevant subgraphs for interpretation. GSAT can also be applied to fine-tune and interpret pre-trained models via the stochastic attention mechanism. Extensive experiments on eight datasets show that GSAT outperforms state-of-the-art methods by up to 20%$\uparrow$ in interpretation AUC while also achieving higher prediction accuracy.
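The stochastic attention plus information-bottleneck regularizer can be sketched as a relaxed Bernoulli mask per edge with a KL penalty toward an uninformative prior $r$. This is a generic rendition of that recipe, not the official GSAT implementation:

```python
import torch

def sample_edge_mask(logits, tau=1.0):
    """Relaxed Bernoulli (binary concrete) sample per edge."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    g = torch.log(u) - torch.log(1 - u)          # logistic noise
    return torch.sigmoid((logits + g) / tau)

def ib_regularizer(logits, r=0.5):
    """KL(Bern(p) || Bern(r)): pushes attention toward an uninformative prior."""
    p = torch.sigmoid(logits)
    return (p * torch.log(p / r + 1e-8)
            + (1 - p) * torch.log((1 - p) / (1 - r) + 1e-8)).mean()

edge_logits = torch.randn(20, requires_grad=True)  # one logit per edge
mask = sample_edge_mask(edge_logits)               # stochastic subgraph selection
reg = ib_regularizer(edge_logits)                  # added to the task loss
```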
A rationale is defined as a subset of input features that best explains or supports the prediction of a machine learning model. Rationale identification improves the generalizability and interpretability of neural networks on vision and language data. In graph applications such as molecule and polymer property prediction, identifying representative subgraph structures, named graph rationales, plays an essential role in the performance of graph neural networks. Existing graph pooling and/or distribution intervention methods suffer from a lack of examples for learning to identify optimal graph rationales. In this work, we introduce a new augmentation operation called environment replacement that automatically creates virtual data examples to improve rationale identification. We propose an efficient framework that performs rationale-environment separation and representation learning on the real and augmented examples in latent spaces, avoiding the high complexity of explicit graph decoding and encoding. Experiments on seven molecular and four polymer real-world datasets demonstrate the effectiveness and efficiency of the proposed augmentation-based graph rationalization framework against recent techniques.
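Environment replacement in latent space can be sketched as splitting each graph embedding into rationale and environment parts, then recombining rationales with other graphs' environments to create virtual examples. The split mechanism and names below are assumptions for illustration:

```python
import torch
import torch.nn as nn

d = 32
split = nn.Linear(d, 2 * d)            # predicts [rationale | environment]
clf = nn.Linear(d, 2)
z = torch.randn(16, d)                 # batch of 16 graph embeddings
y = torch.randint(0, 2, (16,))

r, e = split(z).chunk(2, dim=-1)       # rationale / environment vectors
perm = torch.randperm(16)
virtual = r + e[perm]                  # replace environments across the batch
logits = clf(torch.cat([r + e, virtual]))        # real + augmented examples
loss = nn.functional.cross_entropy(logits, torch.cat([y, y]))
```

Because the swap happens on latent vectors, no graph has to be decoded or re-encoded, which is where the claimed efficiency comes from.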
Uncovering rationales behind predictions of graph neural networks (GNNs) has received increasing attention over recent years. Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions. Though various algorithms have been proposed, most of them formalize this task by searching for the minimal subgraph that can preserve the original prediction. However, an inductive bias is deep-rooted in this framework: several subgraphs can result in the same or similar outputs as the original graph. Consequently, these methods risk providing spurious explanations and fail to provide consistent explanations. Applying them to explain weakly-performing GNNs would further amplify these issues. To address this problem, we theoretically examine the predictions of GNNs from the causality perspective. Two typical reasons for spurious explanations are identified: the confounding effect of latent variables like distribution shift, and causal factors distinct from the original input. Observing that both confounding effects and diverse causal rationales are encoded in internal representations, we propose a simple yet effective countermeasure by aligning embeddings. Concretely, concerning potential shifts in the high-dimensional space, we design a distribution-aware alignment algorithm based on anchors. This new objective is easy to compute and can be incorporated into existing techniques with little or no effort. Theoretical analysis shows that it is in effect optimizing a more faithful explanation objective by design, which further justifies the proposed approach.
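An anchor-based alignment of this sort might, for instance, match the explanation embedding's distance profile to a set of anchors against that of the original graph. The sketch below is one such hypothetical rendition, not the paper's exact objective:

```python
import torch

def anchor_alignment(z_expl, z_orig, anchors):
    """Match the two embeddings' distance profiles relative to anchors."""
    d_expl = torch.cdist(z_expl, anchors)      # distances to anchors
    d_orig = torch.cdist(z_orig, anchors)
    return (d_expl - d_orig).pow(2).mean()

z_expl = torch.randn(4, 16)                    # explanation-subgraph embeddings
z_orig = torch.randn(4, 16)                    # original-graph embeddings
anchors = torch.randn(10, 16)                  # e.g. class or cluster centers
loss = anchor_alignment(z_expl, z_orig, anchors)
```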
Graph contrastive learning (GCL) has emerged as an effective tool for learning unsupervised representations of graphs. The key idea is to maximize the agreement between two augmented views of each graph via data augmentation. Existing GCL models mainly focus on applying the same augmentation strategy to all graphs in a given scenario. However, real-world graphs are often not monomorphic but abstractions of diverse natures. Even within the same scenario (e.g., macromolecules or online communities), different graphs might need diverse augmentations to perform effective GCL. Thus, blindly augmenting all graphs without considering their individual characteristics may undermine the performance of GCL methods. To deal with this, we propose Graph Personalized Augmentation (GPA), which advances conventional GCL by allowing each graph to choose its own suitable augmentation operations. In essence, GPA infers a tailored augmentation strategy for each graph based on its topology and node attributes via a learnable augmentation selector, a plug-and-play module that can be effectively trained end-to-end with downstream GCL models. Extensive experiments on 11 benchmark graph datasets from different types and domains demonstrate the superiority of GPA over state-of-the-art competitors. Moreover, by visualizing the learned augmentation distributions across different types of datasets, we show that GPA can effectively identify the most suitable augmentations for each graph based on its characteristics.
Most graph neural networks (GNNs) predict the labels of unseen graphs by learning the correlation between input graphs and labels. However, by investigating graph classification on training graphs with severe bias, we find that GNNs always tend to exploit spurious correlations to make decisions, even when causal correlations also exist. This implies that existing GNNs trained on such biased datasets will suffer from poor generalization ability. Analyzing this problem from a causal view, we find that disentangling and decorrelating the causal and bias latent variables in biased graphs is key to debiasing. Inspired by this, we propose a general disentangled GNN framework to learn the causal substructure and bias substructure, respectively. In particular, we design a parameterized edge-mask generator to explicitly split the input graph into causal and bias subgraphs. Then two GNN modules, supervised by causal- and bias-aware loss functions respectively, are trained to encode the causal and bias subgraphs into their corresponding representations. With the disentangled representations, we synthesize counterfactual unbiased training samples to further decorrelate the causal and bias variables. Moreover, to better benchmark the severe-bias problem, we construct three new graph datasets with controllable bias degrees that are easier to visualize and interpret. Experimental results well demonstrate that our approach achieves superior generalization performance over existing baselines. Furthermore, owing to the learned edge mask, the proposed model has appealing interpretability and transferability. Code and data are available at: https://github.com/googlebaba/disc.
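A parameterized edge-mask generator can be sketched as an MLP that scores each edge from its endpoint features and splits the adjacency into complementary causal and bias parts. Shapes and the scorer below are illustrative assumptions, not the DisC code:

```python
import torch
import torch.nn as nn

class EdgeMasker(nn.Module):
    """Scores each edge from its endpoint features; splits A into two parts."""
    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(2 * d, 1)
    def forward(self, A, X):                       # A: [n,n] adjacency, X: [n,d]
        n = A.size(0)
        pair = torch.cat([X.unsqueeze(1).expand(n, n, -1),
                          X.unsqueeze(0).expand(n, n, -1)], dim=-1)
        m = torch.sigmoid(self.score(pair)).squeeze(-1) * A  # soft causal edges
        return m, A - m                            # causal vs. bias adjacency

masker = EdgeMasker(d=8)
A = (torch.rand(6, 6) > 0.5).float()
A_causal, A_bias = masker(A, torch.randn(6, 8))
# Two separate GNNs would then encode A_causal and A_bias with their own
# losses; counterfactual samples come from recombining the two codes.
```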
Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary assumptions of these models is the independent and identical distribution assumption, which states that the training and test data are sampled from the same distribution. However, this assumption seldom holds in the real world due to distribution shifts. As a result, models that rely on this assumption exhibit poor generalization capabilities. Over recent years, dedicated efforts have been made to improve the generalization capabilities of these models, collectively known as \textit{domain generalization methods}. The primary idea behind these methods is to identify stable features or mechanisms that remain invariant across different distributions. Many generalization approaches employ causal theories to describe invariance, since causality and invariance are inextricably intertwined. However, current surveys deal with causality-aware domain generalization methods at a very high level. Furthermore, we argue that it is possible to categorize the methods based on how causality is leveraged in each method and in which part of the model pipeline it is used. To this end, we categorize the causal domain generalization methods into three categories, namely, (i) Invariance via Causal Data Augmentation methods, which are applied during the data pre-processing stage, (ii) Invariance via Causal Representation Learning methods, which are utilized during the representation learning stage, and (iii) Invariance via Transferring Causal Mechanisms methods, which are applied during the classification stage of the pipeline. Furthermore, this survey includes in-depth insights into benchmark datasets and code repositories for domain generalization methods. We conclude the survey with insights and discussions on future directions.
Unsupervised graph representation learning is a non-trivial topic for graph data. The success of contrastive learning and self-supervised learning in the unsupervised representation learning of structured data inspires similar attempts on graphs. Current unsupervised graph representation learning and pre-training methods using contrastive losses are mainly based on contrast between handcrafted augmentations of graph data. However, graph data augmentation remains under-explored due to its unpredictable invariance. In this paper, we propose a novel collaborative graph neural network contrastive learning framework (CGCL), which uses multiple graph encoders to observe the graph. Features observed from different views serve as the graph augmentations for contrastive learning between the encoders, avoiding any perturbation that could compromise invariance. CGCL can handle both graph-level and node-level representation learning. Extensive experiments demonstrate the advantages of CGCL in unsupervised graph representation learning and the non-necessity of handcrafted data augmentation combinations for graph representation learning.
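Contrasting the views produced by two different encoders of the same, unperturbed graphs reduces to a standard NT-Xent loss whose positive pairs come from the two encoders. A minimal sketch with linear stand-ins for the encoders (the real framework would use structurally different GNNs):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE over a batch: positives sit on the diagonal."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / tau                     # [B, B] similarity matrix
    labels = torch.arange(z1.size(0))           # index of each positive pair
    return F.cross_entropy(sim, labels)

enc_a = nn.Linear(8, 16)   # stand-ins for two structurally different encoders
enc_b = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 16))
x = torch.randn(32, 8)     # pooled graph features for a batch of 32 graphs
loss = nt_xent(enc_a(x), enc_b(x))   # no input perturbation needed
```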
Graph neural networks (GNNs) achieve impressive performance when the testing and training graph data come from the same distribution. However, existing GNNs lack out-of-distribution generalization ability, so their performance degrades significantly when distribution shifts exist between testing and training graph data. To address this problem, in this work we propose an out-of-distribution generalized graph neural network (OOD-GNN) for achieving satisfactory performance on unseen testing graphs whose distributions differ from the training graphs. Our proposed OOD-GNN employs a novel nonlinear graph representation decorrelation method utilizing random Fourier features, which encourages the model to eliminate the statistical dependence between relevant and irrelevant graph representations by iteratively optimizing the sample graph weights and the graph encoder. We further design a global weight estimator to learn weights for the training graphs such that the variables in the graph representations are forced to be independent. The learned weights help the graph encoder get rid of spurious correlations and, in turn, concentrate more on learning the true connection between discriminative graph representations and ground-truth labels. We conduct extensive experiments to validate the out-of-distribution generalization ability on two synthetic and 12 real-world datasets with distribution shifts. The results demonstrate that our proposed OOD-GNN significantly outperforms state-of-the-art baselines.
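The sample-reweighting idea can be sketched as learning per-graph weights that drive weighted cross-dimension covariances of random-Fourier-feature embeddings toward zero. Below is a toy, frozen-encoder rendition of one weight-optimization phase, not the paper's full alternating algorithm:

```python
import math
import torch

def rff(z, n_feat=32):
    """Random Fourier features of the embeddings (fixed random projection)."""
    W = torch.randn(z.size(1), n_feat)
    b = 2 * math.pi * torch.rand(n_feat)
    return torch.cos(z @ W + b)                 # [B, n_feat]

z = torch.randn(64, 16)                         # graph embeddings (frozen here)
logit_w = torch.zeros(64, requires_grad=True)   # unnormalized sample weights
opt = torch.optim.Adam([logit_w], lr=0.1)
phi = rff(z)
for _ in range(100):
    w = torch.softmax(logit_w, dim=0)           # weights sum to one
    mean = (w.unsqueeze(1) * phi).sum(0, keepdim=True)
    cov = (w.unsqueeze(1) * (phi - mean)).t() @ (phi - mean)
    off_diag = cov - torch.diag(torch.diagonal(cov))
    loss = off_diag.pow(2).sum()                # kill cross-dimension dependence
    opt.zero_grad(); loss.backward(); opt.step()
```

In the full method, these learned weights would then reweight the task loss while the encoder is updated, and the two phases alternate.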
Leading graph contrastive learning (GCL) methods perform graph augmentation in two fashions: (1) randomly corrupting the anchor graph, which could cause the loss of semantic information, or (2) using domain knowledge to maintain salient features, which undermines generalization to other domains. Taking an invariance view of GCL, we argue that a high-performing augmentation should preserve the salient semantics of the anchor graph regarding instance discrimination. To this end, we relate GCL with invariant rationale discovery and propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL). Specifically, without supervision signals, RGCL uses a rationale generator to reveal salient features regarding graph instance discrimination as the rationale, and then creates rationale-aware views for contrastive learning. This rationale-aware pre-training scheme endows the backbone model with powerful representation ability, further facilitating fine-tuning on downstream tasks. On the MNIST-superpixel and MUTAG datasets, visual inspection of the discovered rationales showcases that the rationale generator successfully captures salient features (i.e., the distinguishing semantic nodes in graphs). On biochemical molecule and social network benchmark datasets, the state-of-the-art performance of RGCL demonstrates the effectiveness of rationale-aware contrastive learning. Our code is available at https://github.com/lsh0520/rgcl.
Recently, test-time adaptation (TTA) has attracted increasing attention due to its ability to handle distribution-shift problems in the real world. Unlike what has been developed for convolutional neural networks (CNNs) on image data, TTA is less explored for graph neural networks (GNNs), and effective algorithms tailored to graphs with irregular structures are still lacking. In this paper, we propose a novel test-time adaptation strategy named Graph Adversarial Pseudo Group Contrast (GAPGC) for GNN TTA, to better adapt to out-of-distribution (OOD) test data. Specifically, GAPGC employs a contrastive learning variant as a self-supervised task during TTA, equipped with an adversarial learnable augmenter and group pseudo-positive samples to enhance the relevance between the self-supervised task and the main task, thereby boosting the performance of the main task. Furthermore, we provide theoretical evidence that GAPGC can extract minimal sufficient information for the main task from an information-theoretic perspective. Extensive experiments on molecular scaffold OOD datasets demonstrate that the proposed approach achieves state-of-the-art performance on GNNs.
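The test-time adaptation skeleton is: freeze the task head, briefly fine-tune the encoder on a self-supervised loss computed on unlabeled test graphs, then predict. The sketch below uses a cheap noise view as a stand-in for GAPGC's adversarial augmenter and group pseudo-positives; everything is illustrative:

```python
import torch
import torch.nn as nn

encoder = nn.Linear(8, 16)                 # stand-in for a pretrained GNN
head = nn.Linear(16, 3)
for p in head.parameters():
    p.requires_grad_(False)                # only the encoder adapts

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
test_x = torch.randn(32, 8)                # pooled test-graph features
for _ in range(5):                         # a few adaptation steps
    z = encoder(test_x)
    z_aug = encoder(test_x + 0.1 * torch.randn_like(test_x))  # cheap "view"
    loss = -nn.functional.cosine_similarity(z, z_aug, dim=-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

preds = head(encoder(test_x)).argmax(dim=-1)   # adapted predictions
```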
Self-supervised learning (SSL) of graph neural networks is emerging as a promising way to leverage unlabeled data. Currently, most methods are based on contrastive learning adapted from the image domain, which requires view generation and a sufficient number of negative samples. In contrast, existing predictive models do not require negative sampling but lack theoretical guidance on the design of pretext training tasks. In this work, we propose LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction. The learning objective of LaGraph is derived as a self-supervised upper bound on the objective of predicting the unobserved latent graph. In addition to its improved performance, LaGraph provides explanations for the recent success of predictive models that include invariance-based objectives. We provide theoretical analysis comparing LaGraph with related methods in different domains. Our experimental results demonstrate the superiority of LaGraph in performance and its robustness to decreasing training sample sizes on both graph-level and node-level tasks.
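A latent-prediction objective can be loosely sketched as masked feature prediction plus an invariance term between clean and masked encodings. This is an assumption-laden illustration of the idea, not LaGraph's derived upper bound:

```python
import torch
import torch.nn as nn

enc, dec = nn.Linear(8, 16), nn.Linear(16, 8)
X = torch.randn(10, 8)                        # node features of one graph
keep = (torch.rand(10) > 0.3).float().unsqueeze(1)  # random node mask

H_full, H_masked = enc(X), enc(X * keep)      # clean vs. masked encodings
recon = nn.functional.mse_loss(dec(H_masked), X)        # predictive term
invar = (H_full.detach() - H_masked).pow(2).mean()      # invariance term
loss = recon + invar                          # no negative samples needed
```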
Most existing deep learning models are trained based on the closed-world assumption, where the test data is assumed to be drawn i.i.d. from the same distribution as the training data, known as in-distribution (ID). However, when models are deployed in an open-world scenario, test samples can be out-of-distribution (OOD) and therefore should be handled with caution. To detect such OOD samples drawn from unknown distributions, OOD detection has received increasing attention lately. However, current endeavors mostly focus on grid-structured data, and their application to graph-structured data remains under-explored. Considering that data labeling on graphs is commonly time-consuming and labor-intensive, in this work we study the problem of unsupervised graph OOD detection, aiming at detecting OOD graphs solely based on unlabeled ID data. To achieve this goal, we develop a new graph contrastive learning framework, GOOD-D, for detecting OOD graphs without using any ground-truth labels. By performing hierarchical contrastive learning on the augmented graphs generated by our perturbation-free graph data augmentation method, GOOD-D is able to capture the latent ID patterns and accurately detect OOD graphs based on semantic inconsistency at different granularities (i.e., node-level, graph-level, and group-level). As a pioneering work in unsupervised graph-level OOD detection, we build a comprehensive benchmark to compare our proposed approach with different state-of-the-art methods. The experiment results demonstrate the superiority of our approach over different methods on various datasets.
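Detection by "semantic inconsistency" can be sketched as scoring each test graph by the disagreement of its embeddings under two views and flagging the high scorers. The encoders, views, and threshold below are all hypothetical stand-ins, not GOOD-D's hierarchical scheme:

```python
import torch
import torch.nn.functional as F

def ood_score(z_view1, z_view2):
    """High cross-view disagreement suggests the graph is OOD."""
    z1 = F.normalize(z_view1, dim=-1)
    z2 = F.normalize(z_view2, dim=-1)
    return 1 - (z1 * z2).sum(dim=-1)       # 1 - cosine similarity

z_a = torch.randn(100, 32)                 # view-1 embeddings of test graphs
z_b = torch.randn(100, 32)                 # view-2 embeddings of test graphs
scores = ood_score(z_a, z_b)
flagged = scores > scores.mean() + scores.std()   # simple threshold rule
```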
This paper studies learning node representations with graph neural networks (GNNs) in unsupervised scenarios. Specifically, we derive theoretical analysis and provide an empirical demonstration of the unsteady performance of GNNs across different graph datasets when the supervision signals are not appropriately defined. The performance of GNNs depends on both the node feature smoothness and the locality of the graph structure. To smooth the discrepancy in node proximity as measured by graph topology and node features, we propose SAIL, a novel Self-Augmented graph contrastive Learning framework, with two complementary self-distillation regularization modules, i.e., intra- and inter-graph knowledge distillation. We demonstrate the competitive performance of SAIL on a variety of graph applications. Even with a single GNN layer, SAIL consistently achieves competitive or better performance on various benchmark datasets compared with state-of-the-art baselines.
Out-of-distribution (OOD) generalization on graphs is drawing widespread attention. However, existing efforts mainly focus on the OOD issue of correlation shift, while another type, covariate shift, remains largely unexplored and is the focus of this work. From a data-generation view, causal features are stable substructures in data, which play key roles in OOD generalization, while their complementary parts, environments, are unstable features that often lead to various distribution shifts. Correlation shift establishes spurious statistical correlations between environments and labels; covariate shift, in contrast, means that there exist unseen environmental features in the test data. Existing strategies of graph invariant learning and data augmentation suffer from limited environments or unstable causal features, which greatly limits their generalization ability under covariate shift. In view of that, we propose a novel graph augmentation strategy, Adversarial Causal Augmentation (AdvCA), to alleviate the covariate shift. Specifically, it adversarially augments the data to explore diverse distributions of the environments while keeping the causal features invariant across those environments, thereby effectively alleviating the covariate shift. Extensive experimental results with in-depth analyses demonstrate that AdvCA outperforms 14 baselines on synthetic and real-world datasets with various covariate shifts.
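Adversarial causal augmentation can be sketched as gradient ascent on a perturbation restricted to the non-causal feature dimensions, with descent on a soft causal mask that keeps the causal part stable. A single toy step under those assumptions (not AdvCA's actual formulation):

```python
import torch
import torch.nn as nn

d = 16
mask = nn.Parameter(torch.zeros(d))          # soft causal-feature mask
delta = nn.Parameter(torch.zeros(d))         # adversarial environment shift
clf = nn.Linear(d, 2)
x, y = torch.randn(32, d), torch.randint(0, 2, (32,))

m = torch.sigmoid(mask)
x_aug = x + (1 - m) * delta                  # perturb only non-causal dims
loss = nn.functional.cross_entropy(clf(x_aug), y)
loss.backward()
with torch.no_grad():
    delta += 0.5 * delta.grad                # ascent: make environments harder
    mask -= 0.5 * mask.grad                  # descent: keep causal part stable
```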