Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of fully trained dense networks at initialization, without any optimization of the network's weights (i.e., untrained networks). However, the presence of such untrained subnetworks in graph neural networks (GNNs) remains mysterious. In this paper, we carry out a first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we find \textit{untrained sparse subnetworks} at initialization that can match the performance of \textit{fully trained dense} GNNs. Beyond this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, hence becoming a powerful tool to enable deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely used GNN architectures on various popular datasets, including the Open Graph Benchmark (OGB).
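To make the mask-search idea concrete, below is a minimal sketch in the spirit of edge-popup: the random weights stay frozen and only per-weight pruning scores are trained, with a straight-through estimator passing gradients to the scores. The layer shape, sparsity level, and initialization are illustrative assumptions, not necessarily the paper's exact procedure.

```python
# Hedged sketch: score-based subnetwork search over frozen random weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Frozen random weights; only the pruning scores are optimized."""

    def __init__(self, in_dim, out_dim, sparsity=0.9):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1,
                                   requires_grad=False)  # never trained
        self.scores = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.sparsity = sparsity

    def forward(self, x):
        # Keep the top (1 - sparsity) fraction of weights by score magnitude.
        k = max(1, int(self.scores.numel() * (1.0 - self.sparsity)))
        thresh = torch.topk(self.scores.abs().flatten(), k).values.min()
        hard_mask = (self.scores.abs() >= thresh).float()
        # Straight-through estimator: forward uses the hard mask,
        # backward sends gradients to the scores.
        mask = hard_mask + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)
```

A GNN layer would then aggregate as usual (e.g., `h = adj @ layer(x)`), so the search changes only which frozen weights participate, never their values.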
Recent research on sparse neural network training (sparse training) has shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch. Existing sparse training methods usually strive to find the best sparse subnetwork in a single run, without involving any expensive dense or pre-training steps. For instance, as one of the most prominent directions, dynamic sparse training (DST) is able to achieve performance competitive with dense training by iteratively evolving the sparse topology during training. In this paper, we argue that it is better to allocate the limited resources to create multiple low-loss sparse subnetworks and superpose them into a stronger one, instead of spending all resources on finding a single subnetwork. To achieve this, two desiderata are required: (1) efficiently producing many low-loss subnetworks, the so-called cheap tickets, within one training process, limited to the standard training time used for dense training; (2) effectively superposing these cheap tickets into a stronger subnetwork without exceeding the constrained parameter budget. To corroborate our conjecture, we propose a novel sparse training method, termed \textbf{Sup-tickets}, which can satisfy the above two desiderata simultaneously in a single sparse-to-sparse training process. Across various modern architectures on CIFAR-10/100 and ImageNet, we show that Sup-tickets integrates seamlessly with existing sparse training methods and demonstrates consistent performance improvements.
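A minimal sketch of the superposition step, under the assumption that the cheap tickets are snapshots from one dynamic-sparse-training run: average the sparse checkpoints, then prune back to the parameter budget (averaging unions the masks, so the budget must be re-enforced). The function name and re-pruning rule are illustrative, not the paper's exact algorithm.

```python
# Hedged sketch: superpose several sparse checkpoints into one subnetwork.
import torch


def superpose_tickets(state_dicts, sparsity):
    """Average sparse checkpoints, then keep the largest-magnitude weights."""
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    for key, w in avg.items():
        if w.dim() < 2:  # skip biases and norm parameters
            continue
        # Re-prune to the original budget so the union of masks
        # does not exceed the constrained parameter count.
        k = max(1, int(w.numel() * (1.0 - sparsity)))
        thresh = torch.topk(w.abs().flatten(), k).values.min()
        avg[key] = w * (w.abs() >= thresh).float()
    return avg
```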
Few-shot classification tasks aim to classify images in a query set based on only a few labeled examples in a support set. Most studies assume that each image in a task has a single, unique class association. Under these assumptions, such algorithms may fail to identify an appropriate class assignment when there is no exact match between the support and query classes. For example, given a few images of lions, bicycles, and apples, one is asked to classify a tiger. In a more general setting, however, we could consider the higher-level concept of large carnivores to match the tiger to the lion. Since label-based supervision is incompatible with complicated concept relationships, existing studies rarely consider this scenario. In this work, we advance toward this more challenging scenario, semantic-based few-shot learning, and propose a method that addresses the paradigm by capturing the inner semantic relationships using interactive psychometric learning. We evaluate our method on the CIFAR-100 dataset. The results show the merits of our proposed method.
Graph neural networks (GNNs) learn node representations by passing and aggregating messages between neighboring nodes. GNNs have been successfully applied in several application domains and achieved promising performance. However, GNNs can be vulnerable to structural noise because of the message-passing mechanism, through which noise may propagate across the entire graph. Although a series of robust GNNs has been proposed, they are evaluated under different structural noises, and a systematic comparison in a consistent setting is lacking. In this work, we conduct a comprehensive and systematic comparative study of different types of robust GNNs under consistent structural-noise settings. On the noise side, we design three kinds of structural noise: local, community, and global noise. On the model side, we select representative models from sample-based, revision-based, and construction-based robust GNNs. Based on the empirical results, we provide practical suggestions for choosing robust GNNs.
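As a concrete illustration of one of the three noise types, the sketch below injects "global" structural noise by appending random spurious edges between arbitrary node pairs; the noise ratio and the PyG-style 2 x E edge-index format are illustrative assumptions, not the study's exact protocol.

```python
# Hedged sketch: global structural noise via random edge insertion.
import torch


def add_global_noise(edge_index, num_nodes, noise_ratio=0.1):
    """Append random edges to a 2 x E edge index (PyG-style)."""
    num_noise = int(edge_index.size(1) * noise_ratio)
    src = torch.randint(0, num_nodes, (num_noise,))
    dst = torch.randint(0, num_nodes, (num_noise,))
    noise = torch.stack([src, dst])
    return torch.cat([edge_index, noise], dim=1)
```

Local and community noise would follow the same pattern but restrict `src` and `dst` to a node's neighborhood or to a pair of communities.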
Deep neural networks are vulnerable to adversarial examples, which are crafted by applying imperceptible changes to the input. However, these adversarial examples are most successful in the white-box setting, where the model and its parameters are available. Finding adversarial examples that transfer to other models, or that can be developed in the black-box setting, is significantly more difficult. In this paper, we propose the direction-aggregated adversarial attack for generating transferable adversarial examples. Our method uses aggregated directions during the attack process to prevent the generated adversarial examples from overfitting the white-box model. Extensive experiments on ImageNet show that our proposed method significantly improves the transferability of adversarial examples and outperforms state-of-the-art attacks, especially against adversarially robust models. The best average attack success rate of our proposed method reaches 94.6% against three adversarially trained models and 94.8% against five defense methods. The results also show that current defense methods do not prevent transferable adversarial attacks.
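A minimal sketch of the aggregation idea: at each iteration, gradients are collected from several randomly perturbed copies of the current iterate and their aggregate drives the sign step, which discourages overfitting to the white-box model. The step sizes, sampling radius, and sample count below are illustrative assumptions, not the paper's tuned values.

```python
# Hedged sketch: iterative sign attack with direction aggregation.
import torch


def direction_aggregated_attack(model, x, y, eps=8 / 255, alpha=2 / 255,
                                steps=10, num_samples=5, radius=4 / 255):
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad_sum = torch.zeros_like(x)
        for _ in range(num_samples):
            # Aggregate gradients around the current iterate instead of
            # following the single white-box gradient direction.
            x_near = (x_adv + torch.empty_like(x).uniform_(-radius, radius)).detach()
            x_near.requires_grad_(True)
            loss = loss_fn(model(x_near), y)
            grad_sum += torch.autograd.grad(loss, x_near)[0]
        x_adv = x_adv + alpha * grad_sum.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```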
In recent years, the problem of anomaly detection on attributed networks has attracted growing interest due to its importance in both research and practice. Although various approaches have been proposed to solve this problem, two major limitations exist: (1) unsupervised approaches are usually much less effective due to the lack of supervisory signals, and (2) existing anomaly detection methods only use local contextual information, e.g., one- or two-hop information, to detect anomalous nodes, while ignoring global contextual information. Since anomalous nodes differ from normal nodes in both structure and attributes, it is intuitive that, if we remove the edges connecting anomalous and normal nodes, the distance between anomalous nodes and their neighbors should be larger than that between normal nodes and their neighbors. Thus, hop counts based on both global and local contextual information can serve as indicators of anomaly. Motivated by this intuition, we propose a hop-count-based model (HCM) that detects anomalies by modeling both local and global contextual information. To better exploit hop counts for anomaly identification, we propose to use hop-count prediction as a self-supervised task, and we design two anomaly scores based on the hop counts predicted by the HCM model. Besides, we employ Bayesian learning to train the HCM model, capturing the uncertainty of the learned parameters and avoiding overfitting. Extensive experiments on real-world attributed networks demonstrate that our proposed model is effective in anomaly detection.
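The hop-count intuition can be checked directly on a small graph: for each edge (u, v), remove it and measure the remaining shortest-path distance between u and v; large detour distances around a node suggest it is weakly embedded in its context. The scoring rule below is an illustrative proxy for that intuition, not the paper's learned HCM model.

```python
# Hedged sketch: detour-distance anomaly scores from hop counts.
import networkx as nx


def hop_count_scores(G, cap=10):
    """Score nodes by how far apart their edge endpoints become
    when the direct edge is removed (capped for disconnections)."""
    scores = {n: 0.0 for n in G.nodes}
    for u, v in list(G.edges):
        G.remove_edge(u, v)
        try:
            d = nx.shortest_path_length(G, u, v)
        except nx.NetworkXNoPath:
            d = cap
        G.add_edge(u, v)
        scores[u] += min(d, cap)
        scores[v] += min(d, cap)
    # Normalize by degree so high-degree nodes are not over-penalized.
    return {n: s / max(G.degree[n], 1) for n, s in scores.items()}
```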
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which together complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. Existing BIQA methods, however, often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, mirroring the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results indicate that PMT-IQA outperforms the comparison approaches and that both the MS and PMT modules improve the model's performance.
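One plausible reading of the easy-to-hard progression is a loss that shifts weight from a coarse quality-level classification task to the fine-grained score regression as training proceeds. The sketch below illustrates that scheduling; the linear schedule and the task pairing are assumptions, not PMT-IQA's published formulation.

```python
# Hedged sketch: an easy-to-hard progressive multi-task loss.
import torch.nn.functional as F


def progressive_loss(pred_score, pred_level, target_score, target_level,
                     epoch, total_epochs):
    # lam grows from 0 to 1: early training favors the easy
    # classification task, late training the hard regression task.
    lam = epoch / total_epochs
    cls_loss = F.cross_entropy(pred_level, target_level)
    reg_loss = F.l1_loss(pred_score, target_score)
    return (1 - lam) * cls_loss + lam * reg_loss
```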
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from a trade-off between accuracy and efficiency. Recent advances in neural operators, a class of mesh-independent, neural-network-based PDE solvers, suggest that this challenge can be overcome. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration that outperforms other state-of-the-art alternatives in both accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and on ERA5 (one of the largest high-resolution datasets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab for diverse applications involving partial differential equations.
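The core idea behind KNO can be sketched independently of the library: encode the physical field into a latent space where the dynamics are advanced by a learned linear (Koopman-style) operator, then decode back. The module below is a toy illustration of that structure, not KoopmanLab's actual API; all names and sizes are assumptions.

```python
# Hedged sketch: encode -> linear latent dynamics -> decode.
import torch
import torch.nn as nn


class TinyKoopman(nn.Module):
    def __init__(self, state_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, latent_dim), nn.GELU(),
            nn.Linear(latent_dim, latent_dim))
        # Linear operator: one latent step approximates one time step.
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.GELU(),
            nn.Linear(latent_dim, state_dim))

    def forward(self, u, steps=1):
        z = self.encoder(u)
        for _ in range(steps):  # roll the linear dynamics forward
            z = self.koopman(z)
        return self.decoder(z)
```

Because the latent dynamics are linear, long-horizon prediction is just repeated application of the same operator, which is what makes the family attractive for long-term rollouts.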
We present SODA: the first publicly available, million-scale, high-quality social dialogue dataset. Using SODA, we train COSMO: a generalizable conversation agent that outperforms previous best-performing agents on both in- and out-of-domain datasets. In contrast to most existing crowdsourced, small-scale dialogue corpora, we distill 1.5M socially grounded dialogues from a pre-trained language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets, e.g., DailyDialog (Li et al., 2017) and BlendedSkillTalk (Smith et al., 2020). In addition, extensive evaluations show that COSMO is significantly more natural and consistent on unseen datasets than the best-performing dialogue models, e.g., GODEL (Peng et al., 2022), BlenderBot (Roller et al., 2021), and DialoGPT (Zhang et al., 2020). Furthermore, it is sometimes even preferred over the original human-written gold responses. We make our data, models, and code public.
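A hedged sketch of the distillation recipe described above: verbalize a commonsense triple into a short narrative and prompt an instruction-tuned language model to write the grounded dialogue. The verbalizers and prompt wording are illustrative assumptions, not SODA's actual templates.

```python
# Hedged sketch: turn a knowledge-graph triple into a dialogue prompt.
VERBALIZERS = {  # tiny illustrative subset of Atomic-style relations
    "xWant": "{head}. As a result, PersonX wants {tail}.",
    "xReact": "{head}. As a result, PersonX feels {tail}.",
}


def triple_to_prompt(head, relation, tail):
    narrative = VERBALIZERS.get(relation, "{head}. {tail}.").format(
        head=head, tail=tail)
    return (f"Narrative: {narrative}\n"
            "Write a natural multi-turn conversation grounded in this "
            "narrative:")
```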
We propose a novel task, G4C (Goal-driven Guidance Generation in Grounded Communication), for studying goal-driven and grounded natural language interactions. Specifically, we choose Dungeons and Dragons (D&D) -- a role-playing game consisting of multiple player characters and a Dungeon Master (DM) who collaborate to achieve a set of goals that are beneficial to the players -- as a testbed for this task. Here, each of the player characters is a student, with their own personas and abilities, and the DM is the teacher, an arbitrator of the rules of the world and responsible for assisting and guiding the students towards a global goal. We propose a theory-of-mind-inspired methodology for training such a DM with reinforcement learning (RL), where a DM: (1) learns to predict how the players will react to its utterances using a dataset of D&D dialogue transcripts; and (2) uses this prediction as a reward function providing feedback on how effective these utterances are at guiding the players towards a goal. Human and automated evaluations show that a DM trained with RL to generate guidance by incorporating a theory-of-mind of the players significantly improves the players' ability to achieve goals grounded in their shared world.
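The reward loop can be summarized in a few lines: a learned player model predicts how the players will react to a candidate DM utterance, and the reward reflects whether that predicted reaction advances the goal. Both callables below are hypothetical stand-ins for the paper's trained components, shown only to fix the interface.

```python
# Hedged sketch: theory-of-mind reward for the DM policy.
def tom_reward(predict_reaction, history, utterance, advances_goal):
    """predict_reaction: player model (list of turns -> predicted turn);
    advances_goal: checks the predicted turn against the intended goal.
    Both are hypothetical callables standing in for trained components."""
    reaction = predict_reaction(history + [utterance])
    return 1.0 if advances_goal(reaction) else 0.0
```

This scalar then plugs into a standard RL objective over the DM's utterance policy, so guidance is rewarded for its predicted effect on the players rather than for surface fluency.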