Recently, deep learning methods have made great progress in traffic prediction, but their performance depends on large amounts of historical data. In practice, we may face the data scarcity problem, under which deep learning models fail to achieve satisfactory performance. Transfer learning is a promising approach to addressing data scarcity. However, existing transfer learning methods for traffic prediction are mainly based on regular grid data, which is not suitable for the inherently graph-structured data of traffic networks. In addition, existing graph-based models can only capture traffic patterns shared across the road network, and how to learn node-specific patterns remains a challenge. In this paper, we propose a novel transfer learning approach for traffic prediction that transfers knowledge from a data-rich source domain to a data-scarce target domain. First, we propose a spatial graph neural network that captures the node-specific spatio-temporal traffic patterns of different road networks. Then, to improve the robustness of transfer, we design a pattern-based transfer strategy, in which a clustering-based mechanism distills common spatio-temporal patterns in the source domain and uses this knowledge to further improve prediction performance in the target domain. Experiments on real-world datasets verify the effectiveness of our approach.
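A minimal sketch (not the paper's implementation) of the clustering-based pattern-transfer idea: distill common spatio-temporal patterns from a data-rich source road network with k-means, then match each data-scarce target node to its nearest source pattern as a prior. All names, shapes, and the blending step are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Source domain: 200 road-network nodes, each with a length-288 daily
# traffic profile (e.g., 5-minute readings); target domain: 30 nodes
# with only short histories. Random data stands in for real traffic.
source_profiles = rng.random((200, 288))
target_profiles = rng.random((30, 288))

# Step 1: distill K common spatio-temporal patterns from the source domain.
K = 8
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(source_profiles)
patterns = kmeans.cluster_centers_             # (K, 288) shared patterns

# Step 2: assign each data-scarce target node to its closest source pattern
# and use the matched pattern as a transferred prior.
assignments = kmeans.predict(target_profiles)  # (30,)
pattern_prior = patterns[assignments]          # (30, 288)

# A node-specific forecast could then blend the transferred pattern with
# the node's own (scarce) history, e.g. a simple convex combination:
alpha = 0.7
blended = alpha * pattern_prior + (1 - alpha) * target_profiles
print(blended.shape)  # (30, 288)
```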
For learning graph representations, not all detailed structures in a graph are relevant to a given graph task. Task-relevant structures can be $localized$ or $sparse$, involving only part of the graph or interactions among subgraphs (from a hierarchical perspective). A graph neural network should be able to efficiently extract task-relevant structures while remaining invariant to the irrelevant parts, which is challenging for general message-passing GNNs. In this work, we propose to learn graph representations from a sequence of subgraphs of the original graph, so as to better capture task-relevant substructures or hierarchical structures and skip $noisy$ parts. To this end, we design soft-mask GNN layers that extract the desired subgraphs through a mask mechanism. The soft mask is defined in a continuous space, to maintain differentiability and to characterize the weights of different parts. Compared with existing subgraph or hierarchical representation learning methods and graph pooling operations, the soft-mask GNN layer is not limited by a fixed sample size or drop ratio, and is therefore more flexible in extracting subgraphs of arbitrary size. Extensive experiments on public graph benchmarks show that the soft-mask mechanism improves performance. It also provides interpretability: visualizing the mask values in each layer allows us to gain insight into the structures learned by the model.
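A minimal PyTorch sketch of the soft-mask idea described above: each node gets a learned mask value in [0, 1] that scales its contribution during message passing, so the layer can softly "skip" noisy parts of the graph. This is an illustrative reading of the mechanism, not the authors' code; the class name and dense-adjacency formulation are assumptions.

```python
import torch
import torch.nn as nn

class SoftMaskGNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.msg = nn.Linear(in_dim, out_dim)    # message transform
        self.mask = nn.Linear(in_dim, 1)         # per-node soft-mask logits

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) dense adjacency.
        m = torch.sigmoid(self.mask(x))          # (N, 1), soft mask in (0, 1)
        h = self.msg(x) * m                      # masked node messages
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        out = adj @ h / deg                      # mean aggregation over neighbors
        return torch.relu(out), m                # mask returned for interpretability

x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
layer = SoftMaskGNNLayer(16, 32)
h, mask = layer(x, adj)
print(h.shape, mask.squeeze(-1))
```

Because the mask is continuous rather than a hard top-k selection, gradients flow through it during training, which is what frees the layer from a fixed sample size or drop ratio.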
The effectiveness of knowledge graph embedding (KGE) largely depends on the ability to model intrinsic relation patterns and mapping properties. However, existing approaches can only capture some of them, with insufficient modeling capacity. In this work, we propose a more powerful KGE framework named HousE, which involves a novel parameterization based on two kinds of Householder transformations: (1) Householder rotations, to achieve superior capacity for modeling relation patterns; (2) Householder projections, to handle sophisticated relation mapping properties. Theoretically, HousE is capable of modeling crucial relation patterns and mapping properties simultaneously. Moreover, HousE is a generalization of existing rotation-based models, extending rotations to high-dimensional spaces. Empirically, HousE achieves new state-of-the-art performance on five benchmark datasets. Our code is available at https://github.com/anrep/house.
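A small numpy sketch of the Householder-based relation operator: a Householder matrix $H(v) = I - 2vv^\top/\|v\|^2$ is a reflection, and composing an even number of them yields a rotation in high-dimensional space. The embedding dimension and number of composed reflections below are illustrative assumptions, not HousE's actual hyperparameters.

```python
import numpy as np

def householder(v):
    # Householder reflection H(v) = I - 2 v v^T / ||v||^2.
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

rng = np.random.default_rng(0)
d = 8
head = rng.standard_normal(d)                   # entity embedding

# Relation parameterized by 2k Householder vectors -> a d-dim rotation
# (an even number of reflections composes to a rotation).
k = 2
vectors = rng.standard_normal((2 * k, d))
rotation = np.eye(d)
for v in vectors:
    rotation = householder(v) @ rotation

rotated = rotation @ head                       # relation-specific rotation
# Rotations preserve norm, a property rotation-based KGE models rely on:
print(np.allclose(np.linalg.norm(rotated), np.linalg.norm(head)))  # True
```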
From the original, theoretically well-defined spectral graph convolution to the subsequent spatial message-passing models, spatial locality (in the vertex domain) serves as the fundamental principle of most graph neural networks (GNNs). In spectral graph convolution, the filter is approximated by polynomials, where a $k$-order polynomial covers $k$-hop neighbors. In message passing, the various neighbor definitions used in aggregation are in effect an extensive exploration of spatially localized information. For learning node representations, such topological distance seems necessary, since it characterizes the basic relationships between nodes. However, is it necessary for learning representations of entire graphs? In this work, we show that such a principle is not required, and that it hinders most existing GNNs from effectively encoding graph structure. By removing it, together with the limitation of polynomial filters, the resulting new architecture significantly improves performance on learning graph representations. We also study the effect of the graph spectrum on signals and interpret various existing improvements as different spectral smoothing techniques. This serves as a spatial understanding that quantitatively measures the influence of the spectrum on input signals, in contrast to the well-known spectral understanding of GNNs as high/low-pass filters. More importantly, it sheds light on developing powerful graph representation models.
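A short numpy sketch of the $k$-order polynomial filtering discussed above: a filter $\sum_k \theta_k \hat{A}^k$ mixes information from up to $k$-hop neighbors, which is exactly the spatial-locality principle the passage questions for graph-level tasks. The graph and coefficients here are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = (rng.random((N, N)) > 0.6).astype(float)
A = np.maximum(A, A.T)                          # undirected graph
A_hat = A + np.eye(N)                           # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt        # symmetric normalization

x = rng.standard_normal((N, 3))                 # node signals
theta = [1.0, 0.5, 0.25]                        # k = 2 polynomial coefficients

# y = (theta_0 I + theta_1 A_norm + theta_2 A_norm^2) x: each power of
# A_norm reaches one hop further, so order k sees the k-hop neighborhood.
y = sum(t * np.linalg.matrix_power(A_norm, k) @ x for k, t in enumerate(theta))
print(y.shape)  # (6, 3)
```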
The Transformer architecture has become the dominant choice in many domains, such as natural language processing and computer vision. Yet, compared with mainstream GNN variants, it has not achieved competitive performance on popular leaderboards for graph-level prediction. It therefore remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and attains excellent results on a wide range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight for utilizing the Transformer on graphs is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Moreover, we mathematically characterize the expressive power of Graphormer and show that, with our ways of encoding graph structural information, many popular GNN variants can be covered as special cases of Graphormer.
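A simplified PyTorch sketch of the kind of structural encodings described above: node degree is added to the input embeddings (a centrality encoding), and a learned bias indexed by shortest-path distance is added to the attention logits (a spatial encoding). This is a single-head toy without projections, and the BFS helper, dimensions, and distance cap are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def spd(adj, max_dist=4):
    # Shortest-path distance via powers of the adjacency, capped at max_dist.
    N = adj.size(0)
    dist = torch.full((N, N), max_dist)
    dist.fill_diagonal_(0)
    power = torch.eye(N)
    for d in range(1, max_dist):
        power = power @ adj
        newly = (power > 0) & (dist == max_dist)
        dist[newly] = d
    return dist

torch.manual_seed(0)
N, h = 6, 16
adj = (torch.rand(N, N) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0)
x = torch.randn(N, h)

deg_embed = nn.Embedding(N + 1, h)              # centrality encoding by degree
spatial_bias = nn.Embedding(5, 1)               # one bias per SPD value 0..4

deg = adj.sum(1).long()
x = x + deg_embed(deg)                          # inject centrality encoding

logits = (x @ x.t()) / h ** 0.5                 # single-head attention logits
logits = logits + spatial_bias(spd(adj)).squeeze(-1)  # inject spatial encoding
attn = F.softmax(logits, dim=-1)
out = attn @ x
print(out.shape)  # torch.Size([6, 16])
```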
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
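An illustrative sketch of the NAIVEATTACK idea as described above: stamp a small trigger patch onto a fraction of the raw images (relabeled to the attacker's target class) before distillation begins, so the trigger can be absorbed into the synthetic set. Trigger shape, poison ratio, and data shapes are assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 32, 32, 3))          # raw training images in [0, 1]
labels = rng.integers(0, 10, size=1000)

def add_trigger(img, size=4):
    # White square trigger in the bottom-right corner.
    img = img.copy()
    img[-size:, -size:, :] = 1.0
    return img

poison_ratio, target_label = 0.05, 0
idx = rng.choice(len(images), int(poison_ratio * len(images)), replace=False)
for i in idx:
    images[i] = add_trigger(images[i])
    labels[i] = target_label

# `images`/`labels` would now be handed to the dataset distillation
# procedure. DOORPING differs in that the trigger itself is re-optimized
# at every distillation step rather than fixed up front.
print(len(idx), "images poisoned")
```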
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., boosting nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and models will be available.
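A rough PyTorch sketch of the feature-level enhancement described above: support masks pool support features into per-class dynamic centers (masked average pooling), and query features are re-weighted by their similarity to those centers. The shapes, the cosine similarity, and the max-over-classes re-weighting are illustrative choices rather than RefT's exact module.

```python
import torch
import torch.nn.functional as F

C, H, W, D = 3, 16, 16, 64                      # classes, spatial dims, channels
support_feats = torch.randn(C, D, H, W)         # one support image per class
support_masks = (torch.rand(C, 1, H, W) > 0.5).float()
query_feats = torch.randn(D, H, W)

# Masked average pooling -> one dynamic class center per support class.
centers = (support_feats * support_masks).sum((2, 3)) \
          / support_masks.sum((2, 3)).clamp(min=1)      # (C, D)

# Re-weight query features by cosine similarity to each class center.
q = query_feats.flatten(1).t()                   # (H*W, D)
sim = F.cosine_similarity(q.unsqueeze(1), centers.unsqueeze(0), dim=-1)  # (H*W, C)
weights = sim.max(dim=1).values.clamp(min=0)     # strongest class response per pixel
reweighted = (q * weights.unsqueeze(-1)).t().reshape(D, H, W)
print(reweighted.shape)  # torch.Size([64, 16, 16])
```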
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper we propose a Unified Timeline Summarizer (UTS) that can generate both abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the event-level attention features arising during generation, with the sequential information retained, and use them to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also assist in extractive summarization, where the extracted summary likewise follows the time order. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline 17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
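A toy sketch of the graph-based event encoder idea: build a content-dependency graph from pairwise event similarity, then update each event's representation by averaging over its related events to obtain a globally informed encoding. The similarity threshold and single averaging round are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_events, dim = 5, 32
event_reprs = rng.standard_normal((num_events, dim))   # per-event encodings

# Content-dependency graph: connect events with high cosine similarity.
unit = event_reprs / np.linalg.norm(event_reprs, axis=1, keepdims=True)
sim = unit @ unit.T
adj = (sim > 0.0).astype(float)
np.fill_diagonal(adj, 1.0)

# One round of neighborhood averaging -> global event representations that
# a decoder could attend over in chronological order.
global_reprs = adj @ event_reprs / adj.sum(axis=1, keepdims=True)
print(global_reprs.shape)  # (5, 32)
```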
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the dual perspectives of networking for AI and AI for networking. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence of 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
In this paper, we investigate the joint device activity and data detection in massive machine-type communications (mMTC) with a one-phase non-coherent scheme, where data bits are embedded in the pilot sequences and the base station simultaneously detects active devices and their embedded data bits without explicit channel estimation. Due to the correlated sparsity pattern introduced by the non-coherent transmission scheme, the traditional approximate message passing (AMP) algorithm cannot achieve satisfactory performance. Therefore, we propose a deep learning (DL) modified AMP network (DL-mAMPnet) that enhances the detection performance by effectively exploiting the pilot activity correlation. The DL-mAMPnet is constructed by unfolding the AMP algorithm into a feedforward neural network, which combines the principled mathematical model of the AMP algorithm with the powerful learning capability, thereby benefiting from the advantages of both techniques. Trainable parameters are introduced in the DL-mAMPnet to approximate the correlated sparsity pattern and the large-scale fading coefficient. Moreover, a refinement module is designed to further advance the performance by utilizing the spatial feature caused by the correlated sparsity pattern. Simulation results demonstrate that the proposed DL-mAMPnet can significantly outperform traditional algorithms in terms of the symbol error rate performance.
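A compact PyTorch sketch of algorithm unfolding in the spirit described above: each AMP-style iteration becomes a network layer whose step size and soft-threshold are trainable parameters. This is a LISTA/AMP-flavored toy (it omits the Onsager correction term and the paper's refinement module), not DL-mAMPnet itself; all sizes are made up.

```python
import torch
import torch.nn as nn

class UnfoldedAMPLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(1.0))    # trainable step size
        self.theta = nn.Parameter(torch.tensor(0.1))   # trainable threshold

    def forward(self, x, y, A):
        r = y - A @ x                                  # residual
        z = x + self.step * (A.t() @ r)                # gradient-style update
        # Soft-thresholding promotes the sparse device-activity pattern.
        return torch.sign(z) * torch.relu(z.abs() - self.theta)

torch.manual_seed(0)
M, N, T = 40, 100, 6                                   # pilot length, devices, layers
A = torch.randn(M, N) / M ** 0.5                       # known pilot matrix
x_true = torch.zeros(N)
x_true[torch.randperm(N)[:5]] = 1.0                    # 5 active devices
y = A @ x_true                                         # noiseless received signal

layers = nn.ModuleList(UnfoldedAMPLayer() for _ in range(T))
x = torch.zeros(N)
for layer in layers:                                   # T unfolded iterations
    x = layer(x, y, A)
# Indices with the largest magnitudes: the estimated active-device set
# (untrained parameters here, so this is purely illustrative).
print(x.abs().topk(5).indices.sort().values)
```

In a real unfolded network, the layers would be trained end-to-end on simulated transmissions so that the learned steps and thresholds adapt to the correlated sparsity pattern that defeats the hand-tuned AMP algorithm.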