Learning on Graphs (LoG) is widely used in multi-client systems where each client has insufficient local data and multiple clients must share their raw data to learn a model of good quality. One scenario is recommending items to clients who have limited historical data but share preferences with other clients in a social network. On the other hand, due to increasing demands for the protection of clients' data privacy, Federated Learning (FL) has been widely adopted: FL requires that models be trained in a multi-client system while restricting the sharing of raw data among clients. The underlying data-sharing conflict between LoG and FL is under-explored, and how to benefit from both sides is a promising open problem. In this work, we first formulate the Graph Federated Learning (GFL) problem, which unifies LoG and FL in multi-client systems, and then propose sharing hidden representations instead of neighbors' raw data to protect data privacy. To overcome the biased-gradient problem in GFL, we provide a gradient estimation method and its convergence analysis under a non-convex objective. In experiments, we evaluate our method on classification tasks on graphs. Our experiments show a good match between our theory and practice.
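As a rough illustration of the representation-sharing idea, the sketch below lets one client aggregate a neighbor's hidden vector across the client boundary instead of its raw features. All shapes, weights, and the single mean-aggregation layer are hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two clients, each holding raw features for its own nodes.
# Cross-client edges exist, but raw features never leave their owner.
d_in, d_hid = 8, 16
W1 = rng.normal(size=(d_in, d_hid)) / np.sqrt(d_in)  # layer weights synced via FL

X_a = rng.normal(size=(5, d_in))   # client A's private raw features
X_b = rng.normal(size=(4, d_in))   # client B's private raw features

def local_hidden(X, W):
    """Each client encodes its raw features locally."""
    return np.maximum(X @ W, 0.0)  # ReLU(XW)

H_a, H_b = local_hidden(X_a, W1), local_hidden(X_b, W1)

# Client A's node 0 has local neighbors {1, 2} and one neighbor on client B.
# Instead of requesting X_b[3] (raw data), A receives the hidden vector H_b[3].
shared_from_b = H_b[3]             # only the hidden representation crosses clients

neighbors_local = H_a[[1, 2]]
agg = np.mean(np.vstack([neighbors_local, shared_from_b[None, :]]), axis=0)
h_node0 = np.maximum(agg, 0.0)     # next-layer representation for A's node 0
print(h_node0.shape)               # (16,)
```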
Is it possible to leverage large-scale raw corpora and raw parallel corpora to build a general learned metric? Existing learned metrics have gaps with human judgement, are model-dependent, or are limited to the domains or tasks where human ratings are available. In this paper, we propose SEScore2, a model-based metric pretrained on a million-scale synthetic dataset constructed by our novel retrieval-augmented data synthesis pipeline. SEScore2 achieves high correlation with human judgement without any human-rating supervision. Importantly, our unsupervised SEScore2 can outperform supervised metrics, which are trained on News-domain human ratings, in the TED domain. We evaluate SEScore2 on four text generation tasks across three languages. SEScore2 outperforms all prior unsupervised evaluation metrics in machine translation, speech translation, data-to-text, and dialogue generation, with an average Kendall improvement of 0.158. SEScore2 even outperforms the SOTA supervised metric BLEURT on data-to-text, dialogue generation, and overall correlation.
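The retrieval-augmented synthesis idea can be sketched loosely: retrieve a related sentence, splice its material into an anchor to create a realistic error, and attach a severity-based pseudo score. The toy retriever, edit operation, and scoring below are my assumptions; the paper's pipeline is considerably richer.

```python
import random

random.seed(0)

corpus = [
    "the cat sat on the mat",
    "a dog slept on the rug",
    "the cat lay on the rug",
]

def retrieve(anchor, pool):
    """Nearest neighbor by token overlap (stand-in for a real dense retriever)."""
    a = set(anchor.split())
    return max(pool, key=lambda s: len(a & set(s.split())))

def synthesize(anchor, pool, n_edits=1):
    """Create a perturbed sentence plus a severity-based pseudo score."""
    neighbor = retrieve(anchor, [s for s in pool if s != anchor])
    tokens, donor = anchor.split(), neighbor.split()
    for _ in range(n_edits):
        i = random.randrange(len(tokens))
        tokens[i] = random.choice(donor)     # splice in retrieved material
    return " ".join(tokens), -1.0 * n_edits  # each edit costs a fixed penalty

anchor = corpus[0]
candidate, pseudo_score = synthesize(anchor, corpus, n_edits=2)
print(candidate, pseudo_score)  # one synthetic training pair for the metric
```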
Nearest Neighbor Machine Translation (kNN-MT) is a simple and effective method for augmenting neural machine translation (NMT) with a token-level nearest-neighbor retrieval mechanism. The effectiveness of kNN-MT directly depends on the quality of the retrieved neighbors. However, the original kNN-MT builds its datastores from the representations of NMT models, which results in poor retrieval accuracy when the NMT models are not good enough, leading to sub-optimal translation performance. In this paper, we propose PRED, a framework that leverages Pre-trained models for Datastores in kNN-MT. Better representations from pre-trained models allow us to build datastores of better quality. We also design a novel contrastive alignment objective to mitigate the representation gap between the NMT model and the pre-trained models, enabling the NMT model to retrieve from the better datastores. We conduct extensive experiments on both bilingual and multilingual translation benchmarks, including WMT17 English $\leftrightarrow$ Chinese, WMT14 English $\leftrightarrow$ German, IWSLT14 German $\leftrightarrow$ English, and IWSLT14 multilingual datasets. Empirical results demonstrate the effectiveness of PRED.
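For context, here is a minimal sketch of the standard kNN-MT decoding step that PRED builds on: retrieve the k nearest datastore entries for the current decoder state and interpolate the induced token distribution with the NMT distribution. In PRED, the datastore keys would come from a pre-trained model rather than the NMT decoder; the shapes and hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, k, lam, T = 6, 4, 3, 0.5, 10.0   # vocab size, key dim, neighbors, mix, temperature

# Datastore: (key, target-token) pairs built offline over the training data.
keys = rng.normal(size=(100, d))
values = rng.integers(0, V, size=100)

def knn_mt_step(query, p_nmt):
    """Interpolate the NMT distribution with a retrieval-based distribution."""
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]                 # k nearest datastore entries
    w = np.exp(-dists[nn] / T)
    w /= w.sum()
    p_knn = np.zeros(V)
    np.add.at(p_knn, values[nn], w)            # scatter weights onto target tokens
    return lam * p_knn + (1 - lam) * p_nmt     # standard kNN-MT interpolation

query = rng.normal(size=d)                     # current decoder state
p_nmt = np.full(V, 1.0 / V)                    # placeholder NMT distribution
print(knn_mt_step(query, p_nmt).sum())         # ~1.0
```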
The role of mobile cameras has grown dramatically over the past few years, leading to more and more research on automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that replaces the standard mobile ISP and can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and can process Full HD photos in 20-50 milliseconds while achieving high-fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
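For a sense of the deployment target, the sketch below runs a learned ISP model through the standard TensorFlow Lite interpreter API. The model path, input resolution, and 4-channel RAW layout are assumptions for illustration, not artifacts of the challenge.

```python
import numpy as np
import tensorflow as tf

# Hypothetical model file: any learned ISP exported to TFLite would do.
interpreter = tf.lite.Interpreter(model_path="learned_isp.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A fake RAW patch (4 Bayer planes), normalized to [0, 1]; the shape must
# match whatever the exported model expects.
raw = np.random.rand(1, 544, 960, 4).astype(np.float32)

interpreter.set_tensor(inp["index"], raw)
interpreter.invoke()
rgb = interpreter.get_tensor(out["index"])   # the rendered RGB image
print(rgb.shape)
```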
Partial-label learning (PLL) is a peculiar weakly supervised learning task in which each training sample is typically associated with a set of candidate labels rather than a single ground truth. Although various label-disambiguation methods have been proposed in this domain, they usually assume a class-balanced setting that may not hold in many real-world applications. Empirically, we observe degraded performance of prior methods when they face the combined challenge of long-tailed distribution and partial labeling. In this work, we first identify the major reasons why previous approaches fail. We then propose SoLar, a novel optimal-transport-based framework that refines the disambiguated labels so that they match a marginal class prior distribution. SoLar also incorporates a new systematic mechanism for estimating the long-tailed class prior distribution under the PLL setup. Through extensive experiments, SoLar exhibits substantial advantages over the previous state-of-the-art PLL methods on standardized benchmarks. Code and data are available at: https://github.com/hbzju/solar.
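The core optimal-transport step can be sketched with Sinkhorn-Knopp iterations: refine candidate-masked predictions so that the column marginals match a class prior while each row remains a distribution. The prior r, masking scheme, and iteration count below are illustrative; SoLar's full objective and prior-estimation mechanism are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, C = 8, 4

# Model predictions, masked to each sample's candidate label set.
probs = rng.random((n, C))
cand_mask = rng.random((n, C)) < 0.6
cand_mask[np.arange(n), probs.argmax(1)] = True     # ensure non-empty candidate sets
P = np.where(cand_mask, probs, 1e-8)
P /= P.sum(1, keepdims=True)

r = np.array([0.4, 0.3, 0.2, 0.1])                  # assumed long-tailed class prior

def sinkhorn_refine(P, r, n_iters=50):
    """Refine pseudo-labels so column sums match the class prior r
    while each row stays a distribution (Sinkhorn-Knopp iterations)."""
    Q = P.copy()
    for _ in range(n_iters):
        Q *= (r / Q.sum(0))[None, :]                # match class marginals
        Q *= (1.0 / n) / Q.sum(1, keepdims=True)    # match per-sample marginals
    return Q * n                                    # rescale rows to sum to 1

Q = sinkhorn_refine(P, r)
print(Q.sum(0) / n)   # ~r: refined labels now respect the long-tailed prior
```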
In this paper, we propose SATformer, a novel Transformer-based solution for Boolean satisfiability (SAT) solving. Unlike existing learning-based SAT solvers that learn at the problem-instance level, SATformer learns the minimum unsatisfiable cores (MUC) of unsatisfiable problem instances, which provide rich information about the causes of infeasibility. Specifically, we apply a graph neural network (GNN) to obtain embeddings of the clauses in conjunctive normal form (CNF). A hierarchical Transformer architecture is then applied to the clause embeddings to capture the relationships among clauses, with the self-attention weights learned to be high when the clauses forming an UNSAT core are attended together, and low otherwise. In this way, SATformer effectively learns the correlations among clauses for SAT prediction. Experimental results show that SATformer is more powerful than existing end-to-end learning-based SAT solvers.
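A minimal sketch of the clause-level pipeline: embed CNF clauses (here by a simple mean over literal embeddings, standing in for the GNN), apply self-attention across clauses, and pool for a SAT/UNSAT score. The single untrained attention head below only shows where the MUC-related attention weights would live; it is not the paper's hierarchical architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, d = 4, 8

# CNF over variables x1..x4; a literal is (variable index, polarity).
cnf = [[(0, 1), (1, 0)], [(1, 1), (2, 1)], [(0, 0), (3, 1)]]

var_emb = rng.normal(size=(n_vars, d))
pol_emb = rng.normal(size=(2, d))

def embed_clause(clause):
    """Clause embedding: mean over literal embeddings (GNN stand-in)."""
    return np.mean([var_emb[v] + pol_emb[p] for v, p in clause], axis=0)

H = np.stack([embed_clause(c) for c in cnf])        # (n_clauses, d)

def self_attention(H):
    """One attention head over clause embeddings: after training, weights
    should be high between clauses that jointly form an UNSAT core."""
    scores = H @ H.T / np.sqrt(H.shape[1])
    A = np.exp(scores - scores.max(1, keepdims=True))
    A /= A.sum(1, keepdims=True)
    return A @ H

Z = self_attention(H)
logit = Z.mean(0) @ rng.normal(size=d)              # pooled SAT/UNSAT score
print(float(logit))
```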
This paper presents a new approach to boosting a single-modality (LiDAR) 3D object detector by teaching it to mimic the features and responses of a multi-modality (LiDAR-image) detector. The approach needs LiDAR-image data only when training the single-modality detector; once the detector is well trained, it needs only LiDAR data at inference. We design a novel framework to realize this approach: response distillation to focus on the crucial response samples and avoid background samples; sparse-voxel distillation to learn voxel semantics and relations from the estimated crucial voxels; fine-grained voxel-to-point distillation to better attend to the features of small objects; and instance distillation to further enhance deep-feature consistency. Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors and even surpasses the baseline LiDAR-image detector on the key NDS metric, filling 72% of the mAP gap between single- and multi-modality detectors.
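The response-distillation component can be sketched as a masked KL term: distill the multi-modality teacher's classification responses into the LiDAR-only student, but only on anchors where the teacher is confident, so easy background anchors are skipped. The confidence threshold and temperature below are assumed values, and the paper's actual selection of crucial samples differs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_anchors, n_cls = 6, 3

def softmax(x, t=1.0):
    e = np.exp(x / t - (x / t).max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

teacher_logits = rng.normal(size=(n_anchors, n_cls))  # LiDAR-image detector
student_logits = rng.normal(size=(n_anchors, n_cls))  # LiDAR-only detector

def response_distill(t_logits, s_logits, conf_thresh=0.4, temp=2.0):
    """KL(teacher || student) over crucial responses only: anchors where the
    teacher is confident, so background samples are avoided."""
    p_t, p_s = softmax(t_logits, temp), softmax(s_logits, temp)
    crucial = p_t.max(-1) > conf_thresh                # crude foreground filter
    if not crucial.any():
        return 0.0
    kl = (p_t[crucial] * (np.log(p_t[crucial]) - np.log(p_s[crucial]))).sum(-1)
    return float(kl.mean())

print(response_distill(teacher_logits, student_logits))
```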
Generalizing across different environments with the same task is critical for successfully applying visual reinforcement learning (RL) to real-world scenarios. However, visual distractions, which are common in real scenes, can harm the representations learned from high-dimensional observations in visual RL, thereby degrading generalization. To tackle this problem, we propose a novel approach, CRESP (Characteristic Reward Sequence Prediction), which extracts task-relevant information by learning the reward-sequence distribution (RSD), since the reward signal is task-relevant in RL and invariant to visual distractions. Specifically, to effectively capture task-relevant information through the RSD, CRESP introduces an auxiliary task, predicting the characteristic functions of the RSD, to learn task-relevant representations, because a high-dimensional distribution can be well approximated via its corresponding characteristic function. Experiments demonstrate that CRESP significantly improves generalization to unseen environments, outperforming several state-of-the-art methods on DeepMind Control tasks with different visual distractions.
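The auxiliary targets are easy to sketch: the characteristic function phi(u) = E[exp(i u . R)] of the reward-sequence distribution, estimated from sampled reward sequences at random arguments u and split into real and imaginary parts for regression. The shapes and sampling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, n_seq, n_u = 5, 64, 8    # horizon, sampled reward sequences, frequency vectors

# Hypothetical reward sequences collected under the current policy.
R = rng.normal(size=(n_seq, H))
U = rng.normal(size=(n_u, H))   # random arguments of the characteristic function

def cf_targets(R, U):
    """Empirical characteristic function of the reward-sequence distribution:
    phi(u) = E[exp(i u . R)], returned as (real, imaginary) parts."""
    phase = R @ U.T                      # (n_seq, n_u) values of u . R
    return np.cos(phase).mean(0), np.sin(phase).mean(0)

re, im = cf_targets(R, U)
# An auxiliary head would regress (re, im) from the encoder's representation,
# making the representation predictive of the task-relevant RSD.
print(re.shape, im.shape)   # (8,) (8,)
```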
Although deep reinforcement learning agents have achieved unprecedented success in recent years, the policies they learn can be brittle, failing to generalize to even slightly modified versions of their environments or to unfamiliar situations. The black-box nature of neural-network learning dynamics makes it impossible to audit trained deep agents and recover from such failures. In this paper, we propose a novel representation and learning approach that captures environment dynamics without using neural networks. It originates from the observation that, in games designed for people, the effect of an action can often be perceived as a local change in consecutive visual observations. Our algorithm is designed to extract such vision-based changes and condense them into a set of action-dependent descriptive rules, which we call "visual rewrite rules" (VRRs). We also present preliminary results from a VRR agent that can explore, expand its rule set, and solve a game by planning with its learned VRR world model. On several classic games, our non-deep agent demonstrates superior performance, extreme sample efficiency, and robust generalization ability compared with several mainstream deep agents.
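A toy version of rule extraction, assuming grid observations: diff consecutive frames, crop the changed region with a little context, and store an action-conditioned (pattern -> result) rewrite rule. The padding, hashing, and grid encoding are my assumptions, not the paper's exact representation.

```python
import numpy as np

def extract_rule(obs, next_obs, action, pad=1):
    """Condense the effect of one action into a local rewrite rule:
    the changed region before and after, as a (pattern -> result) pair."""
    changed = np.argwhere(obs != next_obs)
    if changed.size == 0:
        return None
    (r0, c0), (r1, c1) = changed.min(0), changed.max(0)
    r0, c0 = max(r0 - pad, 0), max(c0 - pad, 0)
    r1, c1 = r1 + pad + 1, c1 + pad + 1
    before = obs[r0:r1, c0:c1].copy()
    after = next_obs[r0:r1, c0:c1].copy()
    return (action, before.tobytes(), after)   # hashable rule key plus result

# Toy grid world: action "right" moves the agent (value 1) one cell right.
obs = np.zeros((4, 4), dtype=int); obs[2, 1] = 1
nxt = np.zeros((4, 4), dtype=int); nxt[2, 2] = 1

rules = {}
rule = extract_rule(obs, nxt, "right")
if rule is not None:
    rules[rule[:2]] = rule[2]   # (action, pattern) -> rewritten patch
print(len(rules))               # a one-rule VRR "world model" so far
```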
Transformer-based neural models are used in many AI applications. Training them is expensive, requiring large amounts of GPU resources and long durations, and it is challenging because typical data such as sentences have variable lengths and the computation pattern of Transformers is more complex than that of convolutional neural networks. Existing systems either focus only on model inference or optimize only BERT-like encoder models. In this paper, we present LightSeq2, a system for accelerating the training of general Transformer models on GPUs. We propose a series of GPU optimization techniques tailored to the specific computation flow and memory-access patterns of Transformer models. LightSeq2 supports many model architectures, including BERT (encoder-only), GPT (decoder-only), Transformer (encoder-decoder), and vision Transformers. Our experiments on various models and benchmarks show that LightSeq2 is consistently faster (1.4-3.5x) than previous systems on different GPUs. In particular, it achieves a 308% training speedup over existing systems on a large public machine-translation benchmark (WMT14 English-German).
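One concrete source of the variable-length overhead mentioned above is padding. The sketch below contrasts a padded batch with a packed layout in which token-wise kernels touch only real tokens; it is a generic illustration of the kind of waste such training systems attack, not code from LightSeq2.

```python
import numpy as np

# Variable-length batch: padding wastes compute and memory on GPUs.
lengths = [3, 7, 2, 5]
d = 4
seqs = [np.ones((L, d)) for L in lengths]

# Padded layout: (batch, max_len, d) plus an implicit mask -- the baseline.
max_len = max(lengths)
padded = np.zeros((len(seqs), max_len, d))
for i, s in enumerate(seqs):
    padded[i, : len(s)] = s

# Packed layout: concatenate real tokens and keep offsets, so token-wise
# kernels (e.g. the feed-forward block) touch no padding at all.
packed = np.concatenate(seqs, axis=0)            # (sum(lengths), d)
offsets = np.cumsum([0] + lengths)               # per-sequence boundaries

W = np.random.default_rng(0).normal(size=(d, d))
out = packed @ W                                 # one dense GEMM over real tokens
print(padded.size, packed.size)                  # 112 stored floats vs 68 useful
```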