Large-scale pre-trained language models (PLMs) bring new opportunities to challenging problems, especially those that require high-level intelligence, such as math word problems (MWPs). However, directly applying existing PLMs to MWPs can fail because the generation process lacks sufficient supervision and therefore lacks the fast adaptivity of humans. We notice that human reasoning follows a dual-process framework consisting of an immediate-reaction system (System 1) and a deliberate reasoning system (System 2), where the overall reasoning outcome is determined by their interaction. This inspires us to develop a cooperative-reasoning-induced PLM for solving MWPs, called Cooperative Reasoning (CoRe), resulting in a human-like reasoning architecture with System 1 as the generator and System 2 as the verifier. In our approach, the generator is responsible for generating reasoning paths, and the verifiers supervise their evaluation in order to provide reliable feedback to the generator. We evaluate our CoRe framework on several mathematical reasoning datasets and achieve consistent improvements over state-of-the-art methods, up to a 9.8% increase over the best baselines.
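The abstract includes no code, but the generate-then-verify interaction it describes can be illustrated with a minimal sketch. The functions below (`generate_paths`, `score_path`, and the dummy stand-ins) are hypothetical placeholders for the System-1 generator PLM and the System-2 verifier, not the authors' implementation.

```python
# Hypothetical sketch of a generate-then-verify loop (not the authors' code).
import random
from typing import Callable, List, Tuple

def solve(question: str,
          generate_paths: Callable[[str, int], List[str]],
          score_path: Callable[[str, str], float],
          num_samples: int = 8) -> Tuple[str, float]:
    """Sample several reasoning paths and return the one the verifier trusts most."""
    paths = generate_paths(question, num_samples)           # system 1: fast proposals
    scored = [(p, score_path(question, p)) for p in paths]  # system 2: careful checking
    best_path, best_score = max(scored, key=lambda x: x[1])
    return best_path, best_score

# Dummy stand-ins so the sketch runs end to end.
def dummy_generator(question: str, k: int) -> List[str]:
    return [f"step-by-step attempt #{i} for: {question}" for i in range(k)]

def dummy_verifier(question: str, path: str) -> float:
    return random.random()  # a real verifier would output a learned plausibility score

if __name__ == "__main__":
    print(solve("If Tom has 3 apples and buys 5 more, how many does he have?",
                dummy_generator, dummy_verifier))
```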
Nowadays, foundation models have become one of the fundamental infrastructures of artificial intelligence, paving the way toward general intelligence. However, reality presents two urgent challenges: existing foundation models are dominated by the English-language community, and users are often given limited resources and therefore cannot always use foundation models. To support the development of the Chinese-language community, we introduce an open-source project named Fengshenbang, led by the research center for Cognitive Computing and Natural Language (CCNL). Our project has comprehensive capabilities, including large pre-trained models, user-friendly APIs, benchmarks, datasets, and more. We package all of these into three sub-projects: the Fengshenbang Models, the Fengshen Framework, and the Fengshen Benchmark. The open-source roadmap of Fengshenbang aims to re-evaluate the open-source community of Chinese pre-trained large models and to drive the development of the entire Chinese large-model community. We also hope to build a user-centered open-source ecosystem that allows individuals to access the models they need according to their computing resources. Furthermore, we invite companies, universities, and research institutions to collaborate with us in building this large-scale open-source model ecosystem. We hope that this project will become the foundation of Chinese cognitive intelligence.
This report describes Erlangshen, a pre-trained language model whose propensity-corrected loss took first place in the CLUE Semantic Matching Challenge. In the pre-training stage, we build a dynamic masking strategy on top of masked language modeling (MLM) with whole-word masking. Furthermore, guided by the specific structure of the dataset, the pre-trained Erlangshen applies a propensity-corrected loss (PCL) in the fine-tuning stage. Overall, we obtain an F1 score of 72.54 and an accuracy of 78.90 on the test set. Our code is publicly available at: https://github.com/idea-ccnl/fengshenbang-lm/tree/hf-ds/fengshen/examples/clue_sim.
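As a rough illustration of the dynamic whole-word masking mentioned for the pre-training stage, the sketch below masks all sub-word pieces of a sampled word and re-samples the mask on every call; the tokenization, masking probability, and replacement scheme are simplified assumptions rather than the released Erlangshen code.

```python
# A minimal sketch of dynamic whole-word masking, assuming the input has already
# been segmented into words and split into sub-word pieces.
import random
from typing import List

MASK_TOKEN = "[MASK]"

def whole_word_mask(word_pieces: List[List[str]], mask_prob: float = 0.15) -> List[str]:
    """Mask all sub-word pieces of a sampled word; re-sampling each call makes it 'dynamic'."""
    masked = []
    for pieces in word_pieces:
        if random.random() < mask_prob:
            masked.extend([MASK_TOKEN] * len(pieces))  # mask the whole word, not single pieces
        else:
            masked.extend(pieces)
    return masked

# Example: a short Chinese sentence segmented into words, each split into pieces.
print(whole_word_mask([["自", "然"], ["语", "言"], ["处", "理"]], mask_prob=0.5))
```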
Natural language understanding (NLU) suffers from diverse output formats even when pre-trained language models share a semantic encoder. In this paper, we propose UBERT, a unified bidirectional language understanding model based on the BERT framework, which can universally model the training objectives of different NLU tasks through a biaffine network. Specifically, UBERT encodes prior knowledge from multiple aspects and uniformly builds learning representations across multiple NLU tasks, which helps enhance its ability to capture common semantic understanding. By using the biaffine network to model pairs of start and end positions in the original text, various classification and extraction structures can be converted into a universal span-encoding approach. Experiments show that UBERT achieves state-of-the-art performance on 7 NLU tasks, 14 datasets, and zero-shot settings, and unifies a wide range of information extraction and linguistic reasoning tasks.
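The biaffine scoring of start/end position pairs can be sketched as below. Layer sizes, the added bias dimension, and the class `BiaffineSpanScorer` are illustrative assumptions in the spirit of the abstract, not UBERT's actual architecture.

```python
# A hedged sketch of biaffine span scoring over start/end token states.
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    def __init__(self, hidden: int = 768, proj: int = 256, num_labels: int = 2):
        super().__init__()
        self.start_proj = nn.Linear(hidden, proj)
        self.end_proj = nn.Linear(hidden, proj)
        # Bilinear form with a bias dimension appended to both projections.
        self.bilinear = nn.Parameter(torch.randn(num_labels, proj + 1, proj + 1) * 0.02)

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, seq_len, hidden), e.g. the output of a BERT encoder.
        s = torch.relu(self.start_proj(encoder_states))
        e = torch.relu(self.end_proj(encoder_states))
        ones = torch.ones(*s.shape[:-1], 1, device=s.device)
        s = torch.cat([s, ones], dim=-1)   # (batch, seq, proj + 1)
        e = torch.cat([e, ones], dim=-1)
        # Score every (start i, end j) pair for every label.
        return torch.einsum("bid,ldk,bjk->blij", s, self.bilinear, e)

scores = BiaffineSpanScorer()(torch.randn(2, 16, 768))
print(scores.shape)  # torch.Size([2, 2, 16, 16])
```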
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT (cone-beam computed tomography) reconstruction that requires no external training data. Specifically, the desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully connected deep neural network. We synthesize projections discretely and train the network by minimizing the error between real and synthesized projections. A learning-based encoder with hash encoding is adopted to help the network capture high-frequency details. This encoder outperforms the commonly used frequency-domain encoder in both performance and efficiency, because it exploits the smoothness and sparsity of human organs. Experiments have been conducted on human-organ and phantom datasets. The proposed method achieves state-of-the-art accuracy and takes a reasonably short computation time.
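A heavily simplified sketch of the core idea, attenuation as a coordinate MLP with a single-level, nearest-voxel learned hash encoding trained against measured projections, is shown below; the hash function, resolution, ray geometry, and loss are placeholders, not the paper's implementation.

```python
# Simplified sketch: attenuation field = MLP over hash-encoded 3D coordinates,
# fitted by matching synthesized line integrals to measured projections.
import torch
import torch.nn as nn

class HashEncodedField(nn.Module):
    def __init__(self, table_size: int = 2 ** 16, feat_dim: int = 8, res: int = 64):
        super().__init__()
        self.table = nn.Embedding(table_size, feat_dim)
        self.res = res
        self.primes = torch.tensor([1, 2654435761, 805459861])
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Softplus())  # attenuation >= 0

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) in [0, 1]^3; nearest-voxel hashing (no trilinear blend, for brevity).
        idx = (xyz * (self.res - 1)).long()
        h = (idx * self.primes.to(idx.device)).sum(-1) % self.table.num_embeddings
        return self.mlp(self.table(h)).squeeze(-1)

field = HashEncodedField()
# One "ray": sample points along it and integrate attenuation (Beer-Lambert style).
points = torch.rand(128, 3)
predicted_projection = field(points).mean() * 1.0  # mean * ray length approximates the integral
measured_projection = torch.tensor(0.3)            # would come from the CBCT scan
loss = (predicted_projection - measured_projection) ** 2
loss.backward()
```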
Network completion is a harder problem than link prediction because it tries to infer not only missing links but also missing nodes. Different methods have been proposed to solve this problem, but few of them employ structural information, namely the similarity of local connection patterns. In this paper, we propose a model named C-GIN that captures local structural patterns from the observed part of a network, based on a graph auto-encoder framework equipped with a Graph Isomorphism Network (GIN) model, and generalizes these patterns to complete the whole graph. Experiments and analyses on synthetic and real-world networks from different domains show that C-GIN achieves competitive performance while requiring less information, and in most cases obtains higher accuracy than baseline prediction models. We further propose a network-structure-based metric, the Reachable Clustering Coefficient (CC). Experiments show that our model performs better on networks with a higher reachable CC.
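A dense, plain-PyTorch sketch of a GIN-style graph auto-encoder conveys the general recipe: a GIN-like encoder embeds the observed part of the graph and an inner-product decoder scores candidate links. All layer choices here are illustrative and differ from the paper's C-GIN.

```python
# Illustrative GIN-style graph auto-encoder on a dense adjacency matrix.
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # GIN update: MLP((1 + eps) * x_i + sum of neighbour features).
        return self.mlp((1 + self.eps) * x + adj @ x)

class GraphAutoEncoder(nn.Module):
    def __init__(self, dim: int = 32, layers: int = 2):
        super().__init__()
        self.layers = nn.ModuleList([GINLayer(dim) for _ in range(layers)])

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x, adj)
        return torch.sigmoid(x @ x.t())  # reconstructed edge probabilities

n, d = 10, 32
adj_observed = (torch.rand(n, n) > 0.7).float()
adj_observed = ((adj_observed + adj_observed.t()) > 0).float()  # symmetrise
features = torch.randn(n, d)
recon = GraphAutoEncoder(d)(features, adj_observed)
loss = nn.functional.binary_cross_entropy(recon, adj_observed)
loss.backward()
print(recon.shape)
```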
This work focuses on unsupervised representation learning in person re-identification (ReID). Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of the same image. However, traditional data augmentation may introduce undesirable distortions of identity features, which is not always favorable in id-sensitive ReID tasks. In this paper, we propose to replace traditional data augmentation with a generative adversarial network (GAN) that is targeted at generating augmented views for contrastive learning. A 3D-mesh-guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features. Departing from previous GAN-based ReID methods that only operate in the id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features. We further propose specific contrastive losses that help our network learn invariance from id-unrelated and id-related augmentations. By jointly training the generative and contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.
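The invariance objective between an image and its GAN-generated view can be illustrated with a generic InfoNCE loss, as in the sketch below; the paper's specific id-related and id-unrelated contrastive losses are more elaborate than this.

```python
# Generic InfoNCE loss between embeddings of an image and its generated view.
import torch
import torch.nn.functional as F

def info_nce(z_orig: torch.Tensor, z_aug: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # z_orig, z_aug: (batch, dim) embeddings of an image and its GAN-augmented view.
    z_orig = F.normalize(z_orig, dim=-1)
    z_aug = F.normalize(z_aug, dim=-1)
    logits = z_orig @ z_aug.t() / temperature  # cosine similarities
    targets = torch.arange(z_orig.size(0))     # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128, requires_grad=True),
                torch.randn(8, 128, requires_grad=True))
loss.backward()
print(loss.item())
```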
This paper proposes a novel self-supervised Cut-and-Paste GAN that performs foreground object segmentation and generates realistic composite images without manual annotations. We accomplish this goal with a simple yet effective self-supervised approach coupled with a U-Net-based discriminator. The proposed method extends the ability of standard discriminators to learn not only global data representations via real/fake classification but also semantic and structural information through pseudo-labels created by the self-supervised task. The proposed method empowers the generator to create meaningful masks by forcing it to learn informative per-pixel as well as global image feedback from the discriminator. Our experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods on standard benchmark datasets.
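The cut-and-paste compositing step itself can be sketched as an alpha blend of a foreground crop onto a background, which the U-Net-style discriminator would then judge; shapes and the mask source are illustrative assumptions.

```python
# Minimal sketch of cut-and-paste compositing with a generator-produced mask.
import torch

def composite(foreground: torch.Tensor, background: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # foreground, background: (batch, 3, H, W); mask: (batch, 1, H, W) in [0, 1].
    return mask * foreground + (1.0 - mask) * background

fg, bg = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
mask = torch.rand(2, 1, 64, 64)  # would come from the mask generator
fake = composite(fg, bg, mask)   # fed to the discriminator as a "fake" sample
print(fake.shape)
```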
With the rapid deployment of graph neural network (GNN) based techniques in a wide range of applications such as link prediction, node classification, and graph classification, the explainability of GNNs has become an indispensable component of predictive and trustworthy decision-making. It is therefore critical to explain why a GNN makes particular predictions if those predictions are to be trusted in many applications. Several GNN explainers have been proposed recently; however, they fail to generate accurate and realistic explanations. To mitigate these limitations, we propose GANExplainer, which is based on the generative adversarial network (GAN) architecture. GANExplainer is composed of a generator that creates explanations and a discriminator that assists the generator's training. We investigate the explanation accuracy of our model by comparing the performance of GANExplainer with other state-of-the-art methods. Our empirical results on synthetic datasets indicate that GANExplainer improves explanation accuracy by up to 35% compared to its alternatives.
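Since the abstract leaves the architecture unspecified, the following is only a speculative sketch of one adversarial step: a toy generator proposes a soft edge mask as the "explanation" and a toy discriminator scores the masked graph.

```python
# Speculative one-step sketch of an adversarial explainer over a toy graph.
import torch
import torch.nn as nn

n = 8                                            # number of nodes in a toy graph
adj = (torch.rand(n, n) > 0.6).float()

generator = nn.Sequential(nn.Linear(n * n, 64), nn.ReLU(), nn.Linear(64, n * n), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(n * n, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

edge_mask = generator(adj.flatten()).view(n, n)  # soft explanation mask over edges
explained = adj * edge_mask                      # keep only the "explaining" edges
d_fake = discriminator(explained.flatten())
g_loss = -torch.log(d_fake + 1e-8).mean()        # generator tries to fool the discriminator
g_loss.backward()
```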
This work presents an alternative approach to query expansion (QE) that uses a generative adversarial network (GAN) to enhance the effectiveness of information search in e-commerce. We propose a modified QE conditional GAN (mQE-CGAN) framework, which expands a query with a synthetically generated query that conveys semantic information drawn from the text input. We train a sequence-to-sequence transformer model as the generator to produce keywords and use a recurrent neural network model as the discriminator to classify the adversarial output of the generator. Within the modified CGAN framework, various forms of semantic insight gathered from the query-document corpus are introduced into the generation process. We leverage these insights as conditions for the generator model and discuss their effectiveness for the query expansion task. Our experiments demonstrate that using condition structures within the mQE-CGAN framework can increase the semantic similarity between generated sequences and reference documents by up to nearly 10% compared to baseline models.
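As a hedged sketch of the discriminator side, the toy module below uses a GRU to read an embedded expansion together with a corpus-derived condition vector and predict whether the expansion is real or generated; dimensions and the conditioning scheme are assumptions, not the paper's.

```python
# Toy conditional discriminator for query expansion: GRU over token embeddings
# concatenated with a broadcast condition vector.
import torch
import torch.nn as nn

class ConditionalDiscriminator(nn.Module):
    def __init__(self, vocab: int = 1000, emb: int = 64, cond: int = 16, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb + cond, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, token_ids: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len); condition: (batch, cond), broadcast over time steps.
        x = self.embed(token_ids)
        cond = condition.unsqueeze(1).expand(-1, x.size(1), -1)
        _, h = self.rnn(torch.cat([x, cond], dim=-1))
        return torch.sigmoid(self.head(h[-1]))  # probability the expansion is "real"

disc = ConditionalDiscriminator()
prob = disc(torch.randint(0, 1000, (4, 12)), torch.randn(4, 16))
print(prob.shape)  # torch.Size([4, 1])
```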