This paper presents techniques for predicting a user's next intent with a concept knowledge graph. The system has been deployed on the web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG, an offline concept knowledge graph in the life-service domain that explicitly characterizes user intent by modeling users' historical behaviors, the rich content they interact with, and the relations between them. We further introduce a Transformer-based model that integrates expert rules from the knowledge graph to infer a user's next intent online. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.
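As a rough illustration of the idea, the minimal sketch below scores candidate next intents with a Transformer encoder over a user's behavior history and applies a boolean `rule_mask` standing in for expert rules from the knowledge graph; all names, dimensions, and the masking scheme are assumptions for illustration, not Alipay's actual model.

```python
import torch
import torch.nn as nn

class NextIntentModel(nn.Module):
    """Toy Transformer over a user's intent-ID history (illustrative only)."""
    def __init__(self, num_intents, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(num_intents, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_intents)

    def forward(self, history, rule_mask=None):
        # history: (batch, seq_len) intent IDs; rule_mask: (batch, num_intents) bool
        h = self.encoder(self.embed(history))
        logits = self.head(h[:, -1])           # score next intent from last position
        if rule_mask is not None:              # KG rules prune implausible intents
            logits = logits.masked_fill(~rule_mask, float("-inf"))
        return logits

model = NextIntentModel(num_intents=1000)
hist = torch.randint(0, 1000, (2, 16))
mask = torch.ones(2, 1000, dtype=torch.bool)
print(model(hist, mask).shape)  # torch.Size([2, 1000])
```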
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have demonstrated their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL with transformers (transformer-based RL, or TRL) in order to explore its development trajectory and future trends. We group existing developments into two categories, architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. Architecture-enhancement methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework; they model agents and environments much more precisely than deep RL methods, but they remain limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". Trajectory-optimization methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior-cloning framework, which makes it possible to extract policies from static datasets and fully exploit the transformer's long-sequence modeling capability. Given these advancements, extensions and challenges in TRL are reviewed and proposals for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
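To make the trajectory-optimization view concrete, here is a minimal Decision-Transformer-style sketch: trajectories are flattened into interleaved (return-to-go, state, action) tokens and a causally masked encoder predicts each action from the preceding tokens. The dimensions, token layout, and architecture are assumptions for illustration, not any surveyed paper's exact model.

```python
import torch
import torch.nn as nn

class TrajectorySequenceModel(nn.Module):
    """Sketch of RL as sequence modeling over (return-to-go, state, action) tokens."""
    def __init__(self, state_dim, act_dim, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B,T,1), states: (B,T,state_dim), actions: (B,T,act_dim)
        B, T, _ = states.shape
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)                  # interleave as R_1, s_1, a_1, R_2, ...
        causal = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.encoder(tokens, mask=causal)
        return self.predict_action(h[:, 1::3])   # predict a_t from the s_t position

model = TrajectorySequenceModel(state_dim=17, act_dim=6)
out = model(torch.rand(2, 10, 1), torch.rand(2, 10, 17), torch.rand(2, 10, 6))
print(out.shape)  # torch.Size([2, 10, 6])
```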
Acquiring high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. Annotation quality can be affected by factors such as the annotation instructions, the Human Intelligence Task (HIT) design, and the wages paid to annotators. To avoid potentially low-quality annotations that could mislead the evaluation of automatic summarization system outputs, we investigate recruiting high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations and obtain high-quality annotations while optimizing the use of resources. This paper can serve as a basis for recruiting qualified annotators for other challenging annotation tasks.
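One step of such a pipeline might look like the hypothetical filter below, which keeps only workers whose answers on a small gold-labelled qualification HIT agree sufficiently with expert labels; the gold items, threshold, and single-stage structure are illustrative, not the paper's actual pipeline.

```python
# Hypothetical qualification filter: keep workers whose answers on a
# gold-labelled qualification HIT agree well enough with expert labels.
GOLD = {"q1": "A", "q2": "C", "q3": "B"}

def passes_qualification(worker_answers, min_agreement=0.8):
    correct = sum(worker_answers.get(q) == a for q, a in GOLD.items())
    return correct / len(GOLD) >= min_agreement

candidates = {"w1": {"q1": "A", "q2": "C", "q3": "B"},
              "w2": {"q1": "B", "q2": "C", "q3": "A"}}
qualified = [w for w, ans in candidates.items() if passes_qualification(ans)]
print(qualified)  # ['w1']
```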
Reasoning, an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis and negotiation. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce the research with comparisons and summaries and provide systematic resources to help beginners. We also discuss potential reasons for the emergence of such reasoning abilities and highlight future research directions.
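A representative technique in this line of work is chain-of-thought prompting; the sketch below builds such a prompt with one worked exemplar whose intermediate steps are spelled out, followed by a new question. The exemplar text is illustrative and any LLM completion API could consume `prompt`.

```python
# Minimal chain-of-thought prompting sketch: one worked exemplar with
# explicit intermediate steps, then the actual question.
exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)
question = "Q: A baker makes 4 trays of 12 rolls and sells 20. How many are left?\nA:"
prompt = exemplar + question
print(prompt)
```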
Recently, the dominant DETR-based approaches have applied a central-concept spatial prior to accelerate the convergence of Transformer detectors. These methods gradually refine the reference points toward the centers of target objects and imbue object queries with the updated central reference information for spatially conditional attention. However, centralizing reference points may severely deteriorate queries' saliency and confuse detectors due to the indiscriminative spatial prior. To bridge the gap between the reference points of salient queries and Transformer detectors, we propose SAlient Point-based DETR (SAP-DETR), which treats object detection as a transformation from salient points to instance objects. In SAP-DETR, we explicitly initialize a query-specific reference point for each object query, gradually aggregate the points into an instance object, and then predict the distance from each side of the bounding box to these points. By rapidly attending to the query-specific reference region and other conditional extreme regions in the image features, SAP-DETR effectively bridges the gap between salient points and query-based Transformer detectors with markedly faster convergence. Our extensive experiments demonstrate that SAP-DETR converges 1.4 times faster while achieving competitive performance. Under the standard training scheme, SAP-DETR consistently improves upon SOTA approaches by 1.0 AP. Based on ResNet-DC-101, SAP-DETR achieves 46.9 AP.
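The box parameterization described above can be decoded as in the sketch below, which recovers a bounding box from a salient reference point and its four predicted side distances; the tensor layout is an assumption, not taken from the paper's code.

```python
import torch

def box_from_point_and_sides(points, sides):
    """Sketch of SAP-DETR-style decoding: recover (x1, y1, x2, y2) boxes from
    a salient reference point and predicted distances to the four box sides."""
    x, y = points.unbind(-1)                    # (N,) each
    left, top, right, bottom = sides.unbind(-1) # distances to each side
    return torch.stack([x - left, y - top, x + right, y + bottom], dim=-1)

pts = torch.tensor([[0.5, 0.4]])
dists = torch.tensor([[0.1, 0.2, 0.3, 0.1]])
print(box_from_point_and_sides(pts, dists))  # tensor([[0.4000, 0.2000, 0.8000, 0.5000]])
```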
We present FOLIO, a human-annotated, open-domain, logically complex and diverse dataset for natural language (NL) reasoning, equipped with first-order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises that serve as rules for deductively reasoning about the validity of each conclusion. The logical correctness of the premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. In addition to the primary NL reasoning task, the NL-FOL pairs in FOLIO automatically constitute a new NL-FOL translation dataset that uses FOL as the logical form. We systematically evaluate the FOL reasoning ability of medium-sized language models (BERT, RoBERTa) under fine-tuning and of large language models (GPT-NeoX, OPT, GPT-3, Codex) under few-shot prompting. For NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that GPT-3 davinci, one of the most capable publicly available large language models (LLMs), performs only slightly better than random on a subset of FOLIO and is especially poor at correctly predicting the truth values of False and Unknown conclusions. Our dataset and code are available at https://github.com/yale-lily/folio.
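To fix the data format in mind, the record below is an illustrative example in the spirit of the dataset, not an actual FOLIO entry: NL premises with parallel FOL annotations and a conclusion whose label is one of True / False / Unknown.

```python
# Illustrative FOLIO-style record (not an actual dataset entry).
example = {
    "premises": [
        ("All squirrels eat nuts.", "forall x (Squirrel(x) -> EatsNuts(x))"),
        ("Sam is a squirrel.",      "Squirrel(sam)"),
    ],
    "conclusion": ("Sam eats nuts.", "EatsNuts(sam)"),
    "label": "True",   # deducible from the premises
}
for nl, fol in example["premises"]:
    print(f"{nl:30s} | {fol}")
```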
Knowledge graphs (KGs) are an increasingly important infrastructure for many applications, yet they suffer from incompleteness. The KG completion (KGC) task automatically predicts missing facts from an incomplete KG. However, existing methods perform poorly in realistic scenarios. On the one hand, their performance degrades dramatically as the KG becomes sparser. On the other hand, their inference process is an untrustworthy black box. This paper proposes HOGRN, a novel explainable model for sparse KGC that injects high-order reasoning into a graph convolutional network. It not only improves generalization to mitigate the information-insufficiency problem but also provides interpretability while maintaining effectiveness and efficiency. Two main components are seamlessly integrated for joint optimization. First, the high-order reasoning component learns high-quality relation representations by capturing endogenous correlations among relations, which can reflect logical rules that justify a broader range of facts. Second, the entity-updating component leverages a weight-free graph convolutional network (GCN) to efficiently model the KG structure with interpretability. Unlike conventional methods, we perform entity aggregation and design composition-based attention in the relation space without additional parameters. This lightweight design makes HOGRN better suited to sparse settings. For evaluation, we conduct extensive experiments: HOGRN achieves impressive improvements on several sparse KGs (a 9% MRR gain on average). Further ablation and case studies verify the effectiveness of the main components. Our code will be released upon acceptance.
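The sketch below illustrates what a weight-free, composition-based entity update could look like: neighbors are composed with relations by element-wise product and combined with parameter-free attention scores. The composition and attention forms are assumptions for illustration, not HOGRN's actual equations.

```python
import torch

def weightfree_update(ent, rel, triples):
    """Sketch of a weight-free, composition-based entity update: no learned
    weight matrices, composition by element-wise product, dot-product attention."""
    new_ent = ent.clone()
    for h in range(ent.size(0)):
        nbrs = [(r, t) for (hh, r, t) in triples if hh == h]
        if not nbrs:
            continue
        msgs = torch.stack([ent[t] * rel[r] for r, t in nbrs])   # composition
        att = torch.softmax(msgs @ ent[h], dim=0)                # parameter-free scores
        new_ent[h] = ent[h] + att @ msgs                         # aggregate neighbors
    return new_ent

ent = torch.randn(4, 8)                    # 4 entities, 8-dim embeddings
rel = torch.randn(2, 8)                    # 2 relations
triples = [(0, 0, 1), (0, 1, 2), (3, 0, 2)]
print(weightfree_update(ent, rel, triples).shape)  # torch.Size([4, 8])
```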
We devise a 3D scene graph representation, Contact Graph+ (CG+), for efficient sequential task planning. This contact-graph-based representation abstracts scene layouts with succinct geometric information and valid robot-scene interactions via predicate-like attributes. Goal configurations, naturally specified on contact graphs, can be produced by a genetic algorithm with a stochastic optimization method. A task plan is then initialized by computing the graph editing distance (GED) between the initial contact graph and the goal configuration, which generates graph edit operations corresponding to possible robot actions. We finalize the task plan by imposing constraints that regulate the temporal feasibility of the graph edit operations, ensuring valid task and motion correspondences. In a series of simulations and experiments, robots successfully complete complex sequential rearrangement tasks that are difficult to specify using conventional planning languages such as the Planning Domain Definition Language (PDDL), demonstrating the high feasibility and potential of robot planning over contact graphs.
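The GED step can be sketched on ordinary labelled graphs as below; real contact graphs carry geometric and predicate-like attributes that this toy omits, and the scene itself is invented for illustration.

```python
# Sketch of the GED step via networkx on a toy scene.
import networkx as nx

initial = nx.Graph([("table", "cup"), ("table", "plate")])
goal = nx.Graph([("table", "plate"), ("shelf", "cup")])

# Each edit operation (node/edge insertion, deletion, substitution) maps to a
# candidate robot action such as pick-and-place.
distance = nx.graph_edit_distance(initial, goal)
print(distance)
```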
Image coding for machines (ICM) aims to compress images for AI-task analysis rather than to satisfy human perception. Learning a feature that is both general (for AI tasks) and compact (for compression) is crucial to its success. In this paper, we attempt to develop an ICM framework by learning universal features while also considering compression; we name such features omnipotent features and the corresponding framework Omni-ICM. Considering that self-supervised learning (SSL) improves feature generalization, we integrate it with the compression task into the Omni-ICM framework to learn omnipotent features. However, coordinating semantic modeling in SSL with redundancy removal in compression is non-trivial, so we design a novel information-filtering (IF) module that cooperates instance discrimination with entropy minimization to adaptively drop information weakly related to AI tasks (e.g., some texture redundancy). Unlike previous task-specific solutions, Omni-ICM directly supports AI-task analysis on the learned omnipotent features without joint training or extra transformations. Albeit simple and intuitive, Omni-ICM significantly outperforms existing traditional and learning-based codecs on multiple fundamental vision tasks.
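The sketch below combines the two ingredients named above into one objective: an InfoNCE instance-discrimination term over two views plus an entropy/rate term that penalizes bits, so weakly task-related information is dropped. The exact form, weighting `lam`, and temperature `tau` are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def if_module_loss(z1, z2, likelihoods, lam=0.01, tau=0.1):
    """Sketch of an information-filtering objective: instance discrimination
    (InfoNCE over two views) plus an estimated-rate term in bits."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                       # cosine similarities
    targets = torch.arange(z1.size(0))               # matching views are positives
    contrastive = F.cross_entropy(logits, targets)   # instance discrimination
    rate = -torch.log2(likelihoods).mean()           # estimated bits per symbol
    return contrastive + lam * rate

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)    # two views of 8 images
likelihoods = torch.rand(8, 256).clamp(min=1e-6)     # toy entropy-model outputs
print(if_module_loss(z1, z2, likelihoods))
```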
We present a robot learning and planning framework that produces an effective tool-use strategy with the least joint effort, capable of handling objects different from those used in training. Leveraging a finite element method (FEM)-based simulator that reproduces fine-grained, continuous visual and physical effects given observed tool-use events, the essential physical properties contributing to those effects are identified through the proposed iterative deepening symbolic regression (IDSR) algorithm. We further devise an optimal-control-based motion planning scheme that integrates robot- and tool-specific kinematics and dynamics to produce an effective trajectory realizing the learned strategy. In simulation, we demonstrate that the proposed framework can produce more effective tool-use strategies, drastically different from the observed ones, in two example tasks.
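As a toy picture of the iterative-deepening idea, the sketch below enumerates candidate expressions of growing depth over observed (property, effect) data and keeps the first one whose fit error falls below a tolerance; the operator set and data are invented for illustration, and the paper's IDSR search is far richer.

```python
# Toy iterative deepening symbolic regression sketch (illustrative only).
import itertools, math

xs = [0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]                      # "observed" physical effect

UNARY = {"id": lambda v: v, "sq": lambda v: v * v, "sqrt": math.sqrt}

def idsr(max_depth=3, tol=1e-9):
    for depth in range(1, max_depth + 1):     # iterative deepening over depth
        for ops in itertools.product(UNARY, repeat=depth):
            def f(v, ops=ops):
                for op in ops:
                    v = UNARY[op](v)
                return v
            err = sum((f(x) - y) ** 2 for x, y in zip(xs, ys))
            if err < tol:                     # first expression that fits wins
                return ops
    return None

print(idsr())  # ('sq',) -- recovered "effect = property squared"
```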