We focus on the task of creating a reinforcement learning agent that is inherently explainable: one that can produce immediate local explanations by thinking out loud while performing a task, and post-hoc causal explanations by analyzing entire trajectories after the fact. This Hierarchically Explainable Reinforcement Learning agent (HEX-RL) operates in Interactive Fiction, text-based game environments in which the agent perceives and acts upon the world using textual natural language. These games are usually structured as puzzles or quests with long-term dependencies, in which an agent must complete a sequence of actions to succeed, providing ideal environments for testing an agent's ability to explain its actions. Our agent is designed to treat explainability as a first-class citizen, using an extracted symbolic knowledge-graph-based state representation coupled with a hierarchical graph attention mechanism that points to the facts in the internal graph representation that most influenced the choice of actions. Experiments show that this agent provides significantly improved explanations over strong baselines, as rated by human participants generally unfamiliar with the environment, while also matching state-of-the-art task performance.
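The coupling of a knowledge-graph state with attention that doubles as an explanation can be illustrated with a minimal sketch. Everything here is assumed for illustration (the toy embedding, the example facts, a single flat attention layer); HEX-RL's actual mechanism is hierarchical and learned end to end.

```python
import numpy as np

def embed(text: str, dim: int = 32) -> np.ndarray:
    """Toy deterministic text embedding (stand-in for a learned encoder)."""
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

# Knowledge-graph facts extracted from text observations: (subject, relation, object).
facts = [
    ("you", "in", "kitchen"),
    ("egg", "on", "table"),
    ("door", "is", "locked"),
    ("key", "in", "drawer"),
]

fact_vecs = np.stack([embed(" ".join(f)) for f in facts])
query = embed("action: open drawer")          # context for the candidate action

# Scaled dot-product attention over facts; the weights double as an explanation.
scores = fact_vecs @ query / np.sqrt(query.size)
weights = np.exp(scores - scores.max())
weights /= weights.sum()

for w, f in sorted(zip(weights, facts), reverse=True):
    print(f"{w:.2f}  {f}")   # top-weighted facts = facts that most influenced the action
```

The same weights that shape the state representation can be read off, sorted, to name the facts behind a decision.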
Reinforcement Learning (RL) is a popular machine learning paradigm in which intelligent agents interact with an environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results, the deep neural network backbone is widely deemed a black box that prevents practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential. To alleviate this issue, a large volume of literature has been devoted to shedding light on the inner workings of intelligent agents, by constructing intrinsic interpretability or post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy in which prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to improve the learning efficiency and performance of agents, a kind of method often ignored in the XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open source codes are collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.
Taking background knowledge into account as context has always been an important part of solving tasks that involve natural language. One representative example of such tasks is text-based games, where players need to make decisions based on both the description text previously shown in the game and their own background knowledge about the language and common sense. In this work, we investigate not simply supplying common sense, as seen in prior research, but also its effective usage. We assume that the parts of the environment states that differ from common sense should constitute one of the grounds for action selection. We propose a novel agent, DiffG-RL, which constructs a Difference Graph that organizes the environment states and common sense by means of interactive objects with a dedicated graph encoder. DiffG-RL also contains a framework for extracting the appropriate amount and representation of common sense from the source to support the construction of the graph. We validate DiffG-RL in experiments with text-based games that require common sense and show that it outperforms baselines by 17% in scores. The code is available at https://github.com/ibm/diffg-rl
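A minimal sketch of the difference-graph idea follows; the triple format, the matching rule, and the example facts are assumptions for illustration, not DiffG-RL's actual graph encoder.

```python
# Minimal sketch of a "Difference Graph": keep the environment facts that
# conflict with common sense for the same (object, relation) pair, since
# those differences are the proposed grounds for action selection.

commonsense = {("apple", "located_at"): "kitchen", ("knife", "located_at"): "kitchen"}
environment = {("apple", "located_at"): "garden", ("knife", "located_at"): "kitchen"}

difference_graph = {
    (obj, rel): (commonsense[(obj, rel)], env_val)
    for (obj, rel), env_val in environment.items()
    if commonsense.get((obj, rel)) not in (None, env_val)
}

print(difference_graph)
# {('apple', 'located_at'): ('kitchen', 'garden')} -> the apple is NOT where
# common sense expects it, so moving it is likely relevant to the task.
```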
Automated storytelling has long captured the attention of researchers because of the ubiquity of narratives in everyday life. However, maintaining coherence and staying on topic toward a specific ending remain challenging when generating narratives with neural language models. In this paper, we introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress. A reader model is what a human reader believes about the concepts, entities, and relations of the fictional story world. We show how an explicit reader model, represented as a knowledge graph, affords story coherence and provides controllability in the form of achieving a given story-world goal. Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines on dimensions including plot plausibility and staying on topic. Our system also outperforms outline-guided story generation baselines in composing given concepts without ordering.
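One way to picture the reader model is as a growing set of believed triples checked against a story-world goal. A minimal sketch, with hand-supplied triples standing in for a learned information-extraction step:

```python
# Sketch: a reader model as a set of (subject, relation, object) beliefs that
# grows as the story unfolds; generation can stop once the goal triples hold.

reader_model: set[tuple[str, str, str]] = set()
goal = {("knight", "has", "sword"), ("knight", "at", "castle")}

def update(model, extracted_triples):
    """Merge triples extracted from the newest story sentence (extraction
    itself would be done by an IE model; here the triples are given)."""
    model |= set(extracted_triples)
    return model

story_steps = [
    [("knight", "at", "village")],
    [("knight", "has", "sword")],
    [("knight", "at", "castle")],
]
for triples in story_steps:
    update(reader_model, triples)
    if goal <= reader_model:       # all goal beliefs now hold
        print("story can end:", reader_model)
        break
```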
The advent of large pre-trained generative language models has provided a common framework for AI story generation: sampling from the model to create sequences that continue a story. However, sampling alone is insufficient for story generation; in particular, it is hard to direct a language model to create stories that reach a specific goal event. We present two automated techniques grounded in deep reinforcement learning and reward shaping to control the plot of computer-generated stories. The first utilizes proximal policy optimization to fine-tune an existing transformer-based language model to generate text continuations that are also goal-seeking. The second extracts a knowledge graph from the unfolding story, which is used by a policy network with graph attention to select a candidate continuation generated by the language model. We report automated metrics on how often stories achieve the given goal event, as well as human-participant rankings of coherence and overall story quality compared to baselines and ablations.
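The reward-shaping intuition behind both techniques can be sketched as scoring candidate continuations by how much closer they move the story to the goal event; the word-overlap distance below is a toy stand-in for the paper's learned and graph-based scoring:

```python
# Sketch: reward candidate continuations by proximity to a goal event,
# then pick the best candidate. A real system would sample candidates
# from a language model and use a trained scorer.

goal_event = "the dragon is defeated"

def distance_to_goal(sentence: str) -> float:
    goal_words = set(goal_event.split())
    overlap = len(goal_words & set(sentence.lower().split()))
    return 1.0 - overlap / len(goal_words)   # 0 = goal reached

candidates = [                     # would come from sampling a language model
    "the knight sharpened his sword",
    "the knight struck and the dragon is defeated",
]
rewards = [1.0 - distance_to_goal(c) for c in candidates]
best = candidates[max(range(len(candidates)), key=rewards.__getitem__)]
print(best)
```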
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach lack the ability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions or actions opaque and makes them difficult to trust in safety-critical applications. The recent stance on the explainability of AI systems has witnessed several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating the agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for realizing effective goal-driven explainable agents and robots.
Transformer, originally devised for natural language processing, has also attained significant success in computer vision. Thanks to its superior expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have manifested their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL with transformers (transformer-based RL, or TRL), in order to explore its development trajectory and future trends. We group existing developments into two categories, architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. Architecture-enhancement methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, modeling agents and environments much more precisely than deep RL methods, but they remain limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". Trajectory-optimization methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which makes it possible to extract policies from static datasets and fully exploit the transformer's long-sequence modeling capability. Given these advancements, extensions and challenges in TRL are reviewed and proposals for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
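The trajectory-optimization view can be made concrete with a sketch of how one trajectory is flattened into a single sequence for a transformer, in the Decision-Transformer style of interleaving returns-to-go, states, and actions (the layout below is illustrative):

```python
# Sketch: flatten an RL trajectory into a single token-like sequence
# (return-to-go, state, action, ...) so a sequence model can be trained
# on it under behavior cloning, as in trajectory-optimization TRL.

trajectory = [   # (state, action, reward) tuples from a static dataset
    ("s0", "a0", 1.0),
    ("s1", "a1", 0.0),
    ("s2", "a2", 2.0),
]

sequence = []
rewards = [r for _, _, r in trajectory]
for t, (s, a, _) in enumerate(trajectory):
    return_to_go = sum(rewards[t:])          # conditioning signal
    sequence += [("R", return_to_go), ("S", s), ("A", a)]

print(sequence)
# [('R', 3.0), ('S', 's0'), ('A', 'a0'), ('R', 2.0), ('S', 's1'), ...]
```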
Text-based games (TBGs) are complex environments that allow users or computer agents to interact through text and achieve game goals. Building goal-oriented computer agents for text-based games is challenging, especially when we use step-wise feedback as the model's only textual input. Moreover, it is difficult for agents to evaluate actions of flexible length and form drawn from a much larger text input space. In this paper, we provide an extensive analysis of deep learning methods applied to the field of text-based games.
The use of deep reinforcement learning (DRL) schemes has increased dramatically since they were first introduced in 2015. Despite being used in many different applications, they still suffer from a lack of explainability, which has limited the adoption of DRL solutions by researchers and the general public. To address this problem, the field of explainable artificial intelligence (XAI) has emerged, comprising a variety of methods that aim to open the DRL black box, ranging from interpretable symbolic decision trees to numerical methods such as Shapley values. This review examines which methods are being used and in which applications, in order to identify which models are best suited to each application and whether any methods are currently underutilized.
Text-based games present a unique class of sequential decision making problem in which agents interact with a partially observable, simulated environment via actions and observations conveyed through natural language. Such observations typically include instructions that, in a reinforcement learning (RL) setting, can directly or indirectly guide a player towards completing reward-worthy tasks. In this work, we study the ability of RL agents to follow such instructions. We conduct experiments that show that the performance of state-of-the-art text-based game agents is largely unaffected by the presence or absence of such instructions, and that these agents are typically unable to execute tasks to completion. To further study and address the task of instruction following, we equip RL agents with an internal structured representation of natural language instructions in the form of Linear Temporal Logic (LTL), a formal language that is increasingly used for temporally extended reward specification in RL. Our framework both supports and highlights the benefit of understanding the temporal semantics of instructions and of measuring progress towards the achievement of such temporally extended behaviour. Experiments with 500+ games in TextWorld demonstrate the superior performance of our approach.
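The standard machinery for tracking such instructions is LTL progression: after each step, the formula is rewritten against the propositions that just became true, leaving the remaining obligation. A minimal sketch, handling only a tiny fragment with an assumed tuple encoding (this is not the paper's implementation):

```python
# Sketch of LTL progression. Formulas are nested tuples; only "and" and
# "eventually" are handled, and progression of "or" results is omitted.

def progress(formula, true_props: set):
    if isinstance(formula, str):                    # atomic proposition
        return formula in true_props
    op = formula[0]
    if op == "and":
        l = progress(formula[1], true_props)
        r = progress(formula[2], true_props)
        if l is False or r is False: return False
        if l is True: return r
        if r is True: return l
        return ("and", l, r)
    if op == "eventually":                          # F phi == phi or X F phi
        inner = progress(formula[1], true_props)
        if inner is True: return True
        return formula if inner is False else ("or", inner, formula)
    raise ValueError(f"unsupported operator: {op}")

# "eventually pick_key, and eventually open_door" (in any order)
task = ("and", ("eventually", "pick_key"), ("eventually", "open_door"))
task = progress(task, {"pick_key"})   # after picking up the key...
print(task)                            # ('eventually', 'open_door') remains
```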
Reinforcement learning (RL) is notoriously difficult in long-horizon, sparse-reward tasks, which require a large number of training steps. A standard solution to speed up the process is to leverage additional reward signals, shaping them to better guide the learning process. In the context of language-conditioned RL, the abstraction and generalization properties of language input provide opportunities for more efficient ways of shaping the reward. In this paper, we leverage this idea and propose an automated reward-shaping method in which the agent extracts auxiliary objectives from the general language goal. These auxiliary objectives use a question generation (QG) and question answering (QA) system: they consist of questions that lead the agent to try to reconstruct partial information about the global goal using its own trajectory. When it succeeds, it receives an intrinsic reward proportional to its confidence in its answer. This incentivizes the agent to generate trajectories that unambiguously explain various aspects of the general language goal. Our experimental study shows that this approach, which does not require engineers to intervene in designing the auxiliary objectives, improves sample efficiency by effectively guiding exploration.
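A minimal sketch of the QG/QA shaping loop, where the keyword-matching QA function is a toy stand-in for a trained QA model and the questions are assumed to come from a QG system:

```python
# Sketch: QG/QA-based reward shaping. Questions derived from the language
# goal are answered from the agent's own trajectory; a reconstructed answer
# yields an intrinsic reward proportional to the QA confidence.

goal = "put the red ball on the table"
questions = [("What object?", "red ball"), ("Where?", "table")]  # from a QG system

def qa_confidence(question: str, answer: str, trajectory: list[str]) -> float:
    """Toy QA: confidence = fraction of answer words found in the trajectory
    (the question itself is unused in this toy stand-in)."""
    words = answer.split()
    seen = " ".join(trajectory)
    return sum(w in seen for w in words) / len(words)

trajectory = ["pick up red ball", "walk to table", "drop ball"]
intrinsic_reward = sum(qa_confidence(q, a, trajectory) for q, a in questions)
print(intrinsic_reward)   # higher when the trajectory 'explains' the goal
```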
Text adventure games pose unique challenges for reinforcement learning methods due to their combinatorially large action spaces and sparse rewards. The interplay of these two factors is particularly demanding, because large action spaces require extensive exploration while sparse rewards provide limited feedback. This work proposes tackling the explore-versus-exploit dilemma with a multi-stage approach that explicitly disentangles the two strategies within each episode. Our algorithm, called eXploit-Then-eXplore (XTX), begins each episode with an exploitation policy that imitates a set of promising trajectories from the past, and then switches to an exploration policy aimed at discovering novel actions that lead to unseen parts of the state space. This policy decomposition lets us combine global decisions about which parts of the game space to return to with curiosity-based local exploration in that space, motivated by how a human might approach these games. Our method significantly outperforms prior approaches, by 27% and 11% average normalized score on games from the Jericho benchmark (Hausknecht et al., 2020) in the deterministic and stochastic settings, respectively. On the game Zork1 in particular, XTX obtains a score of 103, more than a 2x improvement over prior methods, and pushes past several known bottlenecks in the game that had stymied previous approaches.
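The episode structure of XTX can be sketched in a few lines; the replay buffer, the count-based curiosity, and the toy environment below are all illustrative stand-ins for the learned policies in the paper:

```python
# Sketch of the eXploit-Then-eXplore (XTX) episode structure: begin each
# episode by replaying a promising past trajectory (exploitation), then
# switch to a curiosity-driven policy (exploration).

promising_trajectories = [["north", "take key", "unlock door"]]  # from past episodes
action_space = ["north", "south", "take key", "unlock door", "open chest"]
visit_counts: dict[tuple[str, str], int] = {}

def explore_action(state: str) -> str:
    """Count-based curiosity: pick the least-tried action in this state."""
    return min(action_space, key=lambda a: visit_counts.get((state, a), 0))

def run_episode(env_step, max_steps: int = 8) -> str:
    state, t = "start", 0
    for action in max(promising_trajectories, key=len):    # exploit phase
        state, t = env_step(state, action), t + 1
    while t < max_steps:                                    # explore phase
        action = explore_action(state)
        visit_counts[(state, action)] = visit_counts.get((state, action), 0) + 1
        state, t = env_step(state, action), t + 1
    return state

# Toy environment: the state string just accumulates the actions taken.
print(run_episode(lambda s, a: f"{s}/{a}"))
```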
Although deep reinforcement learning has become a promising machine learning approach for sequential decision-making problems, it is still not mature enough for high-stakes domains such as autonomous driving or medical applications. In such contexts, a learned policy needs, for instance, to be interpretable, so that it can be inspected before deployment (e.g., for safety and verifiability reasons). This survey provides an overview of various approaches for achieving higher interpretability in reinforcement learning (RL). To that end, we distinguish interpretability (as a property of a model) from explainability (as a post-hoc operation involving the intervention of a proxy) and discuss them in the context of RL, with an emphasis on the former notion. In particular, we argue that interpretable RL may embrace different facets: interpretable inputs, interpretable (transition/reward) models, and interpretable decision-making. Based on this scheme, we summarize and analyze recent work related to interpretable RL, with an emphasis on papers published in the past ten years. We also briefly discuss some related research areas and point to potential promising research directions.
Inspired by the cognitive science theory of explicit human memory systems, we have modeled an agent with short-term, episodic, and semantic memory systems, each of which is modeled with a knowledge graph. To evaluate this system and analyze the behavior of this agent, we designed and released our own reinforcement learning agent environment, "the Room", where an agent has to learn how to encode, store, and retrieve memories to maximize its return by answering questions. We show that our deep Q-learning based agent successfully learns whether a short-term memory should be forgotten or stored in the episodic or semantic memory systems. Our experiments indicate that an agent with human-like memory systems can outperform an agent without this memory structure in the environment.
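The core decision the agent must learn (forget a short-term observation, or store it episodically or semantically) can be sketched as follows, with a hand-written action argument standing in for the learned Q-policy:

```python
# Sketch of the memory-management decision in "the Room": a short-term
# observation is forgotten, stored as an episodic memory (with a timestamp),
# or generalized into semantic memory. The action choice here is supplied by
# hand; in the paper it is learned with deep Q-learning.

episodic, semantic = [], {}

def manage(observation: dict, t: int, action: str):
    if action == "episodic":
        episodic.append({**observation, "time": t})
    elif action == "semantic":        # keep only the generalizable fact
        semantic[observation["object"]] = observation["location"]
    # action == "forget": drop the observation entirely

manage({"object": "keys", "location": "desk"}, t=1, action="episodic")
manage({"object": "laptop", "location": "desk"}, t=2, action="semantic")
print(episodic, semantic)
```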
With the rise of artificial intelligence, human interaction with autonomous agents is becoming more frequent. Effective human-agent collaboration requires that users understand the agent's behavior, as failing to do so may lead to reduced productivity, misuse, or frustration. Agent strategy summarization methods are used to describe an agent's policy to its users through demonstrations. The goal of a summary is to maximize the user's understanding of the agent's capabilities by showing its behavior in a selected set of world states. While demonstrations are useful, we show that current methods are limited when the task is to compare agents, since each summary is generated independently for a specific agent. In this paper, we propose a novel method for highlighting the differences between agent policies by identifying states in which the agents disagree on the best course of action. We conducted user studies to assess the usefulness of disagreement-based summaries for identifying superior agents and for conveying differences between agents. The results show that disagreement-based summaries lead to improved user performance compared to summaries generated using HIGHLIGHTS, which produces a summary for each agent independently.
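A minimal sketch of disagreement-based selection, using random Q-tables as stand-ins for trained agents:

```python
import numpy as np

# Sketch: select the states in which two agents' policies choose different
# best actions, and show those states to the user as the summary.

rng = np.random.default_rng(0)
n_states, n_actions = 10, 4
q_agent1 = rng.random((n_states, n_actions))   # stand-in for trained agent 1
q_agent2 = rng.random((n_states, n_actions))   # stand-in for trained agent 2

disagreement_states = [
    s for s in range(n_states)
    if q_agent1[s].argmax() != q_agent2[s].argmax()
]
print("show these states in the summary:", disagreement_states)
```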
Deep reinforcement learning (DRL) has empowered a variety of artificial intelligence fields, including pattern recognition, robotics, recommendation systems, and gaming. Similarly, graph neural networks (GNNs) have demonstrated superior performance in supervised learning on graph-structured data. Recently, the fusion of GNNs with DRL for graph-structured environments has attracted a lot of attention. This paper provides a comprehensive review of these hybrid works. These works can be divided into two categories: (1) algorithmic enhancements, where DRL and GNN complement each other for better utility; and (2) application-specific enhancements, where DRL and GNN support each other. This fusion effectively addresses various complex problems in engineering and the life sciences. Based on the review, we further analyze the applicability and benefits of fusing these two domains, especially in terms of improving generalizability and reducing computational complexity. Finally, we highlight the key challenges in integrating DRL and GNNs, along with potential future research directions, which should interest the broader machine learning community.
Explainable artificial intelligence is a research field that tries to provide more transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand the robot's decision-making process. Previous work, however, has focused largely on providing technical explanations rather than ones that non-expert end users can readily understand. In this work, we make use of human-like explanations built from the probability of success, shown after an autonomous robot performs an action, to convey whether the goal will be achieved. These explanations are intended to be understood by people with little or no experience with artificial intelligence methods. This paper presents a user trial studying whether explanations focused on an action's probability of succeeding at its goal constitute suitable explanations for non-expert end users. The results show that non-expert participants rated robot explanations focused on the probability of success higher and with less variance than technical explanations generated from Q-values, and also favored counterfactual explanations over standalone explanations.
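The contrast the study draws can be illustrated by generating both explanation styles from the same estimates; the probabilities and phrasing below are assumptions for illustration:

```python
# Sketch: turning estimated success probabilities into the kind of non-expert
# explanation the study compares against raw Q-values, including a simple
# counterfactual variant that contrasts the chosen action with an alternative.

p_success = {"go left": 0.82, "go right": 0.35}
chosen, alternative = "go left", "go right"

standalone = (f"I chose to {chosen} because it gives me a "
              f"{p_success[chosen]:.0%} chance of reaching the goal.")
counterfactual = (standalone[:-1] + f", whereas {alternative} would only "
                  f"give me a {p_success[alternative]:.0%} chance.")
print(counterfactual)
```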
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
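The return-conditioned flavor of Hindsight Credit Assignment admits a compact worked example. By Bayes' rule, Q(x, a) = E_Z[h(a|x, Z) / π(a|x) · Z], where h(a|x, z) is the hindsight probability that a was the first action given that return z was observed; the data below are synthetic:

```python
import numpy as np

# Sketch of return-conditioned hindsight credit assignment on synthetic data:
# estimate h(a|x,z) empirically, then recover Q(x,a) and the advantage.

pi = {"a0": 0.5, "a1": 0.5}                        # behaviour policy at state x
samples = [("a0", 1.0)] * 40 + [("a1", 1.0)] * 10 + \
          [("a0", 0.0)] * 10 + [("a1", 0.0)] * 40   # (first action, return) pairs

def hindsight(a: str, z: float) -> float:
    """h(a|x,z): empirical frequency of a among trajectories with return z."""
    matching = [act for act, ret in samples if ret == z]
    return matching.count(a) / len(matching)

def q_value(a: str) -> float:
    # Q(x,a) = E_Z[ h(a|x,Z)/pi(a|x) * Z ] over the marginal return distribution
    return float(np.mean([hindsight(a, z) / pi[a] * z for _, z in samples]))

v = float(np.mean([z for _, z in samples]))
for a in pi:
    print(a, "advantage:", q_value(a) - v)   # a0 tends to precede return 1
```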
The deployment of socially intelligent agents (SIAs) in learning environments has proven to have several advantages across different application domains. Social agent authoring tools allow scenario designers to create tailored experiences with a high degree of control over SIAs' behavior; on the flip side, however, this comes at a cost, as the complexity of the scenarios and of their authoring can become overwhelming. In this paper, we introduce the concept of explainability for social agent authoring tools, with the goal of analyzing whether authoring tools for social agents are understandable and explainable. To this end, we examine whether the authoring tool FAtiMA-Toolkit is understandable and whether its authoring steps are explainable from the author's point of view. We conducted two user studies to quantitatively assess the explainability, understandability, and transparency of FAtiMA-Toolkit from the perspective of scenario designers. One of the key findings is that FAtiMA-Toolkit's conceptual model is generally understandable, but its emotion-based concepts are not as easy to understand and use. Although there are some positive aspects regarding the explainability of FAtiMA-Toolkit, progress is still needed to achieve a fully explainable social agent authoring tool. We provide a set of key concepts and possible solutions that can guide developers in building such tools.
Recently, text-world games have been proposed to enable artificial agents to understand and reason about real-world scenarios. These text-based games are challenging for artificial agents because they require understanding and interacting with natural language in a partially observable environment. An agent observes the environment through textual descriptions that are designed to be challenging even for human players. Past approaches have not paid enough attention to the language understanding capabilities of the proposed agents. Typically, these approaches train an agent from scratch, learning textual representations and the gameplay online during training using a temporal-difference loss. Given the sample inefficiency of RL methods, it is inefficient to learn textual representations rich enough to understand and reason over textual observations in such complex game environments. In this paper, we improve the agents' semantic understanding by proposing a simple RL-with-LM framework in which we combine transformer-based language models with deep RL models. We conduct a detailed study of our framework showing how our model outperforms all existing agents on the popular game Zork1, achieving a score of 44.7, 1.6 higher than the state-of-the-art model. Overall, our proposed approach outperforms the state-of-the-art models on 4 out of 14 text-based games while performing comparably on the remaining games.
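The recipe the paper argues for, reusing a pretrained language model as the text encoder instead of learning representations from scratch, can be sketched as follows (the frozen encoder, mean pooling, and action-head size are illustrative choices, not the paper's exact architecture):

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Sketch: encode the textual observation with a pretrained transformer and
# feed the embedding to a small Q-network over a fixed set of template actions.

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
lm = AutoModel.from_pretrained("distilbert-base-uncased")

q_head = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 10))

def q_values(observation: str) -> torch.Tensor:
    inputs = tok(observation, return_tensors="pt", truncation=True)
    with torch.no_grad():                      # frozen LM; only q_head trains
        emb = lm(**inputs).last_hidden_state.mean(dim=1)   # mean-pool tokens
    return q_head(emb)                         # one Q-value per template action

print(q_values("You are in the kitchen. A door leads north."))
```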