Deciding how to allocate one's limited computational resources is critical for human intelligence. An important component of this metacognitive ability is deciding whether to keep deliberating about what to do or to commit to a decision and act on it. Here, we show that people acquire this ability through learning, and we reverse-engineer the underlying learning mechanisms. Using a process-tracing paradigm that externalizes human planning, we find that people rapidly adapt their amount of planning to the costs and benefits of planning. To discover the underlying metacognitive learning mechanisms, we augmented a set of reinforcement learning models with metacognitive features and performed Bayesian model selection. Our results suggest that the metacognitive ability to adjust the amount of planning might be learned through a policy-gradient mechanism guided by metacognitive pseudo-rewards that communicate the value of planning.
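To make the proposed mechanism concrete, here is a minimal sketch of a REINFORCE-style learner that adjusts how much to plan, guided by a metacognitive pseudo-reward. The pseudo-reward definition, hyperparameters, and all names are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(5)   # preferences over taking 0..4 planning steps
alpha = 0.1            # learning rate

def pseudo_reward(n_steps, value_of_planning=1.0, cost_per_step=0.3):
    # Assumed form: diminishing returns of planning minus a linear time cost.
    return value_of_planning * (1 - np.exp(-n_steps)) - cost_per_step * n_steps

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax policy
    n = rng.choice(len(logits), p=probs)           # sample an amount of planning
    grad = -probs
    grad[n] += 1.0                                 # grad of log pi(n) w.r.t. logits
    logits += alpha * pseudo_reward(n) * grad      # policy-gradient update

print("learned distribution over planning amounts:", np.round(probs, 2))
```

Under this assumed pseudo-reward, the learner concentrates probability on the planning amount whose benefit most exceeds its cost, mirroring the adaptive planning the abstract reports.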
Human decision-making is plagued by many systematic errors. These errors can be avoided by providing decision aids that guide decision-makers to attend to the important information and to integrate it according to a rational decision strategy. Designing such decision aids used to be a tedious manual process; advances in cognitive science might make it possible to automate this process in the future. We recently introduced machine learning methods for automatically discovering optimal strategies for human decision-making and for automatically explaining those strategies to people. Decision aids constructed through this approach were able to improve human decision-making. However, following the descriptions generated by that method is very tedious. We hypothesized that this problem can be overcome by conveying the automatically discovered decision strategies as a series of natural-language instructions. Experiment 1 showed that people do indeed find such procedural instructions easier to understand than the output of the previous method. Encouraged by this finding, we developed an algorithm for translating the output of our previous method into procedural instructions. We applied the improved method to automatically generate decision aids for a naturalistic planning task (i.e., planning a road trip) and a naturalistic decision task (i.e., choosing a mortgage). Experiment 2 showed that these automatically generated decision aids significantly improved people's performance at planning road trips and choosing mortgages. These findings suggest that AI-powered boosting has potential for improving human decision-making in the real world.
The ability to plan efficiently is critical for both living organisms and artificial systems. Model-based planning and prospection have been studied extensively in cognitive neuroscience and artificial intelligence (AI), but from different perspectives and with different desiderata that are difficult to reconcile (biological realism versus scalability). Here, we introduce a novel method for planning in large POMDPs, Active Tree Search (AcT), which combines the normative character and biological realism of a leading planning theory in neuroscience (active inference) with the scalability of tree search methods in AI. This unification is beneficial for both approaches. On the one hand, using tree search permits the biologically grounded, first-principles method of active inference to be applied to large-scale problems. On the other hand, active inference provides a principled solution to the exploration-exploitation dilemma, which is often handled heuristically in tree search methods. Our simulations show that AcT successfully navigates binary trees that are challenging for sampling-based methods, solves problems that require adaptive exploration, and handles the large POMDP problem "RockSample", on which AcT approximates state-of-the-art POMDP solutions. Furthermore, we illustrate how AcT can be used to simulate the neurophysiological responses (e.g., in the hippocampus and prefrontal cortex) of humans and other animals solving large planning problems. These numerical analyses show that Active Tree Search is a principled realization of planning theories from neuroscience and AI, offering both biological realism and scalability.
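For reference, the central quantity that active inference agents minimize when evaluating policies is the expected free energy; one standard form (not necessarily the exact variant AcT propagates through its search tree) is
$$G(\pi) = \sum_{\tau} \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}\big[\ln Q(s_\tau \mid \pi) - \ln P(o_\tau, s_\tau)\big],$$
which decomposes into an epistemic (information-seeking) term and a pragmatic (goal-seeking) term, yielding the principled treatment of the exploration-exploitation dilemma mentioned above.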
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
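For reference, a hedged sketch of the return-conditional identity at the core of Hindsight Credit Assignment (Harutyunyan et al., 2019): the advantage is expressed through a learned hindsight distribution $h(a \mid x, z)$, the probability that action $a$ was taken in state $x$ given that return $z$ was subsequently obtained (under the original paper's positivity assumptions on $h$):
$$A^{\pi}(x, a) = \mathbb{E}\left[\left(1 - \frac{\pi(a \mid x)}{h(a \mid x, Z)}\right) Z\right],$$
where $Z$ is the random return from $x$. Intuitively, an action that was no more likely in hindsight than under the policy receives no credit for the observed return.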
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
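To ground the survey's central ideas (trial-and-error interaction, delayed reinforcement, exploration versus exploitation), here is a minimal tabular Q-learning sketch on a toy chain MDP; the environment and hyperparameters are illustrative assumptions.

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
gamma, alpha, eps = 0.9, 0.5, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    # Action 1 moves right, action 0 moves left; only the rightmost
    # state pays off (delayed reinforcement).
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(2000):
    s = 0
    for _ in range(20):
        # Epsilon-greedy: trade off exploration against exploitation.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # Temporal-difference update toward the one-step Bellman target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.round(Q, 2))  # moving right (action 1) should dominate in every state
```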
Affordance refers to the perception of possible actions allowed by an object. Despite its relevance to human-computer interaction, no existing theory explains the mechanisms that underpin affordance-formation; that is, how affordances are discovered and adapted via interaction. We propose an integrative theory of affordance-formation based on the theory of reinforcement learning in cognitive sciences. The key assumption is that users learn to associate promising motor actions with percepts via experience when reinforcement signals (success/failure) are present. They also learn to categorize actions (e.g., "rotating" a dial), which gives them the ability to name and reason about affordances. When encountering novel widgets, their ability to generalize these actions determines their ability to perceive affordances. We implement this theory in a virtual robot model, which demonstrates human-like adaptation of affordances in interactive widget tasks. While its predictions align with trends in human data, humans were able to adapt their affordances more quickly, suggesting the existence of additional mechanisms.
Human behavior emerges from planning over elaborate decompositions of tasks into goals, subgoals, and low-level actions. How are these decompositions created and used? Here, we propose and evaluate a normative framework for task decomposition based on the simple idea that people decompose tasks to reduce the overall cost of planning while maintaining task performance. Analyzing 11,117 distinct graph-structured planning tasks, we find that our framework justifies several existing heuristics for task decomposition and makes predictions that can be distinguished from two alternative normative accounts. We report a behavioral study of task decomposition ($N=806$) that uses 30 randomly sampled graphs, a larger and more diverse set than that of any previous behavioral study on this topic. We find that human responses are more consistent with our framework for task decomposition than alternative normative accounts and are most consistent with a heuristic -- betweenness centrality -- that is justified by our approach. Taken together, our results provide new theoretical insight into the computational principles underlying the intelligent structuring of goal-directed behavior.
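As an illustration of the betweenness-centrality heuristic that the framework justifies, the following sketch nominates a subgoal in a small graph-structured task; the graph and the single-subgoal selection rule are illustrative assumptions (networkx assumed available).

```python
import networkx as nx

# Nominate as a subgoal the state that lies on the most shortest paths.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4),  # a corridor of states
                  (2, 5), (5, 6)])                  # a side branch off state 2

centrality = nx.betweenness_centrality(G)
subgoal = max(centrality, key=centrality.get)
print(f"candidate subgoal: state {subgoal}")        # the bottleneck state (2)
```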
Monte Carlo Tree Search (MCTS) is a powerful approach for designing game-playing bots or solving sequential decision problems. The method relies on intelligent tree search that balances exploration and exploitation. MCTS performs random sampling in the form of simulations and stores statistics about actions to make better-informed choices in each subsequent iteration. The method has become a state-of-the-art technique for combinatorial games; however, in more complex games (e.g., those with high branching factors or real-time ones), as well as in various practical domains (e.g., transportation, scheduling, or security), an effective MCTS application typically requires problem-dependent modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012; contributions that have appeared since its release are of particular interest.
Monte Carlo Tree Search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarise the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
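At the core of the algorithm surveyed here is the UCT selection rule, which descends the tree by picking the child maximizing an upper confidence bound:
$$a^{*} = \arg\max_{a}\left(\bar{X}_a + C\sqrt{\frac{\ln N}{n_a}}\right),$$
where $\bar{X}_a$ is the mean reward observed for child $a$, $n_a$ its visit count, $N$ the parent's visit count, and $C$ an exploration constant (often $\sqrt{2}$); the first term drives exploitation of promising moves and the second drives exploration of rarely tried ones.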
In this paper, we study the notion of legibility in sequential decision-making tasks under uncertainty. Previous works that extend legibility to scenarios beyond robot motion either focus on deterministic settings or are computationally too expensive. Our proposed approach, called POL-MDP, is able to handle uncertainty while remaining computationally tractable. We establish the advantages of our approach over state-of-the-art methods in several simulated scenarios of varying complexity. We also showcase the use of our legible policies as demonstrations for an inverse reinforcement learning agent, establishing their superiority over demonstrations based on optimal policies. Finally, we assess the legibility of the computed policies through a user study in which people are asked to infer the goal of a mobile robot by observing its actions.
The rapid changes in the finance industry due to the increasing amount of data have revolutionized techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which rely heavily on model assumptions, recent developments in reinforcement learning (RL) are able to make full use of large amounts of financial data with fewer model assumptions and to improve decisions in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We give an introduction to Markov decision processes, the setting for many commonly used RL approaches. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
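For reference, the Markov decision process setting mentioned above is specified by the tuple
$$\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, R, \gamma),$$
with state space $\mathcal{S}$, action space $\mathcal{A}$, transition kernel $P(s' \mid s, a)$, reward function $R(s, a)$, and discount factor $\gamma \in [0, 1)$; value-based methods estimate expected discounted returns in this setting, while policy-based methods optimize a parameterized policy directly.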
Distributional reinforcement learning (RL), in which agents learn about all the possible long-term consequences of their actions and not just the expected value, has attracted considerable recent interest. One of the most important affordances of a distributional view is facilitating a modern, measured approach to risk when outcomes are not completely certain. By contrast, psychological and neuroscientific investigations of decision-making under risk have utilized a variety of more venerable theoretical models that lack axiomatically desirable properties such as coherence. Here, we consider a risk measure particularly relevant for modeling human and animal planning, called conditional value-at-risk (CVaR), which quantifies worst-case outcomes (e.g., vehicle accidents or predation). We first adopt a conventional distributional approach to CVaR in a sequential setting and reanalyze the choices of human decision-makers in the well-known two-step task, revealing substantial risk aversion that had been lurking beneath stickiness and perseveration. We then consider a further critical property of risk sensitivity, namely time consistency, and show alternatives to this form of CVaR that enjoy this desirable characteristic. We use simulations to examine settings in which the various forms differ in ways that have implications for human and animal planning and behavior.
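A minimal sketch of the risk measure under discussion: CVaR is the expected return within the worst $\alpha$-fraction of outcomes. The return distribution below is an illustrative assumption.

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Mean of the lowest alpha-quantile of sampled returns (lower tail)."""
    returns = np.sort(np.asarray(returns))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=2.0, size=10_000)
print(f"expected return:  {samples.mean():.2f}")
print(f"CVaR (alpha=0.1): {cvar(samples, 0.1):.2f}")  # far lower: tail risk
```

A risk-neutral agent optimizes the first number; a CVaR agent optimizes the second, which is what makes worst-case outcomes such as accidents or predation salient.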
Working together on complex collaborative tasks requires agents to coordinate their actions. Doing this explicitly or completely before the actual interaction is not always possible, nor is it sufficient. Agents also need to continuously understand the current actions of others and quickly adapt their own behavior accordingly. Here we investigate how efficient, automatic coordination processes at the level of mental states (intentions, goals), which we call belief resonance, can lead to collaborative situated problem-solving. We present a model of hierarchical active inference for collaborative agents (HAICA). It combines efficient Bayesian theory-of-mind processes with a perception-action system based on predictive processing and active inference. Belief resonance is realized by letting the inferred mental states of one agent influence another agent's predictive beliefs about its own goals and intentions. In this way, the inferred mental states influence the agent's own task behavior without explicit collaborative reasoning. We implement and evaluate this model in the Overcooked domain, in which two agents with varying degrees of belief resonance team up to fulfill meal orders. Our results demonstrate that HAICA-based agents achieve team performance comparable to recent state-of-the-art approaches while incurring much lower computational costs. We also show that belief resonance is especially beneficial in settings where the agents have asymmetric knowledge about the environment. These results indicate that belief resonance and active inference allow for fast and efficient agent coordination and can thus serve as a building block for collaborative cognitive agents.
Real-time planning under uncertainty is critical for robots operating in complex dynamic environments. Consider, for example, an autonomous robot vehicle driving in dense, unregulated urban traffic of cars, motorcycles, buses, and so on. The robot vehicle has to plan over both short and long time horizons in order to interact with many traffic participants of uncertain intentions and to drive effectively. Planning explicitly over a long time horizon, however, incurs prohibitive computational cost and is impractical under real-time constraints. To achieve real-time performance for large-scale planning, this work introduces a new algorithm, Learning from Tree Search for Driving (LeTS-Drive), which integrates planning and learning in a closed loop, and applies it to autonomous driving in crowded urban traffic in simulation. Specifically, LeTS-Drive learns a policy and its value function from data provided by an online planner, which searches a sparsely sampled belief tree; the online planner in turn uses the learned policy and value function as heuristics to scale up its run-time performance for real-time robot control. These two steps are repeated to form a closed loop, so that the planner and the learner inform each other and improve in synchrony. The algorithm learns on its own in a self-supervised manner, without human effort on explicit data labeling. Experimental results demonstrate that LeTS-Drive outperforms either planning or learning alone, as well as an open-loop integration of planning and learning.
Active inference is a probabilistic framework for modelling the behaviour of biological and artificial agents, which derives from the principle of minimising free energy. In recent years, this framework has been successfully applied to a variety of situations in which the goal was to maximise reward, offering comparable and sometimes superior performance to alternative approaches. In this paper, we clarify the connection between reward maximisation and active inference by demonstrating how and when active inference agents execute actions that are optimal for maximising reward. Precisely, we show the conditions under which active inference produces the optimal solution to the Bellman equation, a formulation that underlies several approaches to model-based reinforcement learning and control. On partially observed Markov decision processes, the standard active inference scheme can produce Bellman-optimal actions for planning horizons of 1, but not beyond. In contrast, a recently developed recursive active inference scheme (sophisticated inference) can produce Bellman-optimal actions on any finite temporal horizon. We supplement the analysis with a discussion of the broader relationship between active inference and reinforcement learning.
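For reference, the Bellman optimality equation discussed above, written for a fully observed MDP, is
$$V^{*}(s) = \max_{a}\Big[R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s')\Big];$$
the paper's question is when active inference's action selection attains this fixed point: at planning horizon 1 for the standard scheme, and at any finite horizon for sophisticated inference.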
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
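As a concrete anchor for the value-based stream, the deep Q-network covered in this survey minimises the temporal-difference loss
$$L(\theta) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}\Big[\big(r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta)\big)^{2}\Big],$$
where $\mathcal{D}$ is an experience-replay buffer and $\theta^{-}$ are the parameters of a periodically updated target network, two stabilising devices for learning directly from pixels.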
We consider the problem of creating assistants that can help agents, often humans, solve novel sequential decision problems, under the assumption that the agent cannot explicitly specify a reward function to the assistant. Instead of aiming to automate and act in place of the agent, as in current approaches, we give the assistant an advisory role and keep the agent as the main decision-maker. The difficulty is that we must account for potential biases induced by the agent's limitations or constraints, which may cause it to seemingly irrationally reject advice. To address this, we introduce a novel formalization of assistance that models these biases, allowing the assistant to infer and adapt to them. We then introduce a new method for planning the assistant's advice that can scale to large decision problems. Finally, we show experimentally that our approach adapts to these agent biases and yields higher cumulative reward for the agent than automation-based alternatives.
Humans are spectacular reinforcement learners, constantly learning from and adjusting to experience and feedback. Unfortunately, this doesn't necessarily mean humans are fast learners. When tasks are challenging, learning can become unacceptably slow. Fortunately, humans do not have to learn tabula rasa, and learning speed can be greatly increased with learning aids. In this work we validate a new type of learning aid -- reward shaping for humans via inverse reinforcement learning (IRL). The goal of this aid is to increase the speed with which humans can learn good policies for specific tasks. Furthermore, this approach complements alternative machine learning techniques such as safety features that try to prevent individuals from making poor decisions. To achieve our results we first extend a well-known IRL algorithm via kernel methods. Afterwards we conduct two human subjects experiments using an online game where players have limited time to learn a good policy. We show with statistical significance that players who receive our learning aid are able to approach desired policies more quickly than the control group.
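For context, the classical potential-based form of reward shaping (Ng, Harada & Russell, 1999) augments the environment reward as
$$R'(s, a, s') = R(s, a, s') + \gamma\,\Phi(s') - \Phi(s),$$
which provably leaves the optimal policy unchanged. A natural way to connect this to the aid described here (an assumption, not necessarily the authors' construction) is to derive the potential $\Phi$ from the reward function recovered by the kernel-based IRL step, so that the shaped feedback steers players toward the demonstrated policies without altering which policies are optimal.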
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.
Structural Health Monitoring (SHM) describes a process for inferring quantifiable metrics of structural condition, which can serve as input to support decisions on the operation and maintenance of infrastructure assets. Given the long lifespan of critical structures, this problem can be cast as a sequential decision making problem over prescribed horizons. Partially Observable Markov Decision Processes (POMDPs) offer a formal framework to solve the underlying optimal planning task. However, two issues can undermine the POMDP solutions. Firstly, the need for a model that can adequately describe the evolution of the structural condition under deterioration or corrective actions and, secondly, the non-trivial task of recovery of the observation process parameters from available monitoring data. Despite these potential challenges, the adopted POMDP models do not typically account for uncertainty on model parameters, leading to solutions which can be unrealistically confident. In this work, we address both key issues. We present a framework to estimate POMDP transition and observation model parameters directly from available data, via Markov Chain Monte Carlo (MCMC) sampling of a Hidden Markov Model (HMM) conditioned on actions. The MCMC inference estimates distributions of the involved model parameters. We then form and solve the POMDP problem by exploiting the inferred distributions, to derive solutions that are robust to model uncertainty. We successfully apply our approach on maintenance planning for railway track assets on the basis of a "fractal value" indicator, which is computed from actual railway monitoring data.
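A hedged sketch of one conjugate step inside such an MCMC scheme: given (imputed) hidden-state transition counts under one maintenance action, each transition row of the HMM has a Dirichlet posterior. The counts below are illustrative assumptions; a full sampler would alternate this step with resampling the hidden state sequence (e.g., by forward-filtering backward-sampling) as part of the MCMC over the HMM.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 3
prior = np.ones(n_states)            # symmetric Dirichlet(1, ..., 1) prior

# Hypothetical deterioration counts n[i, j]: observed transitions i -> j.
counts = np.array([[40, 8, 2],
                   [0, 30, 10],
                   [0, 0, 25]])

# One posterior draw of the transition matrix, row by row.
T_sample = np.vstack([rng.dirichlet(prior + counts[i]) for i in range(n_states)])
print(np.round(T_sample, 2))
```

Repeated draws of this kind yield the parameter distributions that the framework then exploits to obtain POMDP solutions robust to model uncertainty.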