Model-based reinforcement learning methods achieve remarkable sample efficiency on many tasks, but their performance is often limited by model error. To reduce model error, previous works use a single, carefully designed network to fit the entire environment dynamics, treating the dynamics as a black box. However, these methods ignore the decomposability of the environment: the dynamics may consist of multiple sub-dynamics that can be modeled separately, allowing us to construct the world model more accurately. In this paper, we propose Environment Dynamics Decomposition (ED2), a novel world-model construction framework that models the environment in a decomposed manner. ED2 contains two key components: sub-dynamics discovery (SD2) and dynamics decomposition prediction (D2P). SD2 discovers the sub-dynamics in an environment, and D2P then builds the decomposed world model following the discovered sub-dynamics. ED2 can be easily combined with existing MBRL algorithms, and empirical results show that ED2 significantly reduces model error and boosts the performance of state-of-the-art MBRL algorithms on various tasks.
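The abstract does not specify how SD2 groups state dimensions or how D2P aggregates the sub-models, so the following sketch only illustrates the general shape of a decomposed world model: one small network per discovered sub-dynamics, each predicting the change of its own subset of state dimensions. The grouping, the delta-prediction form, and all sizes here are illustrative assumptions rather than ED2's actual design.

```python
import torch
import torch.nn as nn

class DecomposedDynamicsModel(nn.Module):
    """Illustrative decomposed world model: one small network per sub-dynamics.

    `groups` is a list of index lists; group k's network predicts the change of
    exactly those state dimensions from the full (state, action) input.
    """

    def __init__(self, state_dim, action_dim, groups, hidden=128):
        super().__init__()
        self.groups = groups
        self.sub_models = nn.ModuleList([
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, len(g)),          # delta for this group's dimensions
            )
            for g in groups
        ])

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        delta = torch.zeros_like(state)
        for g, net in zip(self.groups, self.sub_models):
            delta[..., g] = net(x)                  # each sub-model fills only its own dimensions
        return state + delta                        # predicted next state

# Example with a hypothetical 6-D state split into two sub-dynamics;
# the model would be trained with an MSE loss on real transitions.
model = DecomposedDynamicsModel(state_dim=6, action_dim=2, groups=[[0, 1, 2], [3, 4, 5]])
s, a = torch.randn(32, 6), torch.randn(32, 2)
next_s = model(s, a)
```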
Reinforcement learning (RL) solves sequential decision-making problems via a trial-and-error process of interacting with the environment. While RL has achieved great success in playing complex video games, making mistakes is always undesirable in the real world. To improve sample efficiency and thereby reduce such mistakes, model-based reinforcement learning (MBRL) is believed to be a promising direction: it builds environment models in which trial and error can take place without incurring real cost. In this survey, we review MBRL with a focus on recent progress in deep RL. For non-tabular environments, there is always a generalization error between the learned environment model and the real environment. It is therefore crucial to analyze the discrepancy between policy training in the environment model and in the real environment, which in turn guides the design of better algorithms for model learning, model usage, and policy training. In addition, we discuss recent progress on other forms of RL, including offline RL, goal-conditioned RL, multi-agent RL, and meta-RL. We further discuss the applicability and advantages of MBRL for real-world tasks. Finally, we conclude the survey by discussing the prospects for the future development of MBRL. We believe that MBRL has great, often overlooked potential and advantages for real-world applications, and we hope this survey will attract more research on MBRL.
Recently, model-based agents have achieved better performance than model-free ones using the same computational budget and training time in single-agent environments. However, due to the complexity of multi-agent systems, it is difficult to learn a model of the environment. The significant compounding error may hinder the learning process when model-based methods are applied to multi-agent tasks. This paper proposes an implicit model-based multi-agent reinforcement learning method based on value decomposition methods. Under this method, agents can interact with the learned virtual environment and evaluate the current state value according to imagined future states in the latent space, giving the agents foresight. Our approach can be applied to any multi-agent value decomposition method. The experimental results show that our method improves sample efficiency in different partially observable Markov decision process domains.
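As a rough illustration of "evaluating the current state value according to imagined future states in the latent space", the sketch below rolls a learned latent dynamics model forward under the current policy and aggregates imagined value estimates. The interfaces (`policy`, `latent_step`, `value_head`) and the discounted-average aggregation are assumptions, not the paper's exact architecture.

```python
import torch

def imagined_value(z0, policy, latent_step, value_head, horizon=5, gamma=0.99):
    """Estimate a state value by rolling a learned latent model forward.

    z0:          current latent state, shape (batch, latent_dim)
    policy:      maps latent state -> action tensor
    latent_step: learned latent dynamics model, (z, a) -> next latent z'
    value_head:  maps latent state -> value estimate
    """
    z, values, discount = z0, [], 1.0
    with torch.no_grad():
        for _ in range(horizon):
            a = policy(z)
            z = latent_step(z, a)
            values.append(discount * value_head(z))
            discount *= gamma
    # Average of discounted imagined values as a foresight-augmented estimate.
    return torch.stack(values).mean(dim=0)
```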
Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency. Despite its impressive success so far, it remains unclear how to appropriately schedule important hyperparameters to achieve adequate performance, such as the ratio of real data used for policy optimization in Dyna-style model-based algorithms. In this paper, we first analyze the role of real data in policy training, which suggests that gradually increasing the proportion of real data yields better performance. Inspired by this analysis, we propose a framework named AutoMBPO that automatically schedules the real data ratio, together with other hyperparameters, for training the Model-Based Policy Optimization (MBPO) algorithm, a representative instance of model-based methods. On several continuous control tasks, MBPO instances trained with hyperparameters scheduled by AutoMBPO significantly surpass the original algorithm, and the real data ratio schedule found by AutoMBPO is consistent with our theoretical analysis.
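The analysis suggests gradually increasing the real-data ratio during training; the sketch below shows what such a schedule and a mixed real/model mini-batch could look like. The linear schedule and its endpoint values are illustrative placeholders, not the schedule AutoMBPO actually learns.

```python
import random

def real_data_ratio(epoch, total_epochs, start=0.05, end=0.5):
    """Linearly increase the fraction of real transitions used for policy training."""
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start + frac * (end - start)

def sample_mixed_batch(real_buffer, model_buffer, batch_size, ratio):
    """Mix real and model-generated transitions according to `ratio`.

    Both buffers are assumed to be plain lists of transition tuples.
    """
    n_real = int(round(batch_size * ratio))
    batch = random.sample(real_buffer, min(n_real, len(real_buffer)))
    n_model = min(batch_size - len(batch), len(model_buffer))
    batch += random.sample(model_buffer, n_model)
    return batch
```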
At the core of intelligent decision-making systems, how to represent and optimize policies is a fundamental problem. The root challenge is the large scale and high complexity of the policy space, which exacerbates the difficulty of policy learning, especially in real-world scenarios. As an ideal surrogate of the policy space, recent work on policy representation in low-dimensional latent spaces has shown its potential for improving both policy evaluation and policy optimization. The key question underlying these studies is by what criteria we should abstract the policy space to obtain the desired compression and generalization. However, the theory of policy abstraction and methods for policy representation learning have received little study in the literature. In this work, we make an initial effort to fill this gap. First, we propose a unified policy abstraction theory containing three types of policy abstraction associated with policy features at different levels. We then generalize them into three policy metrics that quantify the distance (i.e., similarity) between policies, for more convenient use in learning policy representations. Furthermore, we propose a policy representation learning approach based on deep metric learning. In the empirical study, we investigate the efficacy of the proposed policy metrics and representations in characterizing policy differences and conveying policy generalization, respectively. Our experiments cover both policy optimization and policy evaluation problems, involving trust-region policy optimization (TRPO), diversity-guided evolution strategy (DGES), and off-policy evaluation (OPE). Somewhat naturally, the experimental results indicate that there is no universally optimal abstraction for all downstream learning problems, while the influence-response abstraction tends to be a generally preferable choice.
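To make the combination of a policy metric and deep metric learning concrete, the sketch below assumes a simple behavioural distance (the mean squared difference between two policies' actions on a shared set of probe states) and trains an encoder whose embedding distances regress that distance. Both the distance and the encoder are generic stand-ins, not the three abstractions or metrics proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def behavioural_distance(actions_i, actions_j):
    """Distance between two policies via their actions on shared probe states.

    actions_*: tensors of shape (batch, n_probe, act_dim).
    """
    return ((actions_i - actions_j) ** 2).mean(dim=(-2, -1))

class PolicyEncoder(nn.Module):
    """Embeds a policy from its probe-state action matrix."""
    def __init__(self, n_probe, act_dim, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(start_dim=1),
            nn.Linear(n_probe * act_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, actions):
        return self.net(actions)

def metric_learning_loss(encoder, actions_i, actions_j):
    """Make embedding distances regress the behavioural policy distance."""
    d_target = behavioural_distance(actions_i, actions_j)
    d_embed = (encoder(actions_i) - encoder(actions_j)).pow(2).sum(dim=-1)
    return F.mse_loss(d_embed, d_target)
```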
Safety has become one of the main challenges in applying deep reinforcement learning to real-world systems. Currently, the incorporation of external knowledge such as human oversight is the only means of preventing the agent from visiting catastrophic states. In this paper, we propose MBHI, a novel framework for safe model-based reinforcement learning that ensures state-level safety and can effectively avoid both "local" and "non-local" catastrophes. Supervised learners are incorporated into MBHI training to imitate human blocking decisions. Similar to the human decision-making process, MBHI rolls out an imagined trajectory in the dynamics model before executing an action in the environment and estimates its safety. When the imagination encounters a catastrophe, MBHI blocks the current action and uses an efficient MPC method to output a safe policy. We evaluate our method on several safety tasks, and the results show that MBHI achieves better performance than the baselines in terms of sample efficiency and number of catastrophes.
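A minimal sketch of the blocking mechanism described above, assuming a learned dynamics model, a blocker trained from human blocking decisions, and an MPC planner as the safe fallback; all object interfaces here are hypothetical.

```python
def is_action_safe(state, action, dynamics_model, policy, blocker, horizon=10):
    """Roll out an imagined trajectory; return False if the blocker flags any state."""
    s = dynamics_model.predict(state, action)       # assumed model interface
    for _ in range(horizon):
        if blocker.predicts_catastrophe(s):          # learner imitating human blocking
            return False
        s = dynamics_model.predict(s, policy.act(s))
    return True

def safe_step(env, state, policy, dynamics_model, blocker, mpc_planner):
    """Execute the policy action only if its imagined rollout looks safe."""
    action = policy.act(state)
    if not is_action_safe(state, action, dynamics_model, policy, blocker):
        action = mpc_planner.plan(state)             # fall back to a safety-oriented MPC action
    return env.step(action)
```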
A key challenge of continual reinforcement learning (CRL) in dynamic environments is to promptly adapt the agent's behavior as the environment changes over its lifetime, while minimizing catastrophic forgetting of the learned information. To address this challenge, in this paper we propose DaCoRL, i.e., dynamics-adaptive continual RL. DaCoRL learns a context-conditioned policy using progressive contextualization, which incrementally clusters a stream of stationary tasks in the dynamic environment into a series of contexts and opts for an expandable multi-head neural network to approximate the policy. Specifically, we define a set of tasks with similar dynamics as an environmental context and formalize context inference as a process of online Bayesian infinite Gaussian mixture clustering on environment features, resorting to online Bayesian inference to infer the posterior distribution over contexts. Under the assumption of a Chinese restaurant process prior, this technique can accurately classify the current task as a previously seen context or instantiate a new context as needed, without relying on any external indicator to signal environmental changes in advance. In addition, we employ an expandable multi-head neural network whose output layer is synchronously expanded with each newly instantiated context, together with a knowledge distillation regularization term to preserve performance on learned tasks. As a general framework that can be combined with various deep RL algorithms, DaCoRL features consistent superiority over existing methods in terms of stability, overall performance, and generalization ability, as verified by extensive experiments on several robot navigation and MuJoCo locomotion tasks.
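The sketch below conveys only the control flow of the context-inference step: assign the current task's environment features to an existing context or instantiate a new one, after which the policy network would grow a new head. The distance-threshold rule used here is a deliberately simplified stand-in for the online Bayesian infinite Gaussian mixture inference under a Chinese restaurant process prior.

```python
import numpy as np

class ContextManager:
    """Simplified online context detection: nearest-centre assignment with a
    novelty threshold, standing in for the Bayesian infinite-mixture inference."""

    def __init__(self, novelty_threshold=2.0):
        self.centres = []                 # one running mean of environment features per context
        self.counts = []
        self.tau = novelty_threshold

    def infer(self, env_features):
        f = np.asarray(env_features, dtype=float)
        if self.centres:
            dists = [np.linalg.norm(f - c) for c in self.centres]
            k = int(np.argmin(dists))
            if dists[k] < self.tau:       # existing context: update its running mean
                self.counts[k] += 1
                self.centres[k] += (f - self.centres[k]) / self.counts[k]
                return k, False
        self.centres.append(f.copy())     # novel dynamics: instantiate a new context
        self.counts.append(1)
        return len(self.centres) - 1, True  # caller expands the policy network with a new head
```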
Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results achieved, the deep neural network-based backbone is widely deemed a black box that prevents practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential. To alleviate this issue, a large body of literature has been devoted to shedding light on the inner workings of intelligent agents, by constructing intrinsic interpretability or post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy in which prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to promote the learning efficiency and performance of agents, a kind of method often ignored in the XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open source codes are collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its superior expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have manifested their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL by transformer (transformer-based RL, or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, which model agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which is able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals about future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
Deep reinforcement learning (DRL) and deep multi-agent reinforcement learning (MARL) have achieved great success across a wide range of domains, including game AI, autonomous vehicles, robotics, and so on. However, DRL and deep MARL agents are widely known to be sample inefficient: millions of interactions are usually required even for relatively simple problem settings, which prevents wide application and deployment in real-world scenarios. One bottleneck challenge behind this is the well-known exploration problem, i.e., how to efficiently explore the environment and collect informative experiences so that policy learning can benefit towards optimality. This problem becomes more challenging in complex environments with sparse rewards, noisy distractions, long horizons, and non-stationary co-learners. In this paper, we conduct a comprehensive survey of existing exploration methods for both single-agent and multi-agent RL. We start the survey by identifying several key challenges to efficient exploration, and then systematically review existing approaches by classifying them into two major branches: uncertainty-oriented exploration and intrinsic motivation-oriented exploration. Beyond these two main branches, we also include other notable exploration methods with different ideas and techniques. In addition to the algorithmic analysis, we provide a comprehensive and unified empirical comparison of different exploration methods for DRL on a set of commonly used benchmarks. Based on our algorithmic and empirical investigation, we finally summarize the open problems of exploration in DRL and deep MARL and point out a few future directions.
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
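MOPO's central mechanism is training on model rollouts whose rewards are penalized by the dynamics uncertainty, i.e., r~(s, a) = r(s, a) − λ·u(s, a). The sketch below uses the largest predictive standard deviation across a probabilistic ensemble as u(s, a), which is one common instantiation; the ensemble interface and the value of λ are assumptions.

```python
import numpy as np

def penalized_reward(reward, state, action, ensemble, lam=1.0):
    """MOPO-style penalized reward: r~(s, a) = r(s, a) - lam * u(s, a).

    `ensemble` is assumed to expose per-member predictive standard deviations
    for the next state; taking the largest std norm is one common uncertainty
    heuristic, not the only choice.
    """
    stds = ensemble.predict_stds(state, action)       # shape: (n_members, state_dim)
    u = float(np.max(np.linalg.norm(stds, axis=-1)))  # largest predicted std norm
    return reward - lam * u
```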
In real-world scenarios, perceived signals are often high-dimensional and noisy, and finding and using representations that contain the necessary and sufficient information for the downstream decision-making task helps improve both computational efficiency and generalization ability. In this paper, we focus on partially observable environments and propose to learn a minimal set of state representations that capture sufficient information for decision-making, termed Action-Sufficient state Representations (ASRs). We build a generative environment model over the structural relationships among variables in the system and present a principled way to characterize ASRs based on structural constraints and the objective of maximizing cumulative reward in policy learning. We then develop a structured sequential variational auto-encoder to estimate the environment model and extract ASRs. Our empirical results on CarRacing and VizDoom demonstrate a clear advantage of learning and using ASRs for policy learning. Moreover, the estimated environment model and ASRs allow learning behaviors from imagined outcomes in a compact latent space to improve sample efficiency.
Reinforcement learning (RL) gained considerable attention by creating decision-making agents that maximize rewards received from fully observable environments. However, many real-world problems are partially or noisily observable by nature, where agents do not receive the true and complete state of the environment. Such problems are formulated as partially observable Markov decision processes (POMDPs). Some studies applied RL to POMDPs by recalling previous decisions and observations or inferring the true state of the environment from received observations. Nevertheless, aggregating observations and decisions over time is impractical for environments with high-dimensional continuous state and action spaces. Moreover, so-called inference-based RL approaches require a large number of samples to perform well, since agents eschew uncertainty in the inferred state during decision-making. Active inference is a framework that is naturally formulated in POMDPs and directs agents to select decisions by minimising expected free energy (EFE). This supplements the reward-maximising (exploitative) behaviour of RL with an information-seeking (exploratory) behaviour. Despite this exploratory behaviour of active inference, its usage is limited to discrete state and action spaces due to the computational difficulty of the EFE. We propose a unified principle for joint information-seeking and reward maximization that clarifies a theoretical connection between active inference and RL, unifies active inference and RL, and overcomes their aforementioned limitations. Our findings are supported by strong theoretical analysis. The proposed framework's superior exploration property is also validated by experimental results on partially observable tasks with high-dimensional continuous state and action spaces. Moreover, the results show that our model solves reward-free problems, making task reward design optional.
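For reference, one common single-step decomposition of the expected free energy of a policy π separates an extrinsic (preference/reward-seeking) term from an epistemic (information-seeking) term; minimizing G therefore yields exactly the joint exploitation/exploration behaviour described above. The unified objective derived in the paper may differ in form from this textbook decomposition.

```latex
G_\tau(\pi)
  = -\,\underbrace{\mathbb{E}_{q(o_\tau \mid \pi)}\!\left[\ln p(o_\tau \mid C)\right]}_{\text{extrinsic (exploitative) value}}
    \; - \;\underbrace{\mathbb{E}_{q(o_\tau \mid \pi)}\!\left[ D_{\mathrm{KL}}\!\left( q(s_\tau \mid o_\tau, \pi) \,\|\, q(s_\tau \mid \pi) \right)\right]}_{\text{epistemic (exploratory) value}}
```

Here p(o_τ | C) encodes prior preferences over observations (playing the role of reward), and the KL term is the expected information gain about hidden states.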
The strong learning ability of deep neural networks enables reinforcement learning agents to learn competent control policies directly from continuous environments. In theory, to achieve stable performance, neural networks assume i.i.d. inputs, which unfortunately does not hold in the general reinforcement learning paradigm, where the training data are temporally correlated and non-stationary. This issue may lead to the phenomenon of "catastrophic interference" and collapsed performance. In this paper, we propose IQ, i.e., interference-aware deep Q-learning, to mitigate catastrophic interference in single-task deep reinforcement learning. Specifically, we resort to online clustering to achieve online context division, together with a multi-head network and a knowledge distillation regularization term for preserving the policies of learned contexts. Built on deep Q-networks, IQ consistently improves stability and performance compared with existing methods, as verified by extensive experiments on classic control and Atari tasks. The code is publicly available at: https://github.com/sweety-dm/interference-aware-deep-q-learning.
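A minimal sketch of the two architectural ingredients named above, assuming a shared torso with one Q-head per detected context and a distillation term that anchors previously learned heads to a frozen snapshot; the online clustering that selects the active head is omitted, and all sizes are illustrative.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadQNet(nn.Module):
    """Shared torso with one Q-value head per detected context."""
    def __init__(self, obs_dim, n_actions, n_heads, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, n_actions) for _ in range(n_heads)])

    def forward(self, obs, head_idx):
        return self.heads[head_idx](self.torso(obs))

def distillation_loss(net, frozen_net, obs, old_heads):
    """Keep Q-values of previously learned contexts close to a frozen snapshot."""
    loss = 0.0
    with torch.no_grad():
        targets = [frozen_net(obs, k) for k in old_heads]
    for k, tgt in zip(old_heads, targets):
        loss = loss + F.mse_loss(net(obs, k), tgt)
    return loss

# Hypothetical usage: snapshot the network whenever a new context is instantiated.
net = MultiHeadQNet(obs_dim=8, n_actions=4, n_heads=3)
frozen = copy.deepcopy(net).eval()
obs = torch.randn(32, 8)
reg = distillation_loss(net, frozen, obs, old_heads=[0, 1])
```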
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample-efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal. We propose a sample-efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 194.3% mean human performance and 109.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience, and outperforms state SAC on some tasks of the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with so little data. EfficientZero's performance is also close to DQN's performance at 200 million frames, while we consume 500 times less data. EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner, and it is available at https://github.com/YeWR/EfficientZero. We hope it will accelerate the research of MCTS-based RL algorithms in the broader community.
Model-based reinforcement learning (RL) achieves higher sample efficiency in practice than model-free RL by learning a dynamics model to generate samples for policy learning. Previous works learn a "global" dynamics model to fit the state-action visitation distribution of all historical policies. However, in this paper we find that learning a global dynamics model does not necessarily benefit model prediction for the current policy, since the policy in use is constantly evolving. The evolving policy during training causes state-action visitation distribution shifts. We theoretically analyze how the distribution of historical policies affects model learning and model rollouts. We then propose a novel model-based RL method, named Policy-adapted Model-based Actor-Critic (PMAC), which learns a policy-adapted dynamics model based on a policy adaptation mechanism. This mechanism dynamically adjusts the historical policy mixture distribution to ensure that the learned model can continually adapt to the state-action visitation distribution of the evolving policy. Experiments on a range of continuous control environments in MuJoCo show that PMAC achieves state-of-the-art asymptotic performance and almost two times higher sample efficiency than prior model-based methods.
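The abstract does not detail how the historical policy mixture is adjusted, so the sketch below uses a simple recency-based exponential weighting over per-iteration data buffers as an illustrative stand-in for the policy adaptation mechanism when sampling data for model training.

```python
import numpy as np

def policy_mixture_weights(n_iterations, decay=0.8):
    """Weight data from historical policy iterations, favouring recent ones."""
    w = decay ** np.arange(n_iterations - 1, -1, -1)   # oldest ... newest
    return w / w.sum()

def sample_model_training_batch(buffers_per_iteration, batch_size, decay=0.8):
    """Draw transitions for model learning according to the adapted mixture.

    `buffers_per_iteration` is assumed to be a list (one per policy iteration)
    of lists of transitions.
    """
    w = policy_mixture_weights(len(buffers_per_iteration), decay)
    counts = np.random.multinomial(batch_size, w)
    batch = []
    for buf, c in zip(buffers_per_iteration, counts):
        if len(buf) and c:
            idx = np.random.randint(0, len(buf), size=min(c, len(buf)))
            batch.extend(buf[i] for i in idx)
    return batch
```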
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data and therefore constitutes a promising approach for real-world applications such as automated driving. Self-driving vehicles (SDVs) learn a policy that may even outperform the behavior in suboptimal datasets. Especially in safety-critical applications such as automated driving, interpretability and transferability are key to success. This motivates the use of model-based offline RL approaches, which leverage planning. However, current state-of-the-art methods often neglect the influence of aleatoric uncertainty arising from the stochastic behavior of multi-agent systems. This work proposes a novel approach for uncertainty-aware model-based offline reinforcement learning leveraging planning (UMBRELLA), which solves the prediction, planning, and control problem of the SDV jointly in an interpretable, learning-based fashion. A trained action-conditioned stochastic dynamics model captures distinctively different future evolutions of the traffic scene. The analysis provides empirical evidence for the effectiveness of our approach in challenging automated driving simulations and on a real-world public dataset.
Generalizing model-based reinforcement learning (MBRL) methods to environments with unseen transition dynamics is an important yet challenging problem. Existing methods try to extract environment-specific information $z$ from past transition segments to make the dynamics prediction model generalizable to different dynamics. However, because environments are not labeled, the extracted information inevitably contains redundant information unrelated to the dynamics in the transition segments and thus fails to maintain a crucial property of $z$: $z$ should be similar in the same environment and dissimilar in different ones. As a result, the learned dynamics prediction function deviates from its true generalization ability. To address this issue, we introduce an interventional prediction module to estimate the probability that two estimates $\hat{z}_i, \hat{z}_j$ belong to the same environment. Furthermore, by utilizing the invariance of $z$ within a single environment, a relational head is proposed to enforce the similarity between $\hat{z}$ estimates from the same environment. As a result, the redundant information in $\hat{z}$ is reduced. We empirically show that $\hat{z}$ estimated by our method contains less redundant information than that of previous methods, and such $\hat{z}$ can significantly reduce dynamics prediction errors and improve the performance of model-based RL methods on zero-shot new environments with unseen dynamics. The code for this method is available at https://github.com/cr-gjx/ria.
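A compact sketch of the two components named above, under assumed shapes and losses: a relational/intervention-style head that scores whether two context estimates come from the same environment, and a similarity term that pulls together estimates known to share an environment. This is an illustration of the idea, not RIA's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationHead(nn.Module):
    """Predicts the probability that two context estimates z_i, z_j were
    produced by transition segments from the same environment."""
    def __init__(self, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z_i, z_j):
        return torch.sigmoid(self.net(torch.cat([z_i, z_j], dim=-1)))

def relational_loss(head, z_i, z_j, same_env):
    """Binary cross-entropy on the same-environment prediction."""
    p = head(z_i, z_j).squeeze(-1)
    return F.binary_cross_entropy(p, same_env.float())

def same_env_similarity_loss(z_i, z_j):
    """Pull together estimates from segments known to share an environment."""
    return (z_i - z_j).pow(2).sum(dim=-1).mean()
```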
Learning to collaborate is critical in multi-agent reinforcement learning (MARL). Previous works promote collaboration by maximizing the correlation of agents' behaviors, which is typically characterized by mutual information (MI) in different forms. However, we reveal that strong correlations can also emerge from suboptimal collaborative behaviors, and simply maximizing the MI can hinder learning towards better collaboration. To address this issue, we propose a novel MARL framework, called Progressive Mutual Information Collaboration (PMIC), for more effective MI-driven collaboration. PMIC uses a new collaboration criterion measured by the MI between the global state and joint actions. Based on this criterion, the key idea of PMIC is to maximize the MI associated with superior collaborative behaviors and minimize the MI associated with inferior ones. The two MI objectives play complementary roles: they facilitate better collaboration while avoiding falling into suboptimal behaviors. Experiments on a wide range of MARL benchmarks show the superior performance of PMIC compared with other algorithms.
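One standard way to realize the dual objective described above is to pair an MI lower bound, maximized on state/joint-action pairs from superior trajectories, with an MI upper bound, minimized on pairs from inferior ones. The sketch below uses an InfoNCE-style lower bound and a CLUB-style upper bound with a unit-variance Gaussian variational model; the particular estimators, networks, and buffers in PMIC may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearCritic(nn.Module):
    """Score f(s, a) used in the InfoNCE-style lower bound."""
    def __init__(self, s_dim, a_dim, hidden=64):
        super().__init__()
        self.fs = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.fa = nn.Sequential(nn.Linear(a_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, s, a):
        return self.fs(s) @ self.fa(a).t()            # (B, B) score matrix

def infonce_lower_bound(critic, s, a):
    """MI lower bound on (global state, joint action) pairs from superior trajectories."""
    logits = critic(s, a)
    labels = torch.arange(s.size(0), device=s.device)
    return -F.cross_entropy(logits, labels)           # larger = more MI

def club_upper_bound(mean_net, s, a):
    """CLUB-style MI upper bound with a unit-variance Gaussian q(a|s).

    In practice `mean_net` is also trained to fit q(a|s) by maximizing the
    `positive` term on the same data.
    """
    mu = mean_net(s)                                              # (B, a_dim)
    positive = -0.5 * ((a - mu) ** 2).sum(dim=-1)                 # log q(a_i | s_i) up to a constant
    pairwise = -0.5 * ((a.unsqueeze(0) - mu.unsqueeze(1)) ** 2).sum(dim=-1)  # [i, j] = log q(a_j | s_i)
    return (positive - pairwise.mean(dim=1)).mean()

def pmic_style_loss(critic, mean_net, sup_s, sup_a, inf_s, inf_a, beta=0.5):
    """Maximize MI on superior behaviour, minimize an upper bound on inferior behaviour."""
    return -infonce_lower_bound(critic, sup_s, sup_a) + beta * club_upper_bound(mean_net, inf_s, inf_a)

# Hypothetical usage with assumed dimensions.
critic = BilinearCritic(s_dim=10, a_dim=4)
mean_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))
sup_s, sup_a = torch.randn(32, 10), torch.randn(32, 4)
inf_s, inf_a = torch.randn(32, 10), torch.randn(32, 4)
loss = pmic_style_loss(critic, mean_net, sup_s, sup_a, inf_s, inf_a)
```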