Combining behaviors in a prioritized, ordered manner is desirable because it allows for modular design and facilitates data reuse through knowledge transfer. In control theory, prioritized composition is realized via null-space control, where low-priority control actions are projected into the null space of higher-priority control actions. Such a method is currently unavailable for reinforcement learning. We propose a novel, task-prioritized composition framework for reinforcement learning, which involves a new concept: the indifference space of reinforcement-learning policies. Our framework has the potential to facilitate knowledge transfer and modular design while substantially increasing data efficiency and data reuse for reinforcement-learning agents. Furthermore, our approach can ensure high-priority constraint satisfaction, which makes it promising for learning in safety-critical domains such as robotics. Unlike null-space control, our approach allows learning globally optimal policies for the compound task by means of online learning in the indifference space of the higher-level policies after the initial compound policy has been constructed.
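For context, the null-space composition from control theory that this abstract contrasts with can be written in a few lines. The sketch below is a generic illustration of that classical construction under a hypothetical task Jacobian, not the proposed RL framework or its indifference-space construction.

```python
# Minimal sketch of classical null-space control (the baseline the abstract
# contrasts with), not the proposed RL framework. Assumes a high-priority
# task Jacobian J and a low-priority joint-velocity command qdot_low.
import numpy as np

def null_space_compose(J, qdot_high, qdot_low):
    """Project the low-priority command into the null space of J."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector of J
    return qdot_high + N @ qdot_low       # low priority cannot disturb the high-priority task

# Toy example: 2-DoF arm, high-priority task constrains one task-space direction.
J = np.array([[1.0, 0.5]])                        # 1x2 task Jacobian (hypothetical)
qdot_high = np.linalg.pinv(J) @ np.array([0.2])   # track task velocity 0.2
qdot_low = np.array([0.1, -0.3])                  # secondary preference
print(null_space_compose(J, qdot_high, qdot_low))
```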
In reinforcement learning (RL), the ability to utilize prior knowledge from previously solved tasks can allow agents to quickly solve new problems. In some cases, these new problems may be approximately solved by composing the solutions of previously solved primitive tasks (task composition). Otherwise, prior knowledge can be used to adjust the reward function for a new problem, in a way that leaves the optimal policy unchanged but enables quicker learning (reward shaping). In this work, we develop a general framework for reward shaping and task composition in entropy-regularized RL. To do so, we derive an exact relation connecting the optimal soft value functions for two entropy-regularized RL problems with different reward functions and dynamics. We show how the derived relation leads to a general result for reward shaping in entropy-regularized RL. We then generalize this approach to derive an exact relation connecting optimal value functions for the composition of multiple tasks in entropy-regularized RL. We validate these theoretical contributions with experiments showing that reward shaping and task composition lead to faster learning in various settings.
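As background for the reward-shaping idea invoked here, the following is a minimal sketch of classical potential-based shaping, which modifies the reward while leaving optimal policies unchanged; the abstract derives analogous relations for the entropy-regularized setting. The potential function `phi` below is a hypothetical example.

```python
# Minimal sketch of potential-based reward shaping, the classical special case
# of the reward-shaping idea discussed above. `phi` is a hypothetical
# state potential supplied by prior knowledge; gamma is the discount factor.
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Return r + gamma * phi(s') - phi(s); optimal policies are preserved."""
    return r + gamma * phi(s_next) - phi(s)

# Example: distance-to-goal potential on a 1D chain with goal state 10.
phi = lambda s: -abs(10 - s)
print(shaped_reward(r=0.0, s=3, s_next=4, phi=phi))  # positive shaping bonus toward the goal
```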
Reinforcement learning (RL) has demonstrated its ability to solve high-dimensional tasks by leveraging non-linear function approximators. However, these successes are mostly achieved by "black-box" policies in simulated domains. When deploying RL to the real world, several concerns may be raised regarding the use of a "black-box" policy. To make the learned policies more transparent, we propose a policy iteration scheme that retains a complex function approximator for its internal value predictions but constrains the policy to have a concise, hierarchical, and human-readable structure based on a mixture of interpretable experts. Each expert selects a primitive action according to its distance to a prototypical state. A key design decision for keeping these experts interpretable is to select the prototypical states from trajectory data. The main technical contribution of this paper is addressing the challenges introduced by this non-differentiable prototypical-state selection procedure. We show that our proposed algorithm can learn compelling policies on continuous-action deep RL benchmarks, matching the performance of neural-network-based policies, while returning policies that are more amenable to human inspection than neural-network or linear policies.
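A rough sketch of the kind of prototype-gated structure described above, assuming each expert is tied to a prototypical state and emits a primitive action; the nearest-prototype gating and all names are illustrative simplifications, not the paper's exact formulation.

```python
# Illustrative sketch of a prototype-gated policy: each "expert" stores a
# prototypical state and a primitive action; the closest prototype decides.
# This mirrors the structure described in the abstract, not its exact algorithm.
import numpy as np

class PrototypePolicy:
    def __init__(self, prototypes, actions):
        self.prototypes = np.asarray(prototypes)   # (K, state_dim)
        self.actions = np.asarray(actions)         # (K, action_dim)

    def act(self, state):
        d = np.linalg.norm(self.prototypes - state, axis=1)  # distance to each prototype
        return self.actions[np.argmin(d)]                    # action of the nearest expert

policy = PrototypePolicy(prototypes=[[0.0, 0.0], [1.0, 1.0]],
                         actions=[[-1.0], [1.0]])
print(policy.act(np.array([0.9, 0.8])))  # -> action of the second expert
```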
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. We categorize and analyze two approaches to Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite/infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as to suggest future directions for Safe Reinforcement Learning.
Safety is a crucial property of every robotic platform: any control policy should always comply with actuator limits and avoid collisions with the environment and with humans. In reinforcement learning, safety is even more fundamental for exploring an environment without causing any damage. While there are many proposed solutions to the safe-exploration problem, only a few of them can deal with the complexity of the real world. This paper introduces a new formulation of safe exploration for reinforcement learning of a variety of robotic tasks. Our approach applies to a broad class of robotic platforms and enforces safety even under complex collision constraints learned from data, by exploring the tangent space of the constraint manifold. Our proposed approach achieves state-of-the-art performance in simulated high-dimensional and dynamic tasks while avoiding collisions with the environment. We demonstrate safe real-world deployment of our learning approach on a Tiago++ robot, achieving remarkable performance in manipulation and human-robot interaction tasks.
In many real-world applications, reinforcement learning (RL) agents may have to solve multiple tasks, each typically modeled by a reward function. If the reward functions are expressed linearly, and the agent has previously learned a set of policies for different tasks, successor features (SFs) can be exploited to combine such policies and identify reasonable solutions to new problems. However, the identified solutions are not guaranteed to be optimal. We introduce a novel algorithm that addresses this limitation. It allows RL agents to combine existing policies and directly identify optimal policies for arbitrary new problems, without requiring any further interaction with the environment. We first show (under mild assumptions) that the transfer-learning problem tackled by SFs is equivalent to the problem of learning to optimize multiple objectives in RL. We then introduce an SF-based extension of the Optimistic Linear Support algorithm to learn a set of policies whose SFs form a convex coverage set. We prove that policies in this set can be combined via generalized policy improvement to construct optimal behaviors for any new expressible task, without any additional training samples. We show empirically that, under value-function approximation, our method outperforms state-of-the-art competing algorithms in both discrete and continuous domains.
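A compact sketch of the successor-feature and generalized policy improvement (GPI) machinery this abstract builds on: with linear rewards, each stored policy's action values on a new task are recovered as the dot product of its successor features with the new task's weight vector, and GPI acts greedily with respect to the maximum over stored policies. Array shapes and names below are hypothetical.

```python
# Minimal sketch of generalized policy improvement (GPI) over successor
# features (SFs), the building block the abstract extends. psi[i, s, a] is the
# SF vector of a previously learned policy pi_i; w is the new task's weights.
import numpy as np

def gpi_action(psi, w, s):
    """Q_i(s, a) = psi_i(s, a) . w ; act greedily w.r.t. max_i Q_i(s, a)."""
    q = psi[:, s, :, :] @ w          # shape (num_policies, num_actions)
    return int(np.argmax(q.max(axis=0)))

num_policies, num_states, num_actions, feat_dim = 2, 3, 4, 5
rng = np.random.default_rng(0)
psi = rng.normal(size=(num_policies, num_states, num_actions, feat_dim))
w_new = rng.normal(size=feat_dim)    # reward weights of a new, unseen task
print(gpi_action(psi, w_new, s=1))
```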
In this paper, we study the learning of safe policies for reinforcement learning problems. That is, we aim to control a Markov decision process (MDP) whose transition probabilities we do not know, but to which we have access through experience via sampled trajectories. We define safety as the agent remaining within a desired safe set with high probability during the operation time. We therefore consider a constrained MDP in which the constraints are probabilistic. Since there is no direct way to optimize a policy with respect to probabilistic constraints in a reinforcement learning framework, we propose an ergodic relaxation of the problem. The advantages of the proposed relaxation are threefold. (i) The safety guarantees are preserved for episodic tasks, and they hold up to a given time horizon for continuing tasks. (ii) If the parametrization of the policy is rich enough, the constrained optimization problem has an arbitrarily small duality gap despite its non-convexity. (iii) The gradients of the Lagrangian associated with the safe-learning problem can be computed easily using standard policy-gradient results and stochastic approximation tools. Exploiting these advantages, we establish that primal-dual algorithms are able to find safe and optimal policies. We test the proposed approach on a navigation task in a continuous domain. The numerical results show that our algorithm is able to dynamically adapt the policy to the environment and to the required level of safety.
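A minimal sketch of the primal-dual scheme referenced above, on a toy scalar problem rather than a policy-gradient setting: ascend the Lagrangian in the policy parameter and descend in the projected, non-negative multiplier. All names and the toy objective are illustrative assumptions.

```python
# Schematic primal-dual update for a constrained problem
#   max_theta V_r(theta)  s.t.  V_c(theta) >= c_min,
# via the Lagrangian L(theta, lam) = V_r(theta) + lam * (V_c(theta) - c_min).
# grad_Vr / grad_Vc stand in for policy-gradient estimates; all names are
# illustrative, not the paper's implementation.
def primal_dual_step(theta, lam, grad_Vr, grad_Vc, Vc, c_min,
                     lr_theta=1e-2, lr_lam=1e-2):
    theta = theta + lr_theta * (grad_Vr(theta) + lam * grad_Vc(theta))  # ascent in theta
    lam = max(0.0, lam - lr_lam * (Vc(theta) - c_min))                  # projected descent in lam
    return theta, lam

# Toy usage: V_r(theta) = -theta^2, V_c(theta) = theta, constraint theta >= 0.5.
theta, lam = 0.0, 1.0
for _ in range(2000):
    theta, lam = primal_dual_step(
        theta, lam,
        grad_Vr=lambda t: -2.0 * t, grad_Vc=lambda t: 1.0,
        Vc=lambda t: t, c_min=0.5)
print(round(theta, 2), round(lam, 2))   # theta settles near the constraint boundary 0.5
```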
Reinforcement-learning agents seek to maximize a reward signal through environmental interactions. As humans, our contribution to the learning process is through designing the reward function. Like programmers, we have a behavior in mind and have to translate it into a formal specification, namely rewards. In this work, we consider the reward-design problem in tasks formulated as reaching desirable states and avoiding undesirable states. To start, we propose a strict partial ordering of the policy space. We prefer policies that reach the good states faster and with higher probability while avoiding the bad states longer. Next, we propose an environment-independent tiered reward structure and show it is guaranteed to induce policies that are Pareto-optimal according to our preference relation. Finally, we empirically evaluate tiered reward functions on several environments and show they induce desired behavior and lead to fast learning.
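A minimal sketch of what a tiered reward for a reach-avoid task can look like; the specific tier values below are hypothetical and not the paper's prescribed construction, which derives conditions on the tiers that guarantee Pareto-optimal behavior.

```python
# Illustrative tiered reward for a reach-avoid task in the spirit of the
# abstract: goal states get the top tier, ordinary states a small step cost,
# bad states a strongly negative bottom tier. The specific values below are
# hypothetical, not the paper's prescribed construction.
def tiered_reward(state, goal_states, bad_states):
    if state in goal_states:
        return 0.0      # top tier: reaching a good state
    if state in bad_states:
        return -100.0   # bottom tier: entering a bad state
    return -1.0         # middle tier: per-step cost pushes toward faster goal-reaching

print(tiered_reward(3, goal_states={9}, bad_states={5}))   # -> -1.0
```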
In various control task domains, existing controllers provide a baseline level of performance that, while possibly suboptimal, should be maintained. Reinforcement learning (RL) algorithms that rely on extensive exploration of the state and action space can be used to optimize a control policy. However, fully exploratory RL algorithms may decrease performance below the baseline level during training. In this paper, we address the problem of online optimization of a control policy while minimizing regret with respect to the baseline policy's performance. We propose a joint imitation-reinforcement learning framework, denoted JIRL. The learning process in JIRL assumes the availability of a baseline policy and is designed with two objectives: (a) to leverage the baseline's online demonstrations to minimize regret with respect to the baseline policy during training, and (b) to eventually surpass the baseline performance. JIRL addresses these objectives by initially learning to imitate the baseline policy and gradually shifting control from the baseline to the RL agent. Experimental results show that JIRL effectively achieves these objectives in several continuous-action-space domains. The results demonstrate that JIRL is comparable to state-of-the-art algorithms in final performance while incurring lower baseline regret during training in all of the presented domains. Moreover, the results show a reduction in baseline regret by a factor of up to 21 over a state-of-the-art baseline-regret-minimization approach.
Hierarchical Reinforcement Learning (HRL) algorithms have been demonstrated to perform well on high-dimensional decision making and robotic control tasks. However, because they solely optimize for rewards, the agent tends to search the same space redundantly. This problem reduces both the speed of learning and the achieved reward. In this work, we present an off-policy HRL algorithm that maximizes entropy for efficient exploration. The algorithm learns a temporally abstracted low-level policy and is able to explore broadly through the addition of entropy to the high level. The novelty of this work is the theoretical motivation for adding entropy to the RL objective in the HRL setting. We empirically show that entropy can be added to both levels if the Kullback-Leibler (KL) divergence between consecutive updates of the low-level policy is sufficiently small. We performed an ablative study to analyze the effects of entropy on the hierarchy, in which adding entropy to the high level emerged as the most desirable configuration. Furthermore, a higher temperature in the low level leads to Q-value overestimation and increases the stochasticity of the environment that the high level operates on, making learning more challenging. Our method, SHIRO, surpasses state-of-the-art performance on a range of simulated robotic control benchmark tasks and requires minimal tuning.
Solving complex problems with reinforcement learning requires decomposing the problem, either explicitly or implicitly, into manageable tasks and learning policies that solve them. These policies, in turn, must be controlled by an overarching policy that makes higher-level decisions. This requires the training algorithm to take this hierarchical decision structure into account when learning the policies. In practice, however, training may generalize poorly, with sub-policies either executing actions for only a few time steps or collapsing into a single policy altogether. In our work, we introduce an alternative approach that learns such skills sequentially without using an overarching hierarchical policy. We propose this approach in the context of environments in which a major component of the learning agent's objective is to prolong the episode for as long as possible. We refer to our proposed method as the Sequential Option Critic. We demonstrate the utility of our approach on navigation and goal-based tasks in a flexible simulated 3D navigation environment that we developed. We also show that our method outperforms prior approaches such as Soft Actor-Critic and Soft Option Critic in our environment, as well as in the Gym self-driving car simulator and the Atari River Raid environment.
While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting in a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed as performing approximate inference on the corresponding energy-based model.
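For a discrete action set, the Boltzmann policy and soft value that soft Q-learning is built around take the simple form sketched below; the continuous-action case treated in the abstract additionally needs the amortized Stein variational sampler, which is omitted here.

```python
# Minimal sketch of the energy-based / Boltzmann policy form behind soft
# Q-learning for a discrete action set: pi(a|s) is proportional to
# exp(Q(s,a)/alpha) and V(s) = alpha * log sum_a exp(Q(s,a)/alpha).
import numpy as np

def soft_value(q, alpha=1.0):
    return alpha * np.log(np.sum(np.exp(q / alpha)))

def boltzmann_policy(q, alpha=1.0):
    z = np.exp((q - q.max()) / alpha)      # max-shift for numerical stability
    return z / z.sum()

q = np.array([1.0, 2.0, 0.5])              # toy action values for one state
print(boltzmann_policy(q, alpha=0.5), soft_value(q, alpha=0.5))
```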
Reinforcement learning has shown great potential for solving complex, contact-rich robot manipulation tasks. However, safety is a critical concern when using RL in the real world, since unexpected dangerous collisions may occur during training or when the RL policy is imperfect and encounters unseen situations. In this paper, we propose a contact-safe reinforcement learning framework for contact-rich robot manipulation that maintains safety in both the task space and the joint space. When the RL policy causes an unexpected collision between the robot arm and the environment, our framework is able to detect the collision immediately and keep the contact force small. Furthermore, the end-effector is made to perform contact-rich tasks compliantly while remaining robust to external disturbances. We train the RL policy in simulation and transfer it to the real robot. Real-world experiments on a robot wiping task show that our method keeps the contact force small in both the task space and the joint space, even when the policy encounters unseen situations, while rejecting disturbances on the main task.
Reinforcement learning (RL) for continuous control typically employs distributions whose support covers the entire action space. In this work, we investigate the colloquially known phenomenon that trained agents often prefer actions at the boundaries of that space. We draw theoretical connections to the emergence of bang-bang behavior in optimal control and provide extensive empirical evaluation across a variety of recent RL algorithms. We replace the usual Gaussian with a Bernoulli distribution that only considers the extremes along each action dimension, i.e., a bang-bang controller. Surprisingly, this achieves state-of-the-art performance on several continuous control benchmarks, in contrast to robotic hardware, where energy and maintenance costs affect controller choices. Since exploration, learning, and the final solution are entangled in RL, we provide additional imitation learning experiments to reduce the impact of exploration on our analysis. Finally, we show that our observations generalize to environments that aim to model real-world challenges, and we evaluate factors for mitigating bang-bang solutions. Our findings emphasize challenges for benchmarking continuous control algorithms, particularly in light of potential real-world applications.
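A minimal sketch of the bang-bang policy head described above: each action dimension is drawn from a Bernoulli distribution over the two extremes of the action box instead of a Gaussian. Parameter names and shapes are illustrative.

```python
# Sketch of a bang-bang policy head: instead of a Gaussian over the action
# box, sample each dimension independently from a Bernoulli over the two
# extremes {low, high}. Shapes and names are illustrative.
import numpy as np

def bang_bang_sample(p_high, low, high, rng):
    """p_high[i] is the probability of picking the upper bound in dimension i."""
    pick_high = rng.random(p_high.shape) < p_high
    return np.where(pick_high, high, low)

rng = np.random.default_rng(0)
p_high = np.array([0.9, 0.2, 0.5])          # per-dimension Bernoulli parameters
low, high = np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])
print(bang_bang_sample(p_high, low, high, rng))
```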
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Finding different solutions to the same problem is a key aspect of intelligence associated with creativity and adaptation to novel situations. In reinforcement learning, a set of diverse policies can be useful for exploration, transfer, hierarchy, and robustness. We propose Diverse Successive Policies, a method for discovering policies that are diverse in the space of successor features while ensuring that they remain near optimal. We formalize the problem as a constrained Markov decision process (CMDP), where the goal is to find policies that maximize diversity, characterized by an intrinsic diversity reward, while remaining near optimal with respect to the extrinsic reward of the MDP. We also analyze the performance of recently proposed robustness and discrimination rewards and find that they are sensitive to the initialization of the procedure and may converge to sub-optimal solutions. To alleviate this, we propose new explicit diversity rewards that aim to minimize the correlation between the successor features of the policies in the set. We compare the different diversity mechanisms in the DeepMind Control Suite and find that the type of explicit diversity we propose is important for discovering distinct behaviors, such as different locomotion patterns.
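As a rough illustration of the "minimize correlation between successor features" idea, the sketch below scores a candidate policy by its negative mean cosine similarity to the successor features of policies already in the set; the exact functional form used in the paper may differ, so treat this as an assumption.

```python
# Illustrative diversity score in the spirit of the abstract: penalize how
# correlated a candidate policy's (expected) successor features are with those
# of policies already in the set. The cosine-similarity form is an assumption
# for illustration, not necessarily the paper's exact reward.
import numpy as np

def diversity_score(psi_new, psi_set):
    """Higher is more diverse: negative mean cosine similarity to the set."""
    sims = [np.dot(psi_new, p) / (np.linalg.norm(psi_new) * np.linalg.norm(p))
            for p in psi_set]
    return -float(np.mean(sims))

psi_set = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])]
print(diversity_score(np.array([0.0, 1.0, 0.0]), psi_set))  # near-orthogonal -> relatively high score
```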
Reinforcement learning (RL) has proven effective in many scenarios. However, it typically requires exploring a sufficiently large number of state-action pairs, some of which may be unsafe. Its application to safety-critical systems therefore remains a challenge. An increasingly common approach to addressing safety is the addition of a safety layer that projects the RL actions onto a safe set of actions. The difficulty with such frameworks, in turn, is how to effectively couple RL with the safety layer to improve learning performance. In this paper, we frame safety as a differentiable robust-control-barrier-function layer within a model-based RL framework. Moreover, we propose an approach for modularly learning the underlying reward-driven task independently of the safety constraints. We demonstrate that this approach both ensures safety and effectively guides exploration during training in a series of experiments, including zero-shot transfer when the reward is learned in a modular way.
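A minimal sketch of a safety layer in the spirit of a control-barrier-function filter: the RL action is minimally corrected so that a single affine safety condition holds, which for one constraint admits the closed-form projection below. A robust, differentiable CBF layer as described in the abstract would instead solve a QP that accounts for model uncertainty; the constraint here is a hypothetical example.

```python
# Minimal sketch of a CBF-style safety filter: minimally perturb the RL action
# u_rl so that a single affine safety condition  a . u >= b  (e.g. a linearized
# dh/dt >= -alpha * h) holds. With one constraint the projection has the closed
# form below; a full robust-CBF layer would solve a QP with model uncertainty.
import numpy as np

def cbf_filter(u_rl, a, b):
    slack = a @ u_rl - b
    if slack >= 0.0:
        return u_rl                         # RL action is already safe
    return u_rl + a * (-slack) / (a @ a)    # minimal-norm correction onto the constraint

a = np.array([1.0, 0.0])                    # hypothetical constraint direction
u_safe = cbf_filter(np.array([-2.0, 0.3]), a, b=-0.5)
print(u_safe)                               # first component raised to -0.5
```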
Learned models and policies can generalize effectively when evaluated within the distribution of the training data, but can produce unpredictable and erroneous outputs on out-of-distribution inputs. To avoid distribution shift when deploying learning-based control algorithms, we seek a mechanism that constrains the agent to states and actions that resemble those it was trained on. In control theory, Lyapunov stability and control-invariant sets allow us to make guarantees about controllers that stabilize the system around specific states, while in machine learning, density models allow us to estimate the training data distribution. Can we combine these two concepts to produce learning-based control algorithms that constrain the system to in-distribution states using only in-distribution actions? In this work, we propose to do exactly that by combining concepts from Lyapunov stability and density estimation, introducing Lyapunov density models: a generalization of control Lyapunov functions and density models that provides guarantees on an agent's ability to stay in distribution over its entire trajectory.
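As a conceptual, one-step illustration of the constraint a Lyapunov density model is meant to enforce (the actual construction provides guarantees over entire trajectories), the sketch below only allows actions whose log-density under a training-data density model stays above a threshold. The density model, candidate actions, and threshold are all illustrative assumptions.

```python
# Conceptual one-step sketch: restrict the agent to actions that keep (s, a)
# in-distribution according to a density model of the training data, then pick
# the best such action by value. Everything here is an illustrative assumption,
# not the paper's Lyapunov density model construction.
import numpy as np

def constrained_action(state, candidates, log_density, q_value, log_rho_min):
    """Pick the highest-value candidate that keeps (s, a) in-distribution."""
    feasible = [a for a in candidates if log_density(state, a) >= log_rho_min]
    if not feasible:
        return max(candidates, key=lambda a: log_density(state, a))  # most in-distribution fallback
    return max(feasible, key=lambda a: q_value(state, a))

candidates = [np.array([u]) for u in np.linspace(-1.0, 1.0, 5)]
log_density = lambda s, a: -float((a[0] - 0.2) ** 2)   # toy density peaked near a = 0.2
q_value = lambda s, a: float(a[0])                      # toy value prefers large actions
print(constrained_action(np.zeros(2), candidates, log_density, q_value, log_rho_min=-0.5))
```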