In off-policy reinforcement learning, a behaviour policy performs exploratory interactions with the environment to obtain state-action-reward samples, which are then used to learn a target policy that optimises the expected return. This leads to the problem of off-policy evaluation, where one needs to evaluate the target policy from samples collected by the often unrelated behaviour policy. Importance sampling is a traditional statistical technique that is often applied to off-policy evaluation. While importance sampling estimators are unbiased, their variance increases exponentially with the horizon of the decision process due to computing the importance weight as a product of action probability ratios, yielding estimates with low accuracy for domains involving long-term planning. This paper proposes state-based importance sampling (SIS), which drops the action probability ratios of sub-trajectories with "negligible states" -- roughly speaking, those for which the chosen actions have no impact on the return estimate -- from the computation of the importance weight. Theoretical results show that this reduces the exponent in the variance upper bound and improves the mean squared error. An automated search algorithm based on covariance testing is proposed to identify a negligible state set that has minimal MSE when performing state-based importance sampling. Experiments are conducted on a lift domain, which includes "lift states" where the action has no impact on the following state and reward. The results demonstrate that using the search algorithm, SIS yields reduced variance and improved accuracy compared to traditional importance sampling, per-decision importance sampling, and incremental importance sampling.
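To make the weighting scheme concrete, here is a minimal Python sketch of trajectory-wise importance sampling with an optional negligible-state set: ordinary IS multiplies the action-probability ratio at every step, while the state-based variant skips the steps whose states are in the given set. The trajectory format, the `pi_e`/`pi_b` callables, and the fact that the negligible set is supplied by the user (rather than found by the covariance-testing search described above) are assumptions of the sketch, not the paper's exact implementation.

```python
import numpy as np

def is_return(trajectory, pi_e, pi_b, gamma=1.0, negligible_states=frozenset()):
    """Off-policy return estimate for one trajectory.

    trajectory: list of (state, action, reward) tuples collected under pi_b.
    pi_e, pi_b: callables giving the probability of `action` in `state`.
    negligible_states: states whose action-probability ratios are dropped
                       from the weight (state-based IS); an empty set recovers
                       ordinary trajectory-wise importance sampling.
    """
    weight, ret = 1.0, 0.0
    for t, (s, a, r) in enumerate(trajectory):
        if s not in negligible_states:          # SIS drops these ratios
            weight *= pi_e(s, a) / pi_b(s, a)
        ret += gamma ** t * r
    return weight * ret

def sis_estimate(trajectories, pi_e, pi_b, gamma=1.0, negligible_states=frozenset()):
    """Average the weighted returns over a batch of behaviour-policy trajectories."""
    return np.mean([is_return(tau, pi_e, pi_b, gamma, negligible_states)
                    for tau in trajectories])
```

With fewer ratios in the product, the weight concentrates closer to one, which is the mechanism behind the reduced variance exponent claimed above.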
In reinforcement learning (RL), an agent must explore an initially unknown environment in order to learn a desired behaviour. When RL agents are deployed in real-world environments, safety is a primary concern. Constrained Markov decision processes (CMDPs) can provide long-term safety constraints; however, the agent may violate the constraints while exploring its environment. This paper proposes a model-based RL algorithm called Explicit Explore, Exploit, or Escape ($E^4$), which extends the Explicit Explore or Exploit ($E^3$) algorithm to a robust CMDP setting. $E^4$ explicitly separates exploitation, exploration, and escape CMDPs, allowing targeted policies for policy improvement over known states, discovery of unknown states, and safe return to known states. $E^4$ robustly optimises these policies on the worst-case CMDP from a set of CMDP models that are consistent with the empirical observations of the deployment environment. Theoretical results show that a near-optimal constrained policy can be found in polynomial time while satisfying safety constraints throughout the learning process. We discuss robust-constrained offline optimisation algorithms, as well as how to incorporate uncertainty in the transition dynamics of unknown states based on empirical inference and prior knowledge.
Many sequential decision-making problems are high-stakes and require off-policy evaluation (OPE) of a new policy using historical data collected by some other policy. One of the most common OPE techniques that provides unbiased estimates is trajectory-based importance sampling (IS). However, due to the high variance of trajectory IS estimates, importance sampling methods based on state-action visitation distributions (SIS) have recently been adopted. Unfortunately, while SIS often provides lower-variance estimates for long horizons, estimating the state-action distribution ratios can be challenging and can lead to biased estimates. In this paper, we take a fresh perspective on this bias-variance trade-off and show the existence of a spectrum of estimators whose endpoints are IS and SIS. In addition, we establish a spectrum for doubly robust and weighted versions of these estimators. We provide empirical evidence that estimators in this spectrum can be used to trade off between the bias and variance of IS and SIS, and can achieve lower mean squared error than both IS and SIS.
In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods: it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang & Li, 2015), and a new way to mix between model-based estimates and importance-sampling-based estimates.
This paper studies the problem of data collection for policy evaluation in Markov decision processes (MDPs). In policy evaluation, we are given a target policy and asked to estimate the expected cumulative reward it will obtain in an environment formalised as an MDP. We develop theory for optimal data collection in tree-structured MDPs by first deriving an oracle data collection strategy that uses knowledge of the variance of the reward distributions. We then introduce the Reduced Variance sampling (ReVar) algorithm, which approximates the oracle strategy when the reward variances are unknown, and bound its sub-optimality compared to the oracle strategy. Finally, we empirically validate that ReVar leads to policy evaluation with mean squared error comparable to that of the oracle strategy and much lower than simply running the target policy.
In many computational science and engineering applications, the output of a system of interest corresponding to a given input can be queried at different levels of fidelity with different costs. Typically, low-fidelity data is cheap and abundant, while high-fidelity data is expensive and scarce. In this work, we study the reinforcement learning (RL) problem in the presence of multiple environments with different levels of fidelity for a given control task. We focus on improving the RL agent's performance with multi-fidelity data. Specifically, a multi-fidelity estimator that exploits the cross-correlation between low- and high-fidelity returns is proposed to reduce the variance of the state-action value function estimates. The proposed estimator, based on the method of control variates, is used to design a multi-fidelity Monte Carlo RL (MFMCRL) algorithm that improves the agent's learning in the high-fidelity environment. The impact of variance reduction on policy evaluation and policy improvement is analysed theoretically using probability bounds. Our theoretical analysis and numerical experiments show that, for a finite budget of high-fidelity data samples, the proposed MFMCRL agent attains superior performance compared with a standard RL agent that uses only high-fidelity environment data to learn the optimal policy.
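As a rough illustration of the control-variate idea just described (not the paper's exact MFMCRL estimator), the sketch below corrects a scarce high-fidelity sample mean using the cross-correlation with paired low-fidelity samples plus a larger pool of cheap low-fidelity samples. The pairing of samples, the empirically estimated optimal coefficient, and the return-level (rather than value-function-level) estimation are all assumptions of this sketch.

```python
import numpy as np

def mf_control_variate_estimate(hi_returns, lo_returns_paired, lo_returns_cheap):
    """Multi-fidelity control-variate estimate of a high-fidelity mean return.

    hi_returns:        few expensive high-fidelity returns.
    lo_returns_paired: low-fidelity returns paired with the same inputs/seeds
                       as hi_returns (used to estimate the cross-correlation).
    lo_returns_cheap:  many additional cheap low-fidelity returns (used to
                       estimate the low-fidelity mean accurately).
    """
    hi = np.asarray(hi_returns, dtype=float)
    lo = np.asarray(lo_returns_paired, dtype=float)
    cheap = np.asarray(lo_returns_cheap, dtype=float)

    # Optimal control-variate coefficient: cov(hi, lo) / var(lo).
    alpha = np.cov(hi, lo)[0, 1] / np.var(lo, ddof=1)

    # Correct the scarce high-fidelity mean with the low-fidelity discrepancy.
    return hi.mean() + alpha * (cheap.mean() - lo.mean())
```

The stronger the correlation between fidelities, the more of the high-fidelity variance the correction removes, which is the effect the probability bounds above quantify.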
We consider the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of an evaluation policy, $\pi_e$, using a fixed dataset, $\mathcal{D}$, collected by one or more policies that may be different from $\pi_e$. Current OPE algorithms may produce poor OPE estimates under policy distribution shift, i.e., when the probability of a particular state-action pair occurring under $\pi_e$ is very different from the probability of that same pair occurring in $\mathcal{D}$ (Voloshin et al. 2021, Fu et al. 2021). In this work, we propose to improve the accuracy of OPE estimators by projecting the high-dimensional state-space into a low-dimensional state-space using concepts from the state abstraction literature. Specifically, we consider marginalized importance sampling (MIS) OPE algorithms, which compute state-action distribution correction ratios to produce their OPE estimate. In the original ground state-space, these ratios may have high variance, which may lead to high-variance OPE. However, we prove that in the lower-dimensional abstract state-space the ratios can have lower variance, resulting in lower-variance OPE. We then highlight the challenges that arise when estimating the abstract ratios from data, identify sufficient conditions to overcome these issues, and present a minimax optimization problem whose solution yields these abstract ratios. Finally, our empirical evaluation on difficult, high-dimensional state-space OPE tasks shows that the abstract ratios can make MIS OPE estimators achieve lower mean-squared error and be more robust to hyperparameter tuning than the ground ratios.
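For reference, a hedged sketch of the basic marginalized importance sampling estimate that such methods build on, with an optional state-abstraction map `phi`. The `ratio` function (the state-action distribution correction, e.g. the output of a minimax procedure like the one mentioned above) is assumed to be already estimated; the interface and the identity default for `phi` are illustrative assumptions.

```python
import numpy as np

def mis_estimate(transitions, ratio, phi=lambda s: s):
    """Marginalized importance sampling OPE estimate.

    transitions: iterable of (state, action, reward) samples from the dataset D.
    ratio(x, a): estimated correction d_pi_e(x, a) / d_D(x, a), where x is
                 either the ground state or its abstraction phi(state).
    phi:         optional state-abstraction map; the identity recovers ground MIS.
    """
    return np.mean([ratio(phi(s), a) * r for s, a, r in transitions])
```

Evaluating the ratio on `phi(s)` rather than `s` is what lets the abstract-state variant trade ground-state resolution for lower-variance correction ratios.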
We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL to real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the inherent hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.
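A minimal sketch of a per-trajectory doubly robust recursion for this sequential setting, assuming approximate value models `q_hat` and `v_hat` of the evaluation policy are available; this follows the standard backward recursion and is a generic illustration rather than any specific published implementation.

```python
def dr_estimate(trajectory, pi_e, pi_b, q_hat, v_hat, gamma=1.0):
    """Doubly robust off-policy value estimate for one trajectory.

    trajectory: list of (state, action, reward) tuples collected under pi_b.
    q_hat(s, a), v_hat(s): approximate action-value / state-value models of pi_e.
    Backward recursion: V_DR = v_hat(s) + rho * (r + gamma * V_DR_next - q_hat(s, a)),
    which stays unbiased for any q_hat/v_hat and has low variance when they are accurate.
    """
    v_dr = 0.0
    for s, a, r in reversed(trajectory):
        rho = pi_e(s, a) / pi_b(s, a)          # per-step importance ratio
        v_dr = v_hat(s) + rho * (r + gamma * v_dr - q_hat(s, a))
    return v_dr
```

The model terms act as a control variate on the importance-weighted return, which is how the estimator combines the low variance of model-based estimates with the unbiasedness of importance sampling.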
The practicality of reinforcement learning algorithms has been limited by poor scaling with respect to the problem size, as the sample complexity of learning an $\epsilon$-optimal policy is $\tilde{\Omega}\left(|S||A|H^3/\epsilon^2\right)$ over worst-case instances of an MDP with state space $S$, action space $A$, and horizon $H$. We consider a class of MDPs that exhibit low-rank structure, where the latent features are unknown. We argue that a natural combination of value iteration and low-rank matrix estimation results in estimation errors that grow exponentially in the horizon. We then provide a new algorithm, together with statistical guarantees, that efficiently exploits the low-rank structure given access to a generative model, achieving a sample complexity of $\tilde{O}\left(d^5(|S|+|A|)\,\mathrm{poly}(H)/\epsilon^2\right)$ for the rank-$d$ setting, which is minimax optimal with respect to the scaling of $|S|$, $|A|$, and $\epsilon$. In contrast to the literature on linear and low-rank MDPs, we do not require a known feature mapping, our algorithm is computationally simple, and our results hold for long horizons. Our results provide insight into the minimal low-rank structural assumptions required of an MDP with respect to the transition kernel versus the optimal action-value function.
Monte Carlo planners can often return sub-optimal actions, even if they are guaranteed to converge in the limit of infinite samples. Known asymptotic regret bounds do not provide any way to measure confidence in a recommended action at the conclusion of the search. In this work, we prove bounds on the sub-optimality of Monte Carlo estimates for non-stationary bandits and Markov decision processes. These bounds can be computed directly at the conclusion of the search and do not require knowledge of the true action values. The presented bounds hold for general Monte Carlo solvers that meet mild convergence conditions. We empirically test the tightness of the bounds through experiments on a multi-armed bandit and a discrete Markov decision process, for both a simple solver and Monte Carlo tree search.
We introduce a method for policy improvement that interpolates between the greedy approach of value-based reinforcement learning (RL) and the full planning approach typical of model-based RL. The new method builds on the concept of a geometric horizon model (GHM, also known as a gamma-model), which models the discounted state-visitation distribution of a given policy. We show that we can evaluate any non-Markov policy that switches between a set of base Markov policies with fixed probability by a careful composition of the base policies' GHMs, without any additional learning. We can then apply generalised policy improvement (GPI) to the collection of such non-Markov policies to obtain a new Markov policy that will in general outperform its precursors. We provide a thorough theoretical analysis of this approach, develop applications to transfer and standard RL, and empirically demonstrate its effectiveness over standard GPI on a challenging deep-RL continuous-control task. We also provide an analysis of GHM training methods, proving a novel convergence result for previously proposed methods and showing how to train these models stably in deep-RL settings.
In reinforcement learning, Monte Carlo algorithms update the Q-function by averaging episodic returns. In the Monte Carlo UCB (MC-UCB) algorithm, the action taken in each state is the one that maximises the Q-function plus a UCB exploration term, which biases the choice towards actions that have been selected less frequently. Although there has been significant work on establishing regret bounds for MC-UCB, most of it has focused on the finite-horizon version of the problem, in which each episode terminates after a constant number of steps. For such finite-horizon problems, the optimal policy depends on both the current state and the time within the episode. However, for many natural episodic problems, such as the games of Go and chess and robotic tasks, the episode length is random and the optimal policy is stationary. For such environments, whether the Q-function in MC-UCB converges to the optimal Q-function is an open question; we conjecture that, unlike Q-learning, it does not converge for all MDPs. Nevertheless, we show that for a large class of MDPs, which includes stochastic MDPs such as blackjack and deterministic MDPs such as Go, the Q-function in MC-UCB converges almost surely to the optimal Q-function. An immediate corollary of this result is that it also converges almost surely for all finite-horizon MDPs. We also provide numerical experiments that give further insight into MC-UCB.
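A small sketch of the MC-UCB scheme described above, assuming a tabular setting: episodic returns are averaged into Q, and actions maximise Q plus a UCB bonus. The exploration constant `c`, the every-visit averaging, and the class interface are illustrative assumptions rather than details taken from the papers discussed.

```python
import math
from collections import defaultdict

class MCUCBAgent:
    """Monte Carlo control with a UCB exploration bonus.

    Q(s, a) is the average of episodic returns observed after taking a in s;
    actions are chosen to maximise Q(s, a) + c * sqrt(log N(s) / N(s, a)).
    """
    def __init__(self, actions, c=1.0, gamma=1.0):
        self.actions, self.c, self.gamma = actions, c, gamma
        self.returns_sum = defaultdict(float)   # sum of returns per (s, a)
        self.visits = defaultdict(int)          # visit count per (s, a)

    def act(self, state):
        n_state = sum(self.visits[(state, a)] for a in self.actions)
        def score(a):
            n_sa = self.visits[(state, a)]
            if n_sa == 0:
                return float("inf")             # try untried actions first
            q = self.returns_sum[(state, a)] / n_sa
            return q + self.c * math.sqrt(math.log(n_state) / n_sa)
        return max(self.actions, key=score)

    def update(self, episode):
        """episode: list of (state, action, reward); back up discounted returns."""
        g = 0.0
        for state, action, reward in reversed(episode):
            g = reward + self.gamma * g
            self.returns_sum[(state, action)] += g
            self.visits[(state, action)] += 1
```

In the stationary, random-length-episode setting discussed above, the open question is precisely whether averaging returns under this action-selection rule drives Q to the optimal Q-function.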
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off exploration of the environment and exploitation of the samples in order to optimise its behaviour. Whether we optimise for regret, sample complexity, state-space coverage, or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem with a decoupled approach composed of: 1) an "objective-specific" algorithm that (adaptively) prescribes which samples to collect at which states, as if it had access to a generative model (i.e., a simulator of the environment); 2) an "objective-agnostic" sample collection exploration strategy responsible for generating the prescribed samples as quickly as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples $b(s,a)$ required in each state-action pair, needs $\tilde{O}(BD + D^{3/2}S^2A)$ time steps to collect the $B = \sum_{s,a} b(s,a)$ desired samples, in an MDP with $S$ states, $A$ actions, and diameter $D$. We then show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements for a variety of settings, e.g., model estimation, sparse reward discovery, and goal-free cost-free exploration in communicating MDPs, for which we obtain improved or novel sample complexity guarantees.
Recently there has been interest in understanding the horizon-dependence of the sample complexity in reinforcement learning (RL). Notably, for an RL environment with horizon length $H$, previous work has shown that there is a probably approximately correct (PAC) algorithm that learns an $O(1)$-optimal policy using $\mathrm{polylog}(H)$ episodes of environment interaction when the numbers of states and actions are fixed. It has remained unclear whether the $\mathrm{polylog}(H)$ dependence is necessary. In this work, we resolve this question by developing an algorithm that achieves the same PAC guarantee while using only $O(1)$ episodes of environment interaction, completely settling the horizon-dependence of the sample complexity in RL. We achieve this by (i) establishing a connection between the value functions of discounted and finite-horizon Markov decision processes (MDPs) and (ii) a novel perturbation analysis in MDPs. We believe our new techniques are of independent interest and can be applied to related problems in RL.
In the maximum state entropy exploration framework, an agent interacts with a reward-free environment to learn a policy that maximises the entropy of the expected state visitations it induces. Hazan et al. (2019) noted that the class of Markovian stochastic policies is sufficient for the maximum state entropy objective, and exploiting non-Markovianity is generally considered pointless in this setting. In this paper, we argue that non-Markovianity is instead paramount for maximum state entropy exploration in the finite-sample regime. In particular, we recast the objective to target the expected entropy of the state visitations induced in a single trial. We then show that the class of non-Markovian deterministic policies is sufficient for the introduced objective, while Markovian policies suffer non-zero regret in general. However, we prove that the problem of finding an optimal non-Markovian policy is NP-hard. Despite this negative result, we discuss avenues for addressing the problem in a tractable way, and how non-Markovian exploration could benefit the sample efficiency of online reinforcement learning in future work.
This paper studies systematic exploration for reinforcement learning with rich observations and function approximation. We introduce a new model called contextual decision processes, that unifies and generalizes most prior settings. Our first contribution is a complexity measure, the Bellman rank, which we show enables tractable learning of near-optimal behavior in these processes and is naturally small for many well-studied reinforcement learning settings. Our second contribution is a new reinforcement learning algorithm that engages in systematic exploration to learn contextual decision processes with low Bellman rank. Our algorithm provably learns near-optimal behavior with a number of samples that is polynomial in all relevant parameters but independent of the number of unique observations. The approach uses Bellman error minimization with optimistic exploration and provides new insights into efficient exploration for reinforcement learning with function approximation.
A simple and natural reinforcement learning (RL) algorithm is Monte Carlo Exploring Starts (MCES), which estimates the Q-function by averaging Monte Carlo returns and improves the policy by choosing the action that maximises the current Q-function estimate. Exploration is performed by "exploring starts": each episode begins with a randomly chosen state and action and then follows the current policy to a terminal state. In the classic RL book by Sutton & Barto (2018), it is stated that establishing convergence of the MCES algorithm is one of the most important remaining open theoretical problems in RL. However, the convergence question for MCES turns out to be quite nuanced. Bertsekas & Tsitsiklis (1996) provide a counterexample showing that the MCES algorithm does not necessarily converge. Tsitsiklis (2002) further shows that if the original MCES algorithm is modified so that the Q-function estimates are updated at the same rate for all state-action pairs, and the discount factor is strictly less than one, then the MCES algorithm converges. In this paper, we make progress for the original, more efficient MCES algorithm given in Sutton & Barto (1998), establishing convergence for a class of MDPs defined in terms of the optimal policy. Such MDPs include a large class of environments, for example all deterministic environments and all episodic environments in which the time step, or any monotonically changing quantity, is part of the state. Unlike previous proofs that use stochastic approximation, we introduce a novel inductive approach, which is very simple and only makes use of the strong law of large numbers.
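For concreteness, a compact sketch of MCES as described above: each episode starts from a uniformly random state-action pair, follows the current greedy policy to termination, and Q is updated by averaging returns. The `env` interface (`env.states`, `env.actions`, `env.step(s, a)`) and the first-visit averaging are hypothetical assumptions of the sketch, not details from the works cited.

```python
import random
from collections import defaultdict

def mces(env, num_episodes, gamma=1.0):
    """Monte Carlo Exploring Starts with greedy policy improvement.

    Assumed (hypothetical) env interface: env.states, env.actions,
    env.step(s, a) -> (s_next, reward, done).
    """
    returns_sum = defaultdict(float)
    visits = defaultdict(int)
    q = defaultdict(float)
    policy = {s: random.choice(env.actions) for s in env.states}

    for _ in range(num_episodes):
        # Exploring start: random initial state-action pair.
        s, a = random.choice(env.states), random.choice(env.actions)
        episode, done = [], False
        while not done:
            s_next, r, done = env.step(s, a)
            episode.append((s, a, r))
            s, a = s_next, policy.get(s_next, random.choice(env.actions))

        # First-visit Monte Carlo backup and greedy policy improvement.
        g, seen = 0.0, set()
        for s_t, a_t, r_t in reversed(episode):
            g = r_t + gamma * g
            if (s_t, a_t) not in seen:
                seen.add((s_t, a_t))
                returns_sum[(s_t, a_t)] += g
                visits[(s_t, a_t)] += 1
                q[(s_t, a_t)] = returns_sum[(s_t, a_t)] / visits[(s_t, a_t)]
                policy[s_t] = max(env.actions, key=lambda b: q[(s_t, b)])
    return q, policy
```

The subtlety behind the convergence question above is that different state-action pairs accumulate returns at different rates under this scheme, which is exactly what Tsitsiklis (2002) removed in the modified algorithm.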
We introduce a generic strategy for provably efficient multi-goal exploration. It relies on AdaGoal, a novel goal selection scheme based on a simple constrained optimisation problem, which adaptively targets goal states that are neither too difficult nor too easy to reach given the agent's current knowledge. We show how AdaGoal can be used to tackle the objective of learning an $\epsilon$-optimal goal-conditioned policy for every goal state reachable within $L$ steps from a reference state $s_0$ in a reward-free Markov decision process. In the standard tabular case, our algorithm requires $\tilde{O}(L^3 S A \epsilon^{-2})$ exploration steps, which is nearly minimax optimal. We also readily instantiate AdaGoal in linear mixture Markov decision processes, which yields the first goal-oriented PAC guarantee with linear function approximation. Beyond its strong theoretical guarantees, AdaGoal is anchored in the high-level algorithmic structure of existing methods for goal-conditioned deep reinforcement learning.
Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, which is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory characterizing the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is bounding the error between any POMDP and its corresponding finite sample particle belief MDP (PB-MDP) approximation. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm to a POMDP by solving the corresponding particle belief MDP, thereby extending the convergence guarantees of the MDP algorithm to the POMDP. Practically, this is implemented by using the particle filter belief transition model as the generative model for the MDP solver. While this requires access to the observation density model from the POMDP, it only increases the transition sampling complexity of the MDP solver by a factor of $\mathcal{O}(C)$, where $C$ is the number of particles. Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted using PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
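A simplified sketch of the idea of using a particle-filter belief transition as the generative model for an MDP solver: one particle generates a shared observation, all particles are propagated and reweighted by the observation likelihood, and the belief-level reward is the weighted mean of particle rewards. The `step_fn`/`obs_density` interfaces and the resampling scheme are assumptions for illustration, not the paper's exact PB-MDP construction.

```python
import numpy as np

def particle_belief_step(belief, action, step_fn, obs_density, rng):
    """One transition of a particle-belief MDP built from a POMDP.

    belief:      list of C particles (states), treated as uniformly weighted.
    step_fn:     POMDP generative model, (s, a, rng) -> (s_next, obs, reward).
    obs_density: observation model, (obs, a, s_next) -> likelihood weight.
    Returns the next particle belief and a belief-level reward, so any MDP
    solver can treat beliefs as states.
    """
    # Propagate one sampled particle to generate a shared observation.
    s = belief[rng.integers(len(belief))]
    _, obs, _ = step_fn(s, action, rng)

    # Propagate every particle and weight it by the observation likelihood.
    next_states, weights, rewards = [], [], []
    for s in belief:
        s_next, _, r = step_fn(s, action, rng)
        next_states.append(s_next)
        weights.append(obs_density(obs, action, s_next))
        rewards.append(r)

    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()

    # Resample C particles to form the next belief; reward is the weighted mean.
    idx = rng.choice(len(next_states), size=len(belief), p=weights)
    next_belief = [next_states[i] for i in idx]
    return next_belief, float(np.dot(weights, rewards))
```

Plugging a transition like this into a sparse-sampling MDP solver is the sense in which the observation-space size drops out: each belief transition costs $\mathcal{O}(C)$ calls to the POMDP generative model.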
A variety of theoretically sound policy gradient algorithms exist for the on-policy setting due to the policy gradient theorem, which provides a simplified form for the gradient. The off-policy setting, however, is less clear due to the existence of multiple objectives and the lack of an explicit off-policy policy gradient theorem. In this work, we unify these objectives into one off-policy objective and provide a policy gradient theorem for this unified objective. The derivation involves emphatic weightings and interest functions. We present multiple strategies for approximating the gradients, in an algorithm called Actor-Critic with Emphatic weightings (ACE). We prove that previous (semi-gradient) off-policy actor-critic methods, in particular OffPAC and DPG, converge to the wrong solution, whereas ACE finds the optimal solution. We also highlight why these semi-gradient approaches can still perform well in practice, pointing to variance-reduction strategies in ACE. We empirically study several variants of ACE on two classic-control environments and an image-based environment designed to illustrate the trade-offs made by each gradient approximation. We find that, by directly approximating the emphatic weightings, ACE performs as well as or better than OffPAC in all settings tested.