Large language models trained for code generation can be applied to speaking virtual worlds into existence (creating virtual worlds). In this work we show that prompt-based methods can both accelerate in-VR level editing and become part of gameplay itself, rather than only part of game development. As an example, we present Codex VR Pong, which demonstrates non-deterministic game mechanics, using generative processes not only to create static content but also to drive non-trivial interactions between 3D objects. This demonstration naturally leads to a discussion of how one might evaluate and benchmark experiences created by generative models, since no established qualitative or quantitative metrics apply in these scenarios. We conclude by discussing impending challenges of AI-assisted co-creation in VR.
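A minimal sketch of the prompt-to-content loop such a system implies, assuming a hypothetical `generate_code` stand-in for a call to a code-generation model; the scene API (`Scene`, `add_sphere`) is illustrative, not the demo's actual interface.

```python
# Hypothetical sketch: turning a typed/spoken prompt into executable scene code.
# `generate_code` stands in for a call to a code-generation LLM (e.g. Codex);
# the Scene API below is illustrative, not the actual VR framework.

class Scene:
    def __init__(self):
        self.objects = []

    def add_sphere(self, position, radius, bounciness=0.0):
        self.objects.append(("sphere", position, radius, bounciness))


def generate_code(prompt: str) -> str:
    # Placeholder for an LLM call; a real system would send `prompt` plus the
    # scene API documentation to the model and return the generated code.
    return "scene.add_sphere(position=(0, 1, 2), radius=0.1, bounciness=0.9)"


scene = Scene()
code = generate_code("a small bouncy ball floating in front of the player")
exec(code, {"scene": scene})  # a real system would sandbox generated code
print(scene.objects)
```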
Many problems can be viewed as forms of geospatial search aided by aerial imagery, with examples ranging from detecting poaching activity to human trafficking. We model this class of problems in a visual active search (VAS) framework, which takes as input an image of a broad area and aims to identify as many examples of a target object as possible. It does so through a limited sequence of queries, each of which verifies whether an example is present in a given region. We propose a reinforcement learning approach for VAS that leverages a collection of fully annotated search tasks as training data to learn a search policy, combining features of the input image with a natural representation of the active search state. Additionally, we propose domain adaptation techniques to improve the policy at decision time when the training data is not fully reflective of the test-time distribution of VAS tasks. Through extensive experiments on several satellite imagery datasets, we show that the proposed approach significantly outperforms several strong baselines. Code and data will be made public.
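A schematic of the query loop the VAS framework describes, under simplifying assumptions: `score_regions` is a hypothetical stub in place of the learned RL policy, and the grid, budget, and target rate are invented for illustration.

```python
import numpy as np

# Schematic of the visual active search loop: a broad-area image is split into
# a grid of regions, and a policy sequentially picks regions to query under a
# fixed budget. `score_regions` stands in for the learned policy, which in the
# paper combines image features with the active search state.

rng = np.random.default_rng(0)
n_regions, budget = 16, 5
targets = rng.random(n_regions) < 0.2          # hidden ground truth per region
state = np.zeros(n_regions)                    # 0 = unqueried, +1 hit, -1 miss

def score_regions(state):
    # Hypothetical policy stub: random scores, masked to unqueried regions.
    scores = rng.random(n_regions)
    scores[state != 0] = -np.inf
    return scores

found = 0
for _ in range(budget):
    region = int(np.argmax(score_regions(state)))
    hit = bool(targets[region])                # the query verifies the region
    state[region] = 1 if hit else -1
    found += hit
print(f"targets found with {budget} queries: {found}")
```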
Privacy has become a major concern in machine learning. Indeed, federated learning is motivated by privacy concerns, as it does not allow private data to be transmitted, only intermediate updates. However, federated learning does not always guarantee privacy preservation, since the intermediate updates may also reveal sensitive information. In this paper, we give an explicit information-theoretic analysis of a federated expectation-maximization algorithm for Gaussian mixture models, and prove that the intermediate updates can cause severe privacy leakage. To address the privacy issue, we propose a fully decentralized privacy-preserving solution that is able to securely compute the updates in each maximization step. Additionally, we consider two different types of security attacks: the honest-but-curious and the eavesdropping adversary models. Numerical validation shows that the proposed approach has superior performance compared to existing methods in terms of both accuracy and privacy level.
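A minimal sketch of the leakage point the analysis targets: in federated EM for a GMM, each client's M-step contribution is a set of sufficient statistics, and sharing them in the clear exposes information about local data; additive secret sharing of those statistics, as below, is one generic way to aggregate them securely. The setup (one-dimensional data, a single Gaussian component, responsibilities fixed to 1) is a simplification, not the paper's protocol.

```python
import numpy as np

# Sketch: one M-step of federated EM for a single 1-D Gaussian component.
# Each client holds private data and contributes sufficient statistics
# (responsibility-weighted count, sum, sum of squares). Sharing these raw
# statistics can leak information about local data; here each client instead
# splits its statistics into additive secret shares, so no party ever sees
# another client's individual contribution, only values that sum to the
# aggregate.

rng = np.random.default_rng(1)
clients = [rng.normal(2.0, 1.0, size=50) for _ in range(3)]

def local_stats(x, resp=1.0):
    # Sufficient statistics for the M-step (responsibilities fixed to 1 here).
    return np.array([resp * len(x), resp * x.sum(), resp * (x ** 2).sum()])

def secret_share(stats, n_shares):
    # Additive shares: random vectors that sum to the true statistics.
    shares = [rng.normal(size=stats.shape) for _ in range(n_shares - 1)]
    shares.append(stats - sum(shares))
    return shares

n = len(clients)
all_shares = [secret_share(local_stats(x), n) for x in clients]
# Each party aggregates one share from every client; summing the partial
# aggregates recovers only the global statistics.
aggregate = sum(sum(all_shares[c][p] for c in range(n)) for p in range(n))

count, s1, s2 = aggregate
mu = s1 / count
sigma2 = s2 / count - mu ** 2
print(f"mu={mu:.3f}, sigma^2={sigma2:.3f}")
```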
Few-shot learning (FSL) is an emerging learning paradigm that attempts to learn with low sample complexity, mimicking the way humans learn, generalize, and extrapolate from only a few examples. Although FSL attempts to mimic these human characteristics, fundamentally the task of FSL as conventionally described and modeled, using meta-learning with episodic training, does not fully align with how humans acquire and reason with knowledge. FSL with episodic training, while using only $K$ instances of each test class, still requires a large number of labeled instances from disjoint training classes. In this paper, we introduce the novel task of constrained few-shot learning (CFSL), a special case of FSL in which the number of training instances of each class is constrained to be less than some value $M$, thereby applying a comparable restriction during training and testing. We propose a method for CFSL that uses a novel categorical contrastive loss inspired by cognitive theories such as fuzzy-trace theory and prototype theory.
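For orientation, a generic supervised contrastive loss over class labels is sketched below; the paper's categorical contrastive loss belongs to this family but differs in its details, so this is an illustration of the loss type, not the method itself.

```python
import torch
import torch.nn.functional as F

# Illustrative supervised contrastive loss over embeddings: instances of the
# same class are pulled together, different classes pushed apart. The paper's
# categorical contrastive loss is in this family but differs in detail.

def contrastive_loss(embeddings, labels, temperature=0.1):
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                        # pairwise similarities
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)  # positive pairs
    mask.fill_diagonal_(False)
    logits = sim - torch.eye(len(z)) * 1e9             # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-probability of positives for each anchor that has positives.
    pos_counts = mask.sum(1).clamp(min=1)
    loss = -(log_prob * mask.float()).sum(1) / pos_counts
    return loss[mask.sum(1) > 0].mean()

emb = torch.randn(8, 16, requires_grad=True)
lbl = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(contrastive_loss(emb, lbl))
```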
Robust reinforcement learning (RL) considers the problem of learning policies that perform well in the worst case over a set of possible environment parameter values. In real-world settings, choosing the set of possible values for robust RL can be a difficult task. When that set is specified too narrowly, the agent will be left vulnerable to reasonable parameter values outside it. When specified too broadly, the agent will be too cautious. In this paper, we propose Feasible Adversarial Robust RL (FARR), a method for automatically determining the set of environment parameter values over which to be robust. FARR implicitly defines the feasible parameter values as those on which an agent could achieve a benchmark reward given sufficient training resources. By formulating this problem as a two-player zero-sum game, FARR jointly learns an adversarial distribution over parameter values with feasible support, together with a policy that is robust over this feasible parameter set. Using the PSRO algorithm to find an approximate Nash equilibrium in this FARR game, we show that an agent trained with FARR is more robust to feasible adversarial parameter selection than agents trained with existing minimax, domain-randomization, and regret objectives in control environments.
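A toy matrix-game version of the feasibility idea, with invented numbers: a parameter value is feasible only if some policy attains the benchmark reward on it, and the adversary is restricted to feasible parameters. Pure-strategy maximin stands in here for the full zero-sum game FARR solves with PSRO.

```python
import numpy as np

# Toy illustration of FARR's feasibility constraint. Rows are agent policies,
# columns are environment parameter values, entries are the agent's return.
# Numbers are hypothetical. A parameter value is "feasible" if some policy
# achieves at least the benchmark reward on it; the adversary may only pick
# feasible parameters, and the agent plays maximin over that restricted set.

payoff = np.array([
    [1.0, 0.2, -5.0],
    [0.4, 0.9, -5.0],
])  # the third parameter is unwinnable for every policy
benchmark = 0.5

feasible = payoff.max(axis=0) >= benchmark        # [True, True, False]
restricted = payoff[:, feasible]

# Pure-strategy maximin for illustration; FARR solves the full zero-sum game
# (mixed strategies) with PSRO.
naive = int(payoff.min(axis=1).argmax())          # worst case over ALL params
robust = int(restricted.min(axis=1).argmax())     # worst case over feasible set
print("feasible params:", np.where(feasible)[0])
print("naive maximin policy:", naive, "FARR-style policy:", robust)
```

Without the feasibility constraint, the unwinnable third parameter dominates the worst case and the maximin objective cannot distinguish the two policies; restricting to feasible parameters recovers a meaningful robust choice.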
In competitive two-agent environments, deep reinforcement learning (RL) methods based on the Double Oracle (DO) algorithm, such as Policy Space Response Oracles (PSRO) and Anytime PSRO (APSRO), iteratively add RL best-response policies to a population. Eventually, an optimal mixture of these population policies approximates a Nash equilibrium. However, these methods may need to add all deterministic policies before converging. In this work, we introduce Self-Play PSRO (SP-PSRO), a method that adds an approximately optimal stochastic policy to the population in each iteration. Rather than adding only deterministic best responses to the opponent's least-exploitable population mixture, SP-PSRO also learns an approximately optimal stochastic policy and adds it to the population. As a result, SP-PSRO empirically tends to converge much faster than APSRO, and in many games it converges in just a few iterations.
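A skeletal PSRO loop on a small zero-sum matrix game, assuming exact best responses in place of RL-trained ones and fictitious play as the meta-solver; SP-PSRO's change, not implemented here, is to additionally add an approximately optimal stochastic policy each iteration.

```python
import numpy as np

# Skeletal PSRO on rock-paper-scissors. Exact best responses stand in for
# RL-trained ones, and fictitious play is the meta-solver. SP-PSRO's
# modification is to also add an approximately optimal *stochastic* policy to
# the population each iteration, not only deterministic best responses.

A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # row player's payoff

def meta_nash(sub, iters=2000):
    # Fictitious play on the symmetric restricted game; returns a mixture.
    counts = np.ones(len(sub))
    for _ in range(iters):
        mix = counts / counts.sum()
        counts[np.argmax(sub @ mix)] += 1
    return counts / counts.sum()

population = [0]                                    # start with "rock"
for it in range(4):
    sub = A[np.ix_(population, population)]         # restricted meta-game
    mix = meta_nash(sub)
    # Best response to the opponent's population mixture.
    br = int(np.argmax(A[:, population] @ mix))
    if br not in population:
        population.append(br)
    print(f"iter {it}: population={population}, mixture={np.round(mix, 2)}")
```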
Off-policy evaluation methods are important in recommendation systems and search engines, where data collected under an existing logging policy is used to estimate the performance of a new proposed policy. A common approach to this problem is weighting, where data are weighted by the density ratio between the probability of actions given contexts under the target policy and under the logging policy. In practice, two issues often arise. First, many problems have very large action spaces, and we may not observe rewards for most actions, so in finite samples we may encounter positivity violations. Second, many recommendation systems are not probabilistic, so access to logging and target policy densities may not be feasible. To address these issues, we introduce the featurized embedded permutation weighting estimator. The estimator computes the density ratio in an action-embedding space, which reduces the possibility of positivity violations. The density ratio is computed by leveraging recent advances in normalizing flows and in density ratio estimation framed as a classification problem, in order to obtain estimates that are feasible in practice.
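A minimal sketch of the classification-based density ratio component: a probabilistic classifier distinguishes action embeddings drawn under the target policy from those under the logging policy, and with equal sample sizes the classifier's odds estimate the density ratio. The data here are synthetic Gaussians, and the normalizing-flow piece of the estimator is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of density ratio estimation as classification in an action-embedding
# space. Samples from the logged and target policies are labelled 0 and 1;
# with equal sample sizes, a calibrated classifier's odds
# P(y=1|e) / P(y=0|e) estimate the ratio target(e) / logged(e).

rng = np.random.default_rng(0)
logged = rng.normal(0.0, 1.0, size=(5000, 2))   # embeddings under logging policy
target = rng.normal(0.5, 1.0, size=(5000, 2))   # embeddings under target policy

X = np.vstack([logged, target])
y = np.concatenate([np.zeros(len(logged)), np.ones(len(target))])

clf = LogisticRegression().fit(X, y)

def density_ratio(embeddings):
    p = clf.predict_proba(embeddings)[:, 1]
    return p / (1.0 - p)

# Weights for off-policy evaluation: reweight logged rewards by the ratio.
weights = density_ratio(logged)
print("mean weight (should be near 1):", weights.mean().round(3))
```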