State-of-the-art methods for optical flow estimation rely on deep learning, which requires complex sequential training schemes to reach optimal performance in the real world. In this work, we introduce COMBO, a deep network that explicitly exploits the brightness constancy (BC) model used in traditional methods. Since BC is an approximate physical model that is violated in several situations, we propose to train a physically-constrained network complemented by a data-driven network. We introduce a unique and meaningful flow decomposition between the physical prior and the data-driven complement, including an uncertainty quantification of the BC model. We derive a joint training scheme for learning the different components of the decomposition, ensuring optimal cooperation in a supervised but also in a semi-supervised context. Experiments show that COMBO can improve performance over state-of-the-art supervised networks such as RAFT, reaching state-of-the-art results on several benchmarks. We highlight how COMBO can leverage the BC model and adapt to its limitations. Finally, we show that our semi-supervised method can significantly simplify the training procedure.
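As an illustration of the decomposition described above, here is a minimal sketch in which a physics-constrained flow branch and a data-driven complement are fused through a learned per-pixel uncertainty map. All module names and shapes are illustrative assumptions, not the COMBO implementation.

```python
import torch
import torch.nn as nn

class TwoFrameConv(nn.Module):
    """Tiny stand-in branch: maps a pair of RGB frames to `out_ch` channels."""
    def __init__(self, out_ch: int):
        super().__init__()
        self.net = nn.Conv2d(6, out_ch, kernel_size=3, padding=1)

    def forward(self, img1, img2):
        return self.net(torch.cat([img1, img2], dim=1))

class ComboLikeFlow(nn.Module):
    """Illustrative fusion of a physics-constrained flow branch with a
    data-driven complement, weighted by a per-pixel uncertainty map.
    A sketch of the decomposition idea, not the COMBO implementation."""
    def __init__(self):
        super().__init__()
        self.physical_branch = TwoFrameConv(2)   # flow meant to respect brightness constancy
        self.data_branch = TwoFrameConv(2)       # learned correction where BC is violated
        self.uncertainty_head = TwoFrameConv(1)  # predicts where BC is unreliable

    def forward(self, img1, img2):
        flow_phys = self.physical_branch(img1, img2)               # (B, 2, H, W)
        flow_data = self.data_branch(img1, img2)                   # (B, 2, H, W)
        alpha = torch.sigmoid(self.uncertainty_head(img1, img2))   # (B, 1, H, W)
        # Where BC holds (alpha ~ 0) the physical flow dominates; where it is
        # violated (alpha ~ 1) the data-driven complement takes over.
        return (1 - alpha) * flow_phys + alpha * flow_data

flow = ComboLikeFlow()(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```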
Forecasting complex dynamical phenomena in settings where only partial knowledge of their dynamics is available is a prevalent problem across various scientific fields. While purely data-driven approaches are arguably insufficient in this context, standard approaches based on physical modeling tend to be over-simplistic, inducing non-negligible errors. In this work, we introduce the APHYNITY framework, a principled approach for augmenting incomplete physical dynamics described by differential equations with deep data-driven models. It consists in decomposing the dynamics into two components: a physical component accounting for the dynamics for which we have some prior knowledge, and a data-driven component accounting for the errors of the physical model. The learning problem is carefully formulated so that the physical model explains as much of the data as possible, while the data-driven component only describes the information that the physical model cannot capture, no more, no less. This not only provides existence and uniqueness guarantees for this decomposition, but also ensures interpretability and benefits generalization. Experiments on three important use cases, each representative of a different family of phenomena, i.e. reaction-diffusion equations, wave equations and the non-linear damped pendulum, show that APHYNITY can efficiently leverage approximate physical models to accurately forecast the evolution of the system and correctly identify relevant physical parameters.
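Written out, the decomposition described above amounts to a constrained problem of roughly the following form (our own rendering in our own notation, where F_p ranges over an admissible family of physical models, F_a is the data-driven residual, D is the set of observed trajectories, and the norm measures the size of the residual):

```latex
% Dynamics decomposed into a physical part F_p and a data-driven part F_a:
\frac{dX_t}{dt} = (F_p + F_a)(X_t),
\qquad
\min_{F_p \in \mathcal{F}_p,\; F_a} \; \lVert F_a \rVert
\quad \text{s.t.} \quad
\forall X \in \mathcal{D},\ \forall t,\ \frac{dX_t}{dt} = (F_p + F_a)(X_t).
```

The constraint forces the pair to reproduce the data, while minimizing the norm of F_a pushes as much explanatory power as possible onto the physical term.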
The increasing complexity of gameplay mechanisms in modern video games is leading to the emergence of a wider range of ways to play games. The variety of possible play-styles needs to be anticipated by designers, through automated tests. Reinforcement Learning is a promising answer to the need for automating video game testing. To that end, one needs to train an agent to play the game while ensuring that this agent generates the same play-styles as the players, in order to give meaningful feedback to the designers. We present CARMI: a Configurable Agent with Relative Metrics as Input, an agent able to emulate the players' play-styles even on previously unseen levels. Unlike current methods, it does not rely on full trajectories, but only on summary data. Moreover, it requires only a small amount of human data, making it compatible with the constraints of modern video game production. This novel agent could be used to investigate behaviors and balancing during the production of a video game with a realistic amount of training time.
Modern video games are becoming richer and more complex in terms of game mechanics. This complexity allows for the emergence of a wide variety of ways to play the game across the players. From the point of view of the game designer, this means that one needs to anticipate many different ways the game could be played. Machine Learning (ML) could help address this issue. More precisely, Reinforcement Learning is a promising answer to the need for automating video game testing. In this paper we present a video game environment which lets us define multiple play-styles. We then introduce CARI: a Configurable Agent with Reward as Input, an agent able to simulate a wide continuum of play-styles. It is not constrained to extreme archetypal behaviors like current methods using reward shaping. In addition, it achieves this through a single training loop, instead of the usual one loop per play-style. We compare this novel training approach with the more classic reward shaping approach and conclude that CARI can also outperform the baseline on archetype generation. This novel agent could be used to investigate behaviors and balancing during the production of a video game with a realistic amount of training time.
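To illustrate the "reward as input" idea, here is a minimal sketch in which a vector of reward weights, sampled once per episode, both conditions the agent (it is appended to the observation) and mixes per-objective reward signals, so that one training loop covers a continuum of play-styles. The toy environment, names, and sampling distribution are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class ToyMultiObjectiveEnv:
    """Stand-in for a game environment that returns one reward per objective
    (e.g. exploration, combat, speed) instead of a single scalar."""
    def __init__(self, n_styles=3, horizon=20):
        self.n_styles, self.horizon, self.t = n_styles, horizon, 0
    def reset(self):
        self.t = 0
        return np.zeros(4)
    def step(self, action):
        self.t += 1
        obs = np.random.randn(4)
        reward_components = np.random.rand(self.n_styles)
        return obs, reward_components, self.t >= self.horizon, {}

def rollout(env, policy, rng, n_styles=3):
    # The reward-weight vector defining the target play-style is sampled once
    # per episode, appended to the observation, and used to mix the per-objective
    # rewards, so a single training loop spans a continuum of styles.
    weights = rng.dirichlet(np.ones(n_styles))
    obs, done, total = env.reset(), False, 0.0
    while not done:
        action = policy(np.concatenate([obs, weights]))
        obs, reward_components, done, _ = env.step(action)
        total += float(np.dot(weights, reward_components))
    return total

rng = np.random.default_rng(0)
print(rollout(ToyMultiObjectiveEnv(), policy=lambda x: 0, rng=rng))
```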
In Novel Class Discovery (NCD), the goal is to find new classes in an unlabeled set given a labeled set of known but different classes. While NCD has recently gained attention from the community, no framework has yet been proposed for heterogeneous tabular data, despite it being a very common representation of data. In this paper, we propose TabularNCD, a new method for discovering novel classes in tabular data. We show a way to extract knowledge from already known classes to guide the discovery process of novel classes in the context of tabular data containing heterogeneous variables. Part of this process relies on a new method for defining pseudo labels, and we follow recent findings in Multi-Task Learning to optimize a joint objective function. Our method demonstrates that NCD is not only applicable to images but also to heterogeneous tabular data.
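As a hedged illustration of pseudo labeling for unlabeled instances, one common recipe is to mark a pair of points as belonging to the same (unknown) class when their learned representations are close. The nearest-neighbour rule below is our own simplification, not necessarily the exact definition used in TabularNCD.

```python
import numpy as np

def pairwise_pseudo_labels(embeddings: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Mark a pair (i, j) as positive (pseudo label 1) when j is among the
    top-k nearest neighbours of i in the learned embedding space.
    Illustrative rule only; the paper defines its own pseudo-labeling method."""
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # a point is never paired with itself
    labels = np.zeros_like(dists, dtype=int)
    nearest = np.argsort(dists, axis=1)[:, :top_k]
    for i, neigh in enumerate(nearest):
        labels[i, neigh] = 1
    return labels

print(pairwise_pseudo_labels(np.random.rand(8, 16)).sum())
```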
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
This paper presents an overview of the second edition of the HEad and neCK TumOR (HECKTOR) challenge, organized as a satellite event of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The challenge consists of three tasks related to the automatic analysis of PET/CT images of patients with Head and Neck cancer (H&N), focusing on the oropharynx region. Task 1 is the automatic segmentation of the H&N primary Gross Tumor Volume (GTVt) in FDG-PET/CT images. Task 2 is the automatic prediction of Progression Free Survival (PFS) from the same FDG-PET/CT data. Finally, Task 3 is the same as Task 2, with ground truth GTVt annotations provided to the participants. The data were collected from six centers, for a total of 325 images split into 224 training and 101 test cases. Interest in the challenge was highlighted by a strong participation of 103 registered teams and 448 result submissions. The best methods obtained a Dice Similarity Coefficient (DSC) of 0.7591 in the first task, and Concordance indices (C-index) of 0.7196 and 0.6978 in Tasks 2 and 3, respectively. In all tasks, simplicity of the approach was found to be key to ensuring generalization performance. The comparison of PFS prediction performance in Tasks 2 and 3 suggests that providing the GTVt contours was not crucial to achieve the best results, indicating that fully automatic methods can be used. This potentially obviates the need for GTVt contouring, opening avenues for reproducible, large-scale radiomics studies including thousands of potential subjects.
Target Propagation (TP) algorithms compute targets instead of gradients for neural networks and propagate them backwards in a way that is similar to, yet distinct from, gradient back-propagation (BP). The idea was first presented as a perturbative alternative to back-propagation that may achieve greater accuracy in gradient evaluation when training multi-layer neural networks (LeCun et al., 1989). However, TP has remained more of a template algorithm with many variants than a well-identified algorithm. Revisiting the insights of LeCun et al. (1989) and, more recently, of Lee et al. (2015), we present a simple version of target propagation based on regularized inversions of network layers, easily implementable in a differentiable programming framework. We compare its computational complexity to that of BP and delineate the regimes in which TP can be attractive compared to BP. We show how our TP can be used to train recurrent neural networks on long sequences for various sequence modeling problems. The experimental results underscore the importance of regularization in TP in practice.
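For a linear layer f(h) = W h + b, the regularized inversion used to propagate a target one layer back has a closed form; the sketch below works it out under that linear assumption (the method itself applies to general differentiable layers, and the variable names are ours).

```python
import numpy as np

def regularized_inverse_target(W, b, h_prev, target, lam=0.1):
    """Propagate a target one layer back through f(h) = W h + b by solving
        argmin_h ||W h + b - target||^2 + lam * ||h - h_prev||^2,
    whose stationarity condition gives (W^T W + lam I) h = W^T (target - b) + lam h_prev.
    Linear-layer illustration only; TP applies the same idea to general layers."""
    d = W.shape[1]
    A = W.T @ W + lam * np.eye(d)
    rhs = W.T @ (target - b) + lam * h_prev
    return np.linalg.solve(A, rhs)

rng = np.random.default_rng(0)
W, b = rng.standard_normal((5, 3)), rng.standard_normal(5)
h_prev, target = rng.standard_normal(3), rng.standard_normal(5)
print(regularized_inverse_target(W, b, h_prev, target))
```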
This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning a language model on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of the 25 tasks we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that the number of tasks and the model scale are key components of the success of instruction tuning.
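As a concrete illustration of what "verbalized via natural language instruction templates" means, the snippet below renders a natural language inference example as an (input, target) instruction pair. It is a generic template of our own, not one of the actual FLAN templates.

```python
def to_instruction_example(premise: str, hypothesis: str, label: str) -> dict:
    """Render an NLI example as an instruction-following (input, target) pair.
    Generic illustration of instruction formatting, not an official FLAN template."""
    prompt = (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? Answer yes, no, or maybe."
    )
    return {"input": prompt, "target": label}

print(to_instruction_example("A man is playing a guitar.",
                             "A person is making music.", "yes"))
```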
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale of the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used at various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms, and show how these algorithms can be scaled up for much larger and more complex environments. This highlights one of the primary advantages of Acme, namely that it can be used to implement large, distributed RL algorithms that run at large scale while keeping the implementation inherently readable. This work presents a second version of the article, coinciding with an increase in modularity, additional emphasis on offline, imitation and learning-from-demonstrations algorithms, and various new agents implemented as part of Acme.
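To illustrate the kind of modularity described above, the sketch below separates an acting component (which generates experience) from a learning component (which consumes it), communicating through a shared replay buffer. This is plain illustrative Python reflecting that design split, not Acme's actual API.

```python
from collections import deque
import random

class ReplayBuffer:
    """Minimal experience store shared between the actor and the learner."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)
    def add(self, transition):
        self.buffer.append(transition)
    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

class Actor:
    """Selects actions and records transitions; knows nothing about updates."""
    def __init__(self, policy, buffer):
        self.policy, self.buffer = policy, buffer
    def act(self, obs):
        return self.policy(obs)
    def observe(self, transition):
        self.buffer.add(transition)

class Learner:
    """Consumes batches of experience and updates parameters; knows nothing
    about how the experience was generated."""
    def __init__(self, buffer, update_fn):
        self.buffer, self.update_fn = buffer, update_fn
    def step(self, batch_size=32):
        batch = self.buffer.sample(batch_size)
        if batch:
            self.update_fn(batch)

# The same actor/learner pair can be run in one process or distributed,
# which is the point of keeping the components decoupled.
buffer = ReplayBuffer()
actor = Actor(policy=lambda obs: 0, buffer=buffer)
learner = Learner(buffer, update_fn=lambda batch: None)
```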