Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments without requiring domain knowledge. Unfortunately, due to sample inefficiency, deep RL applications have mainly focused on simulated environments. In this work, we demonstrate that recent advances in machine learning algorithms and libraries, combined with a carefully tuned robot controller, lead to learning quadrupedal locomotion in only 20 minutes in the real world. We evaluate our approach on several indoor and outdoor terrains that are challenging for classical model-based controllers, and we observe that the robot is able to consistently learn a walking gait on all of these terrains. Finally, we evaluate our design decisions in simulation.
Analog resistive states used to store weights in neuromorphic systems are hampered by fabrication imprecision and device stochasticity, which limit synaptic weight precision. This challenge can be addressed by emulating analog behavior with the stochastic switching between the binary states of spin-transfer torque magnetoresistive random-access memory (STT-MRAM). However, previous STT-MRAM-based approaches operate asynchronously, which is difficult to implement experimentally. This paper presents a synchronous spiking neural network system with clocked circuits that performs unsupervised learning by exploiting the stochastic switching of STT-MRAM. The proposed system enables a single-layer network to achieve 90% inference accuracy on the MNIST dataset.
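The key mechanism in the abstract above, emulating an analog synaptic weight with the stochastic switching of a binary device, can be illustrated with a minimal sketch (the function and parameter names are illustrative, not from the paper):

```python
import random

def stochastic_binary_weight(p_switch, n_pulses, rng=random):
    """Emulate an analog weight with a binary STT-MRAM-like device.

    Each programming event leaves the device in the high (1) state with
    probability `p_switch`; averaging the binary state over many events
    recovers an effective analog weight close to `p_switch`, even though
    the device itself stores only one bit.
    """
    states = [1 if rng.random() < p_switch else 0 for _ in range(n_pulses)]
    return sum(states) / n_pulses

random.seed(0)
effective_weight = stochastic_binary_weight(0.3, 10_000)
```

Averaged over enough switching events, the one-bit device behaves like a weight of roughly 0.3; the synchronous, clocked design described in the paper controls when these switching events are sampled.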
The perceived toxicity of language can vary based on someone's identity and beliefs, but this variation is often ignored when collecting toxic language datasets, leading to dataset and model biases. We seek to understand the who, why, and what behind biases in toxicity annotations. In two online studies with demographically and politically diverse participants, we investigate the effect of annotator identities (who) and beliefs (why), drawing on social psychology research about hate speech, free speech, racist beliefs, political leaning, and more. We disentangle what is annotated as toxic by considering posts with three characteristics: anti-Black language, African American English (AAE) dialect, and vulgarity. Our results show strong associations between annotator identities and beliefs and their toxicity ratings. Notably, more conservative annotators, and those who scored highly on our scale of racist beliefs, were less likely to rate anti-Black language as toxic, but more likely to rate AAE as toxic. We also present a case study illustrating how a popular toxicity detection system's ratings inherently reflect specific beliefs and perspectives. Our findings call for contextualizing toxicity labels with social variables, which has immense implications for toxic language annotation and detection.
Reinforcement learning (RL) requires access to a reward function that incentivizes the right behavior, but such functions are notoriously hard to specify for complex tasks. Preference-based RL provides an alternative: learning policies from a teacher's preferences rather than a pre-defined reward, thereby overcoming concerns associated with reward engineering. However, it is difficult to quantify progress in preference-based RL due to the lack of a commonly adopted benchmark. In this paper, we introduce B-Pref, a benchmark specially designed for preference-based RL. A key challenge for such a benchmark is providing the ability to evaluate candidate algorithms quickly, which makes relying on real human input prohibitive. At the same time, simulating human input as perfect preferences under a ground-truth reward function is unrealistic. B-Pref alleviates this by simulating teachers with a wide array of irrationalities, and proposes metrics not only for performance but also for robustness to these potential irrationalities. We showcase the utility of B-Pref by using it to analyze algorithmic design choices, such as selecting informative queries, for state-of-the-art preference-based RL algorithms. We hope B-Pref can serve as a common starting point for studying preference-based RL more systematically. Source code is available at https://github.com/rll-research/B-Pref.
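A simulated teacher of the kind the abstract describes can be pictured as a Boltzmann-rational preference over two trajectory returns, with a temperature and a mistake rate standing in for the injected irrationalities (the names and exact parameterization here are assumptions for illustration, not B-Pref's API):

```python
import math
import random

def _sigmoid(x):
    # numerically stable logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

def simulated_teacher(return_a, return_b, beta=1.0, eps_mistake=0.0, rng=random):
    """Return 0 if segment A is preferred, 1 if segment B is preferred.

    A large `beta` models a near-perfectly rational teacher; a small
    `beta` or a nonzero `eps_mistake` models the irrational teachers
    against which robustness can be measured.
    """
    p_a = _sigmoid(beta * (return_a - return_b))
    pref = 0 if rng.random() < p_a else 1
    if rng.random() < eps_mistake:  # occasionally flip the label
        pref = 1 - pref
    return pref
```

With `beta` large and `eps_mistake` zero the teacher reliably prefers the higher-return segment; lowering `beta` makes its preferences noisier, which is the regime where algorithmic robustness matters.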
Meta-reinforcement learning (RL) methods can meta-train policies using orders of magnitude less data than standard RL, but meta-training itself is costly and time-consuming. If we could meta-train on offline data, we could reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time. Although this capability would make meta-RL a practical tool for real-world use, offline meta-RL presents challenges beyond those of online meta-RL or standard offline RL. Meta-RL learns an exploration strategy that collects data for adaptation, and also meta-trains a policy that rapidly adapts to data from a new task. Since this policy is meta-trained on a fixed offline dataset, it may behave unpredictably when adapting to data collected by the learned exploration strategy, which differs systematically from the offline data, leading to distribution shift. We propose a hybrid offline meta-RL algorithm that uses offline data with rewards to meta-train an adaptive policy, and then collects additional unsupervised online data, without any reward labels, to bridge this distribution shift. Because it requires no reward labels for online collection, this data can be much cheaper to gather. We compare our method to prior work on offline meta-RL on simulated robot locomotion and manipulation tasks, and find that using additional unsupervised online data collection dramatically improves the adaptive capabilities of the meta-trained policies, matching the performance of fully online meta-RL on a range of challenging domains that require generalization to new tasks.
High content imaging assays can capture rich phenotypic response data for large sets of compound treatments, aiding in the characterization and discovery of novel drugs. However, extracting representative features from high content images that can capture subtle nuances in phenotypes remains challenging. The lack of high-quality labels makes it difficult to achieve satisfactory results with supervised deep learning. Self-supervised learning methods, which learn from automatically generated labels and have shown great success on natural images, offer an attractive alternative for microscopy images. However, we find that self-supervised learning techniques underperform on high content imaging assays. One challenge is the undesirable domain shifts present in the data, known as batch effects, which may be caused by biological noise or uncontrolled experimental conditions. To this end, we introduce Cross-Domain Consistency Learning (CDCL), a novel approach that is able to learn in the presence of batch effects. CDCL enforces the learning of biological similarities while disregarding undesirable batch-specific signals, which leads to more useful and versatile representations. These features are organised according to their morphological changes and are more useful for downstream tasks, such as distinguishing treatments and modes of action.
Machine learning methods have seen increased application to geospatial environmental problems, such as precipitation nowcasting, haze forecasting, and crop yield prediction. However, many of the machine learning methods applied to mosquito population and disease forecasting do not inherently take into account the underlying spatial structure of the given data. In our work, we apply a spatially aware graph neural network model consisting of GraphSAGE layers to forecast the presence of West Nile virus in Illinois, to aid mosquito surveillance and abatement efforts within the state. More generally, we show that graph neural networks applied to irregularly sampled geospatial data can exceed the performance of a range of baseline methods including logistic regression, XGBoost, and fully-connected neural networks.
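A GraphSAGE layer of the kind mentioned above combines a node's own features with an aggregate (here, the mean) of its neighbors' features. A dependency-free sketch with illustrative weights rather than learned ones:

```python
def matvec(w, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def mean_vec(vecs):
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]

def graphsage_mean_layer(features, neighbors, w_self, w_neigh):
    """One GraphSAGE-style layer with mean aggregation.

    h_v = ReLU(W_self . x_v + W_neigh . mean(x_u for u in N(v)))
    `features` maps node -> feature vector; `neighbors` maps node -> list
    of adjacent nodes (e.g. nearby mosquito trap sites).
    """
    out = {}
    for v, x in features.items():
        if neighbors[v]:
            agg = mean_vec([features[u] for u in neighbors[v]])
        else:
            agg = [0.0] * len(x)
        combined = [s + n for s, n in zip(matvec(w_self, x), matvec(w_neigh, agg))]
        out[v] = [max(0.0, c) for c in combined]  # ReLU
    return out

# Toy graph: three trap sites, identity weights for illustration.
identity = [[1.0, 0.0], [0.0, 1.0]]
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
adj = {0: [1, 2], 1: [0], 2: [0]}
h = graphsage_mean_layer(feats, adj, identity, identity)
```

Because the aggregation runs over an explicit neighbor list, the same layer applies directly to irregularly sampled spatial data, which is what distinguishes this family of models from the grid-based baselines in the abstract.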
Large "instruction-tuned" language models (finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written instruction data that is limited in quantity, diversity, and creativity, therefore hindering the generality of the tuned model. We introduce Self-Instruct, a framework for improving the instruction-following capabilities of pretrained language models by bootstrapping off their own generations. Our pipeline generates instruction, input, and output samples from a language model, then prunes them before using them to finetune the original model. Applying our method to vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT_001, which is trained with private user data and human annotations. For further evaluation, we curate a set of expert-written instructions for novel tasks, and show through human evaluation that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT_001. Self-Instruct provides an almost annotation-free method for aligning pre-trained language models with instructions, and we release our large synthetic dataset to facilitate future studies on instruction tuning.
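The generate-then-prune loop described above can be sketched in a few lines. Here `generate` and `is_valid` are stand-ins for sampling new (instruction, input, output) triples from the LM and for the pipeline's filtering heuristics; both are assumptions for illustration, not the paper's implementation:

```python
def self_instruct_round(task_pool, generate, is_valid):
    """One bootstrapping round: sample candidate tasks conditioned on the
    current pool, prune invalid or redundant ones, and grow the pool."""
    candidates = generate(task_pool)
    accepted = [t for t in candidates if is_valid(t, task_pool)]
    return task_pool + accepted

# Toy stand-ins: a fixed "LM" and a duplicate-instruction filter.
def toy_generate(pool):
    return [
        ("Translate to French.", "cat", "chat"),          # duplicate -> pruned
        ("List three colors.", "", "red, green, blue"),   # novel -> kept
    ]

def not_duplicate(task, pool):
    return task[0] not in {t[0] for t in pool}

seed_tasks = [("Translate to French.", "cat", "chat")]
pool = self_instruct_round(seed_tasks, toy_generate, not_duplicate)
```

In the real pipeline the accepted samples are ultimately used to finetune the original model, and the loop repeats with the grown pool.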
Models of sensory processing and learning in the cortex need to efficiently assign credit to synapses in all areas. In deep learning, a known solution is error backpropagation, which however requires biologically implausible weight transport from feed-forward to feedback paths. We introduce Phaseless Alignment Learning (PAL), a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies. This is achieved by exploiting the noise naturally found in biophysical systems as an additional carrier of information. In our dynamical system, all weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses. Our method is completely phase-free (no forward and backward passes or phased learning) and allows for efficient error propagation across multi-layer cortical hierarchies, while maintaining biologically plausible signal transport and learning. Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment: compared to random synaptic feedback, it can solve complex tasks with fewer neurons and learn more useful latent representations. We demonstrate this on various classification tasks using a cortical microcircuit model with prospective coding.
Language models (LMs) have demonstrated remarkable performance on downstream tasks, using in-context exemplars or human instructions. Recent works have shown that chain-of-thought (CoT) prompting can elicit models to solve complex reasoning tasks, step-by-step. However, the efficacy of prompt-based CoT methods is restricted to very large LMs such as GPT-3 (175B), thus limiting deployability. In this paper, we revisit the fine-tuning approach to enable complex reasoning in smaller LMs, optimized to efficiently perform a specific task. We propose Fine-tune-CoT, a method that leverages the capabilities of very large LMs to generate reasoning samples and teach smaller models via fine-tuning. We evaluate our method on publicly available LMs across a wide range of complex tasks and model sizes. We find that Fine-tune-CoT enables substantial reasoning capability in small models, whereas previous prompt-based baselines exhibit near-random performance. Student models can even outperform the teacher in some tasks while reducing model size requirements by several orders of magnitude. We conduct extensive ablations and sample studies to understand the reasoning capabilities of student models. We also identify several important nuances that have been overlooked in concurrent fine-tuning works on CoT and address them in our analysis.
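The data-generation step of Fine-tune-CoT can be sketched as follows: a very large teacher LM produces a rationale and an answer for each question, and only samples whose answer matches the reference are kept for fine-tuning the student. The `teacher` callable and the sample format here are illustrative assumptions:

```python
def build_cot_dataset(questions, reference_answers, teacher):
    """Collect (prompt, completion) pairs for fine-tuning a student model.

    `teacher(q)` stands in for prompting a large LM (e.g. with zero-shot
    chain-of-thought) and returns a (rationale, predicted_answer) pair.
    Samples with an incorrect final answer are discarded so the student
    is not fine-tuned on faulty reasoning.
    """
    dataset = []
    for q in questions:
        rationale, pred = teacher(q)
        if pred == reference_answers[q]:
            dataset.append(
                {"prompt": q, "completion": f"{rationale} The answer is {pred}."}
            )
    return dataset

# Toy teacher: right on one question, wrong on the other.
def toy_teacher(q):
    return ("3 + 4 = 7.", "7") if "3 + 4" in q else ("Guessing.", "0")

data = build_cot_dataset(
    ["What is 3 + 4?", "What is 5 + 6?"],
    {"What is 3 + 4?": "7", "What is 5 + 6?": "11"},
    toy_teacher,
)
```

The resulting pairs are then used with an ordinary fine-tuning loop on the smaller student model; the filtering step is what lets a noisy teacher still yield a clean training signal.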