Parameter-efficient fine-tuning (PEFT) methods can adapt large language models to downstream tasks by training a small number of newly added parameters. In multi-task settings, PEFT adapters are typically trained either on each task independently, which inhibits transfer across tasks, or on the concatenation of all tasks, which can lead to negative interference. To address this, Polytropon (Ponti et al.) jointly learns an inventory of PEFT adapters and a routing function that shares variable-size sets of adapters across tasks. Subsequently, adapters can be re-combined and fine-tuned on novel tasks even with limited data. In this paper, we investigate to what extent the ability to control which adapters are active for each task leads to sample-efficient generalization. Thus, we propose less expressive variants where we perform a weighted average of the adapters before few-shot adaptation (Poly-mu) instead of learning a routing function. Moreover, we introduce more expressive variants where finer-grained task-adapter allocation is learned through a multi-head routing function (Poly-S). We test these variants on three separate benchmarks for multi-task learning. We find that Poly-S achieves gains on all three (up to 5.3 points on average) over strong baselines, while incurring a negligible additional cost in parameter count. In particular, we find that instruction tuning, where models are fully fine-tuned on natural language instructions for each task, is inferior to modular methods such as Polytropon and our proposed variants.
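To make the contrast between the variants concrete, below is a minimal PyTorch-style sketch under assumed LoRA-shaped adapters and made-up names (not the authors' implementation): Poly-mu collapses the inventory into one adapter through a fixed weighted average before few-shot tuning, whereas Poly-S learns a separate mixing distribution per routing head.

```python
import torch
import torch.nn as nn

class AdapterInventory(nn.Module):
    """Hypothetical inventory of K low-rank adapters shared across tasks."""
    def __init__(self, k_adapters: int, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(k_adapters, d_in, rank) * 0.02)
        self.B = nn.Parameter(torch.zeros(k_adapters, rank, d_out))

    def combine(self, weights: torch.Tensor) -> torch.Tensor:
        # weights: (K,) mixing coefficients -> one low-rank update W = sum_k w_k A_k B_k
        return torch.einsum("k,kir,kro->io", weights, self.A, self.B)

# Poly-mu-style combination: a plain weighted average of the adapters, used as the
# starting point for few-shot fine-tuning instead of a learned routing function.
inventory = AdapterInventory(k_adapters=4, d_in=16, d_out=16)
uniform_w = torch.full((4,), 0.25)
delta_mu = inventory.combine(uniform_w)

# Poly-S-style routing (sketch): a learned routing matrix with one distribution per
# head, giving finer-grained task-adapter allocation than a single distribution.
n_heads = 4
routing_logits = nn.Parameter(torch.zeros(n_heads, 4))             # (heads, K)
per_head_w = torch.softmax(routing_logits, dim=-1)                  # each head mixes adapters differently
delta_s = torch.stack([inventory.combine(w) for w in per_head_w])   # (heads, d_in, d_out)
print(delta_mu.shape, delta_s.shape)
```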
Common policy gradient methods rely on the maximization of a sequence of surrogate functions. In recent years, many such surrogate functions have been proposed, most without strong theoretical guarantees, leading to algorithms such as TRPO, PPO, or MPO. Rather than designing yet another surrogate function, we propose a general framework based on functional mirror ascent (FMA-PG) that gives rise to an entire family of surrogate functions. We construct surrogate functions that enable guaranteed policy improvement, a property not shared by most existing surrogate functions. Crucially, these guarantees hold regardless of the choice of policy parameterization. Moreover, a particular instantiation of FMA-PG recovers important implementation heuristics (e.g., using forward vs. reverse KL divergence), resulting in a variant of TRPO with additional desirable properties. Via experiments on simple bandit problems, we evaluate the algorithms instantiated by FMA-PG. The proposed framework also suggests an improved variant of PPO, whose robustness and efficiency we demonstrate on the MuJoCo suite.
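For intuition, the generic mirror ascent update that such a framework builds on can be written (in standard notation, not the paper's exact formulation) as

$$\pi_{t+1} = \arg\max_{\pi}\; \big\langle \nabla_{\pi} J(\pi_t),\, \pi - \pi_t \big\rangle - \frac{1}{\eta}\, D_{\Phi}(\pi, \pi_t),$$

where $D_{\Phi}$ is a Bregman divergence; choosing $\Phi$ as the negative entropy turns $D_{\Phi}$ into a KL divergence, which is how forward-versus-reverse-KL heuristics and TRPO/PPO-style surrogates can emerge as special cases.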
We study the stochastic bilinear minimax optimization problem, presenting an analysis of the same-sample stochastic extragradient (SEG) method with constant step size and presenting variations of the method that yield favorable convergence. In sharp contrast to the basic SEG method, whose last iterate only contracts to a fixed neighborhood of the Nash equilibrium, SEG augmented with iteration averaging provably converges to the Nash equilibrium under the same standard settings, and this rate is further improved by incorporating a scheduled restarting procedure. In the interpolation setting, where the noise vanishes at the Nash equilibrium, we achieve an optimal convergence rate up to constant factors. We present numerical experiments that validate our theoretical findings and demonstrate the effectiveness of the SEG method when equipped with iteration averaging and restarting.
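For reference, the same-sample SEG update with constant step size $\gamma$ on the saddle operator $F$ and stochastic sample $\xi_t$, together with the iterate average used in the analysis, takes the form

$$z_{t+1/2} = z_t - \gamma F(z_t;\, \xi_t), \qquad z_{t+1} = z_t - \gamma F(z_{t+1/2};\, \xi_t), \qquad \bar{z}_T = \frac{1}{T} \sum_{t=1}^{T} z_t,$$

where "same-sample" means the draw $\xi_t$ is reused in both the extrapolation and the update step; scheduled restarting periodically re-initializes the iterates at the current average.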
Data augmentation is an important component in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, data cards, and robustness analysis results are publicly available in the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
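To illustrate the transformation/filter distinction, here is a minimal sketch in plain Python; the class names and method signatures are hypothetical and are not the actual NL-Augmenter interface.

```python
class LowercaseTransformation:
    """Illustrative transformation: returns modified copies of the input text."""
    def generate(self, sentence: str) -> list[str]:
        return [sentence.lower()]

class ShortSentenceFilter:
    """Illustrative filter: decides whether an example belongs to a data split."""
    def __init__(self, max_words: int = 10):
        self.max_words = max_words

    def keep(self, sentence: str) -> bool:
        return len(sentence.split()) <= self.max_words

data = ["NL-Augmenter supports transformations and filters.", "Short example."]
aug = LowercaseTransformation()
flt = ShortSentenceFilter(max_words=5)
augmented = [out for s in data for out in aug.generate(s)]   # perturbed copies for robustness tests
subset = [s for s in data if flt.keep(s)]                    # feature-specific evaluation split
print(augmented, subset)
```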
Recent research has shown remarkable performance in leveraging multiple extraneous conditional and non-mutually exclusive semantic concepts for sound source separation, allowing the flexibility to extract a given target source based on multiple different queries. In this work, we propose a new optimal condition training (OCT) method for single-channel target source separation, based on greedy parameter updates using the highest performing condition among equivalent conditions associated with a given target source. Our experiments show that the complementary information carried by the diverse semantic concepts significantly helps to disentangle and isolate sources of interest much more efficiently compared to single-conditioned models. Moreover, we propose a variation of OCT with condition refinement, in which an initial conditional vector is adapted to the given mixture and transformed to a more amenable representation for target source extraction. We showcase the effectiveness of OCT on diverse source separation experiments where it improves upon permutation invariant models with oracle assignment and obtains state-of-the-art performance in the more challenging task of text-based source separation, outperforming even dedicated text-only conditioned models.
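A rough sketch of the greedy update behind OCT, with a hypothetical toy separator and an L1 stand-in loss rather than the authors' architecture and objective:

```python
import torch

def oct_step(model, mixture, target, condition_vectors, optimizer):
    """One optimal-condition-training step: among the equivalent conditions for a
    target source, update parameters only through the best-performing one."""
    with torch.no_grad():
        losses = [torch.nn.functional.l1_loss(model(mixture, c), target)
                  for c in condition_vectors]          # e.g., different queries for the same source
    best = int(torch.argmin(torch.stack(losses)))      # highest-performing (lowest-loss) condition
    optimizer.zero_grad()
    loss = torch.nn.functional.l1_loss(model(mixture, condition_vectors[best]), target)
    loss.backward()                                    # greedy update w.r.t. the optimal condition only
    optimizer.step()
    return best, loss.item()

# toy usage with a stand-in conditioned model
class ToySeparator(torch.nn.Module):
    def __init__(self, dim: int = 8):
        super().__init__()
        self.proj = torch.nn.Linear(2 * dim, dim)
    def forward(self, mixture, cond):
        return self.proj(torch.cat([mixture, cond.expand_as(mixture)], dim=-1))

model = ToySeparator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mix, tgt = torch.randn(4, 8), torch.randn(4, 8)
conds = [torch.randn(8) for _ in range(3)]
print(oct_step(model, mix, tgt, conds, opt))
```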
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
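As a usage note, the released checkpoints can be loaded with the Hugging Face transformers library; a minimal sketch assuming the bigscience/bloom checkpoint identifier and enough memory (the full 176B model requires multi-GPU or offloaded inference via accelerate):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom"  # smaller variants such as "bigscience/bloom-560m" also exist
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs the accelerate package installed
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("BLOOM is a 176B-parameter open-access language model that",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```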
Hyperspectral unmixing remains one of the most challenging tasks in the analysis of hyperspectral data. Deep learning has been blooming in the field, has been shown to outperform other classical unmixing techniques, and can be effectively deployed onboard Earth observation satellites equipped with hyperspectral imagers. In this letter, we follow this research pathway and propose a multi-branch convolutional neural network that benefits from fusing spectral, spatial, and spectral-spatial features in the unmixing process. Our experimental results, backed by an ablation study, show that our technique outperforms others from the literature and leads to higher-quality fractional abundance estimation. Additionally, we investigate the impact of reducing the training sets on the capabilities of all algorithms and on their robustness to noise, since capturing large and representative ground-truth sets is time-consuming and costly in practice, especially in emerging Earth observation scenarios.
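A minimal PyTorch-style sketch of the general multi-branch idea (an illustrative architecture, not the authors' exact network): a spectral branch with 1D convolutions over the centre-pixel spectrum, a spatial branch with 2D convolutions, and a spectral-spatial branch with 3D convolutions, fused before regressing per-pixel fractional abundances.

```python
import torch
import torch.nn as nn

class MultiBranchUnmixer(nn.Module):
    """Illustrative multi-branch CNN for unmixing: input patches of shape
    (B, bands, H, W), output fractional abundances over n_endmembers."""
    def __init__(self, bands: int, n_endmembers: int):
        super().__init__()
        self.spectral = nn.Sequential(                         # 1D convs over the spectral axis
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.spatial = nn.Sequential(                          # 2D convs over the spatial axes
            nn.Conv2d(bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.spectral_spatial = nn.Sequential(                 # 3D convs over bands and space jointly
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(48, 64), nn.ReLU(),
                                  nn.Linear(64, n_endmembers), nn.Softmax(dim=-1))

    def forward(self, x):                                      # x: (B, bands, H, W)
        b, c, h, w = x.shape
        center = x[:, :, h // 2, w // 2].unsqueeze(1)          # (B, 1, bands) centre-pixel spectrum
        fused = torch.cat([self.spectral(center), self.spatial(x),
                           self.spectral_spatial(x.unsqueeze(1))], dim=-1)
        return self.head(fused)                                # abundances sum to 1 via softmax

model = MultiBranchUnmixer(bands=162, n_endmembers=4)          # band count chosen for illustration
print(model(torch.randn(2, 162, 5, 5)).shape)                  # torch.Size([2, 4])
```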
Maintaining farm sustainability by optimizing agricultural management practices helps build a more planet-friendly environment. Emerging satellite missions can acquire multispectral imagery that captures more detailed spectral information about the scanned area, so we can benefit from subtle spectral features during the analysis process in agricultural applications. We introduce an approach for extracting 2.5 m cultivated-land maps from 10 m Sentinel-2 multispectral image series that benefits from a compact convolutional neural network. The experiments showed that our models not only outperform classical and deep machine learning techniques by delivering higher-quality segmentation maps, but also dramatically reduce the memory footprint compared with U-Nets (our models have far fewer trainable parameters than U-Nets with up to 31M parameters). Such memory frugality is pivotal in missions that allow a model to be uplinked to an AI-powered satellite once it is already in orbit, since sending large networks is infeasible due to time constraints.
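As a sketch of the kind of compact network that maps a 10 m multispectral series to a 2.5 m map (a 4x spatial upsampling), with illustrative channel counts and layer sizes rather than the authors' configuration:

```python
import torch
import torch.nn as nn

class CompactCultivationNet(nn.Module):
    """Illustrative compact segmentation net: stacked Sentinel-2 acquisitions in,
    a 4x-upsampled (10 m -> 2.5 m) cultivated-land probability map out."""
    def __init__(self, in_channels: int, hidden: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU())
        # sub-pixel convolution produces the 4x finer output grid
        self.upsample = nn.Sequential(
            nn.Conv2d(hidden, 16, 3, padding=1), nn.PixelShuffle(4),
            nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):                        # x: (B, bands * timesteps, H, W) at 10 m
        return self.upsample(self.features(x))   # (B, 1, 4H, 4W) at 2.5 m

net = CompactCultivationNet(in_channels=10 * 3)   # e.g., 10 bands x 3 acquisitions (illustrative)
print(sum(p.numel() for p in net.parameters()))   # a few thousand parameters vs. millions in a U-Net
print(net(torch.randn(1, 30, 64, 64)).shape)      # torch.Size([1, 1, 256, 256])
```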
Recent advances in variational autoencoders (VAEs) have made it possible to learn latent manifolds as compact Lie groups such as $SO(d)$. Since this approach assumes that the data lie on a subspace homeomorphic to the Lie group itself, we investigate here how this assumption holds in the context of images generated by projecting a $d$-dimensional volume whose pose ranges over $SO(d)$. After examining different theoretical candidates for the group and image spaces, we show that attempts to define a group action on the data space generally fail, as this requires more specific geometric constraints on the volume. Using geometric VAEs, our experiments confirm that this constraint is key to proper pose inference, and we discuss the potential of these results for applications and future work.
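For readers unfamiliar with the terminology, "defining a group action on the data space" means exhibiting a map satisfying the usual action axioms, written here generically:

$$\rho: SO(d) \times X \to X, \qquad \rho(e, x) = x, \qquad \rho(g, \rho(h, x)) = \rho(gh, x) \quad \forall\, g, h \in SO(d),\ x \in X;$$

the negative result above says that, for images of projected volumes, such a $\rho$ compatible with the rendering process generally does not exist unless the volume satisfies additional geometric constraints.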
Recent approaches to optical flow estimation rely on deep learning, which requires complex sequential training schemes to reach optimal performance on real-world data. In this work, we introduce a combined deep network that explicitly exploits the brightness constancy (BC) model used in traditional methods. Since BC is an approximate physical model that is violated in several situations, we propose to train a physically constrained network complemented by a data-driven one. We introduce a unique and meaningful flow decomposition between the physical prior and the data-driven complement, including an uncertainty quantification of the BC model. We derive a joint training scheme for learning the different components of the decomposition that ensures optimal cooperation in supervised as well as semi-supervised settings. Experiments show that the combined network improves upon state-of-the-art supervised networks such as RAFT, reaching state-of-the-art results on several benchmarks. We highlight how the combination leverages the BC model and adapts to its limitations. Finally, we show that our semi-supervised approach can significantly simplify the training procedure.
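For completeness, the brightness constancy model referred to above states that image intensity is preserved along the flow $\mathbf{w}$; in its standard form and linearization,

$$I(\mathbf{x} + \mathbf{w}(\mathbf{x}),\, t+1) = I(\mathbf{x},\, t) \quad\Longrightarrow\quad \nabla I(\mathbf{x}, t) \cdot \mathbf{w}(\mathbf{x}) + \partial_t I(\mathbf{x}, t) \approx 0,$$

which is the approximate prior enforced by the physically constrained branch; its well-known violations (e.g., occlusions or illumination changes) are what the data-driven complement and the uncertainty term are meant to absorb.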