Code generation models have achieved impressive performance. However, they tend to be brittle: slight edits to a prompt can lead to very different generations. These robustness properties, critical for user experience when models are deployed in real-life applications, are not well understood. Most existing work on robustness in text or code tasks has focused on classification, while robustness in generation tasks is an uncharted area, and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code, covering docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice and to preserve the original semantic meaning, and thus provide a multifaceted assessment of a model's robustness. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models that consider the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code serves as an objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, and function completion tasks derived from them. Interesting observations include: CodeGen is more robust than InCoder and GPT-J; models are most sensitive to syntax perturbations; and robustness evaluation is more challenging on MBPP than on HumanEval.
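To make the transformation categories concrete, here is a minimal sketch of one semantic-preserving prompt perturbation in the spirit of ReCode. The helper below is illustrative, not the benchmark's actual code: consistently renaming an identifier leaves the coding task unchanged, yet can still flip a brittle model's generation.

```python
# Illustrative semantic-preserving perturbation (hypothetical helper,
# not ReCode's implementation): rename an identifier throughout a prompt.
import re

def rename_identifier(prompt: str, old: str, new: str) -> str:
    """Rename a variable/function name using word boundaries so that
    substrings inside other identifiers are left untouched."""
    return re.sub(rf"\b{re.escape(old)}\b", new, prompt)

prompt = 'def add_two(nums):\n    """Add 2 to every element of nums."""\n'
perturbed = rename_identifier(prompt, "nums", "values")
print(perturbed)
```

The task's semantics are identical before and after the rename; a robust model should complete both prompts equivalently.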
We present BotSIM, a data-efficient end-to-end Bot SIMulation toolkit for commercial text-based task-oriented dialog (TOD) systems. BotSIM consists of three major components: 1) a Generator that can infer semantic-level dialog acts and entities from bot definitions and generate user queries via model-based paraphrasing; 2) an agenda-based dialog user Simulator (ABUS) to simulate conversations with the dialog agents; 3) a Remediator to analyze the simulated conversations, visualize the bot health reports, and provide actionable remediation suggestions for bot troubleshooting and improvement. We demonstrate BotSIM's effectiveness in end-to-end evaluation, remediation, and multi-intent dialog generation via case studies on two commercial bot platforms. BotSIM's "generation-simulation-remediation" paradigm accelerates the end-to-end bot evaluation and iteration process by: 1) reducing manual test case creation effort; 2) enabling a holistic gauge of the bot in terms of NLU and end-to-end performance via extensive dialog simulation; 3) improving the bot troubleshooting process with actionable suggestions. A demo of our system can be found at https://tinyurl.com/mryu74cd and a demo video at https://youtu.be/qLi5iSoly30. We have open-sourced the toolkit at https://github.com/salesforce/botsim
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Authorship attribution is the task of identifying the author of a given text. Most existing approaches use manually designed features that capture a dataset's content and style. However, these dataset-dependent approaches yield inconsistent performance. We therefore propose fine-tuning pre-trained language representations using a combination of contrastive learning and supervised learning (Contra-X). We show that Contra-X improves over state-of-the-art methods on multiple human and machine authorship attribution benchmarks, with gains of up to 6.8%. We also show that Contra-X consistently outperforms cross-entropy fine-tuning across different data regimes. Crucially, we present qualitative and quantitative analyses of these improvements. Our learned representations form highly separable clusters for different authors. However, we find that contrastive learning improves overall accuracy at the cost of performance for some authors. Resolving this tension will be an important direction for future work. To the best of our knowledge, we are the first to analyze the effect of combining contrastive learning with cross-entropy fine-tuning for authorship attribution.
Humans with an average level of social cognition can infer the beliefs of others based solely on nonverbal communication signals (e.g., gaze, gestures, posture, and contextual information). This social-cognitive ability to predict human beliefs and intentions is more important than ever for ensuring safe human-robot interaction and collaboration. This paper uses the combined knowledge of Theory of Mind (ToM) and object-context relations to investigate methods for enhancing collaboration between humans and autonomous systems in environments where verbal communication is prohibited. We propose a novel and challenging multimodal video dataset for assessing the capability of artificial intelligence (AI) systems to predict human belief states in object-context scenarios. The proposed dataset consists of precisely labeled human belief state ground truth and multimodal inputs that replicate all the nonverbal communication inputs captured by human perception. We further evaluate the dataset with existing deep learning models and provide new insights into the effects of various input modalities and object-context relations on the performance of the baseline models.
In practice, asking for help is often more efficient than searching the entire space to find an object whose location is unknown. We present a learning framework that enables an agent to actively seek help in such embodied visual navigation tasks, where the feedback informs the agent of the goal's location. To mimic the realistic setting in which a teacher may not always be present, we propose a training curriculum in which feedback is not always available. We formulate an uncertainty measure over the goal and show with empirical results that, with this approach, the agent learns to seek help effectively when feedback is available while remaining robust when it is not.
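The core idea of asking for help only when uncertain can be sketched as follows. This is a hypothetical illustration, not the paper's actual uncertainty measure: the agent requests feedback when the entropy of its belief over candidate goal locations exceeds a threshold.

```python
# Hypothetical sketch: trigger a help request when belief entropy is high.
# The threshold value and belief representation are illustrative assumptions.
import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete belief distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_ask_for_help(belief, threshold=1.0):
    """Ask the teacher for the goal location only when uncertainty is high."""
    return entropy(belief) > threshold

confident = [0.9, 0.05, 0.05]        # agent is fairly sure where the goal is
uncertain = [0.25, 0.25, 0.25, 0.25]  # agent has no idea; ask for help
```

With this gating, the agent falls back on its own search policy whenever it is confident, which is what lets it remain effective when no teacher is present.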
Children's cognitive abilities are sometimes cited as AI benchmarks. How can the 1,000 most common concepts (covering 89% of everyday use) be learned in a naturalistic child environment? Children's cognitive development is qualitative: new concepts can be conveyed through simple examples. Our knowledge-scaffolding approach uses simple objects and actions to convey concepts, much as one would teach a child. We introduce ABCDE, an interactive 3D environment modeled after a typical child's playroom. It comes with over 300 unique 3D object assets (mostly toys) and a large action space for child and parent agents to interact with the objects. ABCDE is the first environment designed to mimic a naturalistic setting for children's cognitive development; no other environment studies high-level concept learning through learner interaction. The simulator is available at https://pypi.org/project/abcdesim/1.0.0/
What are the units of text we want to model? From bytes to multi-word expressions, text can be analyzed and generated at many granularities. Until recently, most natural language processing (NLP) models operated over words, treating them as discrete and atomic tokens, but starting with byte-pair encoding (BPE), subword-based approaches have become dominant in many areas, enabling small vocabularies while still allowing for fast inference. Is the end of the road character-level or byte-level processing? In this survey, we connect several lines of work by presenting and evaluating hybrid approaches that combine words and characters, as well as subword-based approaches built on learned segmentation. We conclude that there is not, and likely never will be, a silver-bullet solution for all applications, and that thinking seriously about tokenization remains important for many applications.
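To ground the discussion of subword methods, here is a minimal sketch of a single BPE merge step on a toy corpus. The corpus and helper functions are illustrative, not taken from any specific tokenizer library: BPE repeatedly merges the most frequent adjacent symbol pair into a new vocabulary entry.

```python
# Minimal illustration of one byte-pair-encoding (BPE) merge step
# on a toy corpus of symbol sequences with frequencies.
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

corpus = {("l", "o"): 3, ("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2}
pair = most_frequent_pair(corpus)   # ("l", "o") occurs 10 times
corpus = merge_pair(corpus, pair)
```

Iterating this merge step until a target vocabulary size is reached yields the small-but-expressive vocabularies the survey describes.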
Data augmentation is an important component of the robustness evaluation of natural language processing (NLP) models, as well as of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available in the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
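The two abstractions the framework supports can be sketched as follows. The class names and method signatures here are hypothetical illustrations of a transformation and a filter, not NL-Augmenter's actual interfaces.

```python
# Hypothetical sketch of a transformation (modifies data) and a filter
# (selects a data split); simplified and deterministic for illustration.
class ButterFingersTransformation:
    """Simulate typing errors by swapping characters (simplified version)."""
    def generate(self, sentence: str) -> str:
        # Swap the first two characters of each word longer than 3 letters.
        words = []
        for w in sentence.split():
            words.append(w[1] + w[0] + w[2:] if len(w) > 3 else w)
        return " ".join(words)

class LengthFilter:
    """Keep only sentences with at most `max_words` tokens."""
    def __init__(self, max_words: int = 10):
        self.max_words = max_words

    def filter(self, sentence: str) -> bool:
        return len(sentence.split()) <= self.max_words
```

A transformation perturbs examples to probe robustness; a filter carves out a subpopulation of the data so performance can be reported on a targeted split.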
Numerous visio-linguistic (V+L) representation learning methods have been developed, yet existing datasets do not evaluate the extent to which they represent visual and linguistic concepts in a unified space. Inspired by the crossmodal transfer and psycholinguistics literature, we propose a novel evaluation setting for V+L models: zero-shot cross-modal transfer. Existing V+L benchmarks also often report global accuracy scores on the entire dataset, making it difficult to pinpoint the specific reasoning tasks at which models fail or succeed. To address this and to enable evaluation of cross-modal transfer, we present TraVLR, a synthetic dataset comprising four V+L reasoning tasks. Each example encodes the scene bimodally, such that either modality can be dropped during training/testing without losing relevant information. TraVLR's training and testing distributions are also constrained along task-relevant dimensions, enabling the evaluation of out-of-distribution generalization. We evaluate four state-of-the-art V+L models and find that while they perform well on test examples from the same modality, they all fail to transfer cross-modally and have limited success accommodating the addition or deletion of a modality. In line with prior work, we also find that these models require large amounts of data to learn simple spatial relationships. We release TraVLR as an open challenge for the research community.
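The bimodal redundant encoding can be sketched as a data structure. The field names below are hypothetical, not the released dataset's schema: the same scene is carried both as a symbolic stand-in for the visual rendering and as a textual caption, so either modality can be dropped without information loss.

```python
# Illustrative sketch of bimodal redundant encoding; field names are
# assumptions for illustration, not TraVLR's actual format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BimodalExample:
    scene_grid: Optional[tuple]  # symbolic stand-in for the visual rendering
    caption: Optional[str]       # textual description of the same scene
    question: str
    label: bool

    def drop_modality(self, modality: str) -> "BimodalExample":
        """Return a copy with one modality removed, as in the
        zero-shot cross-modal transfer setting."""
        if modality == "visual":
            return BimodalExample(None, self.caption, self.question, self.label)
        return BimodalExample(self.scene_grid, None, self.question, self.label)

ex = BimodalExample(
    scene_grid=(("circle", 0, 0), ("square", 1, 0)),
    caption="A circle is left of a square.",
    question="Is the circle left of the square?",
    label=True,
)
text_only = ex.drop_modality("visual")
```

Because both modalities encode the full scene, training on one modality and testing on the other is a well-posed probe of whether concepts live in a unified space.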