Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further increases in model size become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT (Devlin et al., 2019). Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at https://github.com/google-research/ALBERT.
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve models' generalization. We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural language generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transformer layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline by a decent margin (90.3 versus 89.8). The pre-trained DeBERTa models and the source code were released at: https://github.com/microsoft/DeBERTa 1 .
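The disentangled attention described above can be made concrete with a small sketch. The following is a minimal single-head NumPy illustration, assuming clipped relative distances and the sqrt(3d) scaling mentioned in the paper; names, shapes, and the position-bucketing scheme are simplifications, not the released implementation.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def disentangled_attention(H, P_rel, Wq_c, Wk_c, Wq_r, Wk_r, Wv, k=4):
        """H: (n, d) content states; P_rel: (2k, d) relative-position table."""
        n, d = H.shape
        Qc, Kc, V = H @ Wq_c, H @ Wk_c, H @ Wv        # content projections
        Qr, Kr = P_rel @ Wq_r, P_rel @ Wk_r           # relative-position projections

        idx = np.arange(n)
        delta = np.clip(idx[None, :] - idx[:, None], -k, k - 1) + k  # clipped distances

        c2c = Qc @ Kc.T                                       # content-to-content
        c2p = np.take_along_axis(Qc @ Kr.T, delta, axis=1)    # content-to-position
        p2c = np.take_along_axis(Kc @ Qr.T, delta, axis=1).T  # position-to-content

        scores = (c2c + c2p + p2c) / np.sqrt(3 * d)   # the paper scales by sqrt(3d)
        return softmax(scores) @ V

The point of the decomposition is that the position-dependent terms are computed against a small (2k, d) relative-position table rather than per-token absolute position vectors mixed into the content.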
Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.
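As a concrete picture of how a replaced-token-detection example is built, here is a toy Python sketch. The generator is reduced to a hypothetical unigram-sampling stub (generator_sample); in ELECTRA proper it is a small MLM, and a sampled token that happens to equal the original is labeled as original, as below.

    import numpy as np

    rng = np.random.default_rng(0)

    def generator_sample(vocab):
        # Hypothetical stub standing in for the small generator MLM.
        return vocab[rng.integers(len(vocab))]

    def make_rtd_example(tokens, vocab, mask_prob=0.15):
        corrupted, labels = [], []
        for t in tokens:
            if rng.random() < mask_prob:
                s = generator_sample(vocab)   # plausible replacement
                corrupted.append(s)
                labels.append(int(s != t))    # 1 = replaced, 0 = original
            else:
                corrupted.append(t)
                labels.append(0)
        # The discriminator predicts `labels` at every position of `corrupted`,
        # so the loss covers all input tokens, not just the masked subset.
        return corrupted, labels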
Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
This paper presents a new pre-trained language model, DeBERTaV3, which improves the original DeBERTa model by replacing masked language modeling (MLM) with replaced token detection (RTD), a more sample-efficient pre-training task. Our analysis shows that vanilla embedding sharing in ELECTRA hurts training efficiency and model performance, because the discriminator's and the generator's training losses pull the token embeddings in different directions, creating "tug-of-war" dynamics. We therefore propose a new gradient-disentangled embedding sharing method that avoids these tug-of-war dynamics, improving both training efficiency and the quality of the pre-trained model. We pre-trained DeBERTaV3 using the same settings as DeBERTa to demonstrate its exceptional performance on a wide range of downstream natural language understanding (NLU) tasks. Taking the GLUE benchmark with eight tasks as an example, the DeBERTaV3-Large model achieves a 91.37% average score, 1.37% over DeBERTa and 1.91% over ELECTRA, setting a new state of the art (SOTA) among models with a similar structure. Furthermore, we pre-trained a multilingual model, mDeBERTa, and observed larger improvements over strong baselines than for the English models. For example, mDeBERTa-Base achieves 79.8% zero-shot cross-lingual accuracy on XNLI, a 3.6% improvement over XLM-R-Base, creating a new SOTA on this benchmark. We have made our pre-trained models and inference code publicly available at https://github.com/microsoft/DeBERTa.
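The gradient-disentangled embedding sharing can be sketched in a few lines of PyTorch: the discriminator sees the generator's embedding table only through a stop-gradient plus a learned residual, so the RTD loss cannot drag the shared table against the MLM loss. Class and attribute names here are illustrative assumptions, not the released code.

    import torch.nn as nn

    class GDESEmbedding(nn.Module):
        def __init__(self, vocab_size, dim):
            super().__init__()
            self.shared = nn.Embedding(vocab_size, dim)  # updated by the generator's MLM loss
            self.delta = nn.Embedding(vocab_size, dim)   # updated by the discriminator's RTD loss
            nn.init.zeros_(self.delta.weight)

        def generator_embed(self, ids):
            return self.shared(ids)

        def discriminator_embed(self, ids):
            # detach() blocks RTD gradients from reaching the shared table,
            # avoiding the tug-of-war between the two objectives.
            return self.shared(ids).detach() + self.delta(ids)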
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and successive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we first introduce the whole word masking (WWM) strategy for Chinese BERT, along with a series of Chinese pre-trained language models. We then propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways. In particular, we propose a new masking strategy called MLM as correction (Mac). To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc. We carried out extensive experiments on ten Chinese NLP tasks to evaluate the created Chinese pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT can achieve state-of-the-art performance on many NLP tasks, and we also ablate details with several findings that may help future research. We open-source our pre-trained language models to further facilitate our research community. Resources are available at: https://github.com/ymcui/chinese-bert-wwm
We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE. Our code and pre-trained models are available at https://github.com/facebookresearch/SpanBERT.
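The span masking step can be illustrated with a short NumPy sketch: span lengths are drawn from a clipped geometric distribution and whole spans are masked until roughly 15% of tokens are covered. The constants follow the paper's description, but the sampling details are approximations, and the span boundary objective (predicting each span token from the two boundary representations plus a position embedding) is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_masked_spans(n_tokens, budget_ratio=0.15, p=0.2, max_len=10):
        budget = int(n_tokens * budget_ratio)
        masked = set()
        while len(masked) < budget:
            length = min(rng.geometric(p), max_len)   # geometric span length, clipped
            start = rng.integers(0, n_tokens)
            masked.update(range(start, min(start + length, n_tokens)))
        return sorted(masked)   # positions to corrupt, BERT-style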
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
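Concretely, the text-to-text format turns every task into a string-to-string pair distinguished only by a task prefix; even regression targets become strings. The pairs below mirror the examples pictured in the paper, but treat the exact prefix strings as conventions rather than requirements.

    # (input string, target string): one format for every task
    examples = [
        ("translate English to German: That is good.", "Das ist gut."),
        ("cola sentence: The course is jumping well.", "not acceptable"),
        ("stsb sentence1: The rhino grazed on the grass. "
         "sentence2: A rhino is grazing in a field.", "3.8"),
        ("summarize: state authorities dispatched emergency crews tuesday "
         "to survey the damage after an onslaught of severe weather ...",
         "six people hospitalized after a storm in attala county."),
    ]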
Transformer-based pre-trained models have made great progress in recent years and have become one of the most important backbones in natural language processing. Recent work has shown that the attention mechanism inside the Transformer may not be necessary; both convolutional neural networks and multi-layer-perceptron-based models have been investigated as Transformer alternatives. In this paper, we consider a graph recurrent network for language model pre-training, which builds a graph structure for each sequence with local token-level communication, together with a sentence-level representation decoupled from the other tokens. The original model performs well on domain-specific text classification under supervised training; however, its potential for learning transferable knowledge in a self-supervised way has not been fully exploited. We fill this gap by optimizing the architecture and verifying its effectiveness on more general language understanding tasks, in both English and Chinese. As for model efficiency, our model has linear complexity rather than the quadratic complexity of Transformer-based models, and it performs more efficiently during inference. Moreover, we find that our model can generate more diverse outputs with less contextualized feature redundancy than existing attention-based models.
The quadratic computational and memory complexity of the Transformer attention mechanism limits its scalability for modeling long sequences. In this paper, we propose Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. Specifically, with the first attention function, Luna packs the input sequence into a sequence of fixed length. The packed sequence is then unpacked using the second attention function. Compared with a more traditional attention mechanism, Luna introduces an additional sequence of fixed length as input and an additional corresponding output, which allows Luna to perform the attention operation linearly while also storing adequate contextual information. We perform extensive evaluations on three benchmark sequence modeling tasks: long-context sequence modeling, neural machine translation, and masked language modeling for large-scale pretraining. Competitive or even better experimental results demonstrate both the effectiveness and efficiency of Luna compared to a variety of strong baselines.
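The pack-and-unpack data flow is simple enough to show directly. In the minimal NumPy sketch below, P is the extra fixed-length sequence (length l much smaller than n); both steps are ordinary softmax attention, but each costs O(n*l) rather than O(n^2). Projections, multiple heads, and normalization are omitted; this is the data flow, not the full layer.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attend(Q, K, V):
        return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

    def luna_attention(X, P):
        """X: (n, d) input sequence; P: (l, d) extra fixed-length sequence."""
        packed = attend(P, X, X)        # pack: summarize n tokens into l slots
        Y = attend(X, packed, packed)   # unpack: each token reads the l slots
        return Y, packed                # `packed` is the extra output for the next layer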
Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key to such results, as in tasks like question answering and fact checking, massive parameter counts seem to be needed to store that knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without needing as many parameters, but it is unclear whether they work in few-shot settings. In this work, we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
Recent trends in language modeling have focused on increasing performance through scaling, and have resulted in an environment where training language models is out of reach for most researchers and practitioners. While most in the community are asking how to push the limits of extreme computation, we ask the opposite question: How far can we get with a single GPU in just one day? We investigate the downstream performance achievable with a transformer-based language model trained completely from scratch with masked language modeling for a single day on a single consumer GPU. Aside from re-analyzing nearly all components of the pretraining pipeline for this scenario and providing a modified pipeline with performance close to BERT, we investigate why scaling down is hard, and which modifications actually improve performance in this scenario. We provide evidence that even in this constrained setting, performance closely follows scaling laws observed in large-compute settings. Through the lens of scaling laws, we categorize a range of recent improvements to training and architecture and discuss their merit and practical applicability (or lack thereof) for the limited compute setting.
Activation functions can have a significant impact on reducing the topological complexity of input data and, therefore, on a model's performance, and choosing a suitable activation function is an important step in neural model design. However, the choice of activation function is seldom discussed or explored in Transformer-based language models: their activation functions are chosen beforehand and then kept fixed from pre-training through fine-tuning. As a result, the inductive biases they impose on the model cannot be adjusted during this long life cycle. Moreover, subsequently developed models (e.g., RoBERTa, BART, and GPT-3) often follow up on prior work (e.g., BERT) in using the same activation function without justification. In this paper, we investigate the effectiveness of using rational activation functions (RAFs) in Transformer architectures. In contrast to conventional, predefined activation functions, RAFs can adaptively learn an optimal activation function from the input data. Our experiments show that the RAF-based Transformer (RAFT) achieves lower validation perplexity than a vanilla BERT with the GELU function. We further evaluate RAFT on downstream tasks in low- and full-data settings. Our results show that RAFT outperforms its counterpart model on most tasks and settings. For example, in the low-data regime (with 100 training examples), RAFT outperforms by 5.71 points on average on the GLUE benchmark, and by 2.05 points on SQuAD in the full-data setting. Analysis of the shapes of the learned RAFs further reveals that they vary substantially across the layers of the pre-trained model and mostly look different from conventional activation functions. RAFT opens a new research direction for analyzing and interpreting pre-trained models based on their learned activation functions.
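A rational activation function is a learnable ratio of polynomials with a pole-free denominator. The sketch below uses the common Padé-style safe parameterization with degrees m=5 and n=4; the exact degrees and initialization used by RAFT are assumptions here, not the paper's settings.

    import numpy as np

    class RationalActivation:
        """f(x) = P(x) / Q(x), with Q(x) = 1 + |b1*x + ... + bn*x^n| > 0."""
        def __init__(self, m=5, n=4, seed=0):
            rng = np.random.default_rng(seed)
            self.a = rng.normal(scale=0.1, size=m + 1)  # numerator coefficients (learned)
            self.b = rng.normal(scale=0.1, size=n)      # denominator coefficients (learned)

        def __call__(self, x):
            num = sum(a_j * x**j for j, a_j in enumerate(self.a))
            den = 1.0 + np.abs(sum(b_k * x**(k + 1) for k, b_k in enumerate(self.b)))
            return num / den

Because the coefficients are ordinary parameters, the activation's shape is trained by backpropagation along with the rest of the network, which is what allows it to differ layer by layer as the analysis above observes.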
Training and inference with large neural models is expensive. However, for many application domains, while new tasks and models arise frequently, the underlying documents being modeled remain mostly unchanged. We study how to decrease the computational cost in such settings through embedding recycling (ER): re-using activations from previous models when performing training or inference. In contrast to prior work that focuses on freezing small classification heads for fine-tuning, which often leads to notable performance drops, we propose caching an intermediate layer's output from a pre-trained model and fine-tuning the remaining layers for new tasks. We show that our method provides a 100% speedup during training and a 55-86% speedup for inference, with negligible impact on accuracy for text classification and entity recognition tasks in the scientific domain. For general-domain question answering tasks, ER offers a similar speedup with a small drop in accuracy. Finally, we identify several open challenges and future directions for ER.
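The caching pattern is the essence of the method and fits in a few lines. In the hypothetical Python sketch below, the lower layers are frozen and run once per document, their output is cached by document id, and only the upper layers run (and are fine-tuned) per task; embed and the layer callables are illustrative stand-ins, not a real model API.

    import numpy as np

    rng = np.random.default_rng(0)
    cache = {}  # doc_id -> activations from the frozen lower layers

    def embed(tokens, dim=8):
        # Hypothetical stand-in for the model's embedding lookup.
        return rng.normal(size=(len(tokens), dim))

    def encode_with_recycling(doc_id, tokens, lower_layers, upper_layers):
        if doc_id not in cache:
            h = embed(tokens)
            for layer in lower_layers:   # frozen; computed once per document
                h = layer(h)
            cache[doc_id] = h
        h = cache[doc_id]
        for layer in upper_layers:       # task-specific; fine-tuned as usual
            h = layer(h)
        return h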
Large pre-trained language models (PreLMs) are revolutionizing natural language processing across all benchmarks. However, their enormous size is prohibitive for small laboratories or for deployment on mobile devices. Approaches such as pruning and distillation reduce model size but typically retain the same model architecture. In contrast, we explore distilling PreLMs into a more efficient architecture, continual multiplication of words (CMOW), which embeds each word as a matrix and uses matrix multiplication to encode sequences. We extend the CMOW architecture and its CMOW/CBOW-Hybrid variant with a bidirectional component for more expressive power, one-shot representations for general (task-agnostic) distillation during pre-training, and a two-sequence encoding scheme that facilitates downstream tasks on sentence pairs, such as sentence similarity and natural language inference. Our matrix-based bidirectional CMOW/CBOW-Hybrid model is competitive with DistilBERT on question similarity and recognizing textual entailment, but uses only half as many parameters and is three times faster in terms of inference speed. We match or exceed the scores of ELMo, except on the sentiment analysis task SST-2 and the linguistic acceptability task CoLA. Compared to previous cross-architecture distillation approaches, however, we demonstrate a doubling of the score on detecting linguistic acceptability. This shows that matrix-based embeddings can be used to distill large PreLMs into competitive models, and it motivates further research in this direction.
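The core encoding scheme is easy to state: each word is a small square matrix, and a sequence is the ordered product of those matrices, which makes the representation order-sensitive (unlike CBOW's sum). A minimal NumPy sketch follows, with near-identity initialization to keep long products numerically stable; the bidirectional variant described above would additionally multiply in reverse order and concatenate both flattened products.

    import numpy as np

    rng = np.random.default_rng(0)

    def matrix_embeddings(vocab, d=8):
        # Initialize near the identity so long products neither vanish nor explode.
        return {w: np.eye(d) + 0.01 * rng.normal(size=(d, d)) for w in vocab}

    def cmow_encode(tokens, emb):
        d = next(iter(emb.values())).shape[0]
        M = np.eye(d)
        for t in tokens:
            M = M @ emb[t]     # continual multiplication of words
        return M.flatten()     # fixed-size sequence representation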
At the core of self-supervised learning for pre-training language models are the design of the pre-training tasks and appropriate data augmentation. Most data augmentations in language models are context-independent. A seminal contextualized augmentation was recently proposed in ELECTRA, which achieves state-of-the-art performance by introducing an auxiliary generation network (the generator) to produce contextualized data augmentation for training the main discrimination network (the discriminator). This design, however, introduces the extra computational cost of the generator and requires tuning the relative capability of the generator against the discriminator. In this paper, we propose a self-augmentation strategy (SAS) in which a single network is used both for regular pre-training and for contextualized data augmentation for training in later epochs. Essentially, the strategy eliminates the separate generator and uses a single network to jointly perform two pre-training tasks with an MLM (masked language modeling) head and an RTD (replaced token detection) head. It avoids the challenge of finding an appropriately sized generator, which is critical to performance as demonstrated in ELECTRA and its subsequent variant models. In addition, SAS is a general strategy that can be seamlessly combined with many new techniques developed recently or in the future, such as the disentangled attention mechanism of DeBERTa. Our experiments show that SAS is able to outperform ELECTRA and other state-of-the-art models on GLUE tasks with similar or less computational cost.
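Architecturally, the strategy amounts to one shared encoder with two heads, so the network that produces the contextualized replacements (via its own MLM predictions from earlier epochs) is the same one that learns to detect them. A schematic PyTorch sketch with placeholder sizes; treat the module layout as an assumption, not the paper's exact configuration.

    import torch.nn as nn

    class SASModel(nn.Module):
        def __init__(self, vocab_size, dim=256, n_layers=4, n_heads=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.mlm_head = nn.Linear(dim, vocab_size)  # recovers original tokens
            self.rtd_head = nn.Linear(dim, 1)           # flags replaced tokens

        def forward(self, ids):
            h = self.encoder(self.embed(ids))
            return self.mlm_head(h), self.rtd_head(h).squeeze(-1)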
For most natural language processing tasks, the dominant practice is to fine-tune large pre-trained Transformer models (e.g., BERT) using smaller downstream datasets. Despite the success of this approach, it remains unclear to what extent these gains are attributable to the massive background corpora employed for pre-training versus to the pre-training objectives themselves. This paper introduces a large-scale study of self-pretraining, in which the same (downstream) training data is used for both pre-training and fine-tuning. In experiments addressing both ELECTRA and RoBERTa models and 10 distinct downstream datasets, we observe that self-pretraining rivals standard pretraining on the BookWiki corpus (despite using around 10x-500x less data), outperforming it on 7 and 5 datasets, respectively. Surprisingly, these task-specific pre-trained models often perform well on other tasks, including the GLUE benchmark. Our results suggest that in many scenarios, performance gains attributable to pre-training are driven primarily by the pre-training objective itself and are not always attributable to the incorporation of massive datasets. These findings are especially important given concerns about intellectual property and offensive content in web-scale pre-training data.
Transformer-based NLP models are trained using hundreds of millions or even billions of parameters, limiting their applicability in computationally constrained environments. While the number of parameters generally correlates with performance, it is not clear whether the entire network is required for a downstream task. Motivated by recent work on pruning and distilling pre-trained models, we explore strategies to drop layers in pre-trained models and observe the effect of pruning on downstream GLUE tasks. We are able to prune BERT, RoBERTa and XLNet models up to 40% while maintaining up to 98% of their original performance. Additionally, we show that our pruned models are on par with models built using knowledge distillation, both in terms of size and performance. Our experiments yield interesting observations, such as: (i) the lower layers are most critical for maintaining downstream task performance, (ii) some tasks, such as paraphrase detection and sentence similarity, are more robust to layer dropping, and (iii) models trained using different objective functions exhibit different learning patterns, and with that, different layer-dropping behaviors.
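Since the lower layers matter most, dropping top layers before fine-tuning is also the simplest strategy to implement. Below is a sketch for a Hugging Face-style BERT classification model; the attribute path shown is the usual one for that library, but verify it against the model class you actually use.

    def drop_top_layers(model, n_drop):
        # Slicing an nn.ModuleList returns a new nn.ModuleList.
        model.bert.encoder.layer = model.bert.encoder.layer[:-n_drop]  # e.g. 12 -> 8
        model.config.num_hidden_layers -= n_drop
        return model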
Transformer models are permutation equivariant. To supply the order and type information of the input tokens, position and segment embeddings are usually added to the input. Recent works have proposed variations of positional encodings, with relative position encodings achieving better performance. Our analysis shows that the gain actually comes from moving positional information from the input to the attention layers. Motivated by this, we introduce Decoupled Positional Attention for Transformers (DIET), a simple yet effective mechanism to encode position and segment information into the Transformer models. The proposed method has faster training and inference time, while achieving competitive performance on the GLUE, XTREME and WMT benchmarks. We further generalize our method to long-range Transformers and show performance gains.
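The core idea is that position information enters as an additive term on the attention logits instead of being added to the input embeddings. Below is a NumPy sketch using a simple learned relative-position bias as an illustrative stand-in for the paper's specific parameterization.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention_with_position_bias(X, Wq, Wk, Wv, rel_bias, k=4):
        """X: (n, d) position-free inputs; rel_bias: (2k + 1,) learned scalars."""
        n, d = X.shape
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        idx = np.arange(n)
        delta = np.clip(idx[None, :] - idx[:, None], -k, k) + k  # (n, n) in [0, 2k]
        logits = Q @ K.T / np.sqrt(d) + rel_bias[delta]          # position enters here
        return softmax(logits) @ V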