We present a simple yet effective method for training a named entity recognition (NER) model that operates on business phone call transcripts, which contain noise due to the nature of spoken conversation and artifacts of automatic speech recognition. We first fine-tune LUKE, a state-of-the-art NER model, on a limited number of transcripts using weakly labeled data and a small amount of human-annotated data. The model achieves high accuracy while also meeting the practical constraints for inclusion in a commercial telephony product: real-time performance when deployed on cost-effective CPUs rather than GPUs.
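The abstract gives no implementation detail beyond the setup, so the following is a minimal, hypothetical sketch of fine-tuning a token-classification NER model on a mix of weakly labeled and human-annotated transcript examples; the encoder checkpoint, tag set, and the down-weighting of weak examples are illustrative assumptions (the paper itself fine-tunes LUKE).

```python
# Hypothetical sketch: fine-tune a token-classification NER model on a mix of
# weakly labeled and human-annotated call transcripts. The encoder, label set,
# and the down-weighting of weak examples are assumptions, not the authors'
# implementation (a generic BERT encoder stands in for LUKE here).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]  # assumed tag set
tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(LABELS))
opt = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss_fn = torch.nn.CrossEntropyLoss(ignore_index=-100)

def step(words, word_labels, weight):
    """One update on a single example; weak examples get a smaller loss weight."""
    enc = tok(words, is_split_into_words=True, return_tensors="pt", truncation=True)
    # Align word-level labels to sub-word tokens; special tokens are ignored.
    aligned = [-100 if i is None else word_labels[i] for i in enc.word_ids()]
    labels = torch.tensor([aligned])
    logits = model(**enc).logits
    loss = weight * loss_fn(logits.view(-1, len(LABELS)), labels.view(-1))
    loss.backward()
    opt.step()
    opt.zero_grad()
    return loss.item()

# Toy data: (tokens, word-level label ids), loss weight.
gold = ([["john", "called", "acme", "corp"], [1, 0, 3, 4]], 1.0)
weak = ([["call", "from", "jane", "doe"], [0, 0, 1, 2]], 0.3)  # weak labels: lower weight
for (words, labs), w in [gold, weak]:
    print(step(words, labs, w))
```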
Distilling state-of-the-art transformer models into lightweight student models is an effective way to reduce computation cost at inference time. The student models are typically compact transformers with fewer parameters, while expensive operations such as self-attention persist. Therefore, the improved inference speed may still be unsatisfactory for real-time or high-volume use cases. In this paper, we aim to further push the limit of inference speed by distilling teacher models into bigger, sparser student models: bigger in that they scale up to billions of parameters; sparser in that most of the model parameters are n-gram embeddings. Our experiments on six single-sentence text classification tasks show that these student models retain, on average, 97% of the performance of a RoBERTa-Large teacher, while achieving up to 600x speed-up at inference time on both GPUs and CPUs. Further investigation shows that our pipeline is also helpful for sentence-pair classification tasks and in domain generalization settings.
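As a rough illustration of the kind of student described (most parameters in n-gram embeddings, inference reduced to an embedding lookup plus a small head), here is a hedged PyTorch sketch with an assumed hashing scheme, vocabulary size, and temperature-scaled KL distillation objective; it is not the authors' implementation.

```python
# Illustrative "bigger but sparser" student: almost all parameters live in a
# hashed n-gram embedding table, and inference is an embedding lookup plus a
# small linear head. Bucket count, hashing, and the KL objective are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_BUCKETS = 1_000_000   # scale this up to reach billions of parameters
DIM, NUM_CLASSES, T = 256, 2, 2.0

class NGramStudent(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(NUM_BUCKETS, DIM, mode="mean")
        self.head = nn.Linear(DIM, NUM_CLASSES)

    def forward(self, ngram_ids, offsets):
        return self.head(self.emb(ngram_ids, offsets))

def hashed_ngrams(text, n_max=3):
    """Map a text to hashed uni/bi/tri-gram ids (toy hashing for illustration)."""
    toks = text.lower().split()
    grams = [" ".join(toks[i:i + n]) for n in range(1, n_max + 1)
             for i in range(len(toks) - n + 1)]
    return torch.tensor([hash(g) % NUM_BUCKETS for g in grams])

student = NGramStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

# One distillation step on a toy batch; in practice teacher_logits would come
# from a RoBERTa-Large teacher run offline.
texts = ["the movie was great", "utterly boring plot"]
teacher_logits = torch.tensor([[0.2, 2.1], [1.9, -0.3]])   # placeholder values
ids = torch.cat([hashed_ngrams(t) for t in texts])
offsets = torch.tensor([0, len(hashed_ngrams(texts[0]))])
loss = F.kl_div(F.log_softmax(student(ids, offsets) / T, dim=-1),
                F.softmax(teacher_logits / T, dim=-1),
                reduction="batchmean") * T * T
loss.backward(); opt.step()
print(float(loss))
```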
We perform knowledge distillation (KD) benchmarks from task-specific BERT-base teacher models to various student models: BiLSTM, CNN, BERT-Tiny, BERT-Mini, and BERT-Small. Our experiments involve 12 datasets grouped into two tasks: text classification and sequence labeling in the Indonesian language. We also compare various aspects of distillation, including the use of word embeddings and unlabeled data augmentation. Our experiments show that, despite the rising popularity of transformer-based models, BiLSTM and CNN student models provide the best trade-off between performance and computational resources (CPU, RAM, and storage) compared to pruned BERT models. We further propose some quick wins for producing small NLP models through an efficient KD training mechanism involving simple choices of loss function, word embeddings, and unlabeled data preparation.
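For reference, a minimal sketch of the standard logit-distillation objective that such benchmarks typically build on, combining a temperature-scaled KL term against the teacher with cross-entropy on the gold labels; the temperature and mixing weight are assumptions, not values from the paper.

```python
# Minimal sketch of a standard KD objective: temperature-scaled KL against the
# teacher's soft predictions plus cross-entropy on gold labels.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, gold_labels, T=2.0, alpha=0.5):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, gold_labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a 3-class batch of two examples.
s = torch.randn(2, 3, requires_grad=True)
t = torch.randn(2, 3)
y = torch.tensor([0, 2])
print(kd_loss(s, t, y))
```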
This paper presents a novel knowledge distillation method for dialogue sequence labeling. Dialogue sequence labeling is a supervised learning task that estimates a label for each utterance in a target dialogue document, and it is useful for many applications such as dialogue act estimation. Accurate labeling is often achieved by hierarchically structured large models composed of utterance-level and dialogue-level networks, which capture the contexts within an utterance and between utterances, respectively. However, owing to their model size, such models cannot be deployed on resource-constrained devices. To overcome this difficulty, we focus on knowledge distillation, which trains a small model by distilling the knowledge of a large, high-performing teacher model. Our key idea is to distill the knowledge while keeping the complex contexts captured by the teacher model. To this end, the proposed method, hierarchical knowledge distillation, trains the small model by distilling the knowledge of the utterance-level and dialogue-level contexts learned by the teacher model, training the student to mimic the teacher model's output at each level. Experiments on dialogue act estimation and call scene segmentation demonstrate the effectiveness of the proposed method.
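A rough sketch of the hierarchical idea under stated assumptions (matching hidden sizes, MSE for context matching, KL for label distributions); the paper's exact losses and architecture may differ.

```python
# Sketch (assumptions, not the paper's code): the student mimics the teacher at
# both levels, i.e. utterance-level context vectors and per-utterance label
# distributions, in addition to the usual supervised loss.
import torch
import torch.nn.functional as F

def hierarchical_kd_loss(stu_utt, tea_utt, stu_logits, tea_logits,
                         labels, T=2.0, w_utt=1.0, w_dial=1.0, w_ce=1.0):
    """stu_utt/tea_utt: [num_utts, dim] utterance-level representations
       (assumes matching sizes; otherwise add a linear projection).
       stu_logits/tea_logits: [num_utts, num_labels] per-utterance scores."""
    utt_term = F.mse_loss(stu_utt, tea_utt)                      # utterance-level context
    dial_term = F.kl_div(F.log_softmax(stu_logits / T, dim=-1),  # label distributions
                         F.softmax(tea_logits / T, dim=-1),
                         reduction="batchmean") * T * T
    ce = F.cross_entropy(stu_logits, labels)
    return w_utt * utt_term + w_dial * dial_term + w_ce * ce

# Toy dialogue with 4 utterances, 3 dialogue-act labels, 16-dim contexts.
stu_h, tea_h = torch.randn(4, 16, requires_grad=True), torch.randn(4, 16)
stu_z, tea_z = torch.randn(4, 3, requires_grad=True), torch.randn(4, 3)
print(hierarchical_kd_loss(stu_h, tea_h, stu_z, tea_z, torch.tensor([0, 1, 2, 1])))
```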
In this modern era of technology, with e-commerce developing at a rapid pace, it is very important to understand customer requirements and details from a business conversation; this is crucial for customer retention and satisfaction. Extracting key insights from these conversations is essential when it comes to developing the product or resolving the customer's issue. Understanding customer feedback, responses, and important details of the product is essential, and this is done using named entity recognition (NER). To extract the entities, we convert the conversations to text using an optimal speech-to-text model. The model is a two-stage network in which the conversation is first converted to text, and then suitable entities are extracted with robust techniques using a NER BERT transformer model. This aids in enriching the customer experience when an issue is faced. When customers face a problem, they call and register a complaint; the model then extracts from the conversation the key features needed to look into the problem, such as the order number and the exact issue. All of these are extracted directly from the conversation, which reduces the effort of going through the conversation again.
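A hedged sketch of such a two-stage pipeline using Hugging Face pipelines; the checkpoints below are public stand-ins, not necessarily the models used in this work.

```python
# Illustrative two-stage pipeline in the spirit described: speech-to-text first,
# then BERT-based NER over the transcript. Checkpoints are public stand-ins.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")
ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

def extract_entities(audio_path):
    transcript = asr(audio_path)["text"]
    entities = ner(transcript)
    return transcript, entities

# Example: pull order-related details out of a support-call recording.
# transcript, entities = extract_entities("support_call.wav")
# for e in entities:
#     print(e["entity_group"], e["word"], round(e["score"], 3))
```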
Spoken language understanding (SLU) tasks involve mapping from speech audio signals to semantic labels. Given the complexity of such tasks, good performance might be expected to require large labeled datasets, which are difficult to collect for each new task and domain. However, recent advances in self-supervised speech representations have made it feasible to consider learning SLU models with limited labeled data. In this work, we focus on low-resource spoken named entity recognition (NER) and address the question: beyond self-supervised pre-training, how can we use external speech and/or text data that are not annotated for the task? We draw on a variety of approaches, including self-training, knowledge distillation, and transfer learning, and consider their applicability to both end-to-end models and pipeline approaches (speech recognition followed by a text model). We find that several of these approaches improve performance in resource-constrained settings beyond the benefits of pre-trained representations alone. Compared to prior work, we find improved F1 scores of up to 16%. While the best baseline model is a pipeline approach, the best performance when using external data is ultimately achieved by an end-to-end model. We provide detailed comparisons and analyses, showing, for example, that end-to-end models are able to focus on the words that are more relevant to NER.
We introduce fastcoref, a Python package for fast, accurate, and easy-to-use English coreference resolution. The package is pip-installable and offers two modes: an accurate mode based on the LingMess architecture, providing state-of-the-art coreference accuracy, and a substantially faster model, F-coref, which is the focus of this work. F-coref allows processing 2.8K OntoNotes documents in 25 seconds on a V100 GPU (compared to 6 minutes for the LingMess model and 12 minutes for the popular AllenNLP coreference model) with only a modest drop in accuracy. The fast speed is achieved by distilling a compact model from the LingMess model, together with an efficient batching implementation using a technique we call leftover batching. https://github.com/shon-otmazgin/fastcoref
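A short usage sketch based on the project README; the exact class names and arguments are assumed here and worth double-checking against the repository.

```python
# Hedged usage sketch of the fastcoref package (pip install fastcoref);
# see https://github.com/shon-otmazgin/fastcoref for the authoritative API.
from fastcoref import FCoref        # fast F-coref model; an accurate LingMess mode also exists

model = FCoref(device="cpu")        # or device="cuda:0" for the reported GPU speed
preds = model.predict(
    texts=["Alice called her bank because she lost her card."]
)
print(preds[0].get_clusters())      # coreference clusters as mention strings
```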
Zero-shot cross-lingual named entity recognition (NER) aims at transferring knowledge from annotated and rich-resource data in source languages to unlabeled and lean-resource data in target languages. Existing mainstream methods based on the teacher-student distillation framework ignore the rich and complementary information lying in the intermediate layers of pre-trained language models, and domain-invariant information is easily lost during transfer. In this study, a mixture of short-channel distillers (MSD) method is proposed to fully interact the rich hierarchical information in the teacher model and to transfer knowledge to the student model sufficiently and efficiently. Concretely, a multi-channel distillation framework is designed for sufficient information transfer by aggregating multiple distillers as a mixture. Besides, an unsupervised method adopting parallel domain adaptation is proposed to shorten the channels between the teacher and student models to preserve domain-invariant features. Experiments on four datasets across nine languages demonstrate that the proposed method achieves new state-of-the-art performance on zero-shot cross-lingual NER and shows great generalization and compatibility across languages and fields.
We present results from a large-scale experiment on pretraining encoders with parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M to 170M parameters, and their application to the natural language understanding (NLU) component of a virtual assistant system. Although we train using 70% spoken-form data, our teacher models perform comparably to XLM-R and mT5 when evaluated on the written-form cross-lingual natural language inference (XNLI) corpus. We perform a second stage of pretraining on our teacher models using in-domain data from our system, improving intent classification by 3.86% relative and slot filling by 7.01% relative. We find that even a 170M-parameter model distilled from our two-stage teacher model outperforms a 2.3B-parameter teacher trained only on public data, with 2.88% better intent classification and a 7.69% better slot-filling error rate, underscoring the importance of in-domain data for pretraining. When evaluated offline on labeled NLU data, our 17M-parameter, stage-2 distilled model outperforms both XLM-R Base (85M params) and DistilBERT (42M params) by 4.23% to 6.14%. Finally, we present results from a full virtual assistant experimentation platform, where we find that models trained with our pretraining and distillation pipeline outperform models distilled from an 85M-parameter teacher by 3.74%-4.91% on an automatic measure of full-system user dissatisfaction.
We present DualNER, a simple and effective framework to make full use of both annotated source language corpus and unlabeled target language text for zero-shot cross-lingual named entity recognition (NER). In particular, we combine two complementary learning paradigms of NER, i.e., sequence labeling and span prediction, into a unified multi-task framework. After obtaining a sufficient NER model trained on the source data, we further train it on the target data in a {\it dual-teaching} manner, in which the pseudo-labels for one task are constructed from the prediction of the other task. Moreover, based on the span prediction, an entity-aware regularization is proposed to enhance the intrinsic cross-lingual alignment between the same entities in different languages. Experiments and analysis demonstrate the effectiveness of our DualNER. Code is available at https://github.com/lemon0830/dualNER.
State-of-the-art automatic speech recognition (ASR) systems are trained with tens of thousands of hours of labeled speech data. Human transcription is expensive and time-consuming, and factors such as the quality and consistency of the transcriptions can greatly affect the performance of ASR models trained on these data. In this paper, we show that we can train a strong teacher model to produce high-quality pseudo labels by leveraging recent self-supervised and semi-supervised learning techniques. Specifically, we use joint unsupervised and supervised training together with iterative noisy student-teacher training to train a 0.6-billion-parameter bidirectional teacher model. This model achieves a 4.0% word error rate (WER) on a voice search task, 11.1% relatively better than the baseline. We further show that by using this strong teacher model to generate high-quality pseudo labels for training, a streaming model can achieve a 13.6% relative WER reduction (5.9% to 5.1%) compared to using human labels.
Large pre-trained language models (PreLMs) are revolutionizing natural language processing across all benchmarks. However, their enormous size is prohibitive for small labs or for deployment on mobile devices. Approaches such as pruning and distillation reduce the model size but typically retain the same model architecture. In contrast, we explore distilling PreLMs into a more efficient architecture, the continual multiplication of words (CMOW), which embeds each word as a matrix and uses matrix multiplication to encode sequences. We extend the CMOW architecture and its CMOW/CBOW-Hybrid variant with a bidirectional component for more expressive power, per-token representations for a general (task-agnostic) distillation during pre-training, and a two-sequence encoding scheme that facilitates downstream tasks on sentence pairs, such as sentence similarity and natural language inference. Our matrix-based bidirectional CMOW/CBOW-Hybrid model is competitive with DistilBERT on question similarity and recognizing textual entailment, while using only half the number of parameters and being three times faster at inference. We match or exceed the scores of ELMo on all tasks except the sentiment analysis task SST-2 and the linguistic acceptability task CoLA. Moreover, compared to previous cross-architecture distillation approaches, we demonstrate a doubling of the score for detecting linguistic acceptability. This shows that matrix-based embeddings can be used to distill large pre-trained language models into competitive models and motivates further research in this direction.
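To make the core CMOW idea concrete, here is a toy sketch, not the authors' code: each token is embedded as a small d x d matrix initialized near the identity, and a sequence is encoded as the ordered product of those matrices.

```python
# Toy sketch of the CMOW encoding idea: word-as-matrix embeddings combined by
# ordered matrix multiplication. Dimensions and vocabulary are illustrative.
import torch
import torch.nn as nn

class CMOWEncoder(nn.Module):
    def __init__(self, vocab_size=1000, d=8):
        super().__init__()
        self.d = d
        # Near-identity initialization keeps long matrix products stable.
        eye = torch.eye(d).flatten()
        init = eye.repeat(vocab_size, 1) + 0.01 * torch.randn(vocab_size, d * d)
        self.emb = nn.Embedding(vocab_size, d * d)
        self.emb.weight.data.copy_(init)

    def forward(self, token_ids):                 # token_ids: [batch, seq_len]
        b, n = token_ids.shape
        mats = self.emb(token_ids).view(b, n, self.d, self.d)
        out = mats[:, 0]
        for i in range(1, n):                      # ordered matrix product
            out = out @ mats[:, i]
        return out.flatten(1)                      # [batch, d*d] sequence embedding

enc = CMOWEncoder()
ids = torch.randint(0, 1000, (2, 5))               # two toy sequences of 5 tokens
print(enc(ids).shape)                              # torch.Size([2, 64])
```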
Distantly-Supervised Named Entity Recognition (DS-NER) effectively alleviates the data scarcity problem in NER by automatically generating training samples. Unfortunately, the distant supervision may induce noisy labels, thus undermining the robustness of the learned models and restricting the practical application. To relieve this problem, recent works adopt self-training teacher-student frameworks to gradually refine the training labels and improve the generalization ability of NER models. However, we argue that the performance of the current self-training frameworks for DS-NER is severely underestimated by their plain designs, including both inadequate student learning and coarse-grained teacher updating. Therefore, in this paper, we make the first attempt to alleviate these issues by proposing: (1) adaptive teacher learning comprised of joint training of two teacher-student networks and considering both consistent and inconsistent predictions between two teachers, thus promoting comprehensive student learning. (2) fine-grained student ensemble that updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise. To verify the effectiveness of our proposed method, we conduct experiments on four DS-NER datasets. The experimental results demonstrate that our method significantly surpasses previous SOTA methods.
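The fine-grained teacher update can be pictured as a per-parameter temporal moving average of the student; the sketch below illustrates that mechanic with an assumed momentum value, leaving out the fragment-wise scheduling details of the paper.

```python
# Sketch of a temporal-moving-average teacher update: each teacher parameter is
# refreshed from the corresponding student parameter. Momentum is an assumption.
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

# Toy usage with two identically shaped models; in practice this runs after
# each student training step.
teacher = torch.nn.Linear(4, 2)
student = torch.nn.Linear(4, 2)
for _ in range(10):
    ema_update(teacher, student)
```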
As transfer learning from large-scale pre-trained language models has become ubiquitous in natural language processing, running these models in computationally constrained environments remains a challenging problem. Several solutions, including knowledge distillation, network quantization, and network pruning, have been proposed; however, these approaches focus mostly on English, thereby widening the gap for low-resource languages. In this work, we introduce three lightweight and fast versions of distilled BERT models for Romanian: Distil-BERT-base-ro, Distil-RoBERT-base, and DistilMulti-BERT-base-ro. The first two models are obtained by individually distilling the knowledge of the two base versions of Romanian BERT available in the literature, while the last one is obtained by distilling their ensemble. To our knowledge, this is the first attempt to create publicly available Romanian distilled BERT models, which we thoroughly evaluate on five tasks: part-of-speech tagging, named entity recognition, sentiment analysis, semantic textual similarity, and dialect identification. Experimental results on these benchmarks show that our three distilled models maintain most of the accuracy of their teachers while being twice as fast on a GPU and ~35% smaller. In addition, we further test the similarity between our students and their teachers by measuring their label and probability loyalty, together with regression loyalty, a new metric introduced in this work.
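One plausible reading of the label and probability loyalty measurements is sketched below (agreement rate of predicted labels, and a similarity derived from the divergence between predictive distributions); the exact definitions, including the new regression loyalty metric, are given in the paper.

```python
# Hedged sketch of loyalty-style teacher/student similarity measures: label
# loyalty as prediction agreement, probability loyalty as one minus the square
# root of the Jensen-Shannon divergence between predictive distributions.
import torch
import torch.nn.functional as F

def label_loyalty(teacher_logits, student_logits):
    return (teacher_logits.argmax(-1) == student_logits.argmax(-1)).float().mean()

def probability_loyalty(teacher_logits, student_logits):
    p = F.softmax(teacher_logits, dim=-1)
    q = F.softmax(student_logits, dim=-1)
    m = 0.5 * (p + q)
    js = 0.5 * (p * (p / m).log()).sum(-1) + 0.5 * (q * (q / m).log()).sum(-1)
    return (1.0 - js.clamp(min=0).sqrt()).mean()   # higher = closer distributions

t = torch.randn(8, 3)
s = t + 0.1 * torch.randn(8, 3)
print(float(label_loyalty(t, s)), float(probability_loyalty(t, s)))
```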
Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks. On the other hand, many existing pre-trained models are resource-intensive and computationally heavy owing to factors such as embedding size, hidden dimension, and number of layers. The natural language processing (NLP) community has developed many strategies to compress these models, using techniques such as pruning, quantization, and knowledge distillation, resulting in models that are faster, smaller, and subsequently easier to use. Likewise, in this paper we introduce six lightweight models, namely BioDistilBERT, BioTinyBERT, BioMobileBERT, DistilBioBERT, TinyBioBERT, and CompactBioBERT, obtained through knowledge distillation and training on the PubMed dataset with the masked language modelling (MLM) objective. We evaluate all of our models on three biomedical tasks and compare them with BioBERT-v1.1, aiming to create efficient lightweight models that perform on par with their larger counterparts. All models will be publicly available on our Hugging Face profile at https://huggingface.co/nlpie, and the code used to run the experiments will be available at https://github.com/nlpie-research/compact-compact-biomedical-transformers.
Current leading mispronunciation detection and diagnosis (MDD) systems achieve promising performance via end-to-end phoneme recognition. One challenge of such end-to-end solutions is the scarcity of human-annotated phonemes for natural L2 speech. In this work, we leverage unlabeled L2 speech via a pseudo-labeling (PL) procedure and extend the fine-tuning approach based on pre-trained self-supervised learning (SSL) models. Specifically, we use wav2vec 2.0 as our SSL model and fine-tune it using the original labeled L2 speech samples plus the created pseudo-labeled L2 speech samples. Our pseudo labels are dynamic and are produced by an ensemble of the online model, which ensures that our model is robust to pseudo-label noise. We show that fine-tuning with pseudo labels achieves a 5.35% phoneme error rate reduction and a 2.48% MDD F1 score improvement over a baseline trained on labeled samples only. The proposed PL method also outperforms conventional offline PL methods. Compared with state-of-the-art MDD systems, our MDD solution produces a more accurate and consistent phonetic error diagnosis. In addition, we conduct an open test on a separate UTD-4Accents dataset, where our system's recognition outputs show a strong correlation with human perception in terms of accentedness and intelligibility.
Recently, masked prediction pre-training has made remarkable progress in self-supervised learning (SSL) for speech recognition. It usually requires a codebook obtained in an unsupervised way, which makes it less accurate and difficult to interpret. We propose two supervision-guided codebook generation approaches to improve automatic speech recognition (ASR) performance as well as pre-training efficiency: either decoding with a hybrid ASR system to generate phoneme-level alignments (named PBERT), or performing clustering on supervised speech features extracted from an end-to-end CTC model (named CTC clustering). Both the hybrid and CTC models are trained on the same small amount of labeled speech as used in fine-tuning. Experiments show significant advantages of our methods over various SSL and self-training baselines, with up to 17.0% relative WER reduction. Our pre-trained models also show good transferability to a non-ASR speech task.
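A rough sketch of the CTC-clustering variant as described: frame-level features from a supervised model are clustered with k-means, and the cluster ids then serve as masked-prediction targets. Feature extraction is replaced with random data here, and the number of codes is an assumption.

```python
# Sketch (not the paper's pipeline): build a codebook by running k-means over
# frame-level features from a supervised acoustic model, then use the cluster
# ids as discrete pseudo-targets for masked prediction pre-training.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(frame_features, num_codes=500):
    """frame_features: [num_frames, feat_dim] array pooled over the labeled set."""
    return KMeans(n_clusters=num_codes, n_init=10, random_state=0).fit(frame_features)

def frame_targets(km, utterance_features):
    """Discrete pseudo-targets for one utterance's frames."""
    return km.predict(utterance_features)

# Toy stand-in for features produced by a small supervised CTC model.
feats = np.random.randn(10_000, 256).astype(np.float32)
km = build_codebook(feats, num_codes=50)
print(frame_targets(km, feats[:20]))
```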
Large-scale speech self-supervised learning (SSL) has emerged as a major area of speech processing; however, the computational cost caused by its sheer scale is a high barrier for academia. Moreover, existing distillation techniques for speech SSL models compress the model by reducing the number of layers, which induces performance degradation on linguistic pattern recognition tasks such as phoneme recognition (PR). In this paper, we propose FitHuBERT, which makes the dimensions thinner across nearly all model components and is deeper in layers compared to prior speech SSL distillation work. In addition, we employ time reduction to speed up inference and propose a hint-based distillation method to reduce the performance degradation. Compared to HuBERT, our method reduces the model size to 23.8% and the inference time to 35.9%. Furthermore, we achieve a 12.1% word error rate and a 13.3% phoneme error rate on the SUPERB benchmark, which is superior to prior work.
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo-labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher. (This work was conducted at Google.)
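A schematic of one Noisy Student round under toy assumptions (tiny MLPs, Gaussian input noise standing in for RandAugment), illustrating the procedure rather than the paper's EfficientNet/ImageNet setup.

```python
# Schematic of one Noisy Student round: the (un-noised) teacher pseudo-labels
# unlabeled data, and an equal-or-larger student is trained on labeled plus
# pseudo-labeled data with noise (dropout and input jitter here).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(width):
    return nn.Sequential(nn.Linear(32, width), nn.ReLU(),
                         nn.Dropout(0.5),                  # model noise
                         nn.Linear(width, 10))

def noisy_student_round(teacher, labeled_x, labeled_y, unlabeled_x, width=128):
    teacher.eval()
    with torch.no_grad():                                  # teacher is NOT noised
        pseudo_y = teacher(unlabeled_x).argmax(dim=-1)
    student = make_model(width)                            # equal-or-larger student
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.cat([labeled_x, unlabeled_x])
    y = torch.cat([labeled_y, pseudo_y])
    student.train()
    for _ in range(100):
        noisy_x = x + 0.1 * torch.randn_like(x)            # input noise (RandAugment analogue)
        loss = F.cross_entropy(student(noisy_x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return student                                         # becomes the next teacher

# Toy data; in practice the student is put back as the teacher and the round repeats.
lx, ly = torch.randn(64, 32), torch.randint(0, 10, (64,))
ux = torch.randn(256, 32)
teacher = make_model(64)
student = noisy_student_round(teacher, lx, ly, ux, width=128)
```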
Real-world tasks are largely composed of multiple models, each performing a sub-task in a larger chain of tasks, i.e., using the output from a model as input for another model in a multi-model pipeline. A model like MATra performs the task of Crosslingual Transliteration in two stages, using English as an intermediate transliteration target when transliterating between two Indic languages. We propose a novel distillation technique, EPIK, that condenses two-stage pipelines for hierarchical tasks into a single end-to-end model without compromising performance. This method can create end-to-end models for tasks without needing a dedicated end-to-end dataset, solving the data scarcity problem. The EPIK model has been distilled from the MATra model using this technique of knowledge distillation. The MATra model can perform crosslingual transliteration between 5 languages - English, Hindi, Tamil, Kannada and Bengali. The EPIK model executes the task of transliteration without any intermediate English output while retaining the performance and accuracy of the MATra model. The EPIK model can perform transliteration with an average CER score of 0.015 and average phonetic accuracy of 92.1%. In addition, the average time for execution has reduced by 54.3% as compared to the teacher model and has a similarity score of 97.5% with the teacher encoder. In a few cases, the EPIK model (student model) can outperform the MATra model (teacher model) even though it has been distilled from the MATra model.