Prior work has shown that it is possible to expand pretrained Masked Language Models (MLMs) to new languages by learning a new set of embeddings, while keeping the transformer body frozen. Despite learning a small subset of parameters, this approach is not compute-efficient, as training the new embeddings requires a full forward and backward pass over the entire model. In this work, we propose mini-model adaptation, a compute-efficient alternative that builds a shallow mini-model from a fraction of a large model's parameters. New language-specific embeddings can then be efficiently trained over the mini-model, and plugged into the aligned large model for rapid cross-lingual transfer. We explore two approaches to learn mini-models: MiniJoint, which jointly pretrains the primary model and the mini-model using a single transformer with a secondary MLM head at a middle layer; and MiniPost, where we start from a regular pretrained model and build a mini-model by extracting and freezing a few layers and learning a small number of parameters on top. Experiments on XNLI, MLQA and PAWS-X show that mini-model adaptation matches the performance of the standard approach using up to 2.4x less compute.
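As a rough illustration of the MiniJoint setup described above, the following sketch (not the authors' code; the depth, dimensions, and names such as JointMLMEncoder and MID_LAYER are assumptions) attaches a secondary MLM head to a middle layer of a single transformer so that the shallow prefix can later serve as the mini-model.

    import torch
    import torch.nn as nn

    VOCAB, DIM, HEADS, LAYERS, MID_LAYER = 32000, 768, 12, 12, 4

    class JointMLMEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            self.layers = nn.ModuleList(
                [nn.TransformerEncoderLayer(DIM, nhead=HEADS, batch_first=True)
                 for _ in range(LAYERS)])
            self.head_mid = nn.Linear(DIM, VOCAB)  # secondary MLM head at the middle layer
            self.head_top = nn.Linear(DIM, VOCAB)  # primary MLM head at the final layer

        def forward(self, token_ids):
            h = self.embed(token_ids)
            mid_logits = None
            for i, layer in enumerate(self.layers):
                h = layer(h)
                if i + 1 == MID_LAYER:             # middle-layer output feeds the mini-model head
                    mid_logits = self.head_mid(h)
            return mid_logits, self.head_top(h)

Under this sketch, adapting to a new language would mean training only a fresh embedding matrix against the mini-model loss (the first MID_LAYER layers plus head_mid), then plugging those embeddings into the full-depth model.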
While prior work has established that the use of parallel data is conducive to cross-lingual learning, it is unclear whether the improvements come from the data itself, or whether it is the modeling of parallel interactions that matters. Exploring this, we examine the usage of unsupervised machine translation to generate synthetic parallel data, and compare it to supervised machine translation and gold parallel data. We find that even model-generated parallel data can be useful for downstream tasks, in both a general setting (continued pretraining) and a task-specific setting (translate-train), although our best results are still obtained using real parallel data. Our findings suggest that existing multilingual models do not exploit the full potential of monolingual data, and prompt the community to reconsider the traditional categorization of cross-lingual learning approaches.
Scaling up language models has led to unprecedented performance gains, but little is understood about how the training dynamics change as models get larger. How do language models of different sizes learn during pre-training? Why do larger language models demonstrate more desirable behaviors? In this paper, we analyze the intermediate training checkpoints of differently sized OPT models (Zhang et al., 2022), ranging from 125M to 175B parameters, on next-token prediction, sequence-level generation, and downstream tasks. We find that 1) at a given perplexity and independent of model size, a similar subset of training tokens sees the most significant reduction in loss, with the rest stagnating or showing double-descent behavior; 2) early in training, all models learn to reduce the perplexity of grammatical sequences that contain hallucinations, with small models halting at this suboptimal distribution and larger ones eventually learning to assign these sequences lower probabilities; 3) perplexity is a strong predictor of in-context learning performance on 74 multiple-choice tasks from BIG-Bench, and this holds independent of model size. Together, these results show that perplexity is more predictive of model behaviors than model size or training computation.
Pre-trained masked language models successfully perform few-shot learning by formulating downstream tasks as text infilling. However, as a strong alternative in the full-shot setting, discriminative pre-trained models such as ELECTRA do not fit into this paradigm. In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models across a wide range of tasks. ELECTRA is pre-trained to distinguish whether a token is generated or original. We naturally extend this to prompt-based few-shot learning by training to score the originality of the target options, without introducing new parameters. Our method can easily be adapted to tasks involving multi-token predictions without extra computational overhead. Analysis shows that ELECTRA learns distributions that align better with downstream tasks.
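As a rough illustration of this scoring idea (not the paper's implementation; the prompt, the verbalizers, and the choice to average over all tokens are assumptions), the sketch below uses the Hugging Face ELECTRA discriminator to prefer the candidate whose filled-in prompt looks most "original".

    import torch
    from transformers import ElectraTokenizerFast, ElectraForPreTraining

    tok = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")
    model = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
    model.eval()

    def originality_score(prompt_template: str, option: str) -> float:
        text = prompt_template.format(option)
        enc = tok(text, return_tensors="pt")
        with torch.no_grad():
            logits = model(**enc).logits[0]     # positive logits mean "token was replaced"
        # average probability that each token is original (not a replacement)
        return torch.sigmoid(-logits).mean().item()

    options = ["great", "terrible"]             # illustrative verbalizers
    best = max(options, key=lambda o: originality_score("A thrilling ride. It was {}.", o))
    print(best)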
Multilingual machine translation suffers from negative interference across languages. A common solution is to relax parameter sharing with language-specific modules like adapters. However, adapters of related languages are unable to transfer information, and their total parameter count grows prohibitively as the number of languages increases. In this work, we overcome these drawbacks using hyper-adapters -- hyper-networks that generate adapters from language and layer embeddings. While past work had poor results when scaling hyper-networks, we propose a rescaling fix that significantly improves convergence and enables training larger hyper-networks. We find that hyper-adapters are more parameter efficient than regular adapters, reaching the same performance with up to 12 times fewer parameters. When using the same number of parameters and FLOPS, our approach consistently outperforms regular adapters. Also, hyper-adapters converge faster than alternative approaches and scale better than regular dense networks. Our analysis shows that hyper-adapters learn to encode language relatedness, enabling positive transfer across languages.
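A minimal sketch of the hyper-adapter idea, assuming a simple bottleneck adapter whose weights are generated from concatenated language and layer embeddings; the dimensions are arbitrary and the paper's rescaling fix is omitted.

    import torch
    import torch.nn as nn

    DIM, BOTTLENECK, EMB = 512, 64, 32

    class HyperAdapter(nn.Module):
        def __init__(self, n_langs: int, n_layers: int):
            super().__init__()
            self.lang_emb = nn.Embedding(n_langs, EMB)
            self.layer_emb = nn.Embedding(n_layers, EMB)
            # hyper-network: (language, layer) embedding -> flattened adapter weights
            self.gen_down = nn.Linear(2 * EMB, DIM * BOTTLENECK)
            self.gen_up = nn.Linear(2 * EMB, BOTTLENECK * DIM)

        def forward(self, hidden, lang_id, layer_id):
            src = torch.cat([self.lang_emb(lang_id), self.layer_emb(layer_id)], dim=-1)
            w_down = self.gen_down(src).view(DIM, BOTTLENECK)
            w_up = self.gen_up(src).view(BOTTLENECK, DIM)
            # standard bottleneck adapter with a residual connection
            return hidden + torch.relu(hidden @ w_down) @ w_up

    adapter = HyperAdapter(n_langs=20, n_layers=12)
    out = adapter(torch.randn(2, 16, DIM), torch.tensor(3), torch.tensor(5))

Because every adapter is produced by the same generator, related languages can share information through nearby language embeddings while the per-language parameter count stays constant.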
Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th of the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
Mixture-of-experts layers (MoEs) enable efficient scaling of language models through conditional computation. This paper presents a detailed empirical study of how autoregressive MoE language models compare with dense models across a wide range of settings: in- and out-of-domain language modeling, zero- and few-shot priming, and full fine-tuning. With the exception of fine-tuning, we find MoEs to be substantially more compute efficient. At more modest training budgets, MoEs can match the performance of dense models using ~4x less compute. This gap narrows at scale, but our largest MoE model (1.1T parameters) consistently outperforms a compute-equivalent dense model (6.7B parameters). Overall, this performance gap varies significantly across tasks and domains, suggesting that MoE and dense models generalize in different ways that merit further study. We make our code and models publicly available for research use.
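For readers unfamiliar with conditional computation, the toy sketch below (not the paper's implementation, which uses different routing and operates at much larger scale) shows a top-1 gated mixture-of-experts feed-forward layer: each token activates only one expert, so per-token compute stays roughly constant as experts, and total parameters, are added.

    import torch
    import torch.nn as nn

    class Top1MoE(nn.Module):
        def __init__(self, dim=512, hidden=2048, n_experts=8):
            super().__init__()
            self.router = nn.Linear(dim, n_experts)
            self.experts = nn.ModuleList(
                [nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
                 for _ in range(n_experts)])

        def forward(self, x):                       # x: (tokens, dim)
            gates = torch.softmax(self.router(x), dim=-1)
            weight, expert_id = gates.max(dim=-1)   # top-1 routing
            out = torch.zeros_like(x)
            for e, expert in enumerate(self.experts):
                mask = expert_id == e
                if mask.any():                      # only the chosen expert runs for each token
                    out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
            return out

    moe = Top1MoE()
    y = moe(torch.randn(64, 512))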
Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language tasks without fine-tuning. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities across a wide range of tasks. Our largest model, with 7.5 billion parameters, sets a new state of the art in few-shot learning on more than 20 representative languages, outperforming GPT-3 of comparable size on multilingual commonsense reasoning (+7.4% absolute accuracy in the 0-shot setting and +9.4% in the 4-shot setting) and natural language inference (+5.4% in each of the 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 across 182 translation directions with 32 training examples, while surpassing the official supervised baselines in 45 directions. We present a detailed analysis of where the model succeeds and fails, showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement in surface-form robustness and in adapting to tasks that do not have a natural cloze form. Finally, we evaluate our models on hate speech detection in five languages and find that they have limitations similar to comparably sized GPT-3 models.
With the growth of cyberattacks and cyber espionage, the need for better and more powerful intrusion detection systems (IDS) is more pressing than ever. The basic task of an IDS is to act as the first line of defense in detecting attacks on the Internet. As intruders' strategies become more sophisticated and difficult to detect, researchers have begun applying novel machine learning (ML) techniques to detect intruders effectively, thereby preserving Internet users' information and overall trust in the security of the Internet. Over the past decade, there has been an explosive surge of intrusion detection techniques based on ML and deep learning (DL) architectures, evaluated on various cybersecurity datasets such as DARPA, KDDCUP'99, NSL-KDD, CAIDA, CTU-13, and UNSW-NB15. In this study, we review the contemporary literature and provide a comprehensive survey of different types of intrusion detection techniques that use support vector machine (SVM) algorithms as the classifier. We focus only on studies evaluated on the two most widely used datasets in cybersecurity, namely the KDDCUP'99 and NSL-KDD datasets. We provide a summary of each method, identifying the role of the SVM classifier and all other algorithms involved in each study. Furthermore, we present a critical review of each method in tabular form, highlighting the performance metrics, strengths, and limitations of each of the surveyed methods.
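To make the role of the SVM classifier concrete, here is a minimal scikit-learn sketch of the kind of detector the survey covers; the file name "nsl_kdd_train.csv" and the "label" column are placeholders, and real NSL-KDD preprocessing (categorical feature encoding, attack-type grouping) is omitted.

    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    df = pd.read_csv("nsl_kdd_train.csv")                     # placeholder path
    X = df.drop(columns=["label"]).select_dtypes("number")    # numeric features only, for brevity
    y = (df["label"] != "normal").astype(int)                 # 1 = attack, 0 = normal traffic

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))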
Vehicles equipped with on-board sensors are increasingly connected. This enables information sharing for a more comprehensive understanding of the environment. However, peer-to-peer communication over public cellular networks raises multiple networking obstacles, requiring network systems to relay communication and connect parties that cannot reach each other directly. Web Real-Time Communication (WebRTC) is a good candidate for streaming video across vehicles, as it enables low-latency communication while providing standard protocols for secure handshakes, discovering public IPs, and traversing Network Address Translation (NAT) systems. However, end-to-end Quality of Service (QoS) adaptation in an infrastructure where sending and receiving are decoupled by a relay requires a mechanism to effectively adapt the video stream to the network capacity. To this end, this paper investigates mechanisms for adjusting resolution, frame rate, and bitrate by leveraging Real-time Transport Control Protocol (RTCP) metrics such as bandwidth and round-trip time. The solution aims to ensure that the receiving system obtains the relevant information in a timely manner. The impact on end-to-end throughput efficiency and reaction time when applying the different adaptation approaches is analyzed in a real 5G testbed.
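A hedged sketch of one possible adaptation policy of the kind investigated here: map RTCP-style feedback (estimated bandwidth and round-trip time) to a resolution/frame-rate/bitrate rung. The ladder, headroom factor, and RTT threshold below are illustrative assumptions, not values from the paper.

    from dataclasses import dataclass

    # (width, height, fps, target bitrate in kbit/s), highest quality first
    LADDER = [(1920, 1080, 30, 4000), (1280, 720, 30, 2500),
              (960, 540, 25, 1200), (640, 360, 15, 600)]

    @dataclass
    class RtcpFeedback:
        est_bandwidth_kbps: float   # e.g. from receiver-side bandwidth estimates
        rtt_ms: float

    def pick_rung(fb: RtcpFeedback, headroom: float = 0.8, rtt_limit_ms: float = 150.0):
        budget = fb.est_bandwidth_kbps * headroom
        if fb.rtt_ms > rtt_limit_ms:    # high RTT as a congestion hint: be more conservative
            budget *= 0.5
        for rung in LADDER:             # choose the highest rung that fits the budget
            if rung[3] <= budget:
                return rung
        return LADDER[-1]               # otherwise fall back to the lowest rung

    print(pick_rung(RtcpFeedback(est_bandwidth_kbps=3500, rtt_ms=40)))  # -> 720p rung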