Input distribution shift is one of the important problems in unsupervised domain adaptation (UDA). The most popular UDA approaches focus on domain-invariant representation learning, attempting to align the features of different domains into similar feature distributions. However, these approaches ignore the direct alignment of the input word distributions between domains, which is an important factor in word-level classification tasks such as cross-domain NER. In this work, we shed new light on cross-domain NER by introducing a subword-level solution, X-Piece, for the input word-level distribution shift. Specifically, we re-tokenize the input words of the source domain so as to approach the target subword distribution, which is formulated and solved as an optimal transport problem. As this approach focuses on the input level, it can also be combined with previous DIRL methods for further improvement. Experimental results show the effectiveness of the proposed method based on BERT-tagger on four benchmark NER datasets. The proposed method is also shown to benefit DIRL methods such as DANN.
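To make the re-tokenization idea concrete, here is a minimal sketch of the underlying OT step, assuming toy subword vocabularies, made-up frequencies, a naive substring-based ground cost, and the POT library; the paper's actual cost design and vocabularies will differ.

```python
# A minimal sketch of the OT step behind subword re-tokenization, assuming
# toy subword frequency distributions; not the authors' actual formulation.
import numpy as np
import ot  # POT: pip install pot

# Hypothetical subword vocabularies and relative frequencies in each domain.
src_subwords = ["wash", "##ington", "new", "##york"]
tgt_subwords = ["washington", "new", "york"]
p_src = np.array([0.3, 0.3, 0.2, 0.2])   # source subword distribution
p_tgt = np.array([0.5, 0.25, 0.25])      # target subword distribution

# Toy ground cost: 0 if one subword is a substring of the other, else 1.
C = np.array([[0.0 if a.strip("#") in b or b in a.strip("#") else 1.0
               for b in tgt_subwords] for a in src_subwords])

# Solve the discrete OT problem; the coupling tells how much source subword
# mass should be re-assigned to each target subword.
coupling = ot.emd(p_src, p_tgt, C)
print(coupling)
```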
Natural language processing (NLP) algorithms are rapidly improving but often struggle when applied to out-of-distribution examples. A prominent approach for mitigating the domain gap is domain adaptation, where a model trained on a source domain is adapted to a new target domain. We present a new learning setup, "domain adaptation from scratch", which we believe is crucial for extending the reach of NLP to sensitive domains in a privacy-preserving manner. In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain, from which annotations cannot be obtained. Our study compares several approaches to this challenging setup, from data selection and domain adaptation algorithms to active learning paradigms, on two NLP tasks: sentiment analysis and named entity recognition. Our results show that the above approaches can ease the domain gap, and that combining them further improves the results.
In computer vision, it is common to face domain shift: images with the same classes but different acquisition conditions. In domain adaptation (DA), one wants to classify unlabeled target images using labeled source images. Unfortunately, deep neural networks trained on a source training set perform poorly on target images that do not belong to the training domain. One strategy to improve these performances is to align the source and target image distributions in an embedding space using optimal transport (OT). However, OT can cause negative transfer, i.e., the alignment of samples with different labels, which leads to overfitting, especially in the presence of label shift between domains. In this work, we mitigate negative alignment by interpreting it as a noisy label assignment to the target images. We then mitigate its effect with appropriate regularization. We propose to couple mixup regularization \citep{zhang2018mixup} with a loss that is robust to noisy labels in order to improve domain adaptation performance. We show in an extensive ablation study that the combination of these two techniques is critical to achieve improved performance. Finally, we evaluate our method, called \textsc{mixunbot}, on several benchmarks and real-world DA problems.
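As an illustration of the proposed combination, the following PyTorch sketch pairs mixup with generalized cross-entropy as a stand-in noise-robust loss; the paper's exact robust loss and training loop are not given here, so treat this as an assumption-laden sketch.

```python
# A minimal PyTorch sketch combining mixup with a noise-robust loss
# (generalized cross-entropy here, as a stand-in for the paper's robust loss).
import torch
import torch.nn.functional as F

def mixup(x, y_onehot, alpha=0.2):
    """Mix a batch with a shuffled copy of itself (Zhang et al., 2018)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

def gce_loss(logits, y_soft, q=0.7):
    """Generalized cross-entropy: (1 - p_y^q) / q, robust to label noise."""
    p = F.softmax(logits, dim=1)
    p_y = (p * y_soft).sum(dim=1).clamp_min(1e-6)
    return ((1.0 - p_y.pow(q)) / q).mean()

# Usage with noisy OT-assigned target labels `y_noisy` (one-hot):
# x_mix, y_mix = mixup(x_target, y_noisy)
# loss = gce_loss(model(x_mix), y_mix)
```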
Sleep staging is of great importance in the diagnosis and treatment of sleep disorders. Recently, numerous data-driven deep learning models have been proposed for automatic sleep staging. They mainly train the model on a large public labeled sleep dataset and test it on a smaller one with subjects of interest. However, they usually assume that the train and test data are drawn from the same distribution, which may not hold in real-world scenarios. Unsupervised domain adaptation (UDA) has recently been developed to handle this domain shift problem. However, previous UDA methods applied to sleep staging have two main limitations. First, they rely on a totally shared model for the alignment, which may lose domain-specific information during feature extraction. Second, they only align the source and target distributions globally, without considering the class information in the target domain, which hinders the classification performance of the model at test time. In this work, we propose a novel adversarial learning framework called ADAST to tackle the domain shift problem in the unlabeled target domain. First, we develop an unshared attention mechanism to preserve the domain-specific features in both domains. Second, we design an iterative self-training strategy to improve the classification performance on the target domain via target-domain pseudo labels. We also propose dual classifiers to improve the robustness and quality of the pseudo labels. Experimental results on six cross-domain scenarios validate the efficacy of our proposed framework and its advantage over state-of-the-art UDA methods. The source code is available at https://github.com/emadeldeen24/ADAST.
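A hedged sketch of the dual-classifier self-training step: keep a target pseudo-label only when both classifier heads agree with high confidence. The threshold and head design are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_pseudo_labels(feats, clf1, clf2, threshold=0.9):
    """Keep target samples on which both heads agree with high confidence."""
    p1, p2 = F.softmax(clf1(feats), dim=1), F.softmax(clf2(feats), dim=1)
    conf1, y1 = p1.max(dim=1)
    conf2, y2 = p2.max(dim=1)
    keep = (y1 == y2) & (conf1 > threshold) & (conf2 > threshold)
    return feats[keep], y1[keep]  # feed these into the next self-training round
```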
While huge volumes of unlabeled data are generated and made available in many domains, the demand for automated understanding of visual data is higher than ever before. Most existing machine learning models typically rely on massive amounts of labeled training data to achieve high performance. Unfortunately, such a requirement cannot be met in real-world applications. The number of labels is limited, and manually annotating data is expensive and time-consuming. It is often necessary to transfer knowledge from an existing labeled domain to a new domain. However, model performance degrades because of the differences between domains (domain shift or dataset bias). To overcome the burden of annotation, domain adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another similar but different domain. Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain. The main objective of UDA is to reduce the domain discrepancy between the labeled source data and the unlabeled target data, and to learn domain-invariant representations across the two domains during training. In this paper, we first define the UDA problem. Second, we overview the state-of-the-art methods for different categories of UDA, covering both traditional methods and deep-learning-based methods. Finally, we collect frequently used benchmark datasets and the reported results of state-of-the-art UDA methods on visual recognition problems.
Aspect-based sentiment analysis (ABSA) aims at extracting opinionated aspect terms in review texts and determining their sentiment polarities, which is widely studied in both academia and industry. As a fine-grained classification task, the annotation cost is extremely high. Domain adaptation is a popular solution to alleviate the data deficiency issue in new domains by transferring common knowledge across domains. Most cross-domain ABSA studies are based on structure correspondence learning (SCL), and use pivot features to construct auxiliary tasks for narrowing down the gap between domains. However, their pivot-based auxiliary tasks can only transfer knowledge of aspect terms but not sentiment, limiting the performance of existing models. In this work, we propose a novel Syntax-guided Domain Adaptation Model, named SDAM, for more effective cross-domain ABSA. SDAM exploits syntactic structure similarities for building pseudo training instances, during which aspect terms of the target domain are explicitly related to sentiment polarities. Besides, we propose a syntax-based BERT masked language model for further capturing domain-invariant features. Finally, to alleviate the sentiment inconsistency issue in multi-gram aspect terms, we introduce a span-based joint aspect term and sentiment analysis module into the cross-domain End2End ABSA. Experiments on five benchmark datasets show that our model consistently outperforms the state-of-the-art baselines with respect to the Micro-F1 metric for the cross-domain End2End ABSA task.
Domain adaptation (DA) aims to transfer the knowledge learned from a labeled source domain to an unlabeled, or sparsely labeled but related, target domain. Ideally, the source and target distributions should be aligned to each other equally to achieve unbiased knowledge transfer. However, due to the significant imbalance between the amounts of annotated data in the source and target domains, usually only the target distribution is aligned to the source domain, leading to unnecessary source-specific knowledge being adapted to the target domain, i.e., biased domain adaptation. To resolve this problem, in this work we model the uncertainty of the discriminator in adversarial-based DA methods to optimize unbiased transfer. We theoretically analyze the effectiveness of the proposed unbiased transferability learning method in DA. Furthermore, to alleviate the impact of the imbalance in annotated data, we exploit the estimated uncertainty for pseudo-label selection of unlabeled samples in the target domain, which helps achieve better marginal and conditional distribution alignment between the domains. Extensive experimental results on various DA benchmark datasets show that the proposed method can be easily incorporated into various adversarial-based DA methods, achieving state-of-the-art performance.
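For intuition, here is a minimal sketch of uncertainty-guided pseudo-label selection using predictive entropy as the uncertainty estimate; the paper models the discriminator's uncertainty, so this is a simplified stand-in with an assumed threshold.

```python
# Keep target samples whose predictive entropy (one possible uncertainty
# estimate) falls below a fraction of the maximum possible entropy.
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def entropy_select(logits, max_entropy_ratio=0.3):
    p = F.softmax(logits, dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1)
    keep = entropy < max_entropy_ratio * math.log(logits.size(1))
    return keep, p.argmax(dim=1)  # mask of confident samples and their pseudo labels
```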
Crowdsourcing is regarded as one prospective solution for effective supervised learning, aiming to build large-scale annotated training data with crowd workers. Previous studies focus on reducing the influence of the noise in crowdsourced annotations. We take a different point of view in this work, regarding all crowdsourced annotations as gold standard with respect to the individual annotators. In this way, we find that crowdsourcing can be highly similar to domain adaptation, and then the recent advances of cross-domain methods can be applied to crowdsourcing almost directly. Here we take named entity recognition (NER) as a study case, suggesting an annotator-aware representation learning model inspired by domain adaptation methods that attempt to capture effective domain-aware features. We investigate both unsupervised and supervised crowdsourcing learning, assuming that no or only a small number of expert annotations are available. Experimental results on a benchmark crowdsourced NER dataset show that our method is highly effective, leading to new state-of-the-art performance. In addition, under the supervised setting, we can achieve impressive performance with only a very small set of expert annotations.
In this work, we study unsupervised domain adaptation (UDA) in a challenging self-supervised approach. One of the difficulties is how to learn task discrimination in the absence of target labels. Unlike previous literature which directly aligns cross-domain distributions or leverages reverse gradients, we propose Domain Confused Contrastive Learning (DCCL) to bridge the source and target domains via domain puzzles, and retain discriminative representations after adaptation. Technically, DCCL searches for the most domain-challenging direction and exquisitely crafts domain-confused augmentations as positive pairs, and then contrastively encourages the model to pull representations towards the other domain, thus learning more stable and effective domain-invariant representations. We also investigate whether contrastive learning necessarily helps UDA when other data augmentations are performed. Extensive experiments show that DCCL significantly outperforms the baselines.
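One plausible, assumed instantiation of domain-confused positives is to interpolate each source feature toward a random target feature and contrast with a standard InfoNCE loss; the actual DCCL construction may differ substantially.

```python
# An assumed instantiation of "domain-confused" positive pairs: pair each
# source feature with an interpolation toward a random target feature, then
# apply InfoNCE. Not the paper's exact construction.
import torch
import torch.nn.functional as F

def dccl_like_loss(z_src, z_tgt, lam=0.7, tau=0.1):
    # Assumes the target batch is at least as large as the source batch.
    idx = torch.randperm(z_tgt.size(0))[: z_src.size(0)]
    z_pos = F.normalize(lam * z_src + (1 - lam) * z_tgt[idx], dim=1)  # confused view
    z_src = F.normalize(z_src, dim=1)
    logits = z_src @ z_pos.t() / tau                       # similarity to all candidates
    labels = torch.arange(z_src.size(0), device=z_src.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```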
Domain adaptation methods reduce domain shift typically by learning domain-invariant features. Most existing methods are built on distribution matching, e.g., adversarial domain adaptation, which tends to corrupt feature discriminability. In this paper, we propose Discriminative Radial Domain Adaptation (DRDR) which bridges source and target domains via a shared radial structure. It's motivated by the observation that as the model is trained to be progressively discriminative, features of different categories expand outwards in different directions, forming a radial structure. We show that transferring such an inherently discriminative structure makes it possible to enhance feature transferability and discriminability simultaneously. Specifically, we represent each domain with a global anchor and each category with a local anchor to form a radial structure, and reduce domain shift via structure matching. It consists of two parts, namely isometric transformation to align the structure globally and local refinement to match each category. To enhance the discriminability of the structure, we further encourage samples to cluster close to the corresponding local anchors based on optimal-transport assignment. In extensive experiments on multiple benchmarks, our method is shown to consistently outperform state-of-the-art approaches on varied tasks, including the typical unsupervised domain adaptation, multi-source domain adaptation, domain-agnostic learning, and domain generalization.
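A hedged sketch of the anchor-based structure matching described above: each domain gets a global anchor (feature mean) and per-class local anchors (class means), and the class directions relative to the global anchor are matched across domains. The isometric transformation and OT-based refinement are omitted, so this is only a partial illustration.

```python
# Match the radial structure of two domains via global/local anchors.
# Assumes every class appears in both batches; details are illustrative.
import torch
import torch.nn.functional as F

def radial_match_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    g_s, g_t = src_feats.mean(0), tgt_feats.mean(0)      # global anchors
    loss = src_feats.new_zeros(())
    for c in range(num_classes):
        a_s = src_feats[src_labels == c].mean(0) - g_s   # local anchor direction
        a_t = tgt_feats[tgt_pseudo == c].mean(0) - g_t
        loss = loss + (1 - F.cosine_similarity(a_s, a_t, dim=0))
    return loss / num_classes
```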
Cross-domain few-shot relation extraction poses a great challenge for the existing few-shot learning methods and domain adaptation methods when the source domain and target domain have large discrepancies. This paper proposes a method combining the ideas of few-shot learning and domain adaptation to deal with this problem. In the proposed method, an encoder, learned by optimizing a representation loss and an adversarial loss, is used to extract the relations of sentences in the source and target domains. The representation loss, comprising a cross-entropy loss and a contrastive loss, makes the encoder extract the relations of the source domain and keep the geometric structure of the classes in the source domain. The adversarial loss is used to merge the source domain and target domain. Experimental results on the benchmark FewRel dataset demonstrate that the proposed method can outperform some state-of-the-art methods.
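Schematically, the objective combines a cross-entropy term, a contrastive term over source pairs, and an adversarial domain loss; the sketch below uses a generic supervised contrastive loss as a stand-in for the paper's contrastive component, and names like `adversarial_domain_loss` are hypothetical placeholders.

```python
# A schematic sketch of the combined objective: cross-entropy + contrastive
# term on source pairs + adversarial domain loss. Components are generic
# stand-ins, not the paper's exact implementations.
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """Generic supervised contrastive loss preserving source class geometry."""
    z = F.normalize(z, dim=1)
    sim = (z @ z.t() / tau).fill_diagonal_(float("-inf"))
    pos = (labels[:, None] == labels[None, :]).float().fill_diagonal_(0)
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp_min(1)).mean()

# total = F.cross_entropy(clf(z_src), y_src) + supcon_loss(z_src, y_src) \
#         + adversarial_domain_loss(z_src, z_tgt)  # hypothetical, e.g., a GRL discriminator
```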
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a well-labeled source domain to a different but related unlabeled target domain with identical label space. Currently, the main workhorse for solving UDA is domain alignment, which has proven successful. However, it is often difficult to find an appropriate source domain with identical label space. A more practical scenario is so-called partial domain adaptation (PDA), in which the source label set or space subsumes the target one. Unfortunately, in PDA, due to the existence of the irrelevant categories in the source domain, it is quite hard to obtain a perfect alignment, thus resulting in mode collapse and negative transfer. Although several efforts have been made by down-weighting the irrelevant source categories, the strategies used tend to be burdensome and risky, since exactly which categories are irrelevant is unknown. These challenges motivate us to find a relatively simpler alternative to solve PDA. To achieve this, we first provide a thorough theoretical analysis, which illustrates that the target risk is bounded by both model smoothness and between-domain discrepancy. Considering the difficulty of perfect alignment in solving PDA, we turn to focus on model smoothness while discarding the riskier domain alignment to enhance the adaptability of the model. Specifically, we instantiate model smoothness as a quite simple intra-domain structure preserving (IDSP) scheme. To the best of our knowledge, this is the first naive attempt to address PDA without domain alignment. Finally, our empirical results on multiple benchmark datasets demonstrate that IDSP is not only superior to the PDA SOTAs by a significant margin on some benchmarks (e.g., +10% on Cl->Rw and +8% on Ar->Rw), but also complementary to domain alignment in the standard UDA setting.
Recent advances in NLP are brought by a range of large-scale pretrained language models (PLMs). These PLMs have brought significant performance gains for a range of NLP tasks, circumventing the need to customize complex designs for specific tasks. However, most current work focuses on finetuning PLMs on domain-specific datasets, ignoring the fact that the domain gap can lead to overfitting and even performance drops. Therefore, it is practically important to find an appropriate method to effectively adapt PLMs to a target domain of interest. Recently, a range of methods have been proposed to achieve this purpose. Early surveys on domain adaptation are not suitable for PLMs, because PLMs exhibit far more sophisticated behavior than traditional models trained from scratch, and domain adaptation of PLMs needs to be redesigned to take effect. This paper aims to provide a survey of these newly proposed methods and to shed light on how to apply traditional machine learning methods to newly evolved and future technologies. By examining the issues of deploying PLMs for downstream tasks, we propose a taxonomy of domain adaptation approaches from a machine learning system view, covering methods for input augmentation, model optimization, and personalization. We discuss and compare those methods and suggest promising future research directions.
Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on a large number of target domain data can be reduced for constructing target learners. Due to its wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances in transfer learning. Due to the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize the existing transfer learning research, as well as to summarize and interpret the mechanisms and the strategies of transfer learning in a comprehensive way, which may help readers have a better understanding of the current research status and ideas. Unlike previous surveys, this survey paper reviews more than forty representative transfer learning approaches, especially homogeneous transfer learning approaches, from the perspectives of data and model. The applications of transfer learning are also briefly introduced. In order to show the performance of different transfer learning models, over twenty representative transfer learning models are evaluated in experiments on three different datasets, i.e., Amazon Reviews, Reuters-21578, and Office-31. The experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.
We propose two novel transferability metrics, F-OTCE (Fast Optimal Transport based Conditional Entropy) and JC-OTCE (Joint Correspondence OTCE), to evaluate how much a source model (task) can benefit the learning of a target task, and to learn more transferable representations for cross-domain cross-task transfer learning. Unlike existing metrics that require evaluating the empirical transferability on auxiliary tasks, our metrics are auxiliary-free, such that they can be computed much more efficiently. Specifically, F-OTCE estimates transferability by first solving an optimal transport (OT) problem between the source and target distributions, and then using the optimal coupling to compute the negative conditional entropy between the source and target labels. It can also serve as a loss function to maximize the transferability of the source model before finetuning on the target task. Meanwhile, JC-OTCE improves the transferability robustness of F-OTCE by including label distances in the OT problem, though it may incur additional computation cost. Extensive experiments demonstrate that F-OTCE and JC-OTCE outperform state-of-the-art auxiliary-free metrics by 18.85% and 28.88%, respectively, in terms of the correlation coefficient with the ground-truth transfer accuracy. By eliminating the training cost of auxiliary tasks, the two metrics reduce the total computation time of the previous method from 43 minutes to 9.32s and 10.78s, respectively, for a pair of tasks. When used as a loss function, F-OTCE shows consistent improvements in the transfer accuracy of the source model in few-shot classification experiments, with up to a 4.41% accuracy gain.
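Following the description above, a simplified F-OTCE computation might look like the sketch below (POT library, squared-Euclidean cost, labels as integer NumPy arrays; everything beyond the OT-then-conditional-entropy recipe is an assumption).

```python
# Simplified F-OTCE: solve OT between source and target samples, use the
# coupling to build a joint label distribution, then score transferability
# as the negative conditional entropy H(Yt|Ys).
import numpy as np
import ot  # POT library

def f_otce(Xs, ys, Xt, yt, num_cs, num_ct):
    a = np.full(len(Xs), 1.0 / len(Xs))
    b = np.full(len(Xt), 1.0 / len(Xt))
    P = ot.emd(a, b, ot.dist(Xs, Xt))       # optimal coupling over sample pairs
    # Joint distribution over (source label, target label) induced by P.
    J = np.zeros((num_cs, num_ct))
    np.add.at(J, (ys[:, None], yt[None, :]), P)
    marg = np.clip(J.sum(axis=1, keepdims=True), 1e-12, None)
    H = -(J * np.log(np.clip(J / marg, 1e-12, None))).sum()   # H(Yt|Ys)
    return -H                                # higher = more transferable
```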
Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain-invariant feature representations, while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain-invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain-invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted by the domain critic, to estimate the empirical Wasserstein distance between the source and target samples, and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of the Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain-invariant representation learning approaches.
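A condensed PyTorch sketch of the WDGRL signal: the critic estimates the empirical Wasserstein distance between source and target features, and the feature extractor is updated to minimize that estimate. The gradient penalty follows common WGAN practice and is an assumption, as are all hyperparameters.

```python
import torch

def critic_objective(critic, h_src, h_tgt, gp_weight=10.0):
    wd = critic(h_src).mean() - critic(h_tgt).mean()       # empirical W-distance estimate
    # Gradient penalty on interpolates keeps the critic roughly 1-Lipschitz
    # (assumes equal source/target batch sizes).
    eps = torch.rand(h_src.size(0), 1, device=h_src.device)
    inter = (eps * h_src + (1 - eps) * h_tgt).requires_grad_(True)
    grad = torch.autograd.grad(critic(inter).sum(), inter, create_graph=True)[0]
    gp = ((grad.norm(2, dim=1) - 1) ** 2).mean()
    return wd, wd - gp_weight * gp   # critic maximizes the second term

# Alternating updates (schematic):
# _, critic_score = critic_objective(critic, h_src.detach(), h_tgt.detach())
# (-critic_score).backward()         # train the critic
# wd, _ = critic_objective(critic, h_src, h_tgt)
# (task_loss + lam * wd).backward()  # train the extractor to shrink the distance
```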
In this paper, we propose a novel approach for unsupervised domain adaptation that relates the notions of optimal transport, learning probability measures, and unsupervised learning. The proposed approach, HOT-DA, is based on a hierarchical formulation of optimal transport, which exploits, beyond the geometrical information captured by the ground metric, richer structural information in the source and target domains. The additional information in the labeled source domain is formed intrinsically by grouping samples into structures according to their class labels. While exploring hidden structures in the unlabeled target domain, we pose the problem of learning probability measures through Wasserstein barycenters, which we prove to be equivalent to spectral clustering. Experiments on a toy dataset with controllable complexity and on two challenging visual adaptation datasets show the superiority of the proposed approach.
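To illustrate the hierarchical formulation, the sketch below groups source samples by label, clusters the target with spectral clustering (per the stated equivalence), and solves OT between structures using pairwise Wasserstein distances as the ground cost; this is a simplified reading, not the authors' implementation.

```python
# Hierarchical OT sketch: structures as groups of samples, ground cost between
# structures as the Wasserstein distance between them, then OT over structures.
import numpy as np
import ot
from sklearn.cluster import SpectralClustering

def hierarchical_ot(Xs, ys, Xt, n_structs):
    src_groups = [Xs[ys == c] for c in np.unique(ys)]
    tgt_labels = SpectralClustering(n_clusters=n_structs).fit_predict(Xt)
    tgt_groups = [Xt[tgt_labels == k] for k in range(n_structs)]
    # Ground cost between structures = Wasserstein distance between them.
    D = np.array([[ot.emd2(ot.unif(len(g)), ot.unif(len(h)), ot.dist(g, h))
                   for h in tgt_groups] for g in src_groups])
    w_src = np.array([len(g) for g in src_groups], dtype=float)
    w_tgt = np.array([len(h) for h in tgt_groups], dtype=float)
    return ot.emd(w_src / w_src.sum(), w_tgt / w_tgt.sum(), D)
```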
Few-shot named entity recognition (NER) is imperative for entity tagging in limited-resource domains and has thus received proper attention in recent years. Existing few-shot approaches are evaluated mainly under in-domain settings. In contrast, little is known about how these models perform in cross-domain NER using a few labeled in-domain examples. This paper proposes a two-step rationale-centric data augmentation method to improve the model's generalization ability. Results on several datasets show that our model-agnostic method significantly improves performance on cross-domain NER tasks compared with previous state-of-the-art methods, including counterfactual data augmentation and prompt-tuning methods. Our code is available at \url{https://github.com/lifan-yuan/FactMix}.
By leveraging data from a fully labeled source domain, unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain through explicit discrepancy minimization of data distributions or adversarial learning. As an enhancement, category alignment is involved during adaptation to reinforce target feature discrimination by utilizing model predictions. However, there remain unexplored problems regarding pseudo-label inaccuracies incurred by wrong category predictions on the target domain, and distribution deviations caused by overfitting on the source domain. In this paper, we propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using a soft pseudo-label strategy and avoids overfitting on the source domain with a curriculum learning strategy. Theoretically, it successfully decreases the combined risk in the upper bound of the expected error on the target domain. In the first stage, we train a model with a distribution-alignment UDA method to obtain soft semantic labels on the target domain with rather high confidence. To avoid overfitting on the source domain, in the second stage we propose a curriculum learning strategy to adaptively control the weighting between the losses from the two domains, so that the focus of the training stage gradually shifts from the source distribution to the target distribution, with the prediction confidence boosted on the target domain. Extensive experiments on two well-known benchmark datasets validate the universal effectiveness of our proposed framework in promoting the performance of top-ranked UDA algorithms and demonstrate its consistent superior performance.
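The second-stage weighting can be pictured as a simple schedule that shifts the loss mixture from source to target over training; the linear ramp below is an assumed shape, since the paper's adaptive control is not spelled out here.

```python
# Curriculum weighting between source loss and target (soft pseudo-label)
# loss; the mixing weight moves from 0 to 1 as training progresses.
def curriculum_loss(src_loss, tgt_loss, step, total_steps):
    lam = min(1.0, step / float(total_steps))   # 0 -> 1 over training (assumed linear)
    return (1.0 - lam) * src_loss + lam * tgt_loss
```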
Domain adaptation aims to transfer knowledge from labeled instances obtained in a source domain to a target domain, in order to fill the gap between the domains. Most domain adaptation methods assume that the source and target domains have the same dimensionality. Methods that are applicable when the number of features differs in each domain have rarely been studied, especially when no label information is given for the test data obtained from the target domain. In this paper, it is assumed that common features exist in both domains and that extra (new additional) features are observed in the target domain; hence, the dimensionality of the target domain is higher than that of the source domain. To leverage the homogeneity of the common features, the adaptation between these source and target domains is formulated as an optimal transport (OT) problem. In addition, a learning bound in the target domain for the proposed OT-based method is derived. The proposed algorithm is validated using both simulated and real-world data.
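A hedged sketch of the setup: compute the coupling from the common feature columns only, then use a barycentric projection to carry source samples into the higher-dimensional target space; this is one simplified reading of the formulation, not the paper's exact method.

```python
# OT on shared features: couple source and target using only the common
# columns, then map source samples to weighted averages of full target
# samples, inheriting the extra target-only features.
import numpy as np
import ot

def adapt_common_features(Xs_common, Xt_common, Xt_full):
    a = ot.unif(len(Xs_common))
    b = ot.unif(len(Xt_common))
    P = ot.emd(a, b, ot.dist(Xs_common, Xt_common))
    # Barycentric projection into the (higher-dimensional) target space.
    return (P / P.sum(axis=1, keepdims=True)) @ Xt_full
```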