Unsupervised domain adaptation in semantic segmentation alleviates the reliance on expensive pixel-wise annotation. It leverages a labeled source-domain dataset together with unlabeled target-domain images to learn a segmentation network. In this paper, we observe two main issues in existing domain-invariant learning frameworks. (1) Distracted by the feature distribution alignment, the network cannot focus on the segmentation task. (2) Fitting the source-domain data well harms target-domain performance. To address these issues, we propose DecoupleNet to alleviate overfitting to the source domain and enable the final model to focus more on the segmentation task. Furthermore, we put forward Self-Discrimination (SD) and introduce an auxiliary classifier to learn more discriminative target-domain features with pseudo labels. Finally, we propose Online Enhanced Self-Training (OEST) to contextually enhance the quality of pseudo labels in an online manner. Experiments show that our method outperforms existing state-of-the-art methods, and extensive ablation studies verify the effectiveness of each component. Code is available at https://github.com/dvlab-research/decouplenet.
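The self-training loop described above rests on pseudo labels derived from the network's own predictions on unlabeled target images. A minimal sketch of confidence-thresholded pseudo-labeling (the function name, threshold value, and ignore index are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9, ignore_index=255):
    """Confidence-thresholded pseudo-labels for self-training.

    probs: (H, W, C) softmax output of the segmentation network on an
    unlabeled target-domain image. Pixels whose top-class confidence
    falls below `threshold` are marked `ignore_index` and excluded
    from the self-training loss.
    """
    conf = probs.max(axis=-1)          # per-pixel confidence
    labels = probs.argmax(axis=-1)     # per-pixel predicted class
    labels[conf < threshold] = ignore_index
    return labels

# Toy example: a 2x2 "image" with 3 classes.
probs = np.array([[[0.95, 0.03, 0.02], [0.40, 0.35, 0.25]],
                  [[0.10, 0.85, 0.05], [0.20, 0.20, 0.60]]])
labels = pseudo_labels(probs, threshold=0.8)
# Low-confidence pixels (0.40 and 0.60) are ignored; the rest keep
# their argmax class.
```

An online variant, as the abstract suggests, would refresh these labels as the model improves rather than fixing them once.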
Transformer language models pre-trained on large unlabeled corpora have produced state-of-the-art results in natural language processing, organic molecule design, and protein sequence generation. However, such models have not yet been applied to learning the composition patterns of inorganic materials. Here we train seven modern transformer language models (GPT, GPT-2, GPT-Neo, GPT-J, BLMM, BART, and RoBERTa) on the expanded formulas of materials deposited in the ICSD, OQMD, and Materials Project databases. Six different datasets, with or without non-charge-neutral or electronegativity-balanced samples, are used to benchmark performance and to uncover the generation biases of modern transformer models for the generative design of material compositions. Our extensive experiments show that causal-language-model-based materials transformers can generate chemically valid material compositions, with up to 97.54% being charge neutral and 91.40% being electronegativity balanced, an enrichment more than 6 times higher than a baseline pseudo-random sampling algorithm. These models also demonstrate high novelty, and their potential for new materials discovery is shown by their ability to recover held-out materials. We also find that the properties of the generated samples can be tailored by training the models on curated training sets, such as high-bandgap materials. Our experiments further show that different models each have their own preferences regarding the properties of the generated samples, and that their runtime complexity varies greatly. We have applied our materials transformer models to discover a set of new materials, validated using DFT calculations.
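The charge-neutrality criterion used to judge generated compositions can be checked by searching over common oxidation states of each element. A minimal sketch under the assumption of a small, illustrative oxidation-state table (real screening pipelines use far more complete tables):

```python
from itertools import product

# Illustrative, deliberately incomplete table of common oxidation states.
OXIDATION_STATES = {
    "Li": [1], "Na": [1], "K": [1],
    "Mg": [2], "Ca": [2], "Fe": [2, 3],
    "O": [-2], "Cl": [-1], "F": [-1],
}

def is_charge_neutral(composition):
    """Return True if some assignment of common oxidation states makes
    the composition charge neutral.

    composition: dict mapping element symbol -> atom count,
    e.g. {"Fe": 2, "O": 3} for Fe2O3.
    """
    elements = list(composition)
    for states in product(*(OXIDATION_STATES[e] for e in elements)):
        charge = sum(s * composition[e] for s, e in zip(states, elements))
        if charge == 0:
            return True
    return False

print(is_charge_neutral({"Fe": 2, "O": 3}))   # Fe(+3) x2 + O(-2) x3 = 0
print(is_charge_neutral({"Na": 1, "O": 1}))   # no neutral assignment exists
```

The electronegativity-balance check mentioned in the abstract is analogous, comparing cation and anion electronegativities under the chosen state assignment.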
This paper describes the PASH participation in the TREC 2021 Deep Learning Track. In the recall stage, we adopt a scheme combining sparse and dense retrieval methods. In the multi-stage ranking phase, point-wise and pair-wise ranking strategies are applied one after another, based on a model continually pre-trained on general knowledge and document-level data. Compared to the TREC 2020 Deep Learning Track, we additionally introduce the generative model T5 to further enhance performance.
Modeling semantic information is helpful for scene text recognition. In this work, we propose to jointly model semantic and visual information with a Visual-Semantic Transformer (VST). The VST first explicitly extracts primary semantic information from the visual feature maps with a transformer module and a primary visual-semantic alignment module. The semantic information is then joined with the visual feature maps (viewed as a sequence) to form a pseudo multi-domain sequence combining visual and semantic information, which is subsequently fed into a transformer-based interaction module to enable learning of interactions between visual and semantic features. In this way, the visual features can be enhanced by the semantic information, and vice versa. The enhanced visual features are further decoded by a secondary visual-semantic alignment module that shares weights with the primary one. Finally, the decoded visual features and the enhanced semantic features are jointly processed by a third transformer module to obtain the final text prediction. Experiments on seven public benchmarks, including regular and irregular text recognition datasets, verify the effectiveness of our proposed model, which achieves state-of-the-art results on four of the seven benchmarks.
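The "pseudo multi-domain sequence" above is, at its core, a concatenation of semantic tokens and flattened visual tokens along the sequence axis, so that a subsequent transformer's self-attention can mix the two domains in both directions. A minimal shape sketch (all dimensions here are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Visual feature map flattened into a sequence of H*W tokens, plus a
# handful of extracted semantic tokens (e.g. one per character slot).
visual = rng.normal(size=(8 * 32, 256))    # H=8, W=32, 256-d features
semantic = rng.normal(size=(25, 256))      # 25 semantic tokens

# The pseudo multi-domain sequence concatenates the two along the
# token axis before the transformer-based interaction module.
multi_domain = np.concatenate([semantic, visual], axis=0)
print(multi_domain.shape)  # (281, 256)
```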
Currently, existing state-of-the-art 3D object detectors follow a two-stage paradigm. These methods typically comprise two steps: 1) a region proposal network proposes a handful of high-quality proposals in a bottom-up fashion; 2) the semantic features of the proposed regions are resized and pooled to summarize RoI-wise representations for further refinement. Note that the RoI-wise representations in step 2 are treated as uncorrelated entries when fed to the subsequent detection heads. Nevertheless, we observe that the proposals generated in step 1 are somewhat offset from the ground truth, arising densely in local neighborhoods with an underlying probability. Challenges arise when a proposal largely forfeits its boundary information due to this coordinate offset, while existing networks lack a corresponding information-compensation mechanism. In this paper, we propose $BADet$ for 3D object detection from point clouds. Specifically, instead of refining each proposal independently as previous works do, we represent each proposal as a node in a graph constructed within a given cut-off threshold, associating proposals in the form of a local neighborhood graph so that the boundary correlations of an object are explicitly exploited. Moreover, we devise a lightweight region feature aggregation module that fully exploits voxel-wise, pixel-wise, and point-wise features with expanding receptive fields for more informative RoI-wise representations. We validate BADet on the widely used KITTI dataset and on the highly challenging nuScenes dataset. As of April 17th, 2021, BADet achieves on-par performance on the KITTI 3D detection leaderboard and ranks $1^{st}$ on the KITTI BEV detection leaderboard for the Car category at Moderate difficulty. Source code is available at https://github.com/rui-qian/badet.
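The graph construction step described above connects proposals whose centers fall within a cut-off distance of each other, so nearby proposals of the same object can exchange boundary information. A minimal sketch over box centers (the function name and cut-off value are illustrative assumptions; the paper operates on full 3D boxes with learned aggregation):

```python
import math

def build_proposal_graph(centers, cutoff=1.0):
    """Local neighborhood graph over proposal box centers.

    Each proposal becomes a node; an undirected edge connects two
    proposals whose centers lie within `cutoff` of each other.
    """
    n = len(centers)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(centers[i], centers[j]) <= cutoff:
                edges.append((i, j))
    return edges

# Three proposals: the first two cluster around the same object.
centers = [(0.0, 0.0, 0.0), (0.4, 0.1, 0.0), (5.0, 5.0, 0.0)]
print(build_proposal_graph(centers, cutoff=1.0))  # [(0, 1)]
```

A graph neural network would then refine each node's features by aggregating over these edges.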
Training semantic segmentation models requires a large amount of finely annotated data, making it hard to quickly adapt to novel classes that do not satisfy this condition. Few-Shot Segmentation (FS-Seg) tackles this problem, but under many constraints. In this paper, we introduce a new benchmark, called Generalized Few-Shot Semantic Segmentation (GFS-Seg), to analyze the ability to simultaneously segment novel categories with very few examples and base categories with sufficient examples. It is the first study to show that previous representative state-of-the-art FS-Seg methods fall short in GFS-Seg, and that the performance discrepancy mainly comes from the constrained setting of FS-Seg. To make GFS-Seg tractable, we set up a GFS-Seg baseline that achieves decent performance without modifications to the original model. Then, since context is essential for semantic segmentation, we propose Context-Aware Prototype Learning (CAPL), which significantly improves performance by 1) leveraging the co-occurrence prior from support samples, and 2) dynamically enriching the classifier with contextual information, conditioned on the content of each query image. Both contributions are experimentally shown to have substantial practical merit. Extensive experiments on PASCAL-VOC and COCO demonstrate the effectiveness of CAPL, and CAPL also generalizes well to FS-Seg by achieving competitive performance. Code will be made publicly available.
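Prototype-based few-shot segmentation, as discussed above, classifies pixels by comparing them to class prototypes computed from the support set via masked average pooling. A minimal sketch of the prototype computation (function name and toy shapes are illustrative assumptions, not the CAPL implementation):

```python
import numpy as np

def class_prototype(features, mask):
    """Masked average pooling: the prototype of a class is the mean of
    support feature vectors at pixels belonging to that class.

    features: (H, W, D) support feature map
    mask:     (H, W) binary mask of the class in the support image
    """
    m = mask.astype(bool)
    return features[m].mean(axis=0)

# Toy 2x2 feature map with D=2; the class occupies the top row.
feats = np.array([[[1.0, 0.0], [3.0, 0.0]],
                  [[9.0, 9.0], [9.0, 9.0]]])
mask = np.array([[1, 1], [0, 0]])
proto = class_prototype(feats, mask)  # mean of [1,0] and [3,0] -> [2,0]
```

CAPL's contribution, per the abstract, is to go beyond such static prototypes by enriching the classifier with contextual information conditioned on each query image.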
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
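The style-aware adaptive transformer described above lets the style code adjust the feed-forward layers' weights. A minimal sketch of a feed-forward block whose first-layer weights are rescaled by a gain vector predicted from the style code (all names, shapes, and the per-channel-gain simplification are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def style_aware_ffn(x, w1, w2, gain):
    """Feed-forward block whose first-layer weights are rescaled per
    hidden channel by `gain`, a vector assumed to be predicted from
    the style code. With gain = 1 this reduces to a plain FFN.
    """
    h = np.maximum(x @ (w1 * gain), 0.0)   # style-modulated hidden layer
    return h @ w2

d, h_dim = 4, 8
x = rng.normal(size=(3, d))              # 3 tokens of dimension d
w1 = rng.normal(size=(d, h_dim))
w2 = rng.normal(size=(h_dim, d))

neutral = style_aware_ffn(x, w1, w2, np.ones(h_dim))
styled = style_aware_ffn(x, w1, w2, rng.uniform(0.5, 1.5, size=h_dim))
# Same input tokens, different outputs: the style gain reshapes the
# layer's behavior without changing its base weights.
```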
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder, to ensure the chronological order of the abstractive summary, we propose to extract event-level attention during the generation process, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also assist extractive summarization, where the extracted summary likewise comes in time order. We augment a previous large-scale Chinese timeline summarization dataset and collect a new English timeline dataset. Extensive experiments on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjoint regulation, raising the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on training-efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train ViT in half the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, DMJD also shows superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
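The disjoint masking idea above can be sketched as shuffling the token indices once and splitting them into non-overlapping chunks, one per view. A minimal sketch, under the simplifying assumption that the views' masked sets fit disjointly within the token budget (function name and parameter values are illustrative):

```python
import random

def disjoint_masks(num_tokens, mask_ratio, num_views):
    """Sample `num_views` masked views whose masked sets are disjoint.

    Tokens are shuffled once and split into consecutive chunks, so each
    view masks `mask_ratio` of the tokens and no token is masked twice,
    raising the fraction of tokens used as reconstruction targets
    across views while keeping the per-view masking rate.
    """
    per_view = int(num_tokens * mask_ratio)
    assert per_view * num_views <= num_tokens, "views must fit disjointly"
    order = list(range(num_tokens))
    random.shuffle(order)
    return [set(order[i * per_view:(i + 1) * per_view])
            for i in range(num_views)]

views = disjoint_masks(num_tokens=16, mask_ratio=0.25, num_views=4)
# Each view masks 4 tokens; the four masked sets partition all 16 tokens,
# so every token serves as a reconstruction target exactly once.
```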
With the fast development of big data, it has become easier than before to learn the optimal decision rule by updating it recursively and making online decisions. We study the online statistical inference of model parameters in a contextual bandit framework of sequential decision-making. We propose a general framework for online and adaptive data collection environments that can update decision rules via weighted stochastic gradient descent. We allow different weighting schemes for the stochastic gradient and establish the asymptotic normality of the parameter estimator. Our proposed estimator significantly improves asymptotic efficiency over the previous averaged SGD approach via inverse probability weights. We also conduct an optimality analysis of the weights in a linear regression setting. We provide a Bahadur representation of the proposed estimator and show that the remainder term in the Bahadur representation entails a slower convergence rate compared to classical SGD, due to the adaptive data collection.
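The weighted SGD update at the heart of the framework replaces the plain gradient step with a per-observation weight, theta <- theta - lr * w_t * g_t. A minimal sketch for one-dimensional least squares with uniform weights (the function name, learning rate, and synthetic data are illustrative assumptions; the paper's weights would come from, e.g., inverse propensities under adaptive data collection):

```python
import random

def weighted_sgd(data, weights, lr=0.1):
    """Weighted stochastic gradient descent for 1-d least squares.

    At step t the update is theta <- theta - lr * w_t * g_t, where g_t
    is the gradient of the squared loss on observation t and w_t is a
    user-supplied weight.
    """
    theta = 0.0
    for (x, y), w in zip(data, weights):
        grad = 2 * (theta * x - y) * x   # d/d theta of (theta*x - y)^2
        theta -= lr * w * grad
    return theta

random.seed(0)
# Synthetic data from y = 2x with small noise; uniform weights here.
xs = [random.uniform(0.5, 1.5) for _ in range(500)]
data = [(x, 2 * x + random.gauss(0, 0.01)) for x in xs]
theta = weighted_sgd(data, weights=[1.0] * 500)  # converges near 2
```

Non-uniform weights change which observations dominate the trajectory, which is precisely the lever the paper analyzes for asymptotic efficiency.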