Given a natural language statement, how can we verify its veracity against a large-scale textual knowledge source like Wikipedia? Most existing neural models make predictions without giving clues about which part of a false claim goes wrong. In this paper, we propose LOREN, an approach for interpretable fact verification. We decompose the verification of the whole claim at phrase level, where the veracity of the phrases serves as explanations and can be aggregated into the final verdict according to logical rules. The key insight of LOREN is to represent claim phrase veracity as three-valued latent variables, which are regularized by aggregation logical rules. The final claim verification is based on all latent variables. Thus, LOREN enjoys the additional benefit of interpretability: it is easy to explain how it reaches certain results with claim phrase veracity. Experiments on a public fact verification benchmark show that LOREN is competitive against previous approaches while enjoying the merit of faithful and accurate interpretability. The resources of LOREN are available at: https://github.com/jiangjiechen/loren.
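The phrase-level aggregation described above can be sketched as a simple rule over three-valued phrase verdicts. This is a minimal illustration, not the paper's actual probabilistic implementation; the exact rule set below is an assumption inferred from the abstract:

```python
# Aggregate three-valued phrase veracities into a claim-level verdict.
# Assumed rules (for illustration): a single refuted phrase refutes the
# claim; the claim is supported only if every phrase is supported;
# otherwise the verdict is "not enough info".
SUPPORTED, REFUTED, NEI = "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"

def aggregate(phrase_verdicts):
    if any(v == REFUTED for v in phrase_verdicts):
        return REFUTED
    if phrase_verdicts and all(v == SUPPORTED for v in phrase_verdicts):
        return SUPPORTED
    return NEI
```

Because the claim verdict is a deterministic function of the phrase verdicts, each phrase's truthfulness directly explains the final result.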
Face forgery detection plays an important role in personal privacy and social security. With the development of generative adversarial models, high-quality forgery images have become increasingly indistinguishable from real images to humans. Existing methods usually regard the forgery detection task as common binary or multi-label classification, and neglect diverse multi-modality forgery image types, e.g., visible-light and near-infrared scenarios. In this paper, we propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD), which can effectively learn a robust patch-based hybrid domain representation to enhance forgery authentication in multi-modality scenarios. The local spatial hybrid domain feature module is designed to explore strong discriminative forgery clues in both the image and frequency domains in locally distinct face regions. Furthermore, a specific hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance. Experimental results on representative multi-modality face forgery datasets demonstrate the superior performance of the proposed HFC-MFFD compared with state-of-the-art algorithms. The source code and models are publicly available at https://github.com/EdWhites/HFC-MFFD.
Attention-based arbitrary style transfer studies have shown promising performance in synthesizing vivid local style details. They typically use the all-to-all attention mechanism: each position of content features is fully matched to all positions of style features. However, all-to-all attention tends to generate distorted style patterns and has quadratic complexity, which limits both the effectiveness and efficiency of arbitrary style transfer. In this paper, we rethink what kind of attention mechanism is more appropriate for arbitrary style transfer. Our answer is a novel all-to-key attention mechanism: each position of content features is matched to key positions of style features. Specifically, it integrates two newly proposed attention forms: distributed and progressive attention. Distributed attention assigns attention to multiple key positions; progressive attention pays attention from coarse to fine. All-to-key attention promotes the matching of diverse and reasonable style patterns and has linear complexity. The resultant module, dubbed StyA2K, exhibits fine properties in rendering reasonable style textures and maintaining consistent local structure. Qualitative and quantitative experiments demonstrate that our method achieves superior results to state-of-the-art approaches.
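The contrast between all-to-all and all-to-key attention can be sketched as restricting each content query to its top-k most similar style positions. This is a toy pure-Python stand-in for the distributed-attention idea, not the paper's actual module; the top-k selection criterion is an assumption:

```python
import math

def all_to_key_attention(queries, keys, values, k=2):
    """Each content position attends only to its k most similar style
    positions (distributed attention), instead of all positions.
    Toy sketch over lists of feature vectors; with k = len(keys) it
    reduces to ordinary all-to-all softmax attention."""
    outputs = []
    for q in queries:
        # Dot-product similarity between the query and every style key.
        scores = [sum(qi * ki for qi, ki in zip(q, key)) for key in keys]
        # Keep only the k highest-scoring style positions.
        top = sorted(range(len(keys)), key=lambda i: scores[i], reverse=True)[:k]
        # Softmax over the selected positions only.
        exps = [math.exp(scores[i]) for i in top]
        z = sum(exps)
        out = [0.0] * len(values[0])
        for w, i in zip(exps, top):
            for d in range(len(out)):
                out[d] += (w / z) * values[i][d]
        outputs.append(out)
    return outputs
```

Since each query touches only k style positions instead of all of them, the cost grows linearly with the number of content positions for fixed k, matching the linear-complexity claim in the abstract.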
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard. SuperGLUE is more challenging than the widely used general language understanding evaluation (GLUE) benchmark, containing eight difficult language understanding tasks, including question answering, natural language inference, word sense disambiguation, coreference resolution, and reasoning. [Method] Instead of arbitrarily increasing the size of a pretrained language model (PLM), our aim is to 1) fully extract knowledge from the input pretraining data given a certain parameter budget, e.g., 6B, and 2) effectively transfer this knowledge to downstream tasks. To achieve goal 1), we propose self-evolution learning for PLMs to wisely predict the informative tokens that should be masked, and supervise the masked language modeling (MLM) process with rectified smooth labels. For goal 2), we leverage the prompt transfer technique to improve the low-resource tasks by transferring the knowledge from the foundation model and related downstream tasks to the target task. [Results] According to our submission record (Oct. 2022), with our optimized pretraining and fine-tuning strategies, our 6B Vega method achieved new state-of-the-art performance on 4/8 tasks, sitting atop the SuperGLUE leaderboard on Oct. 8, 2022, with an average score of 91.3.
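The self-evolution masking step, choosing informative tokens to mask rather than masking uniformly at random, can be illustrated with a small selection routine. This is a simplified sketch, not Vega's actual procedure; using the model's per-token difficulty score (e.g., its previous loss on each token) as the informativeness signal is an assumption:

```python
def select_mask_positions(token_scores, mask_ratio=0.15):
    """Pick positions of the most 'informative' tokens to mask for MLM,
    where token_scores are per-token difficulty scores (assumed here to
    be the model's earlier loss on each token). Simplified stand-in for
    the paper's self-evolution masking strategy."""
    n_mask = max(1, int(len(token_scores) * mask_ratio))
    # Rank positions by descending difficulty and keep the hardest ones.
    ranked = sorted(range(len(token_scores)),
                    key=lambda i: token_scores[i], reverse=True)
    return sorted(ranked[:n_mask])
```

In the paper's setting, the MLM objective on these positions would additionally be supervised with rectified smooth labels rather than hard one-hot targets.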
Unsupervised person re-identification (ReID) aims at learning discriminative identity features for person retrieval without any annotations. Recent advances accomplish this task by leveraging clustering-based pseudo labels, but these pseudo labels are inevitably noisy, which deteriorates model performance. In this paper, we propose a Neighbour Consistency guided Pseudo Label Refinement (NCPLR) framework, which can be regarded as a transductive form of label propagation under the assumption that the prediction of each example should be similar to its nearest neighbours'. Specifically, the refined label for each training instance can be obtained by the original clustering result and a weighted ensemble of its neighbours' predictions, with weights determined according to their similarities in the feature space. In addition, we consider clustering-based unsupervised person ReID as a label-noise learning problem. We then propose an explicit neighbour consistency regularization to reduce model susceptibility to over-fitting while improving training stability. The NCPLR method is simple yet effective, and can be seamlessly integrated into existing clustering-based unsupervised algorithms. Extensive experimental results on five ReID datasets demonstrate the effectiveness of the proposed method, showing superior performance over state-of-the-art methods by a large margin.
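The refinement rule, combining the clustering one-hot label with a similarity-weighted ensemble of neighbours' predictions, can be sketched directly. This is a minimal illustration of the rule stated in the abstract; the mixing coefficient `alpha` and softmax weighting over similarities are illustrative assumptions:

```python
import math

def refine_label(cluster_onehot, neighbour_preds, neighbour_sims, alpha=0.5):
    """Refined pseudo label = alpha * clustering one-hot label
    + (1 - alpha) * similarity-weighted ensemble of the instance's
    nearest neighbours' class predictions. Weights are a softmax over
    feature-space similarities (an assumed choice)."""
    exps = [math.exp(s) for s in neighbour_sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    n_cls = len(cluster_onehot)
    ensemble = [sum(w * p[c] for w, p in zip(weights, neighbour_preds))
                for c in range(n_cls)]
    return [alpha * cluster_onehot[c] + (1 - alpha) * ensemble[c]
            for c in range(n_cls)]
```

The refined label stays a valid distribution whenever the neighbour predictions are, which is what makes it usable as a soft training target in place of the noisy one-hot cluster label.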
Network traffic classification is the basis of many network security applications and has attracted considerable attention in the field of cyberspace security. Existing network traffic classification methods based on convolutional neural networks (CNNs) often emphasize local patterns of traffic data while ignoring global information associations. In this paper, we propose an MLP-Mixer based multi-view multi-label neural network for network traffic classification. Compared with existing CNN-based methods, our method adopts the MLP-Mixer structure, which is more in line with the structure of the packet than the conventional convolution operation. In our method, each packet is divided into the packet header and the packet body, which, together with the flow features of the packet, serve as inputs from different views. We utilize a multi-label setting to learn different scenarios simultaneously, improving classification performance by exploiting the correlations between different scenarios. Taking advantage of the above characteristics, we propose an end-to-end network traffic classification method. We conduct experiments on three public datasets, and the experimental results show that our method achieves superior performance.
Single-image dehazing methods based on deep convolutional neural networks (CNNs) have achieved significant success. Previous methods are devoted to improving the network's performance by increasing its depth and width. Current methods focus on increasing the convolutional kernel size to enhance performance by benefiting from a larger receptive field. However, directly increasing the convolutional kernel size introduces massive computational overhead and parameters. Therefore, this paper designs a novel Large Kernel Convolution Dehaze Block (LKD Block), consisting of a Decomposition deep Large Kernel Convolution Block (DLKCB) and a Channel Enhanced Feed-forward Network (CEFN). The designed DLKCB can split a depth-wise large kernel convolution into a smaller depth-wise convolution and a depth-wise dilated convolution without introducing massive parameters and computational overhead. Meanwhile, the designed CEFN incorporates a channel attention mechanism into the feed-forward network to exploit important channels and enhance robustness. By combining multiple LKD Blocks and up-down sampling modules, a Large Kernel convolution Dehaze Network (LKD-Net) is built. The evaluation results demonstrate the effectiveness of the designed DLKCB and CEFN, and our LKD-Net outperforms the state-of-the-art. On the SOTS indoor dataset, our LKD-Net dramatically outperforms the transformer-based method Dehamer with only 1.79% #Param and 48.9% FLOPs. The source code of our LKD-Net is available at https://github.com/swu-cs-medialab/lkd-net.
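The parameter saving from decomposing a depth-wise large kernel convolution can be checked with simple arithmetic. The kernel sizes below (21x21 replaced by a 5x5 depth-wise conv plus a 7x7 depth-wise conv with dilation 3) are illustrative assumptions, not necessarily LKD-Net's actual configuration:

```python
def dw_params(k):
    """Per-channel parameter count of a k x k depth-wise convolution."""
    return k * k

def dilated_rf(k, d):
    """Effective receptive field of a k x k convolution with dilation d."""
    return d * (k - 1) + 1

# Illustrative decomposition (sizes assumed): replace one 21x21
# depth-wise kernel with a 5x5 depth-wise conv followed by a 7x7
# depth-wise conv with dilation 3.
large = dw_params(21)                # 441 params per channel
small = dw_params(5) + dw_params(7)  # 74 params per channel, ~6x fewer
# Stacked receptive field: a 5x5 conv followed by a 19x19 effective
# dilated conv covers a (5 + 19 - 1) = 23x23 region, i.e. at least
# as large as the original 21x21 kernel.
rf = 5 + (dilated_rf(7, 3) - 1)
```

This is the standard trade-off behind such decompositions: the receptive field of the single large kernel is preserved (or exceeded) while the parameter and FLOP cost drops by a large constant factor.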
Deep neural networks (DNNs) are found to be vulnerable to adversarial noise: they are often misled by adversarial samples into making wrong predictions. To alleviate this, in this paper we study the dependence between the outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method. Specifically, we first measure the dependence by estimating the mutual information (MI) between the outputs and the natural patterns of inputs (called natural MI), and the MI between the outputs and the adversarial patterns of inputs (called adversarial MI), respectively. We find that adversarial samples usually have larger adversarial MI and smaller natural MI compared with natural samples. Motivated by this observation, we propose to enhance adversarial robustness by maximizing the natural MI and minimizing the adversarial MI during the training process. In this way, the target model is expected to pay more attention to the natural patterns that contain the objective semantics. Empirical evaluations demonstrate that our method can effectively improve adversarial accuracy against multiple attacks.
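The defense hinges on estimating mutual information between model outputs and the natural versus adversarial patterns of the input. For intuition, here is a toy plug-in MI estimator for paired discrete samples; a real implementation of the method would need a neural MI estimator over continuous features, which this is not:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in nats from paired discrete samples:
    I(X; Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) * p(y)) ),
    with all probabilities taken as empirical frequencies."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

Identical variables give MI equal to their entropy, while independent variables give MI near zero, the qualitative gap the training objective exploits when pushing natural MI up and adversarial MI down.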
Visual Emotion Analysis (VEA), which aims to predict people's emotions towards different visual stimuli, has recently become an attractive research topic. Rather than a single-label classification task, it is more rational to regard VEA as a Label Distribution Learning (LDL) problem with votes from different individuals. Existing methods often predict visual emotion distributions in a unified network, neglecting the inherent subjectivity in the crowd voting process. In psychology, the object-appraisal-emotion model has demonstrated that each individual's emotion is affected by his or her subjective appraisal, which is further formed by affective memory. Inspired by this, we propose a novel Subjectivity Appraise-and-Match Network (SAMNet) to investigate the subjectivity in visual emotion distributions. To depict the diversity in the crowd voting process, we first propose Subjectivity Appraising with multiple branches, where each branch simulates the emotion evocation process of a specific individual. Specifically, we construct affective memory with an attention-based mechanism to preserve each individual's unique emotional experience. A subjectivity loss is further proposed to guarantee the divergence between different individuals. Moreover, we propose Subjectivity Matching, which aims at assigning unordered emotion labels to individual predictions in a one-to-one correspondence with the Hungarian algorithm. Extensive experiments and comparisons are conducted on public visual emotion distribution datasets, and the results demonstrate that the proposed SAMNet consistently outperforms state-of-the-art methods. Ablation studies verify the effectiveness of our method, and visualizations prove its interpretability.
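The Subjectivity Matching step is a one-to-one assignment of unordered emotion labels to individual branch predictions. A practical system would use the O(n^3) Hungarian algorithm as the abstract states; the brute-force search below is only a tiny illustration of the same objective for a handful of branches, with the cost matrix assumed to be given:

```python
from itertools import permutations

def best_assignment(cost):
    """Find the one-to-one assignment of labels (rows) to predictions
    (columns) minimizing total cost. Brute force over permutations --
    fine for a few branches, but a real implementation would use the
    Hungarian algorithm for the same optimum in polynomial time."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return list(best_perm), best
```

Here `cost[i][j]` would measure how poorly branch j's prediction fits emotion label i (e.g., a distance between the predicted distribution and the label), so the optimal permutation matches each simulated individual to the vote it best explains.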
Face attribute evaluation plays an important role in video surveillance and face analysis. Although methods based on convolutional neural networks have made great progress, they inevitably deal with only one local neighbourhood at a time. Moreover, existing methods mostly regard face attribute evaluation as an individual multi-label classification task, ignoring the inherent relationship between semantic attributes and face identity information. In this paper, we propose a novel transformer-based face attribute evaluation method (TransFA), which can effectively enhance attribute-discriminative representation learning in the context of the attention mechanism. A multi-branch transformer is employed to explore the inter-correlations between different attributes in similar semantic regions for attribute feature learning. In particular, a hierarchical identity-constraint attribute loss is designed to train the end-to-end architecture, further integrating face identity discriminative information to boost performance. Experimental results on multiple face attribute benchmarks demonstrate that the proposed TransFA achieves superior performance compared with state-of-the-art methods.