Tensor factorization has received increasing interest due to its intrinsic ability to capture latent factors in multi-dimensional data, with many applications such as recommender systems and Electronic Health Record (EHR) mining. PARAFAC2 and its variants have been proposed to handle irregular tensors where one tensor mode is not aligned, e.g., different users in recommender systems or different patients in EHRs may have varying numbers of records. PARAFAC2 has been successfully applied to EHRs for extracting meaningful medical concepts (phenotypes). Despite recent advances, the predictability and interpretability of current models are not satisfactory, which limits their utility for downstream analyses. In this paper, we propose MULTIPAR, a supervised irregular tensor factorization with multi-task learning. MULTIPAR can flexibly incorporate both static (e.g., in-hospital mortality prediction) and continuous or dynamic (e.g., the need for ventilation) tasks. By supervising the tensor factorization with downstream prediction tasks and leveraging information from multiple related prediction tasks, MULTIPAR yields not only more meaningful phenotypes but also better predictive performance for downstream tasks. We conduct extensive experiments on two real-world EHR datasets to demonstrate that MULTIPAR is scalable and achieves better tensor fit, more meaningful subgroups, and stronger predictive performance than existing state-of-the-art methods.
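To make the "supervised factorization" idea concrete, here is a minimal sketch (not the authors' MULTIPAR implementation) of jointly fitting a PARAFAC2-style factorization of an irregular tensor and a downstream binary prediction head with plain autograd; the shapes, the 0.5 loss weight, and the omission of PARAFAC2's coupling constraints on the per-patient factors are assumptions made for illustration.

```python
# Minimal sketch: reconstruction loss on an irregular tensor plus a supervised
# prediction loss on the per-patient factor S, optimized jointly.
import torch

torch.manual_seed(0)
R, J, K = 4, 20, 30                            # rank, feature dim, number of patients
lengths = torch.randint(5, 15, (K,))           # irregular mode: per-patient record counts
X = [torch.rand(int(n), J) for n in lengths]   # X_k: records x features
y = torch.randint(0, 2, (K,)).float()          # e.g., in-hospital mortality label

V = torch.randn(J, R, requires_grad=True)      # shared feature factor
S = torch.randn(K, R, requires_grad=True)      # per-patient weights (phenotype loadings)
U = [torch.randn(int(n), R, requires_grad=True) for n in lengths]
w = torch.randn(R, requires_grad=True)         # simple prediction head on S_k

opt = torch.optim.Adam([V, S, w] + U, lr=1e-2)
for step in range(500):
    opt.zero_grad()
    recon = sum(((U[k] * S[k]) @ V.T - X[k]).pow(2).mean() for k in range(K))
    logits = S @ w                             # supervise the factorization with the task
    pred = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    loss = recon + 0.5 * pred                  # 0.5 is an assumed task weight
    loss.backward()
    opt.step()
```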
Crowd counting is a regression task that estimates the number of people in a scene image, and it plays a vital role in a range of safety-critical applications such as video surveillance, traffic monitoring, and flow control. In this paper, we investigate the vulnerability of deep-learning-based crowd counting models to backdoor attacks, a major security threat to deep learning. A backdoor attacker implants a backdoor trigger into the target model via data poisoning so as to control the model's predictions at test time. Unlike image classification models, on which most existing backdoor attacks have been developed and tested, crowd counting models are regression models that output multi-dimensional density maps and thus require different manipulation techniques. In this paper, we propose two novel Density Manipulation Backdoor Attacks (DMBA- and DMBA+) that force the attacked model to produce arbitrarily large or small density estimates. Experimental results demonstrate the effectiveness of our DMBA attacks on five classic crowd counting models and four types of datasets. We also provide an in-depth analysis of the unique challenges of backdooring crowd counting models and reveal two key ingredients of effective attacks: 1) full and dense triggers and 2) manipulation of the ground-truth counts or density maps. Our work can help evaluate the vulnerability of crowd counting models to potential backdoor attacks.
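As a rough illustration of what density-manipulation poisoning could look like (not the paper's code), the sketch below stamps a dense trigger patch onto an image and rescales its ground-truth density map; the patch size, position, and scaling factor are arbitrary assumptions.

```python
# Poison one (image, density map) training pair: plant a trigger and rescale
# the ground truth so the model learns an arbitrarily large (or small) count.
import numpy as np

def poison_sample(image, density, trigger, scale):
    """image: HxWx3 float array, density: HxW ground-truth density map."""
    img = image.copy()
    den = density.copy()
    ph, pw = trigger.shape[:2]
    img[:ph, :pw] = trigger                  # plant the trigger in the top-left corner
    den *= scale                             # manipulate the ground-truth density/count
    return img, den

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))
density = rng.random((256, 256)) * 0.01      # toy density map; its sum ~= crowd count
trigger = np.ones((16, 16, 3))               # a complete, dense trigger patch
poisoned_img, poisoned_den = poison_sample(image, density, trigger, scale=10.0)
print(density.sum(), poisoned_den.sum())     # count is amplified for poisoned samples
```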
Natural language video localization (NLVL) is an important task in the vision-language understanding area, which requires not only a deep understanding of the computer vision and natural language sides separately but, more importantly, of the interaction between the two sides. Adversarial vulnerability is well recognized as a critical security issue of deep neural network models and warrants careful investigation. Despite extensive yet separate research on video and language tasks, the adversarial robustness of vision-language joint tasks such as NLVL is still poorly understood. This paper therefore aims to comprehensively investigate the adversarial robustness of NLVL models by examining three facets of vulnerability from both the attack and defense perspectives. To achieve the attack goal, we propose a new adversarial attack paradigm, a synonymous-sentence-aware adversarial attack, which captures the cross-modal interaction between the vision and language sides.
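A minimal, hypothetical sketch of the query-side idea: among semantically equivalent (synonymous) rephrasings of the query, keep the one that most degrades temporal localization. The `localize` callable stands in for any NLVL model and the paraphrase list is assumed to come from an external paraphrase generator; this is not the paper's actual attack.

```python
# Black-box, query-side attack loop over synonymous rephrasings.
def temporal_iou(a, b):
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def attack(video, query, paraphrases, localize, gt_span):
    # Keep the semantically equivalent query that degrades localization most.
    worst_q, worst_iou = query, temporal_iou(localize(video, query), gt_span)
    for q in paraphrases:
        iou = temporal_iou(localize(video, q), gt_span)
        if iou < worst_iou:
            worst_q, worst_iou = q, iou
    return worst_q, worst_iou

# Toy demo with a dummy (hypothetical) localizer that is fooled by the word "clip".
dummy = lambda video, q: (2.0, 5.0) if "clip" in q else (2.0, 8.0)
print(attack(None, "the man opens the door", ["the man opens the door clip"], dummy, (2.0, 8.0)))
```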
As an important perceptual property of the human visual system (HVS), the just noticeable difference (JND) has been studied for decades in image and video processing (e.g., perceptual visual signal compression). However, the JND for deep machine vision (DMV) has been little explored, despite the great progress DMV has made in many machine vision tasks. In this paper, we make an initial attempt and demonstrate that DMV does have a JND, termed DMV-JND. We then propose a JND model for the image classification task in DMV. It is found that DMV can tolerate distorted images, generated by unsupervised learning with the proposed DMV-JND-NET, whose average PSNR is only 9.56 dB (the lower, the better). In particular, a semantic-guided redundancy assessment strategy is designed to constrain the magnitude and spatial distribution of the DMV-JND. Experimental results on image classification show that we have successfully found the JND for deep machine vision. Our DMV-JND points to possible directions for DMV-oriented image and video compression, watermarking, quality assessment, deep neural network security, and so on.
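To illustrate what measuring a "machine-vision JND" might look like, here is a toy sketch that binary-searches the largest additive-noise magnitude a classifier tolerates without changing its prediction; the random-noise search and the stand-in linear classifier are assumptions, whereas the paper instead learns the distortion with an unsupervised DMV-JND-NET.

```python
# Probe the largest noise scale that leaves the model's prediction unchanged.
import torch, torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()  # stand-in DMV

def jnd_magnitude(x, steps=20):
    with torch.no_grad():
        label = model(x).argmax(1)
        lo, hi = 0.0, 1.0
        noise = torch.randn_like(x)
        for _ in range(steps):                 # binary search on the noise scale
            mid = (lo + hi) / 2
            pred = model((x + mid * noise).clamp(0, 1)).argmax(1)
            if (pred == label).all():
                lo = mid                       # still tolerated: push further
            else:
                hi = mid
    return lo

x = torch.rand(1, 3, 32, 32)
print(jnd_magnitude(x))
```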
This paper focuses on designing efficient models with low parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Massive experiments on ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models while trading off model accuracy and efficiency well.
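The sketch below is a rough PyTorch approximation (not the official iRMB) of the stated idea: an inverted-residual layout whose expanded stage mixes a depthwise convolution for short-distance dependency with lightweight self-attention for long-distance interaction.

```python
# An iRMB-like block: expand -> depthwise conv (local) + self-attention (global) -> project.
import torch, torch.nn as nn

class IRMBLike(nn.Module):
    def __init__(self, dim, expand=2, heads=4):
        super().__init__()
        hidden = dim * expand
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)   # local mixing
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)  # global mixing
        self.project = nn.Conv2d(hidden, dim, 1)
        self.act = nn.GELU()

    def forward(self, x):
        b, c, h, w = x.shape
        z = self.act(self.expand(x))
        z = self.dw(z)
        t = z.flatten(2).transpose(1, 2)            # B x HW x C tokens
        t, _ = self.attn(t, t, t)
        z = z + t.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(z)                   # inverted residual connection

x = torch.rand(2, 32, 16, 16)
print(IRMBLike(32)(x).shape)
```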
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extract the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we perform a thorough evaluation of MGTAB and other public datasets. Our experiments show that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and suggest potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
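As a hypothetical usage sketch (random placeholder tensors, not the actual MGTAB files or loaders), the snippet below performs relation-wise mean aggregation over a multi-relational edge list and concatenates the per-relation views with the raw user features, illustrating how multiple relation types can feed a graph-based detector.

```python
# Relation-wise mean aggregation over 7 relation types, then feature concatenation.
import torch

num_users, feat_dim, num_relations = 100, 20, 7
x = torch.rand(num_users, feat_dim)                       # user property + tweet features
edges = [torch.randint(0, num_users, (2, 300)) for _ in range(num_relations)]

agg = []
for ei in edges:                                          # one pass per relation type
    out = torch.zeros(num_users, feat_dim)
    deg = torch.zeros(num_users, 1)
    out.index_add_(0, ei[1], x[ei[0]])                    # sum neighbor features
    deg.index_add_(0, ei[1], torch.ones(ei.shape[1], 1))
    agg.append(out / deg.clamp(min=1))                    # mean per relation
h = torch.cat([x] + agg, dim=1)                           # concat self + 7 relation views
print(h.shape)                                            # feed h to any classifier
```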
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
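A toy illustration (not ReGrouP itself) of the general recipe: retrieve relevant formulaic knowledge from a small bank by term overlap and prepend it to the question before SQL parsing. The bank entries and the downstream `parse_to_sql` hook mentioned in the comment are hypothetical.

```python
# Retrieve formulaic knowledge by keyword overlap and augment the parser input.
knowledge_bank = {
    "gross profit margin": "gross_profit_margin = (revenue - cost) / revenue",
    "year over year growth": "yoy_growth = (value_t - value_t_minus_1) / value_t_minus_1",
}

def retrieve(question, bank, top_k=1):
    scored = [(sum(w in question.lower() for w in key.split()), formula)
              for key, formula in bank.items()]
    return [f for s, f in sorted(scored, reverse=True)[:top_k] if s > 0]

def augment(question, bank):
    formulas = retrieve(question, bank)
    return "\n".join(formulas + [question])   # hand the augmented input to parse_to_sql

print(augment("What is the gross profit margin of each product in 2021?", knowledge_bank))
```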
Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning technology. However, existing methods usually incur a large computational cost, and the activation function causes some features of the intermediate layers to be lost. It is therefore a challenge to make the model lightweight while reducing the impact of intermediate feature loss on reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate this problem. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature shuffle Weighted Group (FSWG) through mutual information mixing and fusion. In addition, to mitigate the adverse effects of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in WDIB, and effectively cross-fuse features of different granularities through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in the CNN model, we introduce the Transformer into our model and explore a new way of combining the CNN and the Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN achieves a good balance between performance and efficiency and is better suited to downstream tasks that address problems in low-resolution scenarios.
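As a guess at the residual-weighting idea (not the actual WCRW/WIRW units), the sketch below blends a widened convolutional branch with the identity path through learnable scalar weights, so information removed by the activation can be partially recovered from the skip path.

```python
# A wide conv branch combined with the identity path via learnable weights.
import torch, torch.nn as nn

class WeightedResidual(nn.Module):
    def __init__(self, dim, width=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim * width, 3, padding=1),    # widen before the activation
            nn.ReLU(inplace=True),
            nn.Conv2d(dim * width, dim, 3, padding=1),
        )
        self.res_w = nn.Parameter(torch.ones(1))          # learnable residual weight
        self.skip_w = nn.Parameter(torch.ones(1))         # learnable identity weight

    def forward(self, x):
        return self.skip_w * x + self.res_w * self.body(x)

print(WeightedResidual(16)(torch.rand(1, 16, 24, 24)).shape)
```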
Implicit regularization is an important way to interpret neural networks. Recent theory has begun to explain implicit regularization with the model of deep matrix factorization (DMF) and to analyze the trajectory of discrete gradient dynamics in the optimization process. The step sizes in these discrete gradient dynamics are small but not infinitesimal, and thus fit well with the practical implementation of neural networks. Currently, discrete gradient dynamics analysis has been successfully applied to shallow networks but encounters the difficulty of complex computation for deep networks. In this work, we introduce another discrete gradient dynamics approach to explain implicit regularization, namely landscape analysis. It mainly focuses on special regions of the loss landscape, such as saddle points and local minima. We theoretically establish the connection between saddle point escaping (SPE) stages and the matrix rank in DMF. We prove that, for a rank-R matrix reconstruction, DMF will converge to a second-order critical point after R stages of SPE. This conclusion is further verified experimentally on a low-rank matrix reconstruction problem. This work provides a new theory for analyzing implicit regularization in deep learning.
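For readers unfamiliar with the DMF setting, the following compact restatement (with assumed notation for the measurement operator and loss) spells out the model, the discrete gradient dynamics, and the rank-R claim referenced above.

```latex
% Deep matrix factorization: the end-to-end matrix is an L-layer product,
% trained by discrete (non-infinitesimal) gradient descent; the claim is that
% rank-R reconstruction is reached after R saddle-point-escaping (SPE) stages.
\[
  W \;=\; W_L W_{L-1} \cdots W_1, \qquad
  \min_{W_1,\dots,W_L}\; \mathcal{L}(W) \;=\; \tfrac{1}{2}\,\bigl\|\mathcal{A}(W) - b\bigr\|_2^2,
\]
\[
  W_i^{(t+1)} \;=\; W_i^{(t)} - \eta\, \nabla_{W_i}\mathcal{L}\bigl(W^{(t)}\bigr),
  \qquad \eta \ \text{small but not infinitesimal},
\]
\[
  \text{after } R \text{ SPE stages:}\quad
  \operatorname{rank}\bigl(W^{(t)}\bigr) \approx R, \qquad
  W^{(t)} \ \text{approaches a second-order critical point of } \mathcal{L}.
\]
```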
In this paper, we introduce a novel variation of model-agnostic meta-learning, in which an extra multiplicative parameter is introduced in the inner-loop adaptation. Our variation creates a shortcut in the parameter space for the inner-loop adaptation and increases model expressivity in a highly controllable manner. We show both theoretically and numerically that our variation alleviates the problem of conflicting gradients and improves training dynamics. We conduct experiments on three distinct problems, including a toy classification problem for threshold comparison, a regression problem for wavelet transform, and a classification problem on MNIST. We also discuss ways to generalize our method to a broader class of problems.
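A schematic sketch of the modification described above, assuming the multiplicative parameter acts elementwise on the initialization during the inner-loop step (theta' = m * theta - lr * grad); the full bi-level outer loop over tasks is omitted, and the tiny linear-regression task is a placeholder.

```python
# One inner-loop adaptation step with an extra multiplicative parameter m,
# differentiable with respect to both the initialization theta and m.
import torch

torch.manual_seed(0)
theta = torch.randn(5, requires_grad=True)        # meta-learned initialization
m = torch.ones(5, requires_grad=True)             # extra multiplicative parameter
x, w_true = torch.randn(32, 5), torch.randn(5)
y = x @ w_true

def inner_adapt(theta, m, lr=0.1):
    loss = ((x @ theta - y) ** 2).mean()
    grad, = torch.autograd.grad(loss, theta, create_graph=True)
    return m * theta - lr * grad                  # shortcut in parameter space via m

theta_prime = inner_adapt(theta, m)
outer_loss = ((x @ theta_prime - y) ** 2).mean()  # would be a held-out task in MAML
outer_loss.backward()                             # gradients flow to both theta and m
print(theta.grad.shape, m.grad.shape)
```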