Adversarial robustness assessment for video recognition models has raised concerns owing to their wide application in safety-critical tasks. Compared with images, videos have much higher dimensionality, which brings huge computational costs when generating adversarial videos. This is especially serious for query-based black-box attacks, where gradient estimation for the threat model is usually employed and the high dimensionality leads to a large number of queries. To mitigate this issue, we propose to simultaneously eliminate the temporal and spatial redundancy within a video, achieving effective and efficient gradient estimation on a reduced search space and thus decreasing the number of queries. To implement this idea, we design the novel Adversarial spatial-temporal Focus (AstFocus) attack on videos, which performs attacks on key frames and key regions that are simultaneously focused from the inter-frames and intra-frames of the video. The AstFocus attack is based on a cooperative Multi-Agent Reinforcement Learning (MARL) framework: one agent is responsible for selecting key frames and another for selecting key regions, and the two agents are jointly trained with common rewards received from the black-box threat model to make cooperative predictions. Through continuous querying, the reduced search space composed of key frames and key regions becomes more precise, and the total number of queries becomes smaller than that required on the original video. Extensive experiments on four mainstream video recognition models and three widely used action recognition datasets demonstrate that the proposed AstFocus attack outperforms SOTA methods simultaneously in fooling rate, query number, time, and perturbation magnitude.
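To make the efficiency argument concrete, below is a minimal sketch (not the authors' implementation) of zeroth-order gradient estimation restricted to a few key frames and a single key region; the NES-style estimator, the black-box score function, the chosen frames/box, and the sample count are all hypothetical placeholders.

```python
# Hedged sketch: a black-box gradient is estimated only on selected key frames /
# key regions, so each query perturbs a much smaller sub-space of the video.
import numpy as np

def black_box_score(video):                       # stand-in for querying the threat model
    return -np.linalg.norm(video)                 # hypothetical scalar score

def estimate_gradient(video, key_frames, key_box, sigma=1e-2, n_samples=10):
    y0, y1, x0, x1 = key_box                      # one key region for simplicity
    T, H, W, C = video.shape
    grad = np.zeros_like(video)
    for _ in range(n_samples):                    # each sample costs extra queries
        noise = np.zeros_like(video)
        noise[key_frames, y0:y1, x0:x1, :] = np.random.randn(
            len(key_frames), y1 - y0, x1 - x0, C)
        delta = black_box_score(video + sigma * noise) - black_box_score(video)
        grad += (delta / sigma) * noise           # zeroth-order (NES-style) estimate
    return grad / n_samples                       # non-zero only on the focused sub-space

video = np.random.rand(16, 112, 112, 3).astype(np.float32)
g = estimate_gradient(video, key_frames=[2, 7, 11], key_box=(30, 80, 30, 80))
print(g.shape, np.count_nonzero(g) / g.size)      # gradient confined to key frames/regions
```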
The input and output of most text generation tasks can be transformed into two sequences of tokens, which can be modeled using sequence-to-sequence learning tools such as Transformers. These models are usually trained by maximizing the likelihood of the output text sequence and assume that the input sequence and all gold preceding tokens are given during training, whereas during inference the model suffers from the exposure bias problem (i.e., it only has access to its previously predicted tokens rather than gold tokens during beam search). In this paper, we propose MoCa (Momentum Calibration) for text generation. MoCa is an online method that dynamically generates slowly evolving (but consistent) samples using a momentum moving-average generator with beam search, and it learns to align the model scores of these samples with their actual qualities. Experiments on four text generation datasets (i.e., CNN/DailyMail, XSum, SAMSum, and Gigaword) show that MoCa consistently improves strong pre-trained Transformers over vanilla fine-tuning, and we achieve state-of-the-art results on the CNN/DailyMail and SAMSum datasets.
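A minimal sketch (not the released MoCa code) of the two ingredients named above: a slowly evolving momentum (EMA) copy of the generator that produces samples, and a calibration loss that pushes the online model's scores to agree with the samples' quality ranking. The quality metric, margin, and toy tensors are assumptions for illustration.

```python
import torch

def momentum_update(online, momentum, beta=0.999):
    # the momentum generator drifts slowly toward the online model ("slowly evolving but consistent")
    with torch.no_grad():
        for p_m, p_o in zip(momentum.parameters(), online.parameters()):
            p_m.mul_(beta).add_(p_o, alpha=1 - beta)

def calibration_loss(scores, qualities, margin=0.01):
    # scores: model log-probabilities of beam candidates; qualities: e.g. ROUGE of each candidate
    order = torch.argsort(qualities, descending=True)
    s = scores[order]                               # scores re-ordered from best to worst quality
    loss = 0.0
    for i in range(len(s) - 1):
        for j in range(i + 1, len(s)):
            # a better candidate should out-score a worse one by a rank-dependent margin
            loss = loss + torch.clamp(margin * (j - i) - (s[i] - s[j]), min=0)
    return loss

online, momentum_gen = torch.nn.Linear(4, 4), torch.nn.Linear(4, 4)
momentum_update(online, momentum_gen)
scores = torch.tensor([-1.2, -0.8, -1.5], requires_grad=True)
qualities = torch.tensor([0.42, 0.35, 0.48])
print(calibration_loss(scores, qualities))
```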
Prompts with different control signals (e.g., length, keywords, etc.) can be used to control text summarization. When control signals are available, they can control the properties of the generated summaries and potentially improve summarization quality (since more information is given). Unfortunately, control signals are not available during inference. In this paper, we propose Lotus (shorthand for Latent Prompt Tuning for Summarization), a single model that can be applied in both controlled and uncontrolled (without control signals) modes. During training, Lotus learns latent prompt representations from prompts with gold control signals using a contrastive learning objective. Experiments show that Lotus in uncontrolled mode consistently improves upon strong (uncontrollable) summarization models across four different summarization datasets. We also demonstrate that generated summaries can be controlled using prompts with user-specified control tokens.
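As a hedged illustration of the training idea only (not the paper's implementation), the sketch below pulls a learnable latent prompt toward the encoding of the matching gold-control prompt and pushes it away from other documents' prompts via an InfoNCE-style contrastive loss; the dimensions, temperature, and the per-document latent prompts are placeholders.

```python
import torch
import torch.nn.functional as F

batch, dim = 8, 256
latent_prompt = torch.nn.Parameter(torch.randn(batch, dim))   # latent prompts used in uncontrolled mode
gold_prompt_emb = torch.randn(batch, dim)                      # encoder output of prompts with gold signals

z = F.normalize(latent_prompt, dim=-1)
g = F.normalize(gold_prompt_emb, dim=-1)
logits = z @ g.t() / 0.07                                      # similarity of every latent/gold pair
labels = torch.arange(batch)                                   # positives sit on the diagonal
contrastive_loss = F.cross_entropy(logits, labels)
print(contrastive_loss.item())
```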
In recent years, integrating multispectral data in object detection, especially visible and infrared images, has received wide attention. Since visible (RGB) and infrared (IR) images provide complementary information to handle light variations, paired images are used in many fields, such as multispectral pedestrian detection, RGB-IR crowd counting, and RGB-IR salient object detection. Compared with natural RGB-IR images, we find that detection in aerial RGB-IR images suffers from cross-modal weak misalignment problems, which are manifested in the position, size, and angle deviations of the same object. In this paper, we mainly address the challenge of cross-modal weak misalignment in aerial RGB-IR images. Specifically, we first explain and analyze the cause of the weak misalignment problem. Then, we propose a Translation-Scale-Rotation Alignment (TSRA) module to address the problem by calibrating the feature maps of the two modalities. The module predicts the deviation between objects of the two modalities through an alignment process and utilizes a Modality-Selection (MS) strategy to improve the alignment performance. Finally, a Two-Stream Feature Alignment Detector (TSFADet) based on the TSRA module is constructed for RGB-IR object detection in aerial images. Through comprehensive experiments on a public drone dataset, we verify that our method reduces the effect of cross-modal misalignment and achieves robust detection results.
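Below is a simplified sketch of the calibration idea (assumptions, not the TSFADet code): predict the translation/scale/rotation deviation between the RGB and IR feature maps and warp the IR features to compensate. The layer sizes are placeholders, and the deviation is predicted globally here rather than per object proposal.

```python
import torch
import torch.nn.functional as F

class TSRABlock(torch.nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.offset_head = torch.nn.Sequential(
            torch.nn.Conv2d(2 * channels, channels, 3, padding=1), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
            torch.nn.Linear(channels, 4))         # (dx, dy, log-scale, angle)

    def forward(self, rgb_feat, ir_feat):
        p = self.offset_head(torch.cat([rgb_feat, ir_feat], dim=1))
        dx, dy, s, a = p[:, 0], p[:, 1], p[:, 2].exp(), p[:, 3]
        cos, sin = torch.cos(a), torch.sin(a)
        theta = torch.stack([
            torch.stack([s * cos, -s * sin, dx], dim=1),
            torch.stack([s * sin,  s * cos, dy], dim=1)], dim=1)  # (N, 2, 3) affine matrices
        grid = F.affine_grid(theta, ir_feat.shape, align_corners=False)
        aligned_ir = F.grid_sample(ir_feat, grid, align_corners=False)
        return rgb_feat + aligned_ir               # fuse calibrated modalities

rgb, ir = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
print(TSRABlock()(rgb, ir).shape)
```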
Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial training (SAT). However, initial FAT encounters catastrophic overfitting, i.e., the robust accuracy against adversarial attacks suddenly and dramatically decreases. Although several FAT variants spare no effort to prevent overfitting, they sacrifice much computational cost. In this paper, we explore the difference between the training processes of SAT and FAT and observe that the attack success rate of the adversarial examples (AEs) in FAT gradually deteriorates in the later training stage, leading to overfitting. These AEs are generated by the fast gradient sign method (FGSM) with zero or random initialization. Based on this observation, and after investigating several initialization strategies, we propose a prior-guided FGSM initialization method to avoid overfitting, which improves the quality of the AEs throughout the training process. The initialization is formed by leveraging historically generated AEs without additional computational cost. We further provide a theoretical analysis of the proposed initialization method. We also propose a simple yet effective regularizer based on the prior-guided initialization, i.e., the currently generated perturbation should not deviate too much from the prior-guided initialization. The regularizer adopts both historical and current adversarial perturbations to guide the model learning. Evaluations on four datasets demonstrate that the proposed method can prevent catastrophic overfitting and outperform state-of-the-art FAT methods. The code is released at https://github.com/jiaxiaojunqaq/fgsm-pgi.
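A hedged sketch of the idea described above (not the FGSM-PGI release): initialize the FGSM perturbation from the perturbation kept from the previous epoch for the same batch, and regularize the current perturbation so it stays consistent with that prior. The model, the logit-consistency form of the regularizer, and the buffer handling are simplified assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_with_prior(model, x, y, prior_delta, eps=8 / 255, lam=1.0):
    delta = prior_delta.clone().detach().requires_grad_(True)   # prior-guided initialization
    loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
    grad = torch.autograd.grad(loss, delta)[0]
    new_delta = torch.clamp(delta + eps * grad.sign(), -eps, eps).detach()

    # regularizer: keep logits under the current vs. the prior perturbation consistent
    logits_cur = model(torch.clamp(x + new_delta, 0, 1))
    logits_pri = model(torch.clamp(x + prior_delta, 0, 1))
    reg = F.mse_loss(logits_cur, logits_pri)
    adv_loss = F.cross_entropy(logits_cur, y) + lam * reg
    return new_delta, adv_loss                                   # new_delta becomes next epoch's prior

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
delta, loss = fgsm_with_prior(model, x, y, torch.zeros_like(x))
print(delta.abs().max().item(), loss.item())
```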
Continual learning requires incremental compatibility with a sequence of tasks. However, the design of the model architecture remains an open question: in general, learning all tasks with a shared set of parameters suffers from severe interference between tasks, while learning each task with a dedicated parameter subspace is limited by scalability. In this work, we theoretically analyze the generalization errors of learning plasticity and memory stability in continual learning, which can be uniformly upper-bounded by (1) the discrepancy between task distributions, (2) the flatness of the loss landscape, and (3) the coverage of the parameter space. Then, inspired by robust biological learning systems that process sequential experiences with multiple parallel compartments, we propose Cooperation of Small Continual Learners (CoSCL) as a general strategy for continual learning. Specifically, we present an architecture with a fixed number of narrower sub-networks that learn all incremental tasks in parallel, which can naturally reduce the two errors by improving the three components of the upper bound. To strengthen this advantage, we encourage these sub-networks to cooperate by penalizing the difference of the predictions made from their feature representations. With a fixed parameter budget, CoSCL improves a variety of representative continual learning approaches by a large margin (e.g., up to 10.64% on CIFAR-100-SC, 9.33% on CIFAR-100-RS, and 6.72% on Tiny-ImageNet, with consistent gains on CUB-200-2011) and achieves new state-of-the-art performance.
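A hedged sketch of the architectural idea (not the CoSCL release): several narrow sub-networks learn each task in parallel, their predictions are ensembled, and a cooperation term penalizes the discrepancy between their predictive distributions. The widths, learner count, and the exact discrepancy measure are assumptions.

```python
import torch
import torch.nn.functional as F

class CoSCL(torch.nn.Module):
    def __init__(self, in_dim=784, width=64, n_learners=4, n_classes=10):
        super().__init__()
        self.learners = torch.nn.ModuleList([
            torch.nn.Sequential(torch.nn.Linear(in_dim, width), torch.nn.ReLU(),
                                torch.nn.Linear(width, n_classes))
            for _ in range(n_learners)])

    def forward(self, x):
        logits = torch.stack([f(x) for f in self.learners])     # (n_learners, batch, classes)
        return logits, logits.mean(dim=0)                        # ensemble by averaging

def cooperation_penalty(logits):
    probs = logits.softmax(dim=-1)
    mean = probs.mean(dim=0, keepdim=True)
    # penalize each learner's divergence from the ensemble consensus
    return F.kl_div(mean.log().expand_as(probs), probs, reduction="batchmean")

model = CoSCL()
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
logits, ens = model(x)
loss = F.cross_entropy(ens, y) + 0.1 * cooperation_penalty(logits)
print(loss.item())
```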
The base learners and the samples with a few shots in an ensemble greatly affect the performance of few-shot classifiers. When the performance is unsatisfactory, it is usually hard to understand the underlying cause and make improvements. To tackle this issue, we propose a visual analysis method, FSLDiagnotor. Given a set of base learners and a collection of samples with a few shots, we consider two problems: 1) finding a subset of base learners that predict the sample collection well; and 2) replacing low-quality shots with more representative ones to adequately represent the sample collection. We formulate both problems as sparse subset selection and develop two selection algorithms to recommend appropriate learners and shots, respectively. A matrix visualization and a scatterplot are combined to explain the recommended learners and shots in context and facilitate user adjustment of them. Based on the adjustment, the algorithm updates the recommendation results for another round of improvement. Two case studies were conducted, demonstrating that FSLDiagnotor helps build few-shot classifiers efficiently and improves their accuracy by 12% and 21%, respectively.
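As a minimal, hedged illustration of the "sparse subset selection" framing (not the FSLDiagnotor implementation), the sketch below greedily picks a small subset of base learners whose correct predictions jointly cover the sample set; the data, budget, and greedy strategy are synthetic placeholders standing in for the paper's dedicated selection algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
correct = rng.random((12, 200)) > 0.55            # correct[i, j]: learner i is right on sample j

def greedy_learner_subset(correct, budget=3):
    covered = np.zeros(correct.shape[1], dtype=bool)
    chosen = []
    for _ in range(budget):
        gains = (correct & ~covered).sum(axis=1)  # new samples each learner would cover
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break
        chosen.append(best)
        covered |= correct[best]
    return chosen, covered.mean()

learners, coverage = greedy_learner_subset(correct)
print(learners, f"coverage={coverage:.2f}")
```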
Ads allocation, which involves allocating ads and organic items to limited feed slots to maximize platform revenue, has become a research hotspot. Note that e-commerce platforms usually have multiple entrances for different categories, and some entrances have few visits. Data from these entrances have low coverage, which makes it difficult for the agent to learn. To address this challenge, we propose Similarity-based Hybrid Transfer for Ads Allocation (SHTAA), which effectively transfers samples and knowledge from data-rich entrances to data-poor entrances. Specifically, we define an uncertainty-aware similarity for MDPs to estimate the similarity between the MDPs of different entrances. Based on this similarity, we design a hybrid transfer method, including instance transfer and strategy transfer, to effectively transfer samples and knowledge from one entrance to another. Both offline and online experiments on the Meituan food delivery platform demonstrate that the proposed method achieves better performance on data-poor entrances and increases the revenue of the platform.
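A loose, hedged illustration (not the SHTAA implementation) of the two steps named above: score how similar a data-poor entrance's MDP looks to each data-rich entrance, discounting the score when the estimate rests on few samples, and use that score to weight transferred experience. All statistics and formulas here are illustrative assumptions.

```python
import numpy as np

def uncertainty_aware_similarity(stats_a, stats_b):
    # stats: (mean reward per state-action bucket, number of samples behind the estimate)
    mean_a, n_a = stats_a
    mean_b, n_b = stats_b
    distance = np.abs(mean_a - mean_b).mean()
    confidence = 1.0 - 1.0 / np.sqrt(1 + min(n_a, n_b))   # few samples -> low confidence
    return confidence * np.exp(-distance)

poor = (np.array([0.2, 0.5, 0.1]), 80)            # data-poor entrance
rich = [(np.array([0.25, 0.45, 0.12]), 5000),      # candidate source entrances
        (np.array([0.7, 0.1, 0.6]), 8000)]
weights = np.array([uncertainty_aware_similarity(poor, r) for r in rich])
transfer_weights = weights / weights.sum()         # instance-transfer weights per source
print(transfer_weights)
```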
With the recent prevalence of reinforcement learning (RL), there has been wide interest in utilizing RL for ads allocation on recommendation platforms (e.g., e-commerce and news feed sites). To achieve better allocation, the input of recent RL-based ads allocation methods has been upgraded from point-wise single items to list-wise arrangements of items. However, this also results in a high-dimensional space of state-action pairs, making it difficult to learn list representations with good generalization ability. This further hinders the exploration of the RL agent and leads to poor sample efficiency. To address this problem, we propose a novel RL-based approach for ads allocation that learns better list representations by leveraging task-specific signals on the Meituan food delivery platform. Specifically, we propose three different auxiliary tasks based on reconstruction, prediction, and contrastive learning, according to prior domain knowledge on ads allocation. We conduct extensive experiments on the Meituan food delivery platform to evaluate the effectiveness of the proposed auxiliary tasks. Both offline and online experimental results show that the proposed method learns better list representations and achieves higher platform revenue than state-of-the-art baselines.
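A hedged sketch (not the production system) of attaching the three kinds of auxiliary losses mentioned above to a list encoder: reconstruct the input list, predict an item-level signal (e.g., click), and contrast representations of the same list under two augmentations. The dimensions, heads, and signals are assumptions.

```python
import torch
import torch.nn.functional as F

d_item, d_list, list_len, batch = 32, 64, 10, 8
encoder = torch.nn.GRU(d_item, d_list, batch_first=True)
recon_head = torch.nn.Linear(d_list, d_item * list_len)
pred_head = torch.nn.Linear(d_list, list_len)

items = torch.randn(batch, list_len, d_item)
clicks = torch.randint(0, 2, (batch, list_len)).float()

_, h = encoder(items)
z = h[-1]                                                         # list representation
_, h2 = encoder(items + 0.01 * torch.randn_like(items))
z2 = h2[-1]                                                       # augmented view of the same list

loss_recon = F.mse_loss(recon_head(z).view(batch, list_len, d_item), items)
loss_pred = F.binary_cross_entropy_with_logits(pred_head(z), clicks)
logits = F.normalize(z, dim=-1) @ F.normalize(z2, dim=-1).t() / 0.1
loss_contrast = F.cross_entropy(logits, torch.arange(batch))
aux_loss = loss_recon + loss_pred + loss_contrast                 # added to the RL objective
print(aux_loss.item())
```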
Current mainstream object detection methods for large aerial images usually divide the large images into patches and then exhaustively detect the objects of interest on all patches, regardless of whether any objects are present. This paradigm, although effective, is inefficient because the detectors have to go through all patches, severely hindering the inference speed. This paper presents an Objectness Activation Network (OAN) to help detectors focus on fewer patches while achieving more efficient inference and more accurate results, enabling a simple and effective solution to object detection in large images. In brief, OAN is a light fully-convolutional network that judges whether each patch contains objects; it can be easily integrated into many object detectors and jointly trained with them end-to-end. We extensively evaluate our OAN with five advanced detectors. Using OAN, all five detectors achieve more than a 30.0% speed-up on three large-scale aerial image datasets, with consistent accuracy improvements. On extremely large Gaofen-2 images (29200×27620 pixels), our OAN improves the detection speed by 70.5%. Moreover, we extend our OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively, without sacrificing accuracy. Code is available at https://github.com/Ranchosky/OAN.
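A hedged sketch of the gating idea (not the released code at the repository above): a light fully-convolutional head scores every patch of a large image, and only patches whose objectness exceeds a threshold are passed to the full detector. The backbone, strides, and threshold are placeholder assumptions.

```python
import torch

class ObjectnessHead(torch.nn.Module):
    def __init__(self, in_ch=3, patch=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(in_ch, 16, 3, stride=4, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 32, 3, stride=4, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(32, 1, 1),
            torch.nn.AdaptiveAvgPool2d(1))        # one objectness logit per patch
        self.patch = patch

    def forward(self, patches):                   # patches: (N, C, patch, patch)
        return self.net(patches).flatten(1).squeeze(1)

def detect_large_image(image, head, detector, thresh=0.5):
    C, H, W = image.shape
    p = head.patch
    results = []
    for y in range(0, H - p + 1, p):
        for x in range(0, W - p + 1, p):
            patch = image[:, y:y + p, x:x + p].unsqueeze(0)
            if torch.sigmoid(head(patch)).item() >= thresh:      # skip likely-empty patches
                results.append(((y, x), detector(patch)))
    return results

head = ObjectnessHead()
dummy_detector = lambda patch: "boxes"            # stand-in for any downstream detector
image = torch.rand(3, 1024, 1024)
print(len(detect_large_image(image, head, dummy_detector)))
```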