In this paper, we propose self-supervised training for video transformers using unlabeled video data. From a given video, we create local and global spatiotemporal views of varying spatial sizes and frame rates. Our self-supervised objective seeks to match the features of these different views representing the same video, making them invariant to spatiotemporal variations in actions. To the best of our knowledge, the proposed approach is the first to alleviate the dependency on negative samples or dedicated memory banks for self-supervised video transformers (SVT). Further, owing to the flexibility of transformer models, SVT supports slow-fast video processing within a single architecture using dynamically adjusted positional encodings, and supports long-term relationship modeling along spatiotemporal dimensions. Our approach performs well on four action recognition benchmarks (Kinetics-400, UCF-101, HMDB-51, and SSv2) and converges faster with small batch sizes. Code: https://git.io/j1juj.
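The core objective lends itself to a compact illustration. Below is a minimal PyTorch sketch of matching features across a global and a local spatiotemporal view of the same video; the tiny encoder and the negative-cosine loss are stand-ins for illustration (SVT itself uses a video transformer with dynamically adjusted positional encodings), so treat the names and shapes here as assumptions.

```python
# Minimal sketch of the cross-view feature-matching idea. The encoder below
# is a stand-in; SVT itself uses a video transformer, which this does not model.
import torch
import torch.nn.functional as F

def make_views(video):
    """video: (T, C, H, W). Returns a global and a local spatiotemporal view."""
    T, C, H, W = video.shape
    global_view = video[::2]                            # lower frame rate, full frame
    local_view = video[:T // 2, :, : H // 2, : W // 2]  # dense frames, small crop
    return global_view, local_view

class TinyEncoder(torch.nn.Module):
    """Stand-in encoder: pools a view to a fixed-size feature vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.proj = torch.nn.Linear(3, dim)

    def forward(self, view):                  # (T, C, H, W) -> (dim,)
        pooled = view.mean(dim=(0, 2, 3))     # average over time and space
        return self.proj(pooled)

encoder = TinyEncoder()
video = torch.randn(16, 3, 64, 64)
g, l = make_views(video)
zg, zl = encoder(g), encoder(l)
# Match features of the two views of the same video (negative cosine similarity),
# so the representation becomes invariant to spatiotemporal changes of the action.
loss = -F.cosine_similarity(zg, zl, dim=0)
loss.backward()
```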
Vision transformers (ViTs) process input images as sequences of patches via self-attention, a radically different architecture than convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and its transferability. In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models. We show, however, that this phenomenon is only due to sub-optimal attack procedures that do not leverage the true representation potential of ViTs. A deep ViT is composed of multiple blocks with a consistent architecture comprising self-attention and feed-forward layers, where each block is capable of independently producing a class token. Formulating an attack using only the last class token (the conventional approach) does not directly leverage the discriminative information stored in earlier tokens, leading to poor adversarial transferability of ViTs. Using the compositional nature of ViT models, we enhance the transferability of existing attacks by introducing two novel strategies specific to the architecture of ViT models. (i) Self-Ensemble: we propose a method to find multiple discriminative pathways by dissecting a single ViT model into an ensemble of networks. This allows explicitly utilizing class-specific information at each ViT block. (ii) Token Refinement: we then propose to refine the tokens to further enhance the discriminative capacity at each block of the ViT. Our token refinement systematically combines the class tokens with the structural information preserved within the patch tokens. An adversarial attack, when applied to such refined tokens within the ensemble of classifiers found in a single vision transformer, has significantly higher transferability.
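To make the self-ensemble strategy concrete, here is a hedged PyTorch sketch: the class token after every block is fed through the shared final norm and head, yielding one classifier per block, and the attack loss sums over all of them. The toy ViT below exists only to make the sketch runnable; attribute names (blocks, norm, head, cls_token) follow timm-style ViTs and are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class ToyViT(nn.Module):
    def __init__(self, dim=64, depth=4, n_classes=10):
        super().__init__()
        self.patch_embed = nn.Linear(48, dim)     # toy patch embedding
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            for _ in range(depth))
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, n_classes)

def self_ensemble_logits(model, patches):
    """Return a list of logits, one from the class token of each block."""
    B = patches.shape[0]
    x = torch.cat([model.cls_token.expand(B, -1, -1),
                   model.patch_embed(patches)], dim=1)
    logits = []
    for block in model.blocks:
        x = block(x)
        logits.append(model.head(model.norm(x[:, 0])))  # class token per block
    return logits

model = ToyViT()
patches = torch.randn(2, 16, 48, requires_grad=True)
labels = torch.tensor([3, 7])
# Attack objective: maximize summed cross-entropy over every block's classifier,
# not just the last one, to exploit discriminative cues in earlier tokens.
loss = sum(nn.functional.cross_entropy(l, labels)
           for l in self_ensemble_logits(model, patches))
loss.backward()
adv_patches = patches + 0.03 * patches.grad.sign()      # one FGSM-style step
```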
Vision transformers (ViTs) have demonstrated impressive performance across various machine vision problems. These models are based on a multi-head self-attention mechanism that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility in attending to image-wide context conditioned on a given patch helps handle nuisances in natural images, e.g., severe occlusions, domain shifts, spatial permutations, and adversarial and natural perturbations. We systematically study this question via an extensive set of experiments, encompassing three ViT families and comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViTs: (a) transformers are highly robust to severe occlusions, perturbations, and domain shifts, e.g., they retain as high as 60% top-1 accuracy on ImageNet even after randomly occluding 80% of the image content; (b) this robustness to occlusion is not due to a bias towards local textures, and ViTs are significantly less biased towards textures than CNNs; when properly trained to encode shape-based features, ViTs demonstrate shape recognition capability comparable to that of the human visual system, previously unmatched in the literature; (c) using ViTs to encode shape representations leads to the interesting consequence of accurate semantic segmentation without pixel-level supervision; (d) off-the-shelf features from a single ViT model can be combined into a feature ensemble, leading to high accuracy rates across a range of classification datasets in both traditional and few-shot learning paradigms. We show that these effective features of ViTs are due to the flexible and dynamic receptive fields made possible by the self-attention mechanism.
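Property (a) suggests a simple experiment one can reproduce: randomly occlude a large fraction of image patches and check how often the top-1 prediction survives. The sketch below assumes a generic classifier taking (B, 3, H, W) inputs; the placeholder model and sizes are illustrative, not the paper's setup.

```python
import torch

def occlude_patches(images, patch=16, drop_ratio=0.8):
    """Zero out a random drop_ratio of patch x patch cells in each image."""
    B, C, H, W = images.shape
    gh, gw = H // patch, W // patch
    keep = torch.rand(B, gh * gw) >= drop_ratio      # True = keep this patch
    mask = keep.float().view(B, 1, gh, gw)
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return images * mask

# Placeholder classifier; substitute any ViT or CNN taking (B, 3, H, W).
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 64 * 64, 1000))
images = torch.randn(4, 3, 64, 64)
clean_pred = model(images).argmax(dim=1)
occluded_pred = model(occlude_patches(images)).argmax(dim=1)
print("top-1 agreement under 80% occlusion:",
      (clean_pred == occluded_pred).float().mean().item())
```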
We present a new algorithm to learn a deep neural network model robust against adversarial attacks. Previous algorithms demonstrate that an adversarially trained Bayesian Neural Network (BNN) provides improved robustness. We recognize that the adversarial learning approach for approximating the multi-modal posterior distribution of a Bayesian model can lead to mode collapse; consequently, the model's achievements in robustness and performance are sub-optimal. Instead, we first propose preventing mode collapse to better approximate the multi-modal posterior distribution. Second, based on the intuition that a robust model should ignore perturbations and only consider the informative content of the input, we conceptualize and formulate an information gain objective to measure and force the information learned from both benign and adversarial training instances to be similar. Importantly, we prove and demonstrate that minimizing the information gain objective allows the adversarial risk to approach the conventional empirical risk. We believe our efforts provide a step toward a basis for a principled method of adversarially training BNNs. Our model demonstrates significantly improved robustness (up to 20%) compared with adversarial training and Adv-BNN under PGD attacks with 0.035 distortion on both the CIFAR-10 and STL-10 datasets.
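As a rough illustration of the information gain idea, the sketch below adds a symmetrized KL penalty between the predictive distributions on benign and adversarial inputs to a standard adversarial training loss. This is one plausible instantiation, not the paper's exact objective, and the linear model stands in for a single BNN posterior sample.

```python
import torch
import torch.nn.functional as F

def info_gain_penalty(logits_clean, logits_adv):
    """Symmetrized KL between predictive distributions on clean/adv inputs."""
    p = F.log_softmax(logits_clean, dim=1)
    q = F.log_softmax(logits_adv, dim=1)
    kl_pq = F.kl_div(q, p.exp(), reduction="batchmean")  # KL(clean || adv)
    kl_qp = F.kl_div(p, q.exp(), reduction="batchmean")  # KL(adv || clean)
    return 0.5 * (kl_pq + kl_qp)

model = torch.nn.Linear(32, 10)                  # placeholder for a BNN sample
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
x_adv = x + 0.035 * torch.randn_like(x).sign()   # stand-in for a PGD example
logits_c, logits_a = model(x), model(x_adv)
# Adversarial training loss plus a penalty forcing both views to carry the
# same information, pushing the adversarial risk toward the empirical risk.
loss = F.cross_entropy(logits_a, y) + info_gain_penalty(logits_c, logits_a)
loss.backward()
```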
The spread of offensive content online, such as hate speech and cyber-bullying, is a global phenomenon. This has sparked interest in the artificial intelligence (AI) and natural language processing (NLP) communities, motivating the development of various systems trained to detect potentially harmful content automatically. These systems require annotated datasets to train the machine learning (ML) models. However, with a few notable exceptions, most datasets on this topic have dealt with English and a few other high-resource languages. As a result, research in offensive language identification has been limited to these languages. This paper addresses this gap by tackling offensive language identification in Sinhala, a low-resource Indo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive or not offensive at both the sentence level and the token level, improving the explainability of the ML models. SOLD is the first large publicly available offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
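For illustration only, a SOLD-style record with both annotation levels might look like the following; the field names and values are hypothetical, not the released schema.

```python
# Hypothetical two-level annotation record: a sentence-level offensive label
# plus token-level labels marking which tokens carry the offense.
example = {
    "tokens": ["w1", "w2", "w3", "w4", "w5"],   # a tokenized tweet
    "sentence_label": "OFF",                    # offensive vs. not offensive
    "token_labels": [0, 0, 1, 1, 0],            # 1 = token supports the label
}

# Token-level labels let a model explain *why* a post is flagged,
# not just *whether* it is offensive.
flagged = [t for t, y in zip(example["tokens"], example["token_labels"]) if y]
print(flagged)
```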
Artificial Intelligence (AI) and its data-centric branch of machine learning (ML) have greatly evolved over the last few decades. However, as AI is increasingly used in real-world use cases, the interpretability of and accessibility to AI systems have become major research areas. The lack of interpretability of ML-based systems is a major hindrance to widespread adoption of these powerful algorithms. This is due to many reasons, including ethical and regulatory concerns, which have resulted in poorer adoption of ML in some areas. The recent past has seen a surge in research on interpretable ML. Generally, designing an ML system requires good domain understanding combined with expert knowledge. New techniques are emerging to improve ML accessibility through automated model design. This paper provides a review of the work done to improve the interpretability and accessibility of machine learning in the context of global problems, while also being relevant to developing countries. We review work under multiple levels of interpretability, including scientific and mathematical interpretation, statistical interpretation, and partial semantic interpretation. This review includes applications in three areas, namely food processing, agriculture, and health.
Many applications require the robustness, or ideally invariance, of neural networks to certain transformations of input data. Most commonly, this requirement is addressed through adversarial training or by defining network architectures that include the desired invariance by design. In this work, we propose a method to make network architectures provably invariant with respect to group actions by selecting one element from a (possibly continuous) orbit based on a fixed criterion. In a nutshell, we intend to "undo" any possible transformation before feeding the data into the actual network. We further empirically analyze the properties of different approaches that incorporate invariance via training or architecture, and demonstrate the advantages of our method in terms of robustness and computational efficiency. In particular, we investigate robustness with respect to rotations of images (which can hold up to discretization artifacts) as well as provable orientation and scaling invariance for 3D point cloud classification.
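The "undo the transformation first" idea can be sketched directly for point clouds: map every input to a canonical orbit element before the actual network sees it. In the sketch below the fixed criterion is PCA alignment with a third-moment sign convention plus scale normalization; the paper's criterion may differ, so this is an illustrative choice.

```python
import math
import torch

def canonicalize(points):
    """points: (N, 3). Return a rotation- and scale-normalized copy."""
    centered = points - points.mean(dim=0)
    centered = centered / centered.norm(dim=1).mean()        # undo scaling
    # PCA: rotate so the principal axes align with the coordinate axes.
    _, _, Vh = torch.linalg.svd(centered, full_matrices=False)
    proj = centered @ Vh.T
    # Fixed criterion resolving each axis' sign ambiguity (third moment).
    axes = Vh * torch.sign((proj ** 3).sum(dim=0)).unsqueeze(1)
    return centered @ axes.T

net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 10))
cloud = torch.randn(1024, 3)
c, s = math.cos(0.7), math.sin(0.7)
R = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
transformed = 2.5 * cloud @ R.T                # rotated and scaled copy
out_a = net(canonicalize(cloud)).mean(dim=0)
out_b = net(canonicalize(transformed)).mean(dim=0)
print(torch.allclose(out_a, out_b, atol=1e-4)) # same output: invariance holds
```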
Lexical simplification (LS) is the task of automatically replacing complex words with simpler ones, making texts more accessible to various target populations (e.g., individuals with low literacy or learning disabilities, and second language learners). To train and test models, LS systems usually require corpora that feature complex words in context along with their candidate substitutions. To continue improving the performance of LS systems, we introduce ALEXSIS-PT, a novel multi-candidate dataset for Brazilian Portuguese LS containing 9,605 candidate substitutions for 387 complex words. ALEXSIS-PT has been compiled following the ALEXSIS protocol for Spanish, opening exciting new avenues for cross-lingual models. ALEXSIS-PT is the first LS multi-candidate dataset that contains Brazilian newspaper articles. We evaluated four models for substitute generation on this dataset, namely mDistilBERT, mBERT, XLM-R, and BERTimbau. BERTimbau achieved the highest performance across all evaluation metrics.
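For context, substitute generation with such models is typically run as masked-word prediction: mask the complex word and rank the language model's candidates. The sketch below uses BERTimbau's public checkpoint; the sentence and complex word are made-up examples, and the paper's exact generation and ranking procedure may differ.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="neuralmind/bert-base-portuguese-cased")

sentence = "O medicamento pode mitigar os sintomas da doença."
complex_word = "mitigar"
masked = sentence.replace(complex_word, fill.tokenizer.mask_token, 1)

# Top-k tokens the LM proposes for the masked slot = candidate substitutions.
for cand in fill(masked, top_k=5):
    print(cand["token_str"], round(cand["score"], 3))
```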
A multiword expression (MWE) is a sequence of words that collectively presents a meaning not derived from its individual words. The task of processing MWEs is crucial in many natural language processing (NLP) applications, including machine translation and terminology extraction. Therefore, detecting MWEs in different domains is an important research topic. In this paper, we explore state-of-the-art neural transformers for detecting MWEs in flower and plant names. We evaluate different transformer models on a dataset created from an encyclopedia of plants and flowers. We empirically show that transformer models outperform previous neural models based on long short-term memory (LSTM).
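MWE detection with transformers is commonly framed as BIO token classification, and a minimal sketch of that setup follows. The base checkpoint, label set, and example sentence are illustrative; the model is untrained here, so its tags are random until fine-tuned on MWE-annotated data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-MWE", "I-MWE"]
tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels))

# "lily of the valley" is a flower-name MWE: its meaning is not compositional.
words = ["The", "lily", "of", "the", "valley", "blooms", "in", "spring"]
enc = tok(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(dim=-1)[0]

# Map subword predictions back to words via word_ids (one line per subword).
for i, word_idx in enumerate(enc.word_ids()):
    if word_idx is not None:
        print(words[word_idx], labels[pred[i].item()])
```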
Multiword expressions (MWEs) present groups of words in which the meaning of the whole is not derived from the meaning of its parts. The task of processing MWEs is crucial in many natural language processing (NLP) applications, including machine translation and terminology extraction. Therefore, detecting MWEs is a popular research topic. In this paper, we explore state-of-the-art neural transformers for the task of detecting MWEs. We empirically evaluate several transformer models on the dataset from SemEval-2016 Task 10: Detecting Minimal Semantic Units and their Meanings (DiMSUM). We show that transformer models outperform previous neural models based on long short-term memory (LSTM). The code and pre-trained models will be made freely available to the community.