The key to Human-Object Interaction (HOI) recognition is to infer the relationship between humans and objects. Recently, image-based HOI detection has made significant progress. However, there is still room for improvement in video HOI detection performance. Existing one-stage methods use well-designed end-to-end networks to detect a video segment and directly predict the interactions, which makes model learning and further optimization of the network more complicated. This paper introduces the Spatial Parsing and Dynamic Temporal Pooling (SPDTP) network, which takes the entire video as a spatio-temporal graph with human and object nodes as input. Unlike existing methods, our proposed network predicts the difference between interactive and non-interactive pairs through explicit spatial parsing, and then performs interaction recognition. Moreover, we propose a learnable and differentiable Dynamic Temporal Module (DTM) to emphasize the keyframes of the video and suppress redundant frames. Experimental results show that SPDTP can pay more attention to active human-object pairs and valid keyframes. Overall, we achieve state-of-the-art performance on the CAD-120 and Something-Else datasets.
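As a rough illustration of the spatio-temporal graph input described above (not SPDTP's actual construction), one can connect human/object nodes spatially within each frame and temporally across adjacent frames. The frame count, entity layout, and fully-connected spatial edges below are all illustrative assumptions:

```python
import numpy as np

# Toy layout: F frames, each with one human node and two object nodes.
F, K = 4, 3                      # frames, entities per frame
N = F * K
A = np.zeros((N, N), dtype=int)  # adjacency matrix of the spatio-temporal graph

def nid(frame, k):
    """Node id of entity k in a given frame."""
    return frame * K + k

for f in range(F):
    # Spatial edges: fully connect human/object nodes within a frame.
    for i in range(K):
        for j in range(K):
            if i != j:
                A[nid(f, i), nid(f, j)] = 1
    # Temporal edges: link each entity to itself in the next frame.
    if f + 1 < F:
        for k in range(K):
            A[nid(f, k), nid(f + 1, k)] = 1
            A[nid(f + 1, k), nid(f, k)] = 1
```

A graph network operating on `A` can then propagate evidence both within frames (spatial parsing) and along entity tracks (temporal reasoning).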
Translated by Google Translate
Learning to optimize (L2O) has recently emerged as a promising approach to solving optimization problems, by exploiting the strong prediction power of neural networks and offering lower runtime complexity than conventional solvers. While L2O has been applied to various problems, a crucial yet challenging class of problems — robust combinatorial optimization in the form of minimax optimization — has largely remained under-explored. Besides the exponentially large decision space, a key challenge for robust combinatorial optimization lies in the inner optimization problem, which is typically non-convex and entangled with the outer optimization. In this paper, we study robust combinatorial optimization and propose a novel learning-based optimizer, called LRCO (Learning for Robust Combinatorial Optimization), which quickly outputs a robust solution in the presence of uncertain context. LRCO leverages a pair of learning-based optimizers — one for the minimizer and the other for the maximizer — that use their respective objective functions as losses and can be trained without requiring labels for training problem instances. To evaluate the performance of LRCO, we run simulations on the task offloading problem in vehicular edge computing. Our results highlight that LRCO can greatly reduce the worst-case cost and improve robustness, while having a very low runtime complexity.
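The min-max structure described above can be sketched with plain gradient descent-ascent on a toy saddle objective. This is not LRCO's learned-optimizer architecture — the quadratic `cost` function is an assumed stand-in for a relaxed robust objective — but it shows the two-player setup in which each side is trained with the shared objective as its own loss, without labels:

```python
import numpy as np

def cost(x, y):
    # Toy saddle objective: convex in the decision x, concave in the context y.
    return np.sum(x ** 2) - np.sum(y ** 2) + x @ y

rng = np.random.default_rng(0)
x = rng.normal(size=3)   # minimizer's (relaxed) decision
y = rng.normal(size=3)   # maximizer's adversarial context
lr = 0.05

for _ in range(500):
    gy = -2 * y + x      # gradient of cost w.r.t. y
    y = y + lr * gy      # maximizer: gradient ascent on the objective
    gx = 2 * x + y       # gradient of cost w.r.t. x
    x = x - lr * gx      # minimizer: gradient descent on the same objective
```

For this strongly-convex-strongly-concave surrogate, the alternating updates converge to the saddle point at the origin; LRCO replaces each side's update rule with a trained network so a robust solution is produced in a single forward pass.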
Adversarial attacks against commercial black-box speech platforms, including cloud speech APIs and voice control devices, have received little attention until recent years. Existing black-box attacks all rely heavily on knowledge of prediction/confidence scores to craft effective adversarial examples (AEs), which can be intuitively defended against by service providers simply by not returning these scores. In this paper, we propose two novel adversarial attacks under more practical and rigorous scenarios. For commercial cloud speech APIs, we propose Occam, a decision-only black-box adversarial attack, where only final decisions are available to the adversary. In Occam, we formulate decision-only AE generation as a discontinuous, large-scale global optimization problem, and solve it by adaptively decomposing this complicated problem into a set of sub-problems and cooperatively optimizing each one. Occam is a one-size-fits-all approach that achieves 100% attack success rates on a wide range of popular speech and speaker recognition APIs, including those of Google, Alibaba, Microsoft, Tencent, iFlytek, and Jingdong, outperforming state-of-the-art black-box attacks. For commercial voice control devices, we propose NI-Occam, the first non-interactive physical adversarial attack, where the adversary neither needs to query the oracle nor has access to its internal information and training data. We combine adversarial attacks with model inversion attacks to generate physically effective audio AEs with high transferability, without any interaction with the target devices. Our experimental results show that NI-Occam can successfully fool Apple Siri, Microsoft Cortana, Google Assistant, iFlytek, and Amazon Echo, with an average SRoA of 52% and an SNR of 9.65 dB, demonstrating non-interactive physical attacks against voice control devices.
Transfer learning has become a common solution to address the scarcity of training data. It trains a specific student model by reusing or fine-tuning the early layers of a well-trained teacher model, which is usually publicly available. However, besides the utility improvement, the transferred public knowledge also brings potential threats to model confidentiality, and even further raises other security and privacy issues. In this paper, we present the first comprehensive investigation of the teacher model exposure threat in the transfer learning context, aiming to gain a deeper insight into the tension between public knowledge and model confidentiality. To this end, we propose a teacher model fingerprinting attack to infer the origin of a student model, i.e., the teacher model it transfers from. Specifically, we propose a novel optimization-based method to carefully generate queries to probe the student model to realize our attack. Unlike existing model reverse-engineering approaches, our proposed fingerprinting method does not rely on fine-grained model outputs, e.g., posteriors, or on auxiliary information about the model architecture or training dataset. We systematically evaluate the effectiveness of the proposed attack. The empirical results demonstrate that our attack can accurately identify the model origin with few probing queries. Moreover, we show that the proposed attack can serve as a stepping stone to facilitating other attacks against machine learning models, such as model stealing.
Does federated learning (FL) work when both uplink and downlink communications have errors? How much communication noise can FL handle, and what is its impact on the learning performance? This work is devoted to answering these practically important questions by explicitly incorporating both uplink and downlink noisy channels into the FL pipeline. We present several novel convergence analyses of FL over simultaneously noisy uplink and downlink communication channels, which encompass full and partial client participation, direct model and model-differential transmissions, and non-independent and identically distributed (non-IID) local datasets. These analyses characterize sufficient conditions for FL over noisy channels to have the same convergence behavior as the ideal case of no communication error. More specifically, in order to maintain the O(1/T) convergence rate of FedAvg with perfect communications, the uplink and downlink signal-to-noise ratios (SNRs) for direct model transmissions should be controlled such that they scale as O(t^2), where t is the index of the communication round, but they can stay constant for model-differential transmissions. The key insight of these theoretical results is a "flying under the radar" principle — stochastic gradient descent (SGD) is an inherently noisy process, and uplink/downlink communication noise can be tolerated as long as it does not dominate the time-varying SGD noise. We exemplify these theoretical findings with two widely adopted communication techniques — transmit power control and diversity combining — and further validate their performance advantages over standard methods via extensive numerical experiments using several real-world FL tasks.
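A minimal numpy sketch of the "flying under the radar" idea, assuming toy quadratic client objectives and a simple additive-Gaussian channel model (neither taken from the paper): the uplink/downlink SNR for direct model transmission is grown as O(t^2) across rounds, and FedAvg still drives the global model to the optimum of the average loss:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_clients, rounds = 10, 5, 50
targets = rng.normal(size=(n_clients, d))   # each client's quadratic optimum c_k
opt = targets.mean(axis=0)                  # optimum of the average loss
w = np.zeros(d)                             # global model

def noisy(v, snr):
    """Additive Gaussian channel noise at a given signal-to-noise ratio."""
    sig_pow = np.mean(v ** 2) + 1e-12
    return v + rng.normal(scale=np.sqrt(sig_pow / snr), size=v.shape)

for t in range(1, rounds + 1):
    snr = 10.0 * t ** 2                     # SNR scaled as O(t^2) across rounds
    received = []
    for k in range(n_clients):
        local = noisy(w, snr)               # downlink broadcast over a noisy channel
        for _ in range(5):                  # local SGD on f_k(w) = ||w - c_k||^2 / 2
            local -= 0.1 * (local - targets[k])
        received.append(noisy(local, snr))  # uplink over a noisy channel
    w = np.mean(received, axis=0)           # FedAvg aggregation
```

Because the per-round channel noise shrinks as 1/t while SGD itself is noisy, the injected errors never dominate, matching the intuition stated above.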
In this paper, we investigate the impact of diverse user preferences on learning under the stochastic multi-armed bandit (MAB) framework. We aim to show that when the user preferences are sufficiently diverse and each arm can be optimal for certain users, the O(log T) regret incurred by exploring the sub-optimal arms under the standard stochastic MAB setting can be reduced to a constant. Our intuition is that to achieve sub-linear regret, the number of times an optimal arm is pulled should scale linearly in time; when all arms are optimal for certain users and pulled frequently, the estimated arm statistics can quickly converge to their true values, thus reducing the need for exploration dramatically. We cast the problem into a stochastic linear bandits model, where both the user preferences and the states of the arms are modeled as independent and identically distributed (i.i.d.) d-dimensional random vectors. After receiving the user preference vector at the beginning of each time slot, the learner pulls an arm and receives a reward as the inner product of the preference vector and the arm state vector. We also assume that the state of the pulled arm is revealed to the learner once it is pulled. We propose a Weighted Upper Confidence Bound (W-UCB) algorithm and show that it can achieve a constant regret when the user preferences are sufficiently diverse. The performance of W-UCB under general setups is also completely characterized and validated with synthetic data.
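The interaction protocol described above can be sketched as follows. The exploration bonus here is a plain per-arm UCB heuristic, not the paper's W-UCB weighting, and the dimensions, noise level, and bonus form are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_arms, T = 4, 3, 2000
arm_means = rng.normal(size=(n_arms, d))   # true mean state of each arm

est = np.zeros((n_arms, d))                # running mean of observed arm states
pulls = np.zeros(n_arms)
total = best = 0.0

for t in range(1, T + 1):
    pref = rng.normal(size=d)              # user preference revealed each time slot
    bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(pulls, 1))
    bonus[pulls == 0] = np.inf             # pull every arm at least once
    a = int(np.argmax(est @ pref + bonus * np.linalg.norm(pref)))
    state = arm_means[a] + 0.1 * rng.normal(size=d)  # pulled arm's state revealed
    pulls[a] += 1
    est[a] += (state - est[a]) / pulls[a]  # update the pulled arm's estimate
    total += pref @ state                  # realized reward: inner product
    best += np.max(arm_means @ pref)       # oracle best arm for this user

regret = best - total
```

Because i.i.d. Gaussian preferences make every arm optimal for some users, each arm is pulled often under any reasonable policy, and the estimates converge without deliberate exploration — the mechanism behind the constant-regret claim.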
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
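A minimal sketch of the NAIVEATTACK-style poisoning step described above. The trigger pattern, placement, poison rate, and target class are all illustrative assumptions, and the actual attacks operate against a full distillation pipeline rather than this toy array:

```python
import numpy as np

def add_trigger(img, size=3, value=1.0):
    """Stamp a small bright square into the bottom-right corner of an image."""
    out = img.copy()
    out[-size:, -size:] = value
    return out

rng = np.random.default_rng(3)
images = rng.uniform(size=(100, 32, 32))   # stand-in for the raw training set
labels = rng.integers(0, 10, size=100)

poison_rate, target_class = 0.1, 0
idx = rng.choice(len(images), int(poison_rate * len(images)), replace=False)
for i in idx:
    images[i] = add_trigger(images[i])
    labels[i] = target_class               # relabel to the attacker's target class
# A dataset distillation procedure run on this poisoned raw set would then
# bake the trigger into the resulting synthetic images; DOORPING instead
# re-optimizes the trigger at every distillation iteration.
```

The key difference from classic backdoor attacks is the injection point: the victim trains only on the distilled output, so the trigger must survive distillation rather than model training.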
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the feature of event-level attention during generation, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist in extractive summarization, where the extracted summary likewise comes in time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.