Vertical collaborative learning systems, also known as vertical federated learning (VFL) systems, have recently emerged as a concept to process data distributed across many individual sources without the need to centralize it. Multiple participants collaboratively train a model based on their local data in a privacy-preserving manner. To date, VFL has become a de facto solution for securely learning a model among organizations, allowing knowledge to be shared without compromising the privacy of any individual organization. Despite the prosperous development of VFL systems, we find that certain inputs from a participant, named adversarial dominating inputs (ADIs), can dominate the joint inference toward the direction of the adversary's will and force other (victim) participants to make negligible contributions, losing the rewards that are usually offered regarding the importance of their contributions in collaborative learning scenarios. We conduct a systematic study on ADIs by first proving their existence in typical VFL systems. We then propose gradient-based methods to synthesize ADIs of various formats and exploit common VFL systems. We further launch greybox fuzz testing, guided by the resiliency score of "victim" participants, to perturb adversary-controlled inputs and systematically explore the VFL attack surface in a privacy-preserving manner. We conduct an in-depth study on the influence of critical parameters and settings on synthesizing ADIs. Our study reveals new VFL attack opportunities, promoting the identification of unknown threats before breaches and the building of more secure VFL systems.
Translated by Google Translate.
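The gradient-based ADI synthesis described above can be illustrated with a minimal sketch on a hypothetical two-party VFL setup, where the top model linearly combines scores from party A (the adversary) and party B (the victim). The toy model and all names here are illustrative assumptions, not the paper's actual systems or method details.

```python
import numpy as np

# Hypothetical two-party VFL top model: a linear combination of both
# parties' feature contributions.
rng = np.random.default_rng(0)
w_a = rng.normal(size=8)   # server-side weights for the adversary's features
w_b = rng.normal(size=8)   # server-side weights for the victim's features
x_b = rng.normal(size=8)   # the victim's honest feature vector

def joint_logit(x_a, x_b):
    return w_a @ x_a + w_b @ x_b

# Gradient ascent on the adversary-controlled input: for this linear top
# model, d(logit)/d(x_a) = w_a, so each step pushes the joint prediction
# toward the adversary's target regardless of the victim's features.
x_a = rng.normal(size=8)
lr = 0.5
for _ in range(100):
    x_a = x_a + lr * w_a

adv_share = w_a @ x_a
vic_share = w_b @ x_b
# The victim's contribution is now negligible: even negating its input
# does not flip the joint decision.
dominated = joint_logit(x_a, x_b) > 0 and joint_logit(x_a, -x_b) > 0
print(dominated, abs(adv_share) > abs(vic_share))
```

In a real deep VFL model the gradient would come from backpropagation rather than a closed form, but the dominance mechanism is the same: the adversary amplifies its own contribution until the victim's share cannot affect the outcome.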
The prosperous development of cloud computing and machine learning as a service has led to the widespread use of media software to process confidential media data. This paper explores an adversary's ability to launch side channel analyses (SCA) against media software to reconstruct confidential media inputs. Recent advances in representation learning and perceptual learning inspired us to consider the reconstruction of media inputs from side channel traces as a cross-modality manifold learning task that can be addressed in a unified manner with an autoencoder framework trained to learn the mapping between media inputs and side channel observations. We further enhance the autoencoder with attention to localize the program points that make the primary contribution to SCA, thus automatically pinpointing information-leakage points in media software. We also propose a novel and highly effective defensive technique called perception blinding that can perturb media inputs with perception masks and mitigate manifold learning-based SCA. Our evaluation exploits three popular media software to reconstruct inputs in image, audio, and text formats. We analyze three common side channels - cache bank, cache line, and page tables - and userspace-only cache set accesses logged by standard Prime+Probe. Our framework successfully reconstructs high-quality confidential inputs from the evaluated media software and automatically pinpoints their vulnerable program points, many of which are unknown to the public. We further show that perception blinding can mitigate manifold learning-based SCA with negligible extra cost.
Markov decision processes (MDPs) provide a mathematical framework for modeling sequential decision-making problems, many of which are crucial to security and safety, such as autonomous driving and robot control. The rapid development of artificial intelligence research has created efficient methods for solving MDPs, such as deep neural networks (DNNs), reinforcement learning (RL), and imitation learning (IL). However, these popular models solving MDPs are neither thoroughly tested nor rigorously reliable. We present MDPFuzz, the first blackbox fuzz testing framework for models solving MDPs. MDPFuzz forms testing oracles by checking whether the target model enters abnormal and dangerous states. During fuzzing, MDPFuzz decides which mutated state to retain by measuring whether it can reduce cumulative rewards or form a new state sequence. We design efficient techniques to quantify the "freshness" of a state sequence using Gaussian mixture models (GMMs) and dynamic expectation-maximization (DynEM). We also prioritize states with a high potential of revealing crashes by estimating the local sensitivity of target models. MDPFuzz is evaluated on five state-of-the-art models for solving MDPs, including supervised DNN, RL, IL, and multi-agent RL. Our evaluation includes scenarios of autonomous driving, aircraft collision avoidance, and two games that are often used for benchmarking. During a 12-hour run, we find over 80 crash-triggering state sequences on each model. We show the inspiring finding that crash-triggering states, though appearing normal, induce distinct neuron activation patterns compared with normal states. We further develop an abnormal behavior detector to harden all the evaluated models and repair them with the findings of MDPFuzz, significantly enhancing their robustness without sacrificing accuracy.
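The "freshness" idea above can be sketched with a deliberately simplified stand-in for the paper's GMM + DynEM machinery: here a single Gaussian is fitted to features of previously seen state sequences, and a sequence falling in a low-density region counts as fresh. The feature dimensionality, threshold, and data are all illustrative assumptions.

```python
import numpy as np

# Fit a single Gaussian to features of previously seen state sequences
# (a simplification of the paper's GMM with dynamic EM updates).
rng = np.random.default_rng(1)
seen = rng.normal(loc=0.0, scale=1.0, size=(500, 4))  # seen sequence features

mu = seen.mean(axis=0)
cov = np.cov(seen, rowvar=False) + 1e-6 * np.eye(4)   # regularized covariance
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def log_density(x):
    # Log-density of the fitted Gaussian at feature vector x.
    d = x - mu
    return -0.5 * (d @ cov_inv @ d + logdet + len(x) * np.log(2 * np.pi))

def is_fresh(x, threshold=-12.0):
    # Low-density sequences are "fresh" and kept in the fuzzing corpus.
    return log_density(x) < threshold

typical = mu.copy()      # a sequence near the bulk of seen data
outlier = mu + 8.0       # a sequence far from anything seen before
print(is_fresh(typical), is_fresh(outlier))
```

The fuzzer would keep `outlier`-like sequences for further mutation, since exercising rarely seen state regions is what exposes crash-triggering behavior.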
We develop DeepTraversal, a feedback-driven framework to test DNNs. DeepTraversal first launches an offline phase to map media data of various forms to manifolds. Then, in its online testing phase, DeepTraversal traverses the prepared manifold space to maximize DNN coverage criteria and trigger prediction errors. In our evaluation, DNNs performing various tasks (e.g., classification, self-driving, machine translation) and media data of different types (image, audio, text) are used. DeepTraversal exhibits better performance than prior methods with respect to popular DNN coverage criteria, and it can discover a larger number of higher-quality error-triggering inputs. The tested DNN models, after being repaired with the findings of DeepTraversal, achieve better accuracy.
This paper summarizes eight design requirements for DNN testing criteria, taking into account distribution properties and practical concerns. We then propose a new criterion, NLC, that satisfies all of these design requirements. NLC treats a single DNN layer as the basic computational unit (rather than a single neuron) and captures four critical features of neuron output distributions. NLC thus denotes neural coverage, which more accurately describes how a neural network comprehends inputs via approximated distributions rather than individual neurons. We demonstrate that NLC correlates with the diversity of a test suite across multiple tasks (classification and generation) and data formats (image and text). Its capacity to discover DNN prediction errors is promising. Test input mutation guided by NLC leads to exposed erroneous behaviors of higher quality and diversity.
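A layer-as-unit coverage signal in the spirit of the above can be sketched as follows. This is an illustrative simplification, not the paper's exact NLC formulation: it tracks the covariance of a layer's neuron outputs and scores a new input by how much its activation vector would shift that estimate.

```python
import numpy as np

# Layer-wise coverage sketch: the basic unit is a whole layer, and the
# tracked state is the distribution (here, covariance) of its outputs.
rng = np.random.default_rng(2)

class LayerCoverage:
    def __init__(self):
        self.acts = []  # activation vectors observed so far for this layer

    def _cov(self, acts):
        return np.cov(np.asarray(acts), rowvar=False)

    def gain(self, act):
        # Coverage gain: Frobenius-norm shift of the layer's output
        # covariance if this activation vector were incorporated.
        before = self._cov(self.acts)
        after = self._cov(self.acts + [act])
        return np.linalg.norm(after - before)

    def update(self, act):
        self.acts.append(act)

layer = LayerCoverage()
for act in rng.normal(size=(200, 16)):   # typical activations of a 16-neuron layer
    layer.update(act)

typical_gain = layer.gain(rng.normal(size=16))
unusual_gain = layer.gain(rng.normal(size=16) * 10.0)  # out-of-distribution activation
print(typical_gain < unusual_gain)
```

Inputs whose activations reshape the layer's output distribution the most (like `unusual_gain` here) would be the ones a coverage-guided mutator retains.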
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
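The teacher-student mimicry that KD builds on can be sketched with the standard temperature-softened distillation objective (Hinton et al.); fairness-aware variants such as RELIANT would add debiasing terms on top of an objective like this. The logits and temperature below are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Student matches the teacher's temperature-softened output distribution.
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    # KL(teacher || student), scaled by T^2 to keep gradient magnitudes stable.
    return T**2 * np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()

teacher = np.array([[5.0, 1.0, -2.0]])
good_student = np.array([[4.8, 1.1, -1.9]])   # close to the teacher's outputs
bad_student = np.array([[-2.0, 5.0, 1.0]])    # disagrees with the teacher
print(distillation_loss(good_student, teacher) < distillation_loss(bad_student, teacher))
```

Because the loss directly rewards matching the teacher's distribution, any bias in the teacher's outputs is transferred to the student, which is exactly the failure mode fairness-aware KD aims to break.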
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that the rendered pixels at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which makes the supersampling an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features to significantly improve their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our proposed method can generate higher-quality supersampling results than current state-of-the-art methods without increasing the total number of ray-tracing samples.
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: First, we design a meta-architecture that decouples part features from things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Second, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Third, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ with a new part-whole cross-attention scheme to further boost part segmentation quality, implementing part-whole interaction via masked cross attention. Finally, extensive ablation studies and analyses demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the original Panoptic-PartFormer, Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
This paper presents the technologies of user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain, modeling the historical behaviors of users, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model that integrates expert rules from the knowledge graph to infer a user's next intent online. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.