Large, text-conditioned generative diffusion models have recently attracted considerable attention for their impressive performance in generating high-fidelity images from text alone. However, achieving high-quality results in a one-shot fashion is rarely feasible. Instead, text-guided image generation involves the user making many slight changes to the inputs in order to iteratively carve out the envisioned image. Yet, slight changes to the input prompt often lead to entirely different images being generated, so the artist's control is limited in its granularity. To provide flexibility, we present the Stable Artist, an image editing approach enabling fine-grained control of the image generation process. Its main component is semantic guidance (SEGA), which steers the diffusion process along a variable number of semantic directions. This allows for subtle edits to images, changes in composition and style, as well as optimization of the overall artistic conception. Furthermore, SEGA enables probing of latent spaces to gain insights into the representation of concepts learned by the model, even complex ones such as 'carbon emission'. We demonstrate the Stable Artist on several tasks, showcasing high-quality image editing and composition.
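As a rough sketch of the idea of steering the diffusion process along several semantic directions, the per-step noise estimate could be combined roughly as follows (the plain linear combination, function names, and scales are illustrative assumptions, not the exact SEGA formulation):

```python
import torch

def guided_noise_estimate(eps_uncond, eps_prompt, eps_concepts,
                          guidance_scale=7.5, edit_scales=None, directions=None):
    """Combine diffusion noise estimates into a single guided update.

    eps_uncond   -- noise estimate for the empty prompt
    eps_prompt   -- noise estimate for the user prompt
    eps_concepts -- list of noise estimates, one per edit concept
    directions   -- +1 to push the image toward a concept, -1 to push away
    """
    edit_scales = edit_scales or [1.0] * len(eps_concepts)
    directions = directions or [1.0] * len(eps_concepts)

    # standard classifier-free guidance toward the prompt
    eps = eps_uncond + guidance_scale * (eps_prompt - eps_uncond)

    # add one semantic guidance term per edit concept
    for eps_c, scale, sign in zip(eps_concepts, edit_scales, directions):
        eps = eps + sign * scale * (eps_c - eps_uncond)
    return eps
```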
While text-to-image synthesis currently enjoys great popularity among researchers and the general public, the security of these models has been neglected so far. Many text-guided image generation models rely on pre-trained text encoders from external sources, and their users trust that the retrieved models will behave as promised. Unfortunately, this might not be the case. We introduce backdoor attacks against text-guided generative models and demonstrate that their text encoders pose a major tampering risk. Our attacks only slightly alter an encoder so that no suspicious model behavior is apparent for image generations with clean prompts. By then inserting a single non-Latin character into the prompt, the adversary can trigger the model to either generate images with pre-defined attributes or images following a hidden, potentially malicious description. We empirically demonstrate the high effectiveness of our attacks on Stable Diffusion and highlight that the injection process of a single backdoor takes less than two minutes. Beyond its use as an attack, our approach can also force an encoder to forget phrases related to certain concepts, such as nudity or violence, and thus help to make image generation safer.
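A minimal sketch of how such a backdoor might be injected by fine-tuning the text encoder against a frozen copy of itself (the trigger character, loss weighting, and training-loop details are illustrative assumptions, not the exact procedure of the paper):

```python
import copy
import random
import torch
import torch.nn.functional as F

def poison_text_encoder(encoder, tokenize, clean_prompts, target_prompt,
                        trigger="\u043e", steps=200, lr=1e-5):
    """Fine-tune `encoder` so prompts containing `trigger` map to the
    embedding of `target_prompt`, while clean prompts stay unchanged.

    `encoder` maps tokenized prompts to pooled embeddings of shape (batch, dim);
    a frozen copy of it serves as the teacher.
    """
    teacher = copy.deepcopy(encoder).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.AdamW(encoder.parameters(), lr=lr)
    target_emb = teacher(tokenize([target_prompt]))

    for _ in range(steps):
        batch = random.sample(clean_prompts, min(8, len(clean_prompts)))
        poisoned = [p.replace("o", trigger, 1) for p in batch]  # insert the trigger

        utility = F.mse_loss(encoder(tokenize(batch)), teacher(tokenize(batch)))
        backdoor = F.mse_loss(encoder(tokenize(poisoned)),
                              target_emb.expand(len(poisoned), -1))
        loss = utility + backdoor
        opt.zero_grad(); loss.backward(); opt.step()
    return encoder
```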
Text-guided image generation models such as DALL-E 2 and Stable Diffusion have recently received much attention from academia and the general public. Provided with a textual description, these models are able to generate high-quality images depicting a wide variety of concepts and styles. However, such models are trained on large amounts of public data and implicitly learn relationships from their training data that are not immediately obvious. We demonstrate that common multimodal models implicitly learn biases which can be triggered and injected into the generated images simply by replacing single characters in the textual description with visually similar non-Latin characters. These so-called homoglyph replacements enable malicious users or service providers to induce biases into the generated images or even render the whole generation process useless. We practically illustrate such attacks on DALL-E 2 and Stable Diffusion as examples of text-guided image generation models and further show that CLIP behaves similarly. Our results further suggest that text encoders trained on multilingual data provide a way to mitigate the effects of homoglyph replacements.
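The substitution itself is trivial to implement; a small illustrative helper (the mapping below lists only a few Cyrillic look-alikes and is purely an example):

```python
# A few Latin characters and visually near-identical non-Latin code points.
HOMOGLYPHS = {
    "a": "\u0430",  # CYRILLIC SMALL LETTER A
    "e": "\u0435",  # CYRILLIC SMALL LETTER IE
    "o": "\u043e",  # CYRILLIC SMALL LETTER O
    "p": "\u0440",  # CYRILLIC SMALL LETTER ER
    "c": "\u0441",  # CYRILLIC SMALL LETTER ES
}

def inject_homoglyph(prompt: str, char: str) -> str:
    """Replace the first occurrence of `char` with its non-Latin look-alike."""
    return prompt.replace(char, HOMOGLYPHS[char], 1)

print(inject_homoglyph("a photo of an actor", "o"))
# Renders identically to the original prompt, but the text encoder
# sees a different Unicode code point.
```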
Since deep learning is now used in many real-world applications, research increasingly focuses on the privacy of deep learning models and how to prevent attackers from obtaining sensitive information about the training data. However, image-text models such as CLIP have not yet been studied in the context of privacy attacks. While membership inference attacks aim to tell whether a specific data point was used for training, we introduce a new kind of privacy attack, named identity inference attack (IDIA), designed for multimodal image-text models such as CLIP. With IDIA, an attacker can reveal whether a particular person was part of the training data by querying the model in a black-box fashion with different images of the same person. By letting the model choose from a wide variety of possible text labels, the attacker can probe whether the model recognizes the person and, therefore, whether the person was used for training. Through several experiments on CLIP, we show that the attacker can identify individuals used for training with very high accuracy and that the model learns to associate names with the depicted persons. Our experiments show that multimodal image-text models indeed leak sensitive information about their training data and should therefore be handled with care.
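A sketch of the basic black-box query procedure using the open-source CLIP package (the prompt template, decision threshold, and candidate-name setup are illustrative assumptions):

```python
import torch
import clip
from PIL import Image

def identity_inference(image_paths, true_name, candidate_names,
                       threshold=0.5, device="cpu"):
    """Guess whether `true_name` appeared in CLIP's training data.

    Queries the model with several images of the person and many candidate
    names; frequent correct name predictions suggest membership.
    """
    model, preprocess = clip.load("ViT-B/32", device=device)
    prompts = [f"a photo of {name}" for name in candidate_names]
    text = clip.tokenize(prompts).to(device)

    hits = 0
    with torch.no_grad():
        text_features = model.encode_text(text)
        text_features /= text_features.norm(dim=-1, keepdim=True)
        for path in image_paths:
            image = preprocess(Image.open(path)).unsqueeze(0).to(device)
            image_features = model.encode_image(image)
            image_features /= image_features.norm(dim=-1, keepdim=True)
            best = (image_features @ text_features.T).argmax().item()
            hits += int(candidate_names[best] == true_name)

    return hits / len(image_paths) >= threshold
```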
Fuzzy hashes are an important tool in digital forensics and are used for approximate matching to determine the similarity between digital artifacts. They convert the byte code of files into computable strings, which makes them particularly interesting for intelligent machine processing. In this work, we propose deep learning approximate matching (DLAM), which detects anomalies in fuzzy hashes with higher accuracy than conventional approaches. Besides the well-known application of clustering malware, we show that fuzzy hashes and deep learning are indeed well suited to classify files according to the presence of certain content, such as malware. DLAM relies on transformer-based models from the field of natural language processing and outperforms existing methods. Traditional fuzzy hashes such as TLSH and ssdeep have a limited size, which is relatively small compared to the overall file size, and cannot detect file anomalies. DLAM, however, is able to detect such file correlations in the computed fuzzy hashes of TLSH and ssdeep, even for anomalies making up less than 15% of the file. It achieves results comparable to state-of-the-art fuzzy hashing algorithms while relying on more efficient hash computations, and can therefore be used at a much larger scale.
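A compact sketch of the kind of classifier this suggests, i.e. a character-level transformer encoder over fuzzy-hash strings (the hyperparameters, ASCII vocabulary, and pooling choice are illustrative assumptions):

```python
import torch
import torch.nn as nn

class HashClassifier(nn.Module):
    """Classify fuzzy-hash strings (e.g., ssdeep or TLSH output) character by character."""

    def __init__(self, vocab_size=128, d_model=64, nhead=4, num_layers=2,
                 max_len=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, hashes):
        # Encode each hash string as a padded sequence of ASCII code points.
        tokens = torch.zeros(len(hashes), self.pos.shape[1], dtype=torch.long)
        for i, h in enumerate(hashes):
            codes = [min(ord(c), 127) for c in h[: self.pos.shape[1]]]
            tokens[i, : len(codes)] = torch.tensor(codes)
        x = self.embed(tokens) + self.pos[:, : tokens.shape[1]]
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # mean-pool, then predict benign vs. anomalous

model = HashClassifier()
logits = model(["3:hRMs3FsRc2:hRpRc2"])  # a made-up ssdeep-style hash string
```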
Model inversion attacks (MIAs) aim to create synthetic images that reflect the class characteristics in the private training data of a target classifier by exploiting the model's learned knowledge. Previous research has developed generative MIAs that use generative adversarial networks (GANs) as image priors tailored to a specific target model. This makes the attacks time- and resource-consuming, inflexible, and susceptible to distribution shifts between datasets. To overcome these drawbacks, we propose Plug & Play Attacks, which relax the dependency between the target model and the image prior and enable the use of a single GAN to attack a wide range of targets, requiring only minor adjustments to the attack. Moreover, we show that powerful MIAs are possible even with publicly available pre-trained GANs and under strong distributional shifts, for which previous approaches fail to produce meaningful results. Our extensive evaluation confirms the robustness and flexibility of Plug & Play Attacks and their ability to create high-quality images revealing sensitive class characteristics.
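The core optimization loop of such a GAN-based model inversion attack can be sketched as follows (the generator and classifier interfaces and all hyperparameters are placeholders; the actual method adds further components such as robust losses and result selection):

```python
import torch
import torch.nn.functional as F

def invert_class(generator, target_model, target_class,
                 latent_dim=512, num_candidates=32, steps=500, lr=0.05):
    """Optimize GAN latent vectors so that generated images are classified
    as `target_class` by the target model."""
    z = torch.randn(num_candidates, latent_dim, requires_grad=True)
    labels = torch.full((num_candidates,), target_class, dtype=torch.long)
    opt = torch.optim.Adam([z], lr=lr)  # only the latents are updated

    for _ in range(steps):
        images = generator(z)                   # pre-trained GAN image prior
        logits = target_model(images)           # classifier under attack
        loss = F.cross_entropy(logits, labels)  # push images toward the class
        opt.zero_grad(); loss.backward(); opt.step()

    # Keep the candidate the target model is most confident about.
    with torch.no_grad():
        probs = F.softmax(target_model(generator(z)), dim=1)[:, target_class]
        return generator(z[probs.argmax()].unsqueeze(0))
```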
Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model. Knowing this may indeed lead to a privacy breach. Arguably, most MIAs exploit the model's prediction scores - the probability of each output given some input - following the intuition that a trained model behaves differently on its training data. We argue that this is a fallacy for many modern deep network architectures; for example, ReLU-type neural networks almost always yield high prediction scores far away from the training data. Consequently, MIAs will miserably fail, since this behavior leads to high false-positive rates not only on known domains but also on out-of-distribution data, and implicitly acts as a defense against MIAs. Specifically, using generative adversarial networks, we are able to produce a potentially infinite number of samples that are falsely classified as part of the training data. In other words, the threat of MIAs is overestimated, and less information is leaked than previously assumed. Moreover, there is in fact a trade-off between the overconfidence of classifiers and their susceptibility to MIAs: the more classifiers know when they do not know, making low-confidence predictions far away from the training data, the more they reveal the training data.
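The prediction-score attack referred to here reduces to a simple threshold rule, and the argument is that samples far from the data, e.g. GAN outputs, can exceed the threshold as well (the threshold and model interface are illustrative):

```python
import torch
import torch.nn.functional as F

def score_based_mia(model, x, threshold=0.9):
    """Naive membership inference: flag samples in `x` as training members
    if the model's maximum softmax score exceeds `threshold`."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values > threshold

# The pitfall: ReLU networks also produce near-certain predictions far away
# from the training data, so synthetic samples (e.g. drawn from a GAN) can be
# flagged as "members" under this rule, inflating the false-positive rate.
# fake = generator(torch.randn(64, latent_dim))
# print(score_based_mia(classifier, fake).float().mean())
```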
Apple recently revealed NeuralHash, its deep perceptual hashing system for detecting child sexual abuse material (CSAM) on user devices before files are uploaded to its iCloud service. Public criticism regarding the protection of user privacy and the system's reliability arose quickly. This paper presents the first comprehensive empirical analysis of deep perceptual hashing based on NeuralHash. Specifically, we show that current deep perceptual hashing may not be robust. An adversary can manipulate the hash values by applying slight changes to images, either induced by gradient-based approaches or simply by performing standard image transformations, forcing or preventing hash collisions. Such attacks allow malicious actors to easily exploit the detection system: from hiding abusive material to framing innocent users, everything is possible. Moreover, using the hash values, inferences can still be made about the data stored on user devices. In our view, based on our results, deep perceptual hashing in its current form is generally not ready for robust client-side scanning and should not be used from a privacy perspective.
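A sketch of the gradient-based manipulation, assuming a differentiable surrogate of the hashing network whose real-valued outputs are binarized into hash bits (the loss, step size, and perturbation budget are illustrative assumptions):

```python
import torch

def nudge_hash(hash_model, image, target_bits, steps=200, lr=0.01, eps=0.05):
    """Perturb `image` so that sign(hash_model(image)) approaches `target_bits`.

    `hash_model` maps an image batch to real-valued outputs whose signs form
    the perceptual hash; `target_bits` holds the desired bits in {-1, +1}.
    Setting `target_bits` to another image's hash forces a collision; setting
    it to the negated original hash hides the file from detection.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        out = hash_model(image + delta)
        # Hinge-style loss: push every output past zero in the target direction.
        loss = torch.clamp(-target_bits * out, min=0).sum()
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually negligible
    return (image + delta).detach()
```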
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to the development of particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands of all established quantum community detection approaches, we introduce a novel QUBO-based approach that requires only as many qubits as there are nodes and is represented by a QUBO matrix as sparse as the input graph's adjacency matrix. This substantial improvement in the sparsity of the QUBO matrix, which is typically very dense in related work, is achieved through the novel concept of separation-nodes. Instead of assigning every node to a community directly, this approach relies on the identification of a separation-node set, which -- upon its removal from the graph -- yields a set of connected components representing the core components of the communities. A greedy heuristic then assigns the nodes from the separation-node set to the identified community cores; subsequent experimental results provide a proof of concept. This work hence presents a promising approach to NISQ-ready quantum community detection, catalyzing the application of quantum computers to the network structure analysis of large-scale, real-world problem instances.
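The classical post-processing step described here - removing the separation nodes, taking the remaining connected components as community cores, and greedily re-attaching the separation nodes - could look roughly like this with networkx (the tie-breaking rule is an assumed detail):

```python
import networkx as nx

def communities_from_separation_nodes(graph, separation_nodes):
    """Derive communities from a separation-node set.

    Removing the separation nodes yields connected components that serve as
    community cores; separation nodes are then greedily assigned to the core
    they share the most edges with.
    """
    core_graph = graph.copy()
    core_graph.remove_nodes_from(separation_nodes)
    communities = [set(c) for c in nx.connected_components(core_graph)]

    for node in separation_nodes:
        neighbors = set(graph.neighbors(node))
        # Assign to the community with the largest overlap among the neighbors.
        best = max(range(len(communities)),
                   key=lambda i: len(communities[i] & neighbors))
        communities[best].add(node)
    return communities

G = nx.karate_club_graph()
print(communities_from_separation_nodes(G, separation_nodes={0, 33}))
```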
In this work, a method for obtaining pixel-wise error bounds in Bayesian regularization of inverse imaging problems is introduced. The proposed method employs estimates of the posterior variance together with techniques from conformal prediction in order to obtain coverage guarantees for the error bounds, without making any assumption on the underlying data distribution. It applies generally to Bayesian regularization approaches, independent of, e.g., the concrete choice of the prior. Furthermore, the coverage guarantees can also be obtained when only approximate sampling from the posterior is possible. In particular, this enables the proposed framework to incorporate any learned prior in a black-box manner. Guaranteed coverage without assumptions on the underlying distributions is only achievable because the magnitude of the error bounds is, in general, unknown in advance. Nevertheless, experiments with multiple regularization approaches presented in the paper confirm that, in practice, the obtained error bounds are rather tight. To realize the numerical experiments, this work also introduces a novel primal-dual Langevin algorithm for sampling from non-smooth distributions.
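A simplified sketch of how coverage-calibrated pixel-wise bounds can be obtained from posterior statistics via split conformal prediction (the normalized-residual score and the data layout are assumptions; the paper's construction may differ in detail):

```python
import numpy as np

def conformal_pixel_bounds(cal_means, cal_stds, cal_truth,
                           test_mean, test_std, alpha=0.1):
    """Calibrate pixel-wise error bounds with split conformal prediction.

    cal_means, cal_stds, cal_truth: arrays of shape (n_cal, H, W) holding the
    posterior mean, posterior std, and ground truth of calibration images.
    Returns lower/upper bounds for a test image such that, with probability
    at least 1 - alpha, all pixels of the unknown truth fall inside them,
    without assumptions on the underlying data distribution.
    """
    eps = 1e-8
    # Nonconformity score per calibration image: worst normalized pixel residual.
    scores = np.max(np.abs(cal_truth - cal_means) / (cal_stds + eps), axis=(1, 2))
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n          # finite-sample correction
    q_hat = np.quantile(scores, min(q_level, 1.0))

    lower = test_mean - q_hat * (test_std + eps)
    upper = test_mean + q_hat * (test_std + eps)
    return lower, upper
```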