The rapid progress of generative adversarial networks (GANs) has raised concerns about their misuse for malicious purposes, particularly in creating fake face images. Although many proposed methods successfully detect GAN-based synthetic images, they are still limited by the need for large training datasets of fake images and by the challenge of generalizing detectors to unseen face images. In this paper, we propose a new approach that explores the asynchrony of the frequency spectra across color channels, which is simple yet effective for training both unsupervised and supervised learning models to distinguish GAN-based synthetic images. We further investigate the transferability of a model trained on the proposed features in one source domain and validated on another target domain with prior knowledge of the feature distribution. Our experimental results show that the discrepancy of spectra in the frequency domain is a practical artifact for effectively detecting various types of GAN-based generated images.
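The channel-wise spectral cue described in the abstract above can be illustrated with a minimal numpy sketch: compute an azimuthally averaged log power spectrum for each color channel and concatenate them into a feature vector, so that cross-channel spectral differences become comparable. The function names, bin count, and feature layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def azimuthal_spectrum(channel: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Azimuthally averaged log power spectrum of one image channel."""
    f = np.fft.fftshift(np.fft.fft2(channel))
    power = np.log1p(np.abs(f) ** 2)
    h, w = channel.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx)                    # radial frequency per pixel
    bins = np.linspace(0, r.max() + 1e-8, n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    spec = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return spec / np.maximum(counts, 1)             # mean power per radial bin

def channel_spectrum_features(rgb: np.ndarray) -> np.ndarray:
    """Concatenate per-channel spectra; asynchrony across channels is the cue."""
    return np.concatenate([azimuthal_spectrum(rgb[..., c]) for c in range(3)])

rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
feat = channel_spectrum_features(img)
print(feat.shape)  # (192,)
```

Such a vector could feed either a clustering method (unsupervised) or a simple classifier (supervised), matching the two settings the abstract mentions.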
Deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control. However, deep learning advances have also been employed to create software that can threaten privacy, democracy, and national security. One of those deep-learning-powered applications that has emerged recently is deepfake. Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. The proposal of technologies that can automatically detect and assess the integrity of digital visual media is therefore indispensable. This paper presents a survey of algorithms used to create deepfakes and, more importantly, methods proposed to detect deepfakes in the literature to date. We present extensive discussions on the challenges, research trends, and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods to deal with increasingly challenging deepfakes.
While recent advances in generative models have brought diverse benefits to society, they can also be misused for malicious purposes such as fraud, defamation, and fake news. To prevent this, vigorous research has been conducted to distinguish generated images from real images, but challenges remain in distinguishing unseen generated images outside the training settings. This limitation occurs due to the data dependency arising from the model overfitting to the training data generated by specific GANs. To overcome this problem, we propose a novel framework that employs a self-supervised scheme. Our proposed method reconstructs high-quality artificial fingerprints of GAN images for detailed analysis via an artificial fingerprint generator, and distinguishes GAN images by learning the reconstructed artificial fingerprints. To improve the generalization of the artificial fingerprint generator, we build multiple autoencoders with different numbers of upconvolution layers. With numerous ablation studies, the robust generalization of our method is validated by outperforming previous state-of-the-art algorithms, even without utilizing GAN images from the training dataset.
Current high-fidelity generation and high-accuracy detection of deepfake images are engaged in an arms race. We believe that producing highly realistic and "detection-evasive" deepfakes can serve the ultimate goal of improving future-generation deepfake detection capability. In this paper, we propose a simple yet powerful pipeline that reduces the artifact patterns of fake images without harming image quality by performing implicit spatial-domain notch filtering. We first show that frequency-domain notch filtering, although effective for our task, is infeasible in practice due to the manual design required for the notch filters. We therefore resort to a learning-based approach that reproduces the notch filtering effect, but solely in the spatial domain. We adopt overwhelming spatial noise addition to break the periodic noise patterns and deep image filtering to reconstruct the noise-free fake images, and we name our method DeepNotch. Deep image filtering provides a dedicated filter for each pixel in the noisy image, producing filtered images of high fidelity compared with their deepfake counterparts. Moreover, we also use the semantic information of images to generate an adversarial guidance map to add noise intelligently. Our large-scale evaluation of three representative state-of-the-art deepfake detection methods (tested on 16 types of deepfakes) demonstrates that our technique significantly reduces the accuracy of these three fake image detection methods, by 36.79% on average and up to 97.02% in the best case.
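The classical frequency-domain notch filtering that the abstract above takes as its starting point can be sketched as follows: zero out small neighborhoods around manually chosen spectral peaks (and their conjugate-symmetric counterparts), then invert the transform. The manual peak selection in this sketch is exactly the limitation that motivates a learned spatial-domain substitute; the toy signal and all names are illustrative.

```python
import numpy as np

def notch_filter(img: np.ndarray, peaks, radius: float = 3.0) -> np.ndarray:
    """Zero out small disks around given peaks (and their symmetric
    counterparts) in the centered 2D spectrum, then invert."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    y, x = np.indices((h, w))
    cy, cx = h // 2, w // 2
    mask = np.ones((h, w))
    for py, px in peaks:
        for sy, sx in ((py, px), (2 * cy - py, 2 * cx - px)):  # conjugate pair
            mask[np.hypot(y - sy, x - sx) <= radius] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Toy image: a smooth ramp plus a periodic (grid-like) artifact pattern.
h = w = 64
yy, xx = np.indices((h, w))
clean = xx / w
periodic = 0.5 * np.cos(2 * np.pi * 8 * xx / w)
noisy = clean + periodic
# The periodic term appears as peaks at +/-8 cycles on the horizontal axis.
filtered = notch_filter(noisy, peaks=[(h // 2, w // 2 + 8)])
print(np.abs(filtered - clean).mean() < np.abs(noisy - clean).mean())
```

The hand-picked peak coordinates make the manual-design burden concrete: every new artifact pattern would need its own peak list.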
Generative adversarial networks (GANs) are capable of generating images visually indistinguishable from real images. However, recent studies have shown that generated and real images share significant differences in the frequency domain. In this paper, we explore the influence of high-frequency components in GAN training. According to our observations, during the training of most GANs, severe high-frequency differences make the discriminator focus excessively on high-frequency components, hindering the generator from fitting the low-frequency components that are important for learning image content. We then propose two simple yet effective frequency operations to eliminate the side effects caused by high-frequency differences in GAN training: high-frequency confusion (HFC) and high-frequency filter (HFF). The proposed operations are general and can be applied to most existing GANs at a small cost. The advanced performance of the proposed operations is verified across multiple loss functions, network architectures, and datasets. Specifically, the proposed HFF achieves FID improvements of 13.2% on CelebA (128x128) unconditional generation based on SNGAN, 30.2% based on SSGAN, and 69.3% based on InfoMaxGAN.
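An HFF-style low-pass operation of the kind described above can be sketched with a simple radial mask in the Fourier domain, so that a discriminator presented with the filtered image sees mainly low-frequency content. The cutoff parameter and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def high_frequency_filter(img: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Suppress frequencies beyond keep_ratio * Nyquist (an HFF-style
    low-pass), keeping the low-frequency image content."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)
    mask = (r <= keep_ratio * min(h, w) / 2).astype(float)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
low = high_frequency_filter(img)
# Removing frequency coefficients can only decrease total energy (Parseval).
print(np.sum(low ** 2) <= np.sum(img ** 2) + 1e-9)  # True
```

In a training loop, one would apply the same filter to both real and generated batches before the discriminator, so neither side can exploit high-frequency discrepancies.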
A novel method for detecting CNN-generated images, called Attentive PixelHop (or A-PixelHop), is proposed in this work. It has three advantages: 1) low computational complexity and a small model size, 2) high detection performance against a wide range of generative models, and 3) mathematical transparency. A-PixelHop is designed under the assumption that it is difficult to synthesize high-quality high-frequency components in local regions. It contains four building modules: 1) selecting edge/texture blocks that contain significant high-frequency components, 2) applying multiple filter banks to them to obtain rich sets of spatial-spectral responses as features, 3) feeding features to multiple binary classifiers to obtain a set of soft decisions, and 4) developing an effective ensemble scheme to fuse the soft decisions into the final decision. Experimental results show that A-PixelHop outperforms state-of-the-art methods in detecting CycleGAN-generated images. Furthermore, it generalizes well to unseen generative models and datasets.
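The first building module mentioned above, selecting edge/texture blocks with significant high-frequency content, can be approximated with a Laplacian-energy score over non-overlapping blocks. The block size, scoring function, and top-k selection here are illustrative stand-ins, not the paper's actual procedure.

```python
import numpy as np

def laplacian_energy(block: np.ndarray) -> float:
    """Mean squared response of a discrete Laplacian: a cheap proxy for
    the amount of high-frequency content in a block."""
    lap = (-4 * block[1:-1, 1:-1]
           + block[:-2, 1:-1] + block[2:, 1:-1]
           + block[1:-1, :-2] + block[1:-1, 2:])
    return float(np.mean(lap ** 2))

def select_texture_blocks(img: np.ndarray, size: int = 8, top_k: int = 4):
    """Split an image into non-overlapping blocks and keep the top_k with
    the most high-frequency energy (edge/texture blocks)."""
    h, w = img.shape
    scored = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            block = img[i:i + size, j:j + size]
            scored.append((laplacian_energy(block), (i, j)))
    scored.sort(key=lambda t: -t[0])
    return [pos for _, pos in scored[:top_k]]

# Flat image with one textured quadrant: the textured blocks should win.
rng = np.random.default_rng(2)
img = np.zeros((32, 32))
img[:16, :16] = rng.random((16, 16))
tops = select_texture_blocks(img, size=8, top_k=4)
print(all(i < 16 and j < 16 for i, j in tops))  # True
```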
Generative adversarial networks have attracted researchers' attention due to their state-of-the-art performance in generating new images using only a dataset of the target distribution. It has been shown that there is a dissimilarity between the spectrum of real images and that of fake ones. Since the Fourier transform is a bijective mapping, it is a fair conclusion that the model has a significant problem in learning the original distribution. In this work, we investigate the possible reasons for this drawback in the architecture and mathematical theory of current GANs. We then propose a new model to reduce the discrepancy between the spectra of real and fake images. To that end, we design a brand-new architecture for the frequency domain using the blueprint of geometric deep learning. We then show a promising improvement in the quality of generated images by taking the Fourier-domain representation of the original data as a principal feature in the training process.
Despite significant advances in deep-learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer from moderate to significant performance degradation on low-quality compressed deepfake images. Because of the limited information in low-quality images, detecting low-quality deepfakes remains an important challenge. In this work, we apply frequency-domain learning and optimal transport theory in knowledge distillation (KD) to specifically improve the detection of low-quality compressed deepfake images. We explore the transfer learning capability in KD to enable a student network to learn discriminative features from low-quality images effectively. In particular, we propose the Attention-based Deepfake detection Distiller (ADD), which consists of two novel distillations: 1) frequency attention distillation, which effectively retrieves the removed high-frequency components in the student network, and 2) multi-view attention distillation, which creates multiple attention vectors by slicing the teacher's and student's tensors under different views to transfer the teacher tensor's distribution to the student more efficiently. Our extensive experimental results demonstrate that our approach outperforms state-of-the-art baselines in detecting low-quality compressed deepfake images.
Our goal with this survey is to provide an overview of the state-of-the-art deep learning technologies for face generation and editing. We cover popular recent architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow the generation of high-quality face images and offer rich interfaces for controllable semantics editing while preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
Generative Adversarial Networks (GANs) have paved the path towards entirely new media generation capabilities at the forefront of image, video, and audio synthesis. However, they can also be misused and abused to fabricate elaborate lies, capable of stirring up the public debate. The threat posed by GANs has sparked the need to discern between genuine content and fabricated content. Previous studies have tackled this task with classical machine learning techniques, such as k-nearest neighbours and eigenfaces, which unfortunately did not prove very effective. Subsequent methods have focused on leveraging frequency decompositions, i.e., discrete cosine transform, wavelets, and wavelet packets, to preprocess the input features for classifiers. However, existing approaches only rely on isotropic transformations. We argue that, since GANs primarily utilize isotropic convolutions to generate their output, they leave clear traces, their fingerprint, in the coefficient distribution on sub-bands extracted by anisotropic transformations. We employ the fully separable wavelet transform and multiwavelets to obtain the anisotropic features to feed to standard CNN classifiers. Lastly, we find the fully separable transform capable of improving the state-of-the-art.
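The distinction drawn above between isotropic and anisotropic decompositions can be sketched with a Haar transform whose row and column depths are chosen independently, which is the structural idea behind fully separable transforms. This toy implementation is a sketch under that assumption, not the authors' feature extractor (which uses richer wavelets and multiwavelets).

```python
import numpy as np

def haar_step(a: np.ndarray, axis: int) -> np.ndarray:
    """One orthonormal Haar analysis step along `axis`: low band first,
    detail band second."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2)
    return np.moveaxis(np.concatenate([lo, hi], axis=0), 0, axis)

def fully_separable_haar(img: np.ndarray, row_levels: int, col_levels: int) -> np.ndarray:
    """Anisotropic (fully separable) transform: rows and columns are
    decomposed to *independent* depths, unlike the square-by-square 2D DWT."""
    out = img.astype(float)
    n = out.shape[1]
    for _ in range(row_levels):          # repeatedly split the row low band
        out[:, :n] = haar_step(out[:, :n], axis=1)
        n //= 2
    m = out.shape[0]
    for _ in range(col_levels):          # columns get their own depth
        out[:m, :] = haar_step(out[:m, :], axis=0)
        m //= 2
    return out

rng = np.random.default_rng(3)
img = rng.random((16, 16))
coeffs = fully_separable_haar(img, row_levels=3, col_levels=1)
# Orthonormal steps preserve total energy (Parseval).
print(np.isclose(np.sum(coeffs ** 2), np.sum(img ** 2)))  # True
```

Setting `row_levels != col_levels` yields the elongated, direction-sensitive sub-bands in which GAN fingerprints are argued to concentrate.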
Online media data, in the forms of images and videos, are becoming mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This motivates a growing interest of research in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery could be divided into image tampering and Deepfake techniques. The former typically moves or erases the visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of the current media tampering detection approaches, and discuss the challenges and trends in this field for future research.
Full face synthesis and partial face manipulation by virtue of generative adversarial networks (GANs) and their variants have raised wide public concern. In the multimedia forensics area, detecting and ultimately locating image forgery has become an imperative task. In this work, we investigate the architectures of existing GAN-based face manipulation methods and observe that the imperfection of their upsampling methods can serve as an important asset for GAN-synthesized fake image detection and forgery localization. Based on this fundamental observation, we propose a novel approach, termed FakeLocator, to obtain high localization accuracy at full resolution on manipulated facial images. To the best of our knowledge, this is the first attempt to solve the GAN-based fake localization problem with a gray-scale fakeness map, which preserves more information about forgery regions. To improve the universality of FakeLocator across multifarious facial attributes, we introduce an attention mechanism to guide the training of the model. To improve its universality across different deepfake methods, we propose partial data augmentation and single-sample clustering on the training images. Experimental results on the popular FaceForensics++ and DFFD datasets and seven different state-of-the-art GAN-based face generation methods show the effectiveness of our method. Compared with the baselines, our method performs better on various metrics. Moreover, the proposed method is robust against various real-world facial image degradations such as JPEG compression, low resolution, noise, and blur.
Generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D), where D is asked to differentiate whether an image comes from real data or is produced by G. Under such a formulation, D plays as the rule maker and hence tends to dominate the competition. Towards a fairer game in GANs, we propose a new paradigm for adversarial training, which makes G assign a task to D as well. Specifically, given an image, we expect D to extract representative features that can be adequately decoded by G to reconstruct the input. That way, instead of learning freely, D is urged to align with the view of G for domain classification. Experimental results on various datasets demonstrate the substantial superiority of our approach over the baselines. For instance, we improve the FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on LSUN Church. We believe that the pioneering attempt presented in this work could inspire the community with better designed generator-leading tasks for GAN improvement.
In recent years, with the rapid development of face editing and generation, more and more fake videos are circulating on social media, which has caused extreme public concern. Existing frequency-domain face forgery detection methods find that, compared with real images, GAN-forged images exhibit obvious grid-like visual artifacts in the spectrum. But for synthesized videos, these methods confine themselves to a single frame and pay little attention to the most discriminative parts and temporal frequency clues among different frames. To take full advantage of the rich information in video sequences, this paper performs video forgery detection on both the spatial and temporal frequency domains and proposes a Discrete Cosine Transform-based Forgery Clue Augmentation Network (FCAN-DCT) to achieve a more comprehensive spatial-temporal feature representation. FCAN-DCT consists of a backbone network and two branches: a Compact Feature Extraction (CFE) module and a Frequency Temporal Attention (FTA) module. We conduct thorough experimental assessments on two visible-light (VIS) datasets, WildDeepfake and Celeb-DF (v2), as well as our self-built video forgery dataset DeepfakeNIR, which is the first video forgery dataset in the near-infrared modality. The experimental results demonstrate the effectiveness of our method in detecting forged videos in both VIS and NIR scenarios.
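The per-frame DCT features at the heart of a spatial-temporal frequency approach like the one above can be illustrated with an orthonormal DCT-II: transform each frame, then stack the coefficient maps and look at how each frequency bin varies over time. The feature definitions below are illustrative sketches, not the network's actual modules.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix: row k holds cos(pi*(2j+1)*k/(2n)), scaled."""
    j = np.arange(n)
    m = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(n)
    m[1:] *= np.sqrt(2 / n)
    return m

def frame_dct_energy(frame: np.ndarray) -> np.ndarray:
    """2D DCT of a square frame; log-magnitude coefficients serve as a
    frequency-domain feature map for that frame."""
    d = dct_matrix(frame.shape[0])
    coeffs = d @ frame @ d.T              # separable 2D DCT-II
    return np.log1p(np.abs(coeffs))

# Temporal-frequency clue: stack per-frame DCT maps and measure how each
# frequency bin evolves across frames.
rng = np.random.default_rng(4)
video = rng.random((5, 16, 16))           # 5 toy frames
features = np.stack([frame_dct_energy(f) for f in video])
temporal_std = features.std(axis=0)       # per-bin variation over time
print(features.shape, temporal_std.shape)
```

A forged video tends to show abnormal variation in specific frequency bins across frames, which is the kind of signal a temporal-attention branch can latch onto.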
As neural networks become able to generate realistic artificial images, they have the potential to improve movies, music, and video games and make the internet an even more creative and inspiring place. Yet the latest technology also potentially enables new digital ways to lie. In response, a multitude of promising forensic-tool approaches has emerged to identify artificial images and other content. Previous work primarily relies on pixel-space CNNs or the Fourier transform. To the best of our knowledge, synthesized fake image analysis and detection methods based on a multi-scale wavelet representation, localized in both space and frequency, have been absent thus far. The wavelet transform conserves spatial information to a degree, which allows us to present a new analysis: comparing the wavelet coefficients of real and fake images is interpretable, and significant differences are identified. Additionally, this paper proposes to learn a model for the detection of synthetic images based on the wavelet-packet representation of natural and GAN-generated images. As demonstrated on the FFHQ, CelebA, and LSUN source identification problems, our lightweight forensic classifiers exhibit competitive or improved performance at comparably small network sizes. Furthermore, we study the binary FaceForensics++ fake-detection problem.
Can deep learning models achieve greater generalization if their training is guided by reference to human perceptual abilities? And how can this be done in a practical manner? This paper proposes a first-ever training strategy to ConveY Brain Oversight to Raise Generalization (CYBORG). This new training approach incorporates human-annotated saliency maps into a CYBORG loss function that guides the model toward learning features from image regions that humans use when solving the given visual task. The Class Activation Mapping (CAM) mechanism is used to probe the model's current saliency in each training batch, compare it with human saliency, and penalize the model for large differences. Results on the task of synthetic face detection show that the CYBORG loss leads to significant improvement in performance on unseen samples consisting of face images generated by six generative adversarial networks (GANs), across multiple classification network architectures. We also show that scaling up the training data with the standard loss by even seven times cannot beat the accuracy of the CYBORG loss. As a side effect, we observed that adding explicit region annotation to the task of synthetic face detection increased human classification performance. This work opens a new area of research on how to incorporate human vision into loss functions. All data, code, and pre-trained models used in this work are offered with the paper.
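The core idea above, penalizing disagreement between the model's CAM and a human saliency map, can be sketched as a blended loss. The normalization, L1 penalty, and weighting below are hedged simplifications for illustration, not the paper's exact CYBORG formulation.

```python
import numpy as np

def normalize_map(m: np.ndarray) -> np.ndarray:
    """Scale a non-negative heatmap to unit sum so maps are comparable."""
    m = np.clip(m, 0.0, None)
    return m / max(m.sum(), 1e-8)

def cyborg_style_loss(class_loss: float,
                      model_cam: np.ndarray,
                      human_saliency: np.ndarray,
                      alpha: float = 0.5) -> float:
    """Blend the usual classification loss with an L1 penalty between the
    model's CAM and the human-annotated saliency map."""
    cam = normalize_map(model_cam)
    hum = normalize_map(human_saliency)
    saliency_penalty = np.abs(cam - hum).sum()
    return (1 - alpha) * class_loss + alpha * saliency_penalty

human = np.zeros((7, 7)); human[2:5, 2:5] = 1.0   # annotator looked here
aligned = human.copy()                            # CAM agrees with the human
misaligned = np.zeros((7, 7)); misaligned[0, 0] = 1.0
l_good = cyborg_style_loss(0.3, aligned, human)
l_bad = cyborg_style_loss(0.3, misaligned, human)
print(l_good < l_bad)  # True: agreement with human saliency is rewarded
```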
Face forgery detection plays an important role in personal privacy and social security. With the development of adversarial generative models, high-quality forgery images are becoming more and more indistinguishable from real ones to humans. Existing methods typically regard the forgery detection task as a common binary or multi-label classification problem and ignore diverse multi-modality forgery image types, e.g. visible light spectrum and near-infrared scenarios. In this paper, we propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD), which can effectively learn a robust patch-based hybrid domain representation to enhance forgery authentication in multiple-modality scenarios. The local spatial hybrid domain feature module is designed to explore strong discriminative forgery clues in both the image and frequency domains in locally distinct face regions. Furthermore, a specific hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance. Experimental results on representative multi-modality face forgery datasets demonstrate the superior performance of the proposed HFC-MFFD compared with state-of-the-art algorithms. The source code and models are publicly available at https://github.com/EdWhites/HFC-MFFD.
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. An appealing alternative is to render synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based model adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.
Conditional GANs have matured in recent years and are able to generate high-quality realistic images. However, the computational resources and training data required to train high-quality GANs are enormous, so research into transfer learning of these models is an urgent topic. In this paper, we explore the transfer from high-quality pre-trained unconditional GANs to conditional GANs. To this end, we propose hypernetwork-based adaptive weight modulation. In addition, we introduce a self-initialization procedure that does not require any real data to initialize the hypernetwork parameters. To further improve the sample efficiency of the knowledge transfer, we propose to use a self-supervised (contrastive) loss to improve the GAN discriminator. In extensive experiments, we validate the efficiency of the hypernetworks, self-initialization, and contrastive loss on several standard benchmarks.
The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6% improvement over a strong baseline. The code is available at https://github.com/layumi/Person-reID_GAN .
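The LSRO idea above reduces to a small change in the cross-entropy target: labeled samples keep their one-hot target, while GAN-generated (unlabeled) samples are assigned the uniform distribution over classes. This numpy sketch uses illustrative logits and class counts; it is not tied to the paper's CNN.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def lsro_loss(logits: np.ndarray, label=None) -> float:
    """Cross-entropy loss; for GAN-generated (unlabeled) samples
    (label=None), LSRO targets the uniform distribution over classes."""
    p = softmax(logits)
    k = logits.shape[0]
    if label is None:                       # unlabeled GAN sample
        target = np.full(k, 1.0 / k)
    else:                                   # ordinary one-hot supervision
        target = np.zeros(k)
        target[label] = 1.0
    return float(-(target * np.log(p + 1e-12)).sum())

logits = np.array([2.0, 0.5, -1.0])
real_loss = lsro_loss(logits, label=0)      # confident and correct: small loss
gan_loss = lsro_loss(logits, label=None)    # uniform target regularizes
print(real_loss < gan_loss)  # True
```

The uniform target discourages the network from becoming confident on generated samples, which is exactly the regularizing effect the abstract reports.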