Recent feed-forward neural methods for arbitrary image style transfer mainly use encoded feature maps up to their second-order statistics: they linearly transform the encoded feature maps of the content image so that they have the same mean and variance (or covariance) as those of the style image. In this work, we extend second-order statistical feature matching to general distribution matching, based on the understanding that the style of an image is represented by the distribution of responses of receptive fields. For this generalization, we first propose a new feature transform layer that exactly matches the feature map distribution of the content image to that of the target style image. Second, we analyze a recent style loss that is consistent with the new feature transform layer, and use it to train a decoder network that generates a style-transferred image from the transformed feature maps. Our experimental results show that the stylized images obtained with our method are more similar to the target style images under all existing style measures, without losing content clarity.
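The abstract does not spell out the exact form of the proposed feature transform, so the following is only a minimal sketch contrasting the classical per-channel mean/variance alignment with one plausible way to match a full per-channel distribution (sort-based rank matching); the function names and the PyTorch framing are illustrative assumptions, not the paper's implementation.

```python
import torch

def mean_std_transfer(content, style, eps=1e-5):
    # content, style: (C, N) flattened feature maps of one image each.
    # Classical second-order matching: align per-channel mean and std.
    c_mu, c_std = content.mean(1, keepdim=True), content.std(1, keepdim=True) + eps
    s_mu, s_std = style.mean(1, keepdim=True), style.std(1, keepdim=True) + eps
    return (content - c_mu) / c_std * s_std + s_mu

def exact_distribution_transfer(content, style):
    # Illustrative generalization: per-channel sort-based rank matching, which
    # matches the full empirical distribution rather than only mean/variance.
    # Assumes content and style have the same number of spatial positions N.
    c_sorted, c_idx = content.sort(dim=1)
    s_sorted, _ = style.sort(dim=1)
    out = torch.empty_like(content)
    out.scatter_(1, c_idx, s_sorted)  # place style values at the content's rank positions
    return out
```

If the style feature map had a different number of spatial positions, the sorted style values would first have to be resampled to the content's length (e.g. by linear interpolation) before the scatter step.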
Arbitrary style transfer generates an artistic image that combines the structure of a content image with an artistic style, using only a single trained network. The image representation used in this method contains a content structure representation and a style pattern representation, which are usually high-level feature representations from a pre-trained classification network. However, traditional classification networks were designed for classification, which typically focuses on high-level features and ignores other information. As a result, stylized images distribute style elements uniformly across the whole image and make the overall image structure unrecognizable. To address this problem, we introduce a novel arbitrary style transfer method with structure enhancement that combines global and local losses. Local structural details are represented by LapStyle, and the global structure is controlled by image depth. Experimental results show that, compared with other state-of-the-art methods, our method generates higher-quality images with impressive visual effects on several common datasets.
Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
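The abstract states the mechanism directly: AdaIN normalizes each channel of the content features and re-scales it with the corresponding style statistics. A minimal PyTorch sketch follows; the epsilon value and tensor layout are the only assumptions.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    # content_feat, style_feat: (N, C, H, W) feature maps from a shared encoder.
    # Statistics are computed per sample and per channel, over spatial positions.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    # Normalize away the content statistics, then re-scale/shift with the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

In a feed-forward pipeline such as the one described, the aligned features would then be decoded back to an image by the rest of the network.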
Style transfer has attracted a great deal of attention because it can change a given image into a spectacular artistic style while preserving the image structure. However, conventional methods tend to lose image details and produce unpleasant artifacts during style transfer. In this paper, to address these issues, a novel artistic stylization method with a target feature palette is proposed, which can transfer key features accurately. Specifically, our method contains two modules, namely the feature palette composition (FPC) and attention coloring (AC) modules. The FPC module captures representative features based on K-means clustering and produces a feature target palette. The subsequent AC module computes attention maps between the content and style images and transfers colors and patterns according to the attention maps and the target palette. These modules enable the proposed stylization to focus on key features and generate plausible transferred images. The contributions of this work are therefore to propose a new deep-learning-based style transfer method, to present the target feature palette and attention coloring modules, and to provide in-depth analysis of and insight into the proposed method through exhaustive ablation studies. Qualitative and quantitative results show that our stylized images achieve state-of-the-art performance while preserving the core structure and details of the content image.
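As a rough illustration of the FPC idea (representative features captured by K-means clustering), here is a hedged sketch that builds a k-entry feature palette with a plain K-means loop; the cluster count, iteration count, and tensor layout are assumptions, not the paper's settings.

```python
import torch

def feature_palette(style_feat, k=8, iters=10):
    # style_feat: (C, N) flattened style features; returns k centroid vectors
    # ("palette entries") computed with a plain K-means loop.
    feats = style_feat.t()                               # (N, C)
    centroids = feats[torch.randperm(feats.shape[0])[:k]]
    for _ in range(iters):
        dist = torch.cdist(feats, centroids)             # (N, k) pairwise distances
        assign = dist.argmin(dim=1)                      # hard cluster assignment
        for j in range(k):
            members = feats[assign == j]
            if members.numel():
                centroids[j] = members.mean(dim=0)       # update centroid
    return centroids                                     # (k, C) feature palette
```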
Existing neural style transfer methods require a reference style image in order to transfer its texture information to a content image. However, in many practical situations users may not have a reference style image but may still wish to transfer a style simply by imagining it. To handle such applications, we propose a new framework that enables style transfer "without" a style image, using only a text description of the desired style. Using CLIP, a pre-trained text-image embedding model, we demonstrate modulation of the style of a content image with only a single text condition. Specifically, we propose a patch-wise text-image matching loss with multi-view augmentation for realistic texture transfer. Extensive experimental results confirm successful image style transfer with realistic textures that reflect the semantics of the query text.
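The abstract describes a patch-wise text-image matching loss computed with CLIP under multi-view augmentation. The sketch below shows one plausible form of such a loss using OpenAI's CLIP package; the crop size, number of patches, the augmentation (plain random crops), and the cosine-distance formulation are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device, jit=False)
model = model.float()  # keep fp32 so plain float image tensors can be encoded directly

# CLIP's input normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

def patch_clip_loss(stylized, text, n_patches=16, patch_size=128):
    # stylized: (1, 3, H, W) image on `device` with values in [0, 1];
    # text: description of the desired style.
    # Hypothetical loss: pull CLIP embeddings of random patches toward the text embedding.
    text_emb = F.normalize(model.encode_text(clip.tokenize([text]).to(device)), dim=-1)
    _, _, H, W = stylized.shape
    losses = []
    for _ in range(n_patches):
        top = torch.randint(0, H - patch_size + 1, (1,)).item()
        left = torch.randint(0, W - patch_size + 1, (1,)).item()
        patch = stylized[:, :, top:top + patch_size, left:left + patch_size]
        patch = F.interpolate(patch, size=224, mode="bicubic", align_corners=False)
        patch = (patch - CLIP_MEAN) / CLIP_STD
        img_emb = F.normalize(model.encode_image(patch), dim=-1)
        losses.append(1.0 - (img_emb * text_emb).sum())   # cosine distance to the text
    return torch.stack(losses).mean()
```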
We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.
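As a concrete illustration of a perceptual (feature reconstruction) loss, here is a hedged sketch using fixed, pretrained VGG-16 features from torchvision; the relu2_2 layer choice and plain MSE weighting are illustrative, not a claim about the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
import torchvision

# Pretrained VGG-16 feature extractor, frozen. features[:9] ends at relu2_2,
# a common (but here merely illustrative) layer for feature reconstruction.
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:9].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(output, target):
    # output, target: (N, 3, H, W) images normalized with ImageNet statistics.
    # Compare images in feature space rather than per pixel.
    return F.mse_loss(vgg(output), vgg(target))
```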
Recently, attentional arbitrary style transfer methods have been proposed to achieve fine-grained results by manipulating point-wise similarities between content and style features. However, attention mechanisms based on feature points ignore the feature multi-manifold distribution, where each feature manifold corresponds to a semantic region in the image. Consequently, a uniform content semantic region is rendered with highly different patterns from various style semantic regions, producing inconsistent stylization results with visual artifacts. We propose progressive attentional manifold alignment (PAMA) to alleviate this problem, which repeatedly applies attention operations and space-aware interpolation. The attention operation rearranges style features according to the spatial distribution of the content features, making the content and style manifolds correspond on the feature map. Space-aware interpolation then adaptively interpolates between corresponding content and style manifolds to increase their similarity. By progressively aligning content manifolds to style manifolds, the proposed PAMA achieves state-of-the-art performance while avoiding inconsistencies across semantic regions. Code is available at https://github.com/computer-vision2022/pama.
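PAMA's exact attention and interpolation modules are not specified in the abstract, so the sketch below only illustrates the basic rearrangement step it describes: style features are reassembled according to content-style similarity. The 1x1 projections, instance normalization, and scaling are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleAttention(nn.Module):
    """Rearranges style features according to the spatial layout of content features."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)  # query from content
        self.k = nn.Conv2d(channels, channels, 1)  # key from style
        self.v = nn.Conv2d(channels, channels, 1)  # value from style

    def forward(self, content, style):
        n, c, h, w = content.shape
        # Mean-variance normalization before computing similarities (a common choice).
        q = self.q(F.instance_norm(content)).flatten(2).transpose(1, 2)  # (N, HW, C)
        k = self.k(F.instance_norm(style)).flatten(2)                    # (N, C, H'W')
        v = self.v(style).flatten(2).transpose(1, 2)                     # (N, H'W', C)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)                   # (N, HW, H'W')
        out = (attn @ v).transpose(1, 2).reshape(n, c, h, w)
        return out
```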
Photo-realistic style transfer aims at migrating the artistic style from an exemplar style image to a content image, producing a result image without spatial distortions or unrealistic artifacts. Impressive results have been achieved by recent deep models. However, deep neural network based methods are too expensive to run in real-time. Meanwhile, bilateral grid based methods are much faster but still contain artifacts like overexposure. In this work, we propose the Adaptive ColorMLP (AdaCM), an effective and efficient framework for universal photo-realistic style transfer. First, we find the complex non-linear color mapping between input and target domain can be efficiently modeled by a small multi-layer perceptron (ColorMLP) model. Then, in AdaCM, we adopt a CNN encoder to adaptively predict all parameters for the ColorMLP conditioned on each input content and style image pair. Experimental results demonstrate that AdaCM can generate vivid and high-quality stylization results. Meanwhile, our AdaCM is ultrafast and can process a 4K resolution image in 6ms on one V100 GPU.
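To make the ColorMLP idea concrete, here is a hedged sketch of a tiny per-pixel color MLP whose weights are supplied externally (in AdaCM they would be predicted by a CNN encoder from the content/style pair); the hidden width and two-layer architecture are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class TinyColorMLP(nn.Module):
    # Applies a small per-pixel MLP to RGB values; its weights are not learned
    # directly but supplied per image pair (e.g. predicted by an encoder).
    def __init__(self, hidden=8):
        super().__init__()
        self.hidden = hidden
        self.n_params = 3 * hidden + hidden + hidden * 3 + 3  # W1, b1, W2, b2

    def forward(self, image, params):
        # image: (N, 3, H, W); params: (N, n_params) predicted conditioning parameters.
        n, _, h, w = image.shape
        x = image.permute(0, 2, 3, 1).reshape(n, -1, 3)        # (N, HW, 3)
        i = 0
        w1 = params[:, i:i + 3 * self.hidden].reshape(n, 3, self.hidden); i += 3 * self.hidden
        b1 = params[:, i:i + self.hidden].reshape(n, 1, self.hidden); i += self.hidden
        w2 = params[:, i:i + self.hidden * 3].reshape(n, self.hidden, 3); i += self.hidden * 3
        b2 = params[:, i:i + 3].reshape(n, 1, 3)
        x = torch.relu(x @ w1 + b1) @ w2 + b2                  # per-pixel non-linear color map
        return x.reshape(n, h, w, 3).permute(0, 3, 1, 2)       # back to (N, 3, H, W)
```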
This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution as in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such a network can directly decode brown noise into realistic texture, or photos into artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.
Neural style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image. It was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach are that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate neural style transfer and increase its resolution, but they all compromise the quality of the produced images. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution images, enabling multiscale style transfer at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward pass through the VGG network. Extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high-resolution painting styles.
We present an extremely simple ultra-resolution style transfer framework, termed URST, to flexibly process arbitrary high-resolution images (e.g., 10000x10000 pixels) for style transfer for the first time. Most existing state-of-the-art methods fall short when processing ultra-high-resolution images, due to huge memory cost and small stroke size. URST completely avoids the memory problem caused by ultra-high-resolution images by (1) dividing the image into small patches and (2) performing patch-wise style transfer with a novel Thumbnail Instance Normalization (TIN). Specifically, TIN extracts normalization statistics from the thumbnail features and applies them to the small patches, ensuring style consistency across different patches. Overall, the URST framework has three advantages over prior art. (1) We divide the input image into small patches and adopt TIN, successfully transferring image styles at arbitrary high resolutions. (2) Experiments show that our URST surpasses existing SOTA methods on ultra-high-resolution images, benefiting from the effectiveness of the proposed stroke perceptual loss in enlarging the stroke size. (3) Our URST can be easily plugged into most existing style transfer methods and directly improves their performance, even without training. Code is available at https://git.io/urst.
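A minimal sketch of the Thumbnail Instance Normalization idea as described: normalization statistics are estimated on the thumbnail features and reused for every patch of the full-resolution image, so all patches are normalized consistently. The affine re-stylization that a host network would apply afterwards is omitted; the function name and layout are assumptions.

```python
import torch

def thumbnail_instance_norm(patch_feat, thumb_feat, eps=1e-5):
    # patch_feat: (N, C, h, w) features of one high-resolution patch.
    # thumb_feat: (N, C, H, W) features of the downscaled thumbnail of the whole image.
    # Unlike plain instance norm, the statistics come from the thumbnail, so every
    # patch of the same image is normalized identically, keeping styles consistent.
    mean = thumb_feat.mean(dim=(2, 3), keepdim=True)
    std = thumb_feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return (patch_feat - mean) / std
```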
Arbitrary neural style transfer is an important topic with research value and industrial application prospects, which aims to render the structure of one image using the style of another. Recent research has been devoted to the task of arbitrary style transfer (AST) to improve stylization quality. However, there has been little exploration of quality assessment for AST images, even though it could guide the design of different algorithms. In this paper, we first construct a new AST image quality assessment database (AST-IQAD), which includes 150 content-style image pairs and the corresponding 1200 stylized images produced by eight typical AST algorithms. A subjective study was then conducted on our AST-IQAD database, which obtained subjective rating scores for all stylized images on three subjective evaluations, i.e., content preservation (CP), style resemblance (SR), and overall vision (OV). To quantitatively measure the quality of AST images, we propose a new sparse-representation-based image quality evaluation metric (SRQE), which computes quality using sparse feature similarity. Experimental results on AST-IQAD demonstrate the superiority of the proposed method. The dataset and source code will be released at https://github.com/hangwei-chen/ast-iqad-srqe
Recent studies have shown remarkable success in universal style transfer, which transfers arbitrary visual styles to content images. However, existing methods suffer from the aesthetic-unrealistic problem, which introduces disharmonious patterns and evident artifacts, making the results easy to distinguish from real paintings. To address this limitation, we propose a novel aesthetic-enhanced style transfer method that can generate aesthetically more realistic and pleasing results for arbitrary styles. Specifically, our approach introduces an aesthetic discriminator to learn universal human-favored aesthetic features from a large corpus of artist-created paintings. The aesthetic features are then incorporated to enhance the style transfer process via a novel aesthetic-aware style-attention (AesSA) module. Such an AesSA module enables our AesUST to effectively and flexibly integrate style patterns according to the global aesthetic channel distribution of the style image and the local semantic spatial distribution of the content image. Moreover, we develop a new two-stage transfer training strategy with two aesthetic regularizations to train our model more effectively, further improving the stylization performance. Extensive experiments and a user study demonstrate that our approach synthesizes aesthetically more harmonious and realistic results than the state of the art, greatly narrowing the gap to real artist-created paintings. Our code is available at https://github.com/endywon/aesust.
Many traditional computer vision algorithms synthesize images by requiring that every patch in the generated image be similar to a patch in a training image, and vice versa. Recently, this classical approach has been replaced by patch discriminators. Adversarial methods avoid the computational burden of finding nearest patch neighbors, but typically require long training times and may fail to match the distribution of patches. In this paper, we leverage the recently developed Sliced Wasserstein Distance and develop an algorithm that explicitly and efficiently minimizes the distance between the patch distributions of two images. Our method is conceptually simple, requires no training, and can be implemented in a few lines of code. On many image generation tasks, we show that our results are often superior to those of single-image GANs, require no training, and can produce high-quality images within a few seconds. Our implementation is available at https://github.com/ariel415el/gpdm
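A hedged sketch of the core computation: extract patches from both images, project the two patch sets onto shared random directions, and compare the sorted projections (the 1-D Wasserstein distance along each direction). The patch size, number of projections, and squared-error form are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def sliced_wasserstein_patch_loss(x, y, patch_size=7, n_proj=64):
    # x, y: (1, 3, H, W) images. Flatten all patches into vectors.
    px = F.unfold(x, patch_size).squeeze(0).t()   # (num_patches_x, 3*p*p)
    py = F.unfold(y, patch_size).squeeze(0).t()   # (num_patches_y, 3*p*p)
    # Random 1-D projections of the two patch distributions.
    proj = torch.randn(px.shape[1], n_proj, device=x.device)
    proj = proj / proj.norm(dim=0, keepdim=True)
    projx = (px @ proj).sort(dim=0).values
    projy = (py @ proj).sort(dim=0).values
    # Match the number of samples per projection by resampling the sorted values.
    if projx.shape[0] != projy.shape[0]:
        projy = F.interpolate(projy.t().unsqueeze(1), size=projx.shape[0],
                              mode="linear", align_corners=True).squeeze(1).t()
    # 1-D Wasserstein distance = distance between sorted (quantile) sequences.
    return (projx - projy).pow(2).mean()
```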
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene. Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors for complex real-world scenes. Instead, we propose to stylize the more robust radiance field representation. We find that the commonly used Gram-matrix-based loss tends to produce blurry results without faithful brushstrokes, and introduce a nearest-neighbor-based feature matching loss that is highly effective at capturing style details while maintaining multi-view consistency. We also propose a novel deferred back-propagation method to enable optimization of memory-intensive radiance fields using style losses defined on full-resolution rendered images. Our extensive evaluation demonstrates that our method outperforms baselines by producing artistic appearance that more closely resembles the style image. Please check our project page for video results and an open-source implementation: https://www.cs.cornell.edu/projects/arf/.
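The nearest-neighbor-based loss can be sketched as follows: each feature vector of the rendered view is matched to its most similar style feature and pulled toward it. The cosine distance and the flattened-feature layout are assumptions; the deferred back-propagation machinery is omitted.

```python
import torch
import torch.nn.functional as F

def nn_feature_match_loss(render_feat, style_feat):
    # render_feat: (C, Nr), style_feat: (C, Ns) -- encoder features flattened over space.
    # For each rendered feature vector, find its nearest style feature by cosine
    # similarity and minimize the cosine distance to it (gradients flow to the render).
    r = F.normalize(render_feat, dim=0)          # (C, Nr)
    s = F.normalize(style_feat, dim=0)           # (C, Ns)
    sim = r.t() @ s                              # (Nr, Ns) cosine similarities
    nearest = sim.max(dim=1).values              # best match for each rendered vector
    return (1.0 - nearest).mean()
```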
Arbitrary Style Transfer is a technique used to produce a new image from two images: a content image and a style image. The newly produced image is unseen and is generated by the algorithm itself. Balancing the structure and style components has been the major challenge that other state-of-the-art algorithms have tried to solve. Despite all these efforts, it remains a major challenge to apply the originally created artistic style on top of the structure of the content image while maintaining consistency. In this work, we address these problems with a deep learning approach based on convolutional neural networks. Our implementation first extracts the foreground from the background of the content image using the pre-trained Detectron 2 model, and then applies the Arbitrary Style Transfer technique used in SANet. Once we have the two styled images, we stitch the two image chunks together after style transfer to form the complete end piece.
Photorealistic style transfer aims to transfer the artistic style of an image onto an input image or video while keeping photorealism. In this paper, we argue that it is the summary-statistics matching scheme in existing algorithms that leads to unrealistic stylization. To avoid employing the popular Gram loss, we propose a self-supervised style transfer framework, which contains a style removal part and a style restoration part. The style removal network removes the original image styles, and the style restoration network recovers image styles in a supervised manner. Meanwhile, to address the problems in current feature transformation methods, we propose decoupled instance normalization to decompose feature transformation into style whitening and restylization. It works quite well in ColoristaNet and can transfer image styles efficiently while keeping photorealism. To ensure temporal coherency, we also incorporate optical flow methods and ConvLSTM to embed contextual information. Experiments demonstrate that ColoristaNet achieves better stylization effects when compared with state-of-the-art algorithms.
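The abstract does not detail decoupled instance normalization, so the sketch below only shows the classical whitening/coloring decomposition that "style whitening and restylization" alludes to: ZCA-whiten the content features, then re-color them with the style covariance and mean. Treat it as background illustration rather than ColoristaNet's actual transform.

```python
import torch

def whiten(feat, eps=1e-5):
    # feat: (C, N) content features; ZCA whitening removes second-order style statistics.
    c, n = feat.shape
    f = feat - feat.mean(1, keepdim=True)
    cov = f @ f.t() / (n - 1) + eps * torch.eye(c, device=feat.device)
    e, v = torch.linalg.eigh(cov)
    return v @ torch.diag(e.clamp_min(eps).rsqrt()) @ v.t() @ f

def restylize(whitened, style_feat, eps=1e-5):
    # Color the whitened content features with the style covariance and mean.
    c, n = style_feat.shape
    s_mean = style_feat.mean(1, keepdim=True)
    s = style_feat - s_mean
    cov = s @ s.t() / (n - 1) + eps * torch.eye(c, device=style_feat.device)
    e, v = torch.linalg.eigh(cov)
    return v @ torch.diag(e.clamp_min(eps).sqrt()) @ v.t() @ whitened + s_mean
```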
Attention-based arbitrary style transfer studies have shown promising performance in synthesizing vivid local style details. They typically use the all-to-all attention mechanism: each position of content features is fully matched to all positions of style features. However, all-to-all attention tends to generate distorted style patterns and has quadratic complexity. It virtually limits both the effectiveness and efficiency of arbitrary style transfer. In this paper, we rethink what kind of attention mechanism is more appropriate for arbitrary style transfer. Our answer is a novel all-to-key attention mechanism: each position of content features is matched to key positions of style features. Specifically, it integrates two newly proposed attention forms: distributed and progressive attention. Distributed attention assigns attention to multiple key positions; Progressive attention pays attention from coarse to fine. All-to-key attention promotes the matching of diverse and reasonable style patterns and has linear complexity. The resultant module, dubbed StyA2K, has fine properties in rendering reasonable style textures and maintaining consistent local structure. Qualitative and quantitative experiments demonstrate that our method achieves superior results than state-of-the-art approaches.
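The distributed and progressive attention forms of StyA2K are not specified in the abstract, so the sketch below only illustrates the broader idea of attending to a few key style positions per content position (here simply the top-k most similar ones). Note that it still builds the full similarity matrix for clarity, so it does not reproduce the linear complexity of the actual all-to-key design.

```python
import torch

def top_k_style_attention(content, style, k=8):
    # content: (N, C, Nc), style: (N, C, Ns) -- spatially flattened features.
    # Each content position attends only to its k most similar style positions.
    q = content.transpose(1, 2)                          # (N, Nc, C)
    sim = q @ style                                      # (N, Nc, Ns) similarity scores
    topv, topi = sim.topk(k, dim=-1)                     # (N, Nc, k)
    attn = torch.softmax(topv, dim=-1)                   # weights over the k keys only
    v = style.transpose(1, 2)                            # (N, Ns, C)
    n, nc, _ = topi.shape
    c = v.shape[-1]
    sel = torch.gather(v.unsqueeze(1).expand(n, nc, -1, -1), 2,
                       topi.unsqueeze(-1).expand(n, nc, k, c))   # (N, Nc, k, C)
    out = (attn.unsqueeze(-1) * sel).sum(dim=2)          # (N, Nc, C)
    return out.transpose(1, 2)                           # (N, C, Nc)
```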
Image harmonization aims to produce visually harmonious composite images by adjusting the foreground appearance to be compatible with the background. When the composite image has a photographic foreground and a painterly background, the task is called painterly image harmonization. There are only a few works on this task, which are either time-consuming or weak in generating well-harmonized results. In this work, we propose a novel painterly harmonization network consisting of a dual-domain generator and a dual-domain discriminator, which harmonizes the composite image in both the spatial domain and the frequency domain. The dual-domain generator performs harmonization by using AdaIN modules in the spatial domain and our proposed ResFFT modules in the frequency domain. The dual-domain discriminator attempts to distinguish inharmonious patches based on the spatial feature and frequency feature of each patch, which can enhance the ability of the generator in an adversarial manner. Extensive experiments on the benchmark dataset show the effectiveness of our method. Our code and model are available at https://github.com/bcmi/PHDNet-Painterly-Image-Harmonization.
Universal style transfer (UST) infuses styles from arbitrary reference images into content images. Existing methods, while enjoying many practical successes, are unable to explain experimental observations, including the different performance of UST algorithms in preserving the spatial structure of content images. In addition, the methods are limited to cumbersome global controls over stylization, so they require additional spatial masks to achieve the desired stylization. In this work, we provide a systematic Fourier analysis of a general framework for UST. We present an equivalent form of the framework in the frequency domain. This form implies that existing algorithms treat all frequency components and pixels of the feature maps equally, except for the zero-frequency component. We connect the Fourier amplitude and phase to the Gram matrices and the content reconstruction loss in style transfer, respectively. Based on this equivalence and these connections, we can explain the different structure-preservation behaviors of the algorithms in terms of the Fourier phase. Given our interpretation, we propose two manipulations in practice for structure preservation and desired stylization. Both qualitative and quantitative experiments demonstrate the competitive performance of our method against state-of-the-art methods. We also conduct experiments to demonstrate (1) the aforementioned equivalence, (2) the interpretability based on Fourier amplitude and phase, and (3) the controllability associated with frequency components.
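A small illustration of the amplitude/phase connection described above: take the 2-D FFT of content and style feature maps, keep the content phase (tied to spatial structure and content reconstruction) and take the style amplitude (tied to Gram-type statistics). This only demonstrates the frequency-domain view; it is not the paper's proposed manipulations.

```python
import torch

def swap_amplitude(content_feat, style_feat):
    # content_feat, style_feat: (N, C, H, W) feature maps of the same spatial size.
    # FFT over the spatial dimensions; recombine style amplitude with content phase.
    fc = torch.fft.fft2(content_feat)
    fs = torch.fft.fft2(style_feat)
    mixed = torch.abs(fs) * torch.exp(1j * torch.angle(fc))
    return torch.fft.ifft2(mixed).real
```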