Universal style transfer (UST) infuses a content image with the style drawn from an arbitrary reference image. Existing methods, despite many practical successes, cannot explain several experimental observations, including the different performance of UST algorithms in preserving the spatial structure of content images. Moreover, these methods are limited to cumbersome global control over stylization, so additional spatial masks are required to obtain the desired stylization. In this work, we provide a systematic Fourier analysis of a general framework for UST. We present an equivalent form of the framework in the frequency domain, which implies that existing algorithms treat all frequency components and pixels of the feature maps equally, except for the zero-frequency component. We further relate the Fourier amplitude and phase to the Gram matrices and the content reconstruction loss of style transfer, respectively. Based on this equivalence and these connections, we can interpret the different structure-preservation behaviors among algorithms via the Fourier phase. Given our interpretations, we propose two manipulations in practice for structure preservation and desired stylization. Both qualitative and quantitative experiments demonstrate the competitive performance of our method against state-of-the-art methods. We also conduct experiments to demonstrate (1) the aforementioned equivalence, (2) the interpretability based on Fourier amplitude and phase, and (3) the controllability associated with frequency components.
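To make the frequency-domain view concrete, here is a minimal PyTorch sketch of a phase-preserving operation: keep the Fourier phase of the content-derived features (which the analysis ties to spatial structure) while adopting the amplitude of the stylized features. This is an illustrative sketch under assumed tensor layouts, not the paper's actual manipulation.

```python
import torch

def phase_preserving_mix(content_feat, stylized_feat):
    # Illustrative only: combine the stylized amplitude with the content phase.
    # content_feat, stylized_feat: (B, C, H, W) feature maps of equal size.
    f_c = torch.fft.fft2(content_feat, dim=(-2, -1))
    f_s = torch.fft.fft2(stylized_feat, dim=(-2, -1))
    amplitude = torch.abs(f_s)   # tied to style (Gram-matrix) statistics
    phase = torch.angle(f_c)     # tied to content structure
    mixed = amplitude * torch.exp(1j * phase)
    return torch.fft.ifft2(mixed, dim=(-2, -1)).real
```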
Photo-realistic style transfer aims at migrating the artistic style from an exemplar style image to a content image, producing a result image without spatial distortions or unrealistic artifacts. Impressive results have been achieved by recent deep models. However, deep neural network based methods are too expensive to run in real-time. Meanwhile, bilateral grid based methods are much faster but still contain artifacts like overexposure. In this work, we propose the \textbf{Adaptive ColorMLP (AdaCM)}, an effective and efficient framework for universal photo-realistic style transfer. First, we find the complex non-linear color mapping between input and target domain can be efficiently modeled by a small multi-layer perceptron (ColorMLP) model. Then, in \textbf{AdaCM}, we adopt a CNN encoder to adaptively predict all parameters for the ColorMLP conditioned on each input content and style image pair. Experimental results demonstrate that AdaCM can generate vivid and high-quality stylization results. Meanwhile, our AdaCM is ultrafast and can process a 4K resolution image in 6ms on one V100 GPU.
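A minimal sketch of the ColorMLP idea, assuming a tiny 3→hidden→3 per-pixel MLP whose flattened weights are predicted per image pair by a conditioning encoder; the hidden width and parameter layout are placeholder assumptions, not the actual AdaCM configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyColorMLP(nn.Module):
    """Per-pixel color MLP (3 -> hidden -> 3) whose weights come from a
    conditioning encoder; sizes here are illustrative placeholders."""
    def __init__(self, hidden=16):
        super().__init__()
        self.hidden = hidden
        # total number of parameters the encoder must predict per image pair
        self.n_params = 3 * hidden + hidden + hidden * 3 + 3

    def forward(self, image, params):
        # image: (B, 3, H, W); params: (B, n_params)
        b, _, h, w = image.shape
        x = image.permute(0, 2, 3, 1).reshape(b, -1, 3)              # (B, HW, 3)
        hdim, i = self.hidden, 0
        w1 = params[:, i:i + 3 * hdim].view(b, 3, hdim); i += 3 * hdim
        b1 = params[:, i:i + hdim].view(b, 1, hdim);     i += hdim
        w2 = params[:, i:i + hdim * 3].view(b, hdim, 3); i += hdim * 3
        b2 = params[:, i:i + 3].view(b, 1, 3)
        x = F.relu(torch.bmm(x, w1) + b1)                            # hidden layer
        x = torch.bmm(x, w2) + b2                                    # back to RGB
        return x.reshape(b, h, w, 3).permute(0, 3, 1, 2)
```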
Arbitrary style transfer generates an artistic image that combines the structure of a content image with an artistic style, using only a single trained network. The image representation used in this kind of method contains a content structure representation and a style pattern representation, which are usually high-level feature representations extracted from a pre-trained classification network. However, traditional classification networks were designed for classification, which typically focuses on high-level features and ignores the others. As a result, the stylized images distribute style elements uniformly across the whole image, rendering the overall image structure unrecognizable. To solve this problem, we introduce a novel arbitrary style transfer method with structure enhancement by combining global and local losses. The local structure details are represented by LapStyle, and the global structure is controlled by the image depth. Experimental results show that, compared with other state-of-the-art methods, our method can generate higher-quality images with impressive visual effects on several common datasets.
Recent feed-forward neural methods for arbitrary image style transfer mainly utilize the encoded feature maps only up to their second-order statistics, i.e., they linearly transform the encoded feature map of the content image so that it has the same mean and variance (or covariance) as that of the style feature map. In this work, we extend the second-order statistical feature matching into a general distribution matching, based on the understanding that the style of an image is represented by the distribution of responses of receptive fields. For this generalization, first, we propose a new feature transform layer that exactly matches the feature map distribution of the content image to that of the target style image. Second, we analyze a recent style loss that is consistent with our new feature transform layer, in order to train a decoder network that generates a style-transferred image from the transformed feature map. Our experimental results verify that the stylized images obtained with our method are more similar to the target style images under all existing style measures, without losing content clearness.
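One well-known way to match a full empirical distribution per channel (rather than only its mean and variance) is sort-based matching; the sketch below shows that generic idea only and is not claimed to be the paper's actual transform layer.

```python
import torch
import torch.nn.functional as F

def exact_channel_distribution_match(content, style):
    """Generic sort-based distribution matching per channel (illustrative).
    The i-th smallest content activation takes the value of the i-th
    smallest style activation, so each content channel inherits the
    style channel's empirical distribution."""
    b, c, hc, wc = content.shape
    if style.shape[-2:] != (hc, wc):
        # need equal activation counts per channel for a one-to-one mapping
        style = F.interpolate(style, size=(hc, wc), mode="bilinear",
                              align_corners=False)
    cf = content.reshape(b, c, -1)
    sf = style.reshape(b, c, -1)
    sorted_s, _ = sf.sort(dim=-1)
    ranks = cf.argsort(dim=-1).argsort(dim=-1)   # rank of each content value
    matched = torch.gather(sorted_s, -1, ranks)
    return matched.reshape(b, c, hc, wc)
```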
Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
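The AdaIN layer described above has a simple closed form; a minimal PyTorch sketch (feature maps assumed to come from a VGG-style encoder):

```python
import torch

def adaptive_instance_norm(content, style, eps=1e-5):
    """Shift and scale content features so their per-channel mean and
    variance match those of the style features."""
    # content, style: (B, C, H, W) encoder feature maps
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True)
    return s_std * (content - c_mean) / c_std + s_mean
```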
Arbitrary style transfer (AST) transfers arbitrary artistic styles onto content images. Despite the recent rapid progress, existing AST methods are either incapable or too slow to run at ultra-resolutions (e.g., 4K) with limited resources, which heavily hinders their further applications. In this paper, we tackle this dilemma by learning a straightforward and lightweight model, dubbed MicroAST. The key insight is to completely abandon the use of cumbersome pre-trained Deep Convolutional Neural Networks (e.g., VGG) at inference. Instead, we design two micro encoders (content and style encoders) and one micro decoder for style transfer. The content encoder aims at extracting the main structure of the content image. The style encoder, coupled with a modulator, encodes the style image into learnable dual-modulation signals that modulate both intermediate features and convolutional filters of the decoder, thus injecting more sophisticated and flexible style signals to guide the stylizations. In addition, to boost the ability of the style encoder to extract more distinct and representative style signals, we also introduce a new style signal contrastive loss in our model. Compared to the state of the art, our MicroAST not only produces visually superior results but also is 5-73 times smaller and 6-18 times faster, for the first time enabling super-fast (about 0.5 seconds) AST at 4K ultra-resolutions. Code is available at https://github.com/EndyWon/MicroAST.
Style transfer has attracted a great deal of attention because it can change a given image into an impressive artistic style while preserving the image structure. However, conventional methods easily lose image details and tend to produce unpleasant artifacts during style transfer. In this paper, to solve these problems, a novel artistic stylization method with target feature palettes is proposed, which can transfer key features accurately. Specifically, our method contains two modules, namely the feature palette composition (FPC) and attention coloring (AC) modules. The FPC module captures representative features based on K-means clustering and produces a feature target palette. The following AC module calculates attention maps between the content and style images and transfers colors and patterns based on the attention maps and the target palette. These modules enable the proposed stylization to focus on key features and generate plausibly transferred images. Thus, the contributions of the proposed method are to propose a novel deep-learning-based style transfer method together with the target feature palette and attention coloring modules, and to provide in-depth analysis of and insight into the proposed method via exhaustive ablation studies. Qualitative and quantitative results show that our stylized images achieve state-of-the-art performance, with strength in preserving the core structures and details of the content image.
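As a rough sketch of the feature-palette idea (the cluster count, feature layout, and use of scikit-learn are assumptions made for illustration, not the paper's exact FPC module), K-means centroids over style feature vectors can serve as a compact "target palette":

```python
import numpy as np
from sklearn.cluster import KMeans

def feature_target_palette(style_feat, k=8):
    # style_feat: (C, H, W) feature map as a NumPy array
    c, h, w = style_feat.shape
    vectors = style_feat.reshape(c, h * w).T        # (H*W, C) feature vectors
    km = KMeans(n_clusters=k, n_init=10).fit(vectors)
    return km.cluster_centers_                      # (k, C) representative features

# toy usage with random features, just to show the shapes involved
palette = feature_target_palette(np.random.rand(256, 32, 32).astype(np.float32))
```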
Recent studies have shown remarkable success in universal style transfer, which transfers arbitrary visual styles onto content images. However, existing methods suffer from the aesthetic-unrealistic problem, which introduces disharmonious patterns and evident artifacts, making the results easy to distinguish from real paintings. To address this limitation, we propose a novel aesthetic-enhanced style transfer method that can generate aesthetically more realistic and pleasing results for arbitrary styles. Specifically, our approach introduces an aesthetic discriminator to learn universal human-preferred aesthetic features from a large corpus of artist-created paintings. The aesthetic features are then incorporated to enhance the style transfer process via a novel aesthetic-aware style-attention (AesSA) module. Such an AesSA module enables our AesUST to efficiently and flexibly integrate style patterns according to the global aesthetic channel distribution of the style image and the local semantic spatial distribution of the content image. Moreover, we develop a new two-stage transfer training strategy with two aesthetic regularizations to train our model more effectively, further improving the stylization performance. Extensive experiments and user studies demonstrate that our approach synthesizes aesthetically more harmonious and realistic results than the state of the art, greatly narrowing the disparity with real artist-created paintings. Our code is available at https://github.com/endywon/aesust.
Recent techniques for solving photorealistic style transfer within deep convolutional neural networks (CNNs) generally require intensive training on large-scale datasets, and thus have limited applicability and poor generalization ability to unseen images or styles. To overcome this, we propose a novel framework, dubbed Deep Translation Prior (DTP), that accomplishes photorealistic style transfer through test-time training on a given input image pair with an untrained network, which learns an image-pair-specific translation prior and thus yields better performance and generalization. Tailoring such test-time training for style transfer, we present a novel network architecture with two sub-modules, a correspondence module and a generation module, and a loss function consisting of contrastive content, style, and cycle consistency losses. Our framework does not require an offline training phase for style transfer, which has been one of the main challenges of existing methods; instead, the networks are learned solely at test time. Experimental results show that our framework generalizes better to unseen image pairs and even outperforms state-of-the-art methods.
In this paper, we present the Texture Reformer, a fast and universal neural-based framework for interactive texture transfer with user-specified guidance. The challenges lie in three aspects: 1) the diversity of tasks, 2) the simplicity of guidance maps, and 3) the execution efficiency. To address these challenges, our key idea is to use a novel feed-forward multi-view and multi-stage synthesis procedure consisting of i) a global view structure alignment stage, ii) a local view texture refinement stage, and iii) an effect enhancement stage, which synthesizes high-quality results with coherent structures and fine texture details in a coarse-to-fine fashion. In addition, we introduce a novel learning-free view-specific texture reformation (VSTR) operation with a new semantic map guidance strategy to achieve more accurate semantic guidance and structure-preserving texture transfer. Experimental results on a variety of application scenarios demonstrate the effectiveness and superiority of our framework. Compared with state-of-the-art interactive texture transfer algorithms, it not only achieves higher-quality results but, more remarkably, is also 2-5 orders of magnitude faster. Code is available at https://github.com/endywon/texture-reformer.
Photorealistic style transfer aims to transfer the artistic style of an image onto an input image or video while keeping photorealism. In this paper, we argue that it is the summary-statistics matching scheme in existing algorithms that leads to unrealistic stylization. To avoid employing the popular Gram loss, we propose a self-supervised style transfer framework, which contains a style removal part and a style restoration part. The style removal network removes the original image styles, and the style restoration network recovers image styles in a supervised manner. Meanwhile, to address the problems in current feature transformation methods, we propose decoupled instance normalization to decompose feature transformation into style whitening and restylization. It works quite well in ColoristaNet and can transfer image styles efficiently while keeping photorealism. To ensure temporal coherency, we also incorporate optical flow methods and ConvLSTM to embed contextual information. Experiments demonstrate that ColoristaNet can achieve better stylization effects when compared with state-of-the-art algorithms.
Image harmonization aims to produce visually harmonious composite images by adjusting the foreground appearance to be compatible with the background. When the composite image has a photographic foreground and a painterly background, the task is called painterly image harmonization. There are only a few works on this task, which are either time-consuming or weak in generating well-harmonized results. In this work, we propose a novel painterly harmonization network consisting of a dual-domain generator and a dual-domain discriminator, which harmonizes the composite image in both the spatial domain and the frequency domain. The dual-domain generator performs harmonization by using AdaIN modules in the spatial domain and our proposed ResFFT modules in the frequency domain. The dual-domain discriminator attempts to distinguish the inharmonious patches based on the spatial feature and frequency feature of each patch, which can enhance the ability of the generator in an adversarial manner. Extensive experiments on the benchmark dataset show the effectiveness of our method. Our code and model are available at https://github.com/bcmi/PHDNet-Painterly-Image-Harmonization.
In this paper, we propose a transformer-based multi-exposure image fusion framework that uses self-supervised multi-task learning. The framework is based on an encoder-decoder network, can be trained on large natural image datasets, and does not require ground-truth fusion images. We design three self-supervised reconstruction tasks according to the characteristics of multi-exposure images and conduct these tasks simultaneously using multi-task learning; through this process, the network can learn the characteristics of multi-exposure images and extract more generalized features. In addition, to compensate for the deficiency of CNN-based architectures in establishing long-range dependencies, we design an encoder that combines a CNN with a transformer module. This combination enables the network to focus on both local and global information. We evaluated our method and compared it with 11 competitive traditional and deep-learning-based methods on the latest released multi-exposure image fusion benchmark dataset, and our method achieved the best performance in both subjective and objective evaluations.
Arbitrary neural style transfer, which aims to render the structure of one image using the style of another, is an important topic with both research value and industrial application prospects. Recent studies have devoted much effort to the task of arbitrary style transfer (AST) to improve stylization quality. However, there has been little exploration of quality assessment for AST images, even though it could guide the design of different algorithms. In this paper, we first construct a new AST image quality assessment database (AST-IQAD), which consists of 150 content-style image pairs and the corresponding 1,200 stylized images produced by eight typical AST algorithms. Then, a subjective study is conducted on our AST-IQAD database, which obtains subjective rating scores for all stylized images on three subjective criteria, i.e., content preservation (CP), style resemblance (SR), and overall vision (OV). To quantitatively measure the quality of AST images, we propose a new sparse-representation-based image quality evaluation metric (SRQE), which computes quality using sparse feature similarity. Experimental results on AST-IQAD demonstrate the superiority of the proposed method. The dataset and source code will be released at https://github.com/hangwei-chen/ast-iqad-srqe
We present an extremely simple ultra-resolution style transfer framework, termed URST, to flexibly process arbitrary high-resolution images (e.g., 10000x10000 pixels) in style transfer for the first time. Most existing state-of-the-art methods fall short when processing ultra-high resolution images, due to massive memory cost and small stroke size. URST completely avoids the memory problem caused by ultra-high resolution images by (1) dividing the image into small patches and (2) performing patch-wise style transfer with a novel Thumbnail Instance Normalization (TIN). Specifically, TIN extracts the normalization statistics of the thumbnail features and applies them to the small patches, ensuring style consistency among different patches. Overall, the URST framework has three merits compared with prior art. (1) We divide the input image into small patches and adopt TIN, successfully transferring image style at arbitrarily high resolution. (2) Experiments show that our URST surpasses existing SOTA methods on ultra-high resolution images, benefiting from the effectiveness of the proposed stroke perceptual loss in enlarging the stroke size. (3) Our URST can be easily plugged into most existing style transfer methods and directly improve their performance, even without training. Code is available at https://git.io/urst.
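A minimal sketch of the TIN idea as described above, normalizing each patch with statistics extracted from the thumbnail so that all patches share consistent statistics; the function signature is assumed for illustration.

```python
import torch

def thumbnail_instance_norm(patch_feat, thumb_feat, eps=1e-5):
    # patch_feat: (B, C, Hp, Wp) features of one image patch
    # thumb_feat: (B, C, Ht, Wt) features of the whole-image thumbnail
    t_mean = thumb_feat.mean(dim=(2, 3), keepdim=True)
    t_std = thumb_feat.std(dim=(2, 3), keepdim=True) + eps
    # every patch is normalized with the same (thumbnail) statistics
    return (patch_feat - t_mean) / t_std
```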
In this paper, we aim to devise a universally versatile style transfer method capable of performing artistic, photo-realistic, and video style transfer jointly, without seeing videos during training. Previous single-frame methods impose a strong constraint on the whole image to maintain temporal consistency, which can be violated in many cases. Instead, we make a mild and reasonable assumption that global inconsistency is dominated by local inconsistencies, and devise a generic Contrastive Coherence Preserving Loss (CCPL) applied to local patches. CCPL can preserve the coherence of the content source during style transfer without degrading stylization. Moreover, it owns a neighbor-regulating mechanism, resulting in a vast reduction of local distortions and considerable improvement in visual quality. Aside from its superior performance on versatile style transfer, it can be easily extended to other tasks, such as image-to-image translation. In addition, to better fuse content and style features, we propose Simple Covariance Transformation (SCT) to effectively align the second-order statistics of the content features with the style features. Experiments demonstrate the effectiveness of the resulting model for versatile style transfer when armed with CCPL.
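For the second-order alignment that SCT targets, the generic whitening-and-coloring form is sketched below; the actual SCT module is likely simpler and different, so treat this only as the textbook covariance-alignment baseline it names.

```python
import torch

def covariance_align(content, style, eps=1e-5):
    """Whiten the content feature covariance and re-color it with the style
    covariance (generic WCT-style second-order alignment, for illustration)."""
    b, c, h, w = content.shape
    out = []
    for cf, sf in zip(content, style):
        cf = cf.reshape(c, -1)
        sf = sf.reshape(c, -1)
        cf = cf - cf.mean(dim=1, keepdim=True)
        s_mean = sf.mean(dim=1, keepdim=True)
        sf = sf - s_mean
        cov_c = cf @ cf.T / (cf.shape[1] - 1) + eps * torch.eye(c)
        cov_s = sf @ sf.T / (sf.shape[1] - 1) + eps * torch.eye(c)
        # eigen-decomposition based whitening and coloring
        ec, vc = torch.linalg.eigh(cov_c)
        es, vs = torch.linalg.eigh(cov_s)
        whiten = vc @ torch.diag(ec.clamp(min=eps).rsqrt()) @ vc.T
        color = vs @ torch.diag(es.clamp(min=eps).sqrt()) @ vs.T
        out.append((color @ whiten @ cf + s_mean).reshape(1, c, h, w))
    return torch.cat(out, dim=0)
```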
Neural style transfer (NST) is concerned with the artistic stylization of visual media. It can be described as the process of transferring the style of an artistic image onto an ordinary photograph. Recently, a number of studies have considered enhancing the depth-preserving capabilities of NST algorithms to address undesired effects that occur when the input content images contain numerous objects at various depths. Our approach uses a deep residual convolutional network with instance normalization layers, which leverages an advanced depth prediction network to integrate depth preservation as an additional loss function alongside content and style. We demonstrate results that effectively preserve the depth and global structure of content images. Three different evaluation procedures show that our system can preserve the structure of the stylized results while exhibiting style-capture capability and aesthetic quality comparable or superior to state-of-the-art methods. Project page: https://ioannoue.github.io/depth-aware-nst-using-in.html.
We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.
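A minimal sketch of a feature-reconstruction (perceptual) loss using a fixed pretrained VGG-16 from torchvision; the chosen layer slice and the omission of a Gram-based style term are simplifications for illustration, not the paper's exact setup.

```python
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen feature extractor (roughly up to relu3_3; the layer choice is illustrative).
_vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(output, target):
    # output, target: (B, 3, H, W) images, already normalized as VGG expects
    return F.mse_loss(_vgg(output), _vgg(target))
```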
Recently, attention-based arbitrary style transfer methods have been proposed to achieve fine-grained results, manipulating the point-wise similarity between content and style features. However, the attention mechanism based on feature points ignores the feature multi-manifold distribution, where each feature manifold corresponds to a semantic region in the image. Consequently, a uniform content semantic region is rendered with highly different patterns from various style semantic regions, producing inconsistent stylization results with visual artifacts. We propose Progressive Attentional Manifold Alignment (PAMA) to alleviate this problem, which repeatedly applies attention operations and space-aware interpolations. The attention operation rearranges the style features according to the spatial distribution of the content features. This makes the content and style manifolds correspond on the feature map. Then the space-aware interpolation adaptively interpolates between the corresponding content and style manifolds to increase their similarity. By gradually aligning the content manifolds to the style manifolds, the proposed PAMA achieves state-of-the-art performance while avoiding inconsistency across semantic regions. Code is available at https://github.com/computer-vision2022/pama.
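A rough sketch of the attention step described above: style feature vectors are redistributed over content positions according to content-style similarity via softmax attention. The projection layers and the space-aware interpolation of PAMA are omitted, so this is only an illustrative baseline.

```python
import torch
import torch.nn.functional as F

def attention_rearrange(content, style):
    # content: (B, C, Hc, Wc), style: (B, C, Hs, Ws) feature maps
    b, c, h, w = content.shape
    q = content.reshape(b, c, -1).permute(0, 2, 1)           # (B, HcWc, C)
    k = style.reshape(b, c, -1)                              # (B, C, HsWs)
    v = style.reshape(b, c, -1).permute(0, 2, 1)             # (B, HsWs, C)
    attn = F.softmax(torch.bmm(q, k) / (c ** 0.5), dim=-1)   # (B, HcWc, HsWs)
    out = torch.bmm(attn, v).permute(0, 2, 1).reshape(b, c, h, w)
    return out
```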
Neural style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image. It was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach are that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate neural style transfer and increase its resolution, but they all compromise the quality of the produced images. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution images, enabling multiscale style transfer at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward pass through the VGG network. Extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high resolution painting styles.