Automatic image colorization is a hot topic in the multimedia domain. Inspired by the human ability to learn, we propose an automatic colorization method built on a learning framework. The method can be viewed as a hybrid of exemplar-based and learning-based approaches, and it decouples the colorization process from the learning process, so it can generate various color styles for the same gray image. The matching process in exemplar-based colorization can be regarded as a parameterized function, and we adopt a large set of color images as training samples to fit its parameters. During training, the color images serve as ground truth, and we learn the optimal parameters of the matching function by minimizing its matching error. To handle images with varied compositions, a global feature is introduced, which can be used to classify images with respect to their composition; the optimal matching parameters for each image category are then learned separately. Moreover, a post-processing step based on spatial consistency is designed to smooth the color information extracted from the reference image and to remove matching errors. Extensive experiments validate the effectiveness of the method, which achieves performance comparable to state-of-the-art colorization algorithms.
translated by Google Translate
Image or video appearance features (e.g., color, texture, tone, and illumination) shape a viewer's visual perception and first impression of an image or video. Given a source image (video) and a target image (video), image (video) color transfer techniques aim to process the colors of the source image or video to make it look like the target image or video (note that the source image or video is also referred to as the reference image or video in some literature); that is, they transfer the appearance of the target onto the source, thereby changing how the source is perceived. As an extension of color transfer, style transfer refers to rendering the content of a target image or video in the style of a style example, such as a sample image or a set of images by an artist, through a style transfer model. As an emerging field, research on style transfer has attracted the attention of many researchers. After decades of development, it has become a highly interdisciplinary line of research that enables a wide variety of artistic expression. This paper surveys color transfer and style transfer methods from the past few years.
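As a concrete reference point for the global color-transfer methods surveyed above, the classic scheme of Reinhard et al. simply matches the per-channel statistics of the source image to those of the target. A minimal sketch (operating directly on RGB for brevity, whereas the original works in a decorrelated lαβ space; the image data here is synthetic):

```python
import numpy as np

def color_transfer(source, target):
    """Adjust `source` so its per-channel statistics match `target`'s.

    Both images are float arrays of shape (H, W, 3) in [0, 1]. Classic
    global color transfer (after Reinhard et al.) performs this matching
    in a decorrelated space such as lαβ; for brevity we operate directly
    on the given channels.
    """
    src_mean, src_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1))
    tgt_mean, tgt_std = target.mean(axis=(0, 1)), target.std(axis=(0, 1))
    # Normalize the source's channels, then re-scale to the target statistics.
    out = (source - src_mean) / (src_std + 1e-8) * tgt_std + tgt_mean
    return np.clip(out, 0.0, 1.0)

# Toy example: a reddish source re-toned to match a bluish target.
rng = np.random.default_rng(0)
source = np.clip(rng.normal([0.7, 0.3, 0.3], 0.05, (32, 32, 3)), 0, 1)
target = np.clip(rng.normal([0.3, 0.3, 0.7], 0.05, (32, 32, 3)), 0, 1)
result = color_transfer(source, target)
```

After the transfer, the result keeps the source's spatial structure but carries the target's global color statistics.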
Exemplar-based colorization methods rely on a reference image to provide plausible colors for a target grayscale image. The key and the difficulty of exemplar-based colorization lie in establishing an accurate correspondence between these two images. Previous methods have attempted to construct such a correspondence but face two obstacles. First, using luminance channels to compute the correspondence is inaccurate. Second, the dense correspondence they build introduces wrong matching results and raises the computational burden. To overcome these two limitations, we propose a Semantic-Sparse Colorization Network (SSCN) that transfers both the global image style and detailed semantic-related colors to the grayscale image in a coarse-to-fine manner. Our network fully balances global and local colors while alleviating the ambiguous matching problem. Experiments show that our method outperforms existing methods in both quantitative and qualitative evaluations, achieving state-of-the-art performance.
Colorization is a computer-assisted process that aims to give color to a gray image or video. It can be used to enhance black-and-white material, including monochrome photographs, vintage films, and scientific imaging results. Conversely, decolorization converts a color image or video to grayscale. A grayscale image or video carries only luminance information and no color information. It is the basis of some downstream image-processing applications, such as pattern recognition, image segmentation, and image enhancement. Unlike image decolorization, video decolorization should not only consider contrast preservation within each video frame but also respect temporal and spatial consistency across frames. Researchers have devoted themselves to developing decolorization methods that balance spatio-temporal consistency and algorithmic efficiency. With the popularity of digital cameras and mobile phones, researchers have paid increasing attention to image and video colorization and decolorization. This paper surveys the progress of image and video colorization and decolorization methods over the last two decades.
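A minimal illustration of the decolorization direction discussed above: the standard luma-weighted conversion (ITU-R BT.601 weights), which is the naive baseline that contrast-preserving decolorization methods improve upon:

```python
import numpy as np

def rgb_to_luminance(rgb):
    """Naive decolorization: the luma-weighted channel sum (ITU-R BT.601).

    `rgb` is an array whose last axis holds (R, G, B) in [0, 1]. This is
    the baseline that contrast-preserving decolorization methods improve
    on: two different colors with the same weighted sum (isoluminant
    colors) collapse to the same gray value, losing visible contrast.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

red = np.array([1.0, 0.0, 0.0])
blue = np.array([0.0, 0.0, 1.0])
gray_red = rgb_to_luminance(red)    # 0.299
gray_blue = rgb_to_luminance(blue)  # 0.114
# Works on whole images too: the last (channel) axis is reduced away.
image = np.stack([np.full((4, 4, 3), 0.5), np.zeros((4, 4, 3))])
gray_image = rgb_to_luminance(image)  # shape (2, 4, 4)
```

Pure red and pure blue map to different grays here, but any isoluminant pair would collapse to one value, which is precisely the contrast-loss problem the surveyed methods address.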
We propose UniColor, the first unified framework to support colorization in multiple modalities, including both unconditional colorization and conditions such as strokes, exemplars, text, and even mixtures of them. Rather than learning a separate model for each type of condition, we introduce a two-stage colorization framework that incorporates various conditions into a single model. In the first stage, multi-modal conditions are converted into a common representation of hint points. In particular, we propose a novel CLIP-based method to convert text into hint points. In the second stage, we propose a Transformer-based network, composed of Chroma-VQGAN and Hybrid-Transformer, to generate diverse and high-quality colorization results conditioned on the hint points. Both qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art methods in every control modality and further enables multi-modal colorization that was not feasible before. Moreover, we design an interactive interface that shows the effectiveness of our unified framework in practical use, including automatic colorization, hybrid-control colorization, local recolorization, and iterative color editing. Our code and models are available at https://luckyhzt.github.io/unicolor.
Deep learning based methods have significantly boosted the study of automatic building extraction from remote sensing images. However, delineating vectorized and regular building contours like a human does remains very challenging, due to the difficulty of the methodology, the diversity of building structures, and the imperfect imaging conditions. In this paper, we propose the first end-to-end learnable building contour extraction framework, named BuildMapper, which can directly and efficiently delineate building polygons just as a human does. BuildMapper consists of two main components: 1) a contour initialization module that generates initial building contours; and 2) a contour evolution module that performs both contour vertex deformation and reduction, which removes the need for complex empirical post-processing used in existing methods. In both components, we provide new ideas, including a learnable contour initialization method to replace the empirical methods, dynamic predicted and ground truth vertex pairing for the static vertex correspondence problem, and a lightweight encoder for vertex information extraction and aggregation, which benefit a general contour-based method; and a well-designed vertex classification head for building corner vertices detection, which casts light on direct structured building contour extraction. We also built a suitable large-scale building dataset, the WHU-Mix (vector) building dataset, to benefit the study of contour-based building extraction methods. The extensive experiments conducted on the WHU-Mix (vector) dataset, the WHU dataset, and the CrowdAI dataset verified that BuildMapper can achieve a state-of-the-art performance, with a higher mask average precision (AP) and boundary AP than both segmentation-based and contour-based methods.
Arbitrary neural style transfer is an important topic with both research value and industrial application prospects; it aims to render the structure of one image using the style of another. Recent studies have been devoted to the task of arbitrary style transfer (AST) to improve stylization quality. However, quality assessment of AST images has rarely been explored, even though it could guide the design of different algorithms. In this paper, we first construct a new AST image quality assessment database (AST-IQAD), which includes 150 content-style image pairs and the corresponding 1200 stylized images produced by eight typical AST algorithms. Then, a subjective study was conducted on our AST-IQAD database, obtaining subjective rating scores for all stylized images on three evaluation dimensions, i.e., content preservation (CP), style resemblance (SR), and overall vision (OV). To quantitatively measure the quality of AST images, we propose a new sparse-representation-based image quality evaluation metric (SRQE), which computes quality using sparse feature similarity. Experimental results on AST-IQAD demonstrate the superiority of the proposed method. The dataset and source code will be released at https://github.com/hangwei-chen/ast-iqad-srqe
We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.
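The abstract above notes that the predicted per-pixel color histograms "can be used to automatically generate a color image". One standard way to collapse such a prediction to a single color per pixel (a hedged sketch, not necessarily the paper's exact procedure) is a temperature-sharpened expectation over the histogram's bin centers; the bin layout below is a toy assumption:

```python
import numpy as np

def histogram_to_color(hist, bin_centers, temperature=1.0):
    """Collapse a per-pixel color histogram to one color value.

    `hist` has shape (H, W, K): a probability distribution over K
    quantized color bins at each pixel. `bin_centers` has shape (K, C)
    with the representative color of each bin. Sharpening the
    distribution with a temperature < 1 before taking the expectation
    interpolates between the mode (t -> 0) and the plain mean (t = 1).
    """
    logits = np.log(hist + 1e-12) / temperature
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ bin_centers  # shape (H, W, C)

# Toy example: 4 bins on a 1-D "hue" axis, one 1x1 image.
centers = np.array([[0.0], [0.25], [0.5], [0.75]])
hist = np.array([[[0.1, 0.6, 0.2, 0.1]]])
color = histogram_to_color(hist, centers)          # plain expectation
sharp = histogram_to_color(hist, centers, 0.05)    # close to the mode
```

At temperature 1 the result is the histogram mean (which can desaturate multimodal predictions); at low temperature it approaches the most likely bin.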
Interest point detection is one of the most fundamental and critical problems in computer vision and image processing. In this paper, we present a comprehensive review of image feature information (IFI) extraction techniques for interest point detection. To systematically introduce how existing interest point detection methods extract IFI from an input image, we propose a taxonomy of IFI extraction techniques for interest point detection. Following this taxonomy, we discuss the different types of IFI extraction techniques for interest point detection. Furthermore, we identify the main unresolved issues related to existing IFI extraction techniques, as well as interest point detection methods that have not been discussed before. Existing popular datasets and evaluation standards are provided, and the performance of eighteen state-of-the-art methods is evaluated and discussed. Moreover, future research directions on IFI extraction techniques are elaborated.
We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval. The EMD is based on the minimal cost that must be paid to transform one distribution into the other, in a precise sense, and was first proposed for certain vision problems by Peleg, Werman, and Rom. For image retrieval, we combine this idea with a representation scheme for distributions that is based on vector quantization. This combination leads to an image comparison framework that often accounts for perceptual similarity better than other previously proposed methods. The EMD is based on a solution to the transportation problem from linear optimization, for which efficient algorithms are available, and also allows naturally for partial matching. It is more robust than histogram matching techniques, in that it can operate on variable-length representations of the distributions that avoid quantization and other binning problems typical of histograms. When used to compare distributions with the same overall mass, the EMD is a true metric. In this paper we focus on applications to color and texture, and we compare the retrieval performance of the EMD with that of other distances.
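The transportation-problem formulation described above can be written directly as a linear program. A small didactic sketch using SciPy's generic LP solver (production EMD code would use a specialized transportation solver instead):

```python
import numpy as np
from scipy.optimize import linprog

def emd(weights_p, weights_q, cost):
    """Earth Mover's Distance between two equal-mass signatures.

    Solves the transportation LP: minimize sum_ij f_ij * cost_ij subject
    to row sums = weights_p, column sums = weights_q, and f_ij >= 0.
    `cost[i, j]` is the ground distance between bin i of P and bin j of Q.
    """
    m, n = len(weights_p), len(weights_q)
    # One equality constraint per row and per column of the flow matrix,
    # acting on the flattened flow vector f (row-major, index i*n + j).
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # total flow out of P's bin i
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # total flow into Q's bin j
    b_eq = np.concatenate([weights_p, weights_q])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

# Two 1-D histograms over bins at positions 0, 1, 2.
positions = np.array([0.0, 1.0, 2.0])
cost = np.abs(positions[:, None] - positions[None, :])
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 0.0, 1.0])
d = emd(p, q, cost)  # all mass moves distance 2
```

With equal total mass, as the abstract notes, this quantity is a true metric: it is zero between identical distributions and symmetric in its arguments.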
Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets. While such generic features cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.
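The surrogate-class construction can be sketched as follows. This is a deliberately simplified version using only flips, 90-degree rotations, and multiplicative color jitter; the original work draws transformations such as translation, scaling, rotation, and contrast/color changes from a richer family:

```python
import numpy as np

def make_surrogate_class(seed_patch, n_samples, rng):
    """Form one surrogate class: random transformations of a seed patch.

    All transformed copies share a single class label, so a network
    trained to discriminate between surrogate classes learns features
    that are invariant to the applied transformations.
    """
    samples = []
    for _ in range(n_samples):
        patch = seed_patch.copy()
        if rng.random() < 0.5:
            patch = patch[:, ::-1]                     # horizontal flip
        patch = np.rot90(patch, k=rng.integers(0, 4))  # random 90-degree rotation
        jitter = rng.uniform(0.7, 1.3, size=3)         # per-channel color scale
        patch = np.clip(patch * jitter, 0.0, 1.0)
        samples.append(patch)
    return np.stack(samples)

rng = np.random.default_rng(0)
seed = rng.random((32, 32, 3))  # stand-in for a randomly sampled image patch
surrogate_class = make_surrogate_class(seed, n_samples=8, rng=rng)
```

Training data is then the union of many such classes, one per seed patch, with the seed's index as the label.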
Unsupervised deep learning has recently demonstrated promise in producing high-quality samples. Although it has great potential to facilitate image colorization tasks, performance is limited by the high dimensionality of the data manifold and the demands on model capacity. This study proposes a novel scheme that exploits a score-based generative model in the wavelet domain to address these issues. By taking advantage of the multi-scale and multi-channel representation provided by the wavelet transform, the model can jointly and effectively learn richer priors from stacked coarse wavelet coefficient components. This strategy also reduces the dimensionality of the original manifold and alleviates the curse of dimensionality, which benefits both estimation and sampling. Moreover, dual consistency terms in the wavelet domain, namely data consistency and structure consistency, are designed to better exploit the colorization task. Specifically, in the training phase, a set of multi-channel tensors consisting of wavelet coefficients is used as the input for training the network via denoising score matching. In the inference phase, samples are generated iteratively via annealed Langevin dynamics with data and structure consistency. Experiments demonstrate remarkable improvements of the proposed method in terms of generation and colorization quality, especially in colorization robustness and diversity.
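The annealed Langevin dynamics used in the inference phase above can be sketched on a toy analytic score function: a 1-D standard normal target, for which the noise-perturbed score is known in closed form. The step-size schedule and constants are illustrative, not the paper's:

```python
import numpy as np

def annealed_langevin(score, x, sigmas, steps_per_level=50, base_lr=0.01, rng=None):
    """Annealed Langevin dynamics sampling.

    Runs Langevin updates x <- x + (a/2) * score(x, sigma) + sqrt(a) * z
    at a decreasing sequence of noise levels `sigmas`, with the step size
    `a` scaled per level as in score-based generative models.
    """
    if rng is None:
        rng = np.random.default_rng()
    for sigma in sigmas:
        alpha = base_lr * (sigma / sigmas[-1]) ** 2
        for _ in range(steps_per_level):
            z = rng.standard_normal(x.shape)
            x = x + 0.5 * alpha * score(x, sigma) + np.sqrt(alpha) * z
    return x

# Toy target: standard normal. Its sigma-perturbed density is N(0, 1 + sigma^2),
# whose score is -x / (1 + sigma^2).
toy_score = lambda x, sigma: -x / (1.0 + sigma ** 2)
sigmas = np.geomspace(10.0, 0.1, num=10)
samples = annealed_langevin(toy_score, np.zeros(5000), sigmas,
                            rng=np.random.default_rng(0))
```

Each element of `x` is an independent chain, so the final array approximates 5000 draws from the (slightly noise-broadened) target distribution. In the paper this loop would run on wavelet-coefficient tensors with a learned score network and the two consistency projections interleaved.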
Recent progress on salient object detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs). Semantic segmentation and salient object detection algorithms developed lately have been mostly based on Fully Convolutional Neural Networks (FCNs). There is still a large room for improvement over the generic FCN models that do not explicitly deal with the scale-space problem. Holistically-Nested Edge Detector (HED) provides a skip-layer structure with deep supervision for edge and boundary detection, but the performance gain of HED on saliency detection is not obvious. In this paper, we propose a new salient object detection method by introducing short connections to the skip-layer structures within the HED architecture. Our framework takes full advantage of multi-level and multi-scale features extracted from FCNs, providing more advanced representations at each layer, a property that is critically needed to perform segment detection. Our method produces state-of-the-art results on 5 widely tested salient object detection benchmarks, with advantages in terms of efficiency (0.08 seconds per image), effectiveness, and simplicity over the existing algorithms. Beyond that, we conduct an exhaustive analysis on the role of training data on performance. Our experimental results provide a more reasonable and powerful training set for future research and fair comparisons.
Depth information is useful in many image-processing applications. However, since taking a picture is the process of projecting a 3D scene onto a 2D imaging sensor, depth information is embedded in the image. Extracting depth information from an image is a challenging task. A guiding principle is that the level of blur due to defocus is related to the distance between an object and the focal plane. Based on this principle and the widely used assumption that Gaussian blur is a good model for defocus blur, we formulate the problem of spatially varying defocus blur as a Gaussian blur classification problem. We solve the problem by training a deep neural network to classify an image patch into one of 20 blur levels. We created a dataset of more than 500,000 image patches of size $32\times32$ for training and testing several well-known network models. We find that MobileNetV2 is suitable for this application due to its low memory requirement and high accuracy. The trained model is used to determine patch-level blur, which is refined by applying an iterative weighted guided filter. The result is a defocus map that carries the blur-level information of every pixel. We compare the proposed method with state-of-the-art techniques and demonstrate its successful applications in adaptive image enhancement, defocus magnification, and multi-focus image fusion.
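The Gaussian-blur classification setup described above can be sketched as a dataset-generation routine: crop patches and label each with the index of the Gaussian sigma applied to it. The sigma grid and the random patch source below are illustrative, not the paper's actual 500,000-patch dataset:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_blur_dataset(image, n_patches, n_classes=20, patch=32, rng=None):
    """Build (patch, label) pairs for blur-level classification.

    Each sample is a `patch x patch` crop blurred with one of `n_classes`
    Gaussian sigma levels; the label is the index of that level. The
    sigma grid here (0.1 to 3.0) is illustrative, not the paper's.
    """
    if rng is None:
        rng = np.random.default_rng()
    sigmas = np.linspace(0.1, 3.0, n_classes)
    h, w = image.shape
    xs, ys = [], []
    for _ in range(n_patches):
        label = rng.integers(n_classes)
        # Blur the whole image, then crop; wasteful but simple for a sketch.
        blurred = gaussian_filter(image, sigma=sigmas[label])
        top = rng.integers(h - patch + 1)
        left = rng.integers(w - patch + 1)
        xs.append(blurred[top:top + patch, left:left + patch])
        ys.append(label)
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
image = rng.random((128, 128))  # stand-in for a sharp grayscale photograph
patches, labels = make_blur_dataset(image, n_patches=16, rng=rng)
```

A classifier such as MobileNetV2 is then trained on these pairs; at inference, classifying overlapping patches yields the raw patch-level blur estimates that the guided filter refines into a per-pixel defocus map.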
When imaging through a semi-transparent medium such as glass, a reflection of another scene can often be found in the captured image. It degrades the quality of the image and affects its subsequent analyses. In this paper, a novel deep-neural-network approach is proposed to solve the reflection problem in imaging. Traditional reflection removal methods not only require long computation times to solve different optimization functions, but their performance is also not guaranteed. Since array cameras are readily available in today's imaging devices, we first propose in this paper a multi-image-based depth estimation method using a convolutional neural network (CNN). The proposed network avoids the depth ambiguity problem caused by the reflection in the image and directly estimates depths along image edges. These depths are then used to classify edges as belonging to the background or the reflection. Since edges with similar depth values are error-prone in the classification, they are removed from the reflection removal process. We propose a generative adversarial network (GAN) to regenerate the removed background edges. Finally, the estimated background edge map is fed to another auto-encoder network to assist in extracting the background from the original image. Experimental results show that the proposed reflection removal algorithm achieves superior performance both quantitatively and qualitatively compared with state-of-the-art methods. The proposed algorithm also runs much faster than existing methods that rely on traditional optimization.
As a common image editing operation, image composition aims to cut the foreground from one image and paste it onto another, producing a composite image. However, many issues can make the composite image unrealistic. These issues can be summarized as inconsistencies between foreground and background, including appearance inconsistency (e.g., incompatible illumination), geometry inconsistency (e.g., unreasonable size), and semantic inconsistency (e.g., mismatched semantic context). Prior works divide the image composition task into multiple sub-tasks, each targeting one or more issues. Specifically, object placement aims to find a reasonable scale, location, and shape for the foreground. Image blending aims to address the unnatural boundary between foreground and background. Image harmonization aims to adjust the illumination statistics of the foreground. Shadow generation aims to produce a plausible shadow for the foreground. By putting all of the above efforts together, we can obtain realistic composite images. To the best of our knowledge, there is no previous survey on image composition. In this paper, we conduct a comprehensive survey of the sub-tasks of image composition. For each sub-task, we summarize traditional methods, deep-learning-based methods, datasets, and evaluation. We also point out the limitations of existing methods in each sub-task as well as the problems of the whole image composition task. Datasets and code for image composition are summarized at https://github.com/bcmi/awesome-image-composition.
Deep-learning-based pavement crack detection methods usually require large-scale labels with detailed crack location information to learn accurate predictions. In practice, however, crack locations are hard to annotate manually due to the diverse visual patterns of pavement cracks. In this paper, we propose a Deep Domain Adaptation-based Crack Detection Network (DDACDN), which learns to exploit source-domain knowledge to predict multi-category crack location information in the target domain, where only image-level labels are available. Specifically, DDACDN first extracts crack features from both the source and target domains through a two-branch weight-sharing backbone network. Then, in an effort to achieve cross-domain adaptation, an intermediate domain is constructed by aggregating three-scale features from the feature space of each domain, adapting the crack features from the source domain to the target domain. Finally, the network draws on the knowledge of both domains and is trained to recognize and localize pavement cracks. To facilitate accurate training and validation of the domain adaptation, we use two challenging pavement crack datasets, CQU-BPDD and RDD2020. Furthermore, we construct a novel large-scale asphalt pavement multi-label disease dataset named CQU-BPMDD, which contains 38,994 high-resolution pavement disease images, to further evaluate the robustness of the model. Extensive experiments demonstrate that DDACDN outperforms state-of-the-art pavement crack detection methods in predicting crack locations in the target domain.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
The aesthetic quality of an image is defined as a measure or appreciation of the beauty of the image. Aesthetics is inherently a subjective property, but there are factors that influence it, such as the semantic content of the image, attributes describing its artistic aspects, the photographic settings used to shoot it, and so on. In this paper, we propose a method for automatically predicting the aesthetics of an image based on the analysis of its semantic content, artistic style, and composition. The proposed network consists of: a pre-trained network for semantic feature extraction (the Backbone); a multi-layer perceptron (MLP) network that relies on the Backbone features to predict image attributes (the AttributeNet); and an adaptive hypernetwork that exploits the attributes previously encoded into the embedding generated by the AttributeNet to predict the parameters of a target network dedicated to aesthetics estimation (the AestheticNet). Given an image, the proposed multi-network is able to predict style and composition attributes as well as an aesthetic score distribution. Results on three benchmark datasets demonstrate the effectiveness of the proposed method, while an ablation study provides a better understanding of the proposed network.
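The hypernetwork idea above, i.e. one network emitting the parameters of another, can be sketched in a few lines of NumPy. Here a single linear hypernetwork maps an attribute embedding to the flat parameter vector of a tiny scoring MLP; all shapes, names, and the single-layer hypernetwork are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def hypernetwork_forward(features, attr_embedding, hyper_W, hyper_b, hidden=8):
    """Sketch of a hypernetwork-conditioned scoring head.

    The hypernetwork (a single linear layer `hyper_W`, `hyper_b`) maps an
    attribute embedding to the flat parameter vector of a tiny target MLP
    (features -> hidden -> score), which then scores the image features.
    """
    d = features.shape[-1]
    params = attr_embedding @ hyper_W + hyper_b  # flat target-net parameters
    # Unpack into target-network weights: W1 (d, hidden), b1, W2 (hidden,), b2.
    i = 0
    W1 = params[i:i + d * hidden].reshape(d, hidden); i += d * hidden
    b1 = params[i:i + hidden]; i += hidden
    W2 = params[i:i + hidden]; i += hidden
    b2 = params[i]
    h = np.maximum(features @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                       # scalar aesthetic score

rng = np.random.default_rng(0)
d, e, hidden = 16, 4, 8
n_params = d * hidden + hidden + hidden + 1
hyper_W = rng.normal(0, 0.1, (e, n_params))
hyper_b = rng.normal(0, 0.1, n_params)
features = rng.random(d)
# Different attribute embeddings yield different scoring functions
# for the very same image features.
score_a = hypernetwork_forward(features, rng.random(e), hyper_W, hyper_b)
score_b = hypernetwork_forward(features, rng.random(e), hyper_W, hyper_b)
```

The design point this illustrates: the attribute embedding does not merely feed the scorer an extra input, it reparameterizes the scorer itself, so the aesthetics head adapts per image.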
Recent years witnessed the breakthrough of face recognition (FR) with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in the industrial community and play an important role in human life, such as device unlock, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to find the effect of backbone size and data distribution. This survey is supporting material for the tutorial named The Practical Face Recognition Technology in the Industrial World at FG2023.