The key research question for image manipulation detection is how to learn generalizable features that are sensitive to manipulations in novel data, whilst specific enough to prevent false alarms on authentic images. Current research emphasizes the sensitivity, with the specificity mostly ignored. In this paper we address both aspects by multi-view feature learning and multi-scale supervision. By exploiting the noise distribution and boundary artifacts surrounding tampered regions, the former aims to learn semantic-agnostic and thus more generalizable features. The latter allows us to learn from authentic images, which is non-trivial for prior art that relies on a semantic segmentation loss. Our ideas are realized in a new network, which we term MVSS-Net, and its enhanced version MVSS-Net++. Comprehensive experiments on six public benchmark datasets justify the viability of the MVSS-Net series for both pixel-level and image-level manipulation detection.
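The abstract does not give a loss formulation, but the interplay of pixel-level segmentation and image-level supervision it describes can be sketched as below; the Dice term, the global-max-pooling image score, and the weight alpha are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss; pred and target are (B, 1, H, W), pred in [0, 1]."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def manipulation_loss(mask_logits, mask_gt, alpha=0.2):
    """Pixel-level Dice plus an image-level BCE term driven by global max
    pooling, so authentic images (all-zero masks) still supervise the
    network. `alpha` is an illustrative weight, not from the paper."""
    prob = torch.sigmoid(mask_logits)
    pixel_term = dice_loss(prob, mask_gt)
    # Image-level score: the strongest pixel response decides the label.
    img_score = torch.amax(prob, dim=(2, 3)).squeeze(1).clamp(1e-6, 1 - 1e-6)
    img_label = (mask_gt.sum(dim=(1, 2, 3)) > 0).float()
    image_term = F.binary_cross_entropy(img_score, img_label)
    return pixel_term + alpha * image_term
```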
With the rapid advances of image editing techniques in recent years, image manipulation detection has attracted considerable attention owing to the increasing security risks posed by tampered images. To address these challenges, a novel multi-scale multi-grained deep network (MSMG-Net) is proposed to automatically identify manipulated regions. In MSMG-Net, a parallel multi-scale feature extraction structure is used to extract multi-scale features. Multi-grained feature learning is then employed to perceive the object-level semantic relations of the multi-scale features by introducing shunted self-attention. To fuse multi-scale multi-grained features, global and local feature fusion blocks are designed for manipulated-region segmentation in a bottom-up approach, and a multi-level feature aggregation block is designed for edge artifact detection in a top-down approach. Thus, MSMG-Net can effectively perceive object-level semantics and encode edge artifacts. Experimental results on five benchmark datasets justify the superior performance of the proposed method, which outperforms state-of-the-art manipulation detection and localization methods. Extensive ablation experiments and feature visualizations demonstrate that multi-scale multi-grained learning can produce effective visual representations of manipulated regions. In addition, MSMG-Net shows better robustness when images are further manipulated by various post-processing methods.
Image manipulation localization aims at distinguishing forged regions from the whole test image. Although many outstanding prior works have been proposed for this task, two issues still need further study: 1) how to fuse diverse types of features containing forgery clues; 2) how to progressively integrate multistage features for better localization performance. In this paper, we propose a tripartite progressive integration network (TriPINet) for end-to-end image manipulation localization. First, we extract both visual perception information, e.g., RGB input images, and visually imperceptible features, e.g., frequency and noise traces, for forensic feature learning. Second, we develop a guided cross-modality dual-attention (gCMDA) module to fuse the different types of forgery clues. Third, we design a set of progressive integration squeeze-and-excitation (PI-SE) modules to improve localization performance by appropriately incorporating multiscale features in the decoder. Extensive experiments are conducted to compare our method with state-of-the-art image forensics approaches. The proposed TriPINet obtains competitive results on several benchmark datasets.
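The abstract names but does not define the PI-SE modules; the squeeze-and-excitation primitive they build on is standard (Hu et al.), and a minimal version is sketched below, with the channel counts and reduction ratio as illustrative choices.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation: channel-wise re-weighting from
    globally pooled statistics. The PI-SE modules in TriPINet build on
    this primitive; their exact wiring is not given in the abstract."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool
        return x * w.view(b, c, 1, 1)     # excite: per-channel gating
```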
Forensic analysis depends on identifying hidden traces in manipulated images. Traditional neural networks fail at this task because they cannot handle feature attenuation and rely on dominant spatial features. In this work, we propose a novel Gated Context Attention Network (GCA-Net) that uses a non-local attention block for global context learning. In addition, we combine the gated attention mechanism with a dense decoder network to direct the flow of relevant features during the decoding phase, allowing for precise localization. The proposed attention framework lets the network focus on relevant regions by filtering out coarse features. Furthermore, by exploiting multi-scale feature fusion and efficient learning strategies, GCA-Net can better handle the scale variation of manipulated regions. We show that our method outperforms state-of-the-art networks by an average of 4.2%-5.4% AUC on multiple benchmark datasets. Finally, we conduct extensive ablation experiments to demonstrate the method's robustness for image forensics.
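The non-local attention block the network relies on for global context is a standard primitive (Wang et al., 2018); a minimal sketch follows, omitting GCA-Net's gating, which the abstract does not detail.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Non-local attention: every spatial position attends to every other
    position, giving the global context GCA-Net builds on."""
    def __init__(self, channels):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                     # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection
```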
To defend against manipulation of image content, such as splicing, copy-move, and removal, we develop a Progressive Spatio-Channel Correlation Network (PSCC-Net) to detect and localize image manipulations. PSCC-Net processes the image in a two-path procedure: a top-down path that extracts local and global features, and a bottom-up path that detects whether the input image is manipulated and estimates its manipulation masks at multiple scales, where each mask is conditioned on the previous one. Different from conventional encoder-decoder and no-pooling structures, PSCC-Net leverages features at different scales with dense cross-connections to produce manipulation masks in a coarse-to-fine fashion. Moreover, a Spatio-Channel Correlation Module (SCCM) captures both spatial and channel-wise correlations in the bottom-up path, endowing the features with holistic cues that enable the network to cope with a wide range of manipulation attacks. Thanks to its light-weight backbone and progressive mechanism, PSCC-Net can process 1,080p images at 50+ FPS. Extensive experiments demonstrate the superiority of PSCC-Net over state-of-the-art methods in both detection and localization.
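The abstract only names the SCCM, so the sketch below is a minimal reading of "spatial and channel-wise correlation" as two affinity-based self-attention passes; every detail is an assumption, and it should not be read as the published module.

```python
import torch
import torch.nn as nn

class SpatioChannelCorrelation(nn.Module):
    """Illustrative sketch: residual self-attention over channels (b, c, c)
    and over positions (b, hw, hw). Parameter-free; the spatial pass is
    quadratic in hw, so it suits small feature maps only."""
    def forward(self, x):
        b, c, h, w = x.shape
        f = x.flatten(2)                                 # (b, c, hw)
        # Channel correlation: channel-by-channel affinity re-mixes channels.
        chan = torch.softmax(f @ f.transpose(1, 2), -1) @ f
        # Spatial correlation: position-by-position affinity re-mixes positions.
        g = f.transpose(1, 2)                            # (b, hw, c)
        spat = (torch.softmax(g @ g.transpose(1, 2), -1) @ g).transpose(1, 2)
        return x + chan.view(b, c, h, w) + spat.view(b, c, h, w)
```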
Unlike ordinary computer vision tasks, image manipulation detection focuses less on the semantic content of an image and more on the subtle traces left by manipulation. In this paper, the noise image extracted by an improved constrained convolution is used as the model input instead of the original image, in order to capture more subtle manipulation traces. Meanwhile, a dual-branch network composed of a high-resolution branch and a context branch is used to capture the artifact traces as fully as possible. In general, most manipulations leave artifacts along the manipulation edges; a specially designed manipulation-edge detection module is built on top of the dual-branch network to better identify these artifacts. The correlation between pixels in an image is closely related to their distance: the farther apart two pixels are, the weaker their correlation. We therefore add a distance factor to the self-attention module to better describe the correlation between pixels. Experimental results on four public image manipulation datasets demonstrate the effectiveness of our model.
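The abstract does not describe its "improved" variant, but the standard constrained (Bayar & Stamm) convolution it builds on is well documented; a sketch follows, with the improvement omitted and the kernel size chosen for illustration.

```python
import torch
import torch.nn as nn

class ConstrainedConv2d(nn.Conv2d):
    """Constrained convolution in the spirit of Bayar & Stamm: the centre
    tap of each kernel is fixed to -1 and the remaining taps are
    normalized to sum to 1, so the layer learns prediction-error (noise)
    residuals rather than image content."""
    def _project(self):
        with torch.no_grad():
            k = self.kernel_size[0] // 2
            w = self.weight
            w[:, :, k, k] = 0                      # zero the centre tap
            # Normalize off-centre taps to sum to 1 (assumes a non-zero sum,
            # which holds in practice after random initialization).
            w /= w.sum(dim=(2, 3), keepdim=True)
            w[:, :, k, k] = -1                     # fix centre tap to -1

    def forward(self, x):
        self._project()                            # re-apply the constraint
        return super().forward(x)

# Hypothetical usage: extract a noise map from an RGB image.
noise_layer = ConstrainedConv2d(3, 3, kernel_size=5, padding=2, bias=False)
noise_map = noise_layer(torch.randn(1, 3, 256, 256))
```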
In this paper we present TruFor, a forensic framework that can be applied to a large variety of image manipulation methods, from classic cheapfakes to more recent manipulations based on deep learning. We rely on the extraction of both high-level and low-level traces through a transformer-based fusion architecture that combines the RGB image and a learned noise-sensitive fingerprint. The latter learns to embed the artifacts related to the camera internal and external processing by training only on real data in a self-supervised manner. Forgeries are detected as deviations from the expected regular pattern that characterizes each pristine image. Looking for anomalies makes the approach able to robustly detect a variety of local manipulations, ensuring generalization. In addition to a pixel-level localization map and a whole-image integrity score, our approach outputs a reliability map that highlights areas where localization predictions may be error-prone. This is particularly important in forensic applications in order to reduce false alarms and allow for large-scale analysis. Extensive experiments on several datasets show that our method is able to reliably detect and localize both cheapfake and deepfake manipulations, outperforming state-of-the-art works. Code will be publicly available at https://grip-unina.github.io/TruFor/
In this paper, we focus on exploring effective methods for faster, accurate, and domain-agnostic semantic segmentation. Inspired by optical flow for motion alignment between adjacent video frames, we propose a Flow Alignment Module (FAM) to learn the semantic flow between feature maps of adjacent levels and to broadcast high-level features to high-resolution features effectively and efficiently. Furthermore, integrating our FAM with a common feature pyramid structure exhibits performance superior to other real-time methods, even on light-weight backbone networks such as ResNet-18 and DFNet. To further speed up inference, we also present a novel Gated Dual Flow Alignment Module that directly aligns high-resolution and low-resolution feature maps; we term this improved network SFNet-Lite. Extensive experiments are conducted on several challenging datasets, and the results show the effectiveness of both SFNet and SFNet-Lite. In particular, the SFNet-Lite series achieves 80.1 mIoU while running at 60 FPS with a ResNet-18 backbone, and 78.8 mIoU while running at 120 FPS with the STDC backbone on an RTX-3090. Moreover, we unify four challenging driving datasets (i.e., Cityscapes, Mapillary, IDD, and BDD) into one large dataset, which we name the Unified Driving Segmentation (UDS) dataset; it contains diverse domain and style information. We benchmark several representative works on UDS. SFNet and SFNet-Lite still achieve the best speed-accuracy trade-off on UDS, serving as strong baselines in this new challenging setting. All code and models are publicly available at https://github.com/lxtgh/sfsegnets.
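The core of flow-based alignment of the kind the FAM performs can be sketched minimally: predict a two-channel flow field from the two feature levels and warp the upsampled high-level features with it. The layer sizes, normalization, and the assumption that both inputs share a channel count are illustrative choices, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowAlign(nn.Module):
    """Sketch of semantic-flow alignment: a small conv predicts per-pixel
    offsets, and grid_sample warps the coarse features accordingly."""
    def __init__(self, channels):
        super().__init__()
        self.flow = nn.Conv2d(channels * 2, 2, kernel_size=3, padding=1)

    def forward(self, low, high):
        h, w = low.shape[2:]
        high_up = F.interpolate(high, size=(h, w), mode='bilinear',
                                align_corners=False)
        flow = self.flow(torch.cat([low, high_up], dim=1))    # (b, 2, h, w)
        # Build a sampling grid in [-1, 1] and shift it by the flow.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=low.device),
            torch.linspace(-1, 1, w, device=low.device), indexing='ij')
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)     # (1, h, w, 2)
        offset = flow.permute(0, 2, 3, 1) / torch.tensor(
            [w / 2.0, h / 2.0], device=low.device)            # pixels -> [-1, 1]
        return F.grid_sample(high_up, grid + offset, align_corners=False)
```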
Recent progress on salient object detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs). Semantic segmentation and salient object detection algorithms developed lately have been mostly based on Fully Convolutional Neural Networks (FCNs). There is still a large room for improvement over the generic FCN models that do not explicitly deal with the scale-space problem. Holistically-Nested Edge Detector (HED) provides a skip-layer structure with deep supervision for edge and boundary detection, but the performance gain of HED on saliency detection is not obvious. In this paper, we propose a new salient object detection method by introducing short connections to the skip-layer structures within the HED architecture. Our framework takes full advantage of multi-level and multi-scale features extracted from FCNs, providing more advanced representations at each layer, a property that is critically needed to perform segment detection. Our method produces state-of-the-art results on 5 widely tested salient object detection benchmarks, with advantages in terms of efficiency (0.08 seconds per image), effectiveness, and simplicity over the existing algorithms. Beyond that, we conduct an exhaustive analysis on the role of training data on performance. Our experimental results provide a more reasonable and powerful training set for future research and fair comparisons.
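A minimal sketch of the short-connection idea follows, assuming a generic four-stage backbone with illustrative channel counts: each side output receives the upsampled side outputs of all deeper stages before making its own prediction, so deep location cues refine shallow detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShortConnectionHead(nn.Module):
    """Sketch of HED-style side outputs with short connections from
    deeper to shallower stages; not the paper's exact connection set."""
    def __init__(self, stage_channels=(64, 128, 256, 512)):
        super().__init__()
        self.score = nn.ModuleList(
            nn.Conv2d(c, 1, kernel_size=1) for c in stage_channels)

    def forward(self, feats):
        # feats: list of stage features, ordered shallow to deep.
        sides = [s(f) for s, f in zip(self.score, feats)]
        outs = [sides[-1]]                         # deepest prediction first
        for i in range(len(sides) - 2, -1, -1):    # walk back to shallow
            deeper = [F.interpolate(o, size=sides[i].shape[2:],
                                    mode='bilinear', align_corners=False)
                      for o in outs]
            outs.append(sides[i] + sum(deeper))    # short connections
        return outs[::-1]                          # shallow-to-deep order
```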
Recently, CNN-based RGB-D salient object detection (SOD) has achieved significant improvements in detection accuracy. However, existing models often fail to perform well in terms of efficiency and accuracy simultaneously, which hinders their potential applications on mobile devices and in many real-world problems. In this paper, to bridge the accuracy gap between light-weight and large models for RGB-D SOD, we propose an efficient module that greatly improves accuracy while adding little computation. Inspired by the observation that depth quality is a key factor influencing accuracy, we propose an efficient depth quality-inspired feature manipulation (DQFM) process, which can dynamically filter depth features according to depth quality. The proposed DQFM resorts to the alignment of low-level RGB and depth features, as well as holistic attention of the depth stream, to explicitly control and enhance cross-modal fusion. We embed DQFM to obtain an efficient light-weight RGB-D SOD model called DFM-Net, for which we additionally design a tailored depth backbone and a two-stage decoder as basic parts. Extensive experimental results on 9 RGB-D datasets show that our DFM-Net outperforms recent efficient models, running at about 20 FPS on CPU with only 8.5MB model size, while being 2.9/2.4 times faster and 6.7/3.1 times smaller than the latest best models A2dele and MobileSal. It also maintains state-of-the-art accuracy when compared with non-efficient models. Interestingly, further statistics and analyses verify the ability of DQFM to filter depth maps of various qualities without any quality labels. Last but not least, we further apply DFM-Net to video SOD (VSOD), achieving performance comparable to recent efficient models while being 3/2.3 times faster/smaller than the prior best in this field. Our code is available at https://github.com/zwbx/dfm-net.
Despite recent advances in deep-learning-based semantic segmentation, automatic building detection in remotely sensed images remains a challenging problem due to the large variation of building appearance around the globe. Errors occur mostly at the boundaries of building footprints, in shadow areas, and when detecting buildings whose exterior surfaces have reflectivity properties very similar to those of the surrounding regions. To overcome these problems, we propose a generative adversarial network-based segmentation framework with an uncertainty attention unit and a refinement module embedded in the generator. The refinement module, composed of edge and reverse attention units, is designed to refine the predicted building map: the edge attention enhances boundary features to estimate building boundaries with greater precision, and the reverse attention allows the network to explore features missing from previously estimated regions. The uncertainty attention unit helps the network resolve uncertainties in classification. As a measure of the strength of our method, it held second place on the DeepGlobe public leaderboard as of December 4, 2021, even though the main focus of our approach, building edges, is not fully aligned with the metrics used for the leaderboard ranking. The overall F1 score on the challenging DeepGlobe dataset is 0.745. We also report the best results on the Inria validation dataset, where our network achieves 81.28% IoU and an overall accuracy of 97.03%. Along the same lines, on the official Inria test dataset, our network scores 77.86% and 96.41% in terms of IoU and accuracy, respectively.
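The abstract gives no equations for the reverse attention unit, but the common formulation, weighting features by the complement of the current prediction, can be sketched as below; treating it as this formulation is an assumption.

```python
import torch
import torch.nn.functional as F

def reverse_attention(features, coarse_pred):
    """Weight features by the complement of the current prediction so the
    network attends to regions it has not yet claimed as building.
    A sketch of the unit described in the abstract, not the exact layer."""
    attn = 1.0 - torch.sigmoid(
        F.interpolate(coarse_pred, size=features.shape[2:],
                      mode='bilinear', align_corners=False))
    return features * attn   # broadcast over the channel dimension
```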
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
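ASPP as described is concrete enough to sketch; the version below uses the dilation rates reported for the large-FOV setting but simplifies the per-branch heads and fuses the branches by summation, which is one of the variants the paper discusses.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel atrous convolutions with
    different dilation rates probe the same feature map at multiple
    effective fields-of-view, and their outputs are fused by summation."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
        super().__init__()
        # padding == dilation keeps the spatial size unchanged for 3x3 kernels.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3,
                      padding=r, dilation=r) for r in rates)

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)
```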
Camouflaged object detection (COD) aims to identify objects that conceal themselves in natural scenes. Accurate COD suffers from a number of challenges associated with low boundary contrast and large variations of object appearance, e.g., in object size and shape. To address these challenges, we propose a novel Context-aware Cross-level Fusion Network (C2F-Net), which fuses context-aware cross-level features to accurately identify camouflaged objects. Specifically, we compute informative attention coefficients from multi-level features with our Attention-induced Cross-level Fusion Module (ACFM), which further integrates the features under the guidance of those coefficients. We then propose a Dual-branch Global Context Module (DGCM) to refine the fused features into informative representations by exploiting rich global context information. Multiple ACFMs and DGCMs are integrated in a cascaded manner to generate a coarse prediction from high-level features. The coarse prediction acts as an attention map to refine the low-level features before they are passed to our Camouflage Inference Module (CIM) to generate the final prediction. We perform extensive experiments on three widely used benchmark datasets and compare C2F-Net with state-of-the-art (SOTA) models. The results show that C2F-Net is an effective COD model that outperforms SOTA models remarkably. Furthermore, evaluation on polyp segmentation datasets demonstrates the promising potential of C2F-Net in downstream COD applications. Our code is publicly available at: https://github.com/ben57882/c2fnet-tscvt.
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles which combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
Camouflaged object detection (COD) aims to detect objects whose patterns (e.g., texture, intensity, color) are similar to their surroundings, and has recently attracted growing research interest. Since camouflaged objects often present very ambiguous boundaries, determining object locations as well as their weak boundaries is challenging and is also the key to this task. Inspired by the biological visual perception process by which a human observer discovers camouflaged objects, this paper proposes a novel edge-based reversible re-calibration network called ERRNet. Our model is characterized by two innovative designs, namely Selective Edge Aggregation (SEA) and the Reversible Re-calibration Unit (RRU), which aim to model the visual perception behaviour and achieve effective edge priors and cross-comparison between potential camouflaged regions and the background. More importantly, the RRU incorporates more comprehensive information than existing COD models. Experimental results show that ERRNet outperforms existing cutting-edge baselines on three COD datasets and five medical image segmentation datasets. In particular, compared with the existing top-1 model SINet, ERRNet significantly improves performance by ~6% (mean E-measure) at a notably high speed (79.3 FPS), showing that ERRNet could be a general and robust solution for the COD task.
Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial amount of works aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarity, strengths and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.
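The pyramid pooling module is concrete enough to sketch from its description; the bin sizes below follow the paper, while batch-norm layers and the exact reduction ratio are simplified away.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pyramid pooling module: average-pool the feature map to several
    grid sizes, reduce each with a 1x1 conv, upsample, and concatenate
    with the input to form the global prior representation."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        out_ch = in_ch // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, out_ch, kernel_size=1))
            for b in bins)

    def forward(self, x):
        h, w = x.shape[2:]
        priors = [F.interpolate(stage(x), size=(h, w), mode='bilinear',
                                align_corners=False)
                  for stage in self.stages]
        return torch.cat([x] + priors, dim=1)   # input plus pooled priors
```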
Glass is very common in our daily lives, yet existing computer vision systems neglect it, which may have severe consequences, e.g., a robot may crash into a glass wall. Sensing the presence of glass, however, is not straightforward: the key challenge is that arbitrary objects/scenes can appear behind the glass. In this paper, we pose the important problem of detecting glass surfaces from a single RGB image. To address this problem, we construct the first large-scale glass detection dataset (GDD) and propose a novel glass detection network, called GDNet-B, which explores abundant contextual cues in a large field-of-view via a novel large-field contextual feature integration (LCFI) module, and integrates both high-level and low-level boundary features with a boundary feature enhancement (BFE) module. Extensive experiments demonstrate that our GDNet-B achieves satisfying glass detection results on images both within and beyond the GDD test set. We further validate the effectiveness and generalization capability of the proposed GDNet-B by applying it to other vision tasks, including mirror segmentation and salient object detection. Finally, we show potential applications of glass detection and discuss possible future research directions.
Most polyp segmentation methods use CNNs as their backbone, which leads to two key issues when exchanging information between the encoder and decoder: 1) accounting for the differences in contribution between different levels of features, and 2) designing an effective mechanism for fusing these features. Unlike existing CNN-based methods, we adopt a transformer encoder, which learns more powerful and robust representations. In addition, considering the influence of image acquisition and the elusive properties of polyps, we introduce three novel modules: a cascaded fusion module (CFM), a camouflage identification module (CIM), and a similarity aggregation module (SAM). Among these, the CFM collects the semantic and location information of polyps from high-level features, while the CIM captures polyp information disguised in low-level features. With the help of the SAM, we extend the pixel features of the polyp area, carrying high-level semantic and position information, to the entire polyp region, thereby effectively fusing cross-level features. The proposed model, named Polyp-PVT, effectively suppresses noise in the features and significantly improves their expressive power. Extensive experiments on five widely adopted datasets show that the proposed model is more robust to various challenging situations (e.g., appearance changes, small objects) than existing methods, and achieves new state-of-the-art performance. The proposed model is available at https://github.com/dengpingfan/polyp-pvt.
The acquisition and evaluation of pavement surface data play an essential role in pavement condition assessment. In this paper, an efficient end-to-end network for automatic pavement crack segmentation, called RHA-Net, is proposed to improve pavement crack segmentation accuracy. RHA-Net is built by integrating residual blocks (ResBlocks) and hybrid attention blocks into an encoder-decoder architecture. The ResBlocks are used to improve the ability of RHA-Net to extract high-level abstract features. The hybrid attention blocks are designed to fuse low-level and high-level features to help the model focus on the correct channels and crack regions, thereby improving the feature representation ability of RHA-Net. An image dataset containing 789 pavement crack images collected by a self-designed mobile robot is constructed and used to train and evaluate the proposed model. Compared with other state-of-the-art networks, the proposed model achieves better performance, and the benefit of adding residual blocks and the hybrid attention mechanism is validated in a comprehensive ablation study. Furthermore, a light-weight version of the model, generated by introducing depthwise separable convolutions, achieves better performance and a faster processing speed with only 1/30 of the number of parameters of U-Net. The developed system can segment pavement cracks in real time on an embedded device, the Jetson TX2 (25 FPS). A video of the real-time experiments is released at https://youtu.be/3xiogk0fig4.
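The depthwise separable convolution used to produce the light-weight variant is a standard factorization; a sketch follows, with the caveat that the abstract does not say which layers were replaced.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel spatial conv
    followed by a 1x1 pointwise conv, cutting parameters and FLOPs
    roughly by a factor of the kernel area."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```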