In modern detectors, a localization loss that regresses four variables independently, such as the smooth-$\ell_1$ loss, is used by default. However, this loss is suboptimal because it is inconsistent with the final evaluation metric, Intersection over Union (IoU). Directly adopting the standard IoU is not feasible either, because its constant-zero plateau for non-overlapping boxes and its non-zero gradient at the minimum may make it untrainable. We therefore propose a systematic method to address these problems. First, we propose a new metric, the extended IoU (EIoU), which is well-defined when two boxes do not overlap and reduces to the standard IoU when they do. Second, we introduce the convexification technique (CT) to construct a loss on top of EIoU, which guarantees that the gradient at the minimum is zero. Third, we propose a steady optimization technique (SOT) to make the fractional EIoU loss approach its minimum more steadily and smoothly. Fourth, to fully exploit the capability of the EIoU-based loss, we introduce an interrelated IoU-predicting head to further boost localization accuracy. With the proposed contributions, the new method incorporated into Faster R-CNN with ResNet50+FPN as the backbone yields up to a 4.2 mAP gain on VOC2007 and COCO2017 over the baseline smooth-$\ell_1$ loss, with almost no extra training or inference cost. Moreover, the stricter the metric, the more pronounced the gain, the improvements being especially notable on VOC2007 and COCO2017 under the $AP_{90}$ metric, e.g., 5.4 mAP on COCO2017.
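The plateau described above is easy to reproduce. The following PyTorch sketch (an illustration of the plain IoU loss, not of the paper's EIoU, CT, or SOT; the helper `iou_xyxy` is ours) shows that for two disjoint boxes the loss 1 - IoU sits at the constant value 1 and back-propagates an all-zero gradient, so the predicted box receives no signal telling it where to move.

```python
import torch

def iou_xyxy(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) tensors."""
    lt = torch.maximum(a[:2], b[:2])   # top-left corner of the intersection
    rb = torch.minimum(a[2:], b[2:])   # bottom-right corner of the intersection
    wh = (rb - lt).clamp(min=0)        # collapses to zero when the boxes are disjoint
    inter = wh[0] * wh[1]
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = torch.tensor([5.0, 5.0, 7.0, 7.0], requires_grad=True)
gt = torch.tensor([0.0, 0.0, 2.0, 2.0])   # no overlap with pred

loss = 1.0 - iou_xyxy(pred, gt)
loss.backward()
print(loss.item())   # 1.0, the constant plateau
print(pred.grad)     # tensor([0., 0., 0., 0.]): no signal to move the box
```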
Intersection over Union (IoU) is the most popular evaluation metric used in the object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that IoU can be directly used as a regression loss. However, IoU has a plateau making it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address the weaknesses of IoU by introducing a generalized version as both a new loss and a new metric. By incorporating this generalized IoU (GIoU) as a loss into the state-of-the-art object detection frameworks, we show a consistent improvement on their performance using both the standard, IoU-based, and new, GIoU-based, performance measures on popular object detection benchmarks such as PASCAL VOC and MS COCO.
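Concretely, GIoU augments IoU with a penalty based on the smallest enclosing box C: GIoU = IoU - |C \ (A ∪ B)| / |C|, and the loss is 1 - GIoU. Below is a minimal PyTorch sketch of that loss for a single box pair (our own illustration, not the authors' reference implementation); unlike plain IoU, the enclosing-box term keeps changing as a non-overlapping prediction moves toward the target, so the loss surface has no flat region.

```python
import torch

def giou_loss(a, b):
    """1 - GIoU for two boxes in (x1, y1, x2, y2) format."""
    lt = torch.maximum(a[:2], b[:2])
    rb = torch.minimum(a[2:], b[2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[0] * wh[1]
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union

    # smallest box C enclosing both: its "wasted" area penalizes far-apart boxes
    c_wh = torch.maximum(a[2:], b[2:]) - torch.minimum(a[:2], b[:2])
    c_area = c_wh[0] * c_wh[1]
    return 1.0 - (iou - (c_area - union) / c_area)

print(giou_loss(torch.tensor([5.0, 5.0, 7.0, 7.0]),
                torch.tensor([0.0, 0.0, 2.0, 2.0])))   # > 1 for disjoint boxes
```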
In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its quality. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of a detector as training set for the next. This resampling progressively improves hypotheses quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells or whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN. To facilitate future research, two implementations are made available at https://github.com/zhaoweicai/cascade-rcnn (Caffe) and https://github.com/zhaoweicai/Detectron-Cascade-RCNN (Detectron).
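The "vanishing positive samples" effect that motivates the cascade is easy to see with a fixed proposal pool: raising the foreground IoU threshold sharply shrinks the positive set. The toy sketch below (random IoUs, not the paper's sampling code) counts positives per hypothetical stage threshold; in Cascade R-CNN, resampling with the previous stage's refined boxes is what keeps the positive set at a usable size for each higher threshold.

```python
import numpy as np

def num_positives(iou_matrix, fg_thresh):
    """A proposal counts as positive for a stage iff its best IoU with any
    ground-truth box reaches that stage's foreground threshold."""
    return int((iou_matrix.max(axis=1) >= fg_thresh).sum())

rng = np.random.default_rng(0)
ious = rng.beta(2.0, 5.0, size=(2000, 8))   # toy proposal-vs-ground-truth IoU matrix

for stage, t in enumerate((0.5, 0.6, 0.7), start=1):   # Cascade R-CNN style thresholds
    print(f"stage {stage} (IoU >= {t}): {num_positives(ious, t)} positives")
```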
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogous to semantic segmentation. Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor-box free, as well as proposal free. By eliminating the pre-defined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes, such as calculating overlap during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With non-maximum suppression (NMS) as the only post-processing, FCOS with ResNeXt-64x4d-101 achieves 44.7% AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and more flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: tinyurl.com/FCOSv1
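In this per-pixel formulation, every feature-map location falling inside a ground-truth box regresses its distances to the four box sides; the FCOS paper additionally predicts a center-ness score that down-weights locations far from the object center. A small sketch of those targets for one image-space location (helper name ours):

```python
import math

def fcos_targets(px, py, box):
    """FCOS-style regression target (l, t, r, b) and center-ness for a location
    (px, py) lying inside a ground-truth box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    l, t, r, b = px - x0, py - y0, x1 - px, y1 - py
    centerness = math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))
    return (l, t, r, b), centerness

print(fcos_targets(30.0, 40.0, (10.0, 10.0, 90.0, 110.0)))
# ((20.0, 30.0, 60.0, 70.0), ~0.38): off-center locations get a low center-ness
```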
We present ObjectBox, a novel single-stage anchor-free and highly generalizable object detection approach. In contrast to existing anchor-based and anchor-free detectors, which are more biased toward specific object scales in their label assignment, we use only object center locations as positive samples and treat all objects equally across different feature levels, regardless of the objects' size or shape. Specifically, our label assignment strategy regards object center locations as shape- and size-agnostic anchors in an anchor-free fashion and allows learning at all scales for every object. To support this, we define the new regression targets as the distances from the two corners of the center cell location to the four sides of the bounding box. Moreover, to handle scale-variant objects, we propose a tailored loss to deal with boxes of different sizes. As a result, our proposed object detector does not require tuning any dataset-dependent hyperparameters. We evaluate our method on the MS-COCO 2017 and PASCAL VOC 2012 datasets and compare our results with state-of-the-art methods, observing that ObjectBox performs favorably against prior works. We also perform rigorous ablation experiments to evaluate the different components of our method. Our code is available at: https://github.com/mohsenzand/objectbox.
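One possible reading of the regression targets described above (distances from the two corners of the grid cell containing the object center to the four sides of the box, measured in stride units) is sketched below; the exact formulation and normalization in the released ObjectBox code may differ.

```python
import math

def center_cell_targets(box, stride):
    """Sketch of center-cell-based targets: distances from the two corners of the
    grid cell containing the box center to the four box sides, in stride units.
    This is an interpretation of the abstract, not the official ObjectBox code."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    cell_x, cell_y = math.floor(cx / stride), math.floor(cy / stride)
    tl = (cell_x * stride, cell_y * stride)                 # top-left corner of the cell
    br = ((cell_x + 1) * stride, (cell_y + 1) * stride)     # bottom-right corner of the cell
    return ((tl[0] - x0) / stride, (tl[1] - y0) / stride,   # cell corner to left/top sides
            (x1 - br[0]) / stride, (y1 - br[1]) / stride)   # cell corner to right/bottom sides

print(center_cell_targets((12.0, 20.0, 92.0, 76.0), stride=8))
```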
Anchor-free detectors basically formulate object detection as dense classification and regression. For popular anchor-free detectors, a separate prediction branch is usually introduced to estimate localization quality. When we dig into the practice of classification and quality estimation, the following inconsistencies are observed. First, for some adjacent samples that are assigned completely different labels, the trained model produces similar classification scores, which violates the training objective and degrades performance. Second, bounding boxes detected with higher confidence are found to have smaller overlaps with the corresponding ground truth, so accurately localized boxes can be suppressed by less accurate ones during non-maximum suppression (NMS). To address these inconsistencies, a dynamic smooth label assignment (DSLA) method is proposed. Based on the centerness concept originally developed in FCOS, a smooth assignment strategy is proposed: labels are smoothed to continuous values in [0, 1] to make a steady transition between positive and negative samples. The Intersection over Union (IoU) is predicted dynamically during training and combined with the smoothed label, and the resulting dynamic smooth label is assigned to supervise the classification branch. Under such supervision, the quality estimation branch is naturally merged into the classification branch, which simplifies the architecture of the anchor-free detector. Comprehensive experiments are conducted on the MS COCO benchmark, demonstrating that DSLA can significantly boost detection accuracy by alleviating the above inconsistencies for anchor-free detectors. Our code is released at https://github.com/yonghaohe/dsla.
Bounding box regression is the crucial step in object detection. In existing methods, while the $\ell_n$-norm loss is widely adopted for bounding box regression, it is not tailored to the evaluation metric, i.e., Intersection over Union (IoU). Recently, IoU loss and generalized IoU (GIoU) loss have been proposed to benefit the IoU metric, but still suffer from the problems of slow convergence and inaccurate regression. In this paper, we propose a Distance-IoU (DIoU) loss by incorporating the normalized distance between the predicted box and the target box, which converges much faster in training than IoU and GIoU losses. Furthermore, this paper summarizes three geometric factors in bounding box regression, i.e., overlap area, central point distance and aspect ratio, based on which a Complete IoU (CIoU) loss is proposed, thereby leading to faster convergence and better performance. By incorporating DIoU and CIoU losses into state-of-the-art object detection algorithms, e.g., YOLO v3, SSD and Faster R-CNN, we achieve notable performance gains in terms of not only IoU metric but also GIoU metric. Moreover, DIoU can be easily adopted into non-maximum suppression (NMS) to act as the criterion, further boosting performance improvement. The source code and trained models are available at https://github.com/Zzh-tju/DIoU.
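For reference, DIoU adds a normalized center-distance penalty to the IoU term, $L_{DIoU} = 1 - IoU + \rho^2(b, b^{gt})/c^2$, where $\rho$ is the distance between the box centers and $c$ is the diagonal length of the smallest enclosing box; CIoU adds a further aspect-ratio consistency term on top. A minimal sketch of the DIoU part (helper name ours):

```python
import torch

def diou_loss(a, b):
    """1 - DIoU for two boxes in (x1, y1, x2, y2) format: the IoU term plus the
    squared center distance normalized by the squared enclosing-box diagonal."""
    lt, rb = torch.maximum(a[:2], b[:2]), torch.minimum(a[2:], b[2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[0] * wh[1]
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)

    rho2 = (((a[:2] + a[2:]) / 2 - (b[:2] + b[2:]) / 2) ** 2).sum()   # squared center distance
    c_wh = torch.maximum(a[2:], b[2:]) - torch.minimum(a[:2], b[:2])
    c2 = (c_wh ** 2).sum()                                            # squared enclosing diagonal
    return 1.0 - iou + rho2 / c2

print(diou_loss(torch.tensor([5.0, 5.0, 7.0, 7.0]),
                torch.tensor([0.0, 0.0, 2.0, 2.0])))   # penalty grows with center distance
```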
Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In this paper, we propose IoU-Net, which learns to predict the IoU between each detected bounding box and the matched ground truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors.
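The NMS change amounts to swapping the ranking key: candidates are ranked and kept by the predicted localization confidence (the predicted IoU with the ground truth) rather than by the classification score. The sketch below is a simplified greedy version that omits the score-aggregation step of the paper's IoU-guided NMS; helper names are ours.

```python
import numpy as np

def iou_matrix(boxes_a, boxes_b):
    """Pairwise IoU for (N, 4) and (M, 4) arrays of (x1, y1, x2, y2) boxes."""
    lt = np.maximum(boxes_a[:, None, :2], boxes_b[None, :, :2])
    rb = np.minimum(boxes_a[:, None, 2:], boxes_b[None, :, 2:])
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = np.prod(boxes_a[:, 2:] - boxes_a[:, :2], axis=1)
    area_b = np.prod(boxes_b[:, 2:] - boxes_b[:, :2], axis=1)
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def nms_by_localization_confidence(boxes, pred_iou, thresh=0.5):
    """Greedy NMS ranked by predicted IoU (localization confidence) instead of
    the classification score: a simplified sketch of the idea."""
    order = np.argsort(-pred_iou)
    keep = []
    while order.size:
        i = int(order[0])
        keep.append(i)
        ious = iou_matrix(boxes[i:i + 1], boxes[order[1:]])[0]
        order = order[1:][ious < thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
pred_iou = np.array([0.6, 0.9, 0.8])   # the better-localized duplicate wins
print(nms_by_localization_confidence(boxes, pred_iou))   # [1, 2]
```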
In object detection, bounding box regression (BBR) is a crucial step that determines object localization performance. However, we find that most previous loss functions for BBR have two main drawbacks: (i) both the $\ell_n$-norm and IoU-based loss functions are inefficient at describing the objective of BBR, which leads to slow convergence and inaccurate regression results; (ii) most loss functions ignore the imbalance problem in BBR, namely that the large number of anchor boxes with small overlaps with the target box contribute most to the optimization of BBR. To mitigate these adverse effects, we conduct a thorough study to exploit the potential of BBR losses in this paper. First, an efficient Intersection over Union (EIOU) loss is proposed, which explicitly measures the discrepancies of three geometric factors in BBR, i.e., the overlap area, the central point, and the side length. After that, we formulate the effective example mining (EEM) problem and propose a regression version of focal loss to make the regression process focus on high-quality anchor boxes. Finally, the above two parts are combined to obtain a new loss function, namely the Focal-EIOU loss. Extensive experiments on both synthetic and real datasets are performed, showing notable superiority in both convergence speed and localization accuracy compared with other BBR losses.
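Following the description above, an EIoU-style loss combines an IoU term with penalties on the center distance and on the width/height differences (each normalized by the enclosing box), and the focal variant re-weights the whole loss by a power of the IoU so that high-quality, well-overlapping anchors dominate the gradient. The sketch below follows that description; the exact normalization constants and the value of gamma are assumptions, not the paper's released settings.

```python
import torch

def focal_eiou_loss(a, b, gamma=0.5):
    """Sketch of an EIoU-style loss: IoU term + normalized center-distance term
    + normalized width/height terms, re-weighted by IoU**gamma (the focal part).
    Normalizations and gamma follow common descriptions and are assumptions."""
    lt, rb = torch.maximum(a[:2], b[:2]), torch.minimum(a[2:], b[2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[0] * wh[1]
    wa, ha = a[2] - a[0], a[3] - a[1]
    wb, hb = b[2] - b[0], b[3] - b[1]
    iou = inter / (wa * ha + wb * hb - inter)

    c_wh = torch.maximum(a[2:], b[2:]) - torch.minimum(a[:2], b[:2])  # enclosing box
    center_dist2 = (((a[:2] + a[2:]) - (b[:2] + b[2:])) ** 2).sum() / 4
    eiou = (1 - iou
            + center_dist2 / (c_wh ** 2).sum()   # central-point discrepancy
            + (wa - wb) ** 2 / c_wh[0] ** 2      # side-length (width) discrepancy
            + (ha - hb) ** 2 / c_wh[1] ** 2)     # side-length (height) discrepancy
    return iou.detach() ** gamma * eiou          # focus the loss on high-IoU anchors

print(focal_eiou_loss(torch.tensor([0.0, 0.0, 4.0, 4.0]),
                      torch.tensor([1.0, 1.0, 5.0, 5.0])))
```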
In this paper, we introduce an anchor-box free and single shot instance segmentation method, which is conceptually simple, fully convolutional and can be used by easily embedding it into most off-the-shelf detection methods. Our method, termed PolarMask, formulates the instance segmentation problem as predicting contour of instance through instance center classification and dense distance regression in a polar coordinate. Moreover, we propose two effective approaches to deal with sampling high-quality center examples and optimization for dense distance regression, respectively, which can significantly improve the performance and simplify the training process. Without any bells and whistles, PolarMask achieves 32.9% in mask mAP with single-model and single-scale training/testing on the challenging COCO dataset. For the first time, we show that the complexity of instance segmentation, in terms of both design and computation complexity, can be the same as bounding box object detection and this much simpler and more flexible instance segmentation framework can achieve competitive accuracy. We hope that the proposed PolarMask framework can serve as a fundamental and strong baseline for the single shot instance segmentation task. Code is available at: github.com/xieenze/PolarMask.
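In the polar formulation a mask is encoded as the instance center plus a fixed number of ray lengths at evenly spaced angles, so a predicted distance vector decodes directly into a polygon. A small sketch of that decoding (representation only, not the PolarMask heads or its loss):

```python
import numpy as np

def rays_to_polygon(center, distances):
    """Decode a polar-mask-style prediction: n ray lengths at evenly spaced angles
    around the instance center -> (n, 2) polygon vertices."""
    n = len(distances)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = center[0] + distances * np.cos(angles)
    ys = center[1] + distances * np.sin(angles)
    return np.stack([xs, ys], axis=1)

poly = rays_to_polygon(center=(64.0, 64.0), distances=np.full(36, 20.0))
print(poly.shape)   # (36, 2): a 36-ray contour, here a circle of radius 20
```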
In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code will be made available at https://github.com/zhaoweicai/cascade-rcnn.
Modern leading object detectors are either two-stage or one-stage networks repurposed from a deep CNN backbone classifier network. YOLOv3 is one such very well known state-of-the-art single-shot detector: it takes an input image and divides it into an equal-sized grid matrix, and the grid cell containing an object's center is the cell responsible for detecting that particular object. This paper presents a new mathematical approach that assigns multiple grid cells per object for accurate tight bounding box prediction. We also propose an effective offline copy-paste data augmentation for object detection. Our proposed method significantly outperforms some existing object detectors, with the prospect of further performance gains.
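A bare-bones sketch of offline copy-paste augmentation for detection: a crop of an object (and its box) from a source image is pasted into a destination image at a new location and the pasted box is appended to the annotations. Blending, scale jitter, and occlusion handling are omitted, and the function below is our illustration rather than the paper's pipeline.

```python
import numpy as np

def copy_paste(dst_img, dst_boxes, src_img, src_box, paste_xy):
    """Paste the crop defined by src_box from src_img into dst_img at paste_xy
    and append the corresponding box: a minimal sketch of the idea."""
    x0, y0, x1, y1 = map(int, src_box)
    patch = src_img[y0:y1, x0:x1]
    px, py = paste_xy
    h, w = patch.shape[:2]
    out = dst_img.copy()
    out[py:py + h, px:px + w] = patch   # no blending or boundary clipping here
    new_boxes = np.vstack([dst_boxes, [px, py, px + w, py + h]])
    return out, new_boxes

dst = np.zeros((128, 128, 3), dtype=np.uint8)
src = np.full((128, 128, 3), 255, dtype=np.uint8)
img, boxes = copy_paste(dst, np.zeros((0, 4)), src, (10, 10, 42, 42), (60, 60))
print(boxes)   # [[60. 60. 92. 92.]]
```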
Object detection has achieved tremendous progress in computer vision. Detecting small objects with degraded appearance is a prominent challenge, especially for aerial observations. To collect sufficient positive/negative samples for heuristic training, most object detectors preset region anchors so that Intersection over Union (IoU) can be computed against the ground-truth labeled data. In this case, small objects are frequently abandoned or mislabeled. In this paper, we present an effective dynamic enhancement anchor (DEA) network to construct a novel training sample generator. Unlike other state-of-the-art techniques, the proposed network exploits a sample discriminator to realize interactive sample screening between an anchor-based unit and an anchor-free unit, so as to generate eligible samples. In addition, multi-task joint training with a conservative anchor-based inference scheme enhances the performance of the proposed model while reducing computational complexity. The proposed scheme supports both oriented and horizontal object detection tasks. Extensive experiments on two challenging aerial benchmarks (i.e., DOTA and HRSC2016) indicate that our method achieves state-of-the-art accuracy with moderate inference speed and training overhead. On DOTA, our DEA-Net integrated with the baseline of RoI Transformer surpasses the advanced method by 0.40% mean average precision (mAP) for oriented object detection with a weaker backbone network (ResNet-101 vs. ResNet-152), and by 3.08% mAP for horizontal object detection with the same backbone network. Moreover, our DEA-Net integrated with the rearranged baseline achieves state-of-the-art performance of 80.37%. On HRSC2016, it surpasses the previous best model by 1.1% using only 3 horizontal anchors.
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles which combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
Object detection is a typical multi-task learning application that simultaneously optimizes classification and regression. However, the classification loss always dominates the multi-task loss in anchor-based methods, hampering consistent and balanced optimization of the tasks. In this paper, we find that shifting the bounding boxes can change the division of positive and negative samples in classification, meaning that classification depends on regression. Moreover, considering different datasets, optimizers, and regression loss functions, we summarize three important conclusions about fine-tuning the loss weights. Based on these conclusions, we propose adaptive loss weight adjustment (ALWA), which addresses the imbalance in optimizing anchor-based methods according to the statistical characteristics of the losses. By incorporating ALWA into previous state-of-the-art detectors, we achieve significant performance gains on PASCAL VOC and MS COCO, even with L1, SmoothL1, and CIoU losses. The code is available at https://github.com/ywx-hub/alwa.
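As a generic illustration of statistics-driven loss re-weighting (this is not the ALWA update rule from the paper, whose exact form is not reproduced here), one can track the running magnitudes of the classification and regression losses and scale the regression weight so the two terms contribute comparably:

```python
class AdaptiveLossWeight:
    """Illustrative statistics-driven re-weighting of a multi-task detection loss:
    the regression weight is scaled so its running magnitude matches that of the
    classification loss. A generic sketch, NOT the ALWA rule from the paper."""
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.avg_cls = None
        self.avg_reg = None

    def __call__(self, loss_cls, loss_reg):
        m = self.momentum
        self.avg_cls = loss_cls if self.avg_cls is None else m * self.avg_cls + (1 - m) * loss_cls
        self.avg_reg = loss_reg if self.avg_reg is None else m * self.avg_reg + (1 - m) * loss_reg
        w_reg = self.avg_cls / max(self.avg_reg, 1e-8)   # balance the running magnitudes
        return loss_cls + w_reg * loss_reg

balancer = AdaptiveLossWeight()
print(balancer(1.2, 0.3))   # the regression term is up-weighted toward the classification scale
```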
Because Intersection over Union (IoU)-based optimization maintains consistency between the final IoU prediction metric and the loss, it has been widely used in both the regression and classification branches of single-stage 2D object detectors. Recently, several 3D object detection methods have adopted IoU-based optimization and directly replaced the 2D IoU with the 3D IoU. However, such a direct computation in 3D is very costly due to the complex implementation and inefficient backward operations. Moreover, 3D IoU-based optimization is sub-optimal because it is sensitive to rotation and can therefore cause training instability and deteriorated detection performance. In this paper, we propose a novel rotation-decoupled IoU (RDIoU) method that can mitigate the rotation-sensitivity issue and produce a more efficient optimization objective than the 3D IoU during the training stage. Specifically, our RDIoU simplifies the complex interaction of regression parameters by decoupling the rotation variable as an independent term, while preserving the geometry of the 3D IoU. By incorporating RDIoU into both the regression and classification branches, the network is encouraged to learn more precise bounding boxes and simultaneously overcome the misalignment problem between classification and regression. Extensive experiments on the benchmark KITTI and Waymo Open datasets validate that our RDIoU method can bring substantial improvements to single-stage 3D object detection.
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with "attention" mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
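Structurally, an RPN head is small: a shared 3x3 convolution over the backbone feature map followed by two sibling 1x1 convolutions that output, at every spatial position, per-anchor objectness scores and 4 box-regression deltas per anchor. The sketch below assumes k = 9 anchors per location and a single objectness logit per anchor (the original paper formulates objectness as a two-way softmax, i.e., 2k scores):

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Sketch of a Region Proposal Network head: a shared 3x3 conv, then sibling
    1x1 convs for per-anchor objectness logits and box-regression deltas."""
    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, 3, padding=1)
        self.objectness = nn.Conv2d(in_channels, num_anchors, 1)       # 1 logit per anchor
        self.bbox_deltas = nn.Conv2d(in_channels, num_anchors * 4, 1)  # 4 offsets per anchor

    def forward(self, feat):
        x = torch.relu(self.conv(feat))
        return self.objectness(x), self.bbox_deltas(x)

scores, deltas = RPNHead()(torch.randn(1, 256, 50, 50))
print(scores.shape, deltas.shape)   # (1, 9, 50, 50) and (1, 36, 50, 50)
```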
Detecting tiny objects is a very challenging problem since a tiny object contains only a few pixels. We demonstrate that state-of-the-art detectors do not produce satisfactory results on tiny objects due to the lack of appearance information. Our key observation is that Intersection over Union (IoU)-based metrics, such as IoU itself and its extensions, are very sensitive to location deviations of tiny objects, which drastically deteriorates detection performance when they are used in anchor-based detectors. To alleviate this, we propose a new evaluation metric using the Wasserstein distance for tiny object detection. Specifically, we first model the bounding boxes as 2D Gaussian distributions and then propose a new metric, dubbed normalized Wasserstein distance (NWD), to compute the similarity between them through their corresponding Gaussian distributions. The proposed NWD metric can be easily embedded into the assignment, non-maximum suppression, and loss function of any anchor-based detector to replace the commonly used IoU metric. We evaluate our metric on a new dataset for tiny object detection (AI-TOD), in which the average object size is much smaller than in existing object detection datasets. Extensive experiments show that, when equipped with the NWD metric, our approach yields performance 6.7 AP points higher than a standard fine-tuning baseline and 6.0 AP points higher than state-of-the-art competitors. Code is available at: https://github.com/jwwangchn/nwd.
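In this formulation each box (cx, cy, w, h) is modeled as a 2D Gaussian with mean (cx, cy) and covariance diag(w^2/4, h^2/4); for such Gaussians the 2-Wasserstein distance has a closed form, and NWD exponentiates its negative value scaled by a constant C that depends on the dataset. A sketch (the value of C below is a placeholder, not the paper's setting):

```python
import numpy as np

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein distance between two boxes in (cx, cy, w, h) format.
    Each box is viewed as a Gaussian N((cx, cy), diag(w^2/4, h^2/4)); C is a
    dataset-dependent constant (the default here is only a placeholder)."""
    pa = np.array([box_a[0], box_a[1], box_a[2] / 2.0, box_a[3] / 2.0])
    pb = np.array([box_b[0], box_b[1], box_b[2] / 2.0, box_b[3] / 2.0])
    w2 = np.sqrt(((pa - pb) ** 2).sum())   # 2-Wasserstein distance between the Gaussians
    return np.exp(-w2 / c)

# Two tiny boxes shifted by 4 px: their IoU is already 0, but NWD still varies smoothly.
print(nwd((10, 10, 4, 4), (14, 10, 4, 4)))
```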
In object detection, keypoint-based approaches often suffer from a large number of incorrect object bounding boxes, arguably due to the lack of an additional look into the cropped regions. This paper presents an efficient solution which explores the visual patterns within each cropped region with minimal costs. We build our framework upon a representative one-stage keypoint-based detector named CornerNet. Our approach, named CenterNet, detects each object as a triplet, rather than a pair, of keypoints, which improves both precision and recall. Accordingly, we design two customized modules named cascade corner pooling and center pooling, which play the roles of enriching information collected by both top-left and bottom-right corners and providing more recognizable information at the central regions, respectively. On the MS-COCO dataset, CenterNet achieves an AP of 47.0%, which outperforms all existing one-stage detectors by at least 4.9%. Meanwhile, with a faster inference speed, CenterNet demonstrates quite comparable performance to the top-ranked two-stage detectors. Code is available at https://github.com/Duankaiwen/CenterNet.
In this study, we dive deep into the unique challenges faced by semi-supervised object detection (SSOD). We observe that current detectors generally suffer from three inconsistency problems: 1) assignment inconsistency, where the conventional assignment strategy is sensitive to labeling noise; 2) subtask inconsistency, where classification and regression predictions are misaligned at the same feature point; and 3) temporal inconsistency, where pseudo boxes vary dramatically across training steps. These issues lead to inconsistent optimization objectives for the student network, which deteriorates performance and slows model convergence. We therefore propose a systematic solution, termed Consistent Teacher, to remedy the above challenges. First, adaptive anchor assignment replaces the static assignment strategy, making the student network resistant to noisy pseudo boxes. Then, we calibrate the subtask predictions by designing a feature alignment module. Finally, we adopt a Gaussian mixture model (GMM) to dynamically adjust the pseudo-box threshold. Consistent Teacher provides a new strong baseline on a wide range of SSOD evaluations. With only 10% of annotated MS-COCO data, it achieves 40.0 mAP with a ResNet-50 backbone using only pseudo labels, surpassing prior methods by 4 mAP. When trained on fully annotated MS-COCO with additional unlabeled data, the performance further increases to 49.1 mAP. Our code will be open-sourced soon.
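The dynamic-threshold idea can be illustrated with a two-component Gaussian mixture over the teacher's pseudo-box confidence scores: one component captures noisy candidates, the other reliable ones, and the threshold is placed where the reliable component takes over. This is an illustrative sketch using scikit-learn, not the released Consistent Teacher code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_threshold(scores):
    """Fit a 2-component GMM to pseudo-box confidence scores and return a threshold:
    the lowest score whose most likely component is the higher-mean ('reliable') one."""
    scores = np.asarray(scores).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
    reliable = int(np.argmax(gmm.means_.ravel()))   # component with the higher mean
    assign = gmm.predict(scores)
    return float(scores[assign == reliable].min())

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.25, 0.08, 500),   # noisy pseudo boxes
                         rng.normal(0.75, 0.08, 200)])  # reliable pseudo boxes
print(gmm_threshold(np.clip(scores, 0, 1)))
```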