The detection of the human body and its related parts (e.g., face, head, or hands) has been intensively studied and greatly improved since the breakthrough of deep CNNs. However, most of these detectors are trained independently, making it a challenging task to associate detected body parts with people. This paper focuses on the problem of joint detection of the human body and its corresponding parts. Specifically, we propose a novel extended object representation that integrates the center location offsets of the body or its parts, and construct a dense single-stage anchor-based Body-Part Joint Detector (BPJDet). Body-part associations in BPJDet are embedded into the unified representation, which contains both semantic and geometric information. Therefore, BPJDet does not suffer from error-prone association post-matching and achieves a better accuracy-speed trade-off. Furthermore, BPJDet can be seamlessly generalized to jointly detect any body part. To verify the effectiveness and superiority of our method, we conduct extensive experiments on the CityPersons, CrowdHuman, and BodyHands datasets. The proposed BPJDet detector achieves state-of-the-art association performance on these three benchmarks while maintaining high detection accuracy. Code will be released to facilitate further studies.
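As a rough illustration of the extended representation described above, here is a minimal NumPy sketch in which each prediction vector carries box geometry, objectness, and per-part center offsets. The channel layout, scaling, and decoding are assumptions for illustration, not BPJDet's exact design:

```python
import numpy as np

def decode_body_part_prediction(vec, num_parts=1):
    """Split one hypothetical prediction vector into a body box, an
    objectness score, and per-part center offsets (normalized coords)."""
    cx, cy, w, h, obj = vec[:5]
    offsets = vec[5:5 + 2 * num_parts].reshape(num_parts, 2)
    body_box = np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    # Part centers are predicted as offsets from the body center, so the
    # body-part association is carried by the representation itself,
    # avoiding a separate error-prone post-matching step.
    part_centers = np.array([cx, cy]) + offsets
    return body_box, obj, part_centers
```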
Object detection generally requires sliding-window classifiers in traditional methods or anchor-box-based predictions in modern deep learning approaches. However, either of these approaches requires tedious configuration of the boxes. In this paper, we provide a new perspective in which detecting objects is motivated as a high-level semantic feature detection task. Like edge, corner, blob, and other feature detectors, the proposed detector scans for feature points over the whole image, for which convolution is naturally suited. However, unlike these traditional low-level features, the proposed detector goes for a higher-level abstraction: we look for central points where there are objects, and modern deep models are already capable of such high-level semantic abstraction. In addition, as in blob detection, we also predict the scales of the central points, which is again a straightforward convolution. Therefore, in this paper, pedestrian and face detection is simplified into a straightforward center and scale prediction task through convolutions. In this way, the proposed method enjoys a box-free setting. Though structurally simple, it presents competitive accuracy on several challenging benchmarks, including pedestrian detection and face detection. Furthermore, a cross-dataset evaluation is performed, demonstrating the superior generalization ability of the proposed method. Code and models can be accessed at https://github.com/liuwei16/CSP and https://github.com/hasanirtiza/Pedestron.
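A minimal PyTorch sketch of the center-and-scale idea follows; the head widths, the fixed pedestrian aspect ratio, and the peak-thresholding decode are assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class CenterScaleHead(nn.Module):
    """Sketch of a CSP-style head: one convolution scans for object centers,
    another predicts a log-scale (height) at every feature location."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.center = nn.Conv2d(in_ch, 1, kernel_size=1)  # center heatmap logits
        self.scale = nn.Conv2d(in_ch, 1, kernel_size=1)   # log(height) per pixel

    def forward(self, feat):
        return torch.sigmoid(self.center(feat)), self.scale(feat)

def decode(center_map, scale_map, stride=4, thresh=0.5, aspect=0.41):
    """Turn heatmap peaks into boxes; width = aspect * height, as is common
    for pedestrian boxes (the real decoding also applies NMS)."""
    ys, xs = torch.nonzero(center_map[0, 0] > thresh, as_tuple=True)
    boxes = []
    for y, x in zip(ys.tolist(), xs.tolist()):
        h = torch.exp(scale_map[0, 0, y, x]).item() * stride
        w = aspect * h
        cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
        boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return boxes
```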
In keypoint estimation tasks such as human pose estimation, heatmap-based regression is the dominant approach despite notable drawbacks: heatmaps intrinsically suffer from quantization error and require excessive computation to generate and post-process. Motivated to find a more efficient solution, we propose a new heatmap-free keypoint estimation method in which individual keypoints and sets of spatially related keypoints (i.e., poses) are modeled as objects within a dense single-stage anchor-based detection framework. Hence, we call our method KAPAO (pronounced "Ka-Pow!"), for Keypoints And Poses As Objects. We apply KAPAO to the problem of single-stage multi-person human pose estimation by simultaneously detecting human pose objects and keypoint objects and fusing the detections to exploit the strengths of both object representations. In experiments, we observe that KAPAO is significantly faster and more accurate than previous methods, which suffer greatly from heatmap post-processing. Moreover, the accuracy-speed trade-off is especially favorable when test-time augmentation is not used. Our largest model, KAPAO-L, achieves an AP of 70.6 on the Microsoft COCO Keypoints validation set without test-time augmentation, 4.0 AP more accurate than the next best single-stage model. Furthermore, KAPAO excels in the presence of heavy occlusion. On the CrowdPose test set, KAPAO-L achieves new state-of-the-art accuracy for a single-stage method with an AP of 68.9.
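The fusion of pose objects and keypoint objects can be pictured with a short sketch; the matching radius and the replace-if-more-confident rule below are hypothetical simplifications of the fusion step, not KAPAO's exact procedure:

```python
import numpy as np

def fuse_pose_and_keypoint_objects(pose_kpts, kp_objs, tau=10.0):
    """pose_kpts: (K, 3) array of (x, y, conf) from one detected pose object.
    kp_objs: list of (k_id, x, y, conf) individual keypoint-object detections.
    Replace a pose keypoint with a same-type keypoint object when it lies
    within tau pixels and is more confident; keypoint objects localize
    precisely, while pose objects provide the grouping."""
    fused = pose_kpts.copy()
    for k_id, x, y, conf in kp_objs:
        px, py, pconf = fused[k_id]
        if np.hypot(x - px, y - py) < tau and conf > pconf:
            fused[k_id] = (x, y, conf)
    return fused
```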
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation, including the face, body, hands, and feet, is essential beyond conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in realtime. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum-Suppression (P-NMS) for eliminating redundant human detections, and Pose Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to a Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve accuracy. Our method is able to localize whole-body keypoints accurately and track humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source codes and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
Among current anchor-based detectors, a positive anchor box is intuitively assigned to the object that overlaps it the most. The label assigned to each anchor directly determines the optimization direction of the corresponding prediction box, including the direction of box regression and category prediction. In our practice of crowded object detection, however, the results show that a positive anchor does not always regress toward the object that overlaps it the most when multiple objects overlap. We name this phenomenon anchor drift. Anchor drift reflects that the anchor-object matching relation, which is determined by the degree of overlap between anchors and objects, is not always optimal. Conflicts between the fixed matching relation and the experience learned over the past training process may cause ambiguous predictions and thus raise the false-positive rate. In this paper, a simple but efficient adaptive two-stage anchor assignment (TSAA) method is proposed. It utilizes the final prediction boxes rather than the fixed anchors to calculate the degree of overlap with objects and determine which object each anchor should regress toward. The participation of the prediction box makes the anchor-object assignment mechanism adaptive. Extensive experiments are conducted with three classic detectors, RetinaNet, Faster R-CNN, and YOLOv3, on CrowdHuman and COCO to evaluate the effectiveness of TSAA. The results show that TSAA can significantly improve the detectors' performance without additional computational costs or network structure changes.
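A condensed sketch of such an adaptive assignment follows; the warmup flag, the threshold, and the argmax matching are illustrative choices rather than TSAA's exact schedule:

```python
import numpy as np

def iou_matrix(a, b):
    """Pairwise IoU between (N, 4) and (M, 4) xyxy boxes."""
    tl = np.maximum(a[:, None, :2], b[None, :, :2])
    br = np.minimum(a[:, None, 2:], b[None, :, 2:])
    inter = np.clip(br - tl, 0, None).prod(-1)
    area_a = (a[:, 2:] - a[:, :2]).prod(-1)
    area_b = (b[:, 2:] - b[:, :2]).prod(-1)
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def assign_targets(anchors, preds, gts, warmup=False, pos_thresh=0.5):
    """Hypothetical two-stage assignment: early in training ('warmup'),
    match by anchor-GT IoU as usual; later, match each anchor to the GT
    that its *current prediction* overlaps most, so the assignment adapts
    to what the network has actually learned."""
    ref = anchors if warmup else preds
    ious = iou_matrix(ref, gts)
    matched_gt = ious.argmax(axis=1)
    positive = ious.max(axis=1) > pos_thresh
    return matched_gt, positive
```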
We present ObjectBox, a novel single-stage anchor-free and highly generalizable object detection approach. In contrast to both existing anchor-based and anchor-free detectors, which are more biased toward specific object scales in their label assignments, we use only object center locations as positive samples and treat all objects equally across different feature levels regardless of the objects' sizes or shapes. Specifically, our label assignment strategy considers object center locations as shape- and size-agnostic anchors in an anchor-free fashion, and allows learning to occur at all scales for every object. To support this, we define new regression targets as the distances from the two corners of the center cell location to the four sides of the bounding box. Moreover, to handle scale-variant objects, we propose a tailored loss for boxes of different sizes. As a result, our proposed object detector does not require any dataset-dependent hyperparameters to be tuned. We evaluate our method on the MS-COCO 2017 and PASCAL VOC 2012 datasets and compare our results with state-of-the-art methods. We observe that ObjectBox performs favorably compared to prior works. Furthermore, we perform rigorous ablation experiments to evaluate the different components of our method. Our code is available at: https://github.com/MohsenZand/ObjectBox.
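The regression targets can be written out directly; the corner convention and stride normalization below are a hedged reading of the abstract, not a verified reproduction of ObjectBox's code:

```python
import numpy as np

def objectbox_targets(box, center_cell, stride):
    """box: (x1, y1, x2, y2) in image pixels; center_cell: (i, j) grid
    indices of the cell containing the box center. Targets are the
    distances from the center cell's two opposite corners to the four
    box sides, normalized by the stride."""
    x1, y1, x2, y2 = box
    i, j = center_cell
    # top-left and bottom-right corners of the center cell, in pixels
    tlx, tly = j * stride, i * stride
    brx, bry = (j + 1) * stride, (i + 1) * stride
    # distances from the cell corners out to the box sides
    dl, dt = (tlx - x1) / stride, (tly - y1) / stride
    dr, db = (x2 - brx) / stride, (y2 - bry) / stride
    return np.array([dl, dt, dr, db])
```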
Most state-of-the-art instance-level human parsing models adopt two-stage anchor-based detectors and therefore cannot avoid heuristic anchor box design and the lack of analysis at the pixel level. To address these two issues, we design an instance-level human parsing network that is anchor-free and solvable at the pixel level. It consists of two simple sub-networks: an anchor-free detection head for bounding box prediction and an edge-guided parsing head for human segmentation. The anchor-free detection head inherits pixel-level merits and effectively avoids the sensitivity to hyper-parameters demonstrated in object detection applications. By introducing part-aware boundary cues, the edge-guided parsing head is capable of distinguishing adjacent human parts from each other within one human instance, and even between overlapping instances. Meanwhile, a refinement head integrating box-level scores and part-level parsing quality is exploited to improve the quality of the parsing results. Experiments on two multiple human parsing datasets (i.e., CIHP and LV-MHP-v2.0) and one video instance-level human parsing dataset (i.e., VIP) show that our method achieves the best global-level and instance-level performance over state-of-the-art one-stage top-down alternatives.
Modern leading object detectors are either two-stage or one-stage networks repurposed from a deep CNN backbone classifier network. YOLOv3 is one such well-known state-of-the-art single-shot detector, which takes an input image and divides it into an equal-sized grid matrix. The grid cell containing an object's center is the cell responsible for detecting that particular object. This paper introduces a new mathematical approach that assigns multiple grid cells per object for accurate tight-fit bounding box prediction. We also propose an effective offline copy-paste data augmentation for object detection. Our proposed method significantly outperforms some existing object detectors, with the prospect of further improved performance.
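One hypothetical way to realize multi-grid assignment is to add the neighboring cells nearest to the object center to the responsible set, as in the sketch below (the paper's exact selection rule may differ):

```python
def responsible_cells(cx, cy, stride):
    """Return the grid cell containing the object center (cx, cy) plus its
    two immediate neighbors closest to the center, so several cells can
    regress the same object for a tighter box fit. Clamping of negative or
    out-of-range indices to the grid bounds is omitted for brevity."""
    j, i = int(cx // stride), int(cy // stride)
    cells = {(i, j)}
    # offset of the center inside its cell, each component in [0, 1)
    fx, fy = cx / stride - j, cy / stride - i
    cells.add((i, j - 1) if fx < 0.5 else (i, j + 1))
    cells.add((i - 1, j) if fy < 0.5 else (i + 1, j))
    return cells
```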
Figure 1: Results obtained from our single image, monocular 3D object detection network MonoDIS on a KITTI3D test image with corresponding bird's-eye view, showing its ability to estimate size and orientation of objects at different scales.
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogous to semantic segmentation. Almost all state-of-the-art object detectors, such as RetinaNet, SSD, YOLOv3, and Faster R-CNN, rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor box free, as well as proposal free. By eliminating the predefined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes, such as calculating overlaps during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With non-maximum suppression (NMS) as the only post-processing, FCOS with ResNeXt-64x4d-101 achieves 44.7% in AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and more flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: tinyurl.com/FCOSv1
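FCOS's per-pixel regression targets are simple to state in code; the sketch below computes the four side distances and the commonly used centerness score for a location strictly inside a ground-truth box:

```python
import numpy as np

def fcos_targets(px, py, box):
    """Per-pixel targets for a location (px, py) inside a ground-truth box
    (x1, y1, x2, y2): distances to the four sides, plus the centerness
    score used to down-weight poorly centered locations."""
    x1, y1, x2, y2 = box
    l, t = px - x1, py - y1
    r, b = x2 - px, y2 - py
    centerness = np.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))
    return (l, t, r, b), centerness
```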
In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its quality. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of one detector as the training set for the next. This resampling progressively improves hypothesis quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells or whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN. To facilitate future research, two implementations are made available at https://github.com/zhaoweicai/cascade-rcnn (Caffe) and https://github.com/zhaoweicai/Detectron-Cascade-RCNN (Detectron).
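The cascade can be summarized as a short refinement loop; the heads are abstracted into callables here, so this is a schematic of the training-time resampling rather than a faithful implementation:

```python
def cascade_refine(features, proposals, heads, iou_thresholds=(0.5, 0.6, 0.7)):
    """Schematic Cascade R-CNN loop: each head consumes the boxes refined by
    the previous one. During training, stage t would define its positives at
    the higher threshold iou_thresholds[t], so later stages are optimized on
    progressively better hypotheses. `heads` are callables returning
    (scores, refined_boxes); assignment and losses are omitted for brevity."""
    boxes = proposals
    for head, thr in zip(heads, iou_thresholds):
        # positives for this stage's loss would be defined by IoU >= thr
        scores, boxes = head(features, boxes)
    return scores, boxes
```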
Recent one-stage object detectors follow a per-pixel prediction approach that predicts both the object category scores and boundary positions from every single grid location. However, the most suitable positions for inferring different targets, i.e., the object category and boundaries, are generally different. Predicting all these targets from the same grid location thus may lead to sub-optimal results. In this paper, we analyze the suitable inference positions for object category and boundaries, and propose a prediction-target-decoupled detector named PDNet to establish a more flexible detection paradigm. Our PDNet with the prediction decoupling mechanism encodes different targets separately in different locations. A learnable prediction collection module is devised with two sets of dynamic points, i.e., dynamic boundary points and semantic points, to collect and aggregate the predictions from the favorable regions for localization and classification. We adopt a two-step strategy to learn these dynamic point positions, where the prior positions are estimated for different targets first, and the network further predicts residual offsets to the positions with better perception of the object properties. Extensive experiments on the MS COCO benchmark demonstrate the effectiveness and efficiency of our method. With a single ResNeXt-64x4d-101-DCN as the backbone, our detector achieves 50.1 AP with single-scale testing, which outperforms the state-of-the-art methods by an appreciable margin under the same experimental settings. Moreover, our detector is highly efficient as a one-stage framework. Our code is public at https://github.com/yangli18/PDNet.
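The prediction-collection step can be approximated with bilinear sampling at the estimated dynamic points; this grid_sample-based sketch is a rough stand-in for PDNet's learnable module, not its actual implementation:

```python
import torch
import torch.nn.functional as F

def collect_predictions(score_map, points):
    """score_map: (1, C, H, W) per-pixel predictions; points: (1, P, 2)
    sampling locations in pixel coordinates (x, y). Bilinearly gather
    predictions from the estimated dynamic points."""
    _, _, H, W = score_map.shape
    # normalize pixel coordinates to [-1, 1] as required by grid_sample
    grid = points.clone()
    grid[..., 0] = grid[..., 0] / (W - 1) * 2 - 1
    grid[..., 1] = grid[..., 1] / (H - 1) * 2 - 1
    sampled = F.grid_sample(score_map, grid.unsqueeze(2), align_corners=True)
    return sampled.squeeze(-1)  # (1, C, P): one prediction per dynamic point
```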
In this paper, we propose a radio-assisted human detection framework by incorporating radio information into state-of-the-art detection methods, including anchor-based one-stage detectors and two-stage detectors. We extract radio localization and identifier information from radio signals to assist human detection, by which the problems of false positives and false negatives can be greatly alleviated. For both kinds of detectors, we use a confidence score revision based on radio localization to improve detection performance. For two-stage detection methods, we propose to utilize region proposals generated from radio localization rather than relying on the region proposal network (RPN). Moreover, using the radio identifier information, a non-maximum suppression method with a radio localization constraint is also proposed to further suppress false detections and reduce missed detections. Experiments on the simulative Microsoft COCO dataset and Caltech pedestrian dataset show that the mean average precision (mAP) and miss rate of state-of-the-art detection methods can be improved with the aid of radio information. Finally, we conduct experiments in real-world scenarios to demonstrate the feasibility of our proposed method in practice.
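The radio-constrained suppression can be sketched as a standard greedy NMS with one extra condition; the identifier semantics below are assumptions about how radio matches might be encoded, not the paper's exact criterion:

```python
import numpy as np

def _iou(a, b):
    tl, br = np.maximum(a[:2], b[:2]), np.minimum(a[2:], b[2:])
    inter = np.clip(br - tl, 0, None).prod()
    area = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area - inter + 1e-9)

def radio_constrained_nms(boxes, scores, radio_ids, iou_thresh=0.5):
    """boxes: (N, 4); scores: (N,); radio_ids: (N,) identifier of the radio
    source matched to each box (-1 if none). A box overlapping a kept box is
    suppressed only when it does not carry a distinct radio identifier, so
    crowded true positives confirmed by radio survive."""
    keep = []
    for i in np.argsort(-scores):
        suppress = any(
            _iou(boxes[i], boxes[j]) > iou_thresh
            and (radio_ids[i] == -1 or radio_ids[i] == radio_ids[j])
            for j in keep)
        if not suppress:
            keep.append(i)
    return keep
```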
Segmenting highly overlapping image objects is challenging, because there is typically no distinction between real object contours and occlusion boundaries in images. Unlike previous instance segmentation methods, we model image formation as a composition of two overlapping layers, and propose the Bilayer Convolutional Network (BCNet), where the top layer detects occluding objects (occluders) while the bottom layer infers partially occluded instances (occludees). The explicit modeling of occlusion relationships with the bilayer structure naturally decouples the boundaries of the occluding and occluded instances, and considers the interaction between them during mask regression. We investigate the efficacy of the bilayer structure using two popular convolutional network designs, namely the Fully Convolutional Network (FCN) and the Graph Convolutional Network (GCN). Furthermore, we formulate bilayer decoupling using a vision transformer (ViT) by representing instances in the image as separate learnable occluder and occludee queries. The generalization ability of bilayer decoupling is validated with one-stage, two-stage, and query-based object detectors with various backbones and network layer choices, as shown on image instance segmentation benchmarks (COCO, KINS, COCOA) and video instance segmentation benchmarks (YTVIS, OVIS, BDD100K MOTS), especially for heavy occlusion cases. Code and data are available at https://github.com/lkeab/bcnet.
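The bilayer idea reduces to two stacked mask branches, where the occluder prediction conditions the occludee branch; the sketch below is a minimal FCN-style approximation of BCNet's heads, with its layer sizes chosen for illustration:

```python
import torch
import torch.nn as nn

class BilayerMaskHead(nn.Module):
    """Sketch of the bilayer structure: one branch predicts the occluder
    mask, and its output conditions a second branch that predicts the
    occludee mask, so occlusion boundaries are modeled explicitly rather
    than mixed into a single layer (BCNet's GCN/ViT variants are richer)."""
    def __init__(self, ch=256):
        super().__init__()
        self.occluder = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(), nn.Conv2d(ch, 1, 1))
        self.occludee = nn.Sequential(
            nn.Conv2d(ch + 1, ch, 3, padding=1), nn.ReLU(), nn.Conv2d(ch, 1, 1))

    def forward(self, roi_feat):
        top = self.occluder(roi_feat)  # occluding layer (occluder mask logits)
        bottom = self.occludee(torch.cat([roi_feat, top.sigmoid()], dim=1))
        return top, bottom
```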
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures, and their performance easily stagnates even when complex ensembles are constructed that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development of deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, have been introduced to address the problems existing in traditional architectures. These models differ in network architecture, training strategy, optimization function, etc. In this paper, we provide a review of deep learning based object detection frameworks. Our review begins with a brief introduction to the history of deep learning and its representative tool, the Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures, along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural-network-based learning systems.
In multi-person 2D pose estimation, bottom-up methods simultaneously predict the poses of all persons and, unlike top-down methods, do not rely on human detection. However, the accuracy of SOTA bottom-up methods is still inferior to that of existing top-down methods. This is because the predicted human poses are regressed based on inconsistent human bounding box centers and lack human-scale normalization, leading to inaccurate predicted poses and missed small-scale persons. To push the envelope of bottom-up pose estimation, we first propose multi-scale training to enhance the network's ability to handle scale variation with single-scale testing, especially for small-scale persons. Second, we introduce dual anatomical centers (i.e., head and body), from which human poses can be predicted more accurately and reliably, especially for small-scale persons. Moreover, existing bottom-up methods adopt multi-scale testing to boost pose estimation accuracy at the price of multiple additional forward passes, which weakens the efficiency of bottom-up methods, their core strength compared to top-down methods. By contrast, our multi-scale training enables the model to predict high-quality poses in a single forward pass (i.e., single-scale testing). Our method achieves a 38.4% improvement in bounding box precision and a 39.1% improvement in bounding box recall over the state of the art (SOTA) on the challenging small-scale persons subset of COCO. For human pose AP evaluation, we achieve a new SOTA (71.0 AP) on the COCO test-dev set with single-scale testing. We also achieve the top performance (40.3 AP) on the OCHuman dataset in cross-dataset evaluation.
With the rise of deep convolutional neural networks, object detection has achieved prominent advances in the past years. However, such prosperity cannot camouflage the unsatisfactory situation of Small Object Detection (SOD), one of the notoriously challenging tasks in computer vision, owing to the poor visual appearance and noisy representation caused by the intrinsic structure of small targets. In addition, large-scale datasets for benchmarking small object detection methods remain a bottleneck. In this paper, we first conduct a thorough review of small object detection. Then, to catalyze the development of SOD, we construct two large-scale Small Object Detection dAtasets (SODA), SODA-D and SODA-A, which focus on driving and aerial scenarios, respectively. SODA-D includes 24704 high-quality traffic images and 277596 instances of 9 categories. For SODA-A, we collect 2510 high-resolution aerial images and annotate 800203 instances over 9 classes. To the best of our knowledge, the proposed datasets are the first-ever attempt at large-scale benchmarks with a vast collection of exhaustively annotated instances tailored for multi-category SOD. Finally, we evaluate the performance of mainstream methods on SODA. We expect the released benchmarks to facilitate the development of SOD and spawn more breakthroughs in this field. The datasets and codes will be available soon at: https://shaunyuan22.github.io/SODA.
We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [11] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster, running at 50 fps on a Titan X (Pascal) GPU, and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [28,29] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches [26] when they are all used without post-processing. During post-processing, a pose refinement step can be used to boost the accuracy of these methods, but at 10 fps or less, they are much slower than our method.
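Recovering the pose from the predicted vertex projections is a direct application of OpenCV's PnP solver; the box-corner parameterization below is an assumption for illustration (the paper additionally predicts the object centroid, omitted here):

```python
import numpy as np
import cv2

def pose_from_projected_corners(corners_2d, dims, K):
    """corners_2d: (8, 2) predicted image locations of the 3D bounding-box
    vertices; dims: (w, h, l) of the object's 3D box; K: 3x3 camera
    intrinsics. Recover rotation R and translation t with a PnP solver."""
    w, h, l = dims
    # the 8 vertices of the object's 3D bounding box in its own frame
    corners_3d = np.array(
        [[sx * w / 2, sy * h / 2, sz * l / 2]
         for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
        dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d.astype(np.float64),
                                  K, None)
    R, _ = cv2.Rodrigues(rvec)  # axis-angle to rotation matrix
    return R, tvec
```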
This paper investigates the task of 2D whole-body human pose estimation, which aims to localize dense landmarks on the entire human body, including the body, feet, face, and hands. We propose a single-network approach, termed ZoomNet, to take into account the hierarchical structure of the full human body and solve the scale variation across different body parts. We further propose a neural architecture search framework, termed ZoomNAS, to promote both the accuracy and efficiency of whole-body pose estimation. ZoomNAS jointly searches the model architecture and the connections between different sub-modules, and automatically allocates computational complexity to the searched sub-modules. To train and evaluate ZoomNAS, we introduce the first large-scale 2D human whole-body dataset, COCO-WholeBody V1.0, which annotates 133 keypoints for in-the-wild images. Extensive experiments demonstrate the effectiveness of ZoomNAS and the significance of COCO-WholeBody V1.0.
We propose an end-to-end trainable approach for multi-instance pose estimation, called POET (POse Estimation Transformer). Combining a convolutional neural network with a transformer encoder-decoder architecture, we formulate multi-instance pose estimation from images as a direct set prediction problem. Our model is able to directly regress the poses of all individuals using a bipartite matching scheme. POET is trained using a set-based global loss that consists of a keypoint loss, a visibility loss, and a class loss. POET reasons about the relations between multiple detected individuals and the full image context to directly predict their poses in parallel. We show that POET achieves high accuracy on the COCO keypoint detection task while having fewer parameters and higher inference speed than other bottom-up and top-down approaches. Moreover, we show successful transfer learning when applying POET to animal pose estimation. To the best of our knowledge, this model is the first end-to-end trainable multi-instance pose estimation approach, and we hope it will serve as a simple and promising alternative.
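The set-prediction matching can be illustrated with the Hungarian algorithm from SciPy; the cost below (mean keypoint L1 distance minus class score) is a simplified stand-in for POET's full matching cost, which also involves visibility terms:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_poses(pred_kpts, pred_scores, gt_kpts):
    """pred_kpts: (Q, K, 2) predicted poses for Q queries; pred_scores: (Q,)
    person-class probabilities; gt_kpts: (G, K, 2) ground-truth poses.
    Build a pairwise cost and solve the one-to-one assignment so each
    ground-truth pose supervises exactly one query."""
    Q, G = len(pred_kpts), len(gt_kpts)
    cost = np.zeros((Q, G))
    for q in range(Q):
        for g in range(G):
            cost[q, g] = np.abs(pred_kpts[q] - gt_kpts[g]).mean() - pred_scores[q]
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return list(zip(rows, cols))  # (query index, ground-truth index) pairs
```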