We tackle a new task of few-shot object counting and detection. Given a few exemplar bounding boxes of a target object class, we seek to count and detect all objects of the target class. This task shares the same supervision as few-shot object counting but additionally outputs the object bounding boxes along with the total object count. To address this challenging problem, we introduce a novel two-stage training strategy and a novel uncertainty-aware few-shot object detector: Counting-DETR. The former is aimed at generating pseudo ground-truth bounding boxes to train the latter. The latter leverages the pseudo ground-truth provided by the former but takes the necessary steps to account for the imperfection of pseudo ground-truth. To validate the performance of our method on the new task, we introduce two new datasets named FSCD-147 and FSCD-LVIS. Both datasets contain images with complex scenes, multiple object classes per image, and a huge variation in object shapes, sizes, and appearances. Our proposed approach outperforms very strong baselines adapted from few-shot object counting and few-shot object detection, with a large margin in both counting and detection metrics. The code and models are available at \url{https://github.com/vinairesearch/counting-detr}.
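The abstract does not spell out how Counting-DETR compensates for imperfect pseudo ground truth. A common uncertainty-aware formulation, given here purely as an illustrative sketch (all names and shapes are hypothetical, not the paper's exact loss), regresses a per-coordinate uncertainty and uses it to down-weight the box regression loss:

```python
import torch

def uncertainty_box_loss(pred_boxes, pred_log_sigma, pseudo_gt_boxes):
    """Laplace negative log-likelihood for box regression.

    Coordinates with large predicted uncertainty (log_sigma) contribute
    less to the loss, which softens the impact of noisy pseudo ground
    truth. A generic sketch, not the exact Counting-DETR objective.
    """
    sigma = pred_log_sigma.exp()
    nll = torch.abs(pred_boxes - pseudo_gt_boxes) / sigma + pred_log_sigma
    return nll.mean()

# Toy usage with hypothetical shapes: 8 boxes in (cx, cy, w, h) format.
pred = torch.rand(8, 4)
log_sigma = torch.zeros(8, 4, requires_grad=True)
pseudo_gt = torch.rand(8, 4)
loss = uncertainty_box_loss(pred, log_sigma, pseudo_gt)
loss.backward()
```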
This paper introduces a new problem in 3D point clouds: few-shot instance segmentation. Given a few annotated point clouds exemplifying a target class, our goal is to segment all instances of this target class in a query point cloud. This problem has a wide range of practical applications where point-wise instance segmentation annotation is prohibitively expensive to collect. To address this problem, we propose Geodesic-Former -- the first geodesic-guided transformer for 3D point cloud instance segmentation. The key idea is to leverage the geodesic distance to tackle the density imbalance of LiDAR 3D point clouds. LiDAR 3D point clouds are dense near the object surface and sparse or empty elsewhere, making the Euclidean distance less effective at distinguishing different objects. The geodesic distance, on the other hand, is more suitable since it encodes the scene's geometry, which can be used as a guiding signal for the attention mechanism in a transformer decoder to generate kernels representing distinct features of instances. These kernels are then used in a dynamic convolution to obtain the final instance masks. To evaluate Geodesic-Former on the new task, we propose new splits of two common 3D point cloud instance segmentation datasets: ScanNetV2 and S3DIS. Geodesic-Former consistently outperforms strong baselines adapted from state-of-the-art 3D point cloud instance segmentation approaches with a significant margin. Code is available at https://github.com/vinairesearch/geoformer.
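The key ingredient, geodesic distance on a LiDAR point cloud, can be approximated with a k-nearest-neighbor graph and shortest paths. A minimal sketch of that computation (the graph parameters are assumptions, and the use of these distances to guide decoder attention is omitted):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def geodesic_distances(points, k=8):
    """Approximate geodesic distances on a point cloud.

    Builds a k-nearest-neighbor graph with Euclidean edge weights and
    runs Dijkstra, so path lengths follow the sampled surface rather
    than cutting through empty space.
    """
    n = len(points)
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)  # first neighbor is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    vals = dist[:, 1:].ravel()
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    return dijkstra(graph, directed=False)

pts = np.random.rand(200, 3)
geo = geodesic_distances(pts)  # (200, 200) matrix; inf = unreachable
```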
This paper aims to tackle the challenging task of one-shot object counting. Given an image containing objects of a novel, previously unseen category, the goal of the task is to count all instances of the desired category with only one supporting bounding box example. To this end, we propose a counting model by which You Only Look At One instance (LaoNet). First, a feature correlation module combines self-attention and correlation modules to learn both inner-relations and inter-relations. It enables the network to be robust to inconsistencies of rotation and size among different instances. Second, a scale aggregation mechanism is designed to help extract features with different scale information. Compared with existing few-shot counting methods, LaoNet achieves state-of-the-art results while learning with a high convergence speed. The code will be available soon.
Main subjects usually exist in images or videos, since they are the objects that the photographer wants to highlight. Human viewers can easily identify them, but algorithms often mix them up with other objects. Detecting main subjects is an important technique to help machines understand the content of images and videos. We present a new dataset whose goal is to train models to understand the layout of the objects and the context of the image, and then to find the main subjects among them. This is achieved in three aspects. By gathering images from movie shots created with professional shooting skills, we collect a dataset with strong diversity; specifically, it contains 107,700 images from 21,540 movie shots. We label them with bounding boxes for two classes: subject and non-subject foreground object. We present a detailed analysis of the dataset and compare the task with saliency detection and object detection. ImageSubject is the first dataset that tries to localize the subject in an image that the photographer wants to highlight. Moreover, we find that transformer-based detection models offer the best results among other popular model architectures. Finally, we discuss the potential applications and conclude with the importance of the dataset.
In this paper, we consider the problem of generalized visual object counting, with the goal of developing a computational model for counting the number of objects from arbitrary semantic categories, using an arbitrary number of "exemplars", i.e., zero-shot or few-shot counting. To this end, we make the following four contributions: (1) we introduce a novel transformer-based architecture for generalized visual object counting, termed Counting Transformer (CounTR), which explicitly captures the similarity between image patches or with the given "exemplars" via an attention mechanism; (2) we adopt a two-stage training regime that first pre-trains the model with self-supervised learning, followed by supervised fine-tuning; (3) we propose a simple, scalable pipeline for synthesizing training images with a large number of instances or from different semantic categories, explicitly forcing the model to make use of the given "exemplars"; (4) we conduct thorough ablation studies on the large-scale counting benchmarks, e.g., FSC-147, and demonstrate state-of-the-art performance in both zero-shot and few-shot settings.
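A minimal sketch of the similarity-via-attention idea behind CounTR, assuming standard multi-head cross-attention from image patch tokens to exemplar tokens (the full architecture, self-supervised pre-training, and synthesis pipeline are omitted; names and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class ExemplarCrossAttention(nn.Module):
    """Image patch tokens attend to exemplar tokens.

    The counting decoder is assumed to fuse exemplar evidence into
    patch features with standard multi-head cross-attention; this is
    a sketch of that step, not the full CounTR model.
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_tokens, exemplar_tokens):
        fused, _ = self.attn(patch_tokens, exemplar_tokens, exemplar_tokens)
        return self.norm(patch_tokens + fused)

patches = torch.rand(2, 196, 256)   # B x N_patches x C
exemplars = torch.rand(2, 3, 256)   # B x N_exemplars x C (3-shot)
out = ExemplarCrossAttention()(patches, exemplars)
```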
Crowd localization, i.e., predicting head positions, is a more practical and higher-level task than mere counting. Existing methods employ pseudo bounding boxes or pre-designed localization maps, relying on complex post-processing to obtain the head positions. In this paper, we propose an elegant, end-to-end Crowd Localization TRansformer named CLTR that solves the task in a regression-based paradigm. The proposed method views crowd localization as a direct set prediction problem, taking extracted features and trainable embeddings as input to the transformer decoder. To reduce ambiguous points and produce more reasonable matching results, we introduce a KMO-based Hungarian matcher, which adopts the nearby context as an auxiliary matching cost. Extensive experiments conducted on five datasets under various data settings show the effectiveness of our method. In particular, the proposed method achieves the best localization performance on the NWPU-Crowd, UCF-QNRF, and ShanghaiTech Part A datasets.
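The abstract does not define the KMO cost precisely; the following sketch shows set matching of head points with SciPy's Hungarian solver plus a crude k-nearest-neighbor context term as a stand-in for the auxiliary cost (the context term and its weight are assumptions, not the paper's formulation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_points(pred, gt, ctx_weight=0.5, k=3):
    """Set matching between predicted and ground-truth head points.

    Cost = pairwise L2 distance + an auxiliary term comparing each
    point's mean distance to its k nearest neighbors, a rough proxy
    for the 'nearby context' used by the KMO-based matcher.
    """
    d = np.linalg.norm(pred[:, None] - gt[None, :], axis=-1)

    def knn_ctx(pts):
        dd = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        dd.sort(axis=1)
        return dd[:, 1:k + 1].mean(axis=1)  # skip the zero self-distance

    ctx_cost = np.abs(knn_ctx(pred)[:, None] - knn_ctx(gt)[None, :])
    rows, cols = linear_sum_assignment(d + ctx_weight * ctx_cost)
    return rows, cols

pred = np.random.rand(10, 2)
gt = np.random.rand(10, 2)
rows, cols = match_points(pred, gt)
```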
People looking at each other, or mutual gaze, is ubiquitous in our daily interactions, and detecting mutual gaze is of great significance for understanding human social scenes. Current mutual gaze detection methods focus on two-stage approaches, whose inference speed is limited by the two-stage pipeline and whose second-stage performance is affected by the first stage. In this paper, we propose a novel one-stage mutual gaze detection framework called Mutual Gaze TRansformer, or MGTR, to perform mutual gaze detection in an end-to-end manner. By designing mutual gaze instance triples, MGTR can detect each human head bounding box and simultaneously infer mutual gaze relationships based on global image information, which streamlines the whole process. Experimental results on two mutual gaze datasets show that our method accelerates the mutual gaze detection process without losing performance. Ablation studies show that different components of MGTR can capture different levels of semantic information in images. Code is available at https://github.com/gmbition/mgtr
This work studies the problem of few-shot object counting, which counts the number of exemplar objects (i.e., those described by one or several support images) occurring in a query image. The major challenge lies in that the target objects can be densely packed in the query image, making it hard to recognize every single one. To tackle the obstacle, we propose a novel learning block equipped with a similarity comparison module and a feature enhancement module. Concretely, given a support image and a query image, we first derive a score map by comparing their projected features at every spatial position. The score maps over all support images are collected together and normalized across both the exemplar dimension and the spatial dimensions, producing a reliable similarity map. We then enhance the query feature with the support features by employing the developed point-wise similarities as weighting coefficients. Such a design encourages the model to inspect the query image by focusing more on the regions akin to the support images, leading to much clearer boundaries between different objects. Extensive experiments on various benchmarks and training setups show that we surpass state-of-the-art methods by a sufficiently large margin. For instance, on the recent large-scale FSC-147 dataset, we surpass the state-of-the-art method by improving the mean absolute error from 22.08 to 14.32 (35%$\uparrow$). Code has been released at https://github.com/zhiyuanyou/SAFECount.
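A minimal sketch of the two-step idea: compare projected features to obtain a similarity map normalized over both the exemplar and the spatial dimensions, then use it to re-weight support features added back onto the query (the pooling and shapes are assumptions; the real SAFECount block differs in detail):

```python
import torch
import torch.nn.functional as F

def similarity_enhance(query_feat, support_feats):
    """Similarity comparison followed by feature enhancement.

    query_feat:    (C, H, W) projected query feature map
    support_feats: (S, C), one pooled vector per support image
    """
    C, H, W = query_feat.shape
    q = query_feat.reshape(C, -1)               # (C, HW)
    scores = support_feats @ q                  # (S, HW)
    scores = F.softmax(scores, dim=0)           # normalize across exemplars
    scores = F.softmax(scores, dim=1)           # normalize across space
    enhancement = support_feats.t() @ scores    # (C, HW)
    return query_feat + enhancement.reshape(C, H, W)

out = similarity_enhance(torch.rand(64, 32, 32), torch.rand(3, 64))
```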
The query mechanism introduced in the DETR method is changing the paradigm of object detection, and recently many query-based methods have obtained strong object detection performance. However, current query-based detection pipelines suffer from the following two issues. First, multi-stage decoders are required to optimize the randomly initialized object queries, incurring a large computational burden. Second, the queries are fixed after training, leading to unsatisfying generalization capability. To remedy the above issues, we propose featurized object queries predicted by a query generation network in the well-established Faster R-CNN framework and develop Featurized Query R-CNN. Extensive experiments on the COCO dataset show that our Featurized Query R-CNN obtains the best speed-accuracy trade-off among all R-CNN detectors, including the recent state-of-the-art Sparse R-CNN detector. The code is available at \url{https://github.com/hustvl/featurized-queryrcnn}.
The DETR object detection approach applies the transformer encoder and decoder architecture to detect objects and achieves promising performance. In this paper, we present a simple approach to address the main problem of DETR, the slow convergence, by using a representation learning technique. In this approach, we detect an object bounding box as a pair of keypoints, the top-left corner and the center, using two decoders. By detecting objects as paired keypoints, the model builds up a joint classification and pair association on the output queries from the two decoders. For the pair association we propose utilizing a contrastive self-supervised learning algorithm without requiring specialized architecture. Experimental results on the MS COCO dataset show that Pair DETR can converge at least 10x faster than the original DETR and 1.5x faster than Conditional DETR during training, while having consistently higher Average Precision scores.
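The pair association can be implemented with a standard InfoNCE objective between corner-decoder and center-decoder embeddings of the same object. A hedged sketch of that idea, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def pair_association_loss(corner_emb, center_emb, temperature=0.07):
    """InfoNCE-style association between two decoders' outputs.

    corner_emb, center_emb: (N, D) query embeddings where row i of
    each tensor is assumed to describe the same object. Matching
    pairs are pulled together; all other combinations pushed apart.
    """
    corner = F.normalize(corner_emb, dim=-1)
    center = F.normalize(center_emb, dim=-1)
    logits = corner @ center.t() / temperature  # (N, N) similarity logits
    targets = torch.arange(len(corner))
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

loss = pair_association_loss(torch.rand(16, 128), torch.rand(16, 128))
```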
Open world object detection aims at detecting objects that are absent from the object classes of the training data as unknown objects without explicit supervision. Furthermore, the exact classes of the unknown objects must be identified without catastrophic forgetting of the previous known classes when the corresponding annotations of the unknown objects are given incrementally. In this paper, we propose a two-stage training approach named Open World DETR for open world object detection based on Deformable DETR. In the first stage, we pre-train a model on the current annotated data to detect objects from the current known classes, and concurrently train an additional binary classifier to classify predictions into foreground or background classes. This helps the model build unbiased feature representations that can facilitate the detection of unknown classes in the subsequent process. In the second stage, we fine-tune the class-specific components of the model with a multi-view self-labeling strategy and a consistency constraint. Furthermore, we alleviate catastrophic forgetting when the annotations of the unknown classes become available incrementally by using knowledge distillation and exemplar replay. Experimental results on PASCAL VOC and MS-COCO show that our proposed method outperforms other state-of-the-art open world object detection methods by a large margin.
Crowd counting plays an important role in risk perception and early warning, traffic control, and scene statistical analysis. The challenges of crowd counting in highly dense and complex scenes lie in the mutual occlusion of human body parts, the large variation of body scales, and the complexity of imaging conditions. Deep learning based head detection is a promising method for crowd counting. However, the canonical object detection networks cannot be applied well to this field for two main reasons. First, most of the existing head detection datasets are annotated only with center points instead of the bounding boxes that are mandatory for canonical detectors. Second, sample imbalance has not yet been overcome in highly dense and complex scenes, because the existing loss functions calculate the positive loss at a single key point or over the entire target area with the same weight. To address these problems, we propose a novel loss function, called Mask Focal Loss, to unify the loss functions based on heatmap ground truth (GT) and binary feature map GT. Mask Focal Loss redefines the weight of the loss contributions according to the in-situ value of the heatmap with a Gaussian kernel. For better evaluation and comparison, a new synthetic dataset, GTA\_Head, is made public, including 35 sequences, 5,096 images, and 1,732,043 head labels with bounding boxes. Experimental results show superior performance and demonstrate that our proposed Mask Focal Loss is applicable to all of the canonical detectors and to various datasets with different GT. This provides a strong basis for surpassing the crowd counting methods based on density estimation.
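The abstract suggests a penalty-reduced focal loss whose negative weights follow the Gaussian heatmap value. The well-known CornerNet-style formulation below is assumed to be close in spirit to Mask Focal Loss, but it is a sketch rather than the paper's definition:

```python
import torch

def heatmap_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    """Penalty-reduced focal loss on a Gaussian heatmap GT.

    pred, gt: (H, W) tensors in [0, 1]. Pixels where gt == 1 are
    positives; the negative loss elsewhere is down-weighted by
    (1 - gt)^beta, i.e., by the in-situ heatmap value.
    """
    pred = pred.clamp(eps, 1 - eps)
    pos = gt.eq(1).float()
    neg = 1.0 - pos
    pos_loss = -((1 - pred) ** alpha) * pred.log() * pos
    neg_loss = -((1 - gt) ** beta) * (pred ** alpha) * (1 - pred).log() * neg
    n_pos = pos.sum().clamp(min=1.0)
    return (pos_loss.sum() + neg_loss.sum()) / n_pos

loss = heatmap_focal_loss(torch.rand(128, 128), torch.rand(128, 128))
```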
We present in this paper a novel denoising training method to speed up DETR (DEtection TRansformer) training and offer a deepened understanding of the slow convergence issue of DETR-like methods. We show that the slow convergence results from the instability of bipartite graph matching, which causes inconsistent optimization goals in early training stages. To address this issue, in addition to the Hungarian loss, our method feeds ground-truth bounding boxes with noises into the Transformer decoder and trains the model to reconstruct the original boxes, which effectively reduces the bipartite graph matching difficulty and leads to faster convergence. Our method is universal and can be easily plugged into any DETR-like method by adding dozens of lines of code to achieve a remarkable improvement. As a result, our DN-DETR results in a remarkable improvement ($+1.9$AP) under the same setting and achieves the best result (AP $43.4$ and $48.6$ with $12$ and $50$ epochs of training respectively) among DETR-like methods with a ResNet-$50$ backbone. Compared with the baseline under the same setting, DN-DETR achieves comparable performance with $50\%$ of the training epochs. Code is available at \url{https://github.com/FengLi-ust/DN-DETR}.
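The core denoising ingredient, jittering ground-truth boxes before feeding them to the decoder, can be sketched as follows (noise magnitudes here are illustrative, not DN-DETR's exact hyper-parameters):

```python
import torch

def noise_boxes(gt_boxes, box_noise_scale=0.4):
    """Jitter ground-truth boxes for a denoising group.

    gt_boxes: (N, 4) in normalized (cx, cy, w, h). Centers are shifted
    proportionally to the box size and sizes are rescaled; the decoder
    is then trained to reconstruct the original boxes.
    """
    cxcy, wh = gt_boxes[:, :2], gt_boxes[:, 2:]
    shift = (torch.rand_like(cxcy) * 2 - 1) * 0.5 * wh * box_noise_scale
    scale = 1 + (torch.rand_like(wh) * 2 - 1) * box_noise_scale
    noised = torch.cat([cxcy + shift, wh * scale], dim=1)
    return noised.clamp(0.0, 1.0)

noised = noise_boxes(torch.tensor([[0.5, 0.5, 0.2, 0.3]]))
```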
Open-vocabulary object detection, which is concerned with the problem of detecting novel objects guided by natural language, has gained increasing attention from the community. Ideally, we would like to extend an open-vocabulary detector such that it can produce bounding box predictions based on user inputs in the form of either natural language or an exemplar image. This offers great flexibility and user experience for human-computer interaction. To this end, we propose a novel open-vocabulary detector based on DETR -- hence the name OV-DETR -- which, once trained, can detect any object given its class name or an exemplar image. The biggest challenge of turning DETR into an open-vocabulary detector is that it is impossible to calculate the classification cost matrix of novel classes without access to their labeled images. To overcome this challenge, we formulate the learning objective as a binary matching one between input queries (class name or exemplar image) and the corresponding objects, which learns useful correspondence to generalize to unseen queries during testing. For training, we choose to condition the Transformer decoder on the input embeddings obtained from a pre-trained vision-language model like CLIP, in order to enable matching for both text and image queries. With extensive experiments on the LVIS and COCO datasets, we demonstrate that our OV-DETR -- the first end-to-end Transformer-based open-vocabulary detector -- achieves non-trivial improvements over the current state of the art.
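The conditioning step can be sketched by adding a projection of the CLIP embedding of the class name (or exemplar image) to every learned object query; the module name and dimensions below are assumptions, not OV-DETR's exact design:

```python
import torch
import torch.nn as nn

class ConditionalQueries(nn.Module):
    """Condition decoder queries on a CLIP embedding of the query class.

    Each learned object query is summed with a projection of the CLIP
    text (or exemplar-image) embedding, so the decoder searches for
    objects matching that embedding. Matching and the rest of the
    detector are omitted.
    """
    def __init__(self, num_queries=100, dim=256, clip_dim=512):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        self.proj = nn.Linear(clip_dim, dim)

    def forward(self, clip_embedding):                  # (B, clip_dim)
        cond = self.proj(clip_embedding).unsqueeze(1)   # (B, 1, dim)
        return self.queries.weight.unsqueeze(0) + cond  # (B, Q, dim)

queries = ConditionalQueries()(torch.rand(2, 512))
```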
Open-world object detection (OWOD) is a challenging computer vision problem, where the task is to detect a known set of object categories while simultaneously identifying unknown objects. Additionally, the model must incrementally learn new classes that become known in the next training episodes. Distinct from standard object detection, the OWOD setting poses significant challenges for generating quality candidate proposals on potentially unknown objects, separating the unknown objects from the background, and detecting diverse unknown objects. Here, we introduce a novel end-to-end transformer-based framework, OW-DETR, for open-world object detection. The proposed OW-DETR comprises three dedicated components, namely, attention-driven pseudo-labeling, novelty classification, and objectness scoring, to explicitly address the aforementioned OWOD challenges. Our OW-DETR explicitly encodes multi-scale contextual information, possesses less inductive bias, enables knowledge transfer from known classes to the unknown class, and can better discriminate between unknown objects and background. Comprehensive experiments are performed on two benchmarks: MS-COCO and PASCAL VOC. Extensive ablations reveal the merits of our proposed contributions. Further, our model outperforms the recently introduced OWOD approach, ORE, with absolute gains ranging from 1.8% to 3.3% in terms of unknown recall on the MS-COCO benchmark. In the case of incremental object detection, OW-DETR outperforms the state-of-the-art in all settings on the PASCAL VOC benchmark. Our code and models will be publicly released.
Annotating bounding boxes for object detection is expensive, time-consuming, and error-prone. In this work, we propose a DETR-based framework that aims to explicitly complete missing annotations in partially annotated dense scene datasets. This reduces the number of object instances that need to be annotated per scene, thereby reducing annotation cost. The framework augments the object queries in the DETR decoder with patch information of objects in the image. Combined with a matching loss, it can effectively find objects that are similar to the input patches and complete the missing annotations. We show that our framework outperforms state-of-the-art methods such as Soft Sampling and Unbiased Teacher, while it can also be used in conjunction with these methods to further improve their performance. Our framework is also agnostic to the choice of the downstream object detector; we show performance improvements for several popular detectors, such as Faster R-CNN, Cascade R-CNN, CenterNet2, and Deformable DETR, on multiple dense scene datasets.
Recently, the dominant DETR-based approaches apply a central-concept spatial prior to accelerate Transformer detector convergence. These methods gradually refine the reference points to the center of target objects and imbue object queries with the updated central reference information for spatially conditional attention. However, centralizing reference points may severely deteriorate queries' saliency and confuse detectors due to the indiscriminative spatial prior. To bridge the gap between the reference points of salient queries and Transformer detectors, we propose SAlient Point-based DETR (SAP-DETR) by treating object detection as a transformation from salient points to instance objects. In SAP-DETR, we explicitly initialize a query-specific reference point for each object query, gradually aggregate them into an instance object, and then predict the distance from each side of the bounding box to these points. By rapidly attending to the query-specific reference region and other conditional extreme regions from the image features, SAP-DETR can effectively bridge the gap between salient points and the query-based Transformer detector with a significant convergence speed. Our extensive experiments have demonstrated that SAP-DETR achieves 1.4 times faster convergence with competitive performance. Under the standard training scheme, SAP-DETR stably promotes the SOTA approaches by 1.0 AP. Based on ResNet-DC-101, SAP-DETR achieves 46.9 AP.
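SAP-DETR's box parameterization, a reference point plus distances to the four box sides, decodes as follows (the (left, top, right, bottom) ordering is an assumption for this sketch):

```python
import torch

def boxes_from_points(points, side_dists):
    """Decode boxes from salient points plus per-side distances.

    points:     (N, 2) reference points (x, y)
    side_dists: (N, 4) distances (left, top, right, bottom) from the
                point to each side of its bounding box.
    Returns (N, 4) boxes as (x1, y1, x2, y2).
    """
    x, y = points[:, 0], points[:, 1]
    l, t, r, b = side_dists.unbind(dim=1)
    return torch.stack([x - l, y - t, x + r, y + b], dim=1)

boxes = boxes_from_points(torch.rand(5, 2), torch.rand(5, 4))
```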
Video crowd localization is a crucial yet challenging task, which aims to estimate the exact locations of human heads in a given crowded video. To model the spatial-temporal dependencies of human mobility, we propose a multi-focus Gaussian neighborhood attention (GNA), which can effectively exploit long-range correspondences while maintaining the spatial topological structure of the input videos. In particular, our GNA can also capture the scale variation of human heads well using the equipped multi-focus mechanism. Based on the multi-focus GNA, we develop a unified neural network called GNANet to accurately locate head centers in video clips by fully aggregating spatial-temporal information via a scene modeling module and a context cross-attention module. Moreover, to facilitate future research in this field, we introduce a large-scale crowd video benchmark named VSCrowd, which consists of 60K+ frames captured in various surveillance scenes and 2M+ head annotations. Finally, we conduct extensive experiments on three datasets including our SenseCrowd, and the experimental results show that the proposed method is able to achieve state-of-the-art performance for both video crowd localization and counting.
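A single-focus sketch of Gaussian neighborhood attention: attention logits receive an additive bias that decays with squared spatial distance, so each token mostly attends to a Gaussian neighborhood around itself (the paper's multi-focus GNA uses several scales and temporal context, which are omitted here):

```python
import torch
import torch.nn.functional as F

def gaussian_neighborhood_attention(q, k, v, coords, sigma=4.0):
    """Attention biased toward a Gaussian spatial neighborhood.

    q, k, v: (N, D) tokens on a spatial grid; coords: (N, 2) their
    pixel coordinates. The bias term preserves the grid's spatial
    topology by suppressing attention to distant positions.
    """
    d2 = ((coords[:, None] - coords[None, :]) ** 2).sum(-1)  # (N, N)
    bias = -d2 / (2 * sigma ** 2)
    logits = q @ k.t() / (q.shape[-1] ** 0.5) + bias
    return F.softmax(logits, dim=-1) @ v

n, d = 64, 32
coords = torch.stack(torch.meshgrid(
    torch.arange(8.0), torch.arange(8.0), indexing="ij"), -1).reshape(-1, 2)
out = gaussian_neighborhood_attention(torch.rand(n, d), torch.rand(n, d),
                                      torch.rand(n, d), coords)
```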
We present DINO (\textbf{D}ETR with \textbf{I}mproved de\textbf{N}oising anch\textbf{O}r boxes), a state-of-the-art end-to-end object detector. DINO improves over previous DETR-like models in performance and efficiency by using a contrastive way for denoising training, a mixed query selection method for anchor initialization, and a look-forward-twice scheme for box prediction. DINO achieves $49.4$ AP in $12$ epochs and $51.3$ AP in $24$ epochs on COCO with a ResNet-50 backbone and multi-scale features, yielding significant improvements of $\textbf{+6.0}$ AP and $\textbf{+2.7}$ AP, respectively, compared to DN-DETR, the previous best DETR-like model. DINO scales well in both model size and data size. Without bells and whistles, after pre-training on the Objects365 dataset with a SwinL backbone, DINO obtains the best results on both COCO \texttt{val2017} ($\textbf{63.2}$ AP) and \texttt{test-dev} ($\textbf{63.3}$ AP). Compared to other models on the leaderboard, DINO significantly reduces its model size and pre-training data size while achieving better results. Our code will be available at \url{https://github.com/ideacvr/dino}.
In this paper, we propose a simple attention mechanism, which we call box attention. It enables spatial interaction between grid features, as sampled from boxes of interest, and improves the learning capability of transformers for several vision tasks. Specifically, we present BoxeR, short for Box Transformer, which attends to a set of boxes by predicting their transformation from a reference window on an input feature map. BoxeR computes attention on these boxes by considering their grid structure. Notably, BoxeR-2D naturally reasons about box information within its attention module, making it suitable for end-to-end instance detection and segmentation tasks. By learning invariance to rotation in the box-attention module, BoxeR-3D is capable of generating discriminative information from a bird's-eye-view plane for 3D end-to-end object detection. Our experiments show that the proposed BoxeR-2D achieves better results on COCO detection and performs comparably with the well-established and highly-optimized Mask R-CNN on COCO instance segmentation. BoxeR-3D already obtains compelling performance for the vehicle category of Waymo Open, without any class-specific optimization. The code will be released.
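The sampling step at the heart of box attention, bilinearly reading a small grid of features inside each predicted box, can be sketched with grid_sample (the grid size and box format are assumptions; the attention computed over the sampled grid is omitted):

```python
import torch
import torch.nn.functional as F

def box_attention_sample(feature_map, boxes, grid=2):
    """Sample a grid of features inside each box of interest.

    feature_map: (C, H, W); boxes: (N, 4) normalized (cx, cy, w, h).
    For every box, a grid x grid lattice of points is bilinearly
    sampled from the feature map.
    """
    N = boxes.shape[0]
    ys, xs = torch.meshgrid(torch.linspace(-0.5, 0.5, grid),
                            torch.linspace(-0.5, 0.5, grid), indexing="ij")
    offsets = torch.stack([xs, ys], dim=-1).reshape(1, -1, 2)  # (1, G*G, 2)
    centers = boxes[:, None, :2]                               # (N, 1, 2)
    wh = boxes[:, None, 2:]                                    # (N, 1, 2)
    pts = (centers + offsets * wh) * 2 - 1                     # grid_sample range
    fmap = feature_map.unsqueeze(0).expand(N, -1, -1, -1)      # (N, C, H, W)
    return F.grid_sample(fmap, pts.unsqueeze(1), align_corners=False)

feats = box_attention_sample(torch.rand(64, 32, 32),
                             torch.tensor([[0.5, 0.5, 0.25, 0.25]]))
```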