One-to-one matching is the key design in DETR for establishing its end-to-end capability, so that object detection does not require a hand-crafted NMS (non-maximum suppression) procedure to remove duplicate detections. This end-to-end signature is important to the versatility of DETR and has been generalized to a broad range of visual problems, including instance/semantic segmentation, human pose estimation, and point cloud/multi-view based detection. However, we note that one-to-one matching significantly reduces the training efficacy of positive samples because too few queries are assigned as positive. This paper proposes a simple yet effective method based on a hybrid matching scheme, which combines the original one-to-one matching branch with auxiliary queries that use a one-to-many matching loss during training. This hybrid strategy is shown to significantly improve training efficiency and accuracy. At inference time, only the original one-to-one matching branch is used, maintaining the end-to-end merits and the same inference efficiency of DETR. The method, named $\mathcal{H}$-DETR, shows that a wide range of representative DETR methods can be consistently improved across various visual tasks, including Deformable-DETR, 3DETR/PETRv2, PETR, and TransTrack, among others. Code will be available at: https://github.com/hdetr
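A minimal sketch of what such a hybrid matching loss could look like, assuming a PyTorch-style setup; the query split, the helper logic, and the duplication factor `k` are illustrative assumptions, not the paper's exact implementation:

```python
# Hypothetical sketch of a hybrid matching loss: a one-to-one branch (used at
# inference) plus an auxiliary one-to-many branch (training only).
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def hybrid_matching_loss(cls_logits, gt_labels, k=6):
    """cls_logits: (num_queries, num_classes); gt_labels: (num_gt,).
    The first half of the queries forms the one-to-one branch, the second half
    the one-to-many branch (an assumed split for illustration)."""
    n = cls_logits.shape[0] // 2
    o2o, o2m = cls_logits[:n], cls_logits[n:]

    # One-to-one: Hungarian matching with negative class probability as cost.
    cost = -o2o.softmax(-1)[:, gt_labels]                 # (n, num_gt)
    q_idx, g_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    loss_o2o = F.cross_entropy(o2o[q_idx], gt_labels[g_idx])

    # One-to-many: each ground truth is matched to its k cheapest queries.
    cost_m = -o2m.softmax(-1)[:, gt_labels]               # (n, num_gt)
    topk = cost_m.topk(k, dim=0, largest=False).indices   # (k, num_gt)
    loss_o2m = F.cross_entropy(o2m[topk.flatten()], gt_labels.repeat(k))
    return loss_o2o + loss_o2m
```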
Segmentation is conventionally formulated as a per-pixel classification task over a complete label set, predicting each pixel's class from a fixed number of predefined semantic categories shared by all images or videos. Following this formulation, however, standard architectures will inevitably encounter various challenges under more realistic settings where the scope of categories scales up (e.g., beyond the level of 1K). On the other hand, in a typical image or video, only a few categories, i.e., a small subset of the complete label set, are present. In this paper, we propose to decompose segmentation into two sub-problems: (i) image-level or video-level multi-label classification and (ii) pixel-level classification over adaptively selected labels. Given an input image or video, our framework first performs multi-label classification over the complete label set and selects a small subset according to the class confidence scores. We then perform pixel-wise classification only on the selected labels with a rank-adaptive pixel classifier, which adjusts the pixel classification scores with a set of rank-oriented learnable temperature parameters. Our approach is conceptually general and can improve various existing segmentation frameworks by simply attaching a lightweight multi-label classification head and a rank-adaptive pixel classifier. We demonstrate the effectiveness of our framework with competitive experimental results across four tasks, including image semantic segmentation, image panoptic segmentation, video instance segmentation, and video semantic segmentation. In particular, with our RankSeg, Mask2Former gains +0.8%/+0.7%/+0.7% on the ADE20K panoptic segmentation / YouTube-VIS 2019 video instance segmentation / VSPW video semantic segmentation benchmarks, respectively.
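A hedged sketch of the two-step decomposition, assuming a PyTorch-style head; the class names, the top-k size, and the way the class embeddings are turned into pixel classifiers are assumptions for illustration, not the paper's exact design:

```python
# Hypothetical sketch: image-level multi-label selection of a label subset,
# then per-pixel classification restricted to the selected labels with
# rank-ordered learnable temperatures.
import torch


class RankAdaptiveSegHead(torch.nn.Module):
    def __init__(self, feat_dim, num_classes, k=16):
        super().__init__()
        self.k = k
        self.img_cls = torch.nn.Linear(feat_dim, num_classes)   # multi-label head
        self.class_emb = torch.nn.Embedding(num_classes, feat_dim)
        self.temps = torch.nn.Parameter(torch.ones(k))          # one temperature per rank

    def forward(self, pixel_feats):
        """pixel_feats: (B, feat_dim, H, W)."""
        img_feat = pixel_feats.mean(dim=(2, 3))                 # (B, feat_dim)
        scores = self.img_cls(img_feat).sigmoid()               # (B, num_classes)
        sel = scores.topk(self.k, dim=1).indices                # (B, k) ranked labels

        weights = self.class_emb(sel)                           # (B, k, feat_dim)
        logits = torch.einsum('bkc,bchw->bkhw', weights, pixel_feats)
        logits = logits / self.temps.view(1, self.k, 1, 1)      # rank-wise scaling
        return sel, logits                                      # pixel logits over the k selected labels
```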
We present the High-Resolution Transformer (HRFormer), which learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer that produces low-resolution representations at high memory and computational cost. We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet), together with local-window self-attention that performs self-attention over small non-overlapping image windows, to improve memory and computational efficiency. In addition, we introduce a convolution into the FFN to exchange information across the disconnected image windows. We demonstrate the effectiveness of the High-Resolution Transformer on human pose estimation and semantic segmentation tasks; for example, HRFormer outperforms the Swin Transformer on COCO pose estimation with 50% fewer parameters and 30% fewer FLOPs. Code is available at: https://github.com/hrnet/hRFormer.
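A minimal sketch of the FFN-with-convolution idea, assuming 2D feature maps; the layer sizes and the placement of the activation are illustrative assumptions rather than HRFormer's exact block:

```python
# Hypothetical sketch of an FFN with a 3x3 depth-wise convolution inserted
# between the two point-wise layers, so information can flow across the
# otherwise disconnected local attention windows.
import torch


class ConvFFN(torch.nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = torch.nn.Conv2d(dim, hidden_dim, kernel_size=1)
        self.dw = torch.nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                  padding=1, groups=hidden_dim)  # depth-wise 3x3
        self.fc2 = torch.nn.Conv2d(hidden_dim, dim, kernel_size=1)
        self.act = torch.nn.GELU()

    def forward(self, x):
        """x: (B, dim, H, W) feature map after local-window self-attention."""
        return self.fc2(self.act(self.dw(self.act(self.fc1(x)))))
```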
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. It is therefore difficult to deploy them on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. We then propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and it can thus be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
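To make the problem setup concrete, the following is a generic illustration of a fairness-aware distillation objective: a standard soft-label KD term plus a group-fairness penalty on the student's predictions. This is not the RELIANT algorithm itself (the abstract does not spell out its mechanism); the penalty choice and the binary-task assumption are ours:

```python
# Generic fair-KD sketch: soft-label distillation + statistical-parity-style penalty.
import torch
import torch.nn.functional as F


def fair_kd_loss(student_logits, teacher_logits, sensitive, tau=2.0, lam=1.0):
    """sensitive: (N,) binary group membership of each node (assumed)."""
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                  F.softmax(teacher_logits / tau, dim=1),
                  reduction='batchmean') * tau * tau
    probs = student_logits.softmax(dim=1)[:, 1]      # assume a binary node task
    gap = (probs[sensitive == 1].mean() - probs[sensitive == 0].mean()).abs()
    return kd + lam * gap                            # penalize prediction gap between groups
```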
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that pixels rendered at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which turns supersampling into an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features, significantly improving their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our proposed method generates higher-quality supersampling results than current state-of-the-art methods without increasing the total number of ray-tracing samples.
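A rough sketch of correlation-driven temporal accumulation as described above, assuming motion vectors are available as a sampling grid; the cosine-similarity gating and the blending rule are assumed simplifications, not the paper's learned network:

```python
# Hypothetical temporal accumulation: warp previous-frame features, measure
# their per-pixel correlation with the current features, and use it to gate
# how much history is blended in.
import torch
import torch.nn.functional as F


def temporal_accumulate(cur_feat, prev_feat, grid):
    """cur_feat, prev_feat: (B, C, H, W); grid: (B, H, W, 2) sampling grid in [-1, 1]."""
    warped = F.grid_sample(prev_feat, grid, align_corners=False)
    corr = F.cosine_similarity(cur_feat, warped, dim=1, eps=1e-6)   # (B, H, W)
    alpha = corr.clamp(min=0).unsqueeze(1)                          # blend weight in [0, 1]
    return alpha * warped + (1 - alpha) * cur_feat
```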
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions, without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased towards PQ. To handle both issues, we make the following contributions. Firstly, we design a meta-architecture that decouples the part features from the thing/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem; we term this model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure the task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ with a new part-whole cross-attention scheme, a masked cross-attention interaction between parts and wholes, to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the original Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFLOPs and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
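A hedged sketch of Mask2Former-style masked cross-attention as it might be applied here: part queries attend only to pixels inside the mask predicted for their parent "whole" query. The shapes, the 0.5 threshold, and the empty-mask fallback are illustrative assumptions:

```python
# Hypothetical masked cross-attention between part queries and pixel features,
# restricted by the parent (whole) mask prediction.
import torch


def masked_cross_attention(part_q, pixel_feats, whole_mask):
    """part_q: (B, Q, C); pixel_feats: (B, N, C) flattened pixels;
    whole_mask: (B, Q, N) parent-mask logits per part query."""
    B, Q, C = part_q.shape
    attn = torch.einsum('bqc,bnc->bqn', part_q, pixel_feats) * C ** -0.5
    masked = whole_mask.sigmoid() < 0.5
    # If a query's mask is empty, fall back to attending everywhere (avoids NaNs).
    masked = masked & ~masked.all(dim=-1, keepdim=True)
    attn = attn.masked_fill(masked, float('-inf')).softmax(dim=-1)
    return torch.einsum('bqn,bnc->bqc', attn, pixel_feats)
```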
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
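A hedged sketch of the overall idea: a fixed text embedding per class (e.g., precomputed offline from CLIP's text encoder) is projected and used as a per-class kernel over voxel features, so adding a new class only requires a new prompt embedding. The dimensions, the projection, and the prompt source are assumptions, not the Universal Model's exact head:

```python
# Hypothetical text-embedding-driven segmentation head for 3D CT features.
import torch


class TextDrivenSegHead(torch.nn.Module):
    def __init__(self, text_dim, feat_dim):
        super().__init__()
        self.proj = torch.nn.Linear(text_dim, feat_dim)

    def forward(self, voxel_feats, text_embeds):
        """voxel_feats: (B, feat_dim, D, H, W); text_embeds: (K, text_dim), one per class."""
        kernels = self.proj(text_embeds)                          # (K, feat_dim)
        logits = torch.einsum('kc,bcdhw->bkdhw', kernels, voxel_feats)
        return logits.sigmoid()                                   # K binary organ/tumor masks
```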
This paper illustrates the technologies behind user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent: an offline concept knowledge graph in the Life-Service domain that models the historical behaviors of users, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model that integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility and maintain low confidence in such black-box systems, a problem exacerbated by poor generalization to out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the black box transparent in a quantifiable way through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and on OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in clinical settings. Our work sheds light on safe clinical applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
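A minimal sketch of subjective-logic-style uncertainty for segmentation, following standard evidential deep learning: non-negative evidence per class yields Dirichlet parameters, from which per-pixel belief masses and an explicit uncertainty mass are computed. This illustrates the general technique; it is not necessarily the paper's exact head:

```python
# Evidential segmentation outputs: evidence -> Dirichlet alpha -> belief + uncertainty.
import torch
import torch.nn.functional as F


def evidential_outputs(logits):
    """logits: (B, K, H, W) raw per-class outputs."""
    evidence = F.softplus(logits)              # e_k >= 0
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)  # S = sum_k alpha_k
    belief = evidence / strength               # per-class belief mass b_k = e_k / S
    uncertainty = logits.shape[1] / strength   # u = K / S, high where evidence is weak
    prob = alpha / strength                    # expected class probabilities
    return prob, belief, uncertainty
```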
Depression is a leading cause of death worldwide, and its diagnosis is nontrivial. Multimodal learning is a popular solution for the automatic diagnosis of depression, but existing works suffer from two main drawbacks: 1) the high-order interactions between different modalities cannot be well exploited; and 2) the interpretability of the models is weak. To remedy these drawbacks, we propose a multimodal multi-order factor fusion (MMFF) method. Our method exploits the high-order interactions between different modalities by extracting and assembling modality factors under the guidance of a shared latent proxy. We conduct extensive experiments on two recent and popular datasets, E-DAIC-WOZ and CMDC, and the results show that our method achieves significantly better performance than other existing approaches. Besides, by analyzing the factor assembly process, our model can intuitively show the contribution of each factor, which helps us understand the fusion mechanism.
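A generic low-rank factor-fusion sketch in the spirit described above: each modality is decomposed into a small number of factors, and first- and second-order combinations of those factors are assembled into a fused representation. The class, shapes, and combination rule are assumptions for illustration; this is not the MMFF architecture:

```python
# Hypothetical multi-order factor fusion across modalities.
import torch


class FactorFusion(torch.nn.Module):
    def __init__(self, dims, hidden, rank=4):
        super().__init__()
        self.factors = torch.nn.ModuleList(
            [torch.nn.Linear(d, rank * hidden) for d in dims])
        self.rank, self.hidden = rank, hidden

    def forward(self, feats):
        """feats: list of per-modality tensors, feats[i]: (B, dims[i])."""
        f = [m(x).view(-1, self.rank, self.hidden)
             for m, x in zip(self.factors, feats)]        # per-modality factors
        second_order = torch.stack(f, dim=0).prod(dim=0)   # element-wise product across modalities
        first_order = torch.stack(f, dim=0).sum(dim=0)     # additive (first-order) term
        return (first_order + second_order).sum(dim=1)     # (B, hidden) fused representation
```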