Image and video synthesis has become a flourishing topic in the computer vision and machine learning communities alongside the development of deep generative models, owing to its great academic and practical value. Many researchers have devoted themselves to synthesizing high-fidelity human images, humans being one of the most common object categories in daily life, and a large number of studies have been conducted based on various deep generative models, task settings and applications. It is therefore necessary to give a comprehensive overview of these methods for human image generation. In this paper, we divide human image generation techniques into three paradigms, i.e., data-driven methods, knowledge-guided methods and hybrid methods. For each paradigm, the most representative models and their variants are presented, and the advantages and characteristics of different methods are summarized in terms of model architectures and input/output requirements. In addition, the main public human image datasets and evaluation metrics in the literature are summarized. Furthermore, given the wide application potential, two typical downstream usages of synthesized human images are covered, i.e., data augmentation for person recognition tasks and virtual try-on for fashion customers. Finally, we discuss the challenges and potential directions of human image generation to shed light on future research.
Time series forecasting plays an important role in many real-world scenarios, such as equipment lifecycle prediction, weather forecasting, and traffic flow forecasting. Recent studies have shown that various Transformer-based models achieve remarkable results on time series forecasting. However, several issues still limit the ability of Transformer-based models on time series forecasting tasks: (i) learning directly on raw data makes them susceptible to noise because of its complex and unstable feature representation; (ii) the self-attention mechanism pays insufficient attention to changing features and temporal dependencies. To address these two issues, we propose a Transformer-based difference-reconstruction attention model, Draformer. Specifically, Draformer has the following innovations: (i) learning on the difference sequence, which preserves clear and stable sequence features by differencing and highlights the change properties of the sequence; (ii) reconstructed attention: distance attention exhibits sequential distance through a learnable Gaussian kernel, while distribution-difference attention computes distribution differences by mapping the difference sequence into an adaptive feature space, and the combination of the two effectively focuses on the sequences with prominent associations; (iii) a reconstructed decoder input, which extracts sequence features by integrating variation information and temporal correlation, thereby obtaining a more comprehensive sequence representation. Extensive experiments on four large-scale datasets demonstrate that Draformer outperforms state-of-the-art baselines.
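As a rough illustration of ideas (i) and (ii) above, the sketch below differences a series and biases self-attention with a learnable Gaussian kernel over positional distance. It is a minimal reading of the abstract, not the authors' implementation; the module name, sizes and hyper-parameters are assumptions.

```python
# Minimal sketch: difference sequence + Gaussian-kernel distance attention.
import torch
import torch.nn as nn


class GaussianDistanceAttention(nn.Module):
    """Self-attention modulated by a learnable Gaussian kernel over the
    distance between time steps (a guess at the 'distance attention')."""

    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.log_sigma = nn.Parameter(torch.zeros(1))  # learnable bandwidth

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        pos = torch.arange(n, device=x.device, dtype=x.dtype)
        dist2 = (pos[:, None] - pos[None, :]) ** 2       # squared position gap
        kernel = torch.exp(-dist2 / (2 * self.log_sigma.exp() ** 2))
        attn = scores * kernel                           # distance-weighted
        attn = attn / attn.sum(dim=-1, keepdim=True)     # renormalize rows
        return attn @ v


series = torch.randn(8, 96, 16)            # (batch, length, channels)
diff = series[:, 1:] - series[:, :-1]      # idea (i): difference sequence
out = GaussianDistanceAttention(d_model=16)(diff)
print(out.shape)                           # torch.Size([8, 95, 16])
```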
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out the fine-grained interaction details of surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as <instrument, verb, target> triplets delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and an assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from competing teams are presented for recognizing surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of the utmost importance for the development of AI in surgery.
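For readers unfamiliar with the metric, the snippet below shows the generic way mAP is computed for multi-label recognition: one average-precision score per triplet class, then the mean. The sizes and random data are placeholders, and the official challenge evaluation has its own protocol; this only illustrates the metric's shape.

```python
# Generic mAP over triplet classes: each <instrument, verb, target>
# combination is one class of a multi-label problem.
import numpy as np
from sklearn.metrics import average_precision_score

num_frames, num_triplet_classes = 200, 100       # illustrative sizes
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(num_frames, num_triplet_classes))
y_score = rng.random(size=(num_frames, num_triplet_classes))

aps = [
    average_precision_score(y_true[:, c], y_score[:, c])
    for c in range(num_triplet_classes)
    if y_true[:, c].any()                        # skip classes with no positives
]
print(f"mAP = {np.mean(aps):.3f}")
```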
To improve instance-level detection/segmentation performance, existing self-supervised and semi-supervised methods extract either very task-unrelated or very task-specific training signals from unlabeled data. We argue that these two approaches, lying at the two extremes of the task-specificity spectrum, are suboptimal for task performance. Using too little task-specific training signal causes underfitting to the ground-truth labels of downstream tasks, while the opposite causes overfitting to those labels. To this end, we propose a novel Class-Agnostic Semi-supervised Pretraining (CASP) framework that achieves a more favorable task-specificity balance in extracting training signals from unlabeled data. Compared to semi-supervised learning, CASP reduces the task specificity of the training signal by ignoring class information in the pseudo-labels and having a separate pretraining stage that uses only task-unrelated unlabeled data. On the other hand, CASP preserves the right amount of task specificity by leveraging box/mask-level pseudo-labels. As a result, our pretrained model can better avoid underfitting/overfitting the ground-truth labels when fine-tuned on downstream tasks. Using 3.6M unlabeled data, we achieve a significant performance gain of 4.7% on object detection. Our pretrained model also demonstrates excellent transferability to other detection and segmentation tasks/frameworks.
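A minimal sketch of the class-agnostic pseudo-labeling idea described above follows: a teacher detector's confident predictions on unlabeled images are kept as box-level pseudo-labels, but their class labels are discarded so the training signal carries localization rather than category information. The detection format and threshold are illustrative assumptions.

```python
# Class-agnostic pseudo-labels: keep confident boxes, drop class identity.
from dataclasses import dataclass


@dataclass
class Detection:
    box: tuple          # (x1, y1, x2, y2)
    score: float
    class_id: int


def class_agnostic_pseudo_labels(detections, score_threshold=0.7):
    """Keep confident boxes; collapse every class to a generic 'object'."""
    return [
        Detection(box=d.box, score=d.score, class_id=0)
        for d in detections
        if d.score >= score_threshold
    ]


teacher_out = [Detection((10, 10, 50, 80), 0.92, 3),
               Detection((5, 5, 20, 20), 0.40, 7)]
print(class_agnostic_pseudo_labels(teacher_out))
# -> one confident box kept, its class collapsed to the generic id 0
```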
Artificial intelligence (AI) provides a promising alternative for streamlining COVID-19 diagnosis. However, concerns surrounding security and trustworthiness impede the collection of large-scale representative medical data, posing a considerable challenge for training well-generalized models in clinical practice. To address this, we launched the Unified CT-COVID AI Diagnostic Initiative (UCADI), in which the AI model can be distributed to and executed at each host institution under a federated learning (FL) framework without data sharing. Here we show that our FL model achieves performance comparable to a panel of professional radiologists, with large yields (test sensitivity/specificity in China: 0.973/0.951; in the UK: 0.730/0.942). We further evaluated the model on held-out data (collected from two other hospitals and left out of FL) and heterogeneous data (acquired with contrast materials), provided visual explanations for the model's decisions, and analyzed the trade-off between model performance and communication cost during the federated training process. Our study is based on 9,573 chest computed tomography (CT) scans from 3,336 patients collected from 23 hospitals located in China and the UK. Collectively, our work advances the prospect of leveraging federated learning for privacy-preserving digital health.
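The abstract does not spell out the aggregation rule, so the following is a generic federated-averaging (FedAvg) sketch illustrating how hospitals can train jointly without sharing data: only model parameters, weighted by local cohort size, leave each site.

```python
# FedAvg sketch: weighted average of client model parameters.
import numpy as np


def federated_average(client_weights, client_sizes):
    """Average parameters across clients; patient data never leaves a site."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]


# Three 'hospitals' with one-layer models and different cohort sizes.
clients = [[np.ones((2, 2)) * k] for k in (1.0, 2.0, 3.0)]
sizes = [100, 300, 600]
global_model = federated_average(clients, sizes)
print(global_model[0])   # weighted mean, dominated by the largest cohort
```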
Generalizable person re-identification (ReID) has attracted growing attention in the recent computer vision community. In this work, we construct a structural causal model (SCM) among identity labels, identity-specific factors (clothing/shoe color, etc.) and domain-specific factors (background, viewpoint, etc.). According to the causal analysis, we propose a novel Domain-Invariant Representation learning for generalizable person Re-IDentification (DIR-ReID) framework. Specifically, we first propose to disentangle the identity-specific and domain-specific feature spaces, based on which we propose an effective algorithmic implementation of backdoor adjustment, which is essentially a causal intervention on the SCM. Extensive experiments have been conducted, showing that DIR-ReID outperforms state-of-the-art methods on large-scale domain generalization ReID benchmarks.
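For context, backdoor adjustment is the standard causal-inference identity the abstract refers to. In generic form (not the paper's specific instantiation), with $Z$ the set of confounders (here, the domain-specific factors):

```latex
% Generic backdoor adjustment: the effect of intervening on X is recovered
% from observational quantities by summing over the confounder Z.
P\bigl(Y \mid \mathrm{do}(X = x)\bigr)
  = \sum_{z} P\bigl(Y \mid X = x,\, Z = z\bigr)\, P(Z = z)
```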
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps used in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align the visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and also set new state-of-the-art results on the Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
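A minimal sketch of the symmetric contrastive objective a dual encoder of this kind typically uses is below; the temperature, batch and embedding sizes are illustrative assumptions, not values from the paper.

```python
# Symmetric contrastive loss over normalized image/text embeddings.
import torch
import torch.nn.functional as F


def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature      # (batch, batch) similarities
    targets = torch.arange(len(img))          # i-th image matches i-th text
    # Symmetric: image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


img_emb = torch.randn(32, 256)   # output of the image encoder
txt_emb = torch.randn(32, 256)   # output of the text encoder
print(contrastive_loss(img_emb, txt_emb).item())
```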
In object detection, bounding box regression (BBR) is a crucial step that determines object localization performance. However, we find that most previous loss functions for BBR have two main drawbacks: (i) both the $\ell_n$-norm and IoU-based loss functions are inefficient at depicting the objective of BBR, which leads to slow convergence and inaccurate regression results; (ii) most loss functions ignore the imbalance problem in BBR, namely that the large number of anchor boxes with small overlaps with the target boxes contribute the most to the optimization of BBR. To mitigate these adverse effects, we perform a thorough study to exploit the potential of BBR losses in this paper. First, an Efficient Intersection over Union (EIoU) loss is proposed, which explicitly measures the discrepancies of three geometric factors in BBR, namely the overlap area, the center point and the side lengths. Then we address the effective example mining (EEM) problem and propose a regression version of focal loss to make the regression process focus on high-quality anchor boxes. Finally, the above two parts are combined to obtain a new loss function, namely the Focal-EIoU loss. Extensive experiments on both synthetic and real datasets are performed, showing notable superiority over other BBR losses in both convergence speed and localization accuracy.
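The sketch below follows the three geometric factors named above (overlap area, center point, side lengths); the precise formulation and the focal re-weighting are given in the paper, so treat this as an approximation.

```python
# EIoU-style loss sketch: IoU term + normalized center and side-length terms.
import torch


def eiou_loss(pred, target, eps=1e-7):
    """pred, target: (..., 4) boxes as (x1, y1, x2, y2)."""
    # Overlap area -> IoU term.
    lt = torch.max(pred[..., :2], target[..., :2])
    rb = torch.min(pred[..., 2:], target[..., 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=-1)
    area_p = (pred[..., 2:] - pred[..., :2]).clamp(min=0).prod(dim=-1)
    area_t = (target[..., 2:] - target[..., :2]).clamp(min=0).prod(dim=-1)
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box, used for normalization.
    enc_lt = torch.min(pred[..., :2], target[..., :2])
    enc_rb = torch.max(pred[..., 2:], target[..., 2:])
    enc_wh = (enc_rb - enc_lt).clamp(min=eps)
    c2 = (enc_wh ** 2).sum(dim=-1)                  # squared diagonal

    # Center-point distance term.
    center_p = (pred[..., :2] + pred[..., 2:]) / 2
    center_t = (target[..., :2] + target[..., 2:]) / 2
    rho2 = ((center_p - center_t) ** 2).sum(dim=-1)

    # Side-length terms, each normalized by the enclosing box side.
    wh_p = pred[..., 2:] - pred[..., :2]
    wh_t = target[..., 2:] - target[..., :2]
    side = ((wh_p - wh_t) ** 2 / enc_wh ** 2).sum(dim=-1)

    # The focal variant further re-weights this loss to emphasize
    # high-quality anchors; see the paper for the exact form.
    return 1 - iou + rho2 / c2 + side


pred = torch.tensor([[0., 0., 10., 10.]])
target = torch.tensor([[2., 2., 12., 12.]])
print(eiou_loss(pred, target))   # tensor([~0.557])
```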
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into the multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
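A purely structural sketch of the token flow described above follows: image tokens and point-cloud tokens are concatenated into one multi-modal sequence, and object queries attend to it to predict 3D boxes, with no explicit view transformation. All dimensions, token counts and the box head are assumptions for illustration, not the released architecture.

```python
# Structural sketch: queries decode 3D boxes from concatenated modal tokens.
import torch
import torch.nn as nn

d, n_img, n_pts, n_queries = 256, 600, 400, 100
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True),
    num_layers=2,
)
box_head = nn.Linear(d, 10)   # e.g. center, size, yaw, velocity parameters

img_tokens = torch.randn(1, n_img, d)   # image backbone output (+ 3D pos. enc.)
pt_tokens = torch.randn(1, n_pts, d)    # point-cloud backbone output (+ 3D pos. enc.)
tokens = torch.cat([img_tokens, pt_tokens], dim=1)   # one multi-modal sequence

queries = torch.randn(1, n_queries, d)
boxes = box_head(decoder(queries, tokens))
print(boxes.shape)   # torch.Size([1, 100, 10])
```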
A recent study has shown a phenomenon called neural collapse: the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial for discriminating the minor classes. To preserve these advantages, we introduce a regularizer on the feature centers to encourage the network to learn features closer to this appealing structure under imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
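The simplex equiangular tight frame mentioned above has a standard construction; the snippet below builds one and sketches a cosine-based regularizer pulling class feature centers toward it. The regularizer form is our assumption for illustration, not necessarily the paper's.

```python
# Simplex ETF construction and a sketch of a feature-center regularizer.
import torch


def simplex_etf(num_classes: int, dim: int) -> torch.Tensor:
    """K unit vectors in R^dim with equal pairwise cosine -1/(K-1)."""
    assert dim >= num_classes
    u, _ = torch.linalg.qr(torch.randn(dim, num_classes))    # orthonormal cols
    k = num_classes
    m = u @ (torch.eye(k) - torch.ones(k, k) / k)            # center the frame
    m = m * (k / (k - 1)) ** 0.5
    return m.t()                                             # (K, dim)


def center_regularizer(feature_centers: torch.Tensor, etf: torch.Tensor):
    """Encourage normalized class centers to match the ETF directions."""
    c = torch.nn.functional.normalize(feature_centers, dim=-1)
    e = torch.nn.functional.normalize(etf, dim=-1)
    return (1 - (c * e).sum(-1)).mean()                      # cosine mismatch


etf = simplex_etf(num_classes=20, dim=64)
cos = etf @ etf.t()
print(cos[0, 1].item())   # off-diagonal cosine ~ -1/19 for a simplex ETF

centers = torch.randn(20, 64)             # e.g. running class feature means
print(center_regularizer(centers, etf))   # loss term added during training
```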