Cross-modal fashion image synthesis has emerged as one of the most promising directions in the generation domain, owing to the vast untapped potential of fusing multiple modalities and the broad range of fashion image applications. To facilitate accurate generation, cross-modal synthesis methods typically rely on Contrastive Language-Image Pre-training (CLIP) to align textual and garment information. In this work, we argue that simply aligning texture and garment information is not sufficient to capture the semantics of the visual information, and therefore propose MaskCLIP. MaskCLIP decomposes garments into semantic parts, ensuring fine-grained and semantically accurate alignment between visual and textual information. Building on MaskCLIP, we propose ARMANI, a unified cross-modal fashion designer with part-level garment-text alignment. In its first stage, ARMANI discretizes an image into uniform tokens; in its second stage, it uses a Transformer to model the distribution of a real image's tokens conditioned on the tokens of the control signals. In contrast to prior approaches that also rely on a two-stage paradigm, ARMANI introduces text tokens into the codebook, enabling the model to exploit fine-grained semantic information to generate more realistic images. Moreover, by introducing a cross-modal Transformer, ARMANI is versatile and can accomplish image synthesis from various control signals, such as pure text, sketch images, and partial images. Extensive experiments on our newly collected cross-modal fashion dataset demonstrate that ARMANI generates photorealistic images across different synthesis tasks and outperforms existing state-of-the-art cross-modal image synthesis methods. Code: github.com/harvey594/armani.
translated by Google Translate
Modern methods mainly regard lane detection as a pixel-wise segmentation problem, which struggles to address efficiency issues and challenging scenarios such as severe occlusions and extreme lighting conditions. Inspired by human perception, the recognition of lanes under severe occlusion and extreme lighting conditions relies mainly on contextual and global information. Motivated by this observation, we propose a novel, simple, yet effective formulation aiming at ultra-fast speed and the challenging-scenario problem. Specifically, we treat the lane detection process as an anchor-driven ordinal classification problem using global features. First, we represent lanes with sparse coordinates on a series of hybrid (row and column) anchors. With the help of the anchor-driven representation, we then reformulate the lane detection task as an ordinal classification problem to obtain the coordinates of lanes. Our method can significantly reduce the computational cost through the anchor-driven representation. Using the large-receptive-field property of the ordinal classification formulation, we can also handle challenging scenarios. Extensive experiments on four lane detection datasets show that our method achieves state-of-the-art performance in terms of both speed and accuracy. A lightweight version can even reach 300 frames per second (FPS). Our code is at https://github.com/cfzd/ultra-fast-lane-detection-v2.
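The anchor-driven decoding described above can be illustrated with a minimal sketch: on each row anchor, a classifier scores a small set of horizontal grid cells, and a continuous lane coordinate is recovered as the probability-weighted average of cell centers. All names and numbers below are illustrative, not the paper's actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def expected_coordinate(logits, cell_centers):
    """Decode a continuous lane coordinate on one row anchor as the
    probability-weighted average of the grid-cell center positions."""
    probs = softmax(logits)
    return sum(p * c for p, c in zip(probs, cell_centers))

# One row anchor with 4 horizontal grid cells centered at x = 10, 30, 50, 70.
cells = [10.0, 30.0, 50.0, 70.0]
# Logits peaked at the second cell: the decoded x lands near 30.
x = expected_coordinate([0.0, 4.0, 1.0, -2.0], cells)
```

Because each anchor needs only one small classification instead of a dense per-pixel mask, this style of representation is what makes the very high frame rates plausible.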
Aspect Sentiment Triplet Extraction (ASTE) is a new fine-grained sentiment analysis task that aims to extract triplets of aspect terms, sentiments, and opinion terms from review sentences. Recently, span-level models have achieved gratifying results on the ASTE task by taking advantage of predictions over all possible spans. Since enumerating all possible spans significantly increases the number of potential aspect and opinion candidates, it is crucial and challenging to efficiently extract the triplet elements among them. In this paper, we present a span-level bidirectional network which utilizes all possible spans as input and extracts triplets from spans bidirectionally. Specifically, we devise both an aspect decoder and an opinion decoder to decode the span representations and extract triplets in the aspect-to-opinion and opinion-to-aspect directions. With these two decoders complementing each other, the whole network can extract triplets from spans more comprehensively. Moreover, considering that mutual exclusion cannot be guaranteed between the spans, we design a similar-span separation loss to facilitate the downstream task of distinguishing the correct span by expanding the KL divergence of similar spans during the training process; in the inference process, we adopt an inference strategy to remove conflicting triplets from the results based on their confidence scores. Experimental results show that our framework not only significantly outperforms state-of-the-art methods, but also achieves better performance in predicting triplets with multi-token entities and extracting triplets in sentences containing multiple triplets.
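The confidence-based conflict removal at inference time can be sketched as a greedy filter. The notion of "conflict" used here (overlapping but non-identical aspect or opinion spans) is a hypothetical stand-in; the paper's exact rule may differ.

```python
def remove_conflicts(triplets):
    """Greedy inference-time filtering: keep higher-confidence triplets
    first, and drop any later triplet whose aspect or opinion span overlaps
    an already-kept span with a different boundary.

    Each triplet is (aspect_span, opinion_span, sentiment, confidence),
    with spans given as (start, end) token-index pairs.
    """
    def overlaps(a, b):
        # Overlapping but not identical spans are treated as conflicting.
        return a[0] <= b[1] and b[0] <= a[1] and a != b

    kept = []
    for t in sorted(triplets, key=lambda t: t[3], reverse=True):
        conflict = any(
            overlaps(t[0], k[0]) or overlaps(t[1], k[1]) for k in kept
        )
        if not conflict:
            kept.append(t)
    return kept

preds = [
    ((0, 1), (3, 3), "POS", 0.95),  # e.g. "battery life" ... "great"
    ((0, 2), (3, 3), "POS", 0.60),  # overlapping aspect span: conflicts
    ((5, 5), (7, 8), "NEG", 0.80),  # independent triplet: kept
]
result = remove_conflicts(preds)
```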
This is a brief technical report on our proposed method for the Multiple-Object Tracking (MOT) Challenge in Complex Environments. In this paper, we treat the MOT task as a two-stage task consisting of human detection and trajectory matching. Specifically, we design an improved human detector and associate most of the detections to guarantee the integrity of the motion trajectory. We also propose a location-wise matching matrix to obtain more accurate trajectory matching. Without any model merging, our method achieves 66.672 HOTA and 93.971 MOTA on the DanceTrack challenge dataset.
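As a rough illustration of location-based matching, the sketch below builds a pairwise IoU matrix between track boxes and detection boxes and resolves it greedily. This is a generic baseline for the matching step, not the report's actual matrix or solver.

```python
def iou(b1, b2):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter) if inter > 0 else 0.0

def greedy_match(tracks, detections, min_iou=0.3):
    """Score every (track, detection) pair by location overlap, then
    greedily pair each track with its best still-unmatched detection."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score >= min_iou and ti not in used_t and di not in used_d:
            used_t.add(ti)
            used_d.add(di)
            matches.append((ti, di))
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 21, 31, 31), (1, 1, 11, 11)]
m = greedy_match(tracks, dets)
```

In practice a Hungarian solver is often used instead of the greedy loop; the greedy version keeps the sketch self-contained.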
Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static, but rather evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge through growth. Motivated by this, we delve into an expanding field of KG embedding in this paper, i.e., lifelong KG embedding. We consider knowledge transfer and retention of the learning on growing snapshots of a KG without having to learn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, with an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms the state-of-the-art inductive and lifelong embedding baselines.
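The embedding-regularization idea against catastrophic forgetting can be sketched as a penalty that keeps previously learned entity embeddings close to their values from the last snapshot. This is a minimal L2 stand-in with hypothetical names; the paper's actual regularizer may weight entities differently.

```python
def regularization_penalty(new_emb, old_emb, weight=0.1):
    """L2 penalty discouraging embeddings of previously seen entities
    from drifting far from their values in the previous KG snapshot.
    Entities that are new in this snapshot are unconstrained."""
    penalty = 0.0
    for entity, old_vec in old_emb.items():
        if entity in new_emb:
            penalty += sum((n - o) ** 2
                           for n, o in zip(new_emb[entity], old_vec))
    return weight * penalty

old = {"Paris": [1.0, 0.0], "France": [0.0, 1.0]}
new = {"Paris": [1.0, 0.2], "France": [0.0, 1.0], "Lyon": [0.5, 0.5]}
p = regularization_penalty(new, old)  # only "Paris" moved; "Lyon" is new
```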
Entity alignment is to find identical entities in different knowledge graphs (KGs) that refer to the same real-world object. Embedding-based entity alignment techniques have been drawing a lot of attention recently because they can help solve the issue of symbolic heterogeneity in different KGs. However, in this paper, we show that the progress made in the past was due to biased and unchallenging evaluation. We highlight two major flaws in existing datasets that favor embedding-based entity alignment techniques, i.e., the isomorphic graph structures in relation triples and the weak heterogeneity in attribute triples. Towards a critical evaluation of embedding-based entity alignment methods, we construct a new dataset with heterogeneous relations and attributes based on event-centric KGs. We conduct extensive experiments to evaluate existing popular methods, and find that they fail to achieve promising performance. As a new approach to this difficult problem, we propose a time-aware literal encoder for entity alignment. The dataset and source code are publicly available to foster future research. Our work calls for more effective and practical embedding-based solutions to entity alignment.
Most existing weakly supervised semantic segmentation methods take image-level class labels as supervision and rely heavily on the initial class activation maps (CAMs) generated by a standard classification network. In this paper, a novel "Progressive Patch Learning" approach is proposed to improve the local-detail extraction of the classifier, producing CAMs that better cover the whole object rather than only the most discriminative regions, as in CAMs obtained from conventional classification models. "Patch Learning" destructs feature maps into patches and independently processes each local patch in parallel before final aggregation. Such a mechanism forces the network to find weak information in scattered discriminative local parts, improving its sensitivity to local details. "Progressive Patch Learning" further extends feature destruction and patch learning to multiple levels of granularity in a progressive manner. Cooperating with a multi-stage optimization strategy, this "Progressive Patch Learning" mechanism implicitly provides the model with feature-extraction ability across different locality granularities. As an alternative to the implicit multi-granularity progressive fusion approach, we also propose an explicit method to simultaneously fuse features of different granularities in a single model, further enhancing the CAM quality for complete object coverage. Our proposed method achieves outstanding performance on the PASCAL VOC 2012 dataset, e.g., 69.6% mIoU on the test set, surpassing most existing weakly supervised semantic segmentation methods. Code will be made publicly available here: https://github.com/tyroneli/ppl_wsss.
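The "destruct into patches, process independently, aggregate" mechanism can be sketched at one granularity level. The per-patch `process` function below is a stand-in for the shared classification head; real feature maps are tensors, but plain nested lists keep the illustration self-contained.

```python
def split_into_patches(feature_map, ph, pw):
    """Destruct a 2D feature map (list of rows) into non-overlapping
    ph x pw patches, ordered left-to-right, top-to-bottom."""
    h, w = len(feature_map), len(feature_map[0])
    return [
        [row[c:c + pw] for row in feature_map[r:r + ph]]
        for r in range(0, h, ph)
        for c in range(0, w, pw)
    ]

def patch_learning(feature_map, ph, pw, process):
    """Apply `process` to each patch independently (mimicking the parallel
    per-patch branches), yielding per-patch responses to aggregate."""
    return [process(p) for p in split_into_patches(feature_map, ph, pw)]

fmap = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 8, 7, 6],
    [5, 4, 3, 2],
]
# Per-patch max response for the four 2x2 patches.
responses = patch_learning(fmap, 2, 2, lambda p: max(max(r) for r in p))
```

Because each patch is scored on its own, even a weakly discriminative region must contribute evidence, which is the intuition behind the improved CAM coverage; the progressive variant repeats this with several patch sizes.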
Generating precise class-aware pseudo ground truth, a.k.a. class activation maps (CAMs), is essential for weakly supervised semantic segmentation. The original CAM method usually produces incomplete and inaccurate localization maps. To tackle this issue, this paper proposes an Expansion and Shrinkage scheme based on offset learning in deformable convolution, to sequentially improve the recall and precision of the located objects in two respective stages. In the Expansion stage, an offset-learning branch of a deformable convolution layer, referred to as the "expansion sampler", seeks to sample increasingly less discriminative object regions, driven by an inverse supervision signal that maximizes the image-level classification loss. The more complete located region is then gradually narrowed down to the final object region in the Shrinkage stage. In the Shrinkage stage, the offset-learning branch of another deformable convolution layer, referred to as the "shrinkage sampler", is introduced to exclude the false-positive background regions attended to in the Expansion stage, so as to improve the precision of the localization maps. We conduct various experiments on PASCAL VOC 2012 and MS COCO 2014 to demonstrate the superiority of our method over other state-of-the-art methods for weakly supervised semantic segmentation. Code will be made publicly available here: https://github.com/tyroneli/esol_wsss.
Fusing LiDAR and camera information is essential for achieving accurate and reliable 3D object detection in autonomous driving systems. However, this is challenging due to the difficulty of combining multi-granularity geometric and semantic features from two drastically different modalities. Recent approaches aim at exploring the semantic density of camera features by lifting points in 2D camera images (referred to as seeds) into 3D space, and they can be roughly divided into 1) early fusion of raw points, which aims at augmenting the 3D point cloud at the early input stage, and 2) late fusion in BEV (bird's-eye view), which merges LiDAR and camera BEV features before the detection head. Although both have their merits in enhancing the representation power of the joint features, such single-level fusion strategies are suboptimal for the aforementioned challenge. Their major drawback is the inability to sufficiently interact the multi-granularity semantic features from the two distinct modalities. To this end, we propose a novel framework that focuses on multi-scale progressive interaction of multi-granularity LiDAR and camera features. Our proposed method, abbreviated as MDMSFusion, achieves state-of-the-art results in 3D object detection, with 69.1 mAP and 71.8 NDS on the nuScenes validation set and 70.8 mAP and 73.2 NDS on the nuScenes test set, ranking first and second in NDS among single-model, non-ensemble approaches at the time of submission.
Entity alignment is a crucial task in knowledge graph fusion. However, most entity alignment approaches suffer from scalability problems. Recent methods address this issue by dividing large KGs into small blocks for embedding and alignment learning. However, such a partitioning and learning process results in excessive loss of structure and alignment. Therefore, in this work, we propose a scalable GNN-based entity alignment approach that reduces the structure and alignment loss from three perspectives. First, we propose a centrality-based subgraph generation algorithm to recall some landmark entities serving as bridges between different subgraphs. Second, we introduce self-supervised entity reconstruction to recover entity representations from incomplete neighborhood subgraphs, and design cross-subgraph negative sampling to incorporate entities from other subgraphs into alignment learning. Third, during the inference process, we merge the embeddings of subgraphs to make a single space for alignment search. Experimental results on benchmark open datasets and the proposed large DBpedia1M dataset verify the effectiveness of our approach.
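The centrality-based landmark selection can be sketched with a simple degree-based proxy: the highest-degree entities are picked as landmarks to be replicated into every subgraph, so the partitions stay bridged through them. The paper's actual centrality measure and generation algorithm may be more elaborate.

```python
def degree_centrality(triples):
    """Count how many relation triples each entity participates in."""
    deg = {}
    for h, _, t in triples:
        deg[h] = deg.get(h, 0) + 1
        deg[t] = deg.get(t, 0) + 1
    return deg

def select_landmarks(triples, k):
    """Pick the k highest-degree entities as landmark bridges between
    subgraphs (ties broken alphabetically for determinism)."""
    deg = degree_centrality(triples)
    return sorted(deg, key=lambda e: (-deg[e], e))[:k]

# A toy KG where "hub" connects three otherwise separate entities.
kg = [
    ("a", "r1", "hub"), ("b", "r1", "hub"),
    ("c", "r2", "hub"), ("c", "r3", "d"),
]
landmarks = select_landmarks(kg, 1)
```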