Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs. However, CCR suffers from two main transitive problems: threshold effect and scene drift. In other words, the causal pairs to be spliced may have conflicting threshold boundaries or scenarios. To address these issues, we propose a novel Reliable Causal chain reasoning framework (ReCo), which introduces exogenous variables to represent the threshold and scene factors of each causal pair within the causal chain, and estimates the threshold and scene contradictions across exogenous variables via structural causal recurrent neural networks (SRNN). Experiments show that ReCo outperforms a series of strong baselines on both Chinese and English CCR datasets. Moreover, by injecting reliable causal chain knowledge distilled by ReCo, BERT can achieve better performance on four downstream causal-related tasks than BERT models enhanced by other kinds of knowledge.
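To make the mechanism concrete, here is a minimal PyTorch sketch of scoring a causal chain with a per-pair exogenous vector and a recurrent unit, in the spirit of the SRNN described above; the module names, dimensions, and the contradiction penalty are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: per-pair exogenous variables + a recurrent chain scorer.
import torch
import torch.nn as nn

class ChainScorer(nn.Module):
    def __init__(self, pair_dim=128, exo_dim=32, hidden=128):
        super().__init__()
        # infer an exogenous (threshold/scene) vector for each causal pair
        self.exo_head = nn.Linear(pair_dim, exo_dim)
        # recurrent unit that carries chain state across spliced pairs
        self.rnn = nn.GRU(pair_dim + exo_dim, hidden, batch_first=True)
        # estimate a contradiction score between consecutive exogenous vectors
        self.contradict = nn.Bilinear(exo_dim, exo_dim, 1)
        self.out = nn.Linear(hidden, 1)

    def forward(self, pair_reprs):          # (batch, chain_len, pair_dim)
        exo = torch.tanh(self.exo_head(pair_reprs))        # (B, L, exo_dim)
        h, _ = self.rnn(torch.cat([pair_reprs, exo], -1))  # (B, L, hidden)
        # contradiction between each adjacent (pair_i, pair_{i+1})
        conflict = self.contradict(exo[:, :-1], exo[:, 1:]).squeeze(-1)
        reliability = torch.sigmoid(self.out(h[:, -1])).squeeze(-1)
        # penalize chains whose neighboring pairs conflict
        return reliability - torch.sigmoid(conflict).mean(dim=1)

scores = ChainScorer()(torch.randn(2, 4, 128))  # two chains of four pairs
print(scores.shape)  # torch.Size([2])
```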
Despite varying degrees of progress in semi-supervised semantic segmentation (SSS), most of its recent successes involve cumbersome models, and lightweight solutions remain underexplored. We find that existing knowledge distillation techniques pay more attention to pixel-level concepts in labeled data and fail to take more informative cues in unlabeled data into account. Consequently, we offer the first attempt to provide lightweight SSS models via a novel multi-granularity distillation (MGD) scheme, where multi-granularities are captured from three aspects: i) complementary teacher structure; ii) labeled-unlabeled data cooperative distillation; iii) hierarchical and multi-level loss setting. Specifically, MGD is formulated as a labeled-unlabeled data cooperative distillation scheme, which helps to take full advantage of diverse data characteristics that are essential in the semi-supervised setting. An image-level semantic-sensitive loss, a region-level content-aware loss, and a pixel-level consistency loss are set up to enrich hierarchical distillation abstraction via structurally complementary teachers. Experimental results on PASCAL VOC 2012 and Cityscapes show that MGD outperforms competitive approaches under different partition protocols. For example, the performance of ResNet-18 and MobileNet-v2 backbones is boosted by 11.5% and 4.6% respectively under the 1/16 partition protocol on Cityscapes. Although the FLOPs of the model backbones are compressed by 3.4-5.3x (ResNet-18) and 38.7-59.6x (MobileNet-v2), the models still achieve satisfactory segmentation results.
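A hedged sketch of how the three granularities of distillation loss described above (image-, region-, and pixel-level) could be combined; the pooling choices and loss forms are assumptions for illustration, not the MGD code.

```python
import torch
import torch.nn.functional as F

def mgd_loss(student_logits, teacher_logits, T=2.0, region=4):
    # pixel-level consistency: per-pixel KL between softened distributions
    s = F.log_softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)
    pixel = F.kl_div(s, t, reduction="batchmean") * T * T

    # region-level content-aware: match average predictions over patches
    s_reg = F.avg_pool2d(F.softmax(student_logits, 1), region)
    t_reg = F.avg_pool2d(F.softmax(teacher_logits, 1), region)
    region_l = F.mse_loss(s_reg, t_reg)

    # image-level semantic-sensitive: match global class distributions
    image_l = F.mse_loss(s_reg.mean(dim=(2, 3)), t_reg.mean(dim=(2, 3)))

    return pixel + region_l + image_l

loss = mgd_loss(torch.randn(2, 21, 64, 64), torch.randn(2, 21, 64, 64))
print(loss.item())
```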
Given data with label noise (i.e., incorrect data), deep neural networks gradually memorize the label noise, which degrades model performance. To mitigate this issue, curriculum learning has been proposed to improve model performance and generalization by ordering training samples in a meaningful (e.g., easy-to-hard) sequence. Previous work treats incorrect samples as generic hard samples without discriminating between hard samples (i.e., hard samples in correct data) and incorrect samples. Indeed, a model should learn from hard samples to promote generalization rather than overfit to incorrect ones. In this paper, we address this problem by appending a novel loss function, DiscrimLoss, on top of the existing task loss. Its main effect is to automatically and stably estimate the importance of easy samples and difficult samples (including hard and incorrect samples) at the early stage of training to improve model performance. Then, in the following stages, DiscrimLoss is dedicated to discriminating between hard and incorrect samples to improve model generalization. Such a training strategy can be formulated dynamically in a self-supervised manner, effectively mimicking the main principle of curriculum learning. Experiments on image classification, image regression, text sequence regression, and event relation reasoning demonstrate the versatility and effectiveness of our method, particularly in the presence of diversified noise levels.
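A minimal sketch of the idea behind appending a sample-discriminating term on top of the task loss. The exact form of DiscrimLoss is not reproduced here; this illustrative variant down-weights samples whose per-sample loss exceeds a running threshold (treated as likely incorrect) after a warm-up phase, while keeping moderately hard samples.

```python
import torch

class SampleDiscriminator:
    """Down-weight likely-incorrect samples after a warm-up phase."""
    def __init__(self, warmup=1000, momentum=0.9):
        self.warmup, self.momentum = warmup, momentum
        self.thresh, self.step = None, 0

    def __call__(self, per_sample_loss):
        self.step += 1
        if self.step <= self.warmup:        # early stage: fit easy patterns
            return per_sample_loss.mean()
        # running threshold separating hard (below) from incorrect (above)
        stat = per_sample_loss.detach().mean() + per_sample_loss.detach().std()
        self.thresh = stat if self.thresh is None else \
            self.momentum * self.thresh + (1 - self.momentum) * stat
        # weight ~1 for hard-but-correct samples, ~0 for likely-incorrect ones
        w = torch.sigmoid(self.thresh - per_sample_loss.detach())
        return (w * per_sample_loss).sum() / w.sum().clamp_min(1e-8)

crit = SampleDiscriminator(warmup=0)
print(crit(torch.rand(32)))  # weighted loss over 32 per-sample task losses
```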
Recently, synthetic data-based instance segmentation has become an extremely favorable optimization paradigm, since it leverages simulation rendering and physics to generate high-quality image-annotation pairs. In this paper, we propose a Parallel Pre-trained Transformers (PPT) framework for the synthetic data-based instance segmentation task. Specifically, we leverage off-the-shelf pre-trained vision transformers to alleviate the gap between natural and synthetic data, which helps to provide good generalization in the downstream synthetic-data scenario with few samples. Swin-B-based CBNet V2, Swin-L-based CBNet V2, and Swin-L-based Uniformer are employed for parallel feature learning, and the results of these three models are fused by a pixel-level Non-Maximum Suppression (NMS) algorithm to obtain more robust results. Experimental results show that PPT ranks first in the CVPR 2022 AVA Accessibility Vision and Autonomy Challenge, with a mAP of 65.155%.
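A hedged sketch of pixel-level NMS fusion across model outputs: masks pooled from all models are sorted by confidence, and a candidate is dropped if its pixels heavily overlap with an already-kept mask. The thresholds are illustrative; this is not the challenge-winning implementation.

```python
import numpy as np

def pixel_nms(masks, scores, overlap_thresh=0.5):
    """masks: (N, H, W) boolean, scores: (N,); returns kept indices."""
    order = np.argsort(scores)[::-1]          # highest confidence first
    kept = []
    occupied = np.zeros(masks.shape[1:], dtype=bool)
    for i in order:
        m = masks[i]
        inter = np.logical_and(m, occupied).sum()
        if m.sum() == 0 or inter / m.sum() > overlap_thresh:
            continue                          # mostly covered by kept masks
        kept.append(i)
        occupied |= m                         # claim these pixels
    return kept

# fuse detections pooled from the three parallel models
masks = np.random.rand(6, 64, 64) > 0.7
scores = np.random.rand(6)
print(pixel_nms(masks, scores))
```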
Developing scalable solutions for training Graph Neural Networks (GNNs) for link prediction tasks is challenging due to high data dependencies, which entail high computational cost and a huge memory footprint. We propose a new method for scaling the training of knowledge graph embedding models to meet these challenges. To this end, we propose the following algorithmic strategies: self-sufficient partitions, constraint-based negative sampling, and edge mini-batch training. Both the partitioning strategy and the constraint-based negative sampling avoid cross-partition data transfer during training. In our experimental evaluation, we show that our scaling solution for GNN-based knowledge graph embedding models achieves a 16x speedup on benchmark datasets while maintaining model performance comparable to non-distributed methods on standard metrics.
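An illustrative sketch of constraint-based negative sampling that never leaves a partition, one of the three strategies listed above; the partitioning itself is simplified to a plain entity-to-partition map, and all names are placeholders.

```python
import random

def sample_negatives(triple, partition_of, entities_in, k=5):
    """Corrupt the tail with entities from the SAME partition only,
    so no cross-partition data transfer is needed during training."""
    head, rel, tail = triple
    part = partition_of[head]
    candidates = [e for e in entities_in[part] if e not in (head, tail)]
    negs = random.sample(candidates, min(k, len(candidates)))
    return [(head, rel, e) for e in negs]

partition_of = {"a": 0, "b": 0, "c": 0, "d": 1}
entities_in = {0: ["a", "b", "c"], 1: ["d"]}
print(sample_negatives(("a", "likes", "b"), partition_of, entities_in))
```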
Current research on Spoken Language Understanding (SLU) is severely limited to a simple setting: plain-text-based SLU, which takes user utterances as input and generates their corresponding semantic frames (e.g., intents and slots). Unfortunately, such a simple setting may fail in complex real-world scenarios when an utterance is semantically ambiguous, which text-based SLU models cannot handle. In this paper, we first introduce a new and important task, Profile-based Spoken Language Understanding (ProSLU), which requires a model to rely not only on the plain text but also on supporting profile information to predict the correct intents and slots. To this end, we further introduce a large-scale, human-annotated Chinese dataset with over 5K utterances and their corresponding supporting profile information (Knowledge Graph (KG), User Profile (UP), and Context Awareness (CA)). In addition, we evaluate several state-of-the-art baseline models and explore a multi-level knowledge adapter to effectively incorporate profile information. Experimental results reveal that all existing text-based SLU models fail when utterances are semantically ambiguous, while our proposed framework can effectively fuse the supporting information for sentence-level intent detection and token-level slot filling. Finally, we summarize the key challenges and provide new perspectives on future directions, hoping to facilitate further research.
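A hedged sketch of fusing supporting profile information (KG / UP / CA embeddings) into an utterance encoder via attention, in the spirit of the multi-level knowledge adapter explored above; the dimensions and single-layer fusion are simplifications for illustration.

```python
import torch
import torch.nn as nn

class KnowledgeAdapter(nn.Module):
    def __init__(self, hidden=256, n_intents=10, n_slots=20):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, 4, batch_first=True)
        self.intent_head = nn.Linear(hidden, n_intents)  # sentence level
        self.slot_head = nn.Linear(hidden, n_slots)      # token level

    def forward(self, utt, profile):
        # each utterance token attends over profile vectors (KG, UP, CA)
        fused, _ = self.attn(utt, profile, profile)
        h = utt + fused                                  # residual fusion
        intent = self.intent_head(h.mean(dim=1))         # intent detection
        slots = self.slot_head(h)                        # slot filling
        return intent, slots

utt = torch.randn(2, 12, 256)      # 12 utterance tokens
profile = torch.randn(2, 8, 256)   # 8 profile vectors
intent, slots = KnowledgeAdapter()(utt, profile)
print(intent.shape, slots.shape)   # (2, 10) (2, 12, 20)
```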
Image-level weakly supervised semantic segmentation (WSSS) is a fundamental yet challenging computer vision task that facilitates scene understanding and autonomous driving. Most existing methods adopt classification-based Class Activation Maps (CAMs) as initial pseudo labels, which tend to focus on discriminative image regions and lack features customized for the segmentation task. To alleviate this issue, we propose a novel Activation Modulation and Recalibration (AMR) scheme, which leverages a spotlight branch and a compensation branch to obtain weighted CAMs that provide recalibrated and task-specific activation. Specifically, an attention module is employed to rearrange the distribution of feature importance from the channel-spatial sequential perspective, which helps to explicitly model channel-wise interdependencies and spatial encodings to adaptively modulate segmentation-oriented activation responses. Furthermore, we introduce cross pseudo supervision for the dual branches, which can be regarded as a semantic-similarity regularization that mutually refines the two branches. Extensive experiments show that AMR establishes new state-of-the-art performance on the PASCAL VOC 2012 dataset, surpassing not only current methods trained with image-level supervision but also some methods relying on stronger supervision such as saliency labels. Experiments also reveal that our scheme is plug-and-play and can be incorporated into other approaches to boost their performance.
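A minimal sketch of channel-then-spatial attention used to modulate activation responses, matching the "channel-spatial sequential perspective" described above; this mirrors a CBAM-style module and is an illustration, not the AMR release code.

```python
import torch
import torch.nn as nn

class ChannelSpatialModulation(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                       # channel-wise reweighting
        pooled = torch.cat([x.mean(1, keepdim=True),  # spatial descriptor
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)               # spatially modulated activation

feat = torch.randn(2, 64, 32, 32)
print(ChannelSpatialModulation(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```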
We propose a multi-stage coarse-to-fine CNN framework, called SkullEngine, for high-resolution segmentation and large-scale landmark detection through a collaborative, integrated, and scalable JSD model and three segmentation and landmark detection refinement models. We evaluated our framework on a clinical dataset consisting of 170 CBCT/CT images for the tasks of segmenting two bones (midface and mandible) and detecting 175 clinically common landmarks on bones, teeth, and soft tissue.
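A schematic, hedged sketch of the coarse-to-fine pattern described above: a coarse model proposes landmark locations on a downsampled volume, then a refinement model corrects each landmark inside a full-resolution crop. The models, scale factor, and crop size are placeholders, not SkullEngine's architecture.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine(volume, coarse_net, refine_net, scale=4, crop=16):
    """volume: (1, 1, D, H, W). Returns refined landmark coordinates."""
    low = F.avg_pool3d(volume, scale)               # stage 1: low-res input
    coarse_xyz = coarse_net(low) * scale            # back to full-res coords
    refined = []
    for x, y, z in coarse_xyz.round().long().tolist():
        # stage 2: refine each landmark inside a local high-res patch
        patch = volume[..., x:x + crop, y:y + crop, z:z + crop]
        refined.append(torch.tensor([x, y, z], dtype=torch.float)
                       + refine_net(patch))         # local offset correction
    return torch.stack(refined)

# toy stand-ins for the coarse and refinement models
coarse = lambda v: torch.tensor([[4.0, 4.0, 4.0]])   # one proposed landmark
refine = lambda p: torch.zeros(3)                    # zero offset
vol = torch.randn(1, 1, 64, 64, 64)
print(coarse_to_fine(vol, coarse, refine))
```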
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only the image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of the object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma between classification and localization performance.
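An illustrative sketch of multi-teacher distillation that balances a classification teacher against a localization teacher, as the abstract above describes; the balancing weight alpha and the KL/MSE loss choices are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_teacher_loss(student_logits, student_cam,
                       cls_teacher_logits, loc_teacher_cam, alpha=0.5, T=4.0):
    # absorb classification knowledge via softened logits
    cls_kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                      F.softmax(cls_teacher_logits / T, dim=1),
                      reduction="batchmean") * T * T
    # absorb localization knowledge via activation-map matching
    loc_kd = F.mse_loss(student_cam, loc_teacher_cam)
    return alpha * cls_kd + (1 - alpha) * loc_kd

loss = multi_teacher_loss(torch.randn(2, 200), torch.rand(2, 1, 14, 14),
                          torch.randn(2, 200), torch.rand(2, 1, 14, 14))
print(loss.item())
```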
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
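A hedged sketch of the CLIP-driven idea: text embeddings of organ/tumor names condition a segmentation head, so a new class is added by adding a prompt rather than an output channel. The tiny projection head below is a stand-in; a real system would obtain the text embeddings from CLIP's text encoder (e.g., via the open_clip package), which is assumed rather than shown here.

```python
import torch
import torch.nn as nn

class CLIPDrivenHead(nn.Module):
    def __init__(self, feat_dim=64, text_dim=512):
        super().__init__()
        self.proj = nn.Linear(text_dim, feat_dim)  # text -> per-class kernel

    def forward(self, voxel_feats, text_embeds):
        # voxel_feats: (B, C, D, H, W); text_embeds: (K, text_dim)
        kernels = self.proj(text_embeds)                       # (K, C)
        logits = torch.einsum("bcdhw,kc->bkdhw", voxel_feats, kernels)
        return logits                                          # one map per prompt

head = CLIPDrivenHead()
feats = torch.randn(1, 64, 8, 8, 8)
prompts = torch.randn(25 + 6, 512)     # e.g., 25 organs + 6 tumor types
print(head(feats, prompts).shape)      # torch.Size([1, 31, 8, 8, 8])
```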