Recent studies on semi-supervised semantic segmentation (SSS) have seen fast progress. Despite their promising performance, current state-of-the-art methods tend toward increasingly complex designs at the cost of introducing more network components and additional training procedures. Differently, in this work, we follow a standard teacher-student framework and propose AugSeg, a simple and clean approach that focuses mainly on data perturbations to boost the SSS performance. We argue that various data augmentations should be adjusted to better adapt to the semi-supervised scenarios, instead of directly applying these techniques from supervised learning. Specifically, we adopt a simplified intensity-based augmentation that selects a random number of data transformations with distortion strengths sampled uniformly from a continuous space. Based on the estimated confidence of the model on different unlabeled samples, we also randomly inject labeled information to augment the unlabeled samples in an adaptive manner. Without bells and whistles, our simple AugSeg can readily achieve new state-of-the-art performance on SSS benchmarks under different partition protocols.
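The random intensity-based augmentation described above can be sketched as follows. This is a minimal illustration, not AugSeg's implementation: the specific transforms, strength ranges, and clipping are assumptions made for the sketch.

```python
import random
import numpy as np

# Illustrative intensity-based transforms on an image in [0, 1];
# AugSeg's actual transform pool and strength ranges may differ.
def adjust_brightness(img, s):
    return np.clip(img * s, 0.0, 1.0)

def adjust_contrast(img, s):
    mean = img.mean()
    return np.clip((img - mean) * s + mean, 0.0, 1.0)

def add_gaussian_noise(img, s):
    return np.clip(img + np.random.normal(0.0, 0.1 * s, img.shape), 0.0, 1.0)

TRANSFORMS = [adjust_brightness, adjust_contrast, add_gaussian_noise]

def random_intensity_augment(img, max_ops=3, strength_range=(0.5, 1.5)):
    """Apply a random number of transforms, each with a distortion
    strength sampled uniformly from a continuous range."""
    k = random.randint(1, max_ops)
    for op in random.sample(TRANSFORMS, k):
        s = random.uniform(*strength_range)
        img = op(img, s)
    return img

img = np.random.rand(4, 4)
aug = random_intensity_augment(img)
```

Because both the number of operations and their strengths are sampled per call, repeated calls yield diverse perturbations of the same unlabeled image.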
Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving partial differential equations (PDEs) in a variety of domains. While previous research on PINNs has mainly focused on constructing and balancing loss functions during training to avoid poor minima, the effect of sampling collocation points on PINN performance has been largely overlooked. In this work, we find that the performance of PINNs can vary significantly with different sampling strategies, and that using a fixed set of collocation points can be quite detrimental to the convergence of PINNs to the correct solution. In particular, (1) we hypothesize that the training of PINNs relies on successful "propagation" of the solution from initial and/or boundary condition points to interior points, and that a PINN with a poor sampling strategy can get stuck at trivial solutions if there are propagation failures. (2) We demonstrate that propagation failures are characterized by highly imbalanced PDE residual fields, where very high residuals are observed in very narrow regions. (3) To mitigate propagation failures, we propose a novel evolutionary sampling (Evo) method that can incrementally accumulate collocation points in regions of high PDE residuals. We further provide an extension of Evo to respect the principle of causality while solving time-dependent PDEs. We empirically demonstrate the efficacy and efficiency of our proposed methods on a variety of PDE problems.
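The core loop of evolutionary sampling, retaining collocation points in high-residual regions and re-sampling the rest, can be sketched with a toy residual function. The residual here is a stand-in (a real PINN would evaluate its network's PDE residual), and the retention fraction is an assumption of this sketch, not the paper's exact schedule.

```python
import numpy as np

def pde_residual(x):
    """Stand-in for |PDE residual| at collocation points x; this toy
    residual peaks sharply near x = 0.8, mimicking a narrow
    high-residual region."""
    return np.exp(-200.0 * (x - 0.8) ** 2)

def evolve_collocation(points, keep_frac=0.5, domain=(0.0, 1.0), rng=None):
    """One Evo-style step: retain points with the highest residuals and
    re-sample the remainder uniformly over the domain."""
    if rng is None:
        rng = np.random.default_rng(0)
    res = pde_residual(points)
    n_keep = int(len(points) * keep_frac)
    keep = points[np.argsort(res)[-n_keep:]]   # highest residuals survive
    fresh = rng.uniform(*domain, size=len(points) - n_keep)
    return np.concatenate([keep, fresh])

pts = np.random.default_rng(1).uniform(0.0, 1.0, 1000)
for _ in range(5):
    pts = evolve_collocation(pts)
# After a few steps, points accumulate around the high-residual region.
frac_near = np.mean(np.abs(pts - 0.8) < 0.1)
```

The uniform re-sampling of the non-retained points keeps exploring the domain, so newly emerging high-residual regions can still be discovered in later steps.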
A compelling application of artificial intelligence is to generate a video of a target person performing arbitrary desired motion (from a source person). While state-of-the-art methods are able to synthesize a video demonstrating similar broad-stroke motion details, they generally lack texture details. A pertinent manifestation appears as distorted faces, feet, and hands, and such flaws are perceived very sensitively by human observers. Furthermore, current methods typically employ GANs with an L2 loss to assess the authenticity of the generated videos, inherently requiring a large number of training samples to learn the texture details needed for adequate video generation. In this work, we tackle these challenges from three aspects: 1) We disentangle each video frame into foreground (the person) and background, focusing on generating the foreground to reduce the underlying dimension of the network output. 2) We propose a theoretically motivated Gromov-Wasserstein loss that facilitates learning the mapping from a pose to a foreground image. 3) To enhance texture details, we encode facial features with geometric guidance and employ local GANs to refine the face, feet, and hands. Extensive experiments show that our method is able to generate realistic videos of the target person, faithfully reproducing complex motions from a source person. Our code and datasets are released at https://github.com/sifann/fakemotion
Colonoscopy, currently the most efficient and recognized colon polyp detection technology, is necessary for early screening and prevention of colorectal cancer. However, due to the varying sizes and complex morphological features of colonic polyps, as well as the indistinct boundary between polyps and mucosa, accurate segmentation of polyps remains challenging. Deep learning has become popular for accurate polyp segmentation tasks with excellent results. However, due to the structure of polyp images and the varying shapes of polyps, it is easy for existing deep learning models to overfit the current dataset. As a result, the model may fail to generalize to unseen colonoscopy data. To address this, we propose a new state-of-the-art model for medical image segmentation, the SSFormer, which uses a pyramid Transformer encoder to improve the generalization ability of models. Specifically, our proposed Progressive Locality Decoder can be adapted to the pyramid Transformer backbone to emphasize local features and restrict attention dispersion. The SSFormer achieves state-of-the-art performance in both learning and generalization assessment.
Color fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both imaging modalities contain prominent biomarkers indicating suspected glaucoma. Clinically, it is often recommended to take both screenings for a more accurate and reliable diagnosis. However, although numerous algorithms have been proposed for computer-aided diagnosis based on fundus images or OCT volumes, there are still few methods leveraging both modalities for glaucoma assessment. Inspired by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) we held previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D fundus color photography and 3D OCT volumes, which is the first multi-modality dataset for glaucoma grading. In addition, an evaluation framework has also been established to evaluate the performance of the submitted methods. During the challenge, 1272 results were submitted, and finally, the top 10 teams were selected for the final stage. We analyze their results and summarize their methods in this paper. Since all these teams submitted their source code in the challenge, a detailed ablation study is also conducted to verify the effectiveness of the particular modules proposed. We find that many of the proposed techniques are practical for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will be an essential starting point for future research.
LiDAR-based 3D single object tracking is a challenging problem in robotics and autonomous driving. Currently, existing methods usually suffer from the problem that objects at long distances often have very sparse or partially occluded point clouds, which makes the features extracted by the model ambiguous. Ambiguous features make it hard to localize the target object and ultimately lead to poor tracking results. To solve this problem, we utilize the powerful Transformer architecture and propose a Point-Track-Transformer (PTT) module for point-cloud-based 3D single object tracking. Specifically, the PTT module generates fine-tuned attention features by computing attention weights, which guide the tracker to focus on the important features of the target and improve tracking ability in complex scenes. To evaluate our PTT module, we embed PTT into a dominant method and construct a novel 3D SOT tracker named PTT-Net. In PTT-Net, we embed PTT into the voting stage and the proposal generation stage, respectively. The PTT module in the voting stage can model interactions among point patches, which learns context-dependent features. Meanwhile, the PTT module in the proposal generation stage can capture the contextual information between the object and the background. We evaluate our PTT-Net on the KITTI and NuScenes datasets. Experimental results demonstrate the effectiveness of the PTT module and the superiority of PTT-Net, which surpasses the baseline by ~10% in the Car category. Meanwhile, our method also achieves significant performance improvements in sparse scenes. In general, the combination of the transformer and the tracking pipeline enables our PTT-Net to achieve state-of-the-art performance on both datasets. In addition, PTT-Net can run in real time at 40 FPS on an NVIDIA 1080Ti GPU. Our code is open-sourced for the research community at https://github.com/shanjiayao/ptt.
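The attention-weight computation at the heart of a transformer module like PTT can be sketched as plain scaled dot-product self-attention over point features. This is a generic sketch: the learned query/key/value projections and PTT's specific architecture are omitted, which is an assumption made for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats):
    """Scaled dot-product self-attention over N point features of
    dimension D: each point's output is a weighted sum of all point
    features, with weights from pairwise feature similarity."""
    n, d = feats.shape
    scores = feats @ feats.T / np.sqrt(d)   # (N, N) similarity scores
    weights = softmax(scores, axis=-1)      # rows sum to 1
    return weights @ feats                  # attention-refined features

pts = np.random.default_rng(0).normal(size=(8, 16))
refined = self_attention(pts)
```

The attention weights let informative points (e.g., those on the target) contribute more to every refined feature, which is the mechanism the abstract describes for suppressing ambiguity from sparse regions.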
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
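One common way to encode 3D point positions into token features is a sinusoidal coordinate encoding added to the tokens. The sketch below illustrates that general idea only; it is not claimed to be CMT's actual encoding scheme, and the frequency count and addition-based injection are assumptions.

```python
import numpy as np

def sine_pos_encoding(xyz, num_freqs=4):
    """Map (N, 3) coordinates to (N, 3 * 2 * num_freqs) sinusoidal
    features: each axis is expanded into sin/cos at several frequencies."""
    freqs = 2.0 ** np.arange(num_freqs)                        # (F,)
    ang = xyz[:, :, None] * freqs                              # (N, 3, F)
    enc = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)  # (N, 3, 2F)
    return enc.reshape(len(xyz), -1)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 24))              # hypothetical token features
xyz = rng.uniform(-1.0, 1.0, size=(5, 3))      # 3D positions of the tokens
tokens = tokens + sine_pos_encoding(xyz)       # 3 * 2 * 4 = 24 dims
```

Because both image and point-cloud tokens can receive encodings derived from the same 3D coordinate frame, attention between them becomes position-aware without an explicit view transformation.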
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support/query features based on a Transformer-like framework. Our key insights are twofold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
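The first insight, deriving a class center from a support mask and using it to re-weight query features, can be sketched as masked average pooling followed by cosine-similarity re-weighting. This is an illustrative sketch of the general idea under those assumptions, not RefT's exact module.

```python
import numpy as np

def class_center(support_feats, support_mask):
    """Masked average pooling: a support feature map (H, W, D) and a
    binary object mask (H, W) yield one D-dim class center."""
    m = support_mask[..., None]                        # (H, W, 1)
    return (support_feats * m).sum(axis=(0, 1)) / max(m.sum(), 1e-6)

def reweight_query(query_feats, center):
    """Re-weight query features (H, W, D) by cosine similarity to the
    class center, boosting locations that look like the support object."""
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-6)
    c = center / (np.linalg.norm(center) + 1e-6)
    sim = (q * c).sum(axis=-1, keepdims=True)          # (H, W, 1), in [-1, 1]
    return query_feats * (1.0 + sim)

rng = np.random.default_rng(0)
sup = rng.normal(size=(4, 4, 8))
mask = (rng.random((4, 4)) > 0.5).astype(float)
qry = rng.normal(size=(4, 4, 8))
out = reweight_query(qry, class_center(sup, mask))
```

Because the center is computed per support example, it adapts dynamically to each novel class without retraining, which is the point of the "dynamic class centers" in the abstract.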
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes these GNNs computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a light-weight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, corroborating that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
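The distillation objective that the teacher-student setup above rests on is typically a KL divergence between temperature-softened teacher and student distributions. The sketch below shows that standard KD loss only; RELIANT's fairness terms are not reproduced here, and the temperature value is an assumption.

```python
import numpy as np

def softmax(z, t=1.0):
    zt = z / t
    e = np.exp(zt - zt.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Standard KD objective: mean KL divergence between the softened
    teacher distribution and the softened student distribution."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean())

t = np.array([[2.0, 0.5, -1.0]])
s_good = np.array([[1.9, 0.6, -0.9]])   # student close to the teacher
s_bad = np.array([[-1.0, 0.5, 2.0]])    # student contradicting the teacher
loss_good, loss_bad = kd_loss(s_good, t), kd_loss(s_bad, t)
```

Since this loss only asks the student to match the teacher's output distribution, any bias in the teacher's predictions is transferred along with the knowledge, which is exactly the failure mode RELIANT is designed to mitigate.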