Unsupervised image registration commonly adopts U-Net-style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, which replaces the expansive path of a U-Net-style network with a parameter-free, model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, Fourier-Net learns a low-dimensional representation of this field in a band-limited Fourier domain. This representation is then decoded by our model-driven decoder, consisting of a zero-padding layer and an inverse discrete Fourier transform layer, into the dense, full-resolution displacement field in the spatial domain. These changes allow the unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference. Fourier-Net is evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, compared to a recent transformer-based method, TransMorph, our Fourier-Net uses only 0.22\% of its parameters and 6.66\% of its mult-adds while achieving a 0.6\% higher Dice score and an 11.48$\times$ faster inference speed. Code is available at \url{https://github.com/xi-jia/Fourier-Net}.
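The decoding step can be stated in a few lines. The following is a minimal sketch assuming a PyTorch-style implementation, an fftshift-centred frequency layout, and an illustrative band-limited size; it is not the authors' released code.

```python
# Sketch of the parameter-free decoder described above: zero padding of a
# band-limited spectrum followed by an inverse DFT. Names and sizes are
# illustrative assumptions.
import torch

def fourier_decode(band_limited, full_shape):
    """band_limited: complex tensor (B, 3, d, h, w) of low-frequency coefficients
    (centred layout); full_shape: target spatial size (D, H, W).
    Returns a dense displacement field of shape (B, 3, D, H, W)."""
    B, C, d, h, w = band_limited.shape
    D, H, W = full_shape
    # Zero-pad: place the low-frequency block at the centre of the full spectrum.
    spectrum = torch.zeros(B, C, D, H, W, dtype=band_limited.dtype,
                           device=band_limited.device)
    sd, sh, sw = (D - d) // 2, (H - h) // 2, (W - w) // 2
    spectrum[:, :, sd:sd + d, sh:sh + h, sw:sw + w] = band_limited
    # Undo the centring and apply the inverse DFT to recover the spatial field.
    spectrum = torch.fft.ifftshift(spectrum, dim=(-3, -2, -1))
    return torch.fft.ifftn(spectrum, dim=(-3, -2, -1)).real

# Example: a 20x24x20 band-limited representation decoded to a 160x192x160 field.
coeffs = torch.randn(1, 3, 20, 24, 20, dtype=torch.complex64)
flow = fourier_decode(coeffs, (160, 192, 160))
print(flow.shape)  # torch.Size([1, 3, 160, 192, 160])
```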
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic enhancement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and ask the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and a power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
Due to their extreme long-range modeling capability, vision-transformer-based networks have become increasingly popular in deformable image registration. We argue, however, that the receptive field of a 5-layer convolutional U-Net is already sufficient to capture accurate deformations without needing long-range dependencies. The purpose of this study is therefore to investigate whether U-Net-based methods are outdated for medical image registration when compared with modern transformer-based approaches. To this end, we propose a large-kernel U-Net (LKU-Net) by embedding a parallel convolutional block into a vanilla U-Net to enlarge the effective receptive field. On the public 3D IXI brain dataset for atlas-based registration, we show that the performance of the vanilla U-Net is already on par with that of state-of-the-art transformer-based networks such as TransMorph, and that the proposed LKU-Net outperforms TransMorph while using only 1.12% of its parameters and 10.8% of its mult-adds operations. We further evaluate LKU-Net on the MICCAI Learn2Reg 2021 challenge dataset for inter-subject registration, where our LKU-Net also outperforms TransMorph and, as of the submission of this work, ranks first on the public leaderboard. With only modest modifications to the vanilla U-Net, we show that the U-Net can outperform transformer-based architectures on both inter-subject and atlas-based 3D medical image registration. Code is available at https://github.com/xi-jia/lku-net.
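To make the modification concrete, below is a hedged sketch of one parallel convolutional block of the kind described: a large-kernel branch, a 3x3x3 branch, a 1x1x1 branch and an identity path summed before activation. The kernel size and the exact set of branches are illustrative assumptions, not the released LKU-Net code.

```python
# Illustrative parallel large-kernel block intended to enlarge the effective
# receptive field of a U-Net stage.
import torch
import torch.nn as nn

class ParallelLargeKernelBlock(nn.Module):
    def __init__(self, channels, large_kernel=5):
        super().__init__()
        pad = large_kernel // 2
        self.large = nn.Conv3d(channels, channels, large_kernel, padding=pad)
        self.small = nn.Conv3d(channels, channels, 3, padding=1)
        self.point = nn.Conv3d(channels, channels, 1)
        self.act = nn.PReLU()

    def forward(self, x):
        # Parallel branches are summed, then activated; the identity path keeps
        # the block easy to optimise, as in a residual unit.
        return self.act(self.large(x) + self.small(x) + self.point(x) + x)

block = ParallelLargeKernelBlock(16)
print(block(torch.randn(1, 16, 32, 32, 32)).shape)  # torch.Size([1, 16, 32, 32, 32])
```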
Generative models have been widely proposed in image recognition to generate more images whose distribution is similar to that of the real images. They usually introduce a discriminator network to distinguish real data from generated data. Such models rely on a discriminator that is tasked with differentiating style-transferred data from data contained in the target dataset. However, such a network focuses on differences in intensity distribution and may ignore structural differences between the datasets. In this paper, we formulate a new image-to-image translation problem to ensure that the structure of the generated images is similar to that of the images in the target dataset. We propose a simple yet powerful Structure-Unbiased Adversarial (SUA) network, which accounts for both intensity and structural differences between the training and test sets when performing image segmentation. It consists of a spatial transformation block followed by an intensity distribution rendering module. The spatial transformation block is proposed to reduce the structural gap between the two images, and it also produces an inverse deformation field to warp the final segmented image back. The intensity distribution rendering module then renders the deformed structure into an image with the target intensity distribution. Experimental results show that the proposed SUA method is able to transfer both intensity distribution and structural content between multiple datasets.
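As an illustration of the warp-back step mentioned above, the following is a minimal sketch of warping a segmentation map with an inverse deformation field, assuming a 2D displacement-in-pixels convention; it shows only this step, not the full SUA network.

```python
# Hypothetical warping step: apply an inverse deformation field to a
# segmentation map using grid sampling.
import torch
import torch.nn.functional as F

def warp_back(segmentation, inverse_flow):
    """segmentation: (B, C, H, W); inverse_flow: (B, 2, H, W) displacements in pixels."""
    B, _, H, W = segmentation.shape
    # Identity sampling grid in pixel coordinates (x, y order for grid_sample).
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().unsqueeze(0).expand(B, -1, -1, -1)
    # Add the displacement, then normalise to [-1, 1] for grid_sample.
    grid = grid + inverse_flow.permute(0, 2, 3, 1)
    grid[..., 0] = 2.0 * grid[..., 0] / (W - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (H - 1) - 1.0
    return F.grid_sample(segmentation, grid, mode="nearest", align_corners=True)

seg = torch.zeros(1, 1, 64, 64); seg[:, :, 20:40, 20:40] = 1.0
flow = torch.ones(1, 2, 64, 64) * 2.0  # shift everything by 2 pixels
print(warp_back(seg, flow).sum())
```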
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combinations delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
Large-scale convex-concave minimax problems arise in numerous applications, including game theory, robust training, and the training of generative adversarial networks. Despite their wide applicability, solving such problems efficiently and effectively with existing stochastic minimax methods is challenging in the presence of large amounts of data. We study a class of stochastic minimax methods and develop LocalAdaSEG, a communication-efficient distributed stochastic extragradient algorithm with an adaptive learning rate, suited to solving convex-concave minimax problems in the parameter-server model. LocalAdaSEG has three main features: (i) a periodic communication strategy that reduces the communication cost between workers and the server; (ii) adaptive learning rates that are computed locally and allow a tuning-free implementation; and (iii) theoretically, a nearly linear speed-up with respect to the dominant variance term, arising from the estimation of the stochastic gradient, proven in both the smooth and nonsmooth convex-concave settings. LocalAdaSEG is used to solve a stochastic bilinear game and to train a generative adversarial network. We compare LocalAdaSEG against several existing optimizers for minimax problems and demonstrate its efficacy through several experiments in both homogeneous and heterogeneous settings.
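For intuition, below is a hedged sketch of a stochastic extragradient step with an AdaGrad-style coordinate-wise step size on a single worker, applied to a bilinear game; the exact LocalAdaSEG update rule and its periodic server averaging are not reproduced here.

```python
# Stochastic extragradient with an adaptive step size on the bilinear game
# min_x max_y x^T A y (noise mimics minibatch gradient estimates).
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
x, y = rng.standard_normal(d), rng.standard_normal(d)
gx_acc, gy_acc, eps = np.zeros(d), np.zeros(d), 1e-8

def grads(x, y, noise=0.01):
    gx = A @ y + noise * rng.standard_normal(d)    # gradient of f w.r.t. x
    gy = A.T @ x + noise * rng.standard_normal(d)  # gradient of f w.r.t. y
    return gx, gy

for t in range(2000):
    gx, gy = grads(x, y)
    gx_acc += gx ** 2; gy_acc += gy ** 2
    lr_x = 0.5 / (np.sqrt(gx_acc) + eps)  # adaptive, coordinate-wise step sizes
    lr_y = 0.5 / (np.sqrt(gy_acc) + eps)
    # Extrapolation (look-ahead) step.
    x_half, y_half = x - lr_x * gx, y + lr_y * gy
    # Update step using gradients evaluated at the look-ahead point.
    gx2, gy2 = grads(x_half, y_half)
    x, y = x - lr_x * gx2, y + lr_y * gy2

# Distance of the iterates from the saddle point at the origin.
print(np.linalg.norm(x), np.linalg.norm(y))
```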
Bilingual terminologies are important machine translation resources in the e-commerce domain, and they are usually either manually translated or automatically extracted from parallel data. Human translation is costly, and e-commerce parallel corpora are very scarce. However, comparable data in different languages within the same commodity domain are abundant. In this paper, we propose a novel framework for extracting e-commerce bilingual terminologies from comparable data. Benefiting from cross-lingual pre-training in the e-commerce domain, our framework can fully exploit the deep semantic relationships between source-side terms and target-side sentences to extract the corresponding target terms. Experimental results on various language pairs show that our method achieves significantly better performance than various strong baselines.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
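The implicit alignment idea can be illustrated with a small sketch: 3D coordinates attached to image tokens and point-cloud tokens are mapped to position embeddings and added to the tokens, after which a standard transformer decoder with object queries attends to the fused sequence. All dimensions and module choices below (including the shared position MLP) are illustrative assumptions, not the CMT implementation.

```python
# Toy sketch of position-encoded multi-modal tokens feeding a query-based decoder.
import torch
import torch.nn as nn

d_model = 256
pos_mlp = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=3)
box_head = nn.Linear(d_model, 10)  # e.g. centre, size, yaw, velocity

# Toy inputs: 100 image tokens and 200 LiDAR tokens with associated 3D coordinates.
img_tokens, img_xyz = torch.randn(1, 100, d_model), torch.randn(1, 100, 3)
pts_tokens, pts_xyz = torch.randn(1, 200, d_model), torch.randn(1, 200, 3)

# Encode 3D coordinates into both modalities and concatenate the token streams.
memory = torch.cat([img_tokens + pos_mlp(img_xyz),
                    pts_tokens + pos_mlp(pts_xyz)], dim=1)

queries = torch.randn(1, 900, d_model)       # learned object queries in practice
boxes = box_head(decoder(queries, memory))   # (1, 900, 10) raw box parameters
print(boxes.shape)
```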
A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
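One possible form of such a feature-center regularizer is sketched below: class centers computed from a batch are pushed toward the pairwise cosine similarity -1/(K-1) of a simplex equiangular tight frame. This is an illustrative formulation under that assumption, not necessarily the exact loss used in the paper.

```python
# Regulariser encouraging class feature centres to approach the equiangular,
# maximally separated structure (pairwise cosine -1/(K-1)).
import torch
import torch.nn.functional as F

def etf_center_regularizer(features, labels, num_classes):
    """features: (N, d) last-layer features; labels: (N,) integer class ids."""
    centers, present = [], []
    for k in range(num_classes):
        mask = labels == k
        if mask.any():
            centers.append(features[mask].mean(dim=0))
            present.append(k)
    centers = F.normalize(torch.stack(centers), dim=1)   # (K', d) unit-norm centres
    cos = centers @ centers.t()                           # pairwise cosine similarities
    target = torch.full_like(cos, -1.0 / (num_classes - 1))
    target.fill_diagonal_(1.0)                            # a centre matches itself
    return ((cos - target) ** 2).mean()

feats = torch.randn(4096, 64, requires_grad=True)
labels = torch.randint(0, 20, (4096,))
loss = etf_center_regularizer(feats, labels, num_classes=20)
loss.backward()
print(loss.item())
```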
Witnessing the impressive achievements of pre-training techniques on large-scale data in computer vision and natural language processing, we ask whether this idea can be adapted in a grab-and-go spirit to mitigate the sample-inefficiency problem of visuomotor driving. Given the highly dynamic and variable nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns the driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
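The self-supervised signal in both stages is a photometric reconstruction error. A hedged sketch of such an objective is given below: target-frame pixels are back-projected with a predicted depth map, transformed by a predicted relative pose, re-projected into the source frame, and the sampled colors are compared with the target frame. Shapes, the intrinsics, and the plain L1 error are illustrative simplifications rather than the PPGeo implementation.

```python
# View-synthesis photometric loss of the kind used in self-supervised
# depth-and-pose learning.
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K):
    """target/source: (B, 3, H, W); depth: (B, 1, H, W); pose: (B, 4, 4) target->source;
    K: (3, 3) camera intrinsics."""
    B, _, H, W = target.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().view(3, -1)  # (3, HW)
    # Back-project to 3D camera coordinates, transform by the pose, re-project.
    cam = torch.inverse(K) @ pix                                 # (3, HW) rays
    cam = depth.view(B, 1, -1) * cam.unsqueeze(0)                # (B, 3, HW) 3D points
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)     # homogeneous coords
    proj = K @ (pose @ cam_h)[:, :3]                             # (B, 3, HW)
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # Normalise to [-1, 1] and sample the source frame at the projected locations.
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).view(B, H, W, 2)
    recon = F.grid_sample(source, grid, align_corners=True)
    return (recon - target).abs().mean()

B, H, W = 2, 64, 96
loss = photometric_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                        torch.rand(B, 1, H, W) + 0.5,
                        torch.eye(4).expand(B, 4, 4),
                        torch.tensor([[100., 0., W / 2], [0., 100., H / 2], [0., 0., 1.]]))
print(loss.item())
```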