The temporal pattern of cardiac motion provides important information for the diagnosis of cardiac disease. This pattern can be obtained with three-directional multi-slice left ventricular myocardial velocity mapping (3DIR MVM), a cardiac MR technique that simultaneously provides the magnitude and phase information of myocardial motion. However, long acquisition times limit the use of this technique by introducing breathing artifacts, while shortening the acquisition time lowers the temporal resolution and can give an inaccurate assessment of cardiac motion. In this study, we propose a frame-synthesis algorithm to increase the temporal resolution of 3DIR MVM data. Our algorithm features 1) three attention-based encoders that accept magnitude images, phase images, and myocardial segmentation masks as inputs; 2) three decoders that output the interpolated frames and the corresponding myocardial segmentation results; and 3) a loss function that emphasizes myocardial pixels. Our algorithm can not only increase the temporal resolution of 3DIR MVM data but also generate myocardial segmentation results simultaneously.
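The abstract does not spell out the exact form of the myocardium-emphasizing loss; below is a minimal PyTorch sketch of one way to up-weight myocardial pixels in a reconstruction loss, where the weighting factor `myo_weight` is a hypothetical hyper-parameter.

```python
import torch

def myocardium_weighted_l1(pred, target, myo_mask, myo_weight=5.0):
    """L1 reconstruction loss that emphasizes myocardial pixels.

    pred, target: (B, 1, H, W) interpolated and ground-truth frames.
    myo_mask:     (B, 1, H, W) binary myocardial segmentation mask.
    myo_weight:   hypothetical factor by which myocardial pixels are up-weighted.
    """
    weights = 1.0 + (myo_weight - 1.0) * myo_mask      # 1 outside, myo_weight inside
    return (weights * (pred - target).abs()).sum() / weights.sum()


if __name__ == "__main__":
    pred = torch.rand(2, 1, 64, 64)
    target = torch.rand(2, 1, 64, 64)
    mask = (torch.rand(2, 1, 64, 64) > 0.8).float()
    print(myocardium_weighted_l1(pred, target, mask))
```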
Video frame interpolation is a challenging task because of the ever-changing, unfixed motion in real-world scenes. Previous methods usually compute bidirectional optical flows and then predict the intermediate flows under a linear motion assumption, which leads to isotropic intermediate flow generation. Follow-up studies obtain anisotropic adjustment through estimated higher-order motion information and extra frames. Relying on motion assumptions, these methods struggle to model the complicated motion in real scenes. In this paper, we propose an end-to-end training method, A^2OF, for video frame interpolation with event-driven anisotropic adjustment of optical flows. Specifically, we use events to generate optical-flow distribution masks for the intermediate flows, which can model the complicated motion between the two frames. Our proposed method outperforms previous methods on video frame interpolation, bringing event-based video interpolation to a higher stage.
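The abstract does not give the mask formulation. As a rough illustration only, the sketch below contrasts the standard linear-motion estimate of an intermediate flow with a per-pixel mask-weighted blend of the two linear estimates; the mask (in A^2OF, derived from event data) is simply assumed to be given.

```python
import torch

def linear_intermediate_flow(flow_0to1, t=0.5):
    """Linear (isotropic) assumption: F_{t->0} ~ -t * F_{0->1}."""
    return -t * flow_0to1

def masked_intermediate_flow(flow_0to1, flow_1to0, mask_t, t=0.5):
    """Hypothetical anisotropic adjustment: a per-pixel mask in [0, 1] blends the
    two linear estimates of F_{t->0} instead of using a single global weight.

    flow_0to1, flow_1to0: (B, 2, H, W) bidirectional optical flows.
    mask_t:               (B, 1, H, W) per-pixel blending mask (assumed given).
    """
    f_t0_from_forward = -t * flow_0to1     # estimate of F_{t->0} from F_{0->1}
    f_t0_from_backward = t * flow_1to0     # estimate of F_{t->0} from F_{1->0}
    return mask_t * f_t0_from_forward + (1.0 - mask_t) * f_t0_from_backward
```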
Video frame interpolation is a classic and challenging low-level computer vision task. Recently, deep-learning-based methods have achieved impressive results, and flow-based methods have been shown to synthesize frames of higher quality. However, most flow-based methods assume a linear trajectory with constant velocity between the two input frames. Only a little work performs prediction with curvilinear trajectories, but this requires more than two frames as input to estimate the acceleration, which costs more time and memory. To solve this problem, we propose an arc-trajectory-based model (ATCA) that learns motion priors from only the two consecutive input frames and is also lightweight. Experiments show that our method performs better than many SOTA methods while using fewer parameters and running at a faster inference speed.
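The abstract does not specify the trajectory parameterization; a common constant-acceleration (arc-like) form is sketched below next to the constant-velocity case, with the per-pixel acceleration assumed to be predicted by the network rather than computed from extra frames.

```python
import torch

def linear_displacement(flow_0to1, t):
    """Constant-velocity assumption: displacement from frame 0 to time t."""
    return t * flow_0to1

def quadratic_displacement(flow_0to1, accel, t):
    """Constant-acceleration (arc-like) assumption:
    d(t) = v0 * t + 0.5 * a * t^2, with v0 recovered from the end-point
    constraint d(1) = F_{0->1}  =>  v0 = F_{0->1} - 0.5 * a.

    flow_0to1: (B, 2, H, W) optical flow from frame 0 to frame 1.
    accel:     (B, 2, H, W) per-pixel acceleration; assumed to be predicted by
               the network from the two input frames.
    """
    v0 = flow_0to1 - 0.5 * accel
    return v0 * t + 0.5 * accel * t ** 2
```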
Recovering the 3D motion of the heart from cine cardiac magnetic resonance (CMR) imaging enables the assessment of regional myocardial function and is important for understanding and analyzing cardiovascular disease. However, 3D cardiac motion estimation is challenging because the acquired cine CMR images are usually 2D slices, which limits the accurate estimation of through-plane motion. To address this problem, we propose a novel multi-view motion estimation network (MulViMotion), which integrates 2D cine CMR images acquired in short-axis and long-axis planes to learn a consistent 3D motion field of the heart. In the proposed method, a hybrid 2D/3D network is built to generate dense 3D motion fields by learning fused representations from multi-view images. To ensure that the motion estimation is consistent in 3D, a shape regularization module is introduced during training, in which shape information from the multi-view images is exploited to provide weak supervision for 3D motion estimation. We extensively evaluate the proposed method on 2D cine CMR images from 580 subjects of the UK Biobank study for 3D motion tracking of the left ventricular myocardium. Experimental results show that the proposed method quantitatively and qualitatively outperforms competing methods.
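The shape regularization module is only described at a high level. The sketch below illustrates one plausible form of such weak supervision: warp a source-frame segmentation with the estimated 3D motion field and penalize its disagreement with the target-frame shape via a soft Dice term. Both the warping utility and the loss are assumptions, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def warp3d(volume, flow):
    """Warp a 3D volume (B, C, D, H, W) with a dense displacement field
    flow (B, 3, D, H, W) given in voxel units (order: z, y, x displacements)."""
    B, _, D, H, W = volume.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(D, device=volume.device, dtype=volume.dtype),
        torch.arange(H, device=volume.device, dtype=volume.dtype),
        torch.arange(W, device=volume.device, dtype=volume.dtype),
        indexing="ij",
    )
    grid = torch.stack((zz, yy, xx), dim=0).unsqueeze(0) + flow   # (B, 3, D, H, W)
    # normalize to [-1, 1] and reorder to (x, y, z) as grid_sample expects
    grid_x = 2.0 * grid[:, 2] / max(W - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(H - 1, 1) - 1.0
    grid_z = 2.0 * grid[:, 0] / max(D - 1, 1) - 1.0
    sample_grid = torch.stack((grid_x, grid_y, grid_z), dim=-1)   # (B, D, H, W, 3)
    return F.grid_sample(volume, sample_grid, align_corners=True)

def shape_regularization_loss(seg_source, seg_target, flow, eps=1e-6):
    """Soft-Dice disagreement between the warped source-frame shape and the
    target-frame shape; a sketch of weak shape supervision."""
    warped = warp3d(seg_source, flow)
    inter = (warped * seg_target).sum()
    union = warped.sum() + seg_target.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)
```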
Automated deep-learning segmentation models have been shown to improve both segmentation efficiency and accuracy. However, training a robust segmentation model requires a large number of labeled training samples, which may be impractical to obtain. This study aims to develop a deep-learning framework for generating synthetic lesions that can be used to augment network training. The lesion synthesis network is a modified generative adversarial network (GAN). Specifically, we innovated a partial-convolution strategy to construct a U-Net-like generator. The discriminator was designed using a Wasserstein GAN with gradient penalty and spectral normalization. A mask generation method based on principal component analysis was developed to model various lesion shapes. The generated masks were then converted into liver lesions by the lesion synthesis network. The framework was evaluated with respect to lesion texture, and the synthetic lesions were used to train a lesion segmentation network to further validate its effectiveness. All networks were trained and tested on the public LiTS dataset. In terms of the two adopted texture parameters, GLCM-energy and GLCM-correlation, the synthetic lesions generated by the proposed method have histogram distributions very similar to those of real lesions. The Kullback-Leibler divergences of GLCM-energy and GLCM-correlation were 0.01 and 0.10, respectively. Including the synthetic lesions in training the tumor segmentation network significantly improved the segmentation Dice performance of U-Net from 67.3% to 71.4% (p < 0.05). Meanwhile, the volumetric precision and sensitivity improved from 74.6% to 76.0% (p = 0.23) and from 66.1% to 70.9% (p < 0.01), respectively. The synthetic data significantly improve the segmentation performance.
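The discriminator is described as a Wasserstein GAN with gradient penalty and spectral normalization; the gradient-penalty term itself is standard and is sketched below for a 2D patch discriminator (the `discriminator` callable and patch shapes are placeholders).

```python
import torch

def gradient_penalty(discriminator, real, fake, device="cpu"):
    """WGAN-GP gradient penalty, evaluated on random interpolates
    between real and synthetic lesion patches of shape (B, C, H, W)."""
    b = real.size(0)
    eps = torch.rand(b, 1, 1, 1, device=device)
    interpolates = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_out = discriminator(interpolates)
    grads = torch.autograd.grad(
        outputs=d_out, inputs=interpolates,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True, retain_graph=True, only_inputs=True,
    )[0]
    grads = grads.reshape(b, -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Typical critic objective (sketch): minimize
#   D(fake) - D(real) + lambda_gp * gradient_penalty(D, real, fake)
```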
We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). This problem is challenging because video appearance and motion can be highly complex. Traditional optical-flow-based solutions often fail where flow estimation is challenging, while newer neural-network-based methods that hallucinate pixel values directly often produce blurry results. We combine the advantages of these two methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which we call deep voxel flow. Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. The technique is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art.
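As a rough, simplified illustration of synthesizing a frame by flowing pixel values from existing frames, the sketch below backward-warps the two inputs with a predicted flow field and blends them with a per-pixel mask; the actual deep voxel flow couples the spatial and temporal components into a single voxel-flow field, which is omitted here.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Sample `frame` (B, C, H, W) at positions displaced by `flow` (B, 2, H, W),
    where flow is given in pixels as (dx, dy)."""
    B, _, H, W = frame.shape
    yy, xx = torch.meshgrid(
        torch.arange(H, device=frame.device, dtype=frame.dtype),
        torch.arange(W, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    x = xx.unsqueeze(0) + flow[:, 0]
    y = yy.unsqueeze(0) + flow[:, 1]
    grid = torch.stack((2.0 * x / max(W - 1, 1) - 1.0,
                        2.0 * y / max(H - 1, 1) - 1.0), dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)

def flow_based_synthesis(frame0, frame1, flow_0to1, blend, t=0.5):
    """Simplified voxel-flow-style synthesis: the network predicts a flow field and
    a per-pixel temporal blending mask `blend` in [0, 1]; the intermediate frame is
    a blend of the two backward-warped inputs."""
    warped0 = backward_warp(frame0, -t * flow_0to1)
    warped1 = backward_warp(frame1, (1.0 - t) * flow_0to1)
    return blend * warped0 + (1.0 - blend) * warped1
```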
We propose a DNN-based framework, referred to as the enhanced correlation-matching-based video frame interpolation network, to support high resolutions such as 4K, which involve large-scale motion and occlusion. Considering the scalability of the network model with respect to resolution, the proposed scheme adopts a recurrent pyramid architecture that shares parameters between the pyramid layers for optical flow estimation. In the proposed flow estimation, the optical flow is recursively refined by tracing the location with maximum correlation. The forward-warping-based correlation matching improves the accuracy of the flow update by excluding incorrectly warped features around occlusion regions. Based on the final bidirectional flows, the intermediate frame at an arbitrary temporal position is synthesized using a warping and blending network and is further improved by a refinement network. Experimental results demonstrate that the proposed scheme outperforms previous works on both 4K video data and low-resolution benchmark datasets in terms of objective and subjective quality, with the smallest number of model parameters.
The lack of efficient segmentation methods and fully-labeled datasets limits the comprehensive assessment of optical coherence tomography angiography (OCTA) microstructures like retinal vessel network (RVN) and foveal avascular zone (FAZ), which are of great value in ophthalmic and systemic disease evaluation. Here, we introduce an innovative OCTA microstructure segmentation network (OMSN) by combining an encoder-decoder-based architecture with multi-scale skip connections and the split-attention-based residual network ResNeSt, paying specific attention to OCTA microstructural features while facilitating better model convergence and feature representations. The proposed OMSN achieves excellent single/multi-task performances for RVN or/and FAZ segmentation. Especially, the evaluation metrics on multi-task models outperform single-task models on the same dataset. On this basis, a fully annotated retinal OCTA segmentation (FAROS) dataset is constructed semi-automatically, filling the vacancy of a pixel-level fully-labeled OCTA dataset. OMSN multi-task segmentation model retrained with FAROS further certifies its outstanding accuracy for simultaneous RVN and FAZ segmentation.
Cross-modality magnetic resonance (MR) image synthesis aims to produce missing modalities from existing ones. Currently, several methods based on deep neural networks have been developed using both source- and target-modalities in a supervised learning manner. However, it remains challenging to obtain a large amount of completely paired multi-modal training data, which inhibits the effectiveness of existing methods. In this paper, we propose a novel Self-supervised Learning-based Multi-scale Transformer Network (SLMT-Net) for cross-modality MR image synthesis, consisting of two stages, i.e., a pre-training stage and a fine-tuning stage. During the pre-training stage, we propose an Edge-preserving Masked AutoEncoder (Edge-MAE), which preserves the contextual and edge information by simultaneously conducting the image reconstruction and the edge generation. Besides, a patch-wise loss is proposed to treat the input patches differently regarding their reconstruction difficulty, by measuring the difference between the reconstructed image and the ground-truth. In this case, our Edge-MAE can fully leverage a large amount of unpaired multi-modal data to learn effective feature representations. During the fine-tuning stage, we present a Multi-scale Transformer U-Net (MT-UNet) to synthesize the target-modality images, in which a Dual-scale Selective Fusion (DSF) module is proposed to fully integrate multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Moreover, we use the pre-trained encoder as a feature consistency module to measure the difference between high-level features of the synthesized image and the ground truth one. Experimental results show the effectiveness of the proposed SLMT-Net, and our model can reliably synthesize high-quality images when the training set is partially unpaired. Our code will be publicly available at https://github.com/lyhkevin/SLMT-Net.
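The patch-wise loss is described as weighting input patches by their reconstruction difficulty; the sketch below shows one plausible reading, re-weighting each patch's L1 error by its own detached error. The patch size and the exponent `gamma` are assumptions, not the paper's exact choices.

```python
import torch

def patch_wise_loss(recon, target, patch_size=16, gamma=1.0):
    """Difficulty-weighted patch loss: patches that are harder to reconstruct
    (larger detached error) contribute more to the total loss.

    recon, target: (B, C, H, W); H and W are assumed divisible by patch_size.
    """
    B, C, H, W = recon.shape
    err = (recon - target).abs()
    patches = err.reshape(B, C, H // patch_size, patch_size,
                          W // patch_size, patch_size)
    per_patch = patches.mean(dim=(1, 3, 5)).reshape(B, -1)       # (B, num_patches)
    weights = per_patch.detach() ** gamma
    weights = weights / (weights.sum(dim=1, keepdim=True) + 1e-8)
    return (weights * per_patch).sum(dim=1).mean()
```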
Prostate biopsy and image-guided treatment procedures are often performed under ultrasound guidance fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. However, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques, and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and fine-tuning methods (i.e., the drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique with a knowledge distillation loss. The knowledge distillation loss allows previously learned knowledge to be preserved and reduces the performance drop after fine-tuning on a new dataset. Furthermore, our approach relies on an attention module that considers model feature-localization information to improve segmentation accuracy. We trained our model on 764 subjects from one institution and fine-tuned it using only ten subjects from a subsequent institution. We analyzed the performance of our method on three large datasets comprising 2067 subjects from three different institutions. Our method achieved an average Dice similarity coefficient (Dice) of $94.0\pm0.03$ and a Hausdorff distance (HD95) of 2.28 $mm$ on independent subjects from the first institution. Moreover, our model generalized well to studies from the other two institutions (Dice: $91.0\pm0.03$; HD95: 3.7 $mm$ and Dice: $82.0\pm0.03$; HD95: 7.1 $mm$).
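The combination of supervised fine-tuning with a knowledge distillation term can be sketched as below, using a standard temperature-scaled KL distillation loss between the frozen source-domain (teacher) model and the fine-tuned (student) model; `alpha` and `T` are hypothetical hyper-parameters, and the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def distillation_segmentation_loss(student_logits, teacher_logits, target,
                                   alpha=0.5, T=2.0):
    """Supervised segmentation loss plus a knowledge distillation term that keeps
    the fine-tuned (student) model close to the frozen source-domain (teacher).

    student_logits, teacher_logits: (B, K, H, W) class logits.
    target:                         (B, H, W) integer labels.
    """
    ce = F.cross_entropy(student_logits, target)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1.0 - alpha) * ce + alpha * kd
```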
We present Video-TransUNet, a deep architecture for segmentation in medical CT videos, built by incorporating temporal feature blending into the TransUNet deep learning framework. In particular, our approach amalgamates strong frame representations via a ResNet CNN backbone, multi-frame feature blending via a Temporal Context Module (TCM), non-local attention via a Vision Transformer, and reconstructive capabilities for multiple targets via a UNet-based convolutional-deconvolutional architecture with multiple heads. We show that this new network design can significantly outperform other state-of-the-art systems when tested on the segmentation of the bolus and the pharynx/larynx in videofluoroscopic swallowing study (VFSS) CT sequences. On our VFSS2022 dataset, it achieves a Dice coefficient of $0.8796\%$ and an average surface distance of $1.0379$. Note that accurately tracking the swallowed bolus is particularly important in clinical practice, since it constitutes the primary method for diagnosing swallowing impairment. Our findings suggest that the proposed model can indeed enhance the TransUNet architecture by exploiting temporal information, improving segmentation performance by a significant margin. We publish key source code, network weights, and ground-truth annotations to facilitate performance reproduction.
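The Temporal Context Module is not detailed in the abstract; the block below is a minimal illustration of multi-frame feature blending, mixing per-frame backbone features with weights predicted from the concatenated stack. It is only an illustration of temporal blending, not the actual TCM.

```python
import torch
import torch.nn as nn

class TemporalContextBlend(nn.Module):
    """Blend features of T consecutive frames with per-pixel, per-frame weights
    predicted from the stacked features (a sketch, not Video-TransUNet's TCM)."""

    def __init__(self, channels, num_frames):
        super().__init__()
        self.score = nn.Conv2d(channels * num_frames, num_frames, kernel_size=1)

    def forward(self, feats):
        # feats: (B, T, C, H, W) backbone features of T consecutive frames
        B, T, C, H, W = feats.shape
        stacked = feats.reshape(B, T * C, H, W)
        weights = torch.softmax(self.score(stacked), dim=1)    # (B, T, H, W)
        return (weights.unsqueeze(2) * feats).sum(dim=1)       # (B, C, H, W)


if __name__ == "__main__":
    tcm = TemporalContextBlend(channels=64, num_frames=3)
    x = torch.rand(1, 3, 64, 32, 32)
    print(tcm(x).shape)  # torch.Size([1, 64, 32, 32])
```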
Delimiting salt inclusions from migrated images is a time-consuming activity that relies on highly human-curated analysis and is subject to interpretation errors or limitations of the methods available. We propose to use migrated images produced from an inaccurate velocity model (with a reasonable approximation of sediment velocity, but without salt inclusions) to predict the correct salt inclusions shape using a Convolutional Neural Network (CNN). Our approach relies on subsurface Common Image Gathers to focus the sediments' reflections around the zero offset and to spread the energy of salt reflections over large offsets. Using synthetic data, we trained a U-Net to use common-offset subsurface images as input channels for the CNN and the correct salt-masks as network output. The network learned to predict the salt inclusions masks with high accuracy; moreover, it also performed well when applied to synthetic benchmark data sets that were not previously introduced. Our training process tuned the U-Net to successfully learn the shape of complex salt bodies from partially focused subsurface offset images.
Magnetic resonance (MR) and computer tomography (CT) images are two typical types of medical images that provide mutually-complementary information for accurate clinical diagnosis and treatment. However, obtaining both images may be limited due to some considerations such as cost, radiation dose, and missing modalities. Recently, medical image synthesis has gained increasing research interest as a way to cope with this limitation. In this paper, we propose a bidirectional learning model, denoted as dual contrast cycleGAN (DC-cycleGAN), to synthesize medical images from unpaired data. Specifically, a dual contrast loss is introduced into the discriminators to indirectly build constraints between real source and synthetic images by taking advantage of samples from the source domain as negative samples and enforce the synthetic images to fall far away from the source domain. In addition, cross-entropy and structural similarity index (SSIM) are integrated into the DC-cycleGAN in order to consider both the luminance and structure of samples when synthesizing images. The experimental results indicate that DC-cycleGAN is able to produce promising results as compared with other cycleGAN-based medical image synthesis methods such as cycleGAN, RegGAN, DualGAN, and NiceGAN. The code will be available at https://github.com/JiayuanWang-JW/DC-cycleGAN.
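One possible reading of the dual contrast idea, sketched below: the target-domain discriminator treats real target images as positives while both synthetic images and real source-domain images act as negatives, pushing synthesized images away from the source domain. The least-squares GAN objective and the 0.5 weighting are assumptions; the exact DC-cycleGAN formulation may differ.

```python
import torch
import torch.nn.functional as F

def dual_contrast_discriminator_loss(disc, real_target, fake_target, real_source):
    """Discriminator loss with source-domain samples used as extra negatives.

    disc:        target-domain discriminator (callable returning score maps).
    real_target: real images from the target modality (positives).
    fake_target: synthesized target-modality images (negatives).
    real_source: real images from the source modality (extra negatives).
    """
    d_real = disc(real_target)
    d_fake = disc(fake_target.detach())
    d_src = disc(real_source)
    pos = F.mse_loss(d_real, torch.ones_like(d_real))
    neg_fake = F.mse_loss(d_fake, torch.zeros_like(d_fake))
    neg_src = F.mse_loss(d_src, torch.zeros_like(d_src))
    return pos + 0.5 * (neg_fake + neg_src)
```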
In this paper, a novel deep learning framework is proposed for the temporal super-resolution simulation of blood vessel flows, in which a high-temporal-resolution time-varying blood vessel flow simulation is generated from a low-temporal-resolution flow simulation result. In our framework, point clouds are used to represent the complex blood vessel model, a resistance-time-aided model is proposed to extract the spatio-temporal features of the time-varying flow field, and finally the high-accuracy, high-resolution flow field is reconstructed through a decoder module. In particular, a magnitude loss and a direction loss of the velocity are proposed based on the vector characteristics of velocity, and the combination of these two metrics constitutes the final loss function for network training. Several examples are given to illustrate the effectiveness and efficiency of the proposed framework for the temporal super-resolution simulation of blood vessel flows.
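The magnitude and direction losses can be sketched directly from their description: an absolute difference of vector norms plus a cosine-distance term, combined with hypothetical weights `w_mag` and `w_dir`.

```python
import torch
import torch.nn.functional as F

def velocity_loss(pred_vel, true_vel, w_mag=1.0, w_dir=1.0, eps=1e-8):
    """Combined magnitude and direction loss for velocity vectors (a sketch;
    the exact weighting used in the paper is not given in the abstract).

    pred_vel, true_vel: (N, 3) velocity vectors at the point-cloud nodes.
    """
    mag_loss = (pred_vel.norm(dim=-1) - true_vel.norm(dim=-1)).abs().mean()
    dir_loss = (1.0 - F.cosine_similarity(pred_vel, true_vel, dim=-1, eps=eps)).mean()
    return w_mag * mag_loss + w_dir * dir_loss
```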
Background. Functional assessment of right ventricle (RV) using gated myocardial perfusion single-photon emission computed tomography (MPS) heavily relies on the precise extraction of right ventricular contours. In this paper, we present a new deep-learning-based model integrating both the spatial and temporal features in gated MPS images to perform the segmentation of the RV epicardium and endocardium. Methods. By integrating the spatial features from each cardiac frame of the gated MPS and the temporal features from the sequential cardiac frames of the gated MPS, we developed a Spatial-Temporal V-Net (ST-VNet) for automatic extraction of RV endocardial and epicardial contours. In the ST-VNet, a V-Net is employed to hierarchically extract spatial features, and convolutional long short-term memory (ConvLSTM) units are added to the skip-connection pathway to extract the temporal features. The input of the ST-VNet is ECG-gated sequential frames of the MPS images and the output is the probability map of the epicardial or endocardial masks. A Dice similarity coefficient (DSC) loss which penalizes the discrepancy between the model prediction and the ground truth was adopted to optimize the segmentation model. Results. Our segmentation model was trained and validated on a retrospective dataset with 45 subjects, and the cardiac cycle of each subject was divided into 8 gates. The proposed ST-VNet achieved a DSC of 0.8914 and 0.8157 for the RV epicardium and endocardium segmentation, respectively. The mean absolute error, the mean squared error, and the Pearson correlation coefficient of the RV ejection fraction (RVEF) between the ground truth and the model prediction were 0.0609, 0.0830, and 0.6985. Conclusion. Our proposed ST-VNet is an effective model for RV segmentation. It has great promise for clinical use in RV functional assessment.
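The DSC loss referred to above is the standard soft Dice loss; a minimal sketch (with illustrative 2D tensor shapes) is given below.

```python
import torch

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice (DSC) loss penalizing the discrepancy between the predicted
    probability map and the ground-truth mask.

    prob:   (B, 1, H, W) sigmoid probabilities of the epicardial/endocardial mask.
    target: (B, 1, H, W) binary ground-truth mask.
    """
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dsc = (2.0 * inter + eps) / (union + eps)
    return (1.0 - dsc).mean()
```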
Joint 2D cardiac segmentation and 3D volume reconstruction are fundamental to building statistical cardiac anatomy models and understanding the functional mechanisms behind motion patterns. However, due to the low through-plane resolution of cine MR and high inter-subject variance, accurately segmenting cardiac images and reconstructing the 3D volume are challenging. In this study, we propose an end-to-end latent-space-based framework, DeepRecon, that generates multiple clinically essential outputs, including accurate image segmentation, synthetic high-resolution 3D images, and 3D reconstructed volumes. Our method identifies the optimal latent representation of the cine images, which contains accurate semantic information about the cardiac structures. In particular, our model jointly generates synthetic images with accurate semantic information and segments the cardiac structures using the optimal latent representation. We further explore downstream applications of 3D shape reconstruction and 4D motion pattern adaptation via different latent-space manipulation strategies. The simultaneously generated high-resolution images provide high interpretable value for assessing cardiac shape and motion. Experimental results demonstrate the effectiveness of our approach on multiple fronts, including 2D segmentation, 3D reconstruction, and downstream 4D motion pattern adaptation.
Videos typically record streaming, continuous visual data as discrete consecutive frames. Since storage costs are expensive for high-fidelity videos, most of them are stored at a relatively low resolution and frame rate. Recent works on space-time video super-resolution (STVSR) have been developed to incorporate temporal interpolation and spatial super-resolution in a unified framework. However, most of them only support a fixed up-sampling scale, which limits their flexibility and applications. In this work, instead of following a discrete representation, we propose a Video Implicit Neural Representation (VideoINR) and show its application to STVSR. The learned implicit neural representation can be decoded to videos of arbitrary spatial resolution and frame rate. We show that VideoINR achieves competitive performance with state-of-the-art STVSR methods on common up-sampling scales and significantly outperforms prior works on continuous and out-of-training-distribution scales. Our project page is at http://zeyuan-chen.com/videoinr/.
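As a toy illustration of decoding a video from an implicit neural representation at arbitrary resolution and frame rate, the sketch below queries a coordinate-based MLP at continuous (x, y, t) positions. VideoINR additionally conditions the decoder on features of the input frames, which is omitted here; the network sizes are arbitrary.

```python
import torch
import torch.nn as nn

class ImplicitVideo(nn.Module):
    """Toy coordinate-based video representation: an MLP maps a continuous
    (x, y, t) coordinate to an RGB value, so frames of arbitrary resolution and
    frame rate can be decoded by dense querying."""

    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):          # coords: (N, 3) in [-1, 1]
        return self.mlp(coords)

def decode_frame(model, height, width, t):
    """Render one frame at time t by querying every pixel coordinate."""
    ys = torch.linspace(-1, 1, height)
    xs = torch.linspace(-1, 1, width)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    tt = torch.full_like(xx, t)
    coords = torch.stack((xx, yy, tt), dim=-1).reshape(-1, 3)
    with torch.no_grad():
        rgb = model(coords)
    return rgb.reshape(height, width, 3)
```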
Advances in deep learning techniques have contributed immensely to biomedical image analysis applications. With breast cancer being the most lethal cancer among women, early detection is the key means of improving survivability. Medical imaging such as ultrasound presents a good visual representation of organ function; however, analyzing such scans is challenging and time consuming for any radiologist, which delays the diagnosis process. Although various deep-learning-based approaches have been proposed, a residual cross-spatial attention guided U-Net (RCA-IUnet) model with minimal training parameters is introduced for tumor segmentation in breast ultrasound imaging, to further improve segmentation performance for tumors of varying size. The RCA-IUnet model follows the U-Net topology with residual inception depth-wise separable convolutions and hybrid pooling (max pooling and spectral pooling) layers. In addition, cross-spatial attention filters are added to suppress irrelevant features and focus on the target structures. The segmentation performance of the proposed model is validated on two publicly available datasets using standard segmentation evaluation metrics, where it outperforms other state-of-the-art segmentation models.
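The cross-spatial attention filter is only named in the abstract; below is a minimal attention-gate sketch in its spirit, in which a gating map computed from an encoder feature and the coarser decoder feature suppresses irrelevant regions on the skip connection. It is not the exact RCA-IUnet block.

```python
import torch
import torch.nn as nn

class SkipAttentionGate(nn.Module):
    """Attention filter on a U-Net skip connection (sketch): a gating map derived
    from the encoder feature and the up-sampled decoder feature suppresses
    irrelevant regions before the features are concatenated."""

    def __init__(self, enc_ch, dec_ch, inter_ch):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_ch, inter_ch, kernel_size=1)
        self.w_dec = nn.Conv2d(dec_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, enc_feat, dec_feat):
        # enc_feat: (B, enc_ch, H, W); dec_feat: (B, dec_ch, H/2, W/2)
        g = self.w_dec(self.up(dec_feat))
        x = self.w_enc(enc_feat)
        attn = torch.sigmoid(self.psi(torch.relu(g + x)))   # (B, 1, H, W)
        return enc_feat * attn
```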
Recently, weakly-supervised image segmentation using weak annotations such as scribbles has gained attention, since such annotations are much easier to obtain than time-consuming and labor-intensive labeling at the pixel/voxel level. However, because scribbles lack structural information about regions of interest (ROIs), existing scribble-based methods suffer from poor boundary localization. In addition, most current methods are designed for 2D image segmentation and do not fully exploit volumetric information if applied directly to image slices. In this paper, we propose Scribble2D5, a scribble-based volumetric image segmentation method that tackles 3D anisotropic images and improves boundary prediction. To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module to extend the semantic information from scribbles, and a combination of static and active boundary prediction to learn ROI boundaries and regularize their shapes. Extensive experiments on three public datasets demonstrate that Scribble2D5 significantly outperforms current scribble-based methods and approaches the performance of fully supervised methods. Our code is available online.
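The basic scribble-supervised term, training only on annotated pixels, can be written as a partial cross-entropy; a minimal sketch is below. Scribble2D5's label propagation and boundary prediction modules are not shown here.

```python
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, scribble, ignore_index=255):
    """Supervision only on scribble-annotated voxels: unlabeled voxels carry the
    ignore_index and contribute nothing to the loss.

    logits:   (B, K, D, H, W) class scores for a 3D volume.
    scribble: (B, D, H, W) integer labels, with ignore_index where unlabeled.
    """
    return F.cross_entropy(logits, scribble, ignore_index=ignore_index)
```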
Standard video frame interpolation methods first estimate optical flow between input frames and then synthesize an intermediate frame guided by motion. Recent approaches merge these two steps into a single convolution process by convolving input frames with spatially adaptive kernels that account for motion and re-sampling simultaneously. These methods require large kernels to handle large motion, which limits the number of pixels whose kernels can be estimated at once due to the large memory demand. To address this problem, this paper formulates frame interpolation as local separable convolution over input frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D kernels require significantly fewer parameters to be estimated. Our method develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously. Since our method is able to estimate kernels and synthesize the whole video frame at once, it allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames. This deep neural network is trained end-to-end using widely available video data without any human annotation. Both qualitative and quantitative experiments show that our method provides a practical solution to high-quality video frame interpolation.
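A sketch of the separable-convolution synthesis step described above: each output pixel is the inner product of a local patch with the outer product of its estimated 1D vertical and horizontal kernels. In the full method this is applied to both input frames with their own kernel pairs and the results are summed; the kernel size K is assumed odd.

```python
import torch
import torch.nn.functional as F

def separable_local_conv(frame, k_vert, k_horiz):
    """Apply spatially-varying separable kernels to one frame.

    frame:            (B, C, H, W) input frame.
    k_vert, k_horiz:  (B, K, H, W) per-pixel 1D kernels of (odd) size K,
                      as estimated by the kernel-prediction network.
    """
    B, C, H, W = frame.shape
    K = k_vert.shape[1]
    pad = K // 2
    patches = F.unfold(frame, kernel_size=K, padding=pad)        # (B, C*K*K, H*W)
    patches = patches.reshape(B, C, K, K, H, W)
    # outer product of the two 1D kernels -> per-pixel 2D kernel (B, K, K, H, W)
    kernel2d = k_vert.unsqueeze(2) * k_horiz.unsqueeze(1)
    out = (patches * kernel2d.unsqueeze(1)).sum(dim=(2, 3))      # (B, C, H, W)
    return out
```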