Magnetic resonance spectroscopic imaging (MRSI) is an essential tool for quantifying metabolites in vivo, but its low spatial resolution limits clinical applications. Deep-learning-based super-resolution methods have provided promising results for improving the spatial resolution of MRSI, but the super-resolved images are often blurry compared with experimentally acquired high-resolution images. Attempts have been made with generative adversarial networks (GANs) to improve the image visual quality. In this work, we consider another type of generative model, the flow-based model, which is more stable and interpretable to train than GANs. Specifically, we propose a flow-based enhancer network to improve the visual quality of super-resolved MRSI. Unlike previous flow-based models, our enhancer network incorporates anatomical information from an additional image modality (MRI) and uses a learnable base distribution. In addition, we impose a guide loss and a data-consistency loss to encourage the network to generate images with high visual quality while maintaining high fidelity. Experiments on a 1H-MRSI dataset acquired from 25 high-grade glioma patients indicate that our enhancer network outperforms the adversarial networks and the baseline flow-based methods. Our method also allows visual-quality adjustment and uncertainty estimation.
translated by Google Translate
Magnetic resonance spectroscopic imaging (MRSI) is a valuable tool for studying metabolic activities in the human body, but current applications are limited to low spatial resolutions. Existing deep-learning-based MRSI super-resolution methods require training a separate network for each upscaling factor, which is time-consuming and memory-inefficient. We tackle this multi-scale super-resolution problem using a filter-scaling strategy that modulates the convolutional filters based on the upscaling factor, so that a single network can be used for various upscaling factors. Observing that each metabolite has distinct spatial characteristics, we also modulate the network based on the specific metabolite. Furthermore, our network is conditioned on the weight of the adversarial loss, so that the perceptual sharpness of the super-resolved metabolic maps can be adjusted within a single network. We incorporate these network conditionings using a novel multi-conditional module. Experiments were performed on a 1H-MRSI dataset from 15 high-grade glioma patients. Results show that the proposed network achieves the best performance among several multi-scale super-resolution methods and can provide super-resolved metabolic maps with adjustable sharpness.
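As a rough illustration of the filter-scaling idea described above, the sketch below modulates a set of shared convolution kernels with a function of the upscaling factor. The modulation network `phi` is a toy stand-in (a fixed random map, not the paper's learned parameterization), shown only to make the conditioning mechanism concrete.

```python
import numpy as np

def phi(scale: float, shape) -> np.ndarray:
    # Toy stand-in for the learned modulation network: a per-weight
    # gain that depends on the upscaling factor (hypothetical form).
    rng = np.random.default_rng(0)
    w = rng.standard_normal(shape)
    return 1.0 + 0.1 * scale * np.tanh(w)

def scaled_filters(base_filters: np.ndarray, scale: float) -> np.ndarray:
    """Modulate shared base filters for a given upscaling factor."""
    return base_filters * phi(scale, base_filters.shape)

base = np.ones((8, 3, 3))          # 8 shared 3x3 filters
f2 = scaled_filters(base, 2.0)     # filters used for 2x upscaling
f3 = scaled_filters(base, 3.0)     # same shared weights, new modulation
```

One set of base weights thus serves every scale, which is the memory saving the abstract refers to.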
Super-resolution is an ill-posed problem, where a ground-truth high-resolution image represents only one possibility in the space of plausible solutions. Yet, the dominant paradigm is to employ pixel-wise losses, such as L_1, which drive the prediction toward a blurry average. When combined with an adversarial loss, this leads to fundamentally conflicting objectives, which degrade the final quality. We address this issue by revisiting the L_1 loss and showing that it corresponds to a one-layer conditional flow. Inspired by this relation, we explore general flows as a fidelity-based alternative to the L_1 objective. We demonstrate that the flexibility of deeper flows leads to better visual quality and consistency when combined with adversarial losses. We conduct extensive user studies across three datasets and scale factors, where our approach is shown to outperform state-of-the-art methods for photo-realistic super-resolution. Code and trained models are available at: git.io/adflow
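The equivalence the abstract exploits can be checked numerically: minimizing an L_1 loss is maximum likelihood under a Laplace density, i.e. a one-layer flow with an identity transform and a Laplace base distribution. The snippet below (an illustration, not the paper's code) shows the two objectives differ only by the additive constant log(2b).

```python
import numpy as np

def l1_loss(pred, target):
    return np.mean(np.abs(pred - target))

def laplace_nll(pred, target, b=1.0):
    # -log p(target | pred) for a Laplace density centered at pred
    # with scale b: |pred - target| / b + log(2b).
    return np.mean(np.abs(pred - target) / b + np.log(2.0 * b))

x = np.linspace(-1, 1, 101)
y = np.zeros_like(x)
# With b = 1 the two objectives differ only by log(2), so they share
# the same minimizer: the L_1 loss is a one-layer conditional flow.
diff = laplace_nll(x, y) - l1_loss(x, y)
```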
Flow-based generative super-resolution (SR) models learn to produce a set of feasible SR solutions, called the SR space. The diversity of the SR solutions increases with the temperature ($\tau$) of the latent variable, which introduces random variations of texture among the sample solutions, resulting in visual artifacts and low fidelity. In this paper, we present a simple but effective image ensembling/fusion approach to obtain a single SR image that eliminates the stochastic artifacts and improves fidelity without significantly compromising perceptual quality. We achieve this by benefiting from a diverse set of feasible photo-realistic solutions in the SR space spanned by flow models. We propose different image ensembling and fusion strategies, which offer multiple paths to move sample solutions in the SR space, in a controllable manner, to more desired destinations in the perception-distortion plane, depending on the fidelity versus perceptual-quality requirements of the task at hand. Experimental results demonstrate that our image ensembling/fusion strategies achieve more promising perception-distortion trade-offs, in both quantitative metrics and visual quality, compared with the sample SR images produced by flow models and adversarially trained models.
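A minimal sketch of the fusion idea, under the simplifying assumption that flow samples share the same underlying content plus temperature-scaled random texture: pixel-wise mean or median fusion over the samples suppresses the stochastic component while keeping the shared content. This is only a toy model of the SR space, not the paper's strategies.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(size=(16, 16))       # shared content of the SR space
tau = 0.8                                # sampling temperature
# Simulated flow samples: content + temperature-scaled random texture.
samples = np.stack([clean + tau * 0.1 * rng.standard_normal(clean.shape)
                    for _ in range(8)])

mean_fused = samples.mean(axis=0)        # pixel-wise mean fusion
median_fused = np.median(samples, axis=0)  # pixel-wise median fusion

def mse(a, b):
    return float(np.mean((a - b) ** 2))

single_err = mse(samples[0], clean)      # one raw flow sample
fused_err = mse(mean_fused, clean)       # fused image is more faithful
```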
Shortening acquisition time and reducing motion artifacts are two of the most important problems in magnetic resonance imaging. As a promising solution, deep-learning-based high-quality MR image restoration has been studied to generate higher-resolution, motion-artifact-free images from lower-resolution images acquired with shortened acquisition time, without costing extra acquisition time or modifying the pulse sequences. However, many problems remain that prevent deep-learning methods from becoming practical in clinical settings. Specifically, most previous works focus on the network model but ignore the impact of various downsampling strategies on acquisition time. Moreover, long inference time and high GPU consumption are also bottlenecks for deploying most models in clinics. Furthermore, prior studies generate motion artifacts retrospectively with random motion, resulting in uncontrollable severity of the artifacts. More importantly, doctors are unsure whether the generated MR images are trustworthy, making diagnosis difficult. To overcome all these problems, we employ a unified 2D deep-learning neural network for both 3D MRI super-resolution and motion-artifact reduction, demonstrating that this framework can achieve better performance on 3D MRI restoration tasks than other state-of-the-art methods, while keeping GPU consumption and inference time significantly lower, making it easier to deploy. We also analyze several downsampling strategies based on the acceleration factor, including multiple combinations of in-plane and through-plane downsampling, and develop a controllable and quantifiable method for motion-artifact generation. Finally, a pixel-wise uncertainty is computed and used to estimate the accuracy of the generated images, providing additional information for reliable diagnosis.
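One common way to generate motion artifacts retrospectively with a quantifiable severity knob, sketched below under assumptions that may differ from the paper's scheme: rigid in-plane translation during acquisition appears as a linear phase error on the k-space lines acquired while the object was displaced, and the fraction of corrupted lines controls severity. The function name and parameters are illustrative.

```python
import numpy as np

def add_motion_artifact(img: np.ndarray, severity: float,
                        shift: float = 2.0, seed: int = 0) -> np.ndarray:
    """Corrupt a fraction `severity` of k-space lines with the linear
    phase of a `shift`-pixel in-plane translation (toy rigid motion)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    n_corrupt = int(severity * ny)
    rows = rng.choice(ny, size=n_corrupt, replace=False)
    # Linear phase across the readout direction = spatial translation.
    phase = np.exp(-2j * np.pi * shift * np.fft.fftshift(np.fft.fftfreq(nx)))
    k[rows, :] *= phase
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                       # simple phantom
mild = add_motion_artifact(img, severity=0.1)
severe = add_motion_artifact(img, severity=0.5)
```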
High Resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis. In MRI, restricted by hardware capacity, scan time, and patient cooperation ability, isotropic 3D HR image acquisition typically requires a long scan time and results in small spatial coverage and low SNR. Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images can be recovered from low-resolution (LR) input via single image super-resolution (SISR) algorithms. However, most existing SISR methods tend to learn a scale-specific projection between LR and HR images, so these methods can only deal with a fixed up-sampling rate. To achieve different up-sampling rates, multiple SR networks have to be built up respectively, which is very time-consuming and resource-intensive. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images. The SR task is then converted to representing the implicit voxel function via deep neural networks from a set of paired HR-LR training examples. The ArSSR model consists of an encoder network and a decoder network. Specifically, the convolutional encoder network extracts feature maps from the LR input images, and the fully-connected decoder network approximates the implicit voxel function. Due to the continuity of the learned function, a single ArSSR model can achieve arbitrary up-sampling rate reconstruction of HR images from any input LR image after training. Experimental results on three datasets show that the ArSSR model can achieve state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model to achieve arbitrary up-sampling scales.
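The implicit-voxel-function idea above can be made concrete with a toy coordinate-based decoder (random untrained weights, a stand-in for ArSSR's trained MLP): because the decoder maps an encoder feature plus a continuous xyz coordinate to an intensity, the same model can be queried on grids of any density, i.e. any up-sampling rate.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy two-layer MLP decoder: input = 16-d feature + 3-d coordinate.
W1 = rng.standard_normal((16 + 3, 32)) * 0.1
W2 = rng.standard_normal((32, 1)) * 0.1

def decoder(feature: np.ndarray, coord: np.ndarray) -> float:
    """Implicit voxel function: (feature, xyz) -> intensity."""
    h = np.tanh(np.concatenate([feature, coord]) @ W1)
    return float(h @ W2)

feature = rng.standard_normal(16)   # stands in for the LR encoder output

def query_grid(n: int) -> np.ndarray:
    """Sample the continuous function on an n^3 coordinate grid."""
    axis = np.linspace(-1, 1, n)
    out = np.empty((n, n, n))
    for i, x in enumerate(axis):
        for j, y in enumerate(axis):
            for k, z in enumerate(axis):
                out[i, j, k] = decoder(feature, np.array([x, y, z]))
    return out

vol_2x = query_grid(4)   # one model, arbitrary output grid densities
vol_3x = query_grid(6)
```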
Normalizing flows, which approximate the complex distribution of natural images with a simple tractable distribution in a latent space via invertible neural networks (INNs), have been successfully used for generative image super-resolution (SR). These models can generate multiple realistic SR images from one low-resolution (LR) input by randomly sampling points in the latent space, simulating the ill-posed nature of image upscaling, where multiple high-resolution (HR) images correspond to the same LR image. Recently, the invertible process in INNs has also been successfully used by bidirectional image rescaling models, such as IRN and HCFlow, to jointly optimize downscaling and inverse upscaling, significantly improving upscaled image quality. While they are also optimized for image downscaling, the ill-posed nature of image downscaling, where one HR image can be downscaled to multiple LR images depending on different interpolation kernels and resampling methods, is overlooked. In addition to the original latent variable that represents the uncertainty of image upscaling, a new latent variable is introduced to model the variations of the image downscaling process. This dual latent variable enhancement is applicable to different image rescaling models, and it is shown in extensive experiments that it can consistently improve image-upscaling accuracy without sacrificing image quality in the downscaled LR images. It is also shown to be effective in enhancing other INN-based models for image restoration applications such as image hiding.
Image downscaling and upscaling are two basic rescaling operations. Once an image is downscaled, it is difficult to reconstruct via upscaling due to the loss of information. To make these two processes more compatible and improve reconstruction performance, some efforts model them as a joint encoding-decoding task, with the constraint that the downscaled (i.e., encoded) low-resolution (LR) image must preserve the original visual appearance. To implement this constraint, most methods supervise the downscaling module with the bicubically downscaled LR version of the original high-resolution (HR) image. However, this bicubic LR guidance may be suboptimal for the subsequent upscaling (i.e., decoding) and limit the final reconstruction performance. In this paper, instead of applying the LR guidance directly, we propose an additional invertible flow guidance module (FGM), which can transform the downscaled representation into a visually plausible image during downscaling and transform it back during upscaling. Benefiting from the invertibility of FGM, the downscaled representation can get rid of the LR guidance without disturbing the downscaling-upscaling process. This allows us to remove the restrictions on the downscaling module and optimize the downscaling and upscaling modules in an end-to-end manner. In this way, the two modules can cooperate to maximize the HR reconstruction performance. Extensive experiments demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on both downscaled and reconstructed images.
Single-image super-resolution (SISR) networks trained with perceptual and adversarial losses provide high-contrast outputs compared to those of networks trained with distortion-oriented losses, such as L1 or L2. However, it has been shown that using a single perceptual loss is insufficient for accurately restoring locally varying diverse shapes in images, often generating undesirable artifacts or unnatural details. For this reason, combinations of various losses, such as perceptual, adversarial, and distortion losses, have been attempted, yet it remains challenging to find optimal combinations. Hence, in this paper, we propose a new SISR framework that applies optimal objectives for each region to generate plausible results in overall areas of high-resolution outputs. Specifically, the framework comprises two models: a predictive model that infers an optimal objective map for a given low-resolution (LR) input and a generative model that applies a target objective map to produce the corresponding SR output. The generative model is trained over our proposed objective trajectory representing a set of essential objectives, which enables the single network to learn various SR results corresponding to combined losses on the trajectory. The predictive model is trained using pairs of LR images and corresponding optimal objective maps searched from the objective trajectory. Experimental results on five benchmarks show that the proposed method outperforms state-of-the-art perception-driven SR methods in LPIPS, DISTS, PSNR, and SSIM metrics. The visual results also demonstrate the superiority of our method in perception-oriented reconstruction. The code and models are available at https://github.com/seungho-snu/SROOE.
Magnetic Resonance Fingerprinting (MRF) is an efficient quantitative MRI technique that can extract important tissue and system parameters such as T1, T2, B0, and B1 from a single scan. This property also makes it attractive for retrospectively synthesizing contrast-weighted images. In general, contrast-weighted images like T1-weighted, T2-weighted, etc., can be synthesized directly from parameter maps through spin-dynamics simulation (i.e., Bloch or Extended Phase Graph models). However, these approaches often exhibit artifacts due to imperfections in the mapping, the sequence modeling, and the data acquisition. Here we propose a supervised learning-based method that directly synthesizes contrast-weighted images from the MRF data without going through the quantitative mapping and spin-dynamics simulation. To implement our direct contrast synthesis (DCS) method, we deploy a conditional Generative Adversarial Network (GAN) framework and propose a multi-branch U-Net as the generator. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. In-vivo experiments demonstrate excellent image quality compared to simulation-based contrast synthesis and previous DCS methods, both visually as well as by quantitative metrics. We also demonstrate cases where our trained model is able to mitigate in-flow and spiral off-resonance artifacts that are typically seen in MRF reconstructions and thus more faithfully represent conventional spin echo-based contrast-weighted images.
Low-field (LF) MRI scanners have the power to revolutionize medical imaging by providing a portable and cheaper alternative to high-field MRI scanners. However, such scanners are usually significantly noisier and lower quality than their high-field counterparts. The aim of this paper is to improve the SNR and overall image quality of low-field MRI scans to improve diagnostic capability. To address this issue, we propose a Nested U-Net neural network architecture super-resolution algorithm that outperforms previously suggested deep learning methods with an average PSNR of 78.83 and SSIM of 0.9551. We tested our network on artificial noisy downsampled synthetic data from a major T1 weighted MRI image dataset called the T1-mix dataset. One board-certified radiologist scored 25 images on the Likert scale (1-5) assessing overall image quality, anatomical structure, and diagnostic confidence across our architecture and other published works (SR DenseNet, Generator Block, SRCNN, etc.). We also introduce a new type of loss function called natural log mean squared error (NLMSE). In conclusion, we present a more accurate deep learning method for single image super-resolution applied to synthetic low-field MRI via a Nested U-Net architecture.
Learning neural networks using only a small amount of data is an important research topic with tremendous potential for applications. In this paper, we introduce a regularizer for the variational modeling of inverse problems in imaging, based on normalizing flows. Our regularizer, called PatchNR, involves a normalizing flow learned on patches of very few images. In particular, the training is independent of the considered inverse problem, so the same regularizer can be used for different forward operators acting on the same class of images. By investigating the distribution of patches versus those of the whole image class, we prove that our variational model is indeed a MAP approach. If additional supervised information is available, our model can be generalized to conditional patches. Numerical examples for the superresolution of material images and for low-dose or limited-angle computed tomography (CT) demonstrate that our method provides high-quality results among methods with similar assumptions, while requiring only very few data.
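A minimal sketch of a PatchNR-style regularizer: its value is the mean negative log-likelihood of randomly drawn image patches under a learned patch density. A standard-normal density stands in below for the trained normalizing flow, so only the structure (patch extraction plus patch NLL) matches the abstract, not the actual prior.

```python
import numpy as np

def extract_patches(img: np.ndarray, size: int, n: int, seed: int = 0):
    """Draw n random size-by-size patches and flatten them."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys = rng.integers(0, h - size + 1, n)
    xs = rng.integers(0, w - size + 1, n)
    return np.stack([img[y:y + size, x:x + size].ravel()
                     for y, x in zip(ys, xs)])

def patch_regularizer(img: np.ndarray, size: int = 6, n: int = 64) -> float:
    p = extract_patches(img, size, n)
    # Stand-in for -log p_flow(patch): standard-normal NLL up to a constant.
    return float(np.mean(0.5 * np.sum(p ** 2, axis=1)))

# In the variational scheme the reconstruction x would minimize
#   data_fidelity(forward(x), y) + lam * patch_regularizer(x),
# with the same regularizer reused across forward operators.
img = np.random.default_rng(1).standard_normal((32, 32))
r = patch_regularizer(img)
```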
Magnetic resonance imaging (MRI) with high resolution (HR) provides more detailed information for accurate diagnosis and quantitative image analysis. Despite significant progress, most existing medical image reconstruction networks have two flaws: 1) all of them are designed as black boxes, thus lacking sufficient interpretability, which further limits their practical applications; interpretable neural network models are of significant interest since they enhance the trustworthiness required in clinical practice when dealing with medical images; and 2) most existing SR reconstruction approaches only use a single contrast or a simple multi-contrast fusion mechanism, neglecting the complex relationships among different contrasts that are critical for SR improvement. To address these issues, in this paper, a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) is proposed for medical image SR reconstruction. The model-guided image SR reconstruction approach solves a manually designed objective function to reconstruct HR MRI. We unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix and an explicit multi-contrast relationship matrix into account during end-to-end optimization. Extensive experiments on the multi-contrast IXI dataset and the BraTS 2019 dataset demonstrate the superiority of our proposed model.
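The deep-unfolding principle behind networks like MGDUN can be sketched with toy operators (a random observation matrix, not the paper's MRI model): each network stage corresponds to one iteration of a model-guided update x <- x - eta * A^T (A x - y), with the learned modules of the real network replacing hand-crafted prior steps. The sketch below only illustrates this iteration-to-stage correspondence.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)   # toy observation matrix
x_true = rng.standard_normal(40)
y = A @ x_true                                    # simulated measurements

def unfolded_stages(y, A, n_stages=200, eta=0.2):
    """Each 'stage' is one gradient step on ||A x - y||^2 / 2; a deep
    unfolding network fixes n_stages and learns the per-stage modules."""
    x = np.zeros(A.shape[1])
    for _ in range(n_stages):
        x = x - eta * A.T @ (A @ x - y)
    return x

x_hat = unfolded_stages(y, A)
residual = float(np.linalg.norm(A @ x_hat - y))
```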
Ultrasonography offers an inexpensive, widely accessible, and compact medical imaging solution. However, compared to other imaging modalities, such as CT and MRI, ultrasound images notoriously suffer from strong speckle noise, which originates from the random interference of sub-wavelength scattering. This deteriorates ultrasound image quality and makes interpretation challenging. We here propose a new unsupervised ultrasound despeckling and image denoising method based on maximum-a-posteriori estimation with deep generative priors learned from high-quality MRI images. To model the generative tissue reflectivity prior, we exploit normalizing flows, which in recent years have shown to be very powerful in modeling signal priors across a variety of applications. To facilitate generalization, we factorize the prior and train our flow model on patches from the NYU fastMRI (fully sampled) dataset. This prior is then used for inference in an iterative denoising scheme. We first validate the utility of our learned priors on noisy MRI data (without domain shift of the prior), and then evaluate performance on simulated and in-vivo ultrasound images from the PICMUS and CUBDL datasets. The results show that the method outperforms other (unsupervised) ultrasound denoising methods (NLM and OBNLM), both quantitatively and qualitatively.
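The MAP denoising scheme described above can be sketched in one dimension, with a simple smoothness (Tikhonov) prior standing in for the learned normalizing-flow prior: the estimate minimizes ||y - x||^2 / (2 sigma^2) - log p(x) by gradient descent. All hyperparameters here are illustrative, not the paper's.

```python
import numpy as np

def map_denoise(y, sigma=0.3, lam=2.0, n_iter=200, step=0.05):
    """Gradient descent on the MAP objective with a stand-in prior
    -log p(x) = (lam/2) * sum_i (x[i+1] - x[i])^2 (Tikhonov smoothness)."""
    x = y.copy()
    for _ in range(n_iter):
        grad_data = (x - y) / sigma ** 2
        # Gradient of the smoothness prior (Neumann boundaries).
        d = np.diff(x, prepend=x[:1], append=x[-1:])
        grad_prior = lam * (d[:-1] - d[1:])
        x = x - step * (grad_data + grad_prior)
    return x

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))   # smooth "tissue" signal
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = map_denoise(noisy)
```

Replacing `grad_prior` with the gradient of a flow's negative log-likelihood recovers the structure of the iterative scheme in the abstract.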
Because of the necessity to obtain high-quality images with minimal radiation doses, such as in low-field magnetic resonance imaging (MRI), super-resolution reconstruction in medical imaging has become more popular. However, due to the complexity and high aesthetic requirements of medical imaging, image super-resolution reconstruction remains a difficult challenge. In this paper, we offer a deep-learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN). The integrated system can extract more precise texture information and focus more on important locations through global image matching after successfully inserting Transformer into the generative adversarial network for image reconstruction. Furthermore, we weighted the combination of content loss, adversarial loss, and adversarial feature loss as the final multi-task loss function during the training of our proposed model T-GAN. In comparison to established measures like PSNR and SSIM, our suggested T-GAN achieves optimal performance and recovers more texture features in super-resolution reconstruction of MRI-scanned images of the knees and abdomen.
Magnetic resonance imaging (MRI) is important in the clinic for producing high-resolution images for diagnosis, but the acquisition time for high-resolution images is long. Deep-learning-based MRI super-resolution methods can reduce scan time without complicated sequence programming, but may create additional artifacts due to the discrepancy between training data and testing data. A data-consistency layer can improve deep-learning results but requires raw k-space data. In this work, we propose a magnitude-image-based data-consistency deep-learning MRI super-resolution method to improve the quality of super-resolved images without raw k-space data. Our experiments show that the proposed method can improve the NRMSE and SSIM of the super-resolved images compared with the same convolutional neural network (CNN) blocks without the data-consistency module.
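One plausible form of magnitude-image data consistency, sketched under assumptions that may differ from the paper's module: re-impose the low-frequency k-space computed from the measured LR magnitude image onto the CNN's super-resolved output, so the SR image stays consistent with what was actually acquired even though no raw k-space is available.

```python
import numpy as np

def data_consistency(sr: np.ndarray, lr: np.ndarray) -> np.ndarray:
    """Substitute the central (low-frequency) k-space of the SR image
    with the k-space of the LR magnitude image (toy sketch)."""
    n = lr.shape[0]
    k_sr = np.fft.fftshift(np.fft.fft2(sr))
    k_lr = np.fft.fftshift(np.fft.fft2(lr))
    c = sr.shape[0] // 2               # center of the HR k-space grid
    h = n // 2
    scale = (sr.shape[0] / n) ** 2     # DFT scaling between grid sizes
    k_sr[c - h:c + h, c - h:c + h] = scale * k_lr
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_sr)))

rng = np.random.default_rng(0)
lr = rng.uniform(size=(16, 16))        # measured LR magnitude image
sr = rng.uniform(size=(32, 32))        # stand-in CNN super-resolved output
out = data_consistency(sr, lr)
```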
\textit{Objective:} Gadolinium-based contrast agents (GBCAs) have been widely used to better visualize disease in brain magnetic resonance imaging (MRI). However, gadolinium deposition within the brain and body has raised safety concerns about the use of GBCAs. Therefore, the development of novel approaches that can decrease or even eliminate GBCA exposure while providing similar contrast information would be of significant clinical use. \textit{Methods:} In this work, we present a deep-learning-based approach for contrast-enhanced T1 synthesis in brain tumor patients. A 3D high-resolution fully convolutional network (FCN), which maintains high-resolution information throughout processing and aggregates multi-scale information in parallel, is designed to map pre-contrast MRI sequences to the contrast-enhanced MRI sequence. Specifically, three pre-contrast MRI sequences, T1, T2, and the apparent diffusion coefficient (ADC) map, are used as inputs, and the post-contrast T1 sequence is used as the target output. To alleviate the data imbalance between normal tissues and tumor regions, we introduce a local loss to increase the contribution of the tumor regions, which leads to better enhancement results on tumors. \textit{Results:} Extensive quantitative and visual assessments were performed; our proposed model achieves a PSNR of 28.24 dB in the brain and 21.2 dB in the tumor regions. \textit{Conclusion and Significance:} Our results suggest the potential of replacing GBCAs with synthetic contrast images generated by deep learning. The code is available at \url{https://github.com/chenchao666/contrast-enhanced-mri-synthesis}
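The local-loss idea above can be illustrated with a masked, weighted L1: up-weight the (small) tumor region so its error is not drowned out by the much larger normal-tissue area. The weight value and loss form below are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def local_l1(pred, target, tumor_mask, w=10.0):
    """Weighted L1 loss that up-weights voxels inside the tumor mask.
    `w` is an assumed hyperparameter, not the paper's value."""
    weights = np.where(tumor_mask, w, 1.0)
    return float(np.sum(weights * np.abs(pred - target)) / np.sum(weights))

pred = np.zeros((8, 8))                  # network output (toy)
mask = np.zeros((8, 8), dtype=bool)
mask[2:4, 2:4] = True                    # small "tumor" region (4 of 64 px)
target = np.zeros((8, 8))
target[mask] = 1.0                       # enhancement only in the tumor

plain = float(np.mean(np.abs(pred - target)))   # 4/64 = 0.0625
weighted = local_l1(pred, target, mask)         # 40/100 = 0.4
```

With the up-weighting, missing the tumor enhancement costs 0.4 instead of 0.0625, so the optimizer is pushed toward the clinically important region.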
Machine learning models are commonly trained end-to-end and in a supervised setting, using paired (input, output) data. Examples include recent super-resolution methods that train on pairs of (low-resolution, high-resolution) images. However, these end-to-end approaches require re-training every time there is a distribution shift in the inputs (e.g., night images vs. daylight) or in the relevant latent variables (e.g., camera blur or hand motion). In this work, we leverage state-of-the-art (SOTA) generative models (here StyleGAN2) for building powerful image priors, which enable application of Bayes' theorem for many downstream reconstruction tasks. Our method, Bayesian Reconstruction through Generative Models (BRGM), uses a single pre-trained generator model to solve different image restoration tasks, i.e., super-resolution and in-painting, by combining it with different forward corruption models. We keep the weights of the generator model fixed and reconstruct the image by estimating the input latent vector that generates the reconstructed image. We further use variational inference to approximate the posterior distribution of the latent vector, from which we sample multiple solutions. We demonstrate BRGM on three large and diverse datasets: (i) 60,000 images from the Flickr Faces High Quality dataset, (ii) 240,000 chest X-rays from MIMIC III, and (iii) a combined collection of 5 brain MRI datasets with 7,329 scans. Across all three datasets and without any dataset-specific hyperparameter tuning, our simple approach yields performance competitive with current task-specific state-of-the-art methods on super-resolution and in-painting, while being more stable and requiring no training. Our source code and pre-trained models are available online: https://razvanmarinescu.github.io/brgm/
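A toy version of the BRGM-style reconstruction loop, with a linear stand-in "generator" G(z) = B z instead of the pre-trained StyleGAN2: the generator weights stay frozen and only the latent z is optimized so that the corrupted generator output matches the observation y = f(G(z_true)). Everything here is illustrative; the real method optimizes a MAP objective through a deep generator.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((64, 8))           # frozen "generator" weights
F = rng.standard_normal((16, 64)) / 8.0    # forward corruption model f
z_true = rng.standard_normal(8)
y = F @ (B @ z_true)                       # corrupted observation

z = np.zeros(8)
for _ in range(2000):                      # gradient descent on z only;
    r = F @ (B @ z) - y                    # B and F are never updated
    z = z - 0.02 * (B.T @ (F.T @ r))

x_hat = B @ z                              # reconstructed "image"
err = float(np.linalg.norm(F @ x_hat - y))
```

Swapping `F` for a downsampling or masking operator changes the task (super-resolution vs. in-painting) without touching the prior, which is the point of the single-generator design.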
In practice, it is difficult to collect paired training data, while unpaired samples broadly exist. Current approaches aim to generate synthesized training data from unpaired samples by exploring the relationship between the corrupted and clean data. This work proposes LUD-VAE, a deep generative method to learn the joint probability density function from data sampled from the marginal distributions. Our approach is based on a carefully designed probabilistic graphical model in which the clean and corrupted data domains are conditionally independent. Using variational inference, we maximize the evidence lower bound (ELBO) to estimate the joint probability density function. Furthermore, we show that the ELBO is computable without paired samples under the inference-invariant assumption. This property provides the mathematical rationale for our approach in the unpaired setting. Finally, we apply our method to real-world image denoising, super-resolution, and low-light image enhancement tasks, and train models using the synthetic data generated by LUD-VAE. Experimental results validate the advantages of our method over other approaches.
Conditional normalizing flows can generate diverse image samples for solving inverse problems. Most normalizing flows for inverse problems in imaging employ the conditional affine coupling layer, which can generate diverse images quickly. However, unintended severe artifacts are occasionally observed in their outputs. In this work, we address this critical issue by investigating the origins of these artifacts and proposing conditions to avoid them. First, we reveal, both empirically and theoretically, that these problems are caused by an ``exploding variance'' in the conditional affine coupling layer for certain out-of-distribution (OOD) conditional inputs. Then, we further validate that the probability of causing erroneous artifacts in pixels is highly correlated with a Mahalanobis-distance-based OOD score for inverse problems in imaging. Lastly, based on our investigations, we propose a condition for avoiding exploding variance and, based on it, suggest a simple remedy that substitutes the affine coupling layers with modified rational quadratic spline coupling layers in normalizing flows, to encourage the robustness of the generated image samples. Our experimental results demonstrate that the suggested methods effectively suppress critical artifacts occurring in normalizing flows for super-resolution space generation and low-light image enhancement, without compromising performance.
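The exploding-variance failure mode named above can be shown numerically with a toy conditional affine coupling y = x * exp(s(c)) + t(c): an out-of-distribution conditioning input pushes the log-scale s(c) far positive and blows up the output variance. The clamp shown is one common mitigation used in coupling-layer implementations; the paper's remedy is instead to swap in spline couplings. The stand-in scale network is hypothetical.

```python
import numpy as np

def s_net(c: np.ndarray) -> np.ndarray:
    # Stand-in for the learned log-scale network of the coupling layer.
    return c * 2.0

def affine_coupling(x, c, clamp=None):
    """Forward pass y = x * exp(s(c)); translation term omitted since
    it does not affect the variance."""
    s = s_net(c)
    if clamp is not None:
        s = clamp * np.tanh(s / clamp)   # soft-clamped log-scale
    return x * np.exp(s)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
c_in = np.zeros(1000)         # in-distribution conditioning input
c_ood = np.full(1000, 6.0)    # far out-of-distribution conditioning input

var_in = float(np.var(affine_coupling(x, c_in)))       # ~1
var_ood = float(np.var(affine_coupling(x, c_ood)))     # blown up by exp(2s)
var_clamped = float(np.var(affine_coupling(x, c_ood, clamp=2.0)))
```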