The onset of rheumatic diseases such as rheumatoid arthritis is often subclinical, which makes early detection challenging. Characteristic changes in anatomy can nevertheless be detected with imaging techniques such as MRI or CT. Modern imaging techniques such as chemical exchange saturation transfer (CEST) MRI promise to further improve early detection by imaging metabolites in vivo. To image the small structures in a patient's joints, which are typically among the first regions affected by the disease, high resolution is required for CEST MR imaging. Currently, however, CEST MRI suffers from inherently low resolution due to the physical constraints of the acquisition. In this work, we compare established up-sampling techniques against neural-network-based super-resolution methods. We show that neural networks are able to learn the mapping from low-resolution to high-resolution unsaturated CEST images considerably better than current methods. On the test set, a PSNR of 32.29 dB (+10%), an NRMSE of 0.14 (+28%), and an SSIM of 0.85 (+15%) were achieved with a ResNet neural network, substantially improving over the baseline. This work paves the way for the prospective investigation of neural networks for super-resolution CEST MRI and may enable earlier detection of the onset of rheumatic diseases.
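The PSNR figure reported above is a standard log-scale measure derived from the mean squared error between a reference and a reconstruction. A minimal sketch of the computation (the function name and toy pixel values are illustrative, not from the paper):

```python
import math

def psnr(reference, reconstruction, max_value=1.0):
    """Peak signal-to-noise ratio in dB between two equally sized images,
    given here as flat lists of pixel intensities in [0, max_value]."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    return 10.0 * math.log10(max_value ** 2 / mse)

ref = [0.0, 0.5, 1.0, 0.25]
rec = [0.1, 0.5, 0.9, 0.25]
print(round(psnr(ref, rec), 2))  # → 23.01 (higher is better)
```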
Magnetic resonance imaging (MRI) is important in clinical practice because it produces high-resolution images for diagnosis, but acquiring high-resolution images requires long scan times. Deep-learning-based MRI super-resolution methods can reduce scan time without complex sequence programming, but they may introduce additional artifacts due to discrepancies between training and test data. A data-consistency layer can improve deep learning results but requires the raw k-space data. In this work, we propose a magnitude-image-based data-consistency deep learning MRI super-resolution method to improve the quality of super-resolution images without requiring raw k-space data. Our experiments show that, compared with the same convolutional neural network (CNN) blocks without the data-consistency module, the proposed method improves the NRMSE and SSIM of the super-resolution images.
Low-field (LF) MRI scanners have the power to revolutionize medical imaging by providing a portable and cheaper alternative to high-field MRI scanners. However, such scanners are usually significantly noisier and of lower quality than their high-field counterparts. The aim of this paper is to improve the SNR and overall image quality of low-field MRI scans to improve diagnostic capability. To address this issue, we propose a Nested U-Net neural network architecture super-resolution algorithm that outperforms previously suggested deep learning methods, with an average PSNR of 78.83 and SSIM of 0.9551. We tested our network on artificially noised, downsampled synthetic data from a major T1-weighted MRI image dataset called the T1-mix dataset. One board-certified radiologist scored 25 images on the Likert scale (1-5), assessing overall image quality, anatomical structure, and diagnostic confidence across our architecture and other published works (SR DenseNet, Generator Block, SRCNN, etc.). We also introduce a new type of loss function called natural log mean squared error (NLMSE). In conclusion, we present a more accurate deep learning method for single image super-resolution applied to synthetic low-field MRI via a Nested U-Net architecture.
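The abstract names a new NLMSE loss but does not spell out its formula. One plausible reading, the natural logarithm of the MSE, is sketched below purely as an assumption, not the paper's definition:

```python
import math

def nlmse(reference, prediction):
    """Hypothetical 'natural log mean squared error': ln(MSE).
    This is a guess at the definition; the paper's formula may differ."""
    mse = sum((r - p) ** 2 for r, p in zip(reference, prediction)) / len(reference)
    return math.log(mse)

# Compressing MSE through a log flattens its scale, which would temper the
# penalty on rare large errors relative to plain MSE.
```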
Shortening acquisition time and reducing motion artifacts are two of the most important problems in magnetic resonance imaging. As a promising solution, deep-learning-based high-quality MR image restoration has been investigated to produce higher-resolution, motion-artifact-free images from lower-resolution images acquired with shortened acquisition times, without costing additional acquisition time or modifying the pulse sequence. However, numerous problems remain that prevent deep learning methods from becoming practical in clinical settings. Specifically, most previous works focus on the network model but ignore the impact of various down-sampling strategies on acquisition time. Moreover, long inference times and high GPU memory consumption are also bottlenecks to deploying most of the previous models in clinics. Furthermore, prior studies generate retrospective motion artifacts using random motion, leading to uncontrollable severity of the artifacts. More importantly, doctors are unsure whether the generated MR images are trustworthy, making diagnosis difficult. To overcome all these problems, we employ a unified 2D deep learning neural network for both 3D MRI super-resolution and motion-artifact reduction, demonstrating that this framework can achieve better performance on 3D MRI restoration tasks than other state-of-the-art methods while keeping GPU memory consumption and inference time markedly lower, and thus being easier to deploy. We also analyze several down-sampling strategies based on the acceleration factor, including multiple combinations of in-plane and through-plane down-sampling, and develop a controllable and quantifiable method for generating motion artifacts. Finally, pixel-wise uncertainty is computed and used to estimate the accuracy of the generated images, providing additional information for reliable diagnosis.
The spatial resolution of medical images can be improved using super-resolution methods. Real Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the recent effective state-of-the-art approaches for producing a higher-resolution image from a lower-resolution input. In this paper, we apply this method to enhance the spatial resolution of 2D MR images. In our proposed approach, we slightly modify the architecture to train the network on 2D magnetic resonance images (MRI) from the Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The obtained results are validated qualitatively and quantitatively by computing SSIM (structural similarity index measure), NRMSE (normalized root mean squared error), MAE (mean absolute error), and VIF (visual information fidelity) values.
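Two of the metrics named above, MAE and NRMSE, reduce to short formulas. A minimal sketch, assuming the common range-normalization convention for NRMSE (other papers normalize by the mean or the maximum; the toy values are illustrative):

```python
import math

def mae(ref, pred):
    """Mean absolute error between two equally sized intensity lists."""
    return sum(abs(r - p) for r, p in zip(ref, pred)) / len(ref)

def nrmse(ref, pred):
    """Root mean squared error normalized by the reference intensity range
    (one common convention; the paper may use a different normalizer)."""
    mse = sum((r - p) ** 2 for r, p in zip(ref, pred)) / len(ref)
    return math.sqrt(mse) / (max(ref) - min(ref))

ref = [0.0, 0.5, 1.0]
pred = [0.1, 0.5, 0.8]
print(round(mae(ref, pred), 3))    # → 0.1
print(round(nrmse(ref, pred), 3))  # → 0.129
```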
Because of the necessity to obtain high-quality images with minimal radiation doses, such as in low-field magnetic resonance imaging (MRI), super-resolution reconstruction in medical imaging has become more popular. However, due to the complexity and high fidelity requirements of medical imaging, image super-resolution reconstruction remains a difficult challenge. In this paper, we offer a deep learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN). The integrated system can extract more precise texture information and focus more on important locations through global image matching after successfully inserting Transformer into the generative adversarial network for image reconstruction. Furthermore, we weighted the combination of content loss, adversarial loss, and adversarial feature loss as the final multi-task loss function during the training of our proposed model T-GAN. In comparison to established measures like PSNR and SSIM, our suggested T-GAN achieves optimal performance and recovers more texture features in super-resolution reconstruction of MRI scanned images of the knees and belly.
High Resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis. In MRI, restricted by hardware capacity, scan time, and patient cooperation ability, isotropic 3D HR image acquisition typically requires a long scan time and results in small spatial coverage and low SNR. Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images could be recovered from low-resolution (LR) input via single image super-resolution (SISR) algorithms. However, most existing SISR methods tend to approach a scale-specific projection between LR and HR images, thus these methods can only deal with a fixed up-sampling rate. For achieving different up-sampling rates, multiple SR networks have to be built up respectively, which is very time-consuming and resource-intensive. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images. Then the SR task is converted to representing the implicit voxel function via deep neural networks from a set of paired HR-LR training examples. The ArSSR model consists of an encoder network and a decoder network. Specifically, the convolutional encoder network extracts feature maps from the LR input images and the fully-connected decoder network approximates the implicit voxel function. Due to the continuity of the learned function, a single ArSSR model can achieve arbitrary up-sampling rate reconstruction of HR images from any input LR image after training. Experimental results on three datasets show that the ArSSR model can achieve state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model to achieve arbitrary up-sampling scales.
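The arbitrary-scale idea above can be illustrated in a few lines: treat the image as a continuous function of spatial coordinates and sample it on a grid of any density. In this sketch a hand-written function stands in for the learned encoder/decoder (illustrative only, not the paper's network):

```python
def implicit_voxel_function(x, y, z):
    """Stand-in for MLP(encoder_features, coords) -> intensity."""
    return x + 2 * y + 3 * z

def reconstruct(n):
    """Sample the continuous function on an n x n x n grid, using
    normalized coordinates in [0, 1] along each axis."""
    return [
        implicit_voxel_function(i / (n - 1), j / (n - 1), k / (n - 1))
        for i in range(n) for j in range(n) for k in range(n)
    ]

# The same "model" serves any up-sampling rate: only the grid density changes.
low = reconstruct(8)    # 8^3 = 512 voxels
high = reconstruct(32)  # 32^3 = 32768 voxels from the same function
```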
Retinal optical coherence tomography angiography (OCTA) with high resolution is important for the quantification and analysis of the retinal vasculature. However, the resolution of OCTA images is inversely proportional to the field of view at the same sampling frequency, which hinders clinicians from analyzing larger vascular areas. In this paper, we propose a novel sparse-based domain adaptation super-resolution network (SASR) to reconstruct realistic 6x6 mm2 low-resolution (LR) OCTA images into high-resolution (HR) representations. To be more specific, we first perform a simple degradation of the 3x3 mm2 high-resolution (HR) images to obtain synthetic LR images. An efficient registration method is then employed to register each synthetic LR image with its corresponding region in the 6x6 mm2 image to obtain cropped realistic LR images. We then propose a multi-level super-resolution model for fully-supervised reconstruction of the synthetic data, which guides the realistic LR image reconstruction through a generative-adversarial strategy that allows the synthetic and realistic LR images to be unified in the feature domain. Finally, a novel sparse edge-aware loss is designed to dynamically optimize the vessel edge structure. Extensive experiments on two OCTA datasets demonstrate that our method performs better than state-of-the-art super-resolution reconstruction methods. In addition, we also investigate the performance of the reconstruction results on retinal structure segmentation, which further validates the effectiveness of our approach.
Magnetic Resonance Fingerprinting (MRF) is an efficient quantitative MRI technique that can extract important tissue and system parameters such as T1, T2, B0, and B1 from a single scan. This property also makes it attractive for retrospectively synthesizing contrast-weighted images. In general, contrast-weighted images like T1-weighted, T2-weighted, etc., can be synthesized directly from parameter maps through spin-dynamics simulation (i.e., Bloch or Extended Phase Graph models). However, these approaches often exhibit artifacts due to imperfections in the mapping, the sequence modeling, and the data acquisition. Here we propose a supervised learning-based method that directly synthesizes contrast-weighted images from the MRF data without going through the quantitative mapping and spin-dynamics simulation. To implement our direct contrast synthesis (DCS) method, we deploy a conditional Generative Adversarial Network (GAN) framework and propose a multi-branch U-Net as the generator. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. In-vivo experiments demonstrate excellent image quality compared to simulation-based contrast synthesis and previous DCS methods, both visually as well as by quantitative metrics. We also demonstrate cases where our trained model is able to mitigate in-flow and spiral off-resonance artifacts that are typically seen in MRF reconstructions and thus more faithfully represent conventional spin echo-based contrast-weighted images.
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data and, more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods to better aid clinicians in their decision-making to improve patient outcomes, by reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
Magnetic resonance imaging (MRI) with high resolution (HR) provides more detailed information for accurate diagnosis and quantitative image analysis. Despite significant progress, most existing medical image reconstruction networks have two flaws: 1) All of them are designed as black boxes, thus lacking sufficient interpretability, which further limits their practical application. Interpretable neural network models are of significant interest since they enhance the trustworthiness required in clinical practice when dealing with medical images. 2) Most existing SR reconstruction approaches only use a single contrast or a simple multi-contrast fusion mechanism, neglecting the complex relationships between different contrasts that are critical for SR improvement. To deal with these issues, in this paper, a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction is proposed. The model-guided image SR reconstruction approach solves a manually designed objective function to reconstruct HR MRI. We unroll the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix and the explicit multi-contrast relationship matrix into account during end-to-end optimization. Extensive experiments on the multi-contrast IXI dataset and the BraTS 2019 dataset demonstrate the superiority of our proposed model.
In 2D multi-slice magnetic resonance (MR) acquisitions, the through-plane signals are typically of lower resolution than the in-plane signals. While contemporary super-resolution (SR) methods aim to recover the underlying high-resolution volume, the estimated high-frequency information is implicit via end-to-end data-driven training rather than being explicitly stated and sought. To address this, we reframe the SR problem statement in terms of perfect-reconstruction filter banks, which allows us to identify and directly estimate the missing information. In this work, we propose a two-stage approach to approximate the perfect-reconstruction filter bank corresponding to the anisotropic acquisition of a particular scan. In stage 1, we estimate the missing filters using gradient descent and, in stage 2, we use deep networks to learn the mapping from coarse coefficients to detail coefficients. In addition, the proposed formulation does not rely on external training data, circumventing the need for domain-shift correction. Under our approach, SR performance is improved, particularly in "slice gap" scenarios, likely due to the constrained solution space imposed by the framework.
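The perfect-reconstruction framing can be illustrated with the simplest such filter bank, the two-channel Haar bank (a generic signal-processing example, not the paper's estimated filters): the analysis stage splits a signal into coarse and detail coefficients, and the synthesis stage recovers the input exactly. The approach above amounts to estimating the missing detail branch for a specific anisotropic acquisition.

```python
import math

def haar_analysis(signal):
    """One level of the two-channel Haar analysis bank: low-pass (coarse)
    and high-pass (detail) coefficients, each downsampled by two."""
    s = 1.0 / math.sqrt(2.0)
    pairs = list(zip(signal[0::2], signal[1::2]))
    coarse = [(a + b) * s for a, b in pairs]
    detail = [(a - b) * s for a, b in pairs]
    return coarse, detail

def haar_synthesis(coarse, detail):
    """Synthesis bank that undoes the analysis: perfect reconstruction."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for c, d in zip(coarse, detail):
        out.extend([(c + d) * s, (c - d) * s])
    return out

x = [4.0, 2.0, 5.0, 5.0]
coarse, detail = haar_analysis(x)
reconstructed = haar_synthesis(coarse, detail)
# reconstructed equals x up to floating-point rounding
```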
Super-resolving the magnetic resonance (MR) image of a target contrast under the guidance of a corresponding auxiliary contrast, which provides additional anatomical information, is a new and effective solution for fast MR imaging. However, current multi-contrast super-resolution (SR) methods tend to concatenate different contrasts directly, ignoring their relationships in different clues, e.g., in the high- and low-intensity regions. In this study, we propose a separable attention network (comprising high-intensity priority attention and low-intensity separation attention), named SANet. Our SANet can explore the high- and low-intensity regions in the "forward" and "reverse" directions with the help of the auxiliary contrast, while learning clearer anatomical structure and edge information for the SR of the target-contrast MR image. SANet provides three appealing benefits: (1) It is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the high- and low-intensity regions, diverting more attention to refining any uncertain details between these regions and correcting the fine areas in the reconstructed results. (2) A multi-stage integration module is proposed to learn the response of multi-contrast fusion at multiple stages, obtain the dependency between the fused representations, and boost their representation ability. (3) Extensive experiments against various state-of-the-art multi-contrast SR methods on fastMRI and clinical in vivo datasets demonstrate the superiority of our model.
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR), which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.
Deep-learning-based brain magnetic resonance imaging (MRI) reconstruction methods have the potential to accelerate the MRI acquisition process. Nevertheless, the scientific community lacks appropriate benchmarks to assess the MRI reconstruction quality of high-resolution brain images, and to evaluate how these proposed algorithms will behave in the presence of small, but expected, data distribution shifts. The Multi-Coil Magnetic Resonance Image (MC-MRI) Reconstruction Challenge provides a benchmark that aims at addressing these issues, using a large dataset of high-resolution, three-dimensional, T1-weighted MRI scans. The challenge has two primary goals: 1) to compare different MRI reconstruction models on this dataset and 2) to assess the generalizability of these models to data acquired with a different number of receiver coils. In this paper, we describe the challenge experimental design and summarize the results of a set of baseline and state-of-the-art brain MRI reconstruction models. We provide relevant comparative information on the current state of the art in MRI reconstruction and highlight the challenges of obtaining generalizable models that are required prior to broader clinical adoption. The MC-MRI benchmark data, evaluation code, and current challenge leaderboard are publicly available. They provide an objective performance assessment for future developments in the field of brain MRI reconstruction.
Magnetic resonance spectroscopic imaging (MRSI) is a valuable tool for studying metabolic activity in the human body, but current applications are limited to low spatial resolution. Existing deep-learning-based MRSI super-resolution methods require training a separate network for each upscaling factor, which is time-consuming and memory-inefficient. We tackle this multi-scale super-resolution problem using a filter-scaling strategy that modulates the convolution filters based on the upscaling factor, so that a single network can be used for various upscaling factors. Observing that each metabolite has distinct spatial characteristics, we also modulate the network based on the specific metabolite. Furthermore, our network is conditioned on the weight of the adversarial loss, so that the perceptual sharpness of the super-resolved metabolic maps can be adjusted within a single network. We incorporate these network conditionings using a novel multi-conditional module. Experiments were carried out on a 1H-MRSI dataset from 15 high-grade glioma patients. Results show that the proposed network achieves the best performance among several multi-scale super-resolution methods and can provide super-resolved metabolic maps with adjustable sharpness.
Changes in cardiovascular hemodynamics are closely related to the development of aortic regurgitation (AR), a type of valvular heart disease. Pressure gradients derived from blood flow are used to indicate AR onset and evaluate its severity. These metrics can be obtained non-invasively using four-dimensional (4D) flow magnetic resonance imaging (MRI), where the accuracy depends primarily on the spatial resolution. However, insufficient resolution often results from the limitations of 4D flow MRI and the complexity of AR hemodynamics. To address this, computational fluid dynamics simulations were converted into synthetic 4D flow MRI data and used to train a variety of neural networks. These networks generated super-resolution, full-field phase images with an upsampling factor of 4. Results showed low velocity errors, high structural similarity scores, and improved learning capability over previous work. Further validation was performed on two sets of in-vivo 4D flow MRI data, demonstrating success in denoising flow images. This approach presents an opportunity to comprehensively analyze AR hemodynamics in a non-invasive manner.
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
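The perceptual loss described above is a weighted sum of a content term and an adversarial term; the SRGAN paper weights the adversarial term by 10^-3. A schematic sketch, where the loss values are placeholders rather than real network outputs:

```python
def perceptual_loss(content_loss, adversarial_loss, adv_weight=1e-3):
    """SRGAN-style perceptual loss: content term (e.g. VGG-feature MSE)
    plus a lightly weighted adversarial term."""
    return content_loss + adv_weight * adversarial_loss

# Placeholder values standing in for one training step's loss terms:
loss = perceptual_loss(content_loss=0.5, adversarial_loss=2.0)
print(loss)  # → 0.502
```

The small adversarial weight lets the discriminator nudge outputs toward the natural-image manifold without overwhelming the content term that anchors the reconstruction.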
Purpose: To develop a robust partial Fourier (PF) reconstruction algorithm applicable to diffusion-weighted (DW) images with non-smooth phase variations. Methods: Based on an unrolled proximal splitting algorithm, a neural network architecture is derived which alternates between data-consistency operations and regularization implemented by recurrent convolutions. In order to exploit correlations, multiple repetitions of the same slice are jointly reconstructed under consideration of permutation invariance. The algorithm is trained on DW liver data of 60 volunteers and evaluated on retrospectively and prospectively sub-sampled data of different anatomies and resolutions. Results: The proposed method is able to significantly outperform conventional PF techniques on retrospectively sub-sampled data, in terms of quantitative measures as well as perceptual image quality. In this context, joint reconstruction of repetitions as well as the particular type of recurrent network unrolling are found to be beneficial with respect to reconstruction quality. On prospectively PF-sampled data, the proposed method enables DW imaging without sacrificing image resolution or introducing additional artifacts. Alternatively, it can be used to counter the TE increase of acquisitions at higher resolution. Furthermore, generalizability is shown on brain data exhibiting anatomies and contrasts not present in the training set. Conclusion: This work demonstrates that robust PF reconstruction of DW data is feasible even at strong PF factors in anatomies prone to phase variations. Since the proposed method does not rely on smoothness priors of the phase but uses learned recurrent convolutions instead, the artifacts of conventional PF methods can be avoided.
In recent years, there have been several advances in the task of image super-resolution using state-of-the-art deep-learning-based architectures. Many previously published super-resolution techniques require high-end, top-of-the-line graphics processing units (GPUs) to perform image super-resolution. With the growing advances in deep learning approaches, neural networks have become increasingly compute-hungry. We take a step back and focus on creating a real-time, efficient solution. We present an architecture that is faster and smaller in terms of its memory footprint. The proposed architecture uses depthwise separable convolutions to extract features, and it performs on par with other super-resolution GANs (generative adversarial networks) while maintaining real-time inference and a low memory footprint. A real-time super-resolution model enables streaming high-resolution media content even under poor bandwidth conditions. While maintaining an efficient trade-off between accuracy and latency, we are able to produce a comparably performing model that is one-eighth (1/8) the size of super-resolution GANs and computes 74 times faster than super-resolution GANs.
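The savings from depthwise separable convolutions come from factoring a standard k x k convolution into a per-channel depthwise pass followed by a 1x1 pointwise pass. A quick parameter count (biases ignored; the 3x3, 64-channel configuration is just an example):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution layer."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1x1 pointwise
    convolution mapping c_in channels to c_out channels."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 64
print(standard_conv_params(k, c_in, c_out))        # → 36864
print(depthwise_separable_params(k, c_in, c_out))  # → 4672
```

For this configuration the factored form uses roughly 8x fewer weights, which is consistent with the size and speed gains reported above.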