Human organs constantly undergo anatomical changes due to a complex mix of short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Evidently, prior knowledge of these factors is beneficial when modeling their future state, i.e., via image generation. However, most medical image generation tasks rely only on a single input image, ignoring the sequential dependency even when longitudinal data are available. Sequence-aware deep generative models, whose input is a sequence of ordered and timestamped images, are still underexplored in the medical imaging domain, which features several unique challenges: 1) sequences of varying lengths; 2) missing data or frames; and 3) high dimensionality. To this end, we propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images. Recently, diffusion models have shown promising results in high-fidelity image generation. Our method extends this new technique by introducing a sequence-aware transformer as the conditional module in a diffusion model. This novel design enables learning of longitudinal dependencies even with missing data during training and allows autoregressive generation of a sequence of images during inference. Our extensive experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods.
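As a rough illustration of the autoregressive inference described above, the following Python sketch appends each newly generated frame to the conditioning sequence before sampling the next one. The `denoiser` and `condition_encoder` callables are hypothetical stand-ins for SADM's actual networks, not the paper's implementation.

```python
import torch

def generate_sequence(denoiser, condition_encoder, observed, num_future, steps=50):
    """Autoregressively sample future frames; each generated frame is
    appended to the conditioning sequence before producing the next one."""
    frames = list(observed)                           # ordered, timestamped frames
    for _ in range(num_future):
        cond = condition_encoder(torch.stack(frames))  # sequence-aware conditioning
        x = torch.randn_like(frames[-1])               # start from pure noise
        for t in reversed(range(steps)):               # simplified reverse diffusion
            x = denoiser(x, t, cond)                   # one denoising step given the condition
        frames.append(x)
    return torch.stack(frames[len(observed):])
```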
Imputation of missing images via source-to-target modality translation can facilitate downstream tasks in medical imaging. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GANs). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity and diversity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved reliability in medical image synthesis. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are coupled with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with two coupled diffusion processes that synthesize the target given the source and the source given the target. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers superior performance against competing baselines both qualitatively and quantitatively.
Generating temporally coherent, high-fidelity video is an important milestone in generative modeling research. We make progress towards this milestone by proposing a diffusion model for video generation that shows very promising initial results. Our model is a natural extension of the standard image diffusion architecture and enables joint training on image and video data, which we find reduces the variance of minibatch gradients and speeds up optimization. To generate long and higher-resolution videos, we introduce a new conditional sampling technique for spatial and temporal video extension that performs better than previously proposed methods. We present results on a large text-conditioned video generation task, as well as state-of-the-art results on established benchmarks for video prediction and unconditional video generation. Supplementary material is available at https://video-diffusion.github.io/
Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model based on two stages: a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, the model is tasked with recovering the original input data by learning to gradually reverse the diffusion process, step by step. Despite their known computational burden, i.e., the large number of steps involved during sampling, diffusion models are widely appreciated for the quality and diversity of their generated samples. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, based on denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational autoencoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
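The forward diffusion stage described here has a well-known closed form: x_t can be sampled directly from x_0 without simulating every intermediate step. Below is a minimal PyTorch sketch of this standard DDPM noising process; the schedule values are the common defaults, shown for illustration.

```python
import torch

def forward_diffusion(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM forward process:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)[t]              # \bar{alpha}_t
    noise = torch.randn_like(x0)
    xt = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    return xt, noise

betas = torch.linspace(1e-4, 0.02, 1000)                     # standard linear schedule
x0 = torch.randn(1, 3, 32, 32)                               # a toy "image"
xt, eps = forward_diffusion(x0, t=500, betas=betas)          # halfway through the chain
```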
Deformable image registration is one of the fundamental tasks in medical imaging and computer vision. Classical registration algorithms usually rely on iterative optimization to provide accurate deformations, which requires high computational cost. Although many deep-learning-based methods have been developed for fast image registration, estimating deformation fields with fewer topological folding problems remains challenging. Furthermore, these methods only register to a single fixed image, and it is not possible to obtain continuously varying registration results between the moving and fixed images. To address this, we present a novel diffusion-model-based probabilistic image registration method, called DiffuseMorph. Specifically, our model learns the score function of the deformation between the moving and fixed images. Similar to existing diffusion models, DiffuseMorph not only provides synthetic deformed images through a reverse diffusion process, but also enables various levels of deformation of the moving image along the latent space. Experimental results on 2D facial expression image and 3D brain image registration tasks demonstrate that our method can provide flexible and accurate deformation with topology-preservation capability.
Deep neural networks have brought remarkable breakthroughs in medical image analysis. However, due to their data-hungry nature, the modest dataset sizes of medical imaging projects may hinder their full potential. Generating synthetic data provides a promising alternative, allowing training datasets to be complemented and medical imaging research to be conducted at a larger scale. Recently, diffusion models have caught the attention of the computer vision community by producing photorealistic synthetic images. In this study, we explore generating synthetic images from high-resolution 3D brain images using latent diffusion models. We used T1w MRI images from the UK Biobank dataset (N = 31,740) to train our models to learn the probabilistic distribution of brain images, conditioned on covariates such as age, sex, and brain structure volumes. We found that our models created realistic data and that the conditioning variables can effectively control data generation. Besides that, we created a synthetic dataset with 100,000 brain images and made it openly available to the scientific community.
Conditional diffusion probabilistic models can model the distribution of natural images and can generate diverse and realistic samples based on given conditions. However, their results can often be unrealistic, with observable color shifts and texture artifacts. We believe that this issue results from the divergence between the probabilistic distribution learned by the model and the distribution of natural images. The delicate conditions gradually enlarge this divergence at each sampling timestep. To address this issue, we introduce a new method that brings the predicted samples to the training data manifold using a pretrained unconditional diffusion model. The unconditional model acts as a regularizer and reduces the divergence introduced by the conditional model at each sampling step. We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks. The improvements obtained by our method suggest that such priors can be incorporated as a general plugin for improving conditional diffusion models.
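One plausible way to realize this idea, sketched below under the assumption of a DDIM-style update, is to blend the conditional and unconditional noise predictions at every sampling step. The `weight` knob and all module signatures here are illustrative, not the paper's exact formulation.

```python
import torch

def regularized_eps(eps_cond, eps_uncond, weight=0.2):
    """Blend the conditional prediction with an unconditional prior.
    `weight` (a hypothetical knob) controls how strongly the unconditional
    model pulls the sample back toward the natural-image manifold."""
    return (1.0 - weight) * eps_cond + weight * eps_uncond

def sample_step(x, t, cond_model, uncond_model, alpha_bar, alpha_bar_prev, cond):
    """One DDIM-style reverse step using the regularized noise estimate.
    `alpha_bar` / `alpha_bar_prev` are scalar tensors from the noise schedule."""
    eps = regularized_eps(cond_model(x, t, cond), uncond_model(x, t))
    x0_hat = (x - (1 - alpha_bar).sqrt() * eps) / alpha_bar.sqrt()   # predicted clean sample
    return alpha_bar_prev.sqrt() * x0_hat + (1 - alpha_bar_prev).sqrt() * eps
```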
We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.
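The token count that drives these Gflops is set by how the latent is patchified. Below is a small sketch of a DiT-style patch embedding; the sizes follow the common 32x32x4 latent for 256x256 images, and the module itself is a simplified stand-in rather than the released code.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split a latent feature map into non-overlapping patches and
    linearly embed each one as a transformer token (DiT-style input)."""
    def __init__(self, patch=2, in_ch=4, dim=768):
        super().__init__()
        # A strided conv performs both the patch split and the linear projection.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, z):                     # z: (B, 4, 32, 32) latent
        x = self.proj(z)                      # (B, dim, 16, 16)
        return x.flatten(2).transpose(1, 2)   # (B, 256 tokens, dim)

tokens = PatchEmbed()(torch.randn(1, 4, 32, 32))
print(tokens.shape)  # halving `patch` quadruples the token count (and Gflops)
```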
There has been a recent explosion of impressive generative models that can produce high-quality images (or videos) conditioned on text descriptions. However, all such approaches rely on conditioning sentences that contain unambiguous descriptions of scenes and the main actors in them. Employing such models for the more complex task of story visualization, where references and co-references naturally exist and one must reason about when to maintain consistency of actors and backgrounds across frames/scenes, and when not to, based on story progression, therefore remains a challenge. In this work, we address the aforementioned challenges and propose a novel autoregressive diffusion-based framework with a visual memory module that implicitly captures the actor and background context across the generated frames. Sentence-conditioned soft attention over the memories enables effective reference resolution and learns to maintain scene and actor consistency when needed. To validate the effectiveness of our approach, we extend the MUGEN dataset and introduce additional characters, backgrounds, and referencing in multi-sentence storylines. Our experiments on story generation on the MUGEN, PororoSV, and FlintstonesSV datasets show that our method not only outperforms the prior state of the art in generating frames with high visual quality that are consistent with the story, but also models appropriate correspondences between the characters and the background.
We introduce M-VADER: a diffusion model (DM) for image generation where the output can be specified using arbitrary combinations of images and text. We show how M-VADER enables the generation of images specified using combinations of image and text, and combinations of multiple images. Previously, a number of successful DM image generation algorithms have been introduced that make it possible to specify the output image using a text prompt. Inspired by the success of those models, and led by the notion that language was already developed to describe the elements of visual contexts that humans find most important, we introduce an embedding model closely related to a vision-language model. Specifically, we introduce the embedding model S-MAGMA: a 13 billion parameter multimodal decoder combining components from an autoregressive vision-language model MAGMA and biases finetuned for semantic search.
Denoising diffusion probabilistic models are a promising new class of generative models that mark a milestone in high-quality image generation. This paper showcases their ability to sequentially generate video, surpassing prior methods in perceptual and probabilistic forecasting metrics. We propose an autoregressive, end-to-end optimized video diffusion model inspired by recent advances in neural video compression. The model successively generates future frames by correcting a deterministic next-frame prediction using a stochastic residual generated by an inverse diffusion process. We compare this approach against five baselines on four datasets involving natural and simulation-based videos. We find significant improvements in terms of perceptual quality for all datasets. Furthermore, by introducing a scalable version of the Continuous Ranked Probability Score (CRPS) applicable to video, we show that our model also outperforms existing approaches in their probabilistic frame forecasting ability.
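A hedged sketch of this prediction-plus-residual rollout follows; `predictor` and `residual_sampler` are placeholders for the deterministic next-frame network and the reverse-diffusion residual sampler, not the paper's actual modules.

```python
import torch

def predict_next_frame(predictor, residual_sampler, past_frames):
    """Next frame = deterministic prediction + diffusion-sampled residual."""
    mu = predictor(past_frames)                              # deterministic next-frame guess
    r = residual_sampler(shape=mu.shape, cond=past_frames)   # stochastic residual via reverse diffusion
    return mu + r

def rollout(predictor, residual_sampler, context, horizon):
    """Generate `horizon` future frames autoregressively from a context window."""
    frames = list(context)
    for _ in range(horizon):
        past = torch.stack(frames[-len(context):])           # sliding conditioning window
        frames.append(predict_next_frame(predictor, residual_sampler, past))
    return frames[len(context):]
```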
Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio. However, due to their Langevin-inspired sampling mechanisms, their application to discrete and sequential data has been limited. In this work, we present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pretrained variational autoencoder. Our method is non-autoregressive and learns to generate sequences of latent embeddings through the reverse process, offering parallel generation with a constant number of iterative refinement steps. We apply this technique to modeling symbolic music and show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
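A minimal sketch of this non-autoregressive scheme, assuming a pretrained VAE decoder and a latent denoiser (both placeholders introduced here for illustration), which refines all segment embeddings in parallel:

```python
import torch

def generate_music(vae_decoder, latent_denoiser, num_segments, latent_dim, steps=100):
    """Non-autoregressive generation sketch: denoise a whole sequence of
    continuous latent embeddings in parallel, then decode each back to
    symbolic music with the pretrained VAE."""
    z = torch.randn(num_segments, latent_dim)     # one latent per musical segment
    for t in reversed(range(steps)):              # constant number of refinement steps
        z = latent_denoiser(z, t)                 # refine all segments at once
    return [vae_decoder(zi) for zi in z]          # decode each segment to notes
```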
We present an end-to-end Transformer-based latent diffusion model for image synthesis. On the ImageNet class-conditioned generation task, we show that a Transformer-based latent diffusion model achieves an FID of 14.1, which is comparable to the 13.1 FID score of a U-Net-based architecture. In addition to demonstrating the application of Transformer models for diffusion-based image synthesis, this simplification in architecture allows easy fusion and modeling of text and image data. The multi-head attention mechanism of Transformers enables simplified interaction between the image and text features, which removes the requirement for a cross-attention mechanism in U-Net-based diffusion models.
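The fusion mechanism can be illustrated with a single block that self-attends over the concatenated image and text tokens, so the two modalities interact without a separate cross-attention path. The sketch below is a simplified stand-in, not the paper's exact block.

```python
import torch
import torch.nn as nn

class JointBlock(nn.Module):
    """Self-attention over the concatenation of image and text tokens."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        n = img_tokens.size(1)
        x = torch.cat([img_tokens, txt_tokens], dim=1)            # one joint sequence
        h = self.norm(x)
        x = x + self.attn(h, h, h)[0]                             # every token attends to all others
        return x[:, :n], x[:, n:]                                 # split back per modality

img, txt = torch.randn(2, 256, 512), torch.randn(2, 77, 512)
img_out, txt_out = JointBlock()(img, txt)
```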
Text-based motion generation models are drawing a surge of interest for their potential to automate the motion-making process in the game, animation, and robotics industries. In this paper, we propose a diffusion-based motion synthesis and editing model named FLAME. Inspired by the recent successes of diffusion models, we integrate diffusion-based generative models into the motion domain. FLAME can generate high-fidelity motions well aligned with the given text. Also, it can edit parts of a motion, both frame-wise and joint-wise, without any fine-tuning. FLAME involves a new transformer-based architecture we devise to better handle motion data, which is crucial for managing variable-length motions and attending well to free-form text. In experiments, we show that FLAME achieves state-of-the-art generation performance on three text-motion datasets: HumanML3D, BABEL, and KIT. We also demonstrate that FLAME's editing capability can be extended to other tasks such as motion prediction and motion in-betweening, which have previously been covered by dedicated models.
An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in real time for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a trade-off between domain-specific models that offer detailed control of only specific instruments, and raw waveform models that can train on all of music but offer minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences in real time with arbitrary combinations of instruments. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a denoising diffusion probabilistic model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find it a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes.
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days, and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows, for the first time, reaching a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes, and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.
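A high-level sketch of the LDM sampling pipeline follows: diffusion runs entirely in the compressed latent space, conditioned via cross-attention, and the autoencoder decodes only once at the end. All modules and the scheduler interface here are placeholders standing in for the released components.

```python
import torch

def latent_diffusion_sample(vae, unet, text_encoder, prompt_ids, scheduler):
    """High-level LDM sampling loop (all modules are placeholders):
    denoise in the autoencoder's latent space, then decode once."""
    cond = text_encoder(prompt_ids)           # text tokens fed to cross-attention
    z = torch.randn(1, 4, 64, 64)             # latent grid, ~8x smaller than pixel space
    for t in scheduler.timesteps:             # reverse diffusion in latent space
        eps = unet(z, t, cond)                # conditional noise prediction
        z = scheduler.step(eps, t, z)         # one reverse-diffusion update
    return vae.decode(z)                      # single decode back to pixels
```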
Score-based diffusion models have captured widespread attention and fueled fast progress in recent vision generative tasks. In this paper, we focus on the diffusion model backbone, which has been much neglected before. We systematically explore vision Transformers as diffusion learners for various generative tasks. With our improvements, the performance of a vanilla ViT-based backbone (IU-ViT) is boosted to be on par with traditional U-Net-based methods. We further provide a hypothesis on the implication of disentangling the generative backbone into an encoder-decoder structure and show proof-of-concept experiments verifying the effectiveness of a stronger encoder for generative tasks with our ASymmetriC ENcoder-Decoder (ASCEND). Our improvements achieve competitive results on CIFAR-10, CelebA, LSUN, CUB Bird, and large-resolution text-to-image tasks. To the best of our knowledge, we are the first to successfully train a single diffusion model on a text-to-image task beyond 64x64 resolution. We hope this will motivate people to rethink the modeling choices and training pipelines for diffusion-based generative models.
Non-autoregressive generative transformers have recently demonstrated impressive image generation performance and faster sampling than their autoregressive counterparts. However, optimal parallel sampling from the true joint distribution of visual tokens remains an open challenge. In this paper, we introduce Token-Critic, an auxiliary model that guides the sampling of a non-autoregressive generative transformer. Given a masked-and-reconstructed real image, the Token-Critic model is trained to distinguish which visual tokens belong to the original image and which were sampled by the generative transformer. During non-autoregressive iterative sampling, Token-Critic is used to select which tokens to accept and which to reject and resample. Coupled with Token-Critic, a state-of-the-art generative transformer significantly improves its performance and outperforms recent diffusion models and GANs in terms of the trade-off between generated image quality and diversity on the challenging task of class-conditional ImageNet generation.
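A sketch of the accept/reject step follows, assuming a critic that returns per-token realism scores; the signatures are illustrative, not the released API.

```python
import torch

def critic_guided_step(critic, tokens, t, num_keep):
    """Use the critic's per-token scores to decide which sampled tokens to
    accept and which to mask for resampling at the next iteration."""
    scores = critic(tokens, t)                        # (B, N): higher = more likely "real"
    keep = scores.topk(num_keep, dim=1).indices       # indices of the most plausible tokens
    mask = torch.ones_like(scores, dtype=torch.bool)  # True = resample this token
    mask.scatter_(1, keep, False)                     # False = accepted, kept as-is
    return mask
```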
Predicting and anticipating future outcomes, or reasoning about missing information in a sequence, are key abilities for agents to make intelligent decisions. This requires strong, temporally coherent generative capabilities. Diffusion models have recently shown great success in several generative tasks but have not been widely explored in the video domain. We propose Random-Mask Video Diffusion (RaMViD), which extends image diffusion models to videos using 3D convolutions and introduces a new conditioning technique during training. By varying the mask we condition on, the model is able to perform video prediction, infilling, and upsampling. Since we do not use concatenation to condition on the mask, as is done in most conditionally trained diffusion models, we are able to reduce the memory footprint. We evaluate the model on two benchmark datasets for video prediction and one for video generation, on which we achieve competitive results. On Kinetics-600, we achieve state-of-the-art results for video prediction.
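A toy version of the training-time mask that determines which frames act as clean context and which are generated; varying it yields the different tasks mentioned above. The sampling scheme below is a simplified assumption, not the paper's exact procedure.

```python
import torch

def random_frame_mask(batch, frames, p_cond=0.5):
    """Randomly choose which frames are observed (conditioning) vs. generated.
    Masking the future gives prediction, the middle gives infilling, and
    every other frame gives temporal upsampling."""
    return torch.rand(batch, frames) < p_cond     # True = frame is given as context

mask = random_frame_mask(batch=2, frames=16)
# During training, diffusion noise would be applied only to the masked-out
# frames, while observed frames are kept clean as conditioning context.
```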
We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed denoising diffusion probabilistic model (DDPM). We find that this latent-space method is well suited for text-to-image generation tasks because it not only eliminates the unidirectional bias of existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that VQ-Diffusion produces significantly better text-to-image generation results than conventional autoregressive (AR) models with a similar number of parameters. Compared with previous GAN-based text-to-image methods, VQ-Diffusion can handle more complex scenes and improves the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and is hence quite time-consuming even for normal-size images. VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with reparameterization is fifteen times faster than traditional AR methods while achieving better image quality.
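A toy illustration of a mask-and-replace forward corruption for discrete tokens: each token may be masked or occasionally replaced by a random token, so errors cannot silently accumulate in one direction. The schedule and probabilities below are illustrative assumptions, not the paper's exact transition matrices.

```python
import torch

def mask_and_replace(tokens, t, T, vocab_size, mask_id, gamma=0.9, beta=0.1):
    """Toy forward corruption for discrete diffusion at step t of T:
    with probability ~p_rand a token is replaced by a random token,
    with probability ~p_mask it is replaced by the [MASK] token."""
    p_mask = (t / T) * gamma                      # masking probability grows with t
    p_rand = (t / T) * beta                       # small uniform-replacement probability
    u = torch.rand_like(tokens, dtype=torch.float)
    out = tokens.clone()
    out[u < p_rand] = torch.randint(vocab_size, out[u < p_rand].shape)
    out[(u >= p_rand) & (u < p_rand + p_mask)] = mask_id
    return out
```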