The success of Deep Generative Models at high-resolution image generation has led to their extensive utilization for style editing of real images. Most existing methods work on the principle of inverting real images into their latent space, followed by determining controllable directions. Both the inversion of real images and the determination of controllable latent directions are computationally expensive operations. Moreover, the determination of controllable latent directions requires additional human supervision. This work explores the efficacy of mask-guided feature modulation in the latent space of a Deep Generative Model as a solution to these bottlenecks. To this end, we present the SemanticStyle Autoencoder (SSAE), a deep Generative Autoencoder model that leverages semantic mask-guided latent space manipulation for highly localized photorealistic style editing of real images. We present qualitative and quantitative results along with their analysis. This work is intended to serve as a guiding primer for future work.
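To make the core idea concrete, below is a minimal PyTorch sketch of semantic mask-guided feature modulation, assuming a SPADE-style formulation in which per-pixel scale and shift parameters are predicted from the semantic mask and applied only inside the region selected for editing. All module and parameter names are illustrative, not taken from the paper.

```python
# Minimal sketch of mask-guided feature modulation (SPADE-style assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGuidedModulation(nn.Module):
    def __init__(self, feat_channels: int, mask_channels: int, hidden: int = 128):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(mask_channels, hidden, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.to_gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)  # scale
        self.to_beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)   # shift

    def forward(self, feat, semantic_mask, edit_region):
        # Resize mask inputs to the feature resolution.
        m = F.interpolate(semantic_mask, size=feat.shape[-2:], mode="nearest")
        r = F.interpolate(edit_region, size=feat.shape[-2:], mode="nearest")
        h = self.shared(m)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        modulated = feat * (1 + gamma) + beta
        # Blend: modulate only where the edit-region mask is on,
        # keep the original features everywhere else (localized editing).
        return r * modulated + (1 - r) * feat
```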
Image restoration is the task of recovering a clean image from its degraded version. In most cases, the degradation is spatially varying, requiring the restoration network to both localize and restore the affected regions. In this paper, we present a new approach suited to handling the image-specific and spatially-varying nature of degradations in images affected by practically occurring artifacts such as blur and rain streaks. Unlike existing methods that learn a direct mapping between the degraded and clean images, we decompose the restoration task into two stages: degradation localization and degraded-region-guided restoration. Our premise is to use the auxiliary task of degradation mask prediction to guide the restoration process. We demonstrate that a model trained on this auxiliary task contains vital region knowledge, which can be exploited to guide the training of the restoration network via an attentive knowledge distillation technique. Further, we propose mask-guided convolution and global context aggregation modules that focus on restoring the degraded regions. The effectiveness of the proposed approach is demonstrated by significant improvements over strong baselines.
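As a rough illustration (not the authors' implementation), a mask-guided convolution can be sketched as a standard convolution whose responses are gated by the predicted degradation mask, so the block emphasizes the degraded regions. The PyTorch sketch below uses hypothetical module names and shapes.

```python
# Hedged sketch of a mask-guided convolution block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGuidedConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Conv2d(1, out_ch, 3, padding=1)  # from degradation mask

    def forward(self, x, degradation_mask):
        # degradation_mask: (B, 1, H', W'), 1 where the image is degraded.
        m = F.interpolate(degradation_mask, size=x.shape[-2:],
                          mode="bilinear", align_corners=False)
        feat = self.conv(x)
        # Sigmoid gate boosts responses inside the predicted degraded area;
        # the residual term preserves information elsewhere.
        return feat * torch.sigmoid(self.gate(m)) + feat
```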
Image inpainting methods have shown significant improvements with the recent use of deep neural networks. However, many of these techniques often create distorted structures or blurry textures inconsistent with the surrounding regions. The problem is rooted in the ineffectiveness of the encoder layers in building a complete and faithful embedding of the missing regions. To address this issue, two-stage approaches deploy two separate networks for a coarse and a fine estimate of the inpainted image. Some approaches utilize handcrafted features such as edges or contours to guide the reconstruction process. These methods suffer from huge computational overheads owing to multiple generator networks, the limited ability of handcrafted features, and sub-optimal utilization of the information present in the ground truth. Motivated by these observations, we propose a distillation-based approach for image inpainting, in which we provide direct feature-level supervision to the encoder layers in an adaptive manner. We deploy cross and self-distillation techniques and discuss the need for a dedicated completion block in the encoder to achieve the distillation target. We conduct extensive evaluations on multiple datasets to validate our method.
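One way to picture direct feature-level supervision of the encoder is a distillation loss that pulls the student encoder's features (computed on the masked input) toward a teacher encoder's features (computed on the complete image) inside the missing region. The sketch below is a minimal assumption-based illustration, not the paper's exact loss.

```python
# Hedged sketch of feature-level distillation for inpainting encoders.
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feats, teacher_feats, hole_mask):
    """student_feats/teacher_feats: lists of (B, C, H, W) encoder activations;
    hole_mask: (B, 1, H, W), 1 where pixels are missing."""
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        # Resize the hole mask to each feature resolution.
        m = F.interpolate(hole_mask, size=s.shape[-2:], mode="nearest")
        # Supervise only the embedding of the missing region;
        # the teacher (which saw the full image) is not updated.
        loss = loss + F.l1_loss(s * m, t.detach() * m)
    return loss / len(student_feats)
```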
This paper addresses the challenging problem of video deblurring. Most existing works depend on implicit or explicit alignment for temporal information fusion, which either increases the computational cost or results in suboptimal performance due to misalignment. In this study, we propose a factorized spatio-temporal attention to perform non-local operations across space and time, fully using the available information without requiring alignment. It shows superior performance compared to existing fusion techniques while remaining efficient. Extensive experiments on multiple datasets demonstrate the superiority of our method.
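As a rough illustration of the factorization idea (not the authors' implementation), the PyTorch sketch below replaces one joint attention over all T*H*W tokens with a spatial pass over the pixels of each frame followed by a temporal pass over frames at each spatial location; all shapes and names are assumptions.

```python
# Hedged sketch of factorized spatio-temporal attention.
import torch
import torch.nn as nn

class FactorizedSTAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, T, H, W, C) video features.
        B, T, H, W, C = x.shape
        # Spatial attention: tokens are the pixels of one frame.
        s = x.reshape(B * T, H * W, C)
        s, _ = self.spatial(s, s, s)
        s = s.reshape(B, T, H, W, C)
        # Temporal attention: tokens are the frames at one spatial location.
        t = s.permute(0, 2, 3, 1, 4).reshape(B * H * W, T, C)
        t, _ = self.temporal(t, t, t)
        return t.reshape(B, H, W, T, C).permute(0, 3, 1, 2, 4)
```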
This paper tackles the problem of motion deblurring of dynamic scenes. Although end-to-end fully convolutional designs have recently advanced the state of the art in non-uniform motion deblurring, their performance-complexity trade-off is still sub-optimal. Existing approaches achieve a large receptive field by increasing the number of generic convolution layers and the kernel size, but this comes at the expense of increased model size and slower inference. In this work, we propose an efficient pixel-adaptive and feature-attentive design for handling large blur variations across different spatial locations, processing each test image adaptively. We also propose an effective content-aware global-local filtering module that significantly improves performance by considering not only global dependencies but also by dynamically exploiting neighboring pixel information. We use a patch-hierarchical attentive architecture composed of the above modules that implicitly discovers the spatial variations in the blur present in the input image and, in turn, performs local and global modulation of intermediate features. Extensive qualitative and quantitative comparisons with prior art on deblurring benchmarks demonstrate that our design offers significant improvements over the state of the art in accuracy as well as speed.
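One ingredient of such a pixel-adaptive design can be sketched as per-pixel filtering: a small branch predicts a k x k kernel for every pixel, which is then applied to that pixel's local neighborhood. The PyTorch sketch below is a hedged illustration under these assumptions, not the paper's actual module.

```python
# Hedged sketch of pixel-adaptive filtering.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAdaptiveFilter(nn.Module):
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        self.kernel_pred = nn.Conv2d(channels, k * k, 3, padding=1)

    def forward(self, x):
        B, C, H, W = x.shape
        k = self.k
        # Predict one kernel per pixel, normalized over the k*k taps.
        kernels = torch.softmax(self.kernel_pred(x), dim=1)   # (B, k*k, H, W)
        # Gather each pixel's k*k neighborhood.
        patches = F.unfold(x, k, padding=k // 2)              # (B, C*k*k, H*W)
        patches = patches.view(B, C, k * k, H, W)
        # Weighted sum of each neighborhood with its own predicted kernel.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)    # (B, C, H, W)
```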
Videos often capture objects, their visible properties, their motion, and the interactions between different objects. Objects also have physical properties such as mass, which the imaging pipeline is unable to directly capture. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene. CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about the effect of actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects. The CRIPP-VQA test set enables evaluation under several out-of-distribution settings -- videos with objects with masses, coefficients of friction, and initial velocities that are not observed in the training distribution. Our experiments reveal a surprising and significant performance gap in terms of answering questions about implicit properties (the focus of this paper) and explicit properties of objects (the focus of prior work).
"Actions" play a vital role in how humans interact with the world and enable them to achieve desired goals. As a result, most common sense (CS) knowledge for humans revolves around actions. While "Reasoning about Actions and Change" (RAC) has been widely studied in the Knowledge Representation community, it has recently piqued the interest of NLP and computer vision researchers. This paper surveys existing tasks, benchmark datasets, various techniques and models, and their respective performance concerning advancements in RAC in the vision and language domain. Finally, we summarize our key takeaways, discuss the present challenges facing this research area, and outline potential directions for future research.
This project aims to explore the process of deploying machine learning models on Kubernetes using an open-source tool called KubeFlow [1], an end-to-end ML stack orchestration toolkit. We create end-to-end machine learning models in the form of pipelines and analyze various points, including the setup, deployment of models, performance, limitations, and features. We intend our project to serve almost like a workshop/introductory report that can help vanilla cloud/Kubernetes users with zero knowledge of KubeFlow to use KubeFlow for deploying ML models. From setup on different clouds to serving trained models over the internet, we provide details and metrics describing the performance of KubeFlow.
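For readers unfamiliar with Kubeflow Pipelines, a minimal sketch of the kind of train-and-serve pipeline described above might look as follows, assuming the kfp v2 Python SDK; the component bodies, names, and artifact URI are placeholders, not taken from the report.

```python
# Hedged sketch of a two-step Kubeflow pipeline (kfp v2 SDK assumed).
from kfp import dsl, compiler

@dsl.component(base_image="python:3.10")
def train_model() -> str:
    # Placeholder: train and return a hypothetical model artifact location.
    return "gs://example-bucket/model"

@dsl.component(base_image="python:3.10")
def deploy_model(model_uri: str):
    # Placeholder: push the trained model to a serving endpoint.
    print(f"deploying {model_uri}")

@dsl.pipeline(name="example-train-and-serve")
def pipeline():
    trained = train_model()
    deploy_model(model_uri=trained.output)

if __name__ == "__main__":
    # Compile to a pipeline spec that can be uploaded to a KubeFlow cluster.
    compiler.Compiler().compile(pipeline, "pipeline.yaml")
```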