One of the key challenges in deploying RL in real-world applications is adapting to variations in unknown environment contexts, such as changing terrains in robotic tasks and fluctuating bandwidth in congestion control. Existing works on adaptation to unknown environment contexts either assume the context is fixed for the whole episode or assume the context variables are Markovian. However, in many real-world applications, the environment context usually stays stable for a stochastic period and then changes in an abrupt and unpredictable manner within an episode, resulting in a segment structure that existing works fail to address. To leverage the segment structure of piecewise-stable contexts in real-world applications, in this paper we propose a \textit{\textbf{Se}gmented \textbf{C}ontext \textbf{B}elief \textbf{A}ugmented \textbf{D}eep~(SeCBAD)} RL method. Our method jointly infers the belief distribution over the latent context and the posterior over segment length, enabling more accurate belief inference from the data observed within the current context segment. The inferred belief context can be used to augment the state, leading to a policy that adapts to abrupt context variations. We demonstrate empirically that SeCBAD infers context segment lengths accurately and outperforms existing methods on a toy grid-world environment and MuJoCo tasks with piecewise-stable contexts.
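As an illustration of the segment-length inference the abstract describes, here is a minimal sketch in the spirit of Bayesian online changepoint detection; SeCBAD's actual inference is variational and learned jointly with the policy, so the function and variable names below (`update_segment_posterior`, `hazard`) are illustrative assumptions, not the paper's API.

```python
import numpy as np

def update_segment_posterior(p_len, log_lik_cont, log_lik_new, hazard=0.05):
    """One step of a run-length style posterior update over segment length.

    p_len[k]: belief that the current context segment has lasted k+1 steps.
    log_lik_cont[k]: log-likelihood of the new observation under the context
        inferred from the last k+1 observations (segment continues).
    log_lik_new: log-likelihood under a fresh context prior (segment resets).
    hazard: prior probability of an abrupt context change per step.
    """
    growth = p_len * np.exp(log_lik_cont) * (1.0 - hazard)  # segment continues
    reset = p_len.sum() * np.exp(log_lik_new) * hazard      # abrupt change
    new_p = np.concatenate(([reset], growth))
    return new_p / new_p.sum()

# Example: one update step right after a fresh segment starts.
p = np.array([1.0])
p = update_segment_posterior(p, log_lik_cont=np.array([-0.9]), log_lik_new=-2.3)
```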
Conversational recommender systems (CRSs) often utilize external knowledge graphs (KGs) to introduce rich semantic information and recommend relevant items through natural language dialogues. However, the original KGs employed in existing CRSs are often incomplete and sparse, which limits their reasoning capability for recommendation. Moreover, only a few existing studies exploit the dialogue context to dynamically refine knowledge from KGs for better recommendation. To address these issues, we propose the Variational Reasoning over Incomplete KGs Conversational Recommender (VRICR). Our key idea is to incorporate the large dialogue corpora that naturally accompany CRSs to enhance the incomplete KGs, and to perform dynamic knowledge reasoning conditioned on the dialogue context. Specifically, we treat the dialogue-specific subgraphs of the KGs as latent variables with categorical priors for adaptive knowledge graph refactoring. We propose a variational Bayesian method to approximate the posterior distributions over dialogue-specific subgraphs, which not only leverages the dialogue corpus for restructuring missing entity relations but also dynamically selects knowledge based on the dialogue context. Finally, we infuse the dialogue-specific subgraphs into the decoding of recommendations and responses. We conduct experiments on two benchmark CRS datasets. Experimental results confirm the effectiveness of our proposed method.
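One common way to make a categorical posterior over subgraph edges differentiable, as the variational treatment above requires, is a Gumbel-softmax relaxation. The sketch below illustrates that generic technique only; it is not claimed to be VRICR's exact parameterization, and `edge_logits` is an assumed input.

```python
import torch
import torch.nn.functional as F

def sample_edge_mask(edge_logits, tau=0.5, hard=False):
    """Relax a categorical keep/drop posterior over candidate KG edges.

    edge_logits: (num_candidate_edges, 2) keep/drop logits, e.g. produced by
        an encoder of the dialogue context and entity embeddings.
    Returns a soft mask in [0, 1] that can weight message passing over the KG.
    """
    probs = F.gumbel_softmax(edge_logits, tau=tau, hard=hard)
    return probs[:, 0]  # probability mass on "keep" for each edge

# Example with random logits for 5 candidate edges.
mask = sample_edge_mask(torch.randn(5, 2))
```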
Recently, a surge of high-quality 3D-aware GANs has emerged, leveraging the generative power of neural rendering. It is natural to combine 3D GANs with GAN inversion methods that project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion. Even with the facial prior preserved in pre-trained 3D GANs, reconstructing a 3D portrait from a single monocular image remains an ill-posed problem. Straightforward application of 2D GAN inversion methods focuses on texture similarity only while ignoring the correctness of the 3D geometry. This can cause geometry collapse, especially when reconstructing a side face under an extreme pose. Besides, synthesized results in novel views are prone to blurriness. In this work, we propose a novel method to improve 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints that make full use of the pseudo auxiliary view obtained via image flipping, which helps obtain a robust and plausible geometry during the inversion process. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We design constraints to filter out conflicting areas from the optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
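A minimal sketch of how a symmetry prior can supervise inversion: the horizontally flipped photo serves as a pseudo label for the mirrored camera view. `render_fn` and `mirror_pose` are hypothetical hooks standing in for the pre-trained 3D GAN renderer and the pose-mirroring operation; the paper's actual constraints (e.g., conflict-area filtering) are not reproduced here.

```python
import torch

def symmetry_loss(render_fn, latent, image, pose, mirror_pose):
    """Pseudo-auxiliary-view loss from facial symmetry.

    render_fn(latent, pose): renders an image from the 3D GAN generator.
    mirror_pose(pose): returns the horizontally mirrored camera pose.
    The left-right flipped photo acts as a pseudo label for the mirrored view,
    constraining geometry that the single observed view leaves unconstrained.
    """
    flipped = torch.flip(image, dims=[-1])            # pseudo auxiliary view
    rendered = render_fn(latent, mirror_pose(pose))   # render mirrored view
    return torch.mean(torch.abs(rendered - flipped))  # L1 photometric term
```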
One-shot segmentation of brain tissues is typically cast as dual-model iterative learning: a registration model (reg-model) warps a carefully-labeled atlas onto unlabeled images to initialize their pseudo masks for training a segmentation model (seg-model); the seg-model then revises the pseudo masks to enhance the reg-model for better warping in the next iteration. However, such dual-model iteration has a key weakness: the spatial misalignment inevitably introduced by the reg-model can misguide the seg-model, which eventually converges to inferior segmentation performance. In this paper, we propose a novel image-aligned style transformation to reinforce the dual-model iterative learning for robust one-shot segmentation of brain tissues. Specifically, we first utilize the reg-model to warp the atlas onto an unlabeled image, and then employ Fourier-based amplitude exchange with perturbation to transplant the style of the unlabeled image into the aligned atlas. This allows the subsequent seg-model to learn on aligned and style-transferred copies of the atlas instead of the unlabeled images, which naturally guarantees the correct spatial correspondence of each image-mask training pair without sacrificing the diversity of intensity patterns carried by the unlabeled images. Furthermore, we introduce a feature-aware content consistency in addition to the image-level similarity to constrain the reg-model toward a promising initialization, which avoids the collapse of the image-aligned style transformation in the first iteration. Experimental results on two public datasets demonstrate 1) a segmentation performance of our method competitive with the fully-supervised method, and 2) superior performance over other state-of-the-art methods, improving average Dice by up to 4.67%. The source code is available at: https://github.com/JinxLv/One-shot-segmentation-via-IST.
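The Fourier-based amplitude exchange is concrete enough to sketch: swap the low-frequency amplitude of the aligned atlas with a randomly perturbed copy of the unlabeled image's, while keeping the atlas phase so spatial structure, and hence the image-mask correspondence, is preserved. The region size `beta` and the perturbation scheme below are assumptions; the paper's exact settings may differ.

```python
import torch

def amplitude_exchange(atlas, target, beta=0.1, eps=0.3):
    """Transplant the style of `target` into the aligned `atlas`.

    Swaps the low-frequency Fourier amplitude of the atlas with a perturbed
    copy of the target's while keeping the atlas phase, so spatial structure
    is preserved. Inputs are (..., H, W) float tensors.
    """
    fa, ft = torch.fft.fft2(atlas), torch.fft.fft2(target)
    amp_a, pha_a = torch.abs(fa), torch.angle(fa)
    amp_t = torch.abs(ft)
    amp_a = torch.fft.fftshift(amp_a, dim=(-2, -1))
    amp_t = torch.fft.fftshift(amp_t, dim=(-2, -1))
    h, w = atlas.shape[-2:]
    ch, cw = int(h * beta), int(w * beta)
    cy, cx = h // 2, w // 2
    lam = 1.0 + eps * (2.0 * torch.rand(1).item() - 1.0)  # random perturbation
    amp_a[..., cy - ch:cy + ch, cx - cw:cx + cw] = (
        lam * amp_t[..., cy - ch:cy + ch, cx - cw:cx + cw]
    )
    amp_a = torch.fft.ifftshift(amp_a, dim=(-2, -1))
    return torch.fft.ifft2(amp_a * torch.exp(1j * pha_a)).real
```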
3D multi-object tracking (MOT) ensures consistency during continuous dynamic detection, benefiting subsequent motion planning and navigation tasks in autonomous driving. However, camera-based methods suffer under occlusion, and accurately tracking the irregular motion of objects can be challenging for LiDAR-based methods. Some fusion methods work well but do not consider the untrustworthiness of appearance features under occlusion. Meanwhile, false detections also significantly affect tracking. Therefore, we propose a novel camera-LiDAR fusion 3D MOT framework based on Combined Appearance-Motion Optimization (CAMO-MOT), which uses both camera and LiDAR data and significantly reduces tracking failures caused by occlusion and false detections. For the occlusion problem, we are the first to propose an occlusion head to effectively select the best object appearances, reducing the influence of occlusion. To reduce the impact of false detections on tracking, we design a motion cost matrix based on confidence scores, which improves localization and object prediction accuracy in 3D space. Since existing multi-object tracking methods only consider a single category, we also propose a multi-class loss to achieve multi-object tracking in multi-category scenes. A series of validation experiments are conducted on the KITTI and nuScenes tracking benchmarks. Our proposed method achieves state-of-the-art performance and the lowest identity switch (IDS) value (23 for car and 137 for pedestrian) among all multimodal MOT methods on the KITTI test dataset, and achieves state-of-the-art performance among all algorithms on the nuScenes test dataset with 75.3% AMOTA.
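To illustrate the confidence-based motion cost matrix, the sketch below scales a pairwise 3D distance by detection confidence before Hungarian matching, so low-confidence detections pay a higher association cost. This is a generic sketch, not CAMO-MOT's exact formulation; `dist_fn` is an assumed distance callable.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_with_confidence(track_boxes, det_boxes, det_scores, dist_fn):
    """Associate tracks and detections with a confidence-weighted cost matrix.

    dist_fn(track_box, det_box): a 3D distance (e.g., center distance or
    1 - IoU). Dividing by the detection score makes low-confidence detections
    more expensive to match, limiting the damage from false detections.
    """
    cost = np.array([[dist_fn(t, d) / max(s, 1e-3)
                      for d, s in zip(det_boxes, det_scores)]
                     for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return list(zip(rows, cols)), cost
```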
We propose Few-shot Dynamic Neural Radiance Fields (FDNeRF), the first NeRF-based method capable of reconstructing and expression-editing 3D faces from a small number of dynamic images. Unlike existing dynamic NeRFs, which require dense images as input and can only model a single identity, our method enables face reconstruction across different persons with few-shot inputs. Compared to state-of-the-art few-shot NeRFs designed for modeling static scenes, the proposed FDNeRF accepts view-inconsistent dynamic inputs and supports arbitrary facial expression editing, i.e., producing faces with novel expressions beyond the input ones. To handle the inconsistencies among dynamic inputs, we introduce a well-designed conditional feature warping (CFW) module to perform expression-conditioned warping in 2D feature space, which is also identity-adaptive and 3D-constrained. As a result, features of different expressions are transformed into those of the target expression. We then construct a radiance field based on these view-consistent features and use volumetric rendering to synthesize novel views of the modeled face. Extensive quantitative and qualitative evaluations demonstrate that our method outperforms existing dynamic and few-shot NeRFs on both 3D face reconstruction and expression editing tasks. Our code and model will be made available upon acceptance.
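The CFW module's core operation, warping a 2D feature map by a predicted flow field, can be sketched as follows. In FDNeRF the flow would be predicted conditioned on the expression code; here it is simply an input, and the helper is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """Warp a 2D feature map by a flow field.

    feat: (B, C, H, W) features; flow: (B, 2, H, W) per-pixel offsets in
    pixels (x, y). Builds a sampling grid and bilinearly resamples.
    """
    B, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                            # (B, 2, H, W)
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0  # normalize x to [-1, 1]
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0  # normalize y to [-1, 1]
    grid = torch.stack((gx, gy), dim=-1)     # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)
```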
Recently, stochastic gradient descent (SGD) and its variants have become the dominant methods for large-scale optimization of machine learning (ML) problems. Various strategies have been proposed to tune the step size, ranging from adaptive step sizes to heuristics that change the step size at each iteration. Moreover, momentum has been widely used in ML tasks to accelerate training. Yet there are gaps in our theoretical understanding of these methods. In this work, we begin to close this gap by providing formal guarantees for some heuristic optimization methods and proposing improved algorithms. First, we analyze a generalized version of the AdaGrad (delayed AdaGrad) step size in convex and non-convex settings, showing that these step sizes allow the algorithms to automatically adapt to the noise level of the stochastic gradients. We show for the first time sufficient conditions for delayed AdaGrad to guarantee almost-sure convergence of the gradients to zero. Moreover, we present a high-probability analysis of delayed AdaGrad and its momentum variant in the non-convex setting. Second, we analyze SGD with exponential and cosine step sizes, which are empirically successful but lack theoretical support. We provide the first convergence guarantees for them in smooth non-convex settings, with and without the Polyak-Łojasiewicz (PL) condition, and also show their good property of adapting to noise under the PL condition. Third, we study the last iterate of momentum methods. We prove the first lower bound in the convex setting for the last iterate of SGD with constant momentum. Moreover, we study a class of Follow-The-Regularized-Leader-based momentum algorithms with increasing momentum and shrinking updates, and show that their last iterates have optimal convergence for unconstrained convex stochastic optimization problems.
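The step sizes analyzed here have standard closed forms, sketched below under common definitions: exponential decay eta_t = eta0 * alpha^(t/T), cosine decay eta_t = eta0 * (1 + cos(pi*t/T)) / 2, and a delayed AdaGrad step that uses only gradients up to iteration t-1. The exact constants in the paper may differ slightly.

```python
import math

def exponential_step(t, eta0, alpha, T):
    """Exponential step size: eta_t = eta0 * alpha**(t / T), 0 < alpha < 1."""
    return eta0 * alpha ** (t / T)

def cosine_step(t, eta0, T):
    """Cosine step size: eta_t = eta0 * (1 + cos(pi * t / T)) / 2."""
    return eta0 * (1.0 + math.cos(math.pi * t / T)) / 2.0

def delayed_adagrad_step(sum_sq_grads_prev, eta0, b0=1.0):
    """Delayed AdaGrad: the step at time t depends only on gradients up to
    t - 1, so it is independent of the current stochastic gradient."""
    return eta0 / math.sqrt(b0 + sum_sq_grads_prev)
```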
Active learning is an important technique for automated machine learning systems. In contrast to neural architecture search (NAS), which aims to automate neural network architecture design, active learning aims to automate training data selection. It is especially critical for long-tailed training tasks, in which rare samples are sparsely distributed. Active learning alleviates the expensive data annotation problem by incrementally training models with efficient data selection: instead of annotating all unlabeled samples, it iteratively selects and annotates the most valuable ones. Active learning is popular in image classification, but has not been fully explored for object detection. Most current object detection methods are evaluated under different settings, making it difficult to fairly compare their performance. To facilitate research in this field, this paper contributes an active learning benchmark framework, named ALBench, for evaluating active learning in object detection. Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols. We hope this automated benchmark system will help researchers easily reproduce the performance reported in the literature and make objective comparisons to prior art. The code will be released via GitHub.
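A generic pool-based active learning loop of the kind such a benchmark standardizes looks like the sketch below. All names (`acquire`, `annotate`, `budget_per_round`) are illustrative placeholders, not ALBench's actual API.

```python
def active_learning_loop(model, labeled, unlabeled, acquire, annotate,
                         budget_per_round=100, rounds=10):
    """Generic pool-based active learning loop.

    acquire(model, pool, k): returns indices of the k most valuable samples.
    annotate(sample): returns a label (human annotator or oracle).
    """
    for _ in range(rounds):
        model.fit(labeled)                                  # retrain on labels
        idx = set(acquire(model, unlabeled, budget_per_round))
        labeled += [(unlabeled[i], annotate(unlabeled[i])) for i in idx]
        unlabeled = [s for i, s in enumerate(unlabeled) if i not in idx]
    return model
```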
Depth estimation is a key technology in fields such as autonomous driving and robot navigation. However, traditional methods relying on a single sensor are inevitably limited by that sensor's performance. Therefore, we propose an accurate and robust method that fuses LiDAR and stereo cameras. The method fully combines the advantages of both sensors, preserving the high precision of LiDAR and the high resolution of images; compared with traditional stereo matching methods, object texture and illumination conditions have less influence on the algorithm. First, LiDAR depth is converted into stereo-camera disparity. Because LiDAR data is relatively sparse along the y-axis, the converted disparity map is upsampled using interpolation. Second, to make full use of the accurate disparity map, the disparity map and stereo matching are fused to propagate the accurate disparities. Finally, the disparity map is converted into a depth map. The converted disparity map also increases the speed of the algorithm. We evaluate the proposed pipeline on the KITTI benchmark. Experiments show that our algorithm achieves higher accuracy than several classical methods.
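The first step, converting LiDAR depth to stereo disparity, follows the standard pinhole relation disparity = focal * baseline / depth. A minimal sketch (the interpolation-based upsampling and disparity propagation steps are omitted):

```python
import numpy as np

def lidar_depth_to_disparity(depth, focal, baseline):
    """Convert LiDAR depth (meters) to stereo disparity (pixels).

    disparity = focal * baseline / depth, where `focal` is the focal length
    in pixels and `baseline` the stereo baseline in meters. Zero-depth pixels
    (no LiDAR return) are left at zero disparity.
    """
    disp = np.zeros_like(depth, dtype=np.float64)
    valid = depth > 0
    disp[valid] = focal * baseline / depth[valid]
    return disp
```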
Implicit radiance functions have emerged as a powerful scene representation for reconstructing and rendering photo-realistic views of a 3D scene. However, these representations are poorly editable. On the other hand, explicit representations such as polygonal meshes allow easy editing but are not suitable for reconstructing accurate details of dynamic human heads, such as fine facial features, hair, teeth, and eyes. In this work, we propose Neural Parameterization (NeP), a hybrid representation that provides the advantages of both implicit and explicit methods. NeP is capable of photo-realistic rendering while allowing fine-grained editing of the scene geometry and appearance. We first disentangle geometry and appearance by parameterizing the 3D geometry into a 2D texture space. We enable geometric editability by introducing an explicit linear deformation layer. The deformation is controlled by a sparse set of keypoints, which can be explicitly and intuitively shifted to edit the geometry. For appearance, we develop a hybrid 2D texture consisting of an explicit texture map for easy editing and implicit view- and time-dependent residuals to model temporal and view variations. We compare our method with several reconstruction and editing baselines. The results show that NeP achieves almost the same rendering accuracy while maintaining high editability.
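The explicit linear deformation layer can be sketched as a keypoint-driven linear blend: each vertex moves by a weighted sum of keypoint offsets. The weighting scheme below is an assumption for illustration; the paper's actual parameterization may differ.

```python
import torch

def deform_vertices(vertices, key_src, key_dst, weights):
    """Keypoint-driven linear deformation of a mesh.

    vertices: (N, 3) base geometry; key_src/key_dst: (K, 3) keypoints before
    and after the user's edit; weights: (N, K) per-vertex blending weights
    (rows sum to 1). Each vertex moves by a weighted sum of keypoint offsets.
    """
    offsets = key_dst - key_src           # (K, 3) user edits
    return vertices + weights @ offsets   # (N, 3) deformed geometry
```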