The mainstream of existing approaches to video prediction builds on a Single-In-Single-Out (SISO) architecture, which takes the current frame as input and predicts the next frame recursively. This recursive scheme often suffers severe performance degradation when extrapolating over a longer future horizon, limiting the practical use of such prediction models. Alternatively, a Multi-In-Multi-Out (MIMO) architecture that outputs all future frames in one shot naturally breaks the recursion and therefore prevents error accumulation. However, only a few MIMO models for video prediction have been proposed, and to date they have achieved only inferior performance. The real strength of MIMO models in this area has not been well recognized and remains largely under-explored. Motivated by this, we conduct a comprehensive investigation in this paper to thoroughly explore how far a simple MIMO architecture can go. Surprisingly, our empirical studies reveal that a simple MIMO model can outperform the state of the art by a much larger margin than expected, especially in handling long-term error accumulation. After exploring a number of designs, we propose a new MIMO architecture, MIMO-VP, which extends the pure Transformer with local spatio-temporal blocks and a new multi-output decoder, to establish a new standard in video prediction. We evaluate our model on four highly competitive benchmarks (Moving MNIST, Human3.6M, Weather, KITTI). Extensive experiments show that our model ranks first on all benchmarks with remarkable performance gains and surpasses the best SISO model in all aspects, including efficiency, quantitative results, and qualitative results. We believe our model can serve as a new baseline to facilitate future research on video prediction. The code will be released.
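To make the SISO-versus-MIMO distinction concrete, the following minimal sketch (hypothetical convolutional stand-ins, not the MIMO-VP Transformer architecture) contrasts a recursive single-step rollout, where prediction errors feed back into the input, with a one-shot multi-frame prediction:

```python
import torch
import torch.nn as nn

class SingleStepModel(nn.Module):
    """SISO: maps one frame to the next frame."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, frame):                      # (B, C, H, W) -> (B, C, H, W)
        return self.net(frame)

class MultiStepModel(nn.Module):
    """MIMO: maps a clip of T_in frames to all T_out future frames at once."""
    def __init__(self, t_in=10, t_out=10, channels=1):
        super().__init__()
        self.net = nn.Conv2d(t_in * channels, t_out * channels, kernel_size=3, padding=1)

    def forward(self, clip):                       # (B, T_in*C, H, W) -> (B, T_out*C, H, W)
        return self.net(clip)

def siso_rollout(model, last_frame, horizon):
    """Recursive SISO prediction: each step consumes the previous prediction,
    so errors accumulate over the horizon."""
    preds, frame = [], last_frame
    for _ in range(horizon):
        frame = model(frame)
        preds.append(frame)
    return torch.stack(preds, dim=1)               # (B, horizon, C, H, W)

clip = torch.randn(2, 10, 64, 64)                  # B=2, T_in=10 gray-scale frames
mimo_preds = MultiStepModel(t_in=10, t_out=10)(clip)               # one forward pass, no recursion
siso_preds = siso_rollout(SingleStepModel(), torch.randn(2, 1, 64, 64), horizon=10)
```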
This paper considers improving the wireless communication and computation efficiency of federated learning (FL) via model quantization. In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices. The goal is to jointly determine the bitwidths employed for local FL model quantization and the set of devices participating in FL training at each iteration. This is posed as an optimization problem whose objective is to minimize the training loss of quantized FL under a per-iteration device sampling budget and delay requirement. To derive a solution, an analytical characterization is performed to show how the limited wireless resources and the induced quantization errors affect the performance of the proposed FL method. The analytical results show that the improvement of the FL training loss between two consecutive iterations depends on the device selection and quantization scheme, as well as on several parameters inherent to the model being learned. Given linear regression-based estimates of these model properties, it is shown that the FL training process can be described as a Markov decision process (MDP), and a model-based reinforcement learning (RL) method is then proposed to optimize the action selection over iterations. Compared to model-free RL, this model-based RL approach leverages the derived mathematical characterization of the FL training process to discover an effective device selection and quantization scheme without imposing additional device communication overhead. Simulation results show that the proposed FL algorithm can reduce convergence time by 29% and 63% compared to a model-free RL method and the standard FL method, respectively.
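As a minimal illustration of the upload step in bitwidth-constrained FL, the sketch below uniformly quantizes a local weight vector to b bits; this is a generic uniform quantizer for illustration, not necessarily the exact quantization scheme used in the paper:

```python
import numpy as np

def quantize_weights(w, bitwidth):
    """Uniformly quantize a weight vector to `bitwidth` bits (generic sketch)."""
    levels = 2 ** bitwidth - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    q = np.round((w - w_min) / scale)              # integer codes in [0, levels]
    return q * scale + w_min, q.astype(np.uint8)   # dequantized weights, transmitted codes

# Each selected device would upload the integer codes plus (w_min, scale);
# the server dequantizes and averages them into the quantized global model.
local_w = np.random.randn(1000).astype(np.float32)
dequant_w, codes = quantize_weights(local_w, bitwidth=4)
print("mean quantization error:", np.abs(local_w - dequant_w).mean())
```

Lower bitwidths shrink the uplink payload per device but enlarge the quantization error, which is exactly the trade-off the joint bitwidth and device-selection optimization balances.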
In this paper, a semantic communication framework for textual data transmission is proposed. In the studied model, a base station (BS) extracts the semantic information from textual data and transmits it to each user. The semantic information is modeled by a knowledge graph (KG) that consists of a set of semantic triples. After receiving the semantic information, each user recovers the original text using a graph-to-text generation model. To measure the performance of the considered semantic communication framework, a metric of semantic similarity (MSS) is proposed that jointly captures the semantic accuracy and completeness of the recovered text. Due to wireless resource limitations, the BS may not be able to transmit the entire semantic information to each user while satisfying the transmission delay constraint. Hence, the BS must select an appropriate resource block for each user as well as determine and transmit part of the semantic information to the users. We therefore formulate an optimization problem whose goal is to maximize the total MSS by jointly optimizing the resource allocation policy and determining the partial semantic information to be transmitted. To solve this problem, a proximal policy optimization-based reinforcement learning (RL) algorithm integrated with an attention network is proposed. The proposed algorithm can evaluate the importance of each triple in the semantic information using the attention network and then build a relationship between the importance distribution of the triples and the total MSS. Compared to traditional RL algorithms, the proposed algorithm can dynamically adjust its learning rate, thus ensuring convergence to a locally optimal solution.
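The core decision is which KG triples to send when the budget is tight. The sketch below uses a simple greedy rule driven by importance scores; in the paper this decision is made by the attention-augmented RL policy, so both the greedy rule and the length-based cost are illustrative assumptions:

```python
def select_triples(triples, importance, capacity):
    """Greedy sketch: keep the most important semantic triples that fit the
    per-user transmission budget. `importance` would come from the attention
    network in the paper; here it is just a list of scores."""
    order = sorted(range(len(triples)), key=lambda i: importance[i], reverse=True)
    chosen, used = [], 0
    for i in order:
        cost = len(" ".join(triples[i]))          # crude proxy for transmission cost
        if used + cost <= capacity:
            chosen.append(triples[i])
            used += cost
    return chosen

triples = [("Einstein", "bornIn", "Ulm"),
           ("Einstein", "field", "physics"),
           ("Ulm", "locatedIn", "Germany")]
importance = [0.7, 0.9, 0.2]                      # hypothetical attention scores
print(select_triples(triples, importance, capacity=40))
```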
We introduce the initial release of our software Robustar, which aims to improve the robustness of vision classification machine learning models from a data-driven perspective. Building upon the recent understanding that the lack of robustness in machine learning models stems from the models' tendency to learn spurious features, we aim to address this problem from the data perspective by removing spurious features from the data before training. In particular, we introduce a piece of software that helps users better prepare data for training image classification models by allowing them to annotate spurious features at the pixel level of images. To facilitate this process, our software also leverages recent advances to help identify potential images and pixels worth attention, and to continue training with the newly annotated data. Our software is hosted at the GitHub repository https://github.com/haohanwang/robustar.
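As a small sketch of how such pixel-level annotations could be consumed downstream, the snippet below zeroes out the pixels a user marked as spurious before the image enters training; the binary-mask format and fill strategy are assumptions for illustration, not Robustar's actual annotation format or pipeline:

```python
import numpy as np

def remove_spurious_pixels(image, spurious_mask, fill_value=0):
    """Blank out pixels annotated as spurious (e.g., background cues the model
    should not rely on). Mask format is an illustrative assumption."""
    cleaned = image.copy()
    cleaned[spurious_mask.astype(bool)] = fill_value
    return cleaned

image = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
mask = np.zeros((32, 32), dtype=np.uint8)
mask[:8, :8] = 1                                   # user marks a spurious corner region
cleaned = remove_spurious_pixels(image, mask)      # fed to training instead of `image`
```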
We present a new framework to reconstruct holistic 3D indoor scenes, including both the room background and indoor objects, from single-view images. Due to the severe occlusion in indoor scenes, existing methods can only produce 3D shapes of indoor objects with limited geometry quality. To address this, we propose an instance-aligned implicit function (InstPIFu) for detailed object reconstruction. Combined with an instance-aligned attention module, our method is empowered to disentangle mixed local features for the occluded instances. Furthermore, unlike previous methods that simply represent the room background as a 3D bounding box, a depth map, or a set of planes, we recover the fine geometry of the background via an implicit representation. Extensive experiments on the SUN RGB-D, Pix3D, 3D-FUTURE, and 3D-FRONT datasets demonstrate that our method outperforms existing approaches in both background and foreground object reconstruction. Our code and models will be made publicly available.
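For readers unfamiliar with pixel-aligned implicit functions, the sketch below shows the basic query pattern such methods build on: a 3D point is projected into the image, a local feature is sampled at that location, and an MLP predicts occupancy from the feature plus the point's depth. All layers and dimensions here are hypothetical placeholders, not InstPIFu's actual architecture or attention module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitQuery(nn.Module):
    """Minimal pixel-aligned implicit-function query (illustrative only)."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, feat_map, uv, depth):
        # feat_map: (B, C, H, W); uv: (B, N, 2) in [-1, 1]; depth: (B, N, 1)
        grid = uv.unsqueeze(2)                                        # (B, N, 1, 2)
        sampled = F.grid_sample(feat_map, grid, align_corners=True)   # (B, C, N, 1)
        sampled = sampled.squeeze(-1).transpose(1, 2)                 # (B, N, C)
        return self.mlp(torch.cat([sampled, depth], dim=-1))          # (B, N, 1) occupancy

feat_map = torch.randn(1, 32, 64, 64)     # image feature map
uv = torch.rand(1, 128, 2) * 2 - 1        # projected 2D locations of 128 query points
depth = torch.rand(1, 128, 1)             # their depths
occupancy = ImplicitQuery()(feat_map, uv, depth)
```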
How to improve discriminative feature learning is central to classification. Existing works address this problem by explicitly increasing inter-class separability and intra-class similarity, whether by constructing positive and negative pairs for contrastive learning or by imposing tighter class-separating margins. These methods do not exploit the similarity between different classes, as they adhere to the i.i.d. assumption on the data. In this paper, we embrace the real-world data distribution setting in which some classes share semantic overlaps due to their similar appearances or concepts. Under this hypothesis, we propose a novel regularization to improve discriminative learning. We first calibrate the estimated highest likelihood of a sample based on its semantically neighboring classes, then encourage the overall likelihood predictions to be deterministic by imposing an adaptive exponential penalty. As the gradient of the proposed method is roughly proportional to the uncertainty of the predicted likelihoods, we name it adaptive discriminative regularization (ADR); it is trained along with a standard cross-entropy loss in classification. Extensive experiments demonstrate that it yields consistent and non-trivial performance improvements across a variety of visual classification tasks (over 10 benchmarks). Furthermore, we find it is robust to long-tailed and noisy-label data distributions. Its flexible design makes it compatible with mainstream classification architectures and losses.
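The snippet below is a loose sketch of the general idea of pairing cross-entropy with an uncertainty-weighted penalty that pushes predictions to be more deterministic; the exact calibration over semantically neighboring classes and the precise penalty form of ADR are not reproduced here, so treat this as an assumption-laden illustration:

```python
import torch
import torch.nn.functional as F

def adr_like_loss(logits, targets, alpha=0.1):
    """Cross entropy plus an exponential penalty on the top predicted likelihood.
    The penalty (and its gradient) is larger when the prediction is uncertain;
    this is an illustrative stand-in, not the exact ADR formulation."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    top_prob = probs.max(dim=-1).values
    penalty = torch.exp(-top_prob).mean()   # shrinks as the top likelihood approaches 1
    return ce + alpha * penalty

logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = adr_like_loss(logits, targets)
loss.backward()
```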
In open-book question answering (OBQA) tasks, selecting relevant passages and sentences from distracting information is crucial for reasoning about the answer to a question. The HotpotQA dataset is designed to teach and evaluate systems on both passage ranking and sentence selection. Many existing frameworks use separate models to select relevant passages and sentences respectively. Such systems not only have high complexity in terms of model parameters, but also fail to take advantage of training the two tasks together, even though one task can benefit the other. In this work, we propose a simple yet effective framework that addresses these limitations by jointly ranking passages and selecting sentences. Furthermore, we propose consistency and similarity constraints to promote the correlation and interaction between passage ranking and sentence selection. Experiments show that our framework achieves results competitive with previous systems and outperforms them by 28% in terms of exact matching of relevant sentences on the HotpotQA dataset.
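To make "joint training with a consistency constraint" concrete, the sketch below combines a passage-ranking loss, a sentence-selection loss, and one simple consistency-style term that ties a passage's score to the best score among its sentences. The specific loss forms and the consistency term are hypothetical; the paper's actual consistency and similarity constraints may differ:

```python
import torch
import torch.nn.functional as F

def joint_loss(passage_logits, passage_labels, sent_logits, sent_labels,
               sent_to_passage, lam=0.1):
    """Sketch of jointly training passage ranking and sentence selection."""
    l_passage = F.binary_cross_entropy_with_logits(passage_logits, passage_labels)
    l_sentence = F.binary_cross_entropy_with_logits(sent_logits, sent_labels)
    # Consistency: a passage's probability should agree with the max probability
    # among the sentences it contains.
    p_prob = torch.sigmoid(passage_logits)
    s_prob = torch.sigmoid(sent_logits)
    best_sent = torch.stack([s_prob[sent_to_passage == i].max()
                             for i in range(passage_logits.numel())])
    l_consistency = F.mse_loss(p_prob, best_sent)
    return l_passage + l_sentence + lam * l_consistency

passage_logits = torch.randn(3)                       # 3 candidate passages
passage_labels = torch.tensor([1., 0., 0.])
sent_logits = torch.randn(7)                          # 7 sentences across those passages
sent_labels = torch.tensor([1., 0., 0., 0., 0., 0., 0.])
sent_to_passage = torch.tensor([0, 0, 0, 1, 1, 2, 2]) # which passage each sentence belongs to
print(joint_loss(passage_logits, passage_labels, sent_logits, sent_labels, sent_to_passage))
```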
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models spanning different input representations (e.g., point clouds, voxels, projected images), network architectures, and training schemes. This study yields two insights: 1) the input representation plays a crucial role in robustness, with different representations behaving very differently under specific corruptions; 2) although state-of-the-art LiDAR semantic segmentation methods achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on these observations, we design a robust LiDAR segmentation model (RLSeg) that greatly boosts robustness with simple but effective modifications. We hope our benchmark, comprehensive analysis, and observations will boost future research in robust LiDAR semantic segmentation for safety-critical applications.
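As a toy example of the evaluation protocol such a benchmark implies, the snippet below applies two simple out-of-domain corruptions to a point cloud before re-running a trained model on it; these two corruptions are illustrative stand-ins, not the 16 corruptions defined in SemanticKITTI-C:

```python
import numpy as np

def corrupt_point_cloud(points, jitter_std=0.02, drop_ratio=0.1, seed=0):
    """Add Gaussian measurement noise to xyz coordinates and randomly drop points
    (generic corruptions for robustness testing, not the benchmark's definitions)."""
    rng = np.random.default_rng(seed)
    noisy = points + rng.normal(0.0, jitter_std, size=points.shape)
    keep = rng.random(len(noisy)) > drop_ratio
    return noisy[keep]

points = np.random.rand(2048, 3).astype(np.float32)   # dummy LiDAR scan (x, y, z)
corrupted = corrupt_point_cloud(points)
# A segmentation model would then be evaluated on `corrupted` vs. the clean scan,
# and the drop in mIoU reported as a robustness measure.
```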
The Gaussian process state-space model (GPSSM) is a fully probabilistic state-space model that has attracted much attention over the past decade. However, the outputs of the transition function in existing GPSSMs are assumed to be independent, meaning that these models cannot exploit the inductive biases between different outputs and thus lose some modeling capacity. To address this issue, this paper proposes an output-dependent and more realistic GPSSM by utilizing the well-known, simple yet practical linear model of coregionalization (LMC) framework to represent the output dependency. To jointly learn the output-dependent GPSSM and infer the latent states, we propose a variational sparse GP-based learning method that only mildly increases the computational complexity. Experiments on both synthetic and real datasets demonstrate the superiority of the output-dependent GPSSM in terms of learning and inference performance.
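For readers unfamiliar with the LMC construction, the sketch below builds the joint covariance it induces: each of the D outputs is a linear mixture of Q independent latent GPs, f_d(x) = sum_q A[d, q] g_q(x), which couples the outputs through the mixing matrix A. The kernel and mixing matrix chosen here are illustrative, not the configuration used in the paper:

```python
import numpy as np

def lmc_covariance(X, latent_kernels, A):
    """Joint covariance over D outputs at N inputs under the linear model of
    coregionalization: K = sum_q (a_q a_q^T) kron K_q, where a_q = A[:, q]."""
    N, (D, Q) = len(X), A.shape
    K = np.zeros((D * N, D * N))
    for q, kern in enumerate(latent_kernels):
        Kq = kern(X[:, None], X[None, :])              # (N, N) latent covariance
        K += np.kron(np.outer(A[:, q], A[:, q]), Kq)   # mix into output blocks
    return K

rbf = lambda a, b: np.exp(-0.5 * np.sum((a - b) ** 2, axis=-1))
X = np.linspace(0, 1, 5).reshape(-1, 1)                # 5 inputs, 1-D
A = np.array([[1.0, 0.3], [0.5, 0.8]])                 # D=2 outputs, Q=2 latent GPs
K = lmc_covariance(X, [rbf, rbf], A)                   # (10, 10) joint covariance
```

The off-diagonal blocks of K are what an independent-output GPSSM sets to zero, which is exactly the dependency the output-dependent GPSSM recovers.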
Accurate polyp segmentation is of great importance for colorectal cancer diagnosis and treatment. However, due to the high cost of producing accurate mask annotations, existing polyp segmentation methods suffer from severe data shortage and impaired model generalization. Conversely, coarse polyp bounding-box annotations are more accessible. In this paper, we therefore propose a boosted BoxPolyp model that makes full use of both accurate mask annotations and extra coarse box annotations. In practice, the box annotations are used to alleviate the over-fitting issue of previous polyp segmentation models, with fine-grained polyp areas generated through an iteratively boosted segmentation model. To achieve this goal, a fusion filter sampling (FFS) module is first proposed to generate pixel-wise pseudo labels from box annotations with less noise, leading to significant performance improvements. Besides, considering the appearance consistency of the same polyp, an image consistency (IC) loss is designed. The IC loss explicitly narrows the distance between features extracted by two different networks, which improves the robustness of the model. Note that our BoxPolyp is a plug-and-play model, which can be merged into any appealing backbone. Quantitative and qualitative experimental results on five challenging benchmarks confirm that our proposed model outperforms previous state-of-the-art methods by a large margin.
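As a minimal sketch of an IC-style consistency term, the snippet below pulls features of the same polyp image extracted by two different networks toward each other using cosine distance; the particular distance and normalization are assumptions for illustration and may differ from the loss actually used in BoxPolyp:

```python
import torch
import torch.nn.functional as F

def image_consistency_loss(feat_a, feat_b):
    """Encourage two networks to produce similar features for the same input
    (cosine distance between L2-normalized, flattened feature maps)."""
    feat_a = F.normalize(feat_a.flatten(1), dim=1)
    feat_b = F.normalize(feat_b.flatten(1), dim=1)
    return (1.0 - (feat_a * feat_b).sum(dim=1)).mean()

feat_a = torch.randn(4, 256, 16, 16)   # features from network A
feat_b = torch.randn(4, 256, 16, 16)   # features from network B
print(image_consistency_loss(feat_a, feat_b))
```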