Recent advances in visual representation learning have produced an abundance of powerful off-the-shelf features, ready to use for numerous downstream tasks. This work aims to assess how well these features preserve information about objects, such as their spatial location, their visual properties, and their relative relationships. We propose to do so by evaluating them in the context of visual reasoning, where multiple objects with complex relationships and different attributes are at play. More specifically, we introduce a protocol to evaluate visual representations for the task of Visual Question Answering. In order to decouple visual feature extraction from reasoning, we design a specific attention-based reasoning module which is trained on the frozen visual representations to be evaluated, in a spirit similar to standard feature evaluations relying on shallow networks. We compare two types of visual representations, densely extracted local features and object-centric ones, against the performance of a perfect image representation using ground truth. Our main findings are twofold. First, despite excellent performance on classical proxy tasks, such representations fall short for solving complex reasoning problems. Second, object-centric features better preserve the critical information necessary to perform visual reasoning. Our proposed framework shows how to approach this evaluation methodically.
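As an illustration of the probing setup described above, here is a minimal sketch: frozen visual features enter a small trainable attention module that is the only part optimized for VQA. Module names, dimensions, and the single-query attention design are our own illustrative choices, not the authors' code.

```python
# Illustrative probe: a frozen feature extractor feeds a small trainable
# attention module that fuses visual tokens with a question embedding.
import torch
import torch.nn as nn

class ReasoningProbe(nn.Module):
    def __init__(self, vis_dim=2048, q_dim=512, hid=512, n_answers=1000):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hid)
        self.attn = nn.MultiheadAttention(hid, num_heads=8, batch_first=True)
        self.q_proj = nn.Linear(q_dim, hid)
        self.head = nn.Linear(hid, n_answers)

    def forward(self, vis_tokens, q_emb):
        # vis_tokens: (B, N, vis_dim) frozen local or object-centric features
        # q_emb:      (B, q_dim) question embedding
        v = self.vis_proj(vis_tokens)
        q = self.q_proj(q_emb).unsqueeze(1)   # (B, 1, hid) query
        fused, _ = self.attn(q, v, v)         # attend over visual tokens
        return self.head(fused.squeeze(1))    # answer logits

frozen_feats = torch.randn(4, 36, 2048)   # e.g. 36 region features per image
question = torch.randn(4, 512)
logits = ReasoningProbe()(frozen_feats, question)  # only the probe is trained
```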
3D human whole-body pose estimation aims to localize precise 3D keypoints on the entire human body, including the face, hands, body, and feet. Due to the lack of a large-scale fully annotated 3D whole-body dataset, a common approach has been to train several deep networks separately on datasets dedicated to specific body parts, and combine them during inference. This approach suffers from complex training and inference pipelines because of the different biases in each dataset used. It also lacks a common benchmark, which makes it difficult to compare different methods. To address these issues, we introduce Human3.6M 3D WholeBody (H3WB), which provides whole-body annotations for the Human3.6M dataset using the COCO Wholebody layout. H3WB is a large-scale dataset with 133 whole-body keypoint annotations on 100K images, made possible by our new multi-view pipeline. Along with H3WB, we propose 3 tasks: i) 3D whole-body pose lifting from a complete 2D whole-body pose, ii) 3D whole-body pose lifting from an incomplete 2D whole-body pose, and iii) 3D whole-body pose estimation from a single RGB image. We also report several baselines from popular methods for these tasks. The dataset is publicly available at \url{https://github.com/wholebody3d/wholebody3d}.
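For intuition on task i), here is a toy 2D-to-3D lifting baseline, assuming a plain MLP over the 133 COCO-WholeBody keypoints; sizes and architecture are illustrative, not the paper's reported baselines.

```python
# Toy baseline for 2D-to-3D whole-body lifting: an MLP mapping the 133
# COCO-WholeBody keypoints from (x, y) to (x, y, z).
import torch
import torch.nn as nn

N_KPTS = 133  # COCO-WholeBody layout: body, feet, face, hands

lifter = nn.Sequential(
    nn.Linear(N_KPTS * 2, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, N_KPTS * 3),
)

pose_2d = torch.randn(8, N_KPTS * 2)          # flattened (x, y) per keypoint
pose_3d = lifter(pose_2d).view(8, N_KPTS, 3)  # predicted (x, y, z) per keypoint
```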
Class-incremental learning requires plasticity and stability in order to learn from new data while retaining past knowledge. Due to catastrophic forgetting, finding a compromise between these two properties is particularly challenging when no memory buffer is available. Mainstream approaches need to store two deep models, since they integrate new classes using fine-tuning combined with knowledge distillation from the previous incremental state. We propose a method which has a similar number of parameters but distributes them differently, in order to find a better balance between plasticity and stability. Following the approach already deployed by transfer-based incremental methods, we freeze the feature extractor after the initial state. Classes from the oldest incremental states are trained on this frozen extractor to ensure stability. The most recent classes are predicted using a partially fine-tuned model in order to introduce plasticity. Our proposed plasticity layer can be incorporated into any transfer-based method designed for memory-free incremental learning, and we apply it to two such methods. Evaluation is done with three large-scale datasets. Results show that performance gains are obtained in all tested configurations compared to existing methods.
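A schematic of the prediction routing described above, assuming a set-up where an old-class head runs on the frozen initial extractor and recent classes come from a partially fine-tuned copy; function names and the score-fusion rule are our assumptions, not the paper's code.

```python
# Old classes are scored with a classifier on the frozen initial extractor
# (stability); recent classes with a partially fine-tuned model (plasticity).
import torch

def predict(x, frozen_extractor, old_head, tuned_model, n_old):
    with torch.no_grad():
        old_logits = old_head(frozen_extractor(x))   # stable, frozen branch
    new_logits = tuned_model(x)                      # plastic, fine-tuned branch
    # Concatenate scores so old classes come from the stable branch
    return torch.cat([old_logits[:, :n_old], new_logits[:, n_old:]], dim=1)
```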
Generative Adversarial Networks (GANs) are known to face model misspecification when learning disconnected distributions. Indeed, a continuous mapping from a unimodal latent distribution to a disconnected one is impossible, so GANs necessarily generate samples outside the support of the target distribution. This raises a fundamental question: what is the latent space partition that minimizes the measure of these areas? Building on recent results from geometric measure theory, we prove that an optimal GAN must structure its latent space as a "simplicial cluster", a Voronoi partition whose cells are convex cones, when the dimension of the latent space is larger than the number of modes. In this configuration, each Voronoi cell maps to a distinct mode of the data. We derive upper and lower bounds on the optimal precision of GANs learning disconnected manifolds. Interestingly, the two bounds have the same order of decrease: $\sqrt{\log m}$, where $m$ is the number of modes. Finally, we perform several experiments to exhibit the geometry of the latent space and experimentally show that GANs have a geometry similar to the theoretical one.
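A toy numerical illustration of such a "simplicial cluster", assuming nearest-anchor assignment over equal-norm anchors, which yields a Voronoi partition whose cells are convex cones; this is purely illustrative, not the paper's construction.

```python
# Partition latent space with a Voronoi diagram over m equal-norm anchors;
# the induced cells are convex cones (scale-invariant membership).
import numpy as np

rng = np.random.default_rng(0)
m, d = 5, 16                                   # modes, latent dimension (d > m)
anchors = rng.normal(size=(m, d))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)

z = rng.normal(size=(1000, d))                 # unimodal (Gaussian) latent samples
cell = np.argmax(z @ anchors.T, axis=1)        # nearest equal-norm anchor
# Scaling a sample does not change its cell, so each cell is a cone:
assert np.all(cell == np.argmax((3.0 * z) @ anchors.T, axis=1))
```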
Recent work on observer networks has shown promising results for out-of-distribution (OOD) detection in semantic segmentation. These methods struggle to precisely locate the points of interest in the image, i.e., the anomalies. This limitation stems from the difficulty of making fine-grained predictions at the pixel level. To address this issue, we provide instance knowledge to the observer. We extend the ObsNet approach by harnessing instance mask predictions. We use an additional object detector to filter and aggregate the observer's predictions. Finally, we predict a unique anomaly score for each instance in the image. We show that our proposed method accurately disentangles in-distribution objects from out-of-distribution ones on three datasets.
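A sketch of the instance-level aggregation idea, assuming mean pooling of pixel-wise observer scores inside each predicted instance mask; the pooling choice is our assumption.

```python
# Pool pixel-wise observer scores inside each predicted instance mask to get
# one anomaly score per instance.
import numpy as np

def instance_anomaly_scores(pixel_scores, instance_masks):
    """pixel_scores: (H, W) observer error map; instance_masks: list of (H, W) bools."""
    return [float(pixel_scores[mask].mean()) for mask in instance_masks]

scores = np.random.rand(64, 64)                        # dummy observer output
masks = [np.zeros((64, 64), bool) for _ in range(2)]
masks[0][10:20, 10:20] = True
masks[1][40:60, 5:25] = True
print(instance_anomaly_scores(scores, masks))          # one score per instance
```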
Today's state-of-the-art machine learning models are virtually impossible to scrutinize. A key challenge for explainability methods is to help researchers open these black boxes, by revealing the strategy that led to a given decision, by characterizing their internal states, or by studying the underlying data representation. To address this challenge, we have developed Xplique: a software library for explainability which includes representative explainability methods as well as associated evaluation metrics. It interfaces with one of the most popular learning libraries, TensorFlow, as well as other libraries including PyTorch, scikit-learn and Theano. The code is licensed under the MIT license and is freely available at github.com/deel-ai/xplique.
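A hypothetical minimal usage of Xplique's attribution API, based on the project's documentation; the exact interface may differ, so check the repository.

```python
# Hypothetical usage sketch (verify against github.com/deel-ai/xplique):
# compute saliency attribution maps for a Keras classifier.
import tensorflow as tf
from xplique.attributions import Saliency

model = tf.keras.applications.MobileNetV2()       # any Keras classifier
images = tf.random.uniform((4, 224, 224, 3))      # placeholder batch
labels = tf.one_hot([1, 2, 3, 4], depth=1000)     # one-hot targets to explain

explainer = Saliency(model)
explanations = explainer.explain(images, labels)  # attribution map per input
```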
Advances in computer vision are pushing the limits of image manipulation, with generative models sampling detailed images on various tasks. However, a specialized model is often developed and trained for each specific task, even though many image editing tasks share similarities. In denoising, colorization, or image synthesis, one always aims at generating a realistic image from a low-quality one. In this paper, we aim to take a step towards a unified approach for image editing. To do so, we propose EdiBERT, a bidirectional transformer trained in a discrete latent space built by a vector-quantized auto-encoder. We argue that such a bidirectional model is well suited for image manipulation, since any patch can be resampled conditionally on the whole image. Using this unique and simple training objective, we show that the resulting model matches state-of-the-art performance on a wide variety of tasks: image denoising, image completion, and image composition.
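A conceptual sketch of the editing loop this implies, assuming stand-in encode/decode and transformer components; none of this is the released EdiBERT code.

```python
# Encode the image to a grid of discrete codes, mask the codes under the
# edited region, and let a bidirectional transformer resample them
# conditioned on the whole grid.
import torch

def edit(image, region_mask, vqgan, bert, mask_token_id):
    codes = vqgan.encode(image)                    # (B, H*W) discrete tokens
    codes[:, region_mask] = mask_token_id          # hide the region to edit
    logits = bert(codes)                           # (B, H*W, vocab), bidirectional
    sampled = torch.distributions.Categorical(logits=logits).sample()
    codes[:, region_mask] = sampled[:, region_mask]
    return vqgan.decode(codes)                     # re-synthesize edited image
```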
This paper focuses on the non-asymptotic diffusion time in asynchronous gossip protocols. Asynchronous gossip protocols are designed to perform distributed computation in a network of nodes by randomly exchanging messages over the associated graph. In order to achieve consensus among the nodes, a minimum number of messages must be exchanged. We provide a probabilistic bound on this number for the general case. We give an explicit formula for fully connected graphs, which depends only on the number of nodes, and an approximation for any graph, which depends on the spectrum of the graph.
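A small simulation of asynchronous pairwise-averaging gossip on a fully connected graph, counting message exchanges until near-consensus; the stopping rule and protocol details are our own toy choices, not the paper's analysis.

```python
# Randomized pairwise averaging on a complete graph: at each step a random
# pair of nodes averages their values; count exchanges until the spread of
# node values falls below a tolerance.
import numpy as np

def gossip_messages(n, eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)                            # initial node values
    messages = 0
    while x.max() - x.min() > eps:
        i, j = rng.choice(n, size=2, replace=False)   # random activated pair
        x[i] = x[j] = (x[i] + x[j]) / 2               # pairwise averaging
        messages += 1
    return messages

print(gossip_messages(50))   # empirical diffusion time for one run
```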
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated but instead only relies on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classified 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-measure, respectively.
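A schematic of the two invariant-based rejection rules, assuming invariants and behaviors are encoded as simple sets; this is a simplification of the paper's logic, not its implementation.

```python
# A patch is deemed overfitting if it breaks invariants of the correct
# behavior (rule 1) or preserves invariants tied to the buggy behavior (rule 2).
def is_overfitting(patched_invs, correct_specs, error_behaviors):
    violates_correct = not correct_specs <= patched_invs   # (1) broken specification
    keeps_errors = bool(patched_invs & error_behaviors)    # (2) buggy behavior remains
    return violates_correct or keeps_errors

# Here the patch satisfies the correct spec and drops the error behavior:
print(is_overfitting({"x > 0", "y == x"}, {"x > 0"}, {"y < 0"}))  # False
```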
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.