Real-world data is high-dimensional: a book, image, or musical performance can easily contain hundreds of thousands of elements even after compression. However, the most commonly used autoregressive models, Transformers, are prohibitively expensive to scale to the number of inputs and layers needed to capture this long-range structure. We develop Perceiver AR, an autoregressive, modality-agnostic architecture that uses cross-attention to map long-range inputs to a small number of latents while also maintaining end-to-end causal masking. Perceiver AR can directly attend to a hundred thousand tokens, enabling practical long-context density estimation without hand-crafted sparsity patterns or memory mechanisms. When trained on images or music, Perceiver AR generates outputs with clear long-term coherence and structure. Our architecture also obtains state-of-the-art likelihoods on long-sequence benchmarks, including 64 x 64 ImageNet images and PG-19 books.
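The core mechanism is a single cross-attention step in which a small set of latents, taken from the most recent input positions, queries the full long input under a causal mask. The sketch below is a minimal NumPy illustration of that idea, not the paper's implementation; the shapes, random projections, and the choice of drawing the latents from the tail of the sequence are assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_cross_attention(inputs, n_latents, d_k):
    """Map a long input sequence onto a small set of latents.

    Latents are taken from the last `n_latents` input positions; each latent
    may only attend to input positions at or before its own position, which
    preserves an end-to-end causal mask.
    """
    M, d = inputs.shape
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))

    latents = inputs[-n_latents:]            # queries come from the tail
    q = latents @ Wq                         # (n_latents, d_k)
    k = inputs @ Wk                          # (M, d_k)
    v = inputs @ Wv                          # (M, d_k)

    scores = q @ k.T / np.sqrt(d_k)          # (n_latents, M)
    # Causal mask: latent i sits at input position M - n_latents + i and
    # may not attend to any later position.
    latent_pos = np.arange(M - n_latents, M)[:, None]
    input_pos = np.arange(M)[None, :]
    scores = np.where(input_pos <= latent_pos, scores, -np.inf)

    return softmax(scores) @ v               # (n_latents, d_k)

# Example: a 1024-token input mapped to 64 latents.
x = np.random.default_rng(1).standard_normal((1024, 32))
print(causal_cross_attention(x, n_latents=64, d_k=32).shape)  # (64, 32)
```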
Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams. To this end, we introduce the notion of a multimodal versatile network: a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding. Driven by versatility, we also introduce a novel process of deflation, so that the networks can be effortlessly applied to visual data in the form of video or a static image. We demonstrate how such networks trained on large collections of unlabelled video data can be applied on video, video-text, image and audio tasks. Equipped with these representations, we obtain state-of-the-art performance on multiple challenging benchmarks including UCF101, HMDB51, Kinetics600, AudioSet and ESC-50 when compared to previous self-supervised work. Our models are publicly available [1, 2, 3].
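As a rough illustration of what versatility across video and images requires, the snippet below shows the naive alternative to deflation: feeding a still image to a video network by replicating it along the time axis. This is a hypothetical baseline for contrast only, not the paper's deflation procedure, which instead adapts the network so that such replication becomes unnecessary.

```python
import numpy as np

def image_as_static_video(image, n_frames=8):
    """Naive way to feed a still image (H, W, C) to a video network:
    replicate it along the time axis to form a static (T, H, W, C) clip."""
    return np.repeat(image[None, ...], n_frames, axis=0)

# Example: a 224x224 RGB image becomes an 8-frame static clip.
clip = image_as_static_video(np.zeros((224, 224, 3)))
print(clip.shape)  # (8, 224, 224, 3)
```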
Figure 1: We describe an efficient approach to learn visual representations from misaligned and noisy narrations (bottom) automatically extracted from instructional videos (top). Our video representations are learnt from scratch without relying on any manually annotated visual dataset, yet outperform all self-supervised and many fully-supervised methods on several video recognition benchmarks.
Learning text-video embeddings usually requires a dataset of video clips with manually provided captions. However, such datasets are expensive and time consuming to create and therefore difficult to obtain on a large scale. In this work, we propose instead to learn such embeddings from video data with readily available natural language annotations in the form of automatically transcribed narrations. The contributions of this work are three-fold. First, we introduce HowTo100M: a large-scale dataset of 136 million video clips sourced from 1.22M narrated instructional web videos depicting humans performing and describing over 23k different visual tasks. Our data collection procedure is fast, scalable and does not require any additional manual annotation. Second, we demonstrate that a text-video embedding trained on this data leads to state-of-the-art results for text-to-video retrieval and action localization on instructional video datasets such as YouCook2 or CrossTask. Finally, we show that this embedding transfers well to other domains: fine-tuning on generic YouTube videos (MSR-VTT dataset) and movies (LSMDC dataset) outperforms models trained on these datasets alone. Our dataset, code and models are publicly available [1].
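Once clips and narrations live in a joint embedding space, text-to-video retrieval reduces to a nearest-neighbour search under cosine similarity. The sketch below illustrates only that retrieval step; the embedding dimension, function names, and random vectors are placeholders, and the learned embedding networks themselves are not shown.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def rank_clips(text_embedding, clip_embeddings, k=5):
    """Rank video clips for one text query by cosine similarity
    in a shared text-video embedding space."""
    sims = l2_normalize(clip_embeddings) @ l2_normalize(text_embedding)
    order = np.argsort(-sims)
    return order[:k], sims[order[:k]]

# Toy example: 1000 clip embeddings and one query, both 256-dimensional.
rng = np.random.default_rng(0)
clips = rng.standard_normal((1000, 256))
query = rng.standard_normal(256)
top_ids, top_sims = rank_clips(query, clips)
print(top_ids, top_sims)
```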
Model extraction is a major threat to embedded deep neural network models, as it leverages an extended attack surface. Indeed, by physically accessing a device, an adversary may exploit side-channel leakages to extract critical information about a model (i.e., its architecture or internal parameters). Different adversarial objectives are possible, including a fidelity-based scenario where the architecture and parameters are precisely extracted (model cloning). We focus this work on software implementations of deep neural networks embedded in a high-end 32-bit microcontroller (Cortex-M7) and expose several challenges related to fidelity-based parameter extraction through side-channel analysis, from the basic multiplication operation to the feed-forward connections through the layers. To precisely extract the value of parameters represented in the single-precision floating-point IEEE-754 standard, we propose an iterative process that is evaluated with both simulations and traces from a Cortex-M7 target. To our knowledge, this work is the first to target such a high-end 32-bit platform. Importantly, we raise and discuss the remaining challenges for the complete extraction of a deep neural network model, most notably the critical case of biases.
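Fidelity-oriented extraction of parameters stored in IEEE-754 single precision ultimately has to recover a sign bit, an 8-bit exponent, and a 23-bit mantissa per weight. The snippet below only shows this bit-level decomposition as a reference point; it is not the proposed side-channel extraction procedure.

```python
import struct

def ieee754_fields(x):
    """Decompose a single-precision float into its IEEE-754 fields:
    1 sign bit, 8 exponent bits, 23 mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

# Example: 0.15625 = 1.01b * 2^(124 - 127)
print(ieee754_fields(0.15625))  # (0, 124, 2097152)
```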
Avoiding collisions between obstacles and vehicles such as cars, robots, or aircraft is essential to the development of automation and autonomy. To simplify the problem, many collision-avoidance algorithms and proofs treat the vehicle as a point mass, even though real vehicles are not points. In this paper, we consider a convex polygonal vehicle with nonzero area traveling along a two-dimensional trajectory. We derive an easily checkable, quantifier-free formula to determine whether a given obstacle will collide with the vehicle as it moves along the planned trajectory. We apply our method to two case studies of aircraft collision avoidance and study its performance.
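For intuition, a quantifier-free overlap test for two convex polygons can be written with the classical separating-axis theorem, as in the sketch below. This is a standard textbook check, not the formula derived in the paper, and it tests a single static configuration rather than a vehicle moving along a trajectory.

```python
import numpy as np

def edge_normals(poly):
    """Normals of the polygon's edges; orientation does not matter for SAT."""
    e = np.roll(poly, -1, axis=0) - poly
    return np.stack([-e[:, 1], e[:, 0]], axis=1)

def convex_polygons_collide(p, q):
    """Separating-axis test for two convex polygons given as (n, 2) vertex
    arrays. Returns True if they overlap, False if a separating axis exists."""
    for axis in np.vstack([edge_normals(p), edge_normals(q)]):
        pp, qq = p @ axis, q @ axis
        if pp.max() < qq.min() or qq.max() < pp.min():
            return False  # projections are disjoint on this axis
    return True

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(convex_polygons_collide(square, square + 0.5))  # True  (overlap)
print(convex_polygons_collide(square, square + 2.0))  # False (separated)
```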
We introduce DeepNash, an autonomous agent capable of learning to play the imperfect-information game Stratego from scratch, up to a human expert level. Stratego is one of the few iconic board games that artificial intelligence (AI) has not yet mastered. This popular game has an enormous game tree on the order of $10^{535}$ nodes, i.e., $10^{175}$ times larger than that of Go. It has the additional complexity of requiring decision-making under imperfect information, similar to Texas hold'em poker, which has a significantly smaller game tree (on the order of $10^{164}$ nodes). Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome. Episodes are long, often with hundreds of moves before a player wins, and situations in Stratego cannot easily be broken down into manageably sized sub-problems as in poker. For these reasons, Stratego has been a grand challenge for the field of AI for decades, and existing AI methods barely reach an amateur level of play. DeepNash uses a game-theoretic, model-free deep reinforcement learning method, without search, that learns to master Stratego via self-play. The Regularised Nash Dynamics (R-NaD) algorithm, a key component of DeepNash, converges to an approximate Nash equilibrium, instead of "cycling" around it, by directly modifying the underlying multi-agent learning dynamics. DeepNash beats existing state-of-the-art AI methods in Stratego and achieved a yearly (2022) and all-time top-3 ranking on the Gravon games platform, competing with human expert players.
We introduce a simulation-based, amortized Bayesian inference scheme to infer the parameters of random walks. Our approach learns the posterior distribution of the walk parameters with a likelihood-free method. In a first step, a graph neural network is trained on simulated data to learn optimized low-dimensional summary statistics of the random walk. In a second step, an invertible neural network generates the posterior distribution of the parameters from the learned summary statistics using variational inference. We apply our method to infer the parameters of a Brownian motion model from single trajectories. The computational complexity of the amortized inference procedure scales linearly with trajectory length, and its accuracy scales similarly to the Cramér-Rao bound at large lengths. The approach is robust to localization noise and generalizes to trajectories longer than those seen during training. Finally, we adapt this scheme to show that a finite decorrelation time in the environment can be inferred from a single trajectory.
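A minimal picture of the setup: simulate trajectories under known parameters, then reduce each trajectory to low-dimensional summaries. The sketch below uses a plain Brownian simulator and hand-crafted mean-squared-displacement summaries purely for illustration; the paper instead learns the summaries with a graph neural network and maps them to a posterior with an invertible network.

```python
import numpy as np

def simulate_brownian(n_steps, diffusion, dt=1.0, noise_std=0.0, rng=None):
    """Simulate a 2-D Brownian trajectory with optional localization noise."""
    rng = np.random.default_rng() if rng is None else rng
    steps = rng.normal(scale=np.sqrt(2 * diffusion * dt), size=(n_steps, 2))
    traj = np.cumsum(steps, axis=0)
    return traj + rng.normal(scale=noise_std, size=traj.shape)

def msd_summary(traj, max_lag=20):
    """Hand-crafted summary statistics: mean squared displacement per lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

traj = simulate_brownian(1000, diffusion=0.5, noise_std=0.1,
                         rng=np.random.default_rng(0))
print(msd_summary(traj)[:5])  # MSD at lags 1..5, roughly linear in the lag
```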
Humanoid robots could replace humans in hazardous situations, but most such situations are equally dangerous for them, meaning they have a significant chance of being damaged and falling. We hypothesize that humanoid robots will mostly be used in buildings, which makes them likely to be close to a wall. To avoid a fall, they can therefore lean on the nearest wall, as a human would, provided they can find within a few milliseconds where to place a hand. This paper introduces a method called D-Reflex, which learns a neural network that chooses this contact position given the wall orientation, the wall distance, and the posture of the robot. A whole-body controller then uses this contact position to reach a stable posture. We show that D-Reflex allows a simulated TALOS robot (1.75 m, 100 kg, 30 degrees of freedom) to avoid more than 75% of the avoidable falls and can work on the real robot.
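Conceptually, the learned component is a small network that, given the wall pose and the robot's configuration, evaluates candidate hand-contact positions. The sketch below is a generic MLP forward pass with made-up input layout and layer sizes; it only shows the kind of mapping involved, not the actual D-Reflex architecture or training procedure.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def score_contact(wall_distance, wall_orientation, joint_positions,
                  contact_xyz, layers):
    """MLP forward pass scoring one candidate hand-contact position given the
    wall pose and the robot's joint configuration (illustrative layout)."""
    x = np.concatenate([[wall_distance, wall_orientation],
                        joint_positions, contact_xyz])
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = relu(x)
    return float(x[0])  # higher = more likely to yield a stable lean

# Hypothetical shapes: 2 wall parameters + 30 joint angles + 3-D contact point.
rng = np.random.default_rng(0)
dims = [35, 64, 64, 1]
layers = [(rng.standard_normal((dims[i + 1], dims[i])) * 0.1,
           np.zeros(dims[i + 1])) for i in range(len(dims) - 1)]
print(score_contact(1.0, 0.3, np.zeros(30), np.array([0.8, 0.0, 1.2]), layers))
```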
This work details a highly efficient implementation of the 3D scale-invariant feature transform (SIFT) algorithm, for the purpose of machine learning from large sets of volumetric image data. The primary operations of the 3D SIFT code are implemented on a graphics processing unit (GPU), including convolution, sub-sampling, and 4D peak detection from scale-space pyramids. Performance improvements are quantified using 3D MRI human brain volumes of different individuals. A computationally efficient 3D keypoint descriptor is proposed based on the Binary Robust Independent Elementary Features (BRIEF) code, including a novel descriptor we call Ranked Robust Independent Elementary Features (RRIEF), and compared to the original 3D SIFT-Rank approach \citep{Toews2013efficient}. The GPU implementation provides a speedup of approximately 7X beyond an optimized CPU implementation, reducing computation from 33 seconds to 0.2 seconds for a 3D volume of (145, 174, 145) voxels with approximately 3000 keypoints. Notable speedups include the convolution operations (20X), 4D peak detection (3X), sub-sampling (3X), and Gaussian pyramid construction (2X). The efficient descriptor offers a 2X speedup and 6X memory savings compared to the standard SIFT-Rank descriptor, at the cost of reduced keypoint correspondence, revealing a trade-off between computational efficiency and algorithmic performance. The speedup of our implementation will allow for more efficient analysis of larger datasets. Our optimized GPU implementation of the 3D SIFT-Rank extractor is available at https://github.com/carluerjb/3d_sift_cuda.
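One of the accelerated operations, 4D peak detection, amounts to finding local extrema of a difference-of-Gaussians stack jointly over the three spatial axes and scale. The snippet below is a CPU-side NumPy/SciPy illustration of that operation under an assumed (scale, z, y, x) layout and a made-up threshold; the paper's contribution is a CUDA implementation of this and the other pyramid operations.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_4d_peaks(dog_stack, threshold=0.02):
    """Find local maxima of a difference-of-Gaussians stack over a 3x3x3x3
    neighbourhood spanning scale, z, y and x (CPU illustration only)."""
    local_max = maximum_filter(dog_stack, size=3, mode="nearest") == dog_stack
    return np.argwhere(local_max & (dog_stack > threshold))  # (scale, z, y, x)

# Synthetic example: one strong peak planted in a low-amplitude random stack.
rng = np.random.default_rng(0)
stack = rng.random((4, 16, 16, 16)) * 0.05
stack[2, 8, 8, 8] = 1.0
print(detect_4d_peaks(stack, threshold=0.5))  # [[2 8 8 8]]
```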