As shown by recent studies, systems powered by machine intelligence are vulnerable to test cases resulting from either adversarial manipulation or natural distribution shifts. This raises great concerns about deploying machine learning algorithms in real-world applications, especially in safety-critical domains such as autonomous driving (AD). On the other hand, traditional AD testing on naturalistic scenarios requires hundreds of millions of driving miles due to the high dimensionality and rarity of safety-critical scenarios in the real world. As a result, several approaches for autonomous driving evaluation have been explored; however, they are usually based on different simulation platforms, types of safety-critical scenarios, scenario generation algorithms, and driving route variations. Thus, despite a large amount of effort in autonomous driving testing, it is still challenging to compare and understand the effectiveness and efficiency of different testing scenario generation algorithms and testing mechanisms under similar conditions. In this paper, we aim to provide SafeBench, the first unified platform that integrates different types of safety-critical testing scenarios, scenario generation algorithms, and other variations such as driving routes and environments. Meanwhile, we implement 4 deep reinforcement learning-based AD algorithms with 4 types of input (e.g., bird's-eye view, camera) to perform fair comparisons on SafeBench. We find that our generated testing scenarios are indeed more challenging, and we observe a trade-off between the performance of AD agents under benign and safety-critical testing scenarios. We believe our unified platform SafeBench for large-scale and effective autonomous driving testing will motivate the development of new testing scenario generation methods and safe AD algorithms. SafeBench is available at https://safebench.github.io.
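To make the kind of unified comparison SafeBench aims for concrete, here is a minimal sketch of a scenario-based evaluation loop: every (agent, scenario generator, route) triple runs under one harness, so metrics from different generation algorithms become directly comparable. All interfaces below (`gen.sample`, `env.reset`, `agent.act`, the `info` fields) are hypothetical illustrations, not SafeBench's actual API.

```python
# Hypothetical sketch of a unified scenario-based evaluation loop.
# The names are illustrative only; they mimic the structure a platform
# like SafeBench provides, not its real interface.

def summarize(infos):
    """Aggregate per-step safety signals into episode-level metrics."""
    return {
        "collision_rate": sum(i.get("collision", 0) for i in infos) / max(len(infos), 1),
        "route_completion": infos[-1].get("route_completion", 0.0) if infos else 0.0,
    }

def evaluate(agents, scenario_generators, routes, env):
    results = {}
    for agent in agents:
        for gen in scenario_generators:
            for route in routes:
                scenario = gen.sample(route)            # safety-critical test case
                obs = env.reset(scenario=scenario, route=route)
                done, infos = False, []
                while not done:
                    action = agent.act(obs)             # e.g., from BEV or camera input
                    obs, done, info = env.step(action)
                    infos.append(info)                  # collisions, completion, ...
                results[(agent.name, gen.name, route.id)] = summarize(infos)
    return results
```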
The past few years have witnessed a growing interest in improving the perception performance of LiDARs on autonomous vehicles. While most of the existing works focus on developing new deep learning algorithms or model architectures, we study the problem from the physical design perspective, i.e., how different placements of multiple LiDARs influence learning-based perception. To this end, we introduce an easy-to-compute information-theoretic surrogate metric to quantitatively and quickly evaluate LiDAR placement for 3D detection of different types of objects. We also present a new data collection, detection model training, and evaluation framework in the realistic CARLA simulator to evaluate disparate multi-LiDAR configurations. Using several prevalent placements inspired by the designs of self-driving companies, we show through extensive experiments the correlation between our surrogate metric and the object detection performance of different representative algorithms on KITTI, validating the effectiveness of our LiDAR placement evaluation approach. Our results show that sensor placement is non-negligible in 3D point cloud-based object detection, contributing to a 5%~10% performance discrepancy in terms of average precision in challenging 3D object detection settings. We believe that this is one of the first studies to quantitatively investigate the influence of LiDAR placement on perception performance.
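The exact surrogate metric is defined in the paper; as a rough, hypothetical illustration of what an easy-to-compute information-theoretic placement score can look like, one can voxelize a region of interest and sum the prior occupancy entropy over the voxels a candidate placement can observe. The simplified formulation and names below are assumptions for illustration, not the paper's metric.

```python
# A hedged sketch of an information-theoretic LiDAR-placement score:
# observing a voxel resolves (at most) its prior occupancy uncertainty,
# so a placement is scored by the prior entropy of the voxels it covers.
import numpy as np

def placement_information_gain(prior_occupancy: np.ndarray,
                               covered: np.ndarray) -> float:
    """Score a LiDAR placement by the prior occupancy entropy it can resolve.

    prior_occupancy: per-voxel occupancy probability over a region of
        interest, shape (nx, ny, nz), e.g. estimated from object statistics.
    covered: boolean mask of voxels reached by at least one beam of the
        candidate placement (obtained by ray casting, omitted here).
    """
    p = np.clip(prior_occupancy, 1e-6, 1 - 1e-6)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # binary entropy per voxel
    return float(entropy[covered].sum())
```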
Different environments pose a great challenge to robust outdoor visual perception for long-term autonomous driving, and the generalization of learning-based algorithms across different environments remains an open problem. Although monocular depth prediction has been well studied recently, few works focus on robust learning-based depth prediction across different environments, e.g., under changing illumination and seasons, due to the lack of such a multi-environment real-world dataset and benchmark. To this end, we build SeasonDepth, the first cross-season monocular depth prediction dataset and benchmark, based on the CMU Visual Localization dataset. To benchmark depth estimation performance under different environments, we investigate representative and recent state-of-the-art open-source supervised, self-supervised, and domain adaptation depth prediction methods from the KITTI benchmark using several newly formulated metrics. Through extensive experimental evaluation on the proposed dataset, the influence of multiple environments on performance and robustness is analyzed qualitatively and quantitatively, showing that long-term monocular depth prediction is still challenging even with fine-tuning. We further point out promising avenues: self-supervised training and stereo geometry constraints help to enhance robustness to changing environments. The dataset is available at https://seasondepth.github.io, and the benchmark toolkit is available at https://github.com/seasondepth/seasondepth.
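The benchmark's newly formulated metrics are specified in the paper; as a hedged sketch of the general idea of measuring robustness across environments, one can report a standard per-environment error such as absolute relative error together with its spread across environments. The function names and the aggregation below are illustrative assumptions, not SeasonDepth's actual metric definitions.

```python
# A hedged sketch of cross-environment depth evaluation: a standard
# per-image error is averaged within each environment, then the mean and
# the spread across environments serve as accuracy and robustness signals.
import numpy as np

def abs_rel(pred: np.ndarray, gt: np.ndarray) -> float:
    mask = gt > 0                                    # valid ground-truth depths only
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))

def cross_environment_report(errors_by_env: dict[str, list[float]]) -> dict:
    per_env = {env: float(np.mean(e)) for env, e in errors_by_env.items()}
    vals = np.array(list(per_env.values()))
    return {
        "mean_abs_rel": float(vals.mean()),          # average accuracy
        "env_std": float(vals.std()),                # variability across environments
        "per_env": per_env,
    }
```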
Few-shot learning in the N-way K-shot scenario is an open challenge in machine learning. Many methods have been proposed to address this problem, such as Matching Networks and CLIP-Adapter. Although these methods have shown significant progress, the mechanism behind their success has not been well explored. In this paper, we interpret these few-shot learning methods via a causal mechanism. We show that existing methods can be viewed as specific forms of front-door adjustment, which removes the effect of confounders. Based on this, we introduce a general causal method for few-shot learning that considers not only the relationships between examples but also the diversity of representations. Experimental results demonstrate the superiority of our proposed method for few-shot classification on various benchmark datasets. Code is available in the supplementary material.
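For reference, front-door adjustment is the standard identification formula from causal inference: when a mediator Z intercepts all of X's effect on Y and an unobserved confounder affects both X and Y, the interventional distribution is identified as

```latex
P(Y \mid \mathrm{do}(X = x)) \;=\; \sum_{z} P(z \mid x) \sum_{x'} P(Y \mid x', z)\, P(x')
```

How the paper maps X, Z, and Y onto support examples, learned representations, and labels is its specific contribution and is not reproduced here.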
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over MIM pre-training from scratch on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
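As a hedged sketch of what "distilling token relations" can mean in practice, one option is to match the softmax-normalized query-key affinity maps of teacher and student with a KL divergence. The shapes, temperature, and specific relation choice below are illustrative; TinyMIM's precise formulation is in the paper.

```python
# A hedged sketch of token-relation distillation: instead of matching CLS
# tokens or raw features, match the token-to-token relation maps of
# teacher and student.
import torch
import torch.nn.functional as F

def relation_distill_loss(q_s, k_s, q_t, k_t, tau: float = 1.0):
    """q_*, k_*: (batch, tokens, dim) queries/keys from student and teacher."""
    rel_s = q_s @ k_s.transpose(-2, -1) / (q_s.shape[-1] ** 0.5)  # student affinities
    rel_t = q_t @ k_t.transpose(-2, -1) / (q_t.shape[-1] ** 0.5)  # teacher affinities
    log_p_s = F.log_softmax(rel_s / tau, dim=-1)
    p_t = F.softmax(rel_t / tau, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")
```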
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
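As a hedged sketch of the style-aware adaptation mechanism, the style code can predict a per-channel scale that modulates the feed-forward projection, which is equivalent to rescaling rows of its weight matrix per sample. The class and parameter names below are illustrative, not StyleTalk's actual implementation.

```python
# A hedged sketch of a style-aware adaptive feed-forward layer: a style
# code adjusts the effective feed-forward weights so one decoder can
# render different speaking styles.
import torch
import torch.nn as nn

class StyleAdaptiveFeedForward(nn.Module):
    def __init__(self, dim: int, hidden: int, style_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        # style code -> per-channel scale; scaling fc1's output channels is
        # equivalent to rescaling the rows of its weight matrix per sample
        self.to_scale = nn.Linear(style_dim, hidden)

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        scale = self.to_scale(style).unsqueeze(1)     # (batch, 1, hidden)
        h = torch.relu(self.fc1(x) * (1.0 + scale))   # style-modulated activations
        return self.fc2(h)
```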
Decompilation aims to transform a low-level programming language (LPL) (e.g., binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
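As a hedged illustration of the idea behind an Optimized Translation Unit, a function's instruction stream can be split into short fragments at control-flow boundaries so each fragment stays within a size a translation model handles well. The opcode set and length cap below are assumptions; NeurDP's actual OTU construction operates on its intermediate representation.

```python
# A hedged sketch of splitting a function into smaller translation units:
# cut at control-flow instructions or when a fragment grows too long.
BRANCH_OPS = {"jmp", "je", "jne", "call", "ret"}

def split_into_units(instructions: list[str], max_len: int = 16) -> list[list[str]]:
    units, current = [], []
    for ins in instructions:
        current.append(ins)
        opcode = ins.split()[0]
        if opcode in BRANCH_OPS or len(current) >= max_len:
            units.append(current)                 # close fragment at a boundary
            current = []
    if current:
        units.append(current)                     # trailing straight-line code
    return units
```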
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
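The GRN layer itself is compact; the sketch below follows the formulation in the ConvNeXt V2 paper's released code, assuming channels-last (N, H, W, C) tensors: aggregate a global L2 response per channel, normalize it by the cross-channel mean to create feature competition, then apply a learnable affine with an identity shortcut.

```python
# Global Response Normalization (GRN), per the ConvNeXt V2 formulation.
import torch
import torch.nn as nn

class GRN(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)    # global response per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)     # divisive cross-channel normalization
        return self.gamma * (x * nx) + self.beta + x         # calibrate + identity shortcut
```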
In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, one peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we sample the data distribution based on frequency of predicates into sub-distributions, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during the training process. The complementary predicate knowledge of these peers is then ensembled utilizing a consensus voting strategy, which simulates a civilized voting process in our society that emphasizes the majority opinion and diminishes the minority opinion. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We have established a new state-of-the-art (SOTA) on the SGCls task by achieving a mean of 31.6.
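As a hedged sketch of the consensus voting step, each peer's predicate logits can be turned into probabilities and averaged, so the majority opinion dominates the final prediction. The exact voting scheme PSCV uses is in the paper; the function below is only illustrative.

```python
# A hedged sketch of consensus voting over peers trained on different
# head/body/tail predicate sub-distributions.
import torch
import torch.nn.functional as F

def consensus_vote(peer_logits: list[torch.Tensor]) -> torch.Tensor:
    """peer_logits: list of (num_pairs, num_predicates) tensors, one per peer."""
    probs = torch.stack([F.softmax(l, dim=-1) for l in peer_logits])  # (peers, pairs, preds)
    consensus = probs.mean(dim=0)        # averaging lets the majority opinion dominate
    return consensus.argmax(dim=-1)      # final predicate prediction per object pair
```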