This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was co-located with the 34th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, encourage the standardization of tool interfaces, and bring together the neural network verification community. To this end, standardized formats for networks (ONNX) and specifications (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2022 iteration, 11 teams participated, competing on a diverse set of 12 scored benchmarks. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this iteration of the competition.
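As a rough illustration of how these standardized formats are consumed, the sketch below loads an ONNX network and falsification-tests a VNN-LIB-style box property by random sampling; the file name `net.onnx` and the bounds are hypothetical placeholders, not competition artifacts, and a real verifier would bound the entire input box rather than sample it.

```python
# Minimal sketch: sampling-based falsification of a VNN-LIB-like property
# ("inputs in a box imply output_0 <= 0") on an ONNX network.
# "net.onnx" and the bounds below are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("net.onnx")
input_name = sess.get_inputs()[0].name

lower = np.array([-0.1, -0.1, -0.1], dtype=np.float32)  # assumed input box
upper = np.array([ 0.1,  0.1,  0.1], dtype=np.float32)

rng = np.random.default_rng(0)
for _ in range(10_000):
    x = rng.uniform(lower, upper).astype(np.float32)
    y = sess.run(None, {input_name: x[None, :]})[0]
    if y[0, 0] > 0.0:  # property "output_0 <= 0" violated
        print("counterexample:", x, "->", y[0, 0])
        break
else:
    print("no counterexample found (not a proof, only sampling)")
```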
Behavior Trees, which originated in video games as a method for controlling NPCs, have since gained traction in the robotics community as a framework for describing how tasks are executed. BehaVerify is a tool that creates a nuXmv model from a py_trees behavior tree. For the standard composite nodes, this process is automatic and requires no additional user input. A variety of leaf nodes are automatically supported and require no additional user input, but custom leaf nodes require additional user input to be modeled correctly; BehaVerify can provide a template to make this easier. BehaVerify is able to create a nuXmv model with over 100 nodes, and nuXmv is able to verify various non-trivial LTL properties on this model, both directly and via counterexample. The model features parallel, selector, and sequence nodes. A comparison with models based on BTCompiler indicates that the models created by BehaVerify perform better.
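For readers unfamiliar with py_trees, a minimal tree using two of the composite node types mentioned above might look like the sketch below; the node names are illustrative, the snippet assumes py_trees 2.x, and it shows only an ordinary py_trees tree, not any BehaVerify-specific input.

```python
import py_trees

# A small py_trees tree with composites of the kind BehaVerify models:
# a selector over a sequence and a fallback leaf.
locate = py_trees.behaviours.Success(name="locate_target")
move = py_trees.behaviours.Running(name="move_to_target")

approach = py_trees.composites.Sequence(name="approach", memory=True)
approach.add_children([locate, move])

root = py_trees.composites.Selector(name="root", memory=False)
root.add_children([approach, py_trees.behaviours.Failure(name="recover")])

tree = py_trees.trees.BehaviourTree(root)
tree.tick()  # one tick; a nuXmv model encodes this as one transition step
print(py_trees.display.unicode_tree(root, show_status=True))
```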
This work-in-progress paper presents robustness verification for autoencoder-based regression neural network (NN) models, following state-of-the-art approaches for the robustness verification of image-classification NNs. Despite continued progress in the development of verification methods for various classes of deep neural networks (DNNs), robustness checking of autoencoder models has not yet been considered. We explore this open research space by extending existing robustness analysis methods to such autoencoder networks and examine how to bridge the gap with existing DNN verification methods. While classification models based on autoencoders behave more or less like image-classification NNs, the functionality of regression models differs significantly. We introduce two definitions of robustness evaluation metrics for autoencoder-based regression models, specifically the robustness and un-robustness grades. We also modify the existing ImageStar approach, adjusting its variables to accommodate the specific input types of regression networks. The approach is implemented as an extension of NNV, then applied and evaluated on a dataset, with case-study experiments conducted on the same dataset. To the best of the authors' knowledge, this work-in-progress paper is the first to show reachability analysis of autoencoder-based NNs.
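As a loose illustration, one plausible way to estimate a robustness grade by sampling is sketched below; this is our own generic formulation, not the paper's definitions or NNV's implementation, which bound the whole perturbation set soundly rather than sampling it.

```python
import numpy as np

def robustness_percentage(model, x, eps=0.05, tol=0.1, n_samples=1000, seed=0):
    """Estimated fraction of inputs in the L-inf ball of radius eps around x
    whose regression output stays within tol of the nominal output. Sampling
    gives only an estimate; a sound verifier bounds the entire ball."""
    rng = np.random.default_rng(seed)
    y0 = model(x)
    robust = 0
    for _ in range(n_samples):
        x_pert = x + rng.uniform(-eps, eps, size=x.shape)
        if np.max(np.abs(model(x_pert) - y0)) <= tol:
            robust += 1
    return robust / n_samples

# Toy usage with a linear map standing in for an autoencoder-based regressor.
W = np.eye(4) * 0.9
print(robustness_percentage(lambda x: W @ x, np.ones(4)))
```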
Continuous deep learning models, referred to as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the last several years. Despite their quickly growing impact, formal analysis techniques are lacking for these systems. In this paper, we consider a general class of neural ODEs with varying architectures and layers, and introduce a novel reachability framework that allows for the formal analysis of their behavior. The methods developed for the reachability analysis of neural ODEs are implemented in a new tool called NNVODE. Specifically, our work extends an existing neural network verification tool to support neural ODEs. We demonstrate the capabilities and efficacy of our methods through the analysis of a set of benchmarks that include neural ODEs used for classification and in control and dynamical systems, including an evaluation of the capabilities and efficacy of our methods against existing software tools within the continuous-time systems reachability literature, when it is possible to do so.
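For context, a neural ODE treats a network $f_\theta$ as the right-hand side of $\dot{z} = f_\theta(z)$ and integrates it between layers. A minimal PyTorch sketch using the widely used torchdiffeq package (an assumption for illustration; NNVODE itself extends NNV rather than building on this code) looks like:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed installed: pip install torchdiffeq

class ODEFunc(nn.Module):
    """Right-hand side of the neural ODE dz/dt = f_theta(z);
    t is unused because the dynamics here are autonomous."""
    def __init__(self, dim=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, z):
        return self.net(z)

func = ODEFunc()
z0 = torch.tensor([[1.0, 0.0]])    # initial state
t = torch.linspace(0.0, 1.0, 11)   # time grid
traj = odeint(func, z0, t)         # shape: (len(t), 1, 2)
print(traj[-1])  # state at t = 1; a reachability tool bounds this
                 # for a whole set of initial states z0 instead
```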
Reinforcement learning (RL) has become an increasingly important research area as the success of machine learning algorithms and methods grows. To address safety concerns around the freedom given to RL agents while training, there has been increased work on safe reinforcement learning (SRL). However, these new safe methods receive less scrutiny than their unsafe counterparts. For instance, comparisons among safe methods often lack fair evaluation across similar initial condition bounds and hyperparameter settings, use poor evaluation metrics, and cherry-pick the best training runs rather than averaging over multiple random seeds. In this work, we conduct an ablation study using evaluation best practices to investigate the impact of Run Time Assurance (RTA), which monitors the system state and intervenes to assure safety. By studying multiple RTA approaches in both on-policy and off-policy RL algorithms, we seek to understand which RTA methods are most effective, whether the agents become dependent on the RTA, and the importance of reward shaping versus safe exploration in RL agent training. Our conclusions shed light on the most promising directions for SRL, and our evaluation methodology lays the groundwork for better comparisons in future SRL work.
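To make the RTA concept concrete, a simplex-style filter can wrap the RL policy as sketched below; the safety predicate, one-step dynamics, and backup controller here are generic placeholders, not any of the specific RTA designs studied in the paper.

```python
def rta_filter(state, rl_action, is_safe, simulate, backup_controller):
    """Simplex-style run time assurance: pass the RL action through only if
    the predicted next state stays safe; otherwise substitute the action of
    a verified backup controller. All callables are placeholders."""
    if is_safe(simulate(state, rl_action)):
        return rl_action, False            # RL action accepted
    return backup_controller(state), True  # intervention

# Toy usage: a 1-D position that must stay in [-1, 1].
is_safe = lambda s: abs(s) <= 1.0
simulate = lambda s, a: s + 0.1 * a   # assumed one-step dynamics
backup = lambda s: -s                 # push back toward the origin

action, intervened = rta_filter(0.95, rl_action=1.0, is_safe=is_safe,
                                simulate=simulate, backup_controller=backup)
print(action, intervened)  # -0.95 True: the RTA intervened
```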
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at https://mtneuro.github.io/.
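One standard way to probe whether a single representation supports several readouts, in the spirit of the benchmark's self-supervised experiments, is a frozen encoder with one linear head per task; the sketch below is a generic placeholder, not the MTNeuro baselines, and the class counts are assumptions.

```python
import torch
import torch.nn as nn

# Frozen (e.g., self-supervised) encoder with one linear readout per task:
# brain-region classification and microstructure classification.
encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
for p in encoder.parameters():
    p.requires_grad = False  # probe the representation; do not fine-tune

region_head = nn.Linear(32, 4)  # 4 region classes (assumed count)
micro_head = nn.Linear(32, 3)   # 3 microstructure classes (assumed count)

x = torch.randn(8, 1, 64, 64)   # a batch of image patches
feats = encoder(x)
print(region_head(feats).shape, micro_head(feats).shape)  # (8, 4) and (8, 3)
```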
Scientists and philosophers have debated whether humans can trust advanced artificial intelligence (AI) agents to respect humanity's best interests. Yet what about the reverse? Will advanced AI agents trust humans? Gauging an AI agent's trust in humans is challenging because--absent costs for dishonesty--such agents might respond falsely about their trust in humans. Here we present a method for incentivizing machine decisions without altering an AI agent's underlying algorithms or goal orientation. In two separate experiments, we then employ this method in hundreds of trust games between an AI agent (a Large Language Model (LLM) from OpenAI) and a human experimenter (author TJ). In our first experiment, we find that the AI agent decides to trust humans at higher rates when facing actual incentives than when making hypothetical decisions. Our second experiment replicates and extends these findings by automating game play and by homogenizing question wording. We again observe higher rates of trust when the AI agent faces real incentives. Across both experiments, the AI agent's trust decisions appear unrelated to the magnitude of stakes. Furthermore, to address the possibility that the AI agent's trust decisions reflect a preference for uncertainty, the experiments include two conditions that present the AI agent with a non-social decision task that provides the opportunity to choose a certain or uncertain option; in those conditions, the AI agent consistently chooses the certain option. Our experiments suggest that one of the most advanced AI language models to date alters its social behavior in response to incentives and displays behavior consistent with trust toward a human interlocutor when incentivized.
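For readers unfamiliar with the trust game, its payoff arithmetic in the standard form works as sketched below; the 3x multiplier is the conventional textbook choice, assumed here for illustration rather than taken from the experiments.

```python
def trust_game(endowment, sent, returned_fraction, multiplier=3):
    """Standard trust game payoffs: the trustor sends part of an endowment,
    the sent amount is multiplied, and the trustee returns a fraction of
    the multiplied pot. The 3x multiplier is an assumed convention."""
    assert 0 <= sent <= endowment
    pot = sent * multiplier
    returned = pot * returned_fraction
    return endowment - sent + returned, pot - returned  # (trustor, trustee)

# Trusting fully and receiving half back beats not trusting at all:
print(trust_game(10, sent=10, returned_fraction=0.5))  # (15.0, 15.0)
print(trust_game(10, sent=0, returned_fraction=0.5))   # (10.0, 0.0)
```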
3D shapes have complementary abstractions from low-level geometry to part-based hierarchies to languages, which convey different levels of information. This paper presents a unified framework to translate between pairs of shape abstractions: $\textit{Text}$ $\Longleftrightarrow$ $\textit{Point Cloud}$ $\Longleftrightarrow$ $\textit{Program}$. We propose $\textbf{Neural Shape Compiler}$ to model the abstraction transformation as a conditional generation process. It converts 3D shapes of three abstract types into unified discrete shape code, transforms each shape code into code of other abstract types through the proposed $\textit{ShapeCode Transformer}$, and decodes them to output the target shape abstraction. Point Cloud code is obtained in a class-agnostic way by the proposed $\textit{Point}$VQVAE. On Text2Shape, ShapeGlot, ABO, Genre, and Program Synthetic datasets, Neural Shape Compiler shows strengths in $\textit{Text}$ $\Longrightarrow$ $\textit{Point Cloud}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Text}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Program}$, and Point Cloud Completion tasks. Additionally, Neural Shape Compiler benefits from jointly training on all heterogeneous data and tasks.
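A skeletal view of that compilation pipeline, paraphrased in code (the function and dictionary names are illustrative placeholders, not the authors' released implementation):

```python
# Skeletal pipeline: one shape abstraction -> discrete shape code ->
# another abstraction. All names are illustrative placeholders.

def compile_shape(source, src_type, tgt_type,
                  encoders, shapecode_transformer, decoders):
    """Encode an abstraction (text / point cloud / program) into discrete
    shape code, map code to code with a ShapeCode-Transformer-style model,
    then decode into the target abstraction."""
    src_code = encoders[src_type](source)   # e.g., a PointVQVAE for points
    tgt_code = shapecode_transformer(src_code, tgt_type)
    return decoders[tgt_type](tgt_code)

# Usage sketch: caption a point cloud.
# text = compile_shape(pc, "point_cloud", "text",
#                      encoders, shapecode_transformer, decoders)
```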
The xView2 competition and xBD dataset spurred significant advancements in overhead building damage detection, but the competition's pixel-level scoring can lead to reduced solution performance in areas with tight clusters of buildings or uninformative context. We seek to advance automatic building damage assessment for disaster relief by proposing an auxiliary challenge to the original xView2 competition. This new challenge involves a new dataset and metrics indicating solution performance when damage is more local and limited than in xBD. Our challenge measures a network's ability to identify individual buildings and their damage level without excessive reliance on the buildings' surroundings. Methods that succeed on this challenge will provide more fine-grained, precise damage information than original xView2 solutions. The performance of the best xView2 networks dropped noticeably on our new limited/local damage detection task. The common causes of failure observed are that (1) building objects and their classifications are not separated well, and (2) when they are, the classification is strongly biased by surrounding buildings and other damage context. Thus, we release our augmented version of the dataset with additional object-level scoring metrics (https://gitlab.kitware.com/dennis.melamed/xfbd) to test the independence and separability of building objects, alongside the pixel-level performance metrics of the original competition. We also experiment with new baseline models that improve the independence and separability of building damage predictions. Our results indicate that building damage detection is not a fully solved problem, and we invite others to use and build on our dataset augmentations and metrics.
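An object-level score of the kind this challenge adds can be sketched generically as below; this is our own simplified formulation for intuition, and the metrics released at the GitLab link above are the authoritative ones.

```python
def object_level_accuracy(predictions, ground_truth):
    """Score each building once: a building counts only if it is detected
    and assigned the correct damage level. Both arguments map building IDs
    to damage labels; a missing ID is a missed detection. A simplified
    sketch, not the metric released with the dataset."""
    correct = sum(1 for bid, label in ground_truth.items()
                  if predictions.get(bid) == label)
    return correct / len(ground_truth)

gt = {1: "no-damage", 2: "minor", 3: "destroyed"}
pred = {1: "no-damage", 2: "destroyed"}  # building 3 missed entirely
print(object_level_accuracy(pred, gt))   # 0.333...
```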
We present Mu$^{2}$SLAM, a multilingual sequence-to-sequence model pre-trained jointly on unlabeled speech, unlabeled text and supervised data spanning Automatic Speech Recognition (ASR), Automatic Speech Translation (AST) and Machine Translation (MT), in over 100 languages. By leveraging a quantized representation of speech as a target, Mu$^{2}$SLAM trains the speech-text models with a sequence-to-sequence masked denoising objective similar to T5 on the decoder and a masked language modeling (MLM) objective on the encoder, for both unlabeled speech and text, while utilizing the supervised tasks to improve cross-lingual and cross-modal representation alignment within the model. On CoVoST AST, Mu$^{2}$SLAM establishes a new state-of-the-art for models trained on public datasets, improving on xx-en translation over the previous best by 1.9 BLEU points and on en-xx translation by 1.1 BLEU points. On Voxpopuli ASR, our model matches the performance of an mSLAM model fine-tuned with an RNN-T decoder, despite using a relatively weaker sequence-to-sequence architecture. On text understanding tasks, our model improves by more than 6\% over mSLAM on XNLI, getting closer to the performance of mT5 models of comparable capacity on XNLI and TydiQA, paving the way towards a single model for all speech and text understanding tasks.
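As a mechanical reminder of what the encoder-side MLM objective involves (a generic BERT-style sketch, not Mu$^{2}$SLAM's exact masking recipe or token inventory), a random subset of tokens is masked and the model is trained to recover them:

```python
import torch

def mlm_mask(token_ids, mask_id, mask_prob=0.15, seed=0):
    """BERT-style masking: replace a random subset of tokens with the mask
    token and emit labels only at masked positions (-100 elsewhere, the
    conventional ignore index). Generic sketch, not Mu2SLAM's recipe."""
    g = torch.Generator().manual_seed(seed)
    mask = torch.rand(token_ids.shape, generator=g) < mask_prob
    inputs = token_ids.clone()
    inputs[mask] = mask_id
    labels = torch.where(mask, token_ids, torch.full_like(token_ids, -100))
    return inputs, labels

tokens = torch.tensor([[101, 2009, 2003, 1037, 3231, 102]])
inputs, labels = mlm_mask(tokens, mask_id=103)
print(inputs)
print(labels)
```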