Currently, the most widespread neural network architecture for training language models is the so-called BERT, which has led to improvements on various natural language processing (NLP) tasks. In general, the larger the number of parameters in a BERT model, the better the results obtained on these NLP tasks. Unfortunately, memory consumption and training duration increase drastically with the size of these models. In this article, we investigate various training techniques for smaller BERT models: we combine different methods with other BERT variants such as ALBERT, RoBERTa, and relative positional encoding. In addition, we propose two new fine-tuning modifications leading to better performance: Class-Start-End tagging and a modified form of linear-chain conditional random fields. Furthermore, we introduce whole-word attention, which reduces BERT's memory usage and leads to a small increase in performance compared to classical multi-head attention. We evaluate these techniques on five public German named entity recognition (NER) tasks, two of which are introduced by this article.
During training, reinforcement learning systems interact with the world without considering the safety of their actions. When deployed into the real world, such systems can be dangerous and cause harm to their surroundings. Often, dangerous situations can be mitigated by defining a set of rules that the system should not violate under any conditions. For example, in robot navigation, one safety rule would be to avoid colliding with surrounding objects and people. In this work, we define safety rules in terms of the relationships between the agent and objects and use them to prevent reinforcement learning systems from performing potentially harmful actions. We propose a new safe epsilon-greedy algorithm that uses safety rules to override agents' actions if they are considered to be unsafe. In our experiments, we show that a safe epsilon-greedy policy significantly increases the safety of the agent during training, improves the learning efficiency resulting in much faster convergence, and achieves better performance than the base model.
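The core mechanism of overriding unsafe actions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the dictionary-based value estimates, and the predicate interface for the safety rules (which the paper defines as relational agent-object constraints) are all assumptions made for the sake of the example.

```python
import random

def safe_epsilon_greedy(q_values, is_safe, epsilon=0.1, rng=random):
    """Pick an action epsilon-greedily, restricted to actions the
    safety rules allow. `q_values` maps action -> estimated value;
    `is_safe` is a predicate standing in for the safety rules."""
    safe_actions = [a for a in q_values if is_safe(a)]
    if not safe_actions:
        raise RuntimeError("no safe action available")
    if rng.random() < epsilon:
        return rng.choice(safe_actions)          # safe exploration
    return max(safe_actions, key=q_values.get)   # safe exploitation
```

Filtering before both the exploration and exploitation branches is what guarantees that even random exploration never emits an action the rules forbid.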
Various methods using machine and deep learning have been proposed to tackle different tasks in predictive process monitoring, i.e. forecasting, for an ongoing case, e.g. the most likely next event or suffix, its remaining time, or an outcome-related variable. Recurrent neural networks (RNNs), and more specifically long short-term memory networks (LSTMs), stand out in terms of popularity. In this work, we investigate the capability of such an LSTM to actually learn the underlying process model structure of an event log. We introduce an evaluation framework that combines variant-based resampling with custom metrics for fitness, precision, and generalization. We evaluate four hypotheses concerning the learning capabilities of LSTMs, the effect of overfitting countermeasures, the level of incompleteness in the training set, and the level of parallelism in the underlying process model. We confirm that LSTMs can struggle to learn process model structure, even with simplistic process data and in a very lenient setup. Taking the correct anti-overfitting measures can alleviate the problem; however, these measures did not prove optimal when hyperparameters were selected purely on prediction accuracy. We also found that decreasing the amount of information seen by the LSTM during training causes a sharp drop in generalization and precision scores. Our experiments could not identify a relationship between the extent of parallelism in the model and generalization capability, but they do indicate that process complexity might have an impact.
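The variant-based resampling idea can be sketched as follows: traces are grouped by their variant (the sequence of activities), and whole variants are held out, so the evaluation set contains behaviour the model never saw during training. This is a minimal sketch under the assumption that traces are lists of activity labels; the function and parameter names are hypothetical, and the paper's framework adds its custom fitness, precision, and generalization metrics on top.

```python
import random
from collections import defaultdict

def variant_based_split(log, holdout_frac, rng):
    """Group traces by variant and hold out whole variants for
    evaluation, so no test variant ever appears in training."""
    variants = defaultdict(list)
    for trace in log:
        variants[tuple(trace)].append(trace)
    keys = sorted(variants)
    rng.shuffle(keys)
    n_hold = max(1, int(round(holdout_frac * len(keys))))
    train = [t for k in keys[n_hold:] for t in variants[k]]
    test = [t for k in keys[:n_hold] for t in variants[k]]
    return train, test
```

Splitting on variants rather than individual traces is what makes the setup test generalization to unseen behaviour instead of recall of memorized sequences.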
The underlying dynamics and patterns of 3D surface meshes deforming over time can be discovered by unsupervised learning, especially autoencoders, which compute low-dimensional embeddings of the surfaces. To study the deformation patterns of unseen shapes by transfer learning, we want to train an autoencoder that can analyze new surface meshes without training a new network. However, most state-of-the-art autoencoders cannot handle meshes of different connectivity and therefore have limited to no generalization capacity to new meshes; moreover, their reconstruction errors increase strongly compared to the errors on the training shapes. To address this, we propose a novel spectral CoSMA (Convolutional Semi-Regular Mesh Autoencoder) network. This patch-based approach is combined with surface-aware training. It reconstructs surfaces not presented during training and generalizes the deformation behavior of the surfaces' patches. The novel approach reconstructs unseen meshes from different datasets in superior quality compared to state-of-the-art autoencoders that have been trained on these shapes. Our transfer learning errors on unseen shapes are 40% lower than those of models learned directly on the data. Furthermore, baseline autoencoders detect deformation patterns of unseen mesh sequences only for the shape as a whole. In contrast, thanks to the employed regional patches and stable reconstruction quality, we can localize where on the surfaces these deformation patterns manifest.
Although many methods exist to reduce overfitting in convolutional neural networks (CNNs), it is still unclear how to confidently measure the degree of overfitting. A metric reflecting the level of overfitting, however, could be very useful for comparing different architectures and for evaluating various techniques to counter overfitting. Since overfitted neural networks tend to memorize noise in the training data rather than generalize to unseen data, we examine how training accuracy changes in the presence of increasing data perturbation and study the connection to overfitting. While previous work focused only on label noise, we examine a spectrum of techniques to inject noise into the training data, including adversarial perturbations and input corruptions. Based on this, we define two new metrics that can confidently distinguish between correct and overfitted models. For evaluation, we derive a pool of models whose overfitting behavior is known in advance. To test the effect of various factors, we introduce several anti-overfitting measures into architectures based on VGG and ResNet and study their impact, including regularization techniques, training-set size, and the number of parameters. Finally, we assess the applicability of the proposed metrics by measuring the degree of overfitting of several CNN architectures outside the model pool.
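The label-noise variant of the perturbation idea can be sketched as follows. This is a minimal sketch, not the paper's metric: the function names and the simple accuracy-drop score are assumptions, and the paper additionally uses adversarial perturbations and input corruptions as noise sources.

```python
import random

def inject_label_noise(labels, noise_rate, num_classes, rng):
    """Randomly reassign a fraction `noise_rate` of the labels to a
    different class, simulating noisy training data."""
    noisy = list(labels)
    n_flip = int(round(noise_rate * len(labels)))
    for i in rng.sample(range(len(labels)), n_flip):
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy

def accuracy_drop(train_acc_clean, train_acc_noisy):
    """Drop in training accuracy under label noise (hypothetical
    score shape): a generalizing model loses accuracy on the flipped
    fraction, while an overfitted model memorizes the noise and
    keeps its training accuracy high, so a smaller drop suggests
    stronger memorization."""
    return train_acc_clean - train_acc_noisy
```

In practice one would retrain (or fine-tune) the model on the perturbed labels at several noise rates and inspect how the training accuracy responds as the noise increases.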
Adversarial patch-based attacks aim to fool a neural network with intentionally generated noise concentrated in a particular region of the input image. In this work, we perform an in-depth analysis of different patch-generation parameters, including initialization, patch size, and especially the placement of the patch in the image during training. We focus on the object-vanishing attack and run experiments with YOLOv3 as the attacked model in a white-box setting, using images from the COCO dataset. Our experiments show that inserting the patch inside a window of increasing size during training significantly improves attack strength compared to a fixed position. The best results were obtained when the patch was positioned randomly during training, with the patch position additionally varying within the batch.
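A growing-window placement schedule of this kind could be sketched as follows. This is a hypothetical illustration: the linear growth schedule, the function name, and the centre-anchored window are assumptions, since the abstract does not specify the exact schedule.

```python
import random

def sample_patch_position(step, total_steps, img_size, patch_size, rng):
    """Sample a top-left corner for the adversarial patch inside a
    window that grows linearly from the image centre to the full
    valid placement range over the course of training."""
    h, w = img_size
    frac = min(1.0, step / total_steps)
    max_y, max_x = h - patch_size, w - patch_size  # valid corner range
    cy, cx = max_y // 2, max_x // 2                # window centre
    dy = int(frac * max(cy, max_y - cy))           # window half-extent
    dx = int(frac * max(cx, max_x - cx))
    y = rng.randint(max(0, cy - dy), min(max_y, cy + dy))
    x = rng.randint(max(0, cx - dx), min(max_x, cx + dx))
    return y, x
```

At step 0 the patch sits at the centre; by the end of training it may land anywhere in the image, and drawing a fresh position per sample also varies the placement within a batch.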
The visual microscopic study of diseased tissue by pathologists has been the cornerstone of cancer diagnosis and prognosis for over a century. Recently, deep learning methods have made significant advances in the analysis and classification of tissue images. However, there has been limited work on the utility of such models in generating histopathology images. These synthetic images have several applications in pathology, including uses in education, proficiency testing, privacy, and data sharing. Recently, diffusion probabilistic models were introduced to generate high-quality images. Here, we investigate for the first time the potential use of such models, together with prioritized morphology weighting and color normalization, to synthesize high-quality histopathology images of brain cancer. Our detailed results show that diffusion probabilistic models are able to synthesize a wide range of histopathology images, with superior performance compared to generative adversarial networks.
Feature selection has attracted a great deal of attention over the past decades because it can reduce data dimensionality while preserving the original physical meaning of the features, offering better interpretability than feature extraction. However, most existing feature selection methods, especially deep-learning-based ones, often focus only on features with high scores while neglecting those with lower scores during training, as well as the order of the important candidate features. This can be risky, since some important and relevant features may unfortunately be ignored during training, leading to suboptimal solutions or misleading selections. In our work, we handle feature selection by also exploiting the features with lower importance scores and propose a feature selection framework based on a novel complementary feature mask. Our method is generic and can easily be integrated into existing deep-learning-based feature selection approaches to improve their performance. Experiments conducted on benchmark datasets show that the proposed method can select more representative and informative features than the state of the art.
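The complementary-mask idea can be illustrated in its simplest form: alongside the mask over the top-scoring features, an explicit complementary mask covers the remaining features so that low-scoring candidates stay visible to training rather than being discarded. This is a minimal sketch with hypothetical names; the paper's learned masking within a deep network is more involved.

```python
import numpy as np

def complementary_masks(importance, k):
    """Return a boolean mask over the top-k features by importance
    score, together with its complement over all other features."""
    order = np.argsort(importance)[::-1]     # indices, highest score first
    selected = np.zeros(len(importance), dtype=bool)
    selected[order[:k]] = True
    return selected, ~selected
```

During training, the complementary mask gives the lower-scoring features a path to contribute, so a feature whose relevance emerges late is not permanently excluded by an early low score.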
Currently, a core obstacle to the large-scale deployment of automated vehicles is the long tail of rare events. These are very challenging because they occur only infrequently in the training data of deep neural networks. To tackle this problem, we propose generating additional synthetic training data covering a wide variety of corner-case situations. Since ontologies can represent human expert knowledge while enabling computational processing, we use them to describe scenarios. Our proposed master ontology is capable of modeling scenarios from all common corner-case categories found in the literature. From this master ontology, arbitrary scenario-description ontologies can be derived. In an automated manner, these can be converted into the OpenSCENARIO format and subsequently executed in simulation. In this way, challenging test and evaluation scenarios can be generated as well.
A large body of research is concerned with the generation of realistic sensor data. LiDAR point clouds are generated by complex simulations or learned generative models, and the generated data is usually exploited to enable or improve downstream perception algorithms. Two major questions arise from these procedures: First, how can the realism of the generated data be evaluated? Second, does more realistic data also lead to better perception performance? This paper addresses both questions and proposes a novel metric to quantify the realism of LiDAR point clouds. By training a proxy classification task, relevant features can be learned from real-world and synthetic point clouds. In a series of experiments, we demonstrate the application of our metric to determine the realism of generated LiDAR data and compare its realism estimates with the performance of a segmentation model. We confirm that our metric provides an indication of downstream segmentation performance.
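Once features have been learned via the proxy task, one conceivable way to turn them into a realism score is by distance to real-world feature prototypes. This is a hypothetical stand-in for the paper's learned metric, shown only to make the idea concrete: the function name, the prototype representation, and the distance-to-score mapping are all assumptions.

```python
import numpy as np

def realism_score(features, real_prototypes):
    """Map the learned feature vector of a point cloud to (0, 1]:
    the closer it lies to the nearest prototype of real-world data
    in feature space, the higher the realism estimate."""
    d = min(np.linalg.norm(features - p) for p in real_prototypes)
    return 1.0 / (1.0 + d)
```

A cloud whose features coincide with a real-data prototype scores 1.0, and the score decays smoothly as the feature-space distance grows.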