Long Short-Term Memory (LSTM) architectures have proven effective for recognising activities of daily living in smart homes by capturing the order of sensor activations and their temporal dependencies. Nevertheless, they still fail to exploit the semantics and context of the sensors. Beyond isolated IDs and their ordered activation values, sensors also carry meaning: their nature and type of activation can reflect different activities, and their logs are correlated with one another, creating a global context. We propose to use and compare two Natural Language Processing embedding methods to enhance LSTM-based structures in activity-sequence classification tasks: Word2Vec, a static semantic embedding, and ELMo, a contextualised embedding. Results on real smart-home datasets show that this approach provides useful information, such as a sensor organisation map, and produces less confusion between daily activity classes. It helps the model perform better on datasets with competing activities from other residents or pets. Our tests also show that the embeddings can be pre-trained on datasets other than the target one, enabling transfer learning. We thus show that taking into account the context of the sensors and their semantics increases classification performance and enables transfer learning.
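As a rough illustration of the pipeline (not the authors' code), the sketch below pre-trains a static Word2Vec embedding on sensor-event "sentences" and plugs the resulting vectors into an LSTM classifier; the sensor names, toy sequences and hyperparameters are invented for the example.

```python
# Illustrative sketch: Word2Vec sensor embeddings feeding an LSTM classifier.
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Each activity instance is a "sentence" of sensor activations (toy data).
sequences = [
    ["kitchen_motion", "fridge_door", "stove_on", "stove_off"],   # cooking
    ["bathroom_motion", "shower_on", "shower_off"],               # showering
]

# Static semantic embedding of sensor tokens (skip-gram Word2Vec).
w2v = Word2Vec(sequences, vector_size=16, window=3, min_count=1, sg=1)
vocab = w2v.wv.key_to_index
weights = torch.tensor(w2v.wv.vectors)

class ActivityLSTM(nn.Module):
    def __init__(self, pretrained, n_classes):
        super().__init__()
        # Embedding layer initialised from the pre-trained sensor vectors.
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
        self.lstm = nn.LSTM(pretrained.shape[1], 32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, ids):
        _, (h, _) = self.lstm(self.emb(ids))
        return self.head(h[-1])

model = ActivityLSTM(weights, n_classes=2)
ids = torch.tensor([[vocab[t] for t in sequences[0]]])
print(model(ids).shape)  # (1, 2): class logits for one activity sequence
```

The same embedding matrix could be pre-trained on a different smart home and reused here, which is the transfer-learning setting the abstract describes.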
Classical reinforcement learning (RL) techniques are generally concerned with the design of decision-making policies driven by the maximisation of the expected outcome. Nevertheless, this approach does not take into consideration the potential risk associated with the actions taken, which may be critical in certain applications. To address that issue, the present research work introduces a novel methodology based on distributional RL to derive sequential decision-making policies that are sensitive to the risk, the latter being modelled by the tail of the return probability distribution. The core idea is to replace the $Q$ function generally standing at the core of learning schemes in RL by another function taking into account both the expected return and the risk. Named the risk-based utility function $U$, it can be extracted from the random return distribution $Z$ naturally learnt by any distributional RL algorithm. This makes it possible to span the complete potential trade-off between risk minimisation and expected return maximisation, in contrast to fully risk-averse methodologies. Fundamentally, this research yields a truly practical and accessible solution for learning risk-sensitive policies with minimal modification to the distributional RL algorithm, and with an emphasis on the interpretability of the resulting decision-making process.
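As one plausible instantiation of such a utility (the paper defines $U$ precisely; this sketch only illustrates the idea), the snippet below mixes the expected return with a lower-tail CVaR computed from a categorical return distribution and acts greedily on $U$ rather than on $Q = \mathbb{E}[Z]$; the trade-off weight `rho` and tail level `alpha` are invented knobs.

```python
# Hedged sketch: a risk-based utility U derived from a categorical return
# distribution Z (as learnt by, e.g., a C51-style distributional agent).
import numpy as np

def utility(atoms, probs, rho=0.5, alpha=0.1):
    """U = (1 - rho) * E[Z] + rho * CVaR_alpha[Z]  (one possible form).

    atoms : support of the return distribution, shape (n_atoms,)
    probs : probability mass on each atom,      shape (n_atoms,)
    rho   : 0 -> purely risk-neutral, 1 -> purely tail-driven
    alpha : size of the lower tail treated as "risk"
    """
    expected = np.dot(probs, atoms)
    # Approximate CVaR_alpha: mean of the worst alpha-fraction of returns.
    order = np.argsort(atoms)
    cum = np.cumsum(probs[order])
    tail = cum <= alpha
    tail[np.searchsorted(cum, alpha)] = True   # include the atom crossing alpha
    tail_mass = probs[order][tail]
    cvar = np.dot(tail_mass / tail_mass.sum(), atoms[order][tail])
    return (1 - rho) * expected + rho * cvar

# Greedy action selection on U instead of on the usual Q = E[Z].
atoms = np.linspace(-10, 10, 51)
probs_per_action = np.random.dirichlet(np.ones(51), size=4)   # toy Z(s, a)
best_action = np.argmax([utility(atoms, p) for p in probs_per_action])
```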
Deep learning models are being increasingly applied to imbalanced data in high-stakes fields such as medicine, autonomous driving, and intelligence analysis. Imbalanced data compounds the black-box nature of deep networks because the relationships between classes may be highly skewed and unclear. This can reduce trust among model users and hamper the progress of developers of imbalanced learning algorithms. Existing methods that investigate imbalanced data complexity are geared toward binary classification, shallow learning models and low dimensional data. In addition, current eXplainable Artificial Intelligence (XAI) techniques mainly focus on converting opaque deep learning models into simpler models (e.g., decision trees) or mapping predictions for specific instances to inputs, instead of examining global data properties and complexities. Therefore, there is a need for a framework that is tailored to modern deep networks, that incorporates large, high dimensional, multi-class datasets, and uncovers data complexities commonly found in imbalanced data (e.g., class overlap, sub-concepts, and outlier instances). We propose a set of techniques that can be used by both deep learning model users to identify, visualize and understand class prototypes, sub-concepts and outlier instances; and by imbalanced learning algorithm developers to detect features and class exemplars that are key to model performance. Our framework also identifies instances that reside on the border of class decision boundaries, which can carry highly discriminative information. Unlike many existing XAI techniques, which map model decisions to gray-scale pixel locations, we use saliency through back-propagation to identify and aggregate image color bands across entire classes. Our framework is publicly available at \url{https://github.com/dd1github/XAI_for_Imbalanced_Learning}
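A minimal sketch of the class-level saliency aggregation idea, assuming a toy classifier and random images in place of the real model and data (the released framework's exact aggregation rule may differ):

```python
# Illustrative sketch: per-class aggregation of gradient-based saliency over
# colour channels. Model, data and aggregation rule are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
images = torch.rand(64, 3, 32, 32, requires_grad=True)           # one class's images
target_class = 3

logits = model(images)
logits[:, target_class].sum().backward()      # saliency via back-propagation
saliency = images.grad.abs()                  # (N, 3, H, W)

# Aggregate over the whole class and over spatial locations, keeping the
# colour bands separate, to see which channels drive the class prediction.
per_channel = saliency.mean(dim=(0, 2, 3))    # (3,) -> R, G, B importance
print({c: round(v.item(), 4) for c, v in zip("RGB", per_channel)})
```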
A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our method is an instance of amortized interpretability models, where a neural network is used as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularised maximum likelihood for our general model. We propose new datasets with ground-truth selection, which allow for the evaluation of feature importance maps. Using these datasets, we show experimentally that using multiple imputation provides more reasonable interpretations.
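A minimal sketch of the amortised selector idea, assuming placeholder architectures and a standard-normal imputation distribution (the paper's exact model, training objective and imputation scheme may differ):

```python
# Hedged sketch: a selector network produces a feature mask, unselected
# features are filled in by multiple imputation, and the predictor is
# averaged over the imputed samples.
import torch
import torch.nn as nn

d, n_classes, n_imputations = 20, 2, 8
selector = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())   # per-feature selection probs
predictor = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, n_classes))

x = torch.randn(4, d)                       # a small batch of inputs
mask = torch.bernoulli(selector(x))         # sampled binary mask (1 = keep feature)
                                            # (training-time gradient estimation
                                            #  for this sampling step is omitted)
probs = 0.0
for _ in range(n_imputations):
    # Multiple imputation: replace unselected features with draws from a
    # simple reference distribution (standard normal, as an assumption).
    imputed = mask * x + (1 - mask) * torch.randn_like(x)
    probs = probs + predictor(imputed).softmax(dim=-1)
probs = probs / n_imputations               # prediction averaged over imputations

feature_importance = selector(x)            # interpretation: selection probabilities
```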
In this paper, we identify the best learning scenario to train a team of agents to compete against multiple possible strategies of opposing teams. We evaluate cooperative value-based methods in a mixed cooperative-competitive environment. We restrict ourselves to the case of a symmetric, partially observable, two-team Markov game. We selected three training methods based on the centralised training and decentralised execution (CTDE) paradigm: QMIX, MAVEN and QVMix. For each method, we considered three learning scenarios differentiated by the variety of team policies encountered during training. For our experiments, we modified the StarCraft Multi-Agent Challenge environment to create competitive environments where both teams could learn and compete simultaneously. Our results suggest that, when performance is evaluated against several opposing strategies, training against multiple evolving strategies achieves the best results.
Words of estimative probability (WEP) are expressions of a statement's plausibility (probably, maybe, likely, doubt, unlikely, impossible...). Multiple surveys demonstrate the agreement of human evaluators when assigning numerical probability levels to WEP. For example, highly likely corresponds to a median chance of 0.90±0.08 in Fagen-Ulmschneider's (2015) survey. In this work, we measure the ability of neural language processing models to capture the consensual probability level associated with each WEP. Firstly, we use the UNLI dataset (Chen et al., 2020), which associates premises and hypotheses with their perceived joint probability p, to construct prompts, e.g. "[PREMISE]. [WEP], [HYPOTHESIS].", and assess whether language models can predict that the WEP's consensual probability level is close to p. Secondly, we construct a dataset of WEP-based probabilistic reasoning to test whether language models can reason with WEP compositions. When prompted with "[EVENTA] is likely. [EVENTB] is impossible.", a causal language model should not express that [EVENTA&B] is likely. We show that both tasks are unsolved by off-the-shelf English language models, but that fine-tuning leads to transferable improvement.
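To make the first task concrete, here is a small sketch of the prompt-construction step; the WEP-to-probability map and the example sentences are illustrative stand-ins, with the real levels coming from human surveys and the premise/hypothesis pairs from UNLI:

```python
# Sketch of WEP prompt construction (values and sentences are illustrative).
WEP_LEVELS = {
    "highly likely": 0.90,
    "likely": 0.70,
    "maybe": 0.50,
    "unlikely": 0.25,
    "impossible": 0.02,
}

def closest_wep(p: float) -> str:
    """Pick the WEP whose consensual probability level is closest to p."""
    return min(WEP_LEVELS, key=lambda w: abs(WEP_LEVELS[w] - p))

def build_prompt(premise: str, wep: str, hypothesis: str) -> str:
    # "[PREMISE]. [WEP], [HYPOTHESIS]." template from the abstract.
    return f"{premise}. {wep.capitalize()}, {hypothesis}."

premise, hypothesis, p = "The streets are wet", "it rained last night", 0.72
print(build_prompt(premise, closest_wep(p), hypothesis))
# -> "The streets are wet. Likely, it rained last night."
# A language model is then asked whether this verbalisation is consistent with p.
```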
Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision rules, in particular when their training data is biased, i.e., when training labels are strongly correlated with undesirable features. To prevent a network from learning such features, recent methods augment training data such that examples displaying spurious correlations (i.e., bias-aligned examples) become a minority, whereas the other, bias-conflicting examples become prevalent. However, these approaches are sometimes difficult to train and scale to real-world data because they rely on generative models or disentangled representations. We propose an alternative based on mixup, a popular augmentation that creates convex combinations of training examples. Our method, coined SelecMix, applies mixup to contradicting pairs of examples, defined as showing either (i) the same label but dissimilar biased features, or (ii) different labels but similar biased features. Identifying such pairs requires comparing examples with respect to unknown biased features. For this, we utilize an auxiliary contrastive model with the popular heuristic that biased features are learned preferentially during training. Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
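A hedged sketch of the pairing-plus-mixup step (not the official SelecMix implementation); for brevity, only pairs of type (i), same label but dissimilar biased features, are formed, and the auxiliary contrastive model's features are faked with random tensors:

```python
# Hedged sketch: pair each example with a contradicting partner (same label,
# most dissimilar biased features) and take a convex combination of inputs.
import torch
import torch.nn.functional as F

def selecmix_batch(x, y, biased_feats, alpha=1.0):
    """x: (N, ...) inputs, y: (N,) labels, biased_feats: (N, D) from the aux model."""
    f = F.normalize(biased_feats, dim=1)
    sim = f @ f.T                                        # cosine similarity of biased features
    same_label = y.unsqueeze(0) == y.unsqueeze(1)
    # Candidates share the label; pick the one with the least similar biased features.
    sim = sim.masked_fill(~same_label, float("inf"))
    partner = sim.argmin(dim=1)
    lam = torch.distributions.Beta(alpha, alpha).sample((x.size(0),))
    lam = lam.view(-1, *([1] * (x.dim() - 1)))
    mixed = lam * x + (1 - lam) * x[partner]
    return mixed, y            # labels unchanged: partners share the same label

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,))
biased_feats = torch.randn(8, 16)   # placeholder for the auxiliary contrastive model
mixed, labels = selecmix_batch(x, y, biased_feats)
```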
Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few. Unrolled differentiation is a popular heuristic that approximates the solution using an iterative solver and differentiates it through the computational path. This work provides a non-asymptotic convergence-rate analysis of this approach on quadratic objectives for gradient descent and the Chebyshev method. We show that, to ensure convergence of the Jacobian, we can either 1) choose a large learning rate leading to fast asymptotic convergence but accept that the algorithm may have an arbitrarily long burn-in phase, or 2) choose a smaller learning rate leading to immediate but slower convergence. We refer to this phenomenon as the curse of unrolling. Finally, we discuss open problems relative to this approach, such as deriving a practical update rule for the optimal unrolling strategy, and make novel connections with the field of Sobolev orthogonal polynomials.
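To make the setting concrete, here is a toy sketch (not the paper's analysis) that unrolls gradient descent on a small quadratic, differentiates through the iterations with autograd, and compares the resulting Jacobian estimate to the closed-form inverse A^{-1}; the dimensions and learning rate are arbitrary:

```python
# Toy sketch of unrolled differentiation on a quadratic objective
# f(x, theta) = 0.5 x^T A x - theta^T x, whose solution is x*(theta) = A^{-1} theta,
# so the true Jacobian dx*/dtheta is A^{-1}.
import torch

torch.manual_seed(0)
d = 5
A = torch.randn(d, d)
A = A @ A.T + d * torch.eye(d)          # well-conditioned SPD matrix
theta = torch.randn(d, requires_grad=True)

def unrolled_solution(theta, steps, lr):
    x = torch.zeros(d)
    for _ in range(steps):              # differentiate *through* the iterations
        x = x - lr * (A @ x - theta)    # gradient-descent step on the quadratic
    return x

for steps in (5, 50, 500):
    J = torch.autograd.functional.jacobian(
        lambda t: unrolled_solution(t, steps, lr=0.05), theta)
    err = (J - torch.linalg.inv(A)).norm()
    print(f"{steps:4d} unrolled steps: ||J - A^-1|| = {err:.2e}")
```

With a smaller learning rate the error decreases monotonically but slowly; a larger (still stable) rate converges faster asymptotically at the cost of a longer burn-in, which is the trade-off the abstract calls the curse of unrolling.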
We present in this paper what we believe to be one of the first attempts at video game machine translation. Our study shows that models trained with only limited in-domain data surpass publicly available systems, and a subsequent human evaluation reveals interesting findings in the resulting translations. The first part of the paper introduces some of the challenges of video game translation, some of the existing literature, as well as the systems and datasets used in this experiment. The last section discusses our analysis of the resulting translations and the potential benefits of such an automated system. One such finding highlights the model's ability to learn typical rules and patterns of video game translation from English into French. Our conclusion therefore suggests that the specific case of video game machine translation could prove very useful, given the encouraging results, the highly repetitive nature of the work, and the often poor working conditions of translators in this field. However, as with other use cases of MT in the cultural sector, we believe this largely depends on a proper implementation of the tool, which should be used interactively by human translators to stimulate creativity rather than for raw post-editing geared towards productivity.
We introduce AudioLM, a framework for high-quality audio generation with long-term consistency. AudioLM maps the input audio to a sequence of discrete tokens and casts audio generation as a language modelling task in this representation space. We show how existing audio tokenizers provide different trade-offs between reconstruction quality and long-term structure, and we propose a hybrid tokenization scheme to achieve both objectives. Namely, we leverage the discretized activations of a masked language model pre-trained on audio to capture long-term structure, and the discrete codes produced by a neural audio codec to achieve high-quality synthesis. By training on large corpora of raw audio waveforms, AudioLM learns to generate natural and coherent continuations given short prompts. When trained on speech, without any transcript or annotation, AudioLM generates syntactically and semantically plausible speech continuations while also maintaining speaker identity and prosody for unseen speakers. Furthermore, we demonstrate how our approach extends beyond speech by generating coherent piano music continuations, despite being trained without any symbolic representation of music.
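As a toy sketch of the "generation as language modelling" framing only (the vocabulary sizes, sequence lengths and tiny model are invented; the real system relies on pre-trained semantic and codec tokenizers), one can model a concatenated stream of coarse and fine tokens with a single causal model:

```python
# Toy sketch: one causal LM over concatenated semantic + acoustic tokens,
# so fine acoustic detail is predicted conditioned on long-term structure.
import torch
import torch.nn as nn

n_semantic, n_acoustic = 100, 1024          # placeholder vocabulary sizes
semantic = torch.randint(0, n_semantic, (1, 50))                          # coarse structure
acoustic = torch.randint(n_semantic, n_semantic + n_acoustic, (1, 200))   # fine detail
tokens = torch.cat([semantic, acoustic], dim=1)

emb = nn.Embedding(n_semantic + n_acoustic, 64)
lm = nn.LSTM(64, 64, batch_first=True)      # stand-in for the causal Transformer
head = nn.Linear(64, n_semantic + n_acoustic)

hidden, _ = lm(emb(tokens))
logits = head(hidden)                        # next-token prediction over the stream
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, logits.size(-1)),
    tokens[:, 1:].reshape(-1))
```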