Electrocardiography (ECG) is an effective and non-invasive diagnostic tool that measures the electrical activity of the heart. Interpreting ECG signals to detect various abnormalities is a challenging task that requires expertise. Recently, the use of deep neural networks for ECG classification to assist medical practitioners has become popular, but their black-box nature hampers clinical implementation. Several saliency-based interpretability techniques have been proposed, but they only indicate the location of important features rather than the actual features themselves. We propose a novel interpretability technique called QLST, a query-based latent space traversal technique that can provide explanations for any ECG classification model. With QLST, we train a neural network that learns to traverse the latent space of a variational autoencoder trained on a university hospital dataset of over 800,000 ECGs annotated for 28 diseases. We demonstrate experimentally that we can explain different black-box classifiers through these traversals.
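The abstract gives no code, but the core mechanism can be sketched. Below is a minimal illustration of a query-based latent traversal, assuming a trained VAE exposing `encode`/`decode` and a black-box classifier `clf`; the plain gradient-based traversal here is a hypothetical simplification of QLST, which trains a dedicated network to perform the traversal.

```python
import torch

def latent_traversal(x, target_class, vae, clf, steps=20, lr=0.1):
    """Move an ECG through the VAE latent space so that the black-box
    classifier's probability for `target_class` increases, then decode
    the path to reveal *which* waveform changes drive the decision.

    Simplified sketch: `vae.encode` is assumed to return (mu, logvar)
    and `vae.decode` a reconstructed ECG; the paper's traversal network
    is replaced here by direct gradient ascent on the latent code.
    """
    mu, _ = vae.encode(x)
    z = mu.detach().clone().requires_grad_(True)
    path = []
    for _ in range(steps):
        prob = clf(vae.decode(z))[:, target_class]
        grad, = torch.autograd.grad(prob.sum(), z)
        z = (z + lr * grad).detach().requires_grad_(True)
        path.append(vae.decode(z).detach())
    return path  # decoded ECGs along the traversal
```

Comparing the decoded ECGs along the returned path shows the morphological changes that flip the classifier's output, rather than merely where important features lie.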
This paper reports on the second GENEA Challenge, a benchmark for data-driven automatic co-speech gesture generation. Participating teams used the same speech and motion dataset to build gesture-generation systems. The motion generated by all of these systems was rendered to video using a standardised visualisation pipeline and evaluated in several large crowdsourced user studies. Unlike when comparing different research papers, the differences in results are due solely to differences between methods, enabling direct comparison between systems. This year's dataset was based on 18 hours of full-body motion capture, including fingers, of different persons engaging in dyadic conversation. Ten teams participated in the challenge across two tiers: full-body and upper-body gesticulation. For each tier, we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal. Our evaluation decouples human-likeness from gesture appropriateness, which has been a major challenge in the field. The evaluation results are both a revolution and a revelation: some synthetic conditions were rated as significantly more human-like than human motion capture. To the best of our knowledge, this has never before been demonstrated on a high-fidelity avatar. On the other hand, all synthetic motion was found to be far less appropriate for the speech than the original motion-capture recordings. Additional material is available via the project website at https://youngwoo-yoon.github.io/geneachallenge2022/
Growing awareness of biased patterns in natural language processing resources such as BERT has motivated many metrics to quantify "bias" and "fairness". However, comparing the results of different metrics, and of the works that evaluate with them, remains difficult, if not outright impossible. We survey the existing literature on fairness metrics for pre-trained language models and experimentally evaluate their compatibility, covering both bias in the language models themselves and in their downstream tasks. We do this through a combination of a traditional literature survey, correlation analysis, and empirical evaluations. We find that many metrics are not compatible with one another and depend strongly on (i) templates, (ii) attribute and target seeds, and (iii) the choice of embeddings. These results indicate that fairness or bias evaluation for contextualised language models remains challenging, if not at least highly subjective. To improve future comparisons and fairness evaluations, we recommend avoiding embedding-based metrics and focusing on fairness evaluations in downstream tasks.
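To make concrete what "templates, attribute and target seeds, and the choice of embeddings" means, here is a minimal sketch of a WEAT-style embedding-association score, one family of metrics this survey covers. The `embed` function and the seed lists are assumptions supplied by the user; changing the template behind `embed` or either seed list changes the score, which is exactly the sensitivity the survey reports.

```python
import numpy as np

def association_score(targets, attr_a, attr_b, embed):
    """WEAT-style bias score: mean cosine-similarity difference of each
    target word to two attribute seed sets. `embed` maps a word to a
    vector (e.g. a BERT embedding of the word inside some template
    sentence) and is assumed given."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    def assoc(w):
        return (np.mean([cos(embed(w), embed(a)) for a in attr_a])
                - np.mean([cos(embed(w), embed(b)) for b in attr_b]))
    return np.mean([assoc(t) for t in targets])
```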
We investigate ensembling techniques for forecasting and examine their potential for use on non-seasonal time series similar to those seen in the early days of the COVID-19 pandemic. Developing improved forecasting methods is essential, as they provide data-driven decision support to organisations and decision-makers during critical phases. We propose using late data fusion with a stacked ensemble of two forecasting models and two meta-features that demonstrate their predictive power during a preliminary forecasting stage. The final ensemble includes Prophet and a long short-term memory (LSTM) neural network as base models. The base models are combined by a multilayer perceptron (MLP), taking into account meta-features that show the highest correlation with each base model's forecast accuracy. We further show that including the meta-features generally improves the ensemble's forecast accuracy across the two forecast horizons of seven and fourteen days. This research reinforces previous work and demonstrates the value of combining traditional statistical models with deep learning models to produce more accurate forecasting models for time series from different domains and seasonality patterns.
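A minimal sketch of the late-fusion stacking described above, assuming the Prophet and LSTM base forecasts and the meta-feature values have already been computed on a validation window; the hidden-layer size and feature layout are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_stacked_ensemble(prophet_pred, lstm_pred, meta_features, y_true):
    """Stage 2 of the stack: an MLP meta-learner fuses the two base
    forecasts together with the meta-features (late data fusion)."""
    X = np.column_stack([prophet_pred, lstm_pred, meta_features])
    meta = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
    meta.fit(X, y_true)
    return meta

def ensemble_forecast(meta, prophet_pred, lstm_pred, meta_features):
    """Combine new base-model forecasts with the fitted meta-learner."""
    X = np.column_stack([prophet_pred, lstm_pred, meta_features])
    return meta.predict(X)
```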
Following the popularisation of media streaming, many video streaming services are continuously purchasing new video content to mine its potential profit. The newly added content must therefore be handled well so that it can be recommended to suitable users. In this paper, we address the new-item cold-start problem by exploring the potential of various deep learning features for providing video recommendations. The investigated deep learning features include features that capture visual appearance, audio, and motion information from video content. We also explore different fusion methods to evaluate how these feature modalities can be combined to fully exploit the complementary information they capture. Experiments on a real-world video dataset for movie recommendations show that deep learning features outperform hand-crafted features. In particular, recommendations generated with deep learning audio features and action-centric deep learning features are superior to those generated with MFCC and state-of-the-art iDT features. Moreover, combining various deep learning features with hand-crafted features and textual metadata yields significant improvements in recommendations over combining the former alone.
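As a sketch of the kinds of fusion methods being compared, the snippet below contrasts early fusion (concatenating per-modality item vectors) with late fusion (combining per-modality similarity scores); the modality names and weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def early_fusion(visual, audio, motion):
    """Early fusion: concatenate L2-normalised per-modality vectors
    into a single item representation."""
    norm = lambda v: v / (np.linalg.norm(v) + 1e-9)
    return np.concatenate([norm(visual), norm(audio), norm(motion)])

def late_fusion_score(query, item, weights=(1/3, 1/3, 1/3)):
    """Late fusion: combine per-modality cosine similarities with
    fixed weights. `query` and `item` are dicts of modality -> vector."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return sum(w * cos(query[m], item[m])
               for w, m in zip(weights, ("visual", "audio", "motion")))
```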
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy; that is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
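The entropy-regularized actor update at the heart of soft actor-critic can be written compactly. The sketch below shows the standard SAC policy loss with twin Q-networks; `policy.sample` returning a reparameterised action together with its log-probability is an assumption about the interface, not code from the paper.

```python
import torch

def sac_actor_loss(policy, q1, q2, states, alpha):
    """Standard SAC policy objective with twin Q-networks:
    minimise  E[ alpha * log pi(a|s) - min(Q1, Q2)(s, a) ],
    i.e. maximise expected value plus policy entropy, with alpha
    trading off reward against randomness."""
    actions, log_pi = policy.sample(states)  # reparameterised sample
    q = torch.min(q1(states, actions), q2(states, actions))
    return (alpha * log_pi - q).mean()
```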
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
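The optimization referred to above targets the evidence lower bound (ELBO); this standard identity is not spelled out in the abstract, but it is what makes the "optimization instead of integration" trade explicit:

$$ \log p(x) \;=\; \mathrm{ELBO}(\phi) + \mathrm{KL}\!\left(q_\phi(z)\,\|\,p(z\mid x)\right), \qquad \mathrm{ELBO}(\phi) \;=\; \mathbb{E}_{q_\phi(z)}\!\left[\log p(x,z) - \log q_\phi(z)\right]. $$

Since $\log p(x)$ is fixed, maximizing the ELBO over the variational parameters $\phi$ minimizes the KL divergence from $q_\phi$ to the exact posterior, and the optimized ELBO itself lower-bounds the marginal likelihood without ever computing the integral.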
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support/query features based on a Transformer-like framework. Our key insights are twofold. First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
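The mask-based dynamic class centers can be sketched as masked average pooling over support features. The code below is a plausible reading of that step, not the authors' implementation, and the channel-wise re-weighting form is an assumption.

```python
import torch
import torch.nn.functional as F

def masked_class_center(support_feat, support_mask):
    """Masked average pooling: collapse support features inside the
    object mask into a single class-center vector.

    support_feat: (B, C, H, W) backbone features
    support_mask: (B, 1, h, w) binary instance masks
    """
    mask = F.interpolate(support_mask.float(), size=support_feat.shape[-2:])
    center = (support_feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)
    return center  # (B, C)

def reweight_query(query_feat, center):
    """Re-weight query features channel-wise by the class center
    (one simple choice of re-weighting; the paper's module may differ)."""
    return query_feat * torch.sigmoid(center)[..., None, None]
```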
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
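One plausible instantiation of the global photometric alignment module is per-channel mean/std matching of target images to source-domain statistics. The sketch below illustrates this reading only; it is not the paper's exact module, which may, for example, operate in a different colour space.

```python
import numpy as np

def global_photometric_alignment(target_img, src_mean, src_std):
    """Align a target-domain image to source-domain image statistics
    by per-channel mean/std matching, reducing low-level domain shift.

    target_img: (H, W, 3) float array
    src_mean, src_std: (3,) channel statistics precomputed over the
    source domain (assumed given).
    """
    t_mean = target_img.mean(axis=(0, 1))
    t_std = target_img.std(axis=(0, 1)) + 1e-6
    return (target_img - t_mean) / t_std * src_std + src_mean
```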