Interpretive scholars generate knowledge from text corpora by manually sampling documents, applying codes, and refining and collating codes into categories until meaningful themes emerge. Given a large corpus, machine learning could help scale this data sampling and analysis, but prior research shows that experts are generally concerned about algorithms potentially disrupting or driving interpretive scholarship. We take a human-centered design approach to address these concerns and build Scholastic, which incorporates a machine-in-the-loop clustering algorithm to scaffold interpretive text analysis. As a scholar applies codes to documents and refines them, the resulting coding schema serves as structured metadata that constrains hierarchical document and word clusters inferred from the corpus. Interactive visualizations of these clusters can help scholars strategically sample documents further toward insights. Scholastic demonstrates how human-centered algorithm design and visualizations employing familiar metaphors can support inductive and interpretive research methodologies through interactive topic modeling and document clustering.
translated by Google Translate
Clickbait articles often have a title phrased as a question or vague teaser that entices the user to click on the link and read the article to find the explanation. We developed a system that automatically finds the answer or explanation of the clickbait hook from the website text, so that the user does not need to read through the text themselves. We fine-tune an extractive question-answering model (RoBERTa) and an abstractive one (T5), using data scraped from the 'StopClickbait' Facebook pages and Reddit's 'SavedYouAClick' subforum. We find that both the extractive and abstractive models improve significantly after fine-tuning. The extractive model performs slightly better according to ROUGE scores, while the abstractive one has a slight edge in terms of BERTScore.
We propose a fully unsupervised method to detect bias in contextualized embeddings. The method leverages the assortative information latently encoded by social networks and combines orthogonality regularization, structured sparsity learning, and graph neural networks to find the embedding subspace capturing this information. As a concrete example, we focus on the phenomenon of ideological bias: we introduce the concept of an ideological subspace, show how it can be found by applying our method to online discussion forums, and present techniques to probe it. Our experiments suggest that the ideological subspace encodes abstract evaluative semantics and reflects changes in the political left-right spectrum during the presidency of Donald Trump.
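The core operation behind probing or neutralizing such a subspace is projection. A minimal NumPy sketch of the idea, assuming an already-estimated (hypothetical) basis for the ideological subspace rather than the paper's graph-neural-network procedure for finding it:

```python
import numpy as np

def remove_subspace(embeddings, basis):
    """Project embeddings onto the orthogonal complement of a bias subspace.

    embeddings: (n, d) array of contextualized embeddings.
    basis: (k, d) array whose rows span the (e.g. ideological) subspace.
    The basis here is a stand-in; the paper learns it with orthogonality
    regularization, structured sparsity, and graph neural networks.
    """
    # Orthonormalize the subspace basis via QR decomposition.
    q, _ = np.linalg.qr(basis.T)      # q: (d, k) with orthonormal columns
    projector = q @ q.T               # (d, d) projector onto the subspace
    return embeddings - embeddings @ projector

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))
direction = rng.normal(size=(1, 8))   # hypothetical 1-D ideological direction
debiased = remove_subspace(emb, direction)
# After removal, the component along the direction is numerically zero.
print(np.abs(debiased @ direction.T).max() < 1e-10)  # True
```

The same projector, applied rather than subtracted, would isolate the subspace component for probing.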
An effective aggregation of node features into a graph-level representation via readout functions is an essential step in numerous learning tasks involving graph neural networks. Typically, readouts are simple and non-adaptive functions designed such that the resulting hypothesis space is permutation invariant. Prior work on deep sets indicates that such readouts might require complex node embeddings that can be difficult to learn via standard neighborhood aggregation schemes. Motivated by this, we investigate the potential of adaptive readouts given by neural networks that do not necessarily give rise to permutation invariant hypothesis spaces. We argue that in some problems such as binding affinity prediction where molecules are typically presented in a canonical form it might be possible to relax the constraints on permutation invariance of the hypothesis space and learn a more effective model of the affinity by employing an adaptive readout function. Our empirical results demonstrate the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics. Moreover, we observe a consistent improvement over standard readouts (i.e., sum, max, and mean) relative to the number of neighborhood aggregation iterations and different convolutional operators.
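The contrast between a standard readout and an adaptive one can be shown in a few lines. A toy NumPy sketch (the MLP weights and sizes are illustrative, not the paper's architecture), demonstrating that a sum readout is permutation invariant while an MLP over node order is not:

```python
import numpy as np

def sum_readout(node_embeddings):
    """Standard permutation-invariant readout: node order does not matter."""
    return node_embeddings.sum(axis=0)

def mlp_readout(node_embeddings, w1, w2):
    """A toy adaptive readout: an MLP over the flattened node embeddings.

    Because it sees node order, it is NOT permutation invariant -- the
    relaxation explored for canonical-form inputs such as molecules.
    """
    h = np.tanh(node_embeddings.reshape(-1) @ w1)
    return h @ w2

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 4))           # 5 nodes, 4-dim embeddings
w1 = rng.normal(size=(20, 16))
w2 = rng.normal(size=(16, 8))

perm = x[::-1]                        # same graph, nodes reordered
print(np.allclose(sum_readout(x), sum_readout(perm)))                  # True
print(np.allclose(mlp_readout(x, w1, w2), mlp_readout(perm, w1, w2)))  # False (generically)
```

When inputs arrive in a canonical order, the adaptive readout can exploit positional information that sum, max, and mean discard.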
Recent developments in deep learning algorithms have brought significant benefits to solving many medical image analysis problems. Training deep learning models commonly requires large datasets with expert-labeled annotations. However, acquiring expert-labeled annotations is not only expensive but also subjective and error-prone, and intra-/inter-observer variability introduces noisy labels. This is a particular problem when using deep learning models to segment medical images due to ambiguous anatomical boundaries. Image-based medical diagnosis tools that use deep learning models trained with incorrect segmentation labels can lead to false diagnoses and treatment suggestions. Multi-rater annotations may be better suited than single-rater annotations for training deep learning models with small training sets. The aim of this paper was to develop and evaluate a method for generating probabilistic labels based on multi-rater annotations and anatomical knowledge of lesion features in MRI, and a method for training segmentation models using these probabilistic labels with a normalized active loss as a noise-tolerant loss function. The model was evaluated by comparison against binary ground truth on 17 knee MRI scans for clinical segmentation and detection of bone marrow lesions (BMLs). Compared to a binary cross-entropy loss function, the proposed method successfully improved precision by 14%, recall by 22%, and the Dice score by 8%. Overall, the results of this work suggest that the proposed normalized active loss using soft labels successfully mitigated the effects of noisy labels.
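A minimal sketch of the soft-label idea: several raters' binary masks are averaged into a probabilistic label map, and the loss is computed against those probabilities. This illustrates soft-label cross-entropy only; the paper's exact normalized active loss and its anatomical priors are not reproduced here.

```python
import numpy as np

def soft_labels_from_raters(rater_masks):
    """Average several raters' binary masks into a probabilistic label map."""
    return np.mean(rater_masks, axis=0)

def soft_cross_entropy(pred, soft_target, eps=1e-7):
    """Cross-entropy against probabilistic (soft) targets.

    An illustrative noise-tolerant alternative to hard binary cross-entropy,
    not the paper's normalized active loss.
    """
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(soft_target * np.log(pred)
                    + (1 - soft_target) * np.log(1 - pred))

# Three raters disagree on a 2x2 lesion mask.
raters = np.array([
    [[1, 0], [1, 0]],
    [[1, 0], [0, 0]],
    [[1, 1], [1, 0]],
], dtype=float)
target = soft_labels_from_raters(raters)  # pixel (0,0) -> 1.0, (1,0) -> 2/3
pred = np.full((2, 2), 0.5)
print(round(soft_cross_entropy(pred, target), 4))  # 0.6931 (= ln 2 for a uniform prediction)
```

Pixels where raters disagree contribute a softened target instead of forcing the model toward a possibly wrong hard label.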
Objective: Electrocardiogram (ECG) signals commonly suffer from noise interference, such as baseline wander. High-quality, high-fidelity reconstruction of ECG signals is of great significance for diagnosing cardiovascular diseases. This paper therefore proposes a novel ECG baseline-wander and noise-removal technique. Methods: We extended a deep score-based diffusion model in a conditional manner specific to ECG signals, namely DeScoD-ECG, for ECG baseline-wander and noise removal. In addition, we deployed a multi-shot averaging strategy to improve signal reconstruction. We conducted experiments on the QT Database and the MIT-BIH Noise Stress Test Database to verify the feasibility of the method. Baseline methods were adopted for comparison, including traditional digital-filter-based and deep-learning-based methods. Results: Quantitative evaluations demonstrate that the proposed method achieves outstanding performance on four distance-based similarity metrics (sum of squared distances, maximum absolute square, percentage root distance, and cosine similarity), with 3.771 ± 5.713 au, 0.329 ± 0.258 au, 40.527 ± 26.258%, and 0.926 ± 0.087, respectively. This amounts to an overall improvement of at least 20% over the best baseline method. Conclusion: This paper demonstrates the state-of-the-art performance of DeScoD-ECG for ECG noise removal, with better approximation of the true data distribution and higher stability under extreme noise corruption. Significance: This study is among the first to extend conditional diffusion-based generative models to ECG noise removal, and DeScoD-ECG has the potential to be widely used in biomedical applications.
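The multi-shot averaging strategy can be sketched independently of the diffusion model itself: run the stochastic denoiser several times on the same noisy input and average the reconstructions. The toy denoiser below is a hypothetical stand-in for one conditional reverse-diffusion sampling pass:

```python
import numpy as np

def multi_shot_denoise(noisy, denoise_fn, shots=10, rng=None):
    """Run a stochastic denoiser several times and average the reconstructions.

    denoise_fn stands in for one reverse-diffusion sampling pass conditioned
    on the noisy ECG; averaging independent samples reduces sampling variance.
    """
    rng = rng or np.random.default_rng()
    samples = [denoise_fn(noisy, rng) for _ in range(shots)]
    return np.mean(samples, axis=0)

# Toy stand-in: the "denoiser" recovers the clean signal up to sampling noise.
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.4 * np.random.default_rng(2).normal(size=t.shape)

def toy_denoiser(x, rng):
    return clean + 0.1 * rng.normal(size=x.shape)

one_shot = toy_denoiser(noisy, np.random.default_rng(3))
avg = multi_shot_denoise(noisy, toy_denoiser, shots=10, rng=np.random.default_rng(3))
print(np.mean((one_shot - clean) ** 2) > np.mean((avg - clean) ** 2))  # True: averaging helps
```

With independent sampling noise, averaging N shots shrinks the variance of the residual error by roughly a factor of N.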
Geographic features are commonly used to improve the performance of pretrained language models (PLMs) on NLP tasks where they are intuitively beneficial (e.g., geolocation prediction, dialect feature prediction). Existing methods, however, leverage geographic information in task-specific fine-tuning and fail to integrate it into the geo-linguistic knowledge encoded by PLMs, which would make it transferable across different tasks. In this paper, we introduce an approach to task-agnostic geoadaptation of PLMs that forces them to learn associations between linguistic phenomena and geographic locations. Geoadaptation is an intermediate training step that couples language modeling and geolocation prediction in a multi-task learning setup. In our main set of experiments, we geoadapt BERTić, a PLM for Bosnian-Croatian-Montenegrin-Serbian (BCMS), using a corpus of geotagged BCMS tweets. Evaluation on three tasks, namely fine-tuned as well as zero-shot geolocation prediction and zero-shot prediction of dialect features, shows that geoadaptation is very effective: e.g., we obtain state-of-the-art performance in supervised geolocation prediction and report massive gains over geographically uninformed PLMs on zero-shot geolocation prediction. Moreover, in follow-up experiments we successfully geoadapt two other PLMs, specifically ScandiBERT on Norwegian, Swedish, and Danish tweets and GermanBERT on Jodel posts in German from Austria, Germany, and Switzerland, proving that the benefits of geoadaptation are not limited to a particular language area and PLM.
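The multi-task coupling amounts to combining two per-batch objectives into one training loss. A minimal sketch, assuming a scalar language-modeling loss and (lat, lon) geolocation regression with a hypothetical mixing weight `alpha` (the paper's exact objective and weighting are not reproduced here):

```python
import numpy as np

def multitask_loss(lm_loss, geo_pred, geo_true, alpha=0.5):
    """Couple a language-modeling loss with a geolocation regression loss.

    lm_loss: scalar masked-LM loss for the batch.
    geo_pred / geo_true: predicted and gold (lat, lon) pairs.
    alpha: illustrative mixing weight, not the paper's setting.
    """
    geo_loss = np.mean((np.asarray(geo_pred) - np.asarray(geo_true)) ** 2)
    return (1 - alpha) * lm_loss + alpha * geo_loss

# With a perfect geolocation prediction, only the (weighted) LM loss remains.
print(multitask_loss(2.0, (45.0, 16.0), (45.0, 16.0)))  # 1.0
```

Optimizing both terms jointly is what pushes geographic associations into the PLM's weights rather than into a task-specific head alone.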
Labeled data is the foundation of most natural language processing tasks. However, labeling data is difficult, and there are often diverse valid beliefs about what the correct data labels should be. So far, dataset creators have acknowledged annotator subjectivity but have not actively managed it in the annotation process. This leads to partly subjective datasets that fail to serve a clear downstream use. To address this issue, we propose two contrasting data annotation paradigms. The descriptive paradigm encourages annotator subjectivity, whereas the prescriptive paradigm discourages it. Descriptive annotation allows different beliefs to be surveyed and modeled, whereas prescriptive annotation enables the training of models that consistently apply one belief. We discuss the benefits and challenges of implementing both paradigms, and argue that dataset creators should explicitly aim for one or the other to facilitate the intended use of their dataset. Lastly, we design an annotation experiment that illustrates the contrast between the two paradigms.
Approximate inference methods such as the Laplace method, Laplace approximations, and variational methods are popular approaches when exact inference is not feasible due to the complexity of the model or the abundance of data. In this paper, we propose a hybrid approximate method, the low-rank Variational Bayes correction (VBC), which uses the Laplace method and subsequently applies a variational Bayes correction to the posterior. The cost is essentially that of the Laplace method, which ensures the scalability of the approach. We illustrate the method and its advantages with simulated and real data, at small and large scale.
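The Laplace step that VBC builds on can be sketched in one dimension: find the mode of the log-posterior by Newton iteration and use minus the inverse Hessian there as the approximate variance. The subsequent low-rank variational Bayes correction is the paper's contribution and is not sketched here.

```python
def laplace_approximation(grad, hess, x0, iters=50):
    """Gaussian (Laplace) approximation to a 1-D posterior.

    grad / hess: first and second derivatives of the log-posterior.
    Newton iterations find the mode; minus the inverse Hessian at the
    mode gives the approximate variance.
    """
    x = x0
    for _ in range(iters):
        x = x - grad(x) / hess(x)   # Newton step toward the posterior mode
    return x, -1.0 / hess(x)

# Example: Gaussian log-posterior log p(x) = -(x - 2)^2 / 8 + const,
# where the Laplace approximation is exact: mean 2, variance 4.
grad = lambda x: -(x - 2.0) / 4.0
hess = lambda x: -1.0 / 4.0
mean, var = laplace_approximation(grad, hess, x0=0.0)
print(mean, var)  # 2.0 4.0
```

For non-Gaussian posteriors, the Gaussian fit at the mode can be biased, which is what a subsequent variational correction aims to fix at little extra cost.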
Graph representation learning methods generate numerical vector representations for the nodes in a network, enabling their use in standard machine learning models. These methods aim to preserve relational information, such that nodes that are similar in the graph are close to each other in the representation space. Similarity can be based largely on one of two notions: connectivity or structural role. In tasks where node structural roles are important, connectivity-based methods show poor performance. Recent work has begun to focus on the scalability of learning methods to massive graphs of millions to billions of nodes and edges. Many unsupervised node representation learning algorithms are incapable of scaling to large graphs and cannot generate node representations for unseen nodes. In this work, we propose Inferential SIR-GN, a model that is pre-trained on random graphs and then computes node representations rapidly, including for very large networks. We demonstrate that the model captures nodes' structural-role information and shows excellent performance on node and graph classification tasks on unseen networks. Additionally, we observe that the scalability of Inferential SIR-GN is comparable to that of the fastest current methods for massive graphs.
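The connectivity-versus-structural-role distinction can be illustrated with a toy structural feature: describe each node by its degree plus the sorted degrees of its neighbors, so nodes playing the same role in different parts of a graph get identical vectors. This is far simpler than SIR-GN's learned representations, but captures the same notion of similarity:

```python
from collections import defaultdict

def structural_signature(edges, pad=3):
    """Map each node to a simple structural-role vector: its degree followed
    by the sorted, padded degrees of its neighbors. Nodes with the same role
    anywhere in the graph share a signature -- the idea behind
    structure-based (rather than connectivity-based) representations.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    sig = {}
    for node, nbrs in adj.items():
        degs = sorted((len(adj[n]) for n in nbrs), reverse=True)[:pad]
        degs += [0] * (pad - len(degs))          # pad for fixed-length vectors
        sig[node] = tuple([len(nbrs)] + degs)
    return sig

# Two disjoint 3-leaf stars: the two hubs share a signature, as do all leaves,
# even though the stars are not connected to each other.
edges = [(0, 1), (0, 2), (0, 3), (10, 11), (10, 12), (10, 13)]
sig = structural_signature(edges)
print(sig[0] == sig[10], sig[1] == sig[11])  # True True
```

A connectivity-based embedding would place the two stars far apart; a structural-role representation deliberately does not.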