To extract essential information from complex data, computer scientists have been developing machine learning models that learn low-dimensional representations of patterns. Advances in machine learning research have benefited not only computer scientists but also social scientists, since human behavior and social phenomena reside in complex data. To document this emerging trend, we survey recent studies that apply word embedding techniques to human behavior mining, build a taxonomy to characterize the methods and procedures used in the surveyed papers, and highlight a recent emerging trend of applying word embedding models to non-textual human behavior data. The survey also conducts a simple experiment to warn that the common similarity measures used in the literature can yield different results even when they return consistent results at the aggregate level.
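As an illustration of that warning, the following minimal sketch (not the survey's own experiment; toy random vectors stand in for trained embeddings) shows how cosine similarity and Euclidean distance can rank a word's neighbors differently:

```python
import numpy as np

# Toy random vectors stand in for trained word embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["king", "queen", "car", "road", "apple"]}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean(u, v):
    return float(np.linalg.norm(u - v))

# Rank neighbors of one query word under each measure; the two orderings
# need not coincide even when aggregate statistics look consistent.
query = emb["king"]
by_cos = sorted(emb, key=lambda w: -cosine(query, emb[w]))
by_euc = sorted(emb, key=lambda w: euclidean(query, emb[w]))
print("cosine   :", by_cos)
print("euclidean:", by_euc)
```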
In scientific research, the method is an essential means of solving scientific problems and a key object of study in its own right. As science develops, many scientific methods are being proposed, modified, and used. Authors describe the details of a method in the abstract and body text, and the key entities in academic literature that refer to a method's name are called method entities. Exploring the various method entities in large volumes of academic literature helps scholars understand existing methods, select appropriate methods for their research tasks, and propose new methods. Moreover, the evolution of method entities can reveal the development of a discipline and facilitate knowledge discovery. This paper therefore systematically reviews methodological and empirical work, focusing on extracting method entities from full-text academic literature and on efforts to build knowledge services using these extracted method entities. Definitions of the key concepts involved in this review are presented first. Based on these definitions, we systematically review the approaches and metrics for extracting and evaluating method entities, with a focus on the pros and cons of each approach. We also survey how extracted method entities are used to build new applications. Finally, the limitations of existing work and potential next steps are discussed.
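To make the notion of a method entity concrete, here is a small hypothetical sketch, with invented cue-phrase patterns, of the rule-based end of the extraction spectrum such reviews cover; production systems typically use sequence labeling instead:

```python
import re

# Hypothetical cue-phrase patterns; real systems use sequence labeling
# (e.g. neural taggers) rather than regular expressions.
PATTERNS = [
    re.compile(r"\busing ([A-Z][\w-]*(?: [A-Z][\w-]*)*)"),
    re.compile(r"\bbased on ([A-Z][\w-]*(?: [A-Z][\w-]*)*)"),
    re.compile(r"\b[Ww]e (?:apply|adopt|employ) (?:the )?([A-Z][\w-]*(?: [A-Z][\w-]*)*)"),
]

def extract_method_entities(sentence: str) -> list[str]:
    """Return capitalized phrases that cue-phrase patterns flag as method names."""
    hits = []
    for pat in PATTERNS:
        hits += pat.findall(sentence)
    return hits

print(extract_method_entities(
    "We apply Latent Dirichlet Allocation and cluster the output using K-Means."
))
```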
Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language, the same sort of language humans are exposed to every day. We replicate a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known psychological studies. We replicate these using a widely used, purely statistical machine-learning model, namely the GloVe word embedding, trained on a corpus of text from the Web. Our results indicate that language itself contains recoverable and accurate imprints of our historic biases, whether these are morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. These regularities are captured by machine learning along with the rest of semantics. In addition to our empirical findings concerning language, we also contribute new methods for evaluating bias in text, the Word Embedding Association Test (WEAT) and the Word Embedding Factual Association Test (WEFAT). Our results have implications not only for AI and machine learning, but also for the fields of psychology, sociology, and human ethics, since they raise the possibility that mere exposure to everyday language can account for the biases we replicate here.
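The WEAT statistic has a simple closed form: each target word w is scored by its mean cosine similarity to attribute set A minus attribute set B, and the effect size is the standardized difference of these scores between target sets X and Y. A minimal sketch, with toy random vectors standing in for trained GloVe embeddings:

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B, emb):
    """s(w, A, B): mean cosine similarity of w to set A minus to set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """Cohen's-d style WEAT statistic over targets X, Y and attributes A, B."""
    sx = [assoc(x, A, B, emb) for x in X]
    sy = [assoc(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy vectors stand in for GloVe embeddings trained on web text.
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in
       ["rose", "daisy", "ant", "wasp", "love", "peace", "hate", "ugly"]}
d = weat_effect_size(["rose", "daisy"], ["ant", "wasp"],
                     ["love", "peace"], ["hate", "ugly"], emb)
print(f"WEAT effect size: {d:.3f}")
```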
This survey draws a broad panoramic picture of the state of the art (SoTA) in generative methods for analyzing social media data. It fills a gap, as existing survey articles are either limited in scope or have become dated. We include two important aspects that are currently gaining importance in mining and modeling social media: dynamics and networks. Social dynamics are important for understanding the spread of influence or disease, the formation of friendships, and the like; modeling networks, on the other hand, can capture various complex relationships, providing additional insight and identifying important patterns that would otherwise go unnoticed.
Natural language processing (NLP) is a field of artificial intelligence that applies information technology to process human language, understand it to a certain degree, and use it in various applications. The field has developed rapidly in recent years and now employs modern variants of deep neural networks to extract relevant patterns from large text corpora. The main objective of this work is to survey the recent use of NLP in the field of pharmacology. As our work shows, NLP is a highly relevant information extraction and processing approach for pharmacology. It has been used extensively, from intelligent searches through thousands of medical documents to finding traces of adverse drug interactions in social media. We split our coverage into five categories: modern NLP methodology, common tasks, relevant textual data, knowledge bases, and useful programming libraries. We divide each of these five categories into appropriate subcategories, describe their main properties and ideas, and summarize them in tabular form. The resulting survey presents a comprehensive overview of the field, useful to practitioners and interested observers.
Emotions are a crucial part of compelling narratives: literature tells us about people with goals, desires, passions, and intentions. Emotion analysis is part of the broader and larger field of sentiment analysis, and receives increasing attention in literary studies. In the past, the emotional dimension of literature was mainly studied in the context of literary hermeneutics. However, with the emergence of the research field known as Digital Humanities (DH), some studies of emotion in literary contexts have taken a computational turn. Given that DH is still being formed as a field, this direction of research can be considered relatively new. In this survey, we offer an overview of the existing body of research on emotion analysis as applied to literature. The reviewed studies cover a variety of topics, including tracking dramatic changes in plot development, network analysis of literary texts, and understanding the sentiment of texts, among other subjects.
Language can be used as a means of reproducing and enforcing harmful stereotypes and biases, and has been analyzed as such in numerous studies. In this paper, we survey 304 papers on gender bias in natural language processing. We analyze definitions of gender and its categories in the social sciences and connect them to formal definitions of gender bias in NLP research. We survey the lexica and datasets applied in research on gender bias, then compare and contrast approaches to detecting and mitigating gender bias. We find that research on gender bias suffers from four core limitations. 1) Most research treats gender as a binary variable, neglecting its fluidity and continuity. 2) Most of the work has been conducted in monolingual setups for English or other high-resource languages. 3) Despite a myriad of papers on gender bias in NLP methods, we find that most newly developed algorithms do not test their models for bias and disregard the ethical considerations of their work. 4) Finally, the methodologies developed in this line of research are fundamentally flawed, covering very limited definitions of gender bias and lacking evaluation baselines and pipelines. We suggest recommendations for overcoming these limitations as a guide for future research.
This work introduces a new approach to account for subjectivity and general context dependency in text analysis, taking as an example the detection of emotions conveyed in text. The proposed approach accounts for subjectivity through a computational version of the Frame Theory of Marvin Minsky (1974), leveraging the text vectorization of Mikolov et al. (2013) to generate distributed representations of words based on the contexts in which they appear. Our approach rests on three components: 1. a frame/"room" representing the point of view; 2. a benchmark representing the criteria for the analysis, in this case emotion classification, drawn from Robert Plutchik's (1980) studies of human emotions; 3. the document to be analyzed. Using similarity measures between words, we are able to extract, for the document under analysis, the relative relevance of the elements of the benchmark in our case study. Our approach provides a measure that takes into account the point of view of the entity reading the document. The approach can be applied in all cases where subjectivity is relevant to understanding the relative value or meaning of a text. Subjectivity here need not be limited to human reactions; it can be used to provide a text with an interpretation related to a given domain (the "room"). To evaluate our approach, we used a test case in the political domain.
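A minimal sketch of the three-component idea, assuming a simple averaging rule for folding the "room" vector into the document representation (the paper's exact combination rule is not specified here):

```python
import numpy as np

PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def emotion_profile(doc_tokens, room, emb):
    """Relevance of each benchmark emotion to a document, seen from a 'room'.

    The room is folded in by averaging its vector with the document
    centroid; this combination rule is an assumption for illustration.
    """
    doc_vec = np.mean([emb[t] for t in doc_tokens if t in emb], axis=0)
    view = (doc_vec + emb[room]) / 2.0   # perspective-shifted document vector
    return {e: float(cosine(view, emb[e])) for e in PLUTCHIK}

# Toy vectors stand in for distributed word representations.
rng = np.random.default_rng(2)
vocab = PLUTCHIK + ["election", "victory", "protest", "politics"]
emb = {w: rng.normal(size=50) for w in vocab}
print(emotion_profile(["election", "victory", "protest"], "politics", emb))
```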
With recent developments in technology, data on detailed human temporal behavior has become available. Many methods have been proposed to mine such human dynamic behavior data and reveal valuable insights for research and business. However, most methods analyze only sequences of actions and do not study inter-temporal information, such as the time intervals between actions, in a holistic manner. Although actions and the time intervals between them are interdependent, integrating them is challenging because of their different natures: time and action. To overcome this challenge, we propose a unified method that analyzes user actions together with inter-temporal information, i.e., time intervals. We simultaneously embed a user's action sequence and its time intervals to obtain a low-dimensional representation of the actions together with inter-temporal information. Using three real-world datasets, this paper shows that the proposed method enables us to characterize user actions in terms of temporal context, and that explicit modeling of action sequences and inter-temporal user behavior information achieves successful interpretable analysis.
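One way to realize such joint embedding, sketched here as an assumption rather than the paper's actual procedure, is to discretize time intervals into gap tokens and train a standard embedding model over the interleaved streams:

```python
import math
from gensim.models import Word2Vec  # assumed tooling, not the paper's implementation

def interleave(actions, gaps_sec):
    """Interleave action tokens with log-bucketed time-gap tokens.

    '<gap:2>' means a gap on the order of 10^2 seconds; the paper's
    actual discretization may differ.
    """
    stream = [actions[0]]
    for act, gap in zip(actions[1:], gaps_sec):
        stream.append(f"<gap:{int(math.log10(max(gap, 1)))}>")
        stream.append(act)
    return stream

sessions = [
    interleave(["login", "search", "purchase"], [30, 600]),
    interleave(["login", "browse", "logout"], [5, 7200]),
]
# Actions and gap tokens are embedded in one shared low-dimensional space.
model = Word2Vec(sessions, vector_size=32, window=3, min_count=1, seed=0)
print(model.wv.most_similar("login", topn=2))
```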
The automatic processing of language is ubiquitous in our lives and often plays a central role in our decisions, such as choosing the wording for our messages and mails, translating the texts we read, or even carrying on full conversations with us. Word embeddings are a key component of modern natural language processing systems. They provide a representation of words that improves the performance of many applications, acting as a representation of meaning. Word embeddings appear to capture a semblance of the meaning of words from raw text, but at the same time they also distill stereotypes and societal biases that are subsequently conveyed to the final applications. Such biases can be discriminatory. Detecting and mitigating these biases is very important to prevent discriminatory behavior in automated processes, which can be much more harmful at scale than human behavior. Currently there are many tools and techniques to detect and mitigate biases in word embeddings, but they present many barriers to the engagement of people without technical skills. As it happens, most experts in bias, be they social scientists or people with deep knowledge of the contexts where bias does harm, lack such skills and cannot engage in the bias detection process because of the technical barriers. We have studied the barriers in existing tools and explored their possibilities and limitations with different kinds of users. Based on that exploration, we propose to develop a tool specially aimed at lowering the technical barriers and providing the exploratory power to meet the requirements of experts, scientists, and people in general who are willing to audit these technologies.
Future work sentences (FWS) are the particular sentences in academic papers that contain the author's description of their proposed follow-up research direction. This paper presents methods to automatically extract FWS from academic papers and classify them according to the different future directions embodied in the paper's content. FWS recognition methods will enable subsequent researchers to locate future work sentences more accurately and quickly and reduce the time and cost of acquiring the corpus. Existing work on the automatic identification of future work sentences is relatively scarce, and current research cannot accurately identify FWS in academic papers, which precludes large-scale data mining. Furthermore, the content of future work has many aspects, and subdividing that content is conducive to analyzing specific development directions. In this paper, Natural Language Processing (NLP) is used as a case study, and FWS are extracted from academic papers and classified into different types. We manually build an annotated corpus with six different types of FWS. Then, automatic recognition and classification of FWS are implemented using machine learning models, and the performance of these models is compared based on the evaluation metrics. The results show that the Bernoulli Bayesian model has the best performance in the automatic recognition task, with a Macro F1 of 90.73%, and the SCIBERT model has the best performance in the automatic classification task, with a weighted average F1 of 72.63%. Finally, we extract keywords from FWS to gain a deep understanding of the key content they describe, and we demonstrate that the content set out in FWS is reflected in subsequent research work by measuring the similarity between future work sentences and abstracts.
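A minimal sketch of the recognition step using scikit-learn's Bernoulli naive Bayes; the features, sentences, and preprocessing shown are placeholders, not the paper's configuration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpus: 1 = future work sentence (FWS), 0 = other.
train_sents = [
    "In future work we plan to extend the model to multilingual corpora.",
    "We also intend to investigate larger pre-trained language models.",
    "We evaluate our approach on three benchmark datasets.",
    "Table 2 reports the results of the ablation study.",
]
train_labels = [1, 1, 0, 0]

# Binary word-presence features feed the Bernoulli naive Bayes classifier.
clf = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
clf.fit(train_sents, train_labels)
print(clf.predict(["As future work, we will explore larger language models."]))
```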
This paper is a comparative study in the context of topic detection on Covid-19 data. There are various approaches to topic detection, among which clustering is selected in this paper. Clustering requires a distance, and computing distances requires embeddings. The aim of this study is to simultaneously investigate three factors: embedding methods, distance metrics, and clustering methods, along with their interactions. The dataset, consisting of one month of tweets collected with Covid-19-related hashtags, is used for this study. Five embedding methods are selected, ranging from early to recent ones: Word2Vec, FastText, GloVe, BERT, and T5. Five clustering methods are investigated in this paper, namely: k-means, DBSCAN, OPTICS, spectral, and Jarvis-Patrick. Euclidean distance and cosine distance, as the most important distance metrics in this field, are also examined. First, more than 7,500 tests were performed to tune the parameters. Then, all the different combinations of embedding methods, distance metrics, and clustering methods were investigated using the silhouette metric, amounting to 50 combinations. The results of these 50 tests were examined first. Then, the rank of each method across all of its tests was considered. Finally, the main variables of the study (embedding method, distance metric, and clustering method) were studied separately, averaging over the control variables to neutralize their effect. The experimental results show that T5 strongly outperforms the other embedding methods in terms of the silhouette metric. In terms of distance metrics, cosine distance is considerably weaker. DBSCAN likewise outperforms the other clustering methods.
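Each cell of the study's 50-combination grid pairs one embedding, one distance, and one clusterer, scored by silhouette. A hedged sketch of a single cell, with random data standing in for tweet embeddings:

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 64))  # stand-in for tweet embeddings (e.g. T5 vectors)

for name, clusterer in [("k-means", KMeans(n_clusters=5, n_init=10, random_state=0)),
                        ("DBSCAN", DBSCAN(eps=7.5, min_samples=5))]:
    labels = clusterer.fit_predict(X)
    if len(set(labels)) > 1:  # silhouette needs at least two clusters
        print(name, silhouette_score(X, labels, metric="cosine"))
```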
We identify the task of measuring data to quantitatively characterize the composition of machine learning data and datasets. Similar to an object's height, width, and volume, data measurements quantify different attributes of data along common dimensions that support comparison. Several lines of research have proposed what we refer to as measurements, with differing terminology; we bring some of this work together, particularly in fields of computer vision and language, and build from it to motivate measuring data as a critical component of responsible AI development. Measuring data aids in systematically building and analyzing machine learning (ML) data towards specific goals and gaining better control of what modern ML systems will learn. We conclude with a discussion of the many avenues of future work, the limitations of data measurements, and how to leverage these measurement approaches in research and practice.
Extracting knowledge from unlabeled texts using machine learning algorithms can be complex. Document categorization and information retrieval are two applications that may benefit from unsupervised learning (e.g., text clustering and topic modeling), including exploratory data analysis. However, the unsupervised learning paradigm poses reproducibility issues: initialization can lead to variability, depending on the machine learning algorithm. Furthermore, distortions can be misleading with regard to cluster geometry; among the causes, the presence of outliers and anomalies can be a determining factor. Despite the relevance of initialization and outlier issues to text clustering and topic modeling, the authors could not find an in-depth analysis of them. This survey provides a systematic literature review (2011-2022) of these subareas and proposes a common terminology, since similar procedures go by different terms. The authors describe research opportunities, trends, and open issues. The appendices summarize the theoretical background of the text vectorization, factorization, and clustering algorithms that are directly or indirectly related to the reviewed works.
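The initialization issue is easy to reproduce: the same algorithm run with different seeds can partition the same data differently, as the small demonstration below (not from the survey) shows:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 20))  # weakly structured data exaggerates the effect

# n_init=1 isolates the effect of a single random initialization.
labels_a = KMeans(n_clusters=8, n_init=1, random_state=0).fit_predict(X)
labels_b = KMeans(n_clusters=8, n_init=1, random_state=1).fit_predict(X)
print("ARI between two seeds:", adjusted_rand_score(labels_a, labels_b))
```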
How do we design measures of social bias that we trust? While prior work has introduced several measures, no measure has gained widespread trust: instead, mounting evidence argues we should distrust these measures. In this work, we design bias measures that warrant trust based on the cross-disciplinary theory of measurement modeling. To combat the frequently fuzzy treatment of social bias in NLP, we explicitly define social bias, grounded in principles drawn from social science research. We operationalize our definition by proposing a general bias measurement framework DivDist, which we use to instantiate 5 concrete bias measures. To validate our measures, we propose a rigorous testing protocol with 8 testing criteria (e.g. predictive validity: do measures predict biases in US employment?). Through our testing, we demonstrate considerable evidence to trust our measures, showing they overcome conceptual, technical, and empirical deficiencies present in prior measures.
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embeddings models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short in intrinsic issues of linguistics, such as polysemy and homonymy. Any expert system that makes use of natural language in its core, can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), that disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that our approach can produce state-of-the-art results and outperforms several more complex state-of-the-art systems.
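For intuition, here is a deliberately simplified gloss-similarity variant of sense annotation, not the authors' MSSA algorithm: it picks the WordNet sense whose definition best overlaps the sentence context:

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def pick_sense(word: str, context: str):
    """Return the WordNet synset whose gloss shares the most words with the context."""
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for synset in wn.synsets(word):
        overlap = len(ctx & set(synset.definition().lower().split()))
        if overlap > best_overlap:
            best, best_overlap = synset, overlap
    return best

sense = pick_sense("bank", "she deposited the money at the bank branch")
print(sense, "-", sense.definition())
```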
Word embeddings learn implicit biases from the linguistic regularities captured in word co-occurrence statistics. By extending methods that quantify human-like biases in word embeddings, we introduce ValNorm, a novel intrinsic evaluation task and method to quantify the valence dimension of affect in human-rated word sets from social psychology. Applying ValNorm to static word embeddings from seven languages (Chinese, English, German, Polish, Portuguese, Spanish, and Turkish) and from historical English text spanning 200 years, we find that ValNorm achieves consistently high accuracy in quantifying the valence of non-discriminatory, non-social-group word sets. Specifically, ValNorm achieves a Pearson correlation of r = 0.88 with human judgment scores of pleasantness norms established for 399 words in English. In contrast, we measure gender stereotypes using the same word embeddings and find that social biases vary across languages. Our results indicate that the valence associations of non-discriminatory, non-social-group words represent widely shared associations across seven languages and over 200 years.
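A hedged sketch of a ValNorm-style evaluation: score each word's association with pleasant versus unpleasant attribute sets, then correlate the scores with human pleasantness norms; all word lists, vectors, and ratings below are toy placeholders:

```python
import numpy as np
from scipy.stats import pearsonr

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def valence_score(w, pleasant, unpleasant, emb):
    """Standardized association of w with pleasant vs. unpleasant attributes."""
    s_p = [cosine(emb[w], emb[a]) for a in pleasant]
    s_u = [cosine(emb[w], emb[b]) for b in unpleasant]
    return (np.mean(s_p) - np.mean(s_u)) / np.std(s_p + s_u, ddof=1)

rng = np.random.default_rng(5)
words = ["sunshine", "funeral", "cake", "crash"]
pleasant, unpleasant = ["love", "happy"], ["hate", "awful"]
emb = {w: rng.normal(size=50) for w in words + pleasant + unpleasant}
human_norms = [8.1, 2.2, 7.6, 1.9]  # toy pleasantness ratings on a 1-9 scale

scores = [valence_score(w, pleasant, unpleasant, emb) for w in words]
print("Pearson r:", pearsonr(scores, human_norms)[0])
```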
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from incurring on unfair decision-making, the AI community has concentrated efforts in correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embeddings models that can be extended to any natural language task. We intend to assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embeddings algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration between lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
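As a loose illustration of the building blocks being combined, not the FLLC II or FXLC II algorithms themselves, the sketch below greedily chains consecutive words whose embedding similarity to the running chain exceeds a threshold:

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def build_chains(tokens, emb, threshold=0.35):
    """Greedy embedding-based chaining; the threshold is an arbitrary choice."""
    chains = [[tokens[0]]]
    for tok in tokens[1:]:
        centroid = np.mean([emb[w] for w in chains[-1]], axis=0)
        if cosine(emb[tok], centroid) >= threshold:
            chains[-1].append(tok)   # semantically continuous: extend the chain
        else:
            chains.append([tok])     # semantic break: start a new chain
    return chains

# Toy vectors stand in for pre-trained word embeddings.
rng = np.random.default_rng(6)
tokens = ["bank", "money", "loan", "river", "water"]
emb = {w: rng.normal(size=50) for w in tokens}
print(build_chains(tokens, emb))
```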
News articles both shape and reflect public opinion across the political spectrum. Analyzing them for social bias can thus provide valuable insights, such as prevailing stereotypes in society and the media, which are often adopted by NLP models trained on respective data. Recent work has relied on word embedding bias measures, such as WEAT. However, several representation issues of embeddings can harm the measures' accuracy, including low-resource settings and token frequency differences. In this work, we study what kind of embedding algorithm serves best to accurately measure types of social bias known to exist in US online news articles. To cover the whole spectrum of political bias in the US, we collect 500k articles and review psychology literature with respect to expected social bias. We then quantify social bias using WEAT along with embedding algorithms that account for the aforementioned issues. We compare how models trained with the algorithms on news articles represent the expected social bias. Our results suggest that the standard way to quantify bias does not align well with knowledge from psychology. While the proposed algorithms reduce the gap, they still do not fully match the literature.