Learning representations of words in a continuous space is perhaps the most fundamental task in NLP, yet words interact in ways far richer than vector dot-product similarity can capture. Many relationships between words can be expressed set-theoretically: for example, adjective-noun compounds (e.g. "red cars" $\subseteq$ "cars") and homographs (e.g. "tongue" $\cap$ "body" should be similar to "mouth", while "tongue" $\cap$ "language" should be similar to "dialect") have natural set-theoretic interpretations. Box embeddings are a novel region-based representation that provides the ability to perform these set-theoretic operations. In this work, we provide a fuzzy-set interpretation of box embeddings, and learn box representations of words using a set-theoretic training objective. We demonstrate improved performance on a variety of word similarity tasks, particularly on less common words, and perform a quantitative and qualitative analysis exploring the additional unique expressivity provided by Word2Box.
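As a rough illustration of the set-theoretic machinery this abstract describes, the following is a minimal sketch with hard min/max boxes and a softened volume; the actual model uses a smoothed box parameterization, and all names here are illustrative, not Word2Box's API:

```python
import numpy as np

def softplus(x, beta=1.0):
    # Smooth approximation of max(0, x); keeps the volume non-zero
    # (and trainable) even when an intersection is nearly empty.
    return np.log1p(np.exp(beta * x)) / beta

class Box:
    """An axis-parallel box: one (min, max) interval per dimension."""
    def __init__(self, lo, hi):
        self.lo = np.asarray(lo, dtype=float)
        self.hi = np.asarray(hi, dtype=float)

    def intersect(self, other):
        # The intersection of two axis-parallel boxes is again a box.
        return Box(np.maximum(self.lo, other.lo),
                   np.minimum(self.hi, other.hi))

    def volume(self):
        # Soft volume: product of softened side lengths per dimension.
        return float(np.prod(softplus(self.hi - self.lo)))

# Containment score P("cars" | "red cars") ~ Vol(A ∩ B) / Vol(A)
red_cars = Box([0.1, 0.2], [0.5, 0.6])
cars     = Box([0.0, 0.1], [0.9, 0.8])
print(red_cars.intersect(cars).volume() / red_cars.volume())  # 1.0: inside
```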
Query embedding (QE), which aims to embed entities and first-order logic (FOL) queries in a low-dimensional space, has shown great power in multi-hop reasoning over knowledge graphs. Recently, embedding entities and queries with geometric shapes has become a promising direction, as geometric shapes can naturally represent answer sets and the logical relationships among them. However, existing geometry-based models have difficulty modeling queries with negation, which significantly limits their applicability. To address this challenge, we propose a novel query embedding model, namely Cone Embeddings (ConE), the first geometry-based QE model that can handle all the FOL operations, including conjunction, disjunction, and negation. Specifically, ConE represents entities and queries as Cartesian products of two-dimensional cones, where the intersection and union of cones naturally model the conjunction and disjunction operations. By further noticing that the closure of the complement of a cone remains a cone, we design geometric complement operators in the embedding space for the negation operations. Experiments demonstrate that ConE significantly outperforms existing state-of-the-art methods on benchmark datasets.
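A back-of-the-envelope sketch of why negation stays geometric for cones, assuming a simple sector-cone parameterization (axis angle plus aperture per dimension); this mirrors the idea in the abstract but is not the paper's implementation:

```python
import math

# A "sector cone" in 2D: all directions within aperture/2 of the axis angle.
# ConE-style embeddings use a Cartesian product of many such cones.

def complement(axis, aperture):
    """Geometric negation: the closure of a sector cone's complement is
    again a sector cone, with the opposite axis and the remaining angle."""
    return ((axis + math.pi) % (2 * math.pi), 2 * math.pi - aperture)

def contains(axis, aperture, theta):
    # Angular distance from the axis, wrapped into [0, pi].
    d = abs((theta - axis + math.pi) % (2 * math.pi) - math.pi)
    return d <= aperture / 2

axis, ap = 0.0, math.pi / 2                 # a 90-degree cone around angle 0
print(contains(axis, ap, 0.1))              # True: inside the cone
neg_axis, neg_ap = complement(axis, ap)
print(contains(neg_axis, neg_ap, 0.1))      # False: excluded by the negation
```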
Answering complex first-order logic (FOL) queries over large-scale incomplete knowledge graphs (KGs) is an important yet challenging task. Recent advances embed logical queries and KG entities in the same space and conduct query answering via dense similarity search. However, most logical operators designed in previous studies do not satisfy the axiomatic system of classical logic, limiting their performance. Moreover, these logical operators are parameterized and thus require many complex queries as training data, which are often arduous to collect or even inaccessible in most real-world KGs. We therefore propose FuzzQE, a fuzzy-logic-based logical query embedding framework for answering queries over KGs. FuzzQE follows fuzzy logic to define logical operators in a principled and learning-free manner, where only entity and relation embeddings require learning. FuzzQE can further benefit from labeled complex logical queries for training. Extensive experiments on two benchmark datasets demonstrate that FuzzQE provides significantly better performance in answering FOL queries compared with state-of-the-art methods. In addition, FuzzQE trained with only KG link prediction achieves performance comparable to models trained with extra complex query data.
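To make "learning-free logical operators" concrete, here is a minimal sketch using product fuzzy logic over $[0,1]$-valued vectors; the operator names and the choice of t-norm are illustrative assumptions, not necessarily the paper's exact configuration:

```python
import numpy as np

# Learning-free fuzzy operators: chosen to satisfy classical-logic axioms
# (commutativity, associativity, De Morgan laws), with no trained parameters.

def f_and(x, y):   # t-norm: conjunction
    return x * y

def f_or(x, y):    # t-conorm: disjunction, dual of the t-norm
    return x + y - x * y

def f_not(x):      # strong negation
    return 1.0 - x

x = np.array([0.9, 0.2, 0.7])
y = np.array([0.8, 0.5, 0.1])

# De Morgan's law holds exactly: NOT(x AND y) == (NOT x) OR (NOT y)
print(np.allclose(f_not(f_and(x, y)), f_or(f_not(x), f_not(y))))  # True
```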
In the decade since 2010, successes in artificial intelligence have been at the forefront of computer science and technology, and vector space models have solidified a position at the forefront of artificial intelligence. At the same time, quantum computers have become much more powerful, and announcements of major advances are frequently in the news. The mathematical techniques underlying both of these areas have more in common than is sometimes realized. Vector spaces took a position at the axiomatic heart of quantum mechanics in the 1930s, and this adoption was a key motivation for deriving logic and probability from the linear geometry of vector spaces. Quantum interactions between particles are modelled using the tensor product, which is also used to express objects and operations in artificial neural networks. This paper describes some of these common mathematical areas, including examples of how they are used in artificial intelligence (AI), particularly in automated reasoning and natural language processing (NLP). Techniques discussed include vector spaces, scalar products, subspaces and implication, orthogonal projection and negation, dual vectors, density matrices, positive operators, and tensor products. Application areas include information retrieval, categorization and implication, modelling word senses and disambiguation, inference in knowledge bases, and semantic composition. Some of these approaches could potentially be implemented on quantum hardware. Many of the practical steps in such an implementation are at an early stage, and some are already realized. Explaining some of the common mathematical tools can help researchers in both AI and quantum computing to further exploit these overlaps, recognizing and exploring new directions along the way.
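As one concrete instance of the "orthogonal projection and negation" technique listed above, the sketch below removes an unwanted sense from a word vector by projecting onto the orthogonal complement; the vectors are random stand-ins and the `negate` helper is illustrative:

```python
import numpy as np

def project(a, b):
    """Orthogonal projection of vector a onto the line spanned by b."""
    return (a @ b) / (b @ b) * b

def negate(a, b):
    """Quantum-style negation 'a NOT b': the component of a lying in
    the subspace orthogonal to b, i.e., a with b's direction removed."""
    return a - project(a, b)

rng = np.random.default_rng(0)
suit, money = rng.normal(size=50), rng.normal(size=50)  # stand-in word vectors
clothing_sense = negate(suit, money)                    # "suit NOT money"
print(abs(clothing_sense @ money) < 1e-10)              # True: orthogonal to 'money'
```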
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is generally achieved by leveraging massive amounts of training data, and rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics for modern natural language processing. In this thesis, I present several case studies to illustrate how theoretical linguistics and neural language models remain relevant to each other. First, language models are useful to linguists by providing an objective tool for measuring semantic distance, which is difficult to do with traditional methods. Conversely, linguistic theory contributes to language modelling research by providing frameworks and sources of data for probing language models on specific aspects of language understanding. This thesis contributes three studies exploring different aspects of the syntax-semantics interface in language models. In the first part, I apply language models to the problem of word class flexibility; using mBERT as a source of semantic distance measurements, I present evidence in favour of analyzing word class flexibility as a directional process. In the second part, I propose a method to measure surprisal at intermediate layers of language models; my experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in language models than semantic and commonsense anomalies do. Finally, in the third part, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, this thesis develops new connections between natural language processing, linguistic theory, and psycholinguistics to offer fresh perspectives on the interpretation of language models.
Extracting knowledge from unlabeled text with machine learning algorithms can be complex. Document categorization and information retrieval are two applications that may benefit from unsupervised learning (e.g., text clustering and topic modeling), including exploratory data analysis. However, the unsupervised learning paradigm poses reproducibility issues: initialization can lead to variability depending on the machine learning algorithm, and distortions of cluster geometry can be misleading. Among the causes, the presence of outliers and anomalies can be a determining factor. Although initialization and outlier issues are relevant to text clustering and topic modeling, the authors did not find an in-depth analysis of them. This survey provides a systematic literature review (2011-2022) of these subareas and proposes a common terminology, since similar procedures go by different names. The authors describe research opportunities, trends, and open issues. The appendices summarize the theoretical background of the text vectorization, factorization, and clustering algorithms that are directly or indirectly related to the reviewed works.
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embeddings models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short in intrinsic issues of linguistics, such as polysemy and homonymy. Any expert system that makes use of natural language in its core, can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), that disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that our approach can produce state-of-the-art results and outperforms several more complex state-of-the-art systems.
Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word co-occurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.
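For reference, the global log-bilinear regression objective this abstract summarizes is a weighted least-squares fit to log co-occurrence counts:

$$ J = \sum_{i,j=1}^{V} f(X_{ij})\,\bigl(w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij}\bigr)^2 $$

where $X_{ij}$ counts how often word $j$ occurs in the context of word $i$, $w_i$ and $\tilde{w}_j$ are word and context vectors with biases $b_i$ and $\tilde{b}_j$, and the weighting function $f$ vanishes at $X_{ij}=0$, which is why only the nonzero entries of the co-occurrence matrix ever enter training.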
Many ontologies, i.e., Description Logic (DL) knowledge bases, have been developed to provide rich knowledge about various domains. An ontology consists of an ABox, i.e., assertion axioms between two entities or between a concept and an entity, and a TBox, i.e., terminology axioms between two concepts. Neural logical reasoning (NLR) is a fundamental task for exploring such knowledge bases: it aims to answer multi-hop queries with logical operations based on distributed representations of queries and answers. Although previous NLR methods can give specific entity-level answers, i.e., ABox answers, they cannot provide descriptive concept-level answers, i.e., TBox answers, where each concept is a description of a set of entities. In other words, previous NLR methods reason only over the ABox of an ontology while ignoring the TBox. In particular, providing TBox answers enables inferring an explanation in descriptive concepts for each query, which makes answers comprehensible to users and is of great usefulness in the field of applied ontologies. In this work, we formulate the problem of neural logical reasoning over both the TBox and ABox (TA-NLR), which requires addressing the challenges of incorporating, representing, and operating on concepts. We propose an original solution for TA-NLR named TAR. First, we incorporate description-logic-based ontological axioms to provide the source of concepts. We then represent concepts and queries as fuzzy sets, i.e., sets whose elements have degrees of membership, to bridge concepts and queries with entities. Furthermore, on top of the fuzzy-set representations of concepts and queries, we design operators involving concepts for optimization and inference. Extensive experimental results on two real-world datasets demonstrate the effectiveness of TAR for TA-NLR.
Current best-performing models for knowledge graph reasoning (KGR) introduce geometric objects or probability distributions to embed entities and first-order logic (FOL) queries into low-dimensional vector spaces. They can be summarized as a center-size framework (point/box/cone, Beta/Gaussian distribution, etc.). However, they have limited logical reasoning ability, and it is hard to generalize them to various features, because the center and size are constrained one-to-one and cannot support multiple centers or sizes. To address these challenges, we instead propose a novel KGR framework named the Feature-Logic Embedding framework, FLEX, which is the first KGR framework that can not only truly handle all FOL operations, including conjunction, disjunction, negation, and so on, but also support various feature spaces. Specifically, the logic part of the feature-logic framework is based on vector logic, which naturally models all FOL operations. Experiments show that FLEX significantly outperforms existing state-of-the-art methods on benchmark datasets.
Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated with each character n-gram; words are represented as the sum of these representations. Our method is fast, allowing models to be trained on large corpora quickly, and it allows us to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, on both word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.
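A small sketch of the subword scheme described above: each word is wrapped in boundary markers, decomposed into character n-grams (3 to 6 characters by default), and represented as the sum of the n-gram vectors, so out-of-vocabulary words still receive a representation. The function names and the vector-table interface are illustrative, not the library's API:

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams with boundary markers: '<where>' yields
    '<wh', 'whe', 'her', 'ere', 're>', ... plus the full '<where>'."""
    marked = f"<{word}>"
    grams = [marked[i:i + n]
             for n in range(n_min, n_max + 1)
             for i in range(len(marked) - n + 1)]
    return grams + [marked]  # the whole word is kept as its own unit

def word_vector(word, ngram_vecs, dim):
    """Sum of the vectors of the word's n-grams; unseen words are
    composed from whatever n-grams already exist in the table."""
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        if g in ngram_vecs:
            vec += ngram_vecs[g]
    return vec

print(char_ngrams("where")[:5])  # ['<wh', 'whe', 'her', 'ere', 're>']
```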
Discrete and continuous representations of content (e.g., of language or images) have interesting properties to be explored for machines to understand or reason about this content. This position paper puts forward our opinion on the roles of discrete and continuous representations and their processing in the deep learning field. Current neural network models compute continuous-valued data: information is compressed into dense, distributed embeddings. In stark contrast, humans use discrete symbols in their language. Such symbols represent a compressed version of the world that derives its meaning from shared contextual information. Moreover, human reasoning involves symbol manipulation at the cognitive level, which facilitates abstract reasoning, the composition of knowledge and understanding, generalization, and efficient learning. Motivated by these insights, we argue in this paper that combining discrete and continuous representations and their processing will be essential to building systems that exhibit a general form of intelligence. We suggest and discuss several avenues by which current neural networks could be improved through the inclusion of discrete elements, so as to combine the advantages of both types of representations.
We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.
In this thesis, we attempt to build a connection between the two schools by introducing syntactic inductive biases for deep learning models. We propose two families of inductive biases, one for constituency structure and another for dependency structure. The constituency inductive bias encourages deep learning models to use different units (or neurons) to separately process long-term and short-term information. This separation provides a way for deep learning models to build latent hierarchical representations from sequential inputs, in which a higher-level representation is composed of, and can be decomposed into, a series of lower-level representations. For example, without knowing the ground-truth structure, our proposed model learns to process logical expressions by composing representations of variables and operators according to their syntactic structure. The dependency inductive bias, on the other hand, encourages models to find latent relations between entities in the input sequence. For natural language, latent relations are usually modeled as a directed dependency graph, where a word has exactly one parent node and zero or more child nodes. After applying this constraint to a Transformer-like model, we find that the model is able to induce directed graphs that are close to human expert annotations, and it also outperforms the standard Transformer model on different tasks. We believe these experimental results demonstrate an interesting alternative for the future development of deep learning models.
Machine Learning for Source Code (ML4Code) is an active research field in which extensive experimentation is needed to discover how to best use source code's richly structured information. With this in mind, we introduce JEMMA, an Extensible Java Dataset for ML4Code Applications, which is a large-scale, diverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is to lower the barrier to entry in ML4Code by providing the building blocks to experiment with source code models and tasks. JEMMA comes with a considerable amount of pre-processed information such as metadata, representations (e.g., code tokens, ASTs, graphs), and several properties (e.g., metrics, static analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2 million classes and over 8 million methods. JEMMA is also extensible allowing users to add new properties and representations to the dataset, and evaluate tasks on them. Thus, JEMMA becomes a workbench that researchers can use to experiment with novel representations and tasks operating on source code. To demonstrate the utility of the dataset, we also report results from two empirical studies on our data, ultimately showing that significant work lies ahead in the design of context-aware source code models that can reason over a broader network of source code entities in a software project, the very task that JEMMA is designed to help with.
The ability to compare the semantic similarity between text corpora is important in a variety of natural language processing applications. However, standard methods for evaluating these metrics have yet to be established. We propose a set of automatic and interpretable measures for assessing the characteristics of corpus-level semantic similarity metrics, allowing sensible comparison of their behavior. We demonstrate the effectiveness of our evaluation measures in capturing fundamental characteristics by evaluating them on a collection of classical and state-of-the-art metrics. Our measures revealed that recently-developed metrics are becoming better in identifying semantic distributional mismatch while classical metrics are more sensitive to perturbations in the surface text levels.
Recently, increasing efforts have been put into learning continuous representations for symbolic knowledge bases (KBs). However, these approaches either only embed data-level knowledge (the ABox) or suffer from inherent limitations when dealing with concept-level knowledge (the TBox), i.e., they cannot faithfully model the logical structure present in the KBs. We present BoxEL, a geometric KB embedding approach that better captures the logical structure (i.e., ABox and TBox axioms) of the description logic EL++. BoxEL models concepts in a KB as axis-parallel boxes, which are suitable for modeling concept intersection, entities as points inside boxes, and relations between concepts/entities as affine transformations. We show a theoretical guarantee (soundness) of BoxEL for preserving logical structure: namely, a BoxEL embedding with loss 0 is a (logical) model of the KB. Experimental results on (plausible) subsumption reasoning and on a real-world application of protein-protein prediction show that BoxEL outperforms traditional knowledge graph embedding methods as well as state-of-the-art EL++ embedding approaches.
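A minimal sketch of the geometric semantics described above, assuming positive per-dimension scaling so the affine image of a box is again axis-parallel; the class and function names are illustrative rather than BoxEL's actual API:

```python
import numpy as np

class Box:
    """Axis-parallel box; a concept denotes the set of points inside it."""
    def __init__(self, lo, hi):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)

    def contains_point(self, p):
        # Entities are embedded as points; membership is box containment.
        return bool(np.all(self.lo <= p) and np.all(p <= self.hi))

    def inside(self, other):
        # Subsumption C ⊑ D holds in the model iff box(C) ⊆ box(D).
        return bool(np.all(other.lo <= self.lo) and np.all(self.hi <= other.hi))

def apply_relation(box, scale, shift):
    """A relation as an affine map t(x) = scale * x + shift applied
    pointwise to a box (scale assumed positive elementwise)."""
    scale, shift = np.asarray(scale, float), np.asarray(shift, float)
    return Box(scale * box.lo + shift, scale * box.hi + shift)

parent = Box([0, 0], [4, 4])
child  = Box([1, 1], [2, 3])
print(child.inside(parent))                                  # True
print(apply_relation(child, [2, 2], [1, 0]).inside(parent))  # False after mapping
```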
For natural language understanding (NLU) technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks. By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks. GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models. We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task. However, the low absolute performance of our best model indicates the need for improved general NLU systems.
This survey draws a broad, panoramic picture of the state of the art (SoTA) in generative methods for analyzing social media data. It fills a gap, as existing survey articles are either much narrower in scope or dated. We include two aspects that are currently gaining importance in mining and modeling social media: dynamics and networks. Social dynamics are important for understanding the spread of influence or disease, the formation of friendships, and so on, while modeling networks can capture various complex relationships, providing additional insight and identifying important patterns that would otherwise go unnoticed.