We study algorithms for approximating pairwise similarity matrices that arise in natural language processing. Generally, computing the similarity matrix for $n$ data points requires $\Omega(n^2)$ similarity computations. This quadratic scaling is a significant bottleneck, especially when similarities are computed via expensive functions, e.g., via transformer models. Approximation methods reduce this quadratic complexity by using a small subset of exactly computed similarities to approximate the remainder of the complete pairwise similarity matrix. A significant body of work focuses on the efficient approximation of positive semidefinite (PSD) similarity matrices, which arise, e.g., in kernel methods. However, much less is understood about indefinite (non-PSD) similarity matrices, which often arise in NLP. Motivated by the observation that many of these matrices are still somewhat close to PSD, we introduce a generalization of the popular Nyström method to the indefinite setting. Our algorithm can be applied to any similarity matrix and runs in time sublinear in the size of the matrix, producing a rank-$s$ approximation with just $O(ns)$ similarity computations. We show that our method, along with a simple variant of CUR decomposition, performs remarkably well in approximating a variety of similarity matrices arising in NLP tasks. We demonstrate high accuracy of the approximated similarity matrices in the downstream tasks of document classification, sentence similarity, and cross-document coreference.
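A minimal Nyström-style sketch of the idea described above, not the authors' exact algorithm: sample s landmark points, compute only the n*s similarities they require, and use a pseudoinverse of the landmark core so the construction does not assume the matrix is PSD. The `sim` callable and the uniform landmark sampling are illustrative assumptions.

```python
import numpy as np

def nystrom_approx(items, sim, s, rng=np.random.default_rng(0)):
    n = len(items)
    landmarks = rng.choice(n, size=s, replace=False)
    # C holds similarities between all items and the s landmarks: O(ns) similarity calls.
    C = np.array([[sim(items[i], items[j]) for j in landmarks] for i in range(n)])
    W = C[landmarks, :]                 # s x s core of landmark-landmark similarities
    return C @ np.linalg.pinv(W) @ C.T  # rank-(at most s) approximation of the full matrix
```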
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
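A compact sketch of the two-stage scheme described above (random sampling to capture the range of the matrix, followed by a deterministic factorization of the small compressed matrix). The oversampling amount and the Gaussian test matrix are illustrative assumptions, not the paper's recommended settings.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, rng=np.random.default_rng(0)):
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))    # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                      # orthonormal basis for the sampled range
    B = Q.T @ A                                         # compress A to the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)   # deterministic factorization of the small matrix
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]            # approximate rank-k SVD of A
```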
Matrix approximations are a key element in large-scale algebraic machine learning approaches. The recently proposed method MEKA (Si et al., 2014) effectively exploits two common assumptions in Hilbert spaces: the low-rank property of the inner-product matrix obtained from a shift-invariant kernel function, and a data-compactness assumption in the form of an inherent block-cluster structure. In this work we extend MEKA so that it applies not only to shift-invariant kernels but also to non-stationary kernels such as polynomial kernels and the extreme learning kernel. We also detail how to handle non-positive-semidefinite kernel functions within MEKA, whether caused by the approximation itself or by the deliberate use of general kernel functions. We present a Lanczos-based estimation of the spectrum shift needed to obtain a stable positive semidefinite MEKA approximation, which can also be used in classical convex optimization frameworks. Furthermore, we support our findings with theoretical considerations and experiments on a variety of synthetic and real-world data.
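A hedged sketch of the spectrum-shift idea mentioned above: estimate the most negative eigenvalue of an indefinite (symmetric) kernel matrix with a Lanczos-type solver and shift the diagonal so the result becomes positive semidefinite. This is a generic illustration, not the MEKA-specific construction; the tolerance `eps` is an assumption.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def psd_shift(K, eps=1e-10):
    # Smallest algebraic eigenvalue via Lanczos iteration (scipy's eigsh).
    lam_min = eigsh(K, k=1, which='SA', return_eigenvectors=False)[0]
    shift = max(0.0, -lam_min) + eps
    return K + shift * np.eye(K.shape[0])   # K + shift*I is (numerically) PSD
```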
This paper narrows the gap between the previous literature on quantum linear algebra and practical data analysis on a quantum computer, formalizing quantum subroutines that speed up the solution of eigenproblems for data representations used in machine learning. The power and practical use of these subroutines is demonstrated through new quantum algorithms, sublinear in the size of the input matrix, for principal component analysis, correspondence analysis, and latent semantic analysis. We provide a theoretical analysis of the run-time and prove tight bounds on the error of the randomized algorithms. We run experiments on multiple datasets, simulating PCA's dimensionality reduction for image classification with the novel routines. The results show that the run-time parameters that do not depend on the input size are reasonable and that the error on the computational model is small, allowing for competitive classification performance.
Extracting knowledge from unlabeled text with machine learning algorithms can be complex. Document classification and information retrieval are two applications that may benefit from unsupervised learning (e.g., text clustering and topic modeling), including exploratory data analysis. However, the unsupervised learning paradigm raises reproducibility issues: initialization can lead to variability, depending on the machine learning algorithm, and distortions can be misleading with respect to cluster geometry. Among the causes, the presence of outliers and anomalies can be a determining factor. Although initialization and outlier issues are relevant to text clustering and topic modeling, the authors did not find an in-depth analysis of them. This survey provides a systematic literature review (2011-2022) of these subareas and proposes a common terminology, since similar procedures go by different terms. The authors describe research opportunities, trends, and open issues. The appendices summarize the theoretical background of the text vectorization, factorization, and clustering algorithms that are directly or indirectly related to the reviewed works.
The need to efficiently compare and represent datasets with unknown alignment spans various fields, from model analysis and comparison in machine learning to trend discovery in collections of medical datasets. We use manifold learning to compare the intrinsic geometric structures of different datasets by comparing their diffusion operators: symmetric positive definite (SPD) matrices that relate to approximations of the continuous Laplace-Beltrami operator from discrete samples. Existing methods typically assume known data alignment and compare such operators in a pointwise manner. Instead, we exploit the Riemannian geometry of SPD matrices to compare these operators and define a new theoretically motivated distance based on a lower bound of the log-Euclidean metric. Our framework facilitates the comparison of data manifolds expressed in datasets with different sizes, numbers of features, and measurement modalities. Our log-Euclidean signature (LES) distance recovers meaningful structural differences, outperforming competing methods in various application domains.
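An illustrative sketch (not the paper's exact LES construction): the log-Euclidean metric between two SPD operators, together with a spectrum-only quantity that lower-bounds it and requires no correspondence between the samples of the two datasets.

```python
import numpy as np
from scipy.linalg import logm, eigvalsh

def log_euclidean(A, B):
    # Requires aligned operators of the same size.
    return np.linalg.norm(logm(A) - logm(B), 'fro')

def spectral_lower_bound(A, B):
    # Compares only the sorted log-spectra; a lower bound on the log-Euclidean metric.
    la, lb = np.sort(np.log(eigvalsh(A))), np.sort(np.log(eigvalsh(B)))
    return np.linalg.norm(la - lb)
```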
Selecting diverse and important items, called landmarks, from a large set is a problem of interest in machine learning. As a specific example, in order to deal with large training sets, kernel methods often rely on low-rank matrix Nyström approximations based on the selection or sampling of landmarks. In this context, we propose deterministic and randomized adaptive algorithms for selecting landmark points within a training dataset. These landmarks are related to the minima of a sequence of kernelized Christoffel functions. Beyond the known connection between Christoffel functions and leverage scores, a connection of our method with finite determinantal point processes (DPPs) is also explained: our construction promotes diversity among important landmark points in a manner similar to DPPs. Furthermore, we explain how our randomized adaptive algorithm affects the accuracy of kernel ridge regression.
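A hedged illustration of the leverage-score connection mentioned above, not the paper's Christoffel-function algorithm: sample landmarks with probability proportional to the ridge leverage scores of the kernel matrix K. The regularization `lam` and the sampling-without-replacement choice are assumptions made for this sketch.

```python
import numpy as np

def ridge_leverage_landmarks(K, s, lam=1e-2, rng=np.random.default_rng(0)):
    n = K.shape[0]
    scores = np.diag(K @ np.linalg.inv(K + n * lam * np.eye(n)))  # ridge leverage scores
    p = scores / scores.sum()
    return rng.choice(n, size=s, replace=False, p=p)              # indices of selected landmarks
```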
We present the Word Mover's Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover's Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates.
[Figure: word2vec embeddings of the words in the two example documents "Obama speaks to the media in Illinois" and "The President greets the press in Chicago".]
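A minimal sketch of the WMD as an optimal-transport linear program, assuming precomputed word embeddings and L1-normalized word frequencies for each document; it uses a generic LP solver for clarity rather than the specialized EMD solvers mentioned in the abstract.

```python
import numpy as np
from scipy.optimize import linprog

def wmd(emb1, d1, emb2, d2):
    """emb1: (n, dim) embeddings of doc-1 words, d1: (n,) normalized word frequencies.
       emb2, d2: the same for document 2. Returns the minimum cumulative travel cost."""
    n, m = len(d1), len(d2)
    # Pairwise Euclidean travel costs between embedded words of the two documents.
    C = np.linalg.norm(emb1[:, None, :] - emb2[None, :, :], axis=-1)
    # Transport plan T (n*m variables): minimize <C, T> s.t. row sums = d1, column sums = d2.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0    # outgoing mass of word i in document 1
    for j in range(m):
        A_eq[n + j, j::m] = 1.0             # incoming mass of word j in document 2
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([d1, d2]), bounds=(0, None))
    return res.fun
```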
Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts to train and deploy Transformers more efficiently have identified many strategies for approximating the self-attention matrix, a key module in the Transformer architecture. Effective ideas include various pre-specified sparsity patterns, low-rank basis expansions, and combinations thereof. In this paper, we revisit classical multiresolution analysis (MRA) concepts such as wavelets, whose potential value in this setting has so far remained underexplored. We show that simple approximations based on empirical feedback, together with design choices informed by modern hardware and implementation challenges, eventually yield an MRA-based approach to self-attention with excellent performance across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most state-of-the-art self-attention proposals, performing favorably for both short and long sequences. Code is available at \url{https://github.com/mlpen/mra-attention}.
Transformer-based models are inefficient at processing long sequences due to the quadratic space and time complexity of the self-attention modules. To address this limitation, two models have been proposed that reduce the quadratic complexity to linear (modulo logarithmic factors), via low-dimensional projection and row selection respectively. These two models are intrinsically connected, and to understand their connection we introduce a theoretical framework of matrix sketching. Based on the theoretical analysis, we propose Skeinformer to accelerate self-attention and further improve the accuracy of approximating self-attention, with three carefully designed components: column sampling, adaptive row normalization, and pilot sampling reutilization. Experiments on the Long Range Arena (LRA) benchmark demonstrate that our methods outperform alternatives with a consistently smaller time/space footprint.
The Word Mover's Distance (WMD) is a fundamental technique for measuring the similarity of two documents. The crux of WMD is that it can exploit the underlying geometry of the word space by adopting an optimal transport formulation. The original study on WMD reported that WMD outperforms classical baselines such as bag-of-words (BOW) and TF-IDF by significant margins across various datasets. In this paper, we point out that the evaluation in the original study could be misleading. We re-evaluate the performance of WMD and the classical baselines and find that the classical baselines are competitive with WMD if we employ appropriate preprocessing, i.e., L1 normalization. In addition, we introduce an analogy between WMD and L1-normalized BOW, and find that not only the performance of WMD but also its distance values resemble those of BOW in high-dimensional spaces.
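A small sketch of the preprocessing highlighted above, assuming documents are already tokenized: L1-normalize bag-of-words counts so each document becomes a probability distribution, then compare documents with the L1 distance as a classical baseline. The function names are illustrative.

```python
import numpy as np
from collections import Counter

def l1_normalized_bow(docs):
    vocab = sorted({w for doc in docs for w in doc})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(docs), len(vocab)))
    for r, doc in enumerate(docs):
        for w, c in Counter(doc).items():
            X[r, index[w]] = c
    return X / X.sum(axis=1, keepdims=True)   # each row sums to one (L1 normalization)

def l1_distance(x, y):
    return np.abs(x - y).sum()
```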
Dimensionality reduction methods are unsupervised approaches that learn low-dimensional spaces in which some properties of the initial space, typically the notion of "neighborhood", are preserved. Such methods usually require propagation on large k-NN graphs or complicated optimization solvers. Self-supervised learning approaches, on the other hand, typically used to learn representations from scratch, rely on simple and more scalable frameworks for learning. In this paper we propose TLDR, a dimensionality reduction method for generic input spaces that ports the recent self-supervised learning framework of Zbontar et al. (2021) to the specific task of dimensionality reduction, over arbitrary representations. We propose to use nearest neighbors to build pairs from a training set, and a redundancy reduction loss to learn an encoder that produces representations invariant across such pairs. TLDR is a method that is simple, easy to train, and broadly applicable; it consists of an offline nearest-neighbor computation step that can be highly approximated, and a straightforward learning process. Aiming for scalability, we focus on improving linear dimensionality reduction, and show consistent gains on image and document retrieval tasks, e.g. gaining +4% mAP over PCA on ROxford for GeM-AP, and improving the performance of DINO on ImageNet or retaining it with a 10x compression.
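A hedged sketch of a Barlow-Twins-style redundancy reduction loss (Zbontar et al., 2021) applied to neighbor pairs, as described above; the `lam` weight, the normalization constant, and the way pairs are fed to the encoder are illustrative assumptions, not TLDR's exact settings.

```python
import torch

def redundancy_reduction_loss(z_a, z_b, lam=5e-3):
    # z_a, z_b: (batch, dim) encoder outputs for two neighboring inputs.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / z_a.shape[0]                              # cross-correlation matrix (dim x dim)
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()                # invariance: diagonal pushed to 1
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()    # redundancy: off-diagonal pushed to 0
    return on_diag + lam * off_diag
```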
We initiate a comprehensive experimental study of objective-based hierarchical clustering methods on massive datasets consisting of deep embedding vectors from computer vision and NLP applications. This includes a large variety of image embeddings (ImageNet, ImageNetV2, NaBirds), word embeddings (Twitter, Wikipedia), and sentence embeddings (SST-2) produced by several popular models (e.g., ResNet, ResNeXt, Inception V3, SBERT). Our study includes datasets with up to $4.5$ million entries and embedding dimensions up to $2048$. To address the challenge of scaling hierarchical clustering to such large datasets, we propose a new practical hierarchical clustering algorithm B++&C. It achieves an average improvement of 5% / 20% for the popular Moseley-Wang (MW) / Cohen-Addad et al. (CKMM) objectives (normalized), compared to a wide range of classical methods and recent heuristics. We also introduce a theoretical algorithm B2SAT&C, which achieves a $0.74$-approximation for the CKMM objective in polynomial time. This is the first substantial improvement over the trivial $2/3$-approximation achieved by a random binary tree. Prior to this work, the best polynomial-time approximation of $\approx 2/3 + 0.0004$ was due to Charikar et al. (SODA'19).
Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.
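A compact sketch of the weighted least-squares objective described above, evaluated only on the nonzero co-occurrence counts. The weighting constants (cutoff 100, exponent 0.75) are the commonly cited choices and should be treated as assumptions of this sketch.

```python
import numpy as np

def glove_loss(W, W_tilde, b, b_tilde, cooc):
    """cooc: dict mapping (i, j) -> X_ij over nonzero word-word co-occurrence counts."""
    total = 0.0
    for (i, j), x in cooc.items():
        weight = min(1.0, (x / 100.0) ** 0.75)                     # f(X_ij): down-weights rare pairs
        err = W[i] @ W_tilde[j] + b[i] + b_tilde[j] - np.log(x)    # log-bilinear fit to log counts
        total += weight * err ** 2
    return total
```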
Graph kernels have attracted a lot of attention during the last decade and have evolved into a rapidly developing branch of learning on structured data. The considerable research activity that has taken place in the field over the past 20 years has led to the development of dozens of graph kernels, each focusing on specific structural properties of graphs. Graph kernels have been applied successfully in a wide range of domains, from social networks to bioinformatics. The goal of this survey is to provide a unifying view of the literature on graph kernels. In particular, we give an overview of a wide range of graph kernels. Moreover, we perform an experimental evaluation of several of these kernels on publicly available datasets and provide a comparative study. Finally, we discuss key applications of graph kernels and outline some challenges that remain to be addressed.
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore, been a subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
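An illustrative sketch of the weighted-vertex-sampling reduction described above. For clarity, the "KDE oracle" is replaced by exact kernel row sums; in the sublinear-time setting these sums would instead be estimated by a kernel density evaluation data structure built in preprocessing.

```python
import numpy as np

def sample_vertices(X, kernel, num_samples, rng=np.random.default_rng(0)):
    # Row sum of the kernel matrix = (unnormalized) kernel density at each point,
    # i.e., the weighted degree of the corresponding vertex in the kernel graph.
    degrees = np.array([sum(kernel(x, y) for y in X) for x in X])
    p = degrees / degrees.sum()
    return rng.choice(len(X), size=num_samples, p=p)   # vertices sampled proportionally to degree
```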
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BIGBIRD, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BIGBIRD is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BIGBIRD drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
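A hedged sketch of a sparse attention mask combining the ingredients mentioned above (a few global tokens, a local sliding window, and random connections); the block sizes and counts are illustrative defaults, not BIGBIRD's exact configuration.

```python
import numpy as np

def sparse_attention_mask(seq_len, num_global=2, window=3, num_random=2,
                          rng=np.random.default_rng(0)):
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    mask[:num_global, :] = True                  # global tokens attend to the entire sequence
    mask[:, :num_global] = True                  # and every token attends to the global tokens
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True                    # local sliding window
        mask[i, rng.choice(seq_len, size=num_random, replace=False)] = True  # random links
    return mask                                  # True entries are the allowed attention pairs
```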
Information retrieval is an important component of natural language processing, used in knowledge-intensive tasks such as question answering and fact checking. Recently, information retrieval has seen the emergence of dense retrievers, based on neural networks, as an alternative to classical sparse methods based on term frequency. These models have obtained state-of-the-art results on datasets and tasks where large training sets are available. However, they do not transfer well to new domains or applications with no training data, and are often outperformed by unsupervised term-frequency methods such as BM25. A natural question is therefore whether it is possible to train dense retrievers without supervision. In this work, we explore the limits of contrastive learning as a way to train unsupervised dense retrievers, and show that it leads to strong retrieval performance. More precisely, we show on the BEIR benchmark of 15 datasets that our model outperforms BM25. Furthermore, when a few thousand examples are available, we show that fine-tuning our models on them leads to strong improvements compared to BM25. Finally, when used as pre-training before fine-tuning on the MS MARCO dataset, our technique obtains state-of-the-art results on the BEIR benchmark.
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
A natural generalization of the Kronecker product is the tensor Kronecker product, which has appeared independently in multiple research communities. Like their matrix counterparts, the tensor generalizations provide structure for implicit multiplication and factorization theorems. We present a theorem that decomposes the principal eigenvector of a tensor Kronecker product, a rare generalization from matrix theory to tensor eigenvectors. The theorem implies that low-rank structure should be present in the iterates of a tensor power method on Kronecker products. We investigate this low-rank structure in the network alignment algorithm TAME, a power-method heuristic. Using the low-rank structure directly, or via a new heuristic embedding approach, we produce new algorithms that are faster while improving or maintaining accuracy, and that scale to problems that cannot be realistically handled with existing techniques.
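A small numerical check of the matrix fact that the tensor theorem above generalizes: the dominant eigenvector of kron(A, B) is the Kronecker product of the dominant eigenvectors of A and B. Shown here for random symmetric matrices; the matrix sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A + A.T
B = rng.standard_normal((3, 3)); B = B + B.T

def dominant_eigvec(M):
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argmax(np.abs(vals))]   # eigenvector of the largest-magnitude eigenvalue

v = dominant_eigvec(np.kron(A, B))
w = np.kron(dominant_eigvec(A), dominant_eigvec(B))
# The two unit vectors agree up to sign.
print(np.isclose(abs(v @ w), 1.0))
```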