In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
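To make the idea of attaching domain formulas to questions concrete, here is a minimal, hypothetical sketch of a formulaic-knowledge lookup. The entry schema, trigger terms, and the `attach_formulas` helper are illustrative assumptions, not the actual KnowSQL/ReGrouP format.

```python
# Hypothetical sketch: a tiny formulaic-knowledge bank keyed by trigger terms.
# The entries and lookup rule are illustrative, not the paper's actual format.
FORMULA_BANK = {
    "net profit margin": "net_profit / revenue",
    "year-over-year growth": "(value_t - value_prev) / value_prev",
}

def attach_formulas(question: str, bank: dict) -> list[str]:
    """Return formulas whose trigger terms appear in the question."""
    return [formula for term, formula in bank.items() if term in question.lower()]

print(attach_formulas("What is the net profit margin of each company?", FORMULA_BANK))
# ['net_profit / revenue']
```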
Text-to-SQL semantic parsing is an important NLP task, which greatly facilitates the interaction between users and the database and becomes the key component in many human-computer interaction systems. Much recent progress in text-to-SQL has been driven by large-scale datasets, but most of them are centered on English. In this work, we present MultiSpider, the largest multilingual text-to-SQL dataset which covers seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese). Upon MultiSpider, we further identify the lexical and structural challenges of text-to-SQL (caused by specific language properties and dialect sayings) and their intensity across different languages. Experimental results under three typical settings (zero-shot, monolingual and multilingual) reveal a 6.1% absolute drop in accuracy in non-English languages. Qualitative and quantitative analyses are conducted to understand the reason for the performance drop of each language. Besides the dataset, we also propose a simple schema augmentation framework SAVe (Schema-Augmentation-with-Verification), which significantly boosts the overall performance by about 1.8% and closes the 29.5% performance gap across languages.
The task of text-to-SQL is to convert a natural language question to its corresponding SQL query in the context of relational tables. Existing text-to-SQL parsers generate a "plausible" SQL query for an arbitrary user question, thereby failing to correctly handle problematic user questions. To formalize this problem, we conduct a preliminary study on the observed ambiguous and unanswerable cases in text-to-SQL and summarize them into 6 feature categories. Correspondingly, we identify the causes behind each category and propose requirements for handling ambiguous and unanswerable questions. Following this study, we propose a simple yet effective counterfactual example generation approach for the automatic generation of ambiguous and unanswerable text-to-SQL examples. Furthermore, we propose a weakly supervised model DTE (Detecting-Then-Explaining) for error detection, localization, and explanation. Experimental results show that our model achieves the best result on both real-world examples and generated examples compared with various baselines. We will release data and code for future research.
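As a rough illustration of counterfactual example generation, the sketch below turns an answerable question into an unanswerable one by swapping a schema-grounded column mention for an out-of-schema term. The term list and replacement rule are assumptions for illustration only, not the paper's exact procedure.

```python
import random

# Illustrative counterfactual transformation: replace a mentioned column with a
# term that does not exist in the schema, yielding an unanswerable question.
def make_unanswerable(question: str, mentioned_column: str,
                      out_of_schema_terms: list[str]) -> str:
    substitute = random.choice(out_of_schema_terms)
    return question.replace(mentioned_column, substitute)

example = "Show the salary of every employee."
print(make_unanswerable(example, "salary", ["bonus history", "commute time"]))
```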
We present Spider, a large-scale, complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 college students. It consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables, covering 138 different domains. We define a new complex and cross-domain semantic parsing and text-to-SQL task where different complex SQL queries and databases appear in the train and test sets. In this way, the task requires the model to generalize well to both new SQL queries and new database schemas. Spider is distinct from most previous semantic parsing tasks because they all use a single database and the exact same programs in the train set and the test set. We experiment with various state-of-the-art models, and the best model achieves only 12.4% exact matching accuracy in the database split setting. This shows that Spider presents a strong challenge for future research. Our dataset and task are publicly available at https://yale-lily.github.io/spider.
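A quick way to get a feel for the dataset is to inspect the released JSON. The sketch below assumes the commonly distributed file name and the "db_id"/"question"/"query" fields; verify against the actual release before relying on it.

```python
import json
from collections import Counter

# Minimal inspection of Spider examples; file name and field names are assumed
# to match the public release (train_spider.json with db_id/question/query).
with open("train_spider.json") as f:
    examples = json.load(f)

per_db = Counter(ex["db_id"] for ex in examples)
print(f"{len(examples)} questions over {len(per_db)} databases")
print(examples[0]["question"], "->", examples[0]["query"])
```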
Learning to capture text-table alignment is essential for tasks such as text-to-SQL. A model needs to correctly recognize natural language references to columns and values and ground them in the given database schema. In this paper, we present a novel weakly supervised Structure-Grounded pretraining framework (STRUG) for text-to-SQL that can effectively learn to capture text-table alignment based on a parallel text-table corpus. We identify a set of novel prediction tasks: column grounding, value grounding, and column-value mapping, and leverage them to pretrain a text-table encoder. Additionally, to evaluate different methods under a more realistic text-table alignment setting, we create a new evaluation set Spider-Realistic based on the Spider dev set with explicit mentions of column names removed, and adopt eight existing text-to-SQL datasets for cross-database evaluation. STRUG brings significant improvements over BERT-large in all settings. Compared with existing pretraining methods such as GraPPa, STRUG achieves similar performance on Spider, and outperforms all baselines on the more realistic sets. The Spider-Realistic dataset is available at https://doi.org/10.5281/zenodo.5205322.
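To make the column-grounding objective concrete, here is a toy sketch of a per-column binary prediction of whether the column is mentioned in the sentence. The dimensions, pooling, and scoring head are assumptions; STRUG's actual encoder and pretraining heads differ.

```python
import torch
import torch.nn as nn

# Toy column-grounding objective: predict for each column whether it is
# mentioned in the utterance. Shapes and the linear head are illustrative.
hidden = 8
column_reprs = torch.randn(5, hidden)          # 5 column vectors from an encoder
labels = torch.tensor([1., 0., 1., 0., 0.])    # 1 = column is mentioned

scorer = nn.Linear(hidden, 1)
logits = scorer(column_reprs).squeeze(-1)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
print(loss.item())
```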
Text-to-SQL parsing is an essential and challenging task. The goal of text-to-SQL parsing is to convert a natural language (NL) question into its corresponding structured query language (SQL) based on the evidence provided by relational databases. Early text-to-SQL parsing systems from the database community achieved noticeable progress at the cost of heavy human engineering and user interaction with the system. In recent years, deep neural networks have significantly advanced this task with neural generation models, which automatically learn a mapping function from the input NL question to the output SQL query. Subsequently, large pre-trained language models have taken the state of the art of text-to-SQL parsing to a new level. In this survey, we present a comprehensive review of deep learning approaches for text-to-SQL parsing. First, we introduce text-to-SQL parsing corpora, which can be categorized as single-turn and multi-turn. Second, we provide a systematic overview of pre-trained language models and existing methods for text-to-SQL parsing. Third, we present the challenges faced by text-to-SQL parsing and explore some potential future directions in this field.
Text-to-SQL has attracted attention from both the natural language processing and database communities because of its ability to convert the semantics in natural language into SQL queries and its practical application in building natural language interfaces to database systems. The major challenges in text-to-SQL lie in encoding the meaning of natural utterances, decoding into SQL queries, and translating the semantics between these two forms. These challenges have been addressed to different extents by recent advances. However, there is still a lack of a comprehensive survey of this task. To this end, we review recent advances in text-to-SQL covering datasets, methods, and evaluation, and provide this systematic survey, which addresses the above challenges and discusses potential future directions. We hope this survey can serve as a quick way to access existing work and motivate future research.
With data-centric decision making becoming the future, seamless access to databases is of paramount importance. There has been extensive research on creating effective text-to-SQL (Text2SQL) models for accessing data in databases. Natural language is one of the best interfaces for bridging the gap between data and results by providing effective database access, especially for non-technical users. It opens the door and generates tremendous interest among users, whether they are well versed in technical skills or less skilled in query languages. Even though many deep-learning-based algorithms have been proposed and studied, it remains very challenging to use natural language to solve data querying problems in real-world scenarios. The reason is the use of different datasets in different studies, each of which brings its own limitations and assumptions. At the same time, we lack a thorough understanding of these proposed models and of their limitations with respect to the specific datasets they are trained on. In this paper, we present a holistic overview of 24 neural network models studied over the last few years, including their architectures involving convolutional neural networks, recurrent neural networks, pointer networks, reinforcement learning, generative models, and more. We also give an overview of the 11 datasets that are widely used to train models for Text2SQL technologies. Finally, we discuss future application possibilities of Text2SQL technologies for seamless data querying.
Current SQL generators based on pre-trained language models struggle to answer complex questions requiring domain context or an understanding of fine-grained table structure. Humans would deal with these unknowns by reasoning over the documentation of the tables. Based on this hypothesis, we propose DocuT5, which uses an off-the-shelf language model architecture and injects knowledge from external "documentation" to improve domain generalization. We perform experiments on the Spider family of datasets, which contain complex questions that are cross-domain and multi-table. Specifically, we develop a new text-to-SQL failure taxonomy and find that 19.6% of errors are due to foreign key mistakes and 49.2% are due to a lack of domain knowledge. DocuT5 captures knowledge from (1) the table structure context of foreign keys and (2) domain knowledge through contextualizing tables and columns. Both types of knowledge improve over state-of-the-art T5 with constrained decoding on Spider, and domain knowledge yields effectiveness comparable to the state of the art on the Spider-DK and Spider-SYN datasets.
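As a rough sketch of what "injecting documentation" can look like at the input level, the snippet below prepends per-table documentation to a serialized schema before it is fed to a seq2seq model. The separators, ordering, and field layout are assumptions, not DocuT5's actual input format.

```python
# Hedged sketch: serialize question + schema + table documentation into one
# seq2seq input string. The format below is an assumption for illustration.
def serialize_input(question: str, tables: dict[str, list[str]],
                    docs: dict[str, str]) -> str:
    schema_parts = []
    for table, columns in tables.items():
        doc = docs.get(table, "")
        schema_parts.append(f"{table} ({doc}): {', '.join(columns)}")
    return f"question: {question} | schema: {' | '.join(schema_parts)}"

tables = {"orders": ["id", "customer_id", "total"]}
docs = {"orders": "one row per purchase; customer_id references customers.id"}
print(serialize_input("How much did each customer spend?", tables, docs))
```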
Knowledge base question answering (KBQA) aims to answer questions over a knowledge base (KB). Early studies mainly focused on answering simple questions over KBs and achieved great success. However, their performance on complex questions is far from satisfactory. Therefore, in recent years researchers have proposed numerous novel methods to address the challenge of answering complex questions. In this survey, we review recent advances in KBQA with a focus on solving complex questions, which usually contain multiple subjects, express compound relations, or involve numerical operations. In detail, we begin by introducing the complex KBQA task and relevant background. We then describe the benchmark datasets for complex KBQA and their construction processes. Next, we present the two mainstream categories of complex KBQA methods, namely semantic parsing-based (SP-based) methods and information retrieval-based (IR-based) methods; specifically, we illustrate their procedures with flow designs and discuss their major differences and similarities. After that, we summarize the challenges these two categories of methods encounter when answering complex questions, and explicate the advanced solutions and techniques used in existing work. Finally, we conclude and discuss several promising directions for future research on complex KBQA.
As the first session-level Chinese text-to-SQL dataset, CHASE contains two separate parts, i.e., 2,003 sessions manually constructed from scratch (CHASE-C) and 3,456 sessions translated from the English SParC (CHASE-T). We find that the two parts are highly discrepant and incompatible as training and evaluation data. In this work, we present SeSQL, another large-scale session-level Chinese text-to-SQL dataset, consisting of 5,028 sessions all manually constructed from scratch. To guarantee data quality, we adopt an iterative annotation workflow that facilitates strict and timely review of previously annotated natural language (NL) questions and SQL queries. Moreover, by completing all context-dependent NL questions, we obtain 27,012 context-independent question/SQL pairs, allowing SeSQL to be used as the largest dataset for single-turn multi-DB text-to-SQL parsing. We conduct benchmark session-level text-to-SQL parsing experiments on SeSQL with three competitive session-level parsers and provide detailed analyses.
The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications. Previous studies along this line primarily focused on perturbations in the natural language question side, neglecting the variability of tables. Motivated by this, we propose the Adversarial Table Perturbation (ATP) as a new attacking paradigm to measure the robustness of Text-to-SQL models. Following this proposition, we curate ADVETA, the first robustness evaluation benchmark featuring natural and realistic ATPs. All tested state-of-the-art models experience dramatic performance drops on ADVETA, revealing models' vulnerability in real-world practices. To defend against ATP, we build a systematic adversarial training example generation framework tailored for better contextualization of tabular data. Experiments show that our approach not only brings the best robustness improvement against table-side perturbations but also substantially empowers models against NL-side perturbations. We release our benchmark and code at: https://github.com/microsoft/ContextualSP.
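To give a feel for table-side perturbations, the snippet below renames columns to synonyms so that exact lexical matching between question and schema breaks. The synonym map and the rename-only scope are simplifying assumptions; ADVETA's perturbations are curated to be natural and realistic rather than generated this way.

```python
# Simplified table-side perturbation: rename columns to synonyms so that
# question-schema lexical overlap disappears. The synonym map is illustrative.
def perturb_columns(columns: list[str], synonyms: dict[str, str]) -> list[str]:
    return [synonyms.get(col, col) for col in columns]

original = ["name", "salary", "department"]
print(perturb_columns(original, {"salary": "compensation", "department": "division"}))
# ['name', 'compensation', 'division']
```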
Recent years have witnessed a resurgence of knowledge engineering, featured by the fast growth of knowledge graphs. However, most existing knowledge graphs are represented with pure symbols, which hurts a machine's capability to understand the real world. The multi-modalization of knowledge graphs is an inevitable key step towards the realization of human-level machine intelligence. The results of this endeavor are Multi-modal Knowledge Graphs (MMKGs). In this survey on MMKGs constructed from texts and images, we first give definitions of MMKGs, followed by preliminaries on multi-modal tasks and techniques. We then systematically review the challenges, progress, and opportunities in the construction and application of MMKGs, respectively, with detailed analyses of the strengths and weaknesses of different solutions. We conclude this survey with open research problems relevant to MMKGs.
Generalizability to new databases is of vital importance for text-to-SQL systems, which aim to parse human utterances into SQL statements. Existing works achieve this goal by leveraging exact-matching methods to identify lexical matches between question words and schema items. However, these methods fail in other challenging scenarios, such as synonym substitution, in which the surface forms differ between the corresponding question words and schema items. In this paper, we propose a framework named ISESL-SQL to iteratively build a semantic-enhanced schema-linking graph between question tokens and database schemas. First, we extract a schema-linking graph from PLMs through a probing procedure in an unsupervised manner. Then, the schema-linking graph is further optimized during training through a deep graph learning method. Meanwhile, we also design an auxiliary task called graph regularization to improve the schema information captured in the schema-linking graph. Extensive experiments on three benchmarks demonstrate that ISESL-SQL consistently outperforms the baselines, and further investigations show its generalizability and robustness.
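The toy sketch below shows the general shape of inducing a schema-linking graph from embedding similarity: an edge is added whenever a question token and a schema item are similar enough. The random embeddings and the fixed threshold are assumptions; ISESL-SQL actually probes a PLM and refines the graph during training.

```python
import numpy as np

# Toy schema-linking graph: connect question tokens to schema items whose
# embedding similarity exceeds a threshold. Embeddings here are random
# placeholders standing in for PLM-derived representations.
rng = np.random.default_rng(0)
question_tokens = ["show", "teacher", "ages"]
schema_items = ["teacher.name", "teacher.age", "course.title"]
q_emb = rng.normal(size=(len(question_tokens), 16))
s_emb = rng.normal(size=(len(schema_items), 16))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

edges = [(q, s) for i, q in enumerate(question_tokens)
                for j, s in enumerate(schema_items)
                if cosine(q_emb[i], s_emb[j]) > 0.3]
print(edges)
```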
Natural language interfaces to databases (NLIDBs), where users pose queries in natural language (NL), are crucial for enabling non-experts to gain insights from data. In contrast, developing such interfaces relies on experts who often code heuristics for mapping NL to SQL. Alternatively, NLIDBs based on machine learning models rely on supervised examples of NL-to-SQL mappings (NL-SQL pairs) used as training data. Such examples are, again, sourced from experts, and this typically involves more than a one-time interaction: each data domain in which an NLIDB is deployed may have different characteristics and therefore requires either dedicated heuristics or domain-specific training examples. To this end, we propose an alternative approach for training machine-learning-based NLIDBs using weak supervision. We use a recently proposed question decomposition representation called QDMR, an intermediate between NL and formal query languages. Recent work has shown that non-experts are generally successful at translating NL into QDMR. We therefore use NL-QDMR pairs, along with the question answers, as supervision for automatically synthesizing SQL queries. The NL questions and synthesized SQL queries are then used to train NL-to-SQL models, which we test on five benchmark datasets. Extensive experiments show that our solution, which requires zero expert annotation, performs competitively with models trained on expert-annotated data.
Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed two shortcomings: illogical synthetic SQL queries from independent column sampling and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models have significant accuracy boosts on popular benchmarks, including new state-of-the-art performance on Spider.
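To illustrate the idea behind schema-distance-weighted column sampling, the sketch below samples columns with probability decaying in their foreign-key-graph distance from an anchor table, which discourages arbitrary joins. The distance values and the 1/(1+d) weighting are illustrative assumptions, not the paper's exact scheme.

```python
import random

# Toy schema-distance-weighted sampling: columns from tables near the anchor
# table (in the foreign-key graph) are chosen more often. Weighting is assumed.
def sample_columns(columns: dict[str, int], k: int = 1) -> list[str]:
    """columns maps column name -> foreign-key-graph distance from the anchor table."""
    names = list(columns)
    weights = [1.0 / (1 + d) for d in columns.values()]
    return random.choices(names, weights=weights, k=k)

distances = {"orders.total": 0, "customers.name": 1, "regions.code": 3}
print(sample_columns(distances, k=2))
```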
The importance of building text-to-SQL parsers that can be applied to new databases has long been acknowledged, and a critical step toward this goal is schema linking, i.e., properly recognizing mentions of unseen columns or tables when generating SQL. In this work, we propose a novel framework to elicit relational structures from large-scale pre-trained language models (PLMs) via a probing procedure based on the Poincaré distance metric, and use the induced relations to augment current graph-based parsers for better schema linking. Compared with commonly used rule-based schema-linking methods, we find that probing relations can robustly capture semantic correspondences even when the surface forms of mentions and entities differ. Moreover, our probing procedure is entirely unsupervised and requires no additional parameters. Extensive experiments show that our framework sets new state-of-the-art performance on three benchmarks. We also empirically verify through qualitative analysis that our probing procedure can indeed find the desired relational structures.
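For reference, the Poincaré distance on the unit ball used by such probing procedures is d(u, v) = arcosh(1 + 2·||u−v||² / ((1−||u||²)(1−||v||²))). The sketch below computes it for placeholder vectors; how mention and schema-item representations are projected into the ball is the framework's contribution and is not reproduced here.

```python
import numpy as np

# Poincare distance on the open unit ball; u and v are placeholder points,
# standing in for projected mention/schema-item representations.
def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    diff = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return float(np.arccosh(1 + 2 * diff / denom))

u = np.array([0.1, 0.2])
v = np.array([0.4, -0.3])
print(poincare_distance(u, v))
```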
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Table-and-text hybrid question answering (HybridQA) is a widely used and challenging NLP task commonly applied in the financial and scientific domains. Early research focused on migrating methods from other QA tasks to HybridQA, while with further research, more and more HybridQA-specific methods have been presented. Despite the rapid development of HybridQA, a systematic survey that summarizes the main techniques and guides further research is still lacking. We therefore present this work to summarize the current HybridQA benchmarks and methods, and then analyze the challenges and future directions of this task. The contributions of this paper are threefold: (1) to the best of our knowledge, the first survey of HybridQA, covering benchmarks, methods, and challenges; (2) a systematic investigation with a reasoned comparison of existing systems that articulates their advantages and shortcomings; (3) a detailed analysis of challenges along four important dimensions to shed light on future directions.
Recent advances in deep learning have greatly propelled research on semantic parsing. Improvements have since been made in many downstream tasks, including natural language interfaces to Web APIs, text-to-SQL generation, and more. However, despite its close connection to these tasks, research on question answering over knowledge bases (KBQA) has progressed relatively slowly. We identify and attribute this to two unique challenges of KBQA: schema-level complexity and fact-level complexity. In this survey, we situate KBQA in the broader literature of semantic parsing and give a comprehensive account of how existing KBQA approaches attempt to address these unique challenges. Regardless of the unique challenges, we argue that we can still take much inspiration from the semantic parsing literature, which has been overlooked by existing KBQA research. Based on our discussion, we can better understand the bottlenecks of current KBQA research and shed light on promising directions for KBQA to keep pace with the semantic parsing literature, particularly in the era of pre-trained language models.