Automated plot generation is the challenge of generating a sequence of events that will be perceived by readers as the plot of a coherent story. Traditional symbolic planners plan a story from a goal state and guarantee logical causal plot coherence but rely on a library of hand-crafted actions with their preconditions and effects. This closed-world setting limits the length and diversity of what symbolic planners can generate. Pre-trained neural language models, on the other hand, can generate stories with great diversity, but they are generally incapable of ending a story in a specified manner and can have trouble maintaining coherence. In this paper, we present an approach to story plot generation that unifies causal planning with neural language models. We propose to use commonsense knowledge extracted from large language models to recursively expand a story plot in a backward-chaining fashion. Specifically, our system infers the preconditions for events in the story and then the events that will cause those conditions to become true. We performed an automatic evaluation that measures narrative coherence by the ability to answer questions about whether different events in the story are causally related to other events. Results indicate that our proposed method produces more coherent plotlines than several strong baselines.
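For readers who want a concrete picture of the backward-chaining idea, the sketch below shows one possible loop: ask a language model for an event's preconditions, then for earlier events that satisfy them, and recurse. The `llm` stub and prompt wording are illustrative assumptions, not the paper's actual prompts or system.

```python
def llm(prompt: str) -> list[str]:
    # Stand-in for a large-language-model call; replace with a real API call.
    return [f"(model output for: {prompt[:40]}...)"]

def infer_preconditions(event: str) -> list[str]:
    return llm(f"List the conditions that must hold before this event: {event}")

def infer_enabling_event(precondition: str) -> str:
    return llm(f"Describe an earlier event that would make this true: {precondition}")[0]

def backward_chain(goal_event: str, depth: int = 2) -> list[str]:
    """Recursively expand the plot backwards from the goal event."""
    if depth == 0:
        return [goal_event]
    plot = []
    for cond in infer_preconditions(goal_event):
        earlier = infer_enabling_event(cond)
        plot.extend(backward_chain(earlier, depth - 1))
    plot.append(goal_event)
    return plot

print(backward_chain("The knight is crowned king."))
```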
Automated storytelling has long captured the attention of researchers because of the ubiquity of narrative in everyday life. However, maintaining coherence and staying on-topic toward a specific ending remain challenges when generating narratives with neural language models. In this paper, we introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress. A reader model is what a human reader believes about the concepts, entities, and relations of the fictional story world. We show how an explicit reader model, represented as a knowledge graph, affords story coherence and provides controllability in the form of achieving a given story-world goal. Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines on dimensions including plot plausibility and staying on topic. Our system also outperforms outline-guided story generation baselines at composing given concepts without requiring them to be ordered.
Neural-language-model-based approaches to automated story generation suffer from two important limitations. First, language-model-based story generators generally do not work toward a given goal or ending. Second, they often lose coherence as the story gets longer. We propose a novel approach to automated story generation that treats the problem as one of generative question-answering. Our proposed story generation system starts with sentences encapsulating the final event of the story. The system then iteratively (1) analyzes the text describing the most recent event, (2) generates a question about "why" a character is doing what they are doing in that event, and then (3) attempts to generate another, preceding event that answers the question.
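A rough sketch of the iterative loop this abstract describes is given below: starting from the final event, repeatedly generate a "why" question about the latest event and then a preceding event that answers it. The `llm` stub and prompts are illustrative assumptions rather than the system's real components.

```python
def llm(prompt: str) -> str:
    # Stand-in for a language-model call; replace with a real API call.
    return "(model output)"

def generate_story_backwards(final_event: str, n_events: int = 4) -> list[str]:
    events = [final_event]
    for _ in range(n_events - 1):
        latest = events[0]
        question = llm(f"Ask a 'why' question about the character's action in: {latest}")
        preceding = llm(f"Write an earlier event that answers this question: {question}")
        events.insert(0, preceding)  # prepend so events stay in narrative order
    return events

print("\n".join(generate_story_backwards("Sally buried the treasure in her backyard.")))
```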
Human reasoning can often be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2"). Neural sequence models, which are increasingly successful at performing complex, structured tasks, exhibit the advantages and failure modes of System 1: they are fast and learn patterns from data, but are often inconsistent and incoherent. In this work, we seek a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning. We explore several variations on this theme in which candidate generations from a neural sequence model are examined by a symbolic reasoning module, which can accept or reject them. Our approach uses neural inference to mediate between the neural System 1 and the logical System 2. Results on robust story generation and grounded instruction following show that this approach can increase the coherence and accuracy of neurally based generations.
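The toy sketch below illustrates the accept/reject pattern described here: a neural "System 1" proposes candidate continuations and a symbolic "System 2" check filters them against tracked world state. The candidate generator and the location-based consistency check are stand-ins, not the paper's actual modules.

```python
def propose_candidates(story: list[str]) -> list[str]:
    # Stand-in for sampling several continuations from a neural sequence model.
    return ["Anna leaves the garden.", "Anna leaves the kitchen."]

def consistent(candidate: str, location: dict) -> bool:
    # Toy symbolic check: a character can only leave the place they are in.
    for name, place in location.items():
        if f"{name} leaves the" in candidate and place not in candidate:
            return False
    return True

story = ["Anna walks into the kitchen."]
location = {"Anna": "kitchen"}  # symbolic world state extracted from the story so far

for cand in propose_candidates(story):
    if consistent(cand, location):  # System 2 accepts or rejects System 1's proposal
        story.append(cand)
        break

print(story)
```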
Storytelling and narrative are fundamental to the human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, such as the need for global coherence in generated stories, still hamper generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, which is referred to as structured-knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides an up-to-date and comprehensive review of this research field: (i) we present a systematic taxonomy of how existing methods integrate structured knowledge into story generation; (ii) we summarize the story corpora, structured knowledge datasets, and evaluation metrics involved; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.
Recent advances in large language models (LLMs) have transformed the field of natural language processing (NLP). From GPT-3 to PaLM, each new large language model pushes the state of the art on natural language tasks. Alongside these natural language abilities, there has been significant interest in understanding whether such models, trained on enormous amounts of data, also exhibit reasoning capabilities. Hence there has been interest in developing benchmarks for various reasoning tasks, and the preliminary results from testing LLMs on such benchmarks appear mostly positive. However, the current benchmarks are relatively simplistic, and performance on them cannot be used as evidence to support claims, often rather extravagant ones, about the reasoning abilities of LLMs. As of now, these benchmarks represent only a very limited set of simple reasoning tasks, and we need to look at more sophisticated reasoning problems if we are to measure the true limits of such LLM-based systems. With this motivation, we propose an extensible assessment framework to test the abilities of LLMs on a central aspect of human intelligence: reasoning about actions and change. We provide multiple test cases that are more involved than any previously established reasoning benchmark, and each test case evaluates a different aspect of reasoning about actions and change. Initial evaluation results on the base version of GPT-3 (Davinci) show subpar performance on these benchmarks.
The advent of large pre-trained generative language models has provided a common framework for AI story generation: sample the model to create sequences that continue the story. However, sampling alone is insufficient for story generation. In particular, it is hard to direct a language model to create stories that reach a specific goal event. We present two automated techniques grounded in deep reinforcement learning and reward shaping to control the plot of computer-generated stories. The first utilizes proximal policy optimization to fine-tune an existing transformer-based language model to generate text continuations that are also goal-seeking. The second extracts a knowledge graph from the unfolding story, which is used by a policy network with graph attention to select a candidate continuation generated by the language model. We report automated metrics on how often stories achieve the given goal event, as well as human participant rankings of coherence and overall story quality compared to baselines and ablations.
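As a loose illustration of the second technique, the sketch below re-ranks language-model continuations by their overlap with entities in a knowledge graph extracted from the story and with entities tied to the goal event. The overlap score is a simple stand-in for the paper's graph-attention policy network, and all names here are hypothetical.

```python
GOAL_ENTITIES = {"dragon", "sword"}  # entities tied to the goal event (toy example)

def extract_entities(text: str) -> set[str]:
    # Stand-in for knowledge-graph extraction over the unfolding story.
    return {w.strip(".,").lower() for w in text.split()}

def score(candidate: str, story_graph: set[str]) -> float:
    ents = extract_entities(candidate)
    # Favor overlap with known story entities and, more strongly, goal-related ones.
    return len(ents & story_graph) + 2.0 * len(ents & GOAL_ENTITIES)

story_graph = extract_entities("The knight rode toward the mountain.")
candidates = ["The knight sharpened his sword.", "The knight ate breakfast."]
best = max(candidates, key=lambda c: score(c, story_graph))
print(best)  # the sword continuation wins because it moves toward the goal
```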
Story generation and understanding -- as with all NLG/NLU tasks -- has seen a surge in neurosymbolic work. Researchers have recognized that, while large language models (LLMs) have tremendous utility, they can be augmented with symbolic means to be even better and to make up for any flaws that the neural networks might have. However, symbolic methods are extremely costly in terms of the amount of time and expertise needed to create them. In this work, we capitalize on state-of-the-art Code-LLMs, such as Codex, to bootstrap the use of symbolic methods for tracking the state of stories and aiding in story understanding. We show that our CoRRPUS system and abstracted prompting procedures can beat current state-of-the-art structured LLM techniques on pre-existing story understanding tasks (bAbI task 2 and Re^3) with minimal hand engineering. We hope that this work can help highlight the importance of symbolic representations and specialized prompting for LLMs as these models require some guidance for performing reasoning tasks properly.
Pre-trained language models (PLMs) struggle to generate long-form narrative text because they do not consider global structure. As a result, the generated texts are often incohesive, repetitive, or lacking in content. Recent work in story generation has reintroduced explicit content planning in the form of prompts, keywords, or semantic frames. Trained on large parallel corpora, these models can generate more logical event sequences and thus more contentful stories. However, these intermediate representations are often not in natural language and cannot be used by PLMs without fine-tuning. We propose generating story plots with off-the-shelf PLMs while retaining the benefit of content planning, in order to produce cohesive and contentful stories. Our proposed method, ScratchPlot, first prompts a PLM to compose a content plan. Then we generate the story's body and ending conditioned on the content plan. Furthermore, we take a generate-and-rank approach, using additional PLMs to rank the generated (story, ending) pairs. We benchmark our method against various baselines and achieve superior results in both human and automatic evaluation.
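The sketch below gives one possible reading of the plan-then-rank pipeline: prompt a PLM for a content plan, condition the body and ending on it, and rank candidate (story, ending) pairs with a separate scorer. The `plm` and `rank_score` stubs and the prompt wording are assumptions for illustration, not ScratchPlot's actual prompts or models.

```python
import random

def plm(prompt: str) -> str:
    # Stand-in for an off-the-shelf pre-trained language model call.
    return f"(generation for: {prompt[:30]}...)"

def rank_score(story: str, ending: str) -> float:
    # Stand-in for a second PLM scoring how well the ending fits the story,
    # e.g. via next-sentence prediction or perplexity.
    return random.random()

plan = plm("Write a short content plan (protagonist, setting, conflict) for a story.")
body = plm(f"Write the body of a story following this plan: {plan}")
candidates = [(body, plm(f"Write ending #{i} for: {body}")) for i in range(3)]
best_body, best_ending = max(candidates, key=lambda pair: rank_score(*pair))
print(best_body, best_ending)
```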
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (with varying degrees of success) across a multitude of tasks and application contexts. However, controlling the output of these models for desired user and task needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations toward advancing the state of the art of constrained neural language generation research.
The prediction of event sequences is crucial for many real-world applications in information retrieval and natural language processing. In event sequence prediction, future event generation (FEG) is a challenging task because it requires not only fluent text generation but also commonsense reasoning to maintain the logical coherence of the entire event story. In this paper, we propose COEP, a novel explainable FEG framework. It highlights and integrates two types of event knowledge: sequential knowledge of direct event-event relations, and inferential knowledge that reflects the intermediate character psychology between events (such as intents, causes, and reactions), which intrinsically pushes the story forward. To alleviate the knowledge-forgetting issue, we design two modules, IM and GM, one for each type of knowledge, which are combined via prompt tuning. First, IM focuses on understanding inferential knowledge to generate commonsense explanations and to provide soft prompt vectors for GM. We also design a contrastive discriminator to improve generalization. Second, GM generates future events by modeling direct sequential knowledge under the guidance of IM. Automatic and human evaluations demonstrate that our approach can generate more coherent, specific, and logical future events.
Pre-trained language models (LMs) have shown remarkable reasoning performance using explanations (or ``chain-of-thought'' (CoT)) for in-context learning. On the other hand, these reasoning tasks are usually presumed to be more approachable for symbolic programming. To make progress towards understanding in-context learning, we curate synthetic datasets containing equivalent (natural, symbolic) data pairs, where symbolic examples contain first-order logic rules and predicates from knowledge bases (KBs). Then we revisit neuro-symbolic approaches and use Language Models as Logic Programmer (LMLP) that learns from demonstrations containing logic rules and corresponding examples to iteratively reason over KBs, recovering Prolog's backward chaining algorithm. Comprehensive experiments are included to systematically compare LMLP with CoT in deductive reasoning settings, showing that LMLP enjoys more than 25% higher accuracy than CoT on length generalization benchmarks even with fewer parameters.
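To make the backward-chaining connection concrete, here is a minimal Prolog-style backward chainer over Horn rules and facts, which is the kind of procedure LMLP recovers; in LMLP the rule and fact selection is performed by prompting a language model over a KB, whereas this sketch is plain Python for illustration.

```python
FACTS = {("parent", "ann", "bob"), ("parent", "bob", "carol")}
CONSTANTS = {c for fact in FACTS for c in fact[1:]}

RULES = [
    # ancestor(X, Y) :- parent(X, Y).
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    # ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
    (("ancestor", "X", "Y"), [("parent", "X", "Z"), ("ancestor", "Z", "Y")]),
]

def substitute(atom, binding):
    return tuple(binding.get(t, t) for t in atom)

def prove(goal, depth=5):
    """Backward chaining: try facts first, then rules whose head predicate matches."""
    if depth == 0:
        return False
    if goal in FACTS:
        return True
    for head, body in RULES:
        if head[0] != goal[0]:
            continue
        binding = dict(zip(head[1:], goal[1:]))          # e.g. X -> "ann", Y -> "carol"
        free = sorted({t for atom in body for t in atom[1:]
                       if t.isupper() and t not in binding})
        # For brevity, handle at most one unbound rule variable (e.g. Z).
        assignments = [{}] if not free else [{free[0]: c} for c in CONSTANTS]
        for extra in assignments:
            full = {**binding, **extra}
            if all(prove(substitute(atom, full), depth - 1) for atom in body):
                return True
    return False

print(prove(("ancestor", "ann", "carol")))               # True, via "bob"
```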
As AI systems become increasingly powerful and pervasive, there are growing concerns about machines' morality, or lack thereof. Yet teaching morality to machines is a formidable task, as morality remains among the most intensely debated questions in humanity, let alone for AI. However, existing AI systems deployed to millions of users are already making decisions loaded with moral implications, which poses a seemingly impossible challenge: teaching machines moral sense while humanity continues to grapple with it. To explore this challenge, we introduce Delphi, an experimental framework based on deep neural networks trained directly to reason about descriptive ethical judgments, e.g., "helping a friend" is generally good, while "helping a friend spread fake news" is not. Empirical results shed new light on the promises and limits of machine ethics. Delphi demonstrates strong generalization in the face of novel moral situations, while off-the-shelf neural network models exhibit markedly poor judgment, including unjust biases, confirming the need to explicitly teach machines moral sense. Yet Delphi is not perfect, exhibiting susceptibility to pervasive biases and inconsistencies. Despite that, we demonstrate positive use cases of an imperfect Delphi, including using it as a component model within other imperfect AI systems. Importantly, we interpret the operationalization of Delphi in light of prominent ethical theories, which leads us to important future research questions.
Language planning aims to implement complex high-level goals by decomposing them into simpler low-level steps. Such procedural reasoning ability is essential for applications such as household robots and virtual assistants. Although language planning is a basic skill for humans in daily life, it remains a challenge for large language models (LLMs) that lack deep commonsense knowledge of the real world. Previous methods require either manual exemplars or annotated programs to acquire such ability from LLMs. In contrast, this paper proposes a Neuro-Symbolic Causal Language Planner (CLAP) that elicits procedural knowledge from LLMs with commonsense-infused prompting. The pre-trained knowledge in LLMs is essentially an unobserved confounder that causes spurious correlations between tasks and action plans. Through the lens of a structural causal model (SCM), we propose an effective strategy to construct prompts as a causal intervention on the SCM. Using graph sampling techniques and symbolic program executors, our strategy formalizes structured causal prompts from commonsense knowledge bases. CLAP obtains state-of-the-art performance on WikiHow and RobotHow, achieving a relative improvement of 5.28% in human evaluations under the counterfactual setting. This indicates the superiority of CLAP for causal language planning, both semantically and sequentially.
The possible consequences of the same context may vary depending on the situation we refer to. However, current research in natural language processing does not focus on commonsense reasoning under multiple possible scenarios. This study frames this task by asking multiple questions with the same set of possible endings as candidate answers, given a short story text. Our resulting dataset, Possible Stories, consists of more than 4.5K questions over 1.3K story texts. We find that even current strong pre-trained language models struggle to answer the questions consistently, highlighting that the best accuracy in an unsupervised setting (60.2%) lags far behind human accuracy (92.5%). Through a comparison with existing datasets, we observe that the questions in our dataset contain minimal annotation artifacts in the answer options. In addition, our dataset includes examples that require counterfactual reasoning, as well as examples that require readers' reactions and fictional information, suggesting that our dataset can serve as a challenging testbed for future research on commonsense reasoning over multiple possible scenarios.
We consider the problem of automatically generating stories in multiple languages. Compared to prior work in monolingual story generation, crosslingual story generation allows for more universal research on story planning. We propose to use Prompting Large Language Models with Plans to study which plan is optimal for story generation. We consider 4 types of plans and systematically analyse how the outputs differ for different planning strategies. The study demonstrates that formulating the plans as question-answer pairs leads to more coherent generated stories while the plan gives more control to the story creators.
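A minimal sketch of the question-answer plan formulation is shown below, assuming the plan is serialized into the prompt as Q/A pairs before the story is generated; the prompt wording and the `llm` stub are illustrative, not the paper's exact setup.

```python
def llm(prompt: str) -> str:
    # Stand-in for a large-language-model call.
    return "(generated story)"

plan = [
    ("Who is the main character?", "A retired lighthouse keeper named Mara."),
    ("What does she want?", "To see the sea turtles return to the bay."),
    ("What stands in her way?", "A new harbor wall blocks the nesting beach."),
    ("How is it resolved?", "She convinces the town to open a gap in the wall."),
]

prompt = "Write a short story following this plan:\n"
prompt += "\n".join(f"Q: {q}\nA: {a}" for q, a in plan)
print(llm(prompt))
```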
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that non-interactive performance does not always result in better human-LM interaction and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
Creating what-if stories requires reasoning about prior statements and the possible outcomes of changed conditions. One can easily generate a coherent ending under a new condition, but it remains challenging for current systems to revise the original story with minimal changes. Therefore, a major challenge is the trade-off between generating a logical story and rewriting with minimal edits. In this paper, we propose EDUCAT, an editing-based unsupervised approach for counterfactual story rewriting. EDUCAT includes a target position detection strategy based on estimating the causal effects of the what-if conditions, which keeps the causally invariant parts of the story. EDUCAT then generates the story under fluency, coherence, and minimal-edits constraints. We also propose a new metric to alleviate the shortcomings of current automatic metrics and better evaluate the trade-off. We evaluate EDUCAT on a public counterfactual story rewriting benchmark. Experiments show that EDUCAT achieves the best trade-off among unsupervised SOTA methods according to both automatic and human evaluation. The resources of EDUCAT are available at: https://github.com/jiangjiechen/educat.
Language models (LMs) often generate incoherent outputs: they refer to events and entity states that are incompatible with the state of the world described in their inputs. We introduce SituationSupervision, a family of approaches for improving coherence in LMs by training them to construct and condition on explicit representations of entities and their states. SituationSupervision has two components: an auxiliary situation modeling task that trains models to predict state representations in context, and a latent state inference procedure that imputes these states from partially annotated training data. SituationSupervision can be applied to both fine-tuning (by supervising LMs to encode state variables in their hidden representations) and prompting (by inducing LMs to interleave textual descriptions of entity states with output text). In both cases, SituationSupervision requires only a small number of state annotations to produce major coherence improvements (between 4-11%), showing that standard LMs can be sample-efficiently trained to model not just language but the situations it describes.
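The sketch below illustrates the prompting variant in spirit: few-shot text that interleaves short entity-state descriptions with story sentences, so the model is induced to produce and condition on explicit state. The bracketed state format and the `llm` stub are assumptions for illustration, not the paper's actual data or markup.

```python
def llm(prompt: str) -> str:
    # Stand-in for a language-model call.
    return "(continuation with interleaved [STATE: ...] annotations)"

demonstration = (
    "Tom put the key in the drawer.\n"
    "[STATE: key is in the drawer; drawer is closed]\n"
    "He left for work.\n"
    "[STATE: key is in the drawer; Tom is at work]\n"
)
prompt = demonstration + "Later, Tom's sister looked for the key.\n"
print(llm(prompt))
```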
The common practice for training commonsense models has gone from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from-machine-to-corpus-to-machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in knowledge distillation (Hinton et al., 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically, as text, in addition to the neural model. We also distill only one aspect: the commonsense of a general language model teacher, allowing the student to be a different type of model, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant on all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model's commonsense capabilities despite being 100x smaller. We apply this to the ATOMIC resource, and share our new symbolic knowledge graph and commonsense models.
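A hedged sketch of the distillation loop is given below: prompt a large general-purpose model for commonsense inferences about an event, keep only those the critic accepts, and collect the survivors as knowledge-graph triples. The function stubs, threshold, and relation name stand in for GPT-3, the trained critic, and ATOMIC-style relations.

```python
def generate_inferences(event: str, n: int = 5) -> list[str]:
    # Stand-in for few-shot prompting a large general LM for causal commonsense
    # completions, e.g. "As a result, PersonX ...".
    return [f"(candidate inference {i} for: {event})" for i in range(n)]

def critic_score(event: str, inference: str) -> float:
    # Stand-in for a separately trained critic that rates plausibility in [0, 1].
    return 0.9

THRESHOLD = 0.5
knowledge_graph = []
for event in ["PersonX gives PersonY a gift"]:
    for inf in generate_inferences(event):
        if critic_score(event, inf) >= THRESHOLD:
            knowledge_graph.append((event, "xEffect", inf))  # illustrative relation name

print(len(knowledge_graph), "triples kept")
```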