Current spoken dialogue systems initiate their turns after a long period of silence (700-1000 ms), which leads to little real-time feedback, sluggish responses, and an overall stilted conversational flow. Humans typically respond within 200 ms, and successfully predicting upcoming turn onsets would enable a spoken dialogue agent to do the same. In this work, we predict the time until a turn onset using prosodic features extracted with a pretrained speech representation model (wav2vec 1.0) on the user audio, together with word features from a pretrained language model (GPT-2). To evaluate errors, we propose two metrics with respect to the predicted and true turn-shift times. We train and evaluate our models on the Switchboard corpus and find that our method outperforms prior work on our metrics and substantially outperforms the common approach of waiting for 700 ms of silence.
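As a rough illustration of the kind of model described above (not the authors' implementation), the sketch below fuses frame-level prosodic features with a word-level language-model state to regress the time until the next turn onset; the dimensions, the GRU fusion, and the L1 objective are all assumptions.

```python
# Illustrative sketch: fuse frame-level prosodic features (e.g. wav2vec
# outputs) with a word-level LM state (e.g. GPT-2) to predict the time
# remaining until the next turn onset. All dimensions are assumptions.
import torch
import torch.nn as nn

class TurnOnsetPredictor(nn.Module):
    def __init__(self, prosody_dim=512, word_dim=768, hidden=256):
        super().__init__()
        self.prosody_rnn = nn.GRU(prosody_dim, hidden, batch_first=True)
        self.word_proj = nn.Linear(word_dim, hidden)
        self.head = nn.Linear(2 * hidden, 1)  # seconds until turn onset

    def forward(self, prosody_feats, word_feats):
        # prosody_feats: (B, T_frames, prosody_dim); word_feats: (B, word_dim)
        _, h = self.prosody_rnn(prosody_feats)
        fused = torch.cat([h[-1], self.word_proj(word_feats)], dim=-1)
        return self.head(fused).squeeze(-1)

model = TurnOnsetPredictor()
pred = model(torch.randn(2, 100, 512), torch.randn(2, 768))
loss = nn.functional.l1_loss(pred, torch.tensor([0.3, 1.2]))  # true offsets (s)
```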
Recent text-to-speech (TTS) systems have achieved quality comparable to that of humans. However, their application to spoken dialogue has not been widely studied. This study aims to realize TTS that closely resembles human dialogue. First, we record and transcribe actual spontaneous dialogues. Then, the proposed dialogue TTS is trained in two stages. In the first stage, variational autoencoder (VAE)-VITS or Gaussian mixture variational autoencoder (GMVAE)-VITS is trained; these extend VITS, a recently proposed end-to-end TTS model. A style encoder that extracts a latent speaking-style representation from speech is trained jointly with the TTS. In the second stage, a style predictor is trained to predict the speaking style to be synthesized from the dialogue history. During inference, by feeding the speaking-style representation predicted by the style predictor to VAE/GMVAE-VITS, speech can be synthesized in a style appropriate to the dialogue context. Subjective evaluation results show that the proposed method outperforms the original VITS in terms of dialogue-level naturalness.
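To make the two-stage setup concrete, here is a minimal, hypothetical sketch of the second-stage style predictor: a recurrent model over utterance-level embeddings of the dialogue history regresses the latent style vector that the jointly trained style encoder would produce. All dimensions and the MSE objective are illustrative assumptions.

```python
# Hypothetical second-stage style predictor: a GRU over utterance-level
# embeddings of the dialogue history regresses the style latent that the
# first-stage style encoder would extract from the target speech.
import torch
import torch.nn as nn

class StylePredictor(nn.Module):
    def __init__(self, utt_dim=256, style_dim=16, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(utt_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, style_dim)

    def forward(self, history):          # history: (B, n_utts, utt_dim)
        _, h = self.rnn(history)
        return self.out(h[-1])           # predicted style latent (B, style_dim)

predictor = StylePredictor()
target_style = torch.randn(4, 16)        # from the trained style encoder
loss = nn.functional.mse_loss(predictor(torch.randn(4, 8, 256)), target_style)
```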
Speaker diarization is the task of labeling audio or video recordings with classes that correspond to speaker identity, or in short, the task of identifying "who spoke when". In the early years, speaker diarization algorithms were developed for speech recognition on multispeaker audio recordings to enable speaker-adaptive processing. These algorithms also gained their own value as standalone applications over time, providing speaker-specific metadata for downstream tasks such as audio retrieval. More recently, with the emergence of deep learning technology, which has driven revolutionary changes in research and practice across speech application domains, rapid advancements have also been made in speaker diarization. In this paper, we review not only the historical development of speaker diarization technology but also the recent advancements in neural speaker diarization approaches. In addition, we discuss how speaker diarization systems have been integrated with speech recognition applications, and how the recent surge of deep learning is leading the way to jointly model these two components so that they complement each other. By considering such exciting technical trends, we believe this paper is a valuable contribution to the community, consolidating the recent developments with neural methods and thereby facilitating further progress toward more effective speaker diarization.
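For readers unfamiliar with the basic paradigm the review covers, a minimal embedding-and-cluster diarization sketch follows; the random vectors stand in for real speaker embeddings (e.g., x-vectors), and the distance threshold is an arbitrary assumption.

```python
# Minimal generic diarization pipeline: one embedding per speech segment,
# clustered without knowing the speaker count, then printed as "who spoke
# when". Random vectors stand in for real speaker embeddings.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# One embedding per speech segment: (start_s, end_s, embedding)
segments = [(i * 1.5, i * 1.5 + 1.5, rng.normal(size=64)) for i in range(10)]
X = np.stack([e for _, _, e in segments])
X /= np.linalg.norm(X, axis=1, keepdims=True)    # cosine-style normalization

labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.0, linkage="average"
).fit_predict(X)                                  # speaker count inferred

for (start, end, _), spk in zip(segments, labels):
    print(f"{start:5.1f}-{end:5.1f}s  speaker_{spk}")
```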
In this paper, we present a novel method for phoneme-level prosody control of F0 and duration using intuitive discrete labels. We propose an unsupervised prosodic clustering process which is used to discretize phoneme-level F0 and duration features from a multispeaker speech dataset. These features are fed as an input sequence of prosodic labels to a prosody encoder module which augments an autoregressive attention-based text-to-speech model. We utilize various methods in order to improve prosodic control range and coverage, such as augmentation, F0 normalization, balanced clustering for duration and speaker-independent clustering. The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining the speaker identity. Instead of relying on reference utterances for inference, we introduce a prior prosody encoder which learns the style of each speaker and enables speech synthesis without the requirement of reference audio. We also fine-tune the multispeaker model to unseen speakers with limited amounts of data, as a realistic application scenario and show that the prosody control capabilities are maintained, verifying that the speaker-independent prosodic clustering is effective. Experimental results show that the model has high output speech quality and that the proposed method allows efficient prosody control within each speaker's range despite the variability that a multispeaker setting introduces.
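A minimal sketch of the prosodic discretization idea follows, under the assumption that per-speaker F0 normalization and speaker-independent k-means over phoneme-level (F0, duration) features approximate the proposed clustering; the cluster count and normalization scheme are illustrative.

```python
# Sketch of unsupervised prosodic discretization: per-speaker F0
# normalization, log-duration, then speaker-independent k-means producing
# the discrete prosody labels fed to the prosody encoder.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
speakers = {s: rng.normal(loc=120 + 40 * s, scale=20, size=(200, 2))
            for s in range(3)}           # fake per-phoneme (F0 Hz, dur ms)

normalized = []
for feats in speakers.values():
    f0 = (feats[:, 0] - feats[:, 0].mean()) / feats[:, 0].std()  # per-speaker
    dur = np.log(np.clip(feats[:, 1], 1, None))                  # log duration
    normalized.append(np.stack([f0, dur], axis=1))
X = np.concatenate(normalized)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
prosody_labels = kmeans.predict(X)       # discrete phoneme-level labels
```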
In this paper, we present VDTTS, a Visually-Driven Text-to-Speech model. Motivated by dubbing, VDTTS takes advantage of video frames as an additional input alongside text and generates speech that matches the video signal. We demonstrate how this allows VDTTS, unlike plain TTS models, to produce speech that not only exhibits prosodic variations such as natural pauses and pitch, but is also synchronized with the input video. Experimentally, we show that our model produces well-synchronized outputs, approaching the video-speech synchronization quality of the ground truth, on several challenging benchmarks, including "in-the-wild" content from VoxCeleb2. We encourage the reader to view the demo videos, which demonstrate video-speech synchronization, robustness to speaker ID swapping, and prosody.
Voice Conversion (VC) is the task of making a spoken utterance by one speaker sound as if uttered by a different speaker, while keeping other aspects like content unchanged. Current VC methods focus primarily on spectral features like timbre, while ignoring the unique speaking style of people, which often impacts prosody. In this study, we introduce a method for converting not only the timbre, but also prosodic information (i.e., rhythm and pitch changes) to those of the target speaker. The proposed approach is based on a pretrained, self-supervised model for encoding speech to discrete units, which makes it simple, effective, and easy to optimise. We consider the many-to-many setting with no paired data. We introduce a suite of quantitative and qualitative evaluation metrics for this setup, and empirically demonstrate that the proposed approach is significantly superior to the evaluated baselines. Code and samples can be found under https://pages.cs.huji.ac.il/adiyoss-lab/dissc/ .
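The following toy sketch (assumed, not the paper's code) shows why discrete units make rhythm conversion straightforward: collapsing repeated units removes the source timing, and re-expanding each unit with target-speaker durations imposes the target rhythm. The fixed duration lookup stands in for a learned duration predictor.

```python
# Toy rhythm conversion over discrete units: collapse repeats to strip the
# source speaker's timing, then re-expand with target-speaker durations.
from itertools import groupby

def convert_rhythm(units, target_durations):
    """units: frame-level discrete unit IDs from a self-supervised encoder.
    target_durations: per-unit frame counts for the target speaker (a fixed
    lookup here, standing in for a learned duration predictor)."""
    deduped = [u for u, _ in groupby(units)]          # content, timing removed
    out = []
    for u in deduped:
        out.extend([u] * target_durations.get(u, 2))  # target-speaker timing
    return out

source = [7, 7, 7, 3, 3, 9, 9, 9, 9]                  # slow source speaker
print(convert_rhythm(source, {7: 1, 3: 2, 9: 1}))      # -> [7, 3, 3, 9]
```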
This work explores the task of synthesizing speech in human voices that do not exist. We call this task "speaker generation", and present TacoSpawn, a system that performs competitively at this task. TacoSpawn is a recurrent attention-based text-to-speech model that learns a distribution over a speaker embedding space, which enables the sampling of novel and diverse speakers. Our approach is easy to implement and does not require transfer learning from speaker ID systems. We present objective and subjective metrics for evaluating performance on this task, and demonstrate that our proposed objective metrics correlate with human perception of speaker similarity. Audio samples are available on our demo page.
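A minimal sketch of the "learn a distribution over speaker embedding space, then sample" idea follows; the Gaussian mixture over random stand-in embeddings is an assumption substituting for the model's learned speaker prior.

```python
# Fit a simple distribution over speaker embeddings, then sample embeddings
# for speakers that do not exist; each sample would condition the TTS
# decoder to synthesize a novel voice.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_speaker_embeddings = rng.normal(size=(500, 128))  # one per seen speaker

prior = GaussianMixture(n_components=10, covariance_type="diag",
                        random_state=0).fit(train_speaker_embeddings)

novel_speakers, _ = prior.sample(5)   # embeddings for nonexistent speakers
```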
In this work we propose a novel token-based training strategy that improves Transformer-Transducer (T-T) based speaker change detection (SCD) performance. The conventional T-T based SCD model loss optimizes all output tokens equally. Due to the sparsity of the speaker changes in the training data, the conventional T-T based SCD model loss leads to sub-optimal detection accuracy. To mitigate this issue, we use a customized edit-distance algorithm to estimate the token-level SCD false accept (FA) and false reject (FR) rates during training and optimize model parameters to minimize a weighted combination of the FA and FR, focusing the model on accurately predicting speaker changes. We also propose a set of evaluation metrics that align better with commercial use cases. Experiments on a group of challenging real-world datasets show that the proposed training method can significantly improve the overall performance of the SCD model with the same number of parameters.
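To illustrate the token-level idea, the sketch below aligns reference and hypothesis token sequences by edit distance and counts false accepts and false rejects of the speaker-change token only; the `<sc>` token name and the equal FA/FR weighting are illustrative assumptions.

```python
# Align reference/hypothesis token sequences by edit distance, then count
# FA (spurious speaker-change token) and FR (missed speaker-change token).
def align(ref, hyp):
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1): d[i][0] = i
    for j in range(m + 1): d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i-1][j] + 1, d[i][j-1] + 1,
                          d[i-1][j-1] + (ref[i-1] != hyp[j-1]))
    i, j, pairs = n, m, []
    while i or j:                      # trace back one optimal alignment
        if i and j and d[i][j] == d[i-1][j-1] + (ref[i-1] != hyp[j-1]):
            pairs.append((ref[i-1], hyp[j-1])); i, j = i - 1, j - 1
        elif i and d[i][j] == d[i-1][j] + 1:
            pairs.append((ref[i-1], None)); i -= 1
        else:
            pairs.append((None, hyp[j-1])); j -= 1
    return pairs

def sc_fa_fr(ref, hyp, sc="<sc>"):
    pairs = align(ref, hyp)
    fa = sum(1 for r, h in pairs if h == sc and r != sc)  # spurious change
    fr = sum(1 for r, h in pairs if r == sc and h != sc)  # missed change
    return fa, fr

fa, fr = sc_fa_fr(["hi", "<sc>", "hello", "there"],
                  ["hi", "hello", "<sc>", "there"])
loss_proxy = 0.5 * fa + 0.5 * fr       # weighted combination, as in training
```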
Long-range context modeling is crucial to both dialogue understanding and generation. The most popular method for dialogue context representation is to concatenate the last-$k$ previous utterances. However, this method may not be ideal for conversations containing long-range dependencies. In this work, we propose DialoGX, a novel encoder-decoder based framework for conversational response generation with a generalized and explainable context representation that can look beyond the last-$k$ utterances. Hence the method is adaptive to conversations with long-range dependencies. The main idea of our approach is to identify and utilize the most relevant historical utterances instead of the last-$k$ utterances in chronological order. We study the effectiveness of our proposed method on both dialogue generation (open-domain) and understanding (DST) tasks. DialoGX achieves comparable performance with the state-of-the-art models on DailyDialog dataset. We also observe performance gain in existing DST models with our proposed context representation strategy on MultiWOZ dataset. We justify our context representation through the lens of psycholinguistics and show that the relevance score of previous utterances agrees well with human cognition which makes DialoGX explainable as well.
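The core selection idea can be sketched as follows, with cosine similarity over precomputed utterance embeddings standing in (as an assumption) for DialoGX's learned relevance scoring:

```python
# Select the k most relevant historical utterances instead of the last k,
# using cosine similarity against the current query as a stand-in scorer.
import numpy as np

def select_context(history_embs, query_emb, k=3):
    """history_embs: (n_utts, d) embeddings of previous utterances.
    Returns indices of the k most relevant utterances, in dialogue order."""
    h = history_embs / np.linalg.norm(history_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    scores = h @ q
    top = np.argsort(scores)[-k:]
    return sorted(top.tolist())          # chronological, but relevance-chosen

rng = np.random.default_rng(0)
idx = select_context(rng.normal(size=(12, 64)), rng.normal(size=64))
print(idx)   # e.g. utterances 2, 7, 11 rather than the last 3
```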
This paper presents an unsupervised, segment-based method for robust voice activity detection (rVAD). The method consists of two passes of denoising followed by a voice activity detection (VAD) stage. In the first pass, high-energy segments in a speech signal are detected by using a posteriori signal-to-noise ratio (SNR) weighted energy difference, and if no pitch is detected within a segment, the segment is considered a high-energy noise segment and set to zero. In the second pass, the speech signal is denoised by a speech enhancement method, for which several methods are explored. Next, neighboring frames with pitch are grouped together to form pitch segments, and based on speech statistics, the pitch segments are further extended from both ends in order to include voiced and unvoiced sounds and possible non-speech parts. Finally, a posteriori SNR weighted energy difference is applied to the extended pitch segments of the denoised speech signal for detecting voice activity. We evaluate the VAD performance of the proposed method using two databases, RATS and Aurora-2, which contain a large variety of noise conditions. The rVAD method is further evaluated in terms of speaker verification performance on the RedDots 2016 challenge database and its noise-corrupted versions. Experimental results show that rVAD compares favorably with a number of existing methods. In addition, we present a modified version of rVAD in which computationally intensive pitch extraction is replaced by computationally efficient spectral flatness calculation. The modified version significantly reduces the computational complexity at the cost of moderately inferior VAD performance, which is an advantage when processing a large amount of data or running on low-resource devices. The source code of rVAD is made publicly available.
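A heavily simplified sketch of the first-pass idea follows: frame energies, a crude noise-floor estimate, an a posteriori SNR weight on the frame-to-frame energy difference, and a threshold marking high-energy frames. The window sizes, the percentile noise estimate, and the threshold are assumptions, not the paper's parameters.

```python
# Simplified first-pass sketch: SNR-weighted energy difference thresholded
# to flag high-energy candidate frames (pitch checks omitted).
import numpy as np

def high_energy_frames(x, frame=400, hop=160, thresh=0.1):
    frames = np.lib.stride_tricks.sliding_window_view(x, frame)[::hop]
    e = (frames ** 2).sum(axis=1) + 1e-10             # frame energies
    noise = np.percentile(e, 10)                      # crude noise floor
    snr_post = np.maximum(e / noise, 1.0)             # a posteriori SNR
    diff = np.abs(np.diff(np.log(e), prepend=np.log(e[0])))
    weighted = diff * np.log(snr_post)                # SNR-weighted energy diff
    return weighted > thresh                          # candidate speech frames

x = np.random.randn(16000) * np.r_[np.zeros(8000), np.ones(8000)]
mask = high_energy_frames(x)                          # toy silence-then-noise
```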
We introduce a novel automatic evaluation method for speaker similarity assessment that is consistent with human perceptual scores. Modern neural text-to-speech models require a large amount of clean training data, which is why many solutions are switching from single-speaker models to models trained on examples from many different speakers. Multi-speaker models bring new possibilities, such as a faster creation of new voices, but also a new problem: speaker leakage, where the speaker identity of a synthesized example may not match that of the target speaker. Currently, the only way to discover this issue is through costly perceptual evaluations. In this work, we propose an automatic method for assessing speaker similarity. For that purpose, we extend recent work on speaker verification systems and evaluate how different metrics and speaker embedding models reflect Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) scores. Our experiments show that we can train a model to predict speaker similarity MUSHRA scores from speaker embeddings with an accuracy of 0.96 and up to a 0.78 Pearson score at the utterance level.
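An assumed sketch of such an evaluation model: simple features computed from a (synthesized, target) speaker-embedding pair feed a regressor trained on human MUSHRA scores, and Pearson correlation measures agreement on held-out data. The feature set and the choice of regressor are illustrative, not the paper's.

```python
# Predict MUSHRA-style similarity scores from speaker-embedding pairs and
# evaluate agreement with (toy) human scores via Pearson correlation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def pair_features(a, b):
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.array([cos, np.linalg.norm(a - b)])

# Fake data: embedding pairs with MUSHRA labels correlated with similarity.
pairs = [(rng.normal(size=32), rng.normal(size=32)) for _ in range(400)]
X = np.stack([pair_features(a, b) for a, b in pairs])
y = 50 + 40 * X[:, 0] + rng.normal(scale=5, size=len(X))   # toy scores

model = GradientBoostingRegressor().fit(X[:300], y[:300])
r, _ = pearsonr(model.predict(X[300:]), y[300:])
print(f"utterance-level Pearson: {r:.2f}")
```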
The task of emotion recognition in conversations (ERC) benefits from the availability of multiple modalities, as offered, for example, in the video-based MELD dataset. However, only a few research approaches use both acoustic and visual information from the MELD videos. There are two reasons for this: First, label-to-video alignments in MELD are noisy, making those videos an unreliable source of emotional speech data. Second, conversations can involve several people in the same scene, which requires the detection of the person speaking the utterance. In this paper we demonstrate that by using recent automatic speech recognition and active speaker detection models, we are able to realign the videos of MELD, and capture the facial expressions from uttering speakers in 96.92% of the utterances provided in MELD. Experiments with a self-supervised voice recognition model indicate that the realigned MELD videos more closely match the corresponding utterances offered in the dataset. Finally, we devise a model for emotion recognition in conversations trained on the face and audio information of the MELD realigned videos, which outperforms state-of-the-art models for ERC based on vision alone. This indicates that active speaker detection is indeed effective for extracting facial expressions from the uttering speakers, and that faces provide more informative visual cues than the visual features state-of-the-art models have been using so far.
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But, while early efforts on rule-based systems found limited success, the emergence of deep learning has enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues that have maintained the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways that were inspired by general state-of-the-art AI methodologies, but they also targeted the characteristics that dialogue systems possess.
Although neural networks have demonstrated an extraordinary ability to model linguistic content, capturing contextual information related to a speaker's conversational role is an open area of research. In this work, we analyze the effect of speaker role on language use through the game of Mafia, in which participants are assigned either an honest or a deceptive role. In addition to building a framework to collect a dataset of Mafia game transcripts, we demonstrate that there are differences in the language produced by players with different roles. We confirm that classification models are able to rank deceptive players as more suspicious than honest ones based only on their use of language. Furthermore, we show that training models on two auxiliary tasks outperforms a standard BERT-based text classification approach. We also present methods for using the trained models to identify features that distinguish between player roles, which could be used to assist players during Mafia games.
Stuttering is a speech disorder in which the flow of speech is interrupted by involuntary pauses and repetitions of sounds. Stuttering identification is an interesting interdisciplinary research problem that involves pathology, psychology, acoustics, and signal processing, which makes detection hard and complicated. Recent developments in machine and deep learning have dramatically revolutionized the speech domain, yet stuttering identification has received minimal attention. This work fills the gap by attempting to bring researchers together from interdisciplinary fields. In this paper, we comprehensively review acoustic features as well as statistical and deep-learning-based stuttering/disfluency classification methods. We also present several challenges and future directions.
Despite recent advances in generative modeling for text-to-speech synthesis, these models do not yet have the same fine-grained adjustability as pitch-conditioned deterministic models such as FastPitch and FastSpeech2. Pitch information is not only low-dimensional but also discontinuous, which makes it particularly difficult to model in a generative setting. Our work explores several techniques for handling these issues in the context of normalizing flow models. We also find this problem to be well suited for neural conditional flows, a highly expressive alternative to the more common affine coupling mechanism in normalizing flows.
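For reference, a minimal affine coupling layer, the mechanism the abstract contrasts with more expressive alternatives, is sketched below; conditioning on pitch is represented simply by appending pitch features to the coupling network input, which is an assumption.

```python
# Minimal affine coupling layer for a normalizing flow, with external
# conditioning (e.g. pitch features) appended to the coupling network input.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x, cond):                  # cond: e.g. pitch features
        xa, xb = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(torch.cat([xa, cond], -1)).chunk(2, -1)
        log_s = torch.tanh(log_s)                # keep scales well-behaved
        yb = xb * log_s.exp() + t                # invertible given xa, cond
        logdet = log_s.sum(-1)
        return torch.cat([xa, yb], -1), logdet

layer = AffineCoupling(dim=16, cond_dim=4)
y, logdet = layer(torch.randn(8, 16), torch.randn(8, 4))
```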
Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to (i) condition on their given profile information; and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown, our model is trained to engage its partner with personal topics, and we show the resulting dialogue can be used to predict profile information about the interlocutors.
Generating expressive and contextually appropriate prosody remains a challenge for modern text-to-speech (TTS) systems. This is particularly evident for long, multi-sentence inputs. In this paper, we examine simple extensions to a Transformer-based FastSpeech-like system, with the goal of improving prosody for multi-sentence TTS. We find that long context, powerful text features, and training on multi-speaker data all improve prosody. More interestingly, they produce synergies. Long context disambiguates prosody, improves coherence, and plays to the strengths of Transformers. Fine-tuned word-level features from a powerful language model such as BERT appear to profit from more training data, which is readily available in a multi-speaker setting. We investigate objective metrics on pausing and pacing and perform a thorough subjective evaluation of speech naturalness. Our main system, which combines all the extensions, achieves consistently good results, including a significant improvement in speech naturalness over all its competitors.
Speech emotion conversion is the task of modifying the perceived emotion of a speech utterance while preserving the lexical content and speaker identity. In this study, we cast the emotion conversion problem as a spoken language translation task. We decompose speech into discrete and disentangled learned representations consisting of content units, F0, speaker, and emotion. First, we modify the speech content by translating the content units to a target emotion, and then predict the prosodic features based on these units. Finally, the speech waveform is generated by feeding the predicted representations into a neural vocoder. Such a paradigm allows us to go beyond spectral and parametric changes of the signal and to model non-verbal vocalizations, such as laughter insertion, yawning removal, and so on. We demonstrate objectively and subjectively that the proposed method outperforms the baselines in terms of perceived emotion and audio quality. We rigorously evaluate all components of such a complex system and conclude with an extensive model analysis and ablation study to better emphasize the architectural choices, strengths, and weaknesses of the proposed method. Samples and code will be publicly available under the following link: https://speechbot.github.io/emotion.
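The decompose-translate-resynthesize paradigm can be outlined as in the skeleton below; every component here is a toy stand-in, since the real system uses learned unit encoders, translation models, and a neural vocoder.

```python
# Skeleton of the decompose-translate-resynthesize paradigm; all function
# bodies are placeholders for learned models.
from dataclasses import dataclass

@dataclass
class Decomposed:
    units: list        # discrete content units
    f0: list           # frame-level pitch
    speaker: int
    emotion: str

def translate_units(units, target_emotion):
    # A seq2seq model would map source units to target-emotion units,
    # possibly inserting non-verbal units (e.g. laughter) or deleting others.
    return units + ([42] * 3 if target_emotion == "amused" else [])

def predict_prosody(units, target_emotion):
    base = 1.0 if target_emotion == "sad" else 1.3
    return [base] * len(units)           # stand-in F0 contour per unit

def convert(dec: Decomposed, target_emotion: str):
    units = translate_units(dec.units, target_emotion)
    f0 = predict_prosody(units, target_emotion)
    return units, f0, dec.speaker        # vocoder(units, f0, speaker) -> wav

print(convert(Decomposed([5, 5, 9], [1.1, 1.1, 0.9], 0, "neutral"), "amused"))
```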
Automatic speech recognition (ASR) of single-channel, far-field recordings with an unknown number of speakers is traditionally tackled by cascaded modules. Recent research shows that end-to-end (E2E) multi-speaker ASR models can achieve superior recognition accuracy compared to modular systems. However, these models do not ensure real-time applicability due to their dependency on the full audio context. This work takes real-time applicability as the first priority in model design and addresses a few challenges in the previous multi-speaker recurrent neural network transducer (MS-RNN-T). First, we introduce on-the-fly overlapping speech simulation during training, yielding a 14% relative word error rate (WER) improvement on the LibriSpeechMix test set. Second, we propose a novel multi-turn RNN-T (MT-RNN-T) model with an overlap-based target arrangement strategy that generalizes to an arbitrary number of speakers without changes to the model architecture. We investigate the impact of the maximum number of speakers seen during training on the LibriCSS test set and report a 28% relative WER improvement over the two-speaker MS-RNN-T. Third, we experiment with a rich transcription strategy for the joint recognition and segmentation of multi-party speech. Through an in-depth analysis, we discuss potential pitfalls of the proposed system as well as promising future research directions.
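The overlap simulation idea is easy to sketch: mix a second utterance into the first at a random offset and gain so that the model sees overlapped speech during training. The offset and gain ranges here are illustrative assumptions.

```python
# On-the-fly overlapping speech simulation: mix utterance B into utterance A
# at a random offset and relative gain to create overlapped training audio.
import numpy as np

def simulate_overlap(utt_a, utt_b, rng, max_offset_s=2.0, sr=16000):
    offset = rng.integers(0, int(max_offset_s * sr))
    gain = rng.uniform(0.5, 1.0)                      # relative level of B
    out = np.zeros(max(len(utt_a), offset + len(utt_b)), dtype=np.float32)
    out[:len(utt_a)] += utt_a
    out[offset:offset + len(utt_b)] += gain * utt_b   # overlapped region
    return out

rng = np.random.default_rng(0)
mix = simulate_overlap(rng.normal(size=32000).astype(np.float32),
                       rng.normal(size=24000).astype(np.float32), rng)
```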