Text-based speech editing allows users to edit speech intuitively by cutting, copying, and pasting text, which speeds up the editing process. In previous work, CampNet (context-aware mask prediction network) was proposed to realize text-based speech editing and significantly improved the quality of edited speech. This paper targets a new task: adding an emotional effect to the edited speech during text-based speech editing so that the generated speech is more expressive. To achieve this, we propose Emo-CampNet (emotion CampNet), which offers a choice of emotional attributes for the generated speech in text-based speech editing and has the one-shot ability to edit unseen speakers' speech. Firstly, we propose an end-to-end emotion-selectable text-based speech editing model. The key idea is to control the emotion of the generated speech by introducing additional emotion attributes into the context-aware mask prediction network. Secondly, to prevent the emotion of the generated speech from being interfered with by the emotional components of the original speech, a neutral content generator is proposed to remove the emotion from the original speech; it is optimized within a generative adversarial framework. Thirdly, two data augmentation methods are proposed to enrich the emotional and pronunciation information in the training set, which enables the model to edit unseen speakers' speech. The experimental results show that 1) Emo-CampNet can effectively control the emotion of the generated speech during text-based speech editing and can edit unseen speakers' speech, and 2) detailed ablation experiments further confirm the effectiveness of the emotion selectivity and the data augmentation methods. The demo page is available at https://hairuo55.github.io/Emo-CampNet/
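A minimal, hypothetical sketch of the emotion-conditioning idea (not the authors' implementation; all module names and sizes are illustrative): a learned emotion embedding is added to the context features before the mask-prediction decoder fills in the edited region.

```python
import torch
import torch.nn as nn

class EmotionConditionedDecoder(nn.Module):
    """Illustrative only: inject an emotion embedding into a mask-prediction decoder."""
    def __init__(self, feat_dim=256, n_emotions=5):
        super().__init__()
        self.emotion_emb = nn.Embedding(n_emotions, feat_dim)   # e.g. neutral/happy/sad/angry/surprise
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.out = nn.Linear(feat_dim, 80)  # predict mel-spectrogram frames for the masked region

    def forward(self, context_feats, emotion_id):
        # context_feats: (batch, time, feat_dim) features of the unmasked context
        emo = self.emotion_emb(emotion_id).unsqueeze(1)   # (batch, 1, feat_dim)
        h = self.decoder(context_feats + emo)             # broadcast the emotion over time
        return self.out(h)

x = torch.randn(2, 100, 256)
mel = EmotionConditionedDecoder()(x, torch.tensor([1, 3]))
print(mel.shape)  # torch.Size([2, 100, 80])
```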
Previous databases have been designed to further the development of fake audio detection. However, their fake utterances are mostly generated by altering the timbre, prosody, linguistic content, or channel noise of the original audio. They ignore a fake scenario in which an attacker replaces the acoustic scene of the original audio with a forged one. Such manipulated audio could pose a major threat to society if misused with malicious intent, which motivates us to fill this gap. This paper designs such a dataset for scene fake audio detection (SceneFake). A manipulated audio in the SceneFake dataset is created solely by tampering with the acoustic scene of an utterance using speech enhancement technologies. With this dataset we can not only detect fake utterances on a seen test set but also evaluate how well fake detection models generalize to unseen manipulation attacks. Benchmark results on the SceneFake dataset are reported. In addition, an analysis of fake attacks with different speech enhancement technologies and signal-to-noise ratios is presented. The results show that scene-manipulated utterances cannot be detected reliably by the existing baseline models of ASVspoof 2019, and that detecting unseen scene-manipulated audio remains challenging.
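To make the attack concrete, here is a hedged sketch (not the dataset-construction code) of how a scene manipulation might be built: the original scene is first stripped by a speech enhancer (a placeholder here), and a different scene is mixed back in at a chosen signal-to-noise ratio.

```python
import numpy as np

def mix_at_snr(speech, scene_noise, snr_db):
    """Mix an (assumed already-enhanced) speech signal with a new acoustic scene at a target SNR."""
    noise = np.resize(scene_noise, speech.shape)
    p_speech = np.mean(speech ** 2) + 1e-12
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

# enhanced = some_speech_enhancer(original)           # placeholder: remove the original scene first
# fake = mix_at_snr(enhanced, office_noise, snr_db=10)  # then inject a forged scene
```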
Many effective attempts have been made at DeepFake audio detection. However, they can only distinguish real audio from fake. For many practical application scenarios, it is also necessary to know which tool or algorithm generated the DeepFake audio. This raises a question: can we detect the system fingerprints of DeepFake audio? This paper therefore presents a preliminary study on detecting the system fingerprints of DeepFake audio. Experiments are conducted on DeepFake audio datasets produced by five of the latest deep-learning speech synthesis systems. The results show that LFCC features are comparatively well suited to system fingerprint detection. Moreover, among the evaluated models, ResNet achieves better detection results than the LCNN- and X-vector-based models. t-SNE visualization shows that different speech synthesis systems generate different system fingerprints.
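As a minimal sketch of the LFCC front end mentioned above (assumed parameter values; the classifier itself is not shown), a linear-frequency filterbank is applied to the power spectrum, followed by a log and a DCT.

```python
import numpy as np
from scipy.signal import stft
from scipy.fftpack import dct

def lfcc(wave, sr=16000, n_filters=20, n_coeffs=20, n_fft=512):
    """Minimal LFCC sketch: linear-frequency triangular filterbank, log energy, then DCT."""
    _, _, spec = stft(wave, fs=sr, nperseg=n_fft)
    power = np.abs(spec) ** 2                                    # (freq_bins, frames)
    edges = np.linspace(0, power.shape[0] - 1, n_filters + 2).astype(int)
    fbank = np.zeros((n_filters, power.shape[0]))
    for i in range(n_filters):                                   # triangular filters on a linear scale
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        fbank[i, lo:mid + 1] = np.linspace(0, 1, mid - lo + 1)
        fbank[i, mid:hi + 1] = np.linspace(1, 0, hi - mid + 1)
    log_energy = np.log(fbank @ power + 1e-10)                   # (n_filters, frames)
    return dct(log_energy, axis=0, norm="ortho")[:n_coeffs]      # (n_coeffs, frames)

feats = lfcc(np.random.randn(16000))  # these features would then feed a ResNet-style classifier
```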
Many effective attempts have been made at fake audio detection. However, they can only provide detection results and offer no countermeasure against the resulting harm. For many related practical applications, it is also necessary to know which model or algorithm generated the fake audio. We therefore propose a new problem: detecting the vocoder fingerprints of fake audio. Experiments are conducted on datasets synthesized by eight state-of-the-art vocoders. We preliminarily explore features and model architectures. t-SNE visualization shows that different vocoders generate distinct vocoder fingerprints.
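An illustrative sketch of the kind of t-SNE visualization referred to above (not the paper's code; the embeddings here are synthetic stand-ins for utterance-level features from a fingerprint classifier):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Synthetic stand-in: 8 "vocoders", 100 utterance embeddings each (real ones would come from the model)
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(8), 100)
embeddings = rng.normal(size=(800, 128)) + labels[:, None] * 0.5

points = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(embeddings)
plt.scatter(points[:, 0], points[:, 1], c=labels, cmap="tab10", s=5)
plt.title("Vocoder fingerprint embeddings (illustrative)")
plt.savefig("vocoder_tsne.png")
```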
Existing fake audio detection systems often rely on expert experience to design acoustic features or to manually set the hyperparameters of the network structure. However, manual parameter tuning can have a relatively large impact on the results, and it is almost impossible to set the optimal parameters by hand. This paper therefore proposes a fully automated end-to-end fake audio detection method. We first use a wav2vec pre-trained model to obtain high-level representations of the speech. For the network structure, we then use a modified version of differentiable architecture search (DARTS) named Light-DARTS, which learns deep speech representations while automatically learning and optimizing complex neural structures composed of convolution operations and residual blocks. Experimental results on the ASVspoof 2019 LA dataset show that our proposed system achieves an equal error rate (EER) of 1.08%, outperforming state-of-the-art single systems.
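A hedged sketch of the wav2vec front end only (the checkpoint name is an example, not necessarily the one used in the paper, and a simple linear head stands in for the searched Light-DARTS back end):

```python
import torch
from transformers import Wav2Vec2Model

# Pre-trained wav2vec 2.0 encoder turns raw audio into high-level frame representations.
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")  # example checkpoint
encoder.eval()

waveform = torch.randn(1, 16000)                  # 1 second of 16 kHz audio
with torch.no_grad():
    hidden = encoder(waveform).last_hidden_state  # (1, frames, 768)

classifier = torch.nn.Linear(768, 2)              # placeholder for the searched architecture
score = classifier(hidden.mean(dim=1))            # utterance-level bona fide / spoof logits
```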
Recently, pioneering work has proposed a large number of acoustic features (log power spectrogram, linear frequency cepstral coefficients, constant-Q cepstral coefficients, etc.) for audio DeepFake detection, obtaining good performance and showing that different subbands contribute differently to the detection. However, these studies lack an explanation of the specific information carried by the subbands, and such features also discard information such as phase. Inspired by the mechanism of speech synthesis, in which fundamental frequency (F0) information is used to improve the quality of synthetic speech, we note that the F0 of synthetic speech is still too average and differs greatly from the F0 of real speech. F0 can therefore be expected to serve as important information for discriminating between real and fake speech, yet it cannot be used directly because its distribution is irregular. Instead, the frequency band containing most of the F0 is selected as an input feature. Meanwhile, to make full use of the phase and full-band information, we also propose using the real and imaginary spectrograms as complementary input features and modelling the disjoint subbands separately. Finally, the results from the F0 band and the real and imaginary spectrograms are fused. Experimental results on the ASVspoof 2019 LA dataset show that our proposed system is highly effective for the audio DeepFake detection task, achieving an equal error rate (EER) of 0.43%, which surpasses almost all other systems.
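A minimal sketch of the input features described above (the 400 Hz cutoff and the score-fusion weights are illustrative assumptions, not the paper's settings): the complex STFT is split into real and imaginary parts, and a low-frequency band is kept as a coarse carrier of F0 information.

```python
import numpy as np
from scipy.signal import stft

def real_imag_and_f0_band(wave, sr=16000, n_fft=512, f0_max_hz=400.0):
    """Sketch: real/imaginary spectrograms plus the low band where most F0 energy lies."""
    freqs, _, spec = stft(wave, fs=sr, nperseg=n_fft)
    real_part, imag_part = spec.real, spec.imag     # keep phase information explicitly
    f0_band = np.abs(spec[freqs <= f0_max_hz, :])   # illustrative low-frequency cutoff
    return real_part, imag_part, f0_band

r, i, f0 = real_imag_and_f0_band(np.random.randn(32000))
# Each stream would be modelled separately and the scores fused, e.g.
# final_score = w1 * score_f0 + w2 * score_real + w3 * score_imag
```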
Code-switching refers to alternating between languages during communication. Training end-to-end (E2E) automatic speech recognition (ASR) systems for code-switching is a challenging problem because of the lack of data, aggravated by the increased language context confusion caused by the presence of multiple languages. In this paper, we propose a language-related attention mechanism to reduce multilingual context confusion for E2E code-switching ASR models, based on the equivalence constraint (EC) theory. This linguistic theory requires that any monolingual fragment occurring in a code-switching sentence must also occur in a monolingual sentence, which builds a bridge between monolingual data and code-switching data. By computing the respective attention of each language, our method can effectively transfer language knowledge from abundant monolingual data. We evaluate our method on the ASRU 2019 Mandarin-English code-switching challenge dataset. Compared with the baseline model, the proposed method achieves an 11.37% relative mixed error rate reduction.
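A hypothetical sketch of the per-language attention idea (this is not the paper's architecture, and the language-mask construction is greatly simplified): the decoder query attends to Mandarin frames and English frames separately, and the two contexts are merged.

```python
import torch
import torch.nn as nn

class LanguageRelatedAttention(nn.Module):
    """Illustrative: attend over Mandarin and English encoder states separately, then sum."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.ModuleDict({
            "zh": nn.MultiheadAttention(d_model, n_heads, batch_first=True),
            "en": nn.MultiheadAttention(d_model, n_heads, batch_first=True),
        })

    def forward(self, query, enc_states, en_mask):
        # en_mask: (batch, time) booleans, True where a frame is (assumed) English
        out = 0
        for lang, keep in (("en", en_mask), ("zh", ~en_mask)):
            ctx, _ = self.attn[lang](query, enc_states, enc_states,
                                     key_padding_mask=~keep)  # hide the other language's frames
            out = out + ctx
        return out

q = torch.randn(2, 5, 256)
enc = torch.randn(2, 50, 256)
en_mask = torch.zeros(2, 50, dtype=torch.bool)
en_mask[:, 25:] = True
ctx = LanguageRelatedAttention()(q, enc, en_mask)  # (2, 5, 256)
```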
Weakly-supervised object localization aims to indicate both the category and the spatial extent of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map so as to perceive the whole object, yet they ignore the co-occurrence confounder of the object and its context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object features, we further propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in resolving the dilemma between classification and localization performance.
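As a hedged sketch of the multi-teacher balancing idea (not the authors' loss; the weighting scheme and hyperparameters are assumptions), the student combines a hard-label term with soft targets from a classification-oriented teacher and a localization-oriented teacher:

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, cls_teacher_logits, loc_teacher_logits,
                          labels, alpha=0.5, tau=2.0):
    """Illustrative: cross-entropy plus two temperature-scaled KL terms, balanced by alpha."""
    ce = F.cross_entropy(student_logits, labels)
    def kd(teacher_logits):
        return F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                        F.softmax(teacher_logits / tau, dim=1),
                        reduction="batchmean") * tau * tau
    return ce + alpha * kd(cls_teacher_logits) + (1 - alpha) * kd(loc_teacher_logits)

s, t1, t2 = torch.randn(4, 10), torch.randn(4, 10), torch.randn(4, 10)
loss = multi_teacher_kd_loss(s, t1, t2, torch.randint(0, 10, (4,)))
```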
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework, AnKGE, to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference that takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability added by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights into the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
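As a sketch of the interpolation described above (the symbols are assumed for illustration and may not match the paper's exact notation), the final triple score can be written as a weighted combination of the analogy score and the base model score:

```latex
s(h, r, t) = \lambda \, s_{\mathrm{analogy}}(h, r, t) + (1 - \lambda)\, s_{\mathrm{base}}(h, r, t),
\qquad \lambda \in [0, 1] \ \text{an adaptive weight.}
```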
When robots learn reward functions using high-capacity models that take raw state directly as input, they need to learn both a representation for what matters in the task -- the task ``features" -- and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data and fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation and what is spurious, as well as which aspects of behavior can be compressed together and which cannot. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, so that we can learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
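A hedged sketch of how similarity queries could drive representation learning (the loss form is an assumption for illustration, not the paper's method): behavior pairs the user labels as similar are pulled together in feature space, and dissimilar pairs are pushed apart with a margin.

```python
import torch
import torch.nn.functional as F

def similarity_query_loss(feat_a, feat_b, same, margin=1.0):
    """Contrastive loss on user-labelled behavior pairs: same=1 pulls together, same=0 pushes apart."""
    dist = F.pairwise_distance(feat_a, feat_b)
    pull = same * dist.pow(2)
    push = (1 - same) * F.relu(margin - dist).pow(2)
    return (pull + push).mean()

a = torch.randn(8, 32, requires_grad=True)   # learned features of behavior A
b = torch.randn(8, 32, requires_grad=True)   # learned features of behavior B
labels = torch.randint(0, 2, (8,)).float()   # user answers to "are these behaviors similar?"
loss = similarity_query_loss(a, b, labels)
loss.backward()
```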