Removing reverb from reverberant music is a necessary step in cleaning up audio for downstream music manipulation. Reverberation in music falls into two categories: natural reverb and artificial reverb. Artificial reverb is more diverse than natural reverb because of its many parameter settings and reverberation types. However, recent supervised dereverberation methods may fail on such reverb because they rely on sufficiently diverse and numerous pairs of reverberant observations and their corresponding clean (dry) data for training in order to generalize to unseen observations at inference time. To address this, we propose an unsupervised method that removes general artificial reverb from music without requiring paired data for training. The proposed method is based on diffusion models: it initializes the unknown reverberation operator with a conventional signal-processing technique and then jointly refines this estimate and the dereverberated signal with the help of the diffusion model. We show through objective and perceptual evaluations that our method outperforms the current leading vocal dereverberation benchmarks.
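The abstract above describes an alternating scheme in which an unknown reverberation operator and a clean-signal estimate are refined together. The toy sketch below illustrates that idea with a learnable FIR reverb kernel under a reconstruction loss, with a placeholder soft-shrinkage step standing in for the diffusion prior. All names and the shrinkage step are assumptions for illustration, not the authors' model.

```python
# Toy sketch: jointly refine an unknown reverb kernel and a clean-signal
# estimate from a reverberant observation. The diffusion prior is replaced
# here by a placeholder soft-shrinkage "denoiser" purely for illustration.
import torch
import torch.nn.functional as F

def soft_shrink(x, lam=0.01):
    # Placeholder prior step (NOT the diffusion model from the paper).
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

def dereverb(observation, kernel_len=64, n_iters=200, lr=1e-2):
    obs = observation.view(1, 1, -1)              # [B=1, C=1, T]
    clean = obs.clone().requires_grad_(True)      # clean-signal estimate
    kernel = torch.zeros(1, 1, kernel_len)        # reverb operator estimate
    kernel[0, 0, 0] = 1.0                         # init: direct path only
    kernel.requires_grad_(True)
    opt = torch.optim.Adam([clean, kernel], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        # Forward model: reverberant = clean * kernel (causal convolution).
        reverbed = F.conv1d(F.pad(clean, (kernel_len - 1, 0)), kernel)
        loss = F.mse_loss(reverbed, obs)
        loss.backward()
        opt.step()
        with torch.no_grad():
            clean.data = soft_shrink(clean.data)  # prior/denoising step
    return clean.detach().squeeze(), kernel.detach().squeeze()

if __name__ == "__main__":
    y = torch.randn(16000)                        # stand-in for a reverberant excerpt
    x_hat, h_hat = dereverb(y)
    print(x_hat.shape, h_hat.shape)
```

In the actual method, the diffusion model supplies a far stronger prior that disambiguates the clean signal from the reverb operator; the shrinkage stand-in above does not attempt that and only shows the alternating structure.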
We propose a novel training algorithm for a multi-speaker neural text-to-speech (TTS) model based on multi-task adversarial training. A conventional training algorithm based on generative adversarial networks (GANs) significantly improves the quality of synthetic speech by reducing the statistical differences between natural and synthetic speech. However, this algorithm does not guarantee the generalization performance of the trained TTS model for the voices of unseen speakers not included in the training data. Our algorithm alternately trains two deep neural networks: a multi-task discriminator and a multi-speaker neural TTS model (i.e., the generator of the GAN). The discriminator is trained not only to distinguish natural speech from synthetic speech but also to verify whether the speaker of the input speech exists or not (i.e., whether the speaker was newly generated by interpolating the embedding vectors of seen speakers). Meanwhile, the generator is trained to minimize a weighted sum of the speech reconstruction loss and an adversarial loss for deceiving the discriminator, which achieves high-quality multi-speaker TTS even when the target speaker is unseen. Experimental evaluations show that our algorithm improves the quality of synthetic speech better than the conventional GAN-based training algorithm.
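A minimal sketch of the multi-task adversarial objective described above, assuming a discriminator with two output heads (natural vs. synthetic, and seen vs. interpolated speaker) and a generator loss that is a weighted sum of reconstruction and adversarial terms. The module shapes, feature dimensions, and the weight `lambda_adv` are placeholders, not the authors' architecture.

```python
# Sketch of a multi-task GAN objective for multi-speaker TTS (assumed shapes).
import torch
import torch.nn as nn

class MultiTaskDiscriminator(nn.Module):
    def __init__(self, feat_dim=80, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.real_fake_head = nn.Linear(hidden, 1)   # natural vs. synthetic
        self.spk_exist_head = nn.Linear(hidden, 1)   # seen vs. interpolated speaker

    def forward(self, mel):                          # mel: [B, T, feat_dim]
        h = self.body(mel).mean(dim=1)               # simple temporal pooling
        return self.real_fake_head(h), self.spk_exist_head(h)

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(D, mel_real, mel_fake, spk_exists_real, spk_exists_fake):
    rf_r, ex_r = D(mel_real)
    rf_f, ex_f = D(mel_fake.detach())
    real_fake = bce(rf_r, torch.ones_like(rf_r)) + bce(rf_f, torch.zeros_like(rf_f))
    exist = bce(ex_r, spk_exists_real) + bce(ex_f, spk_exists_fake)
    return real_fake + exist

def generator_loss(D, mel_fake, mel_target, lambda_adv=0.1):
    rf_f, ex_f = D(mel_fake)
    recon = nn.functional.l1_loss(mel_fake, mel_target)
    adv = bce(rf_f, torch.ones_like(rf_f)) + bce(ex_f, torch.ones_like(ex_f))
    return recon + lambda_adv * adv
```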
This paper proposes a human-in-the-loop speaker adaptation method for multi-speaker text-to-speech. With conventional speaker adaptation methods, the embedding vector of a target speaker is extracted from the speaker's reference speech using a speaker encoder trained on a speaker-discrimination task. However, these methods cannot obtain the target speaker's embedding vector when no reference speech is available. Our method is based on a human-in-the-loop optimization framework that involves a user in exploring the speaker-embedding space to find the target speaker's embedding. The proposed method uses a sequential line-search algorithm that repeatedly asks the user to select a point on a line segment in the embedding space. To efficiently choose the best speech sample from multiple stimuli, we also developed a system in which the user can switch among the candidate voices at each phoneme while the synthesized speech is played back in a loop. Experimental results show that the proposed method achieves performance comparable to the conventional method in both objective and subjective evaluations, even without directly using reference speech as input to the speaker encoder.
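A minimal sketch of the human-in-the-loop line search described above, assuming a synthesis callback and a user callback that returns the preferred candidate on each presented line segment. The embedding dimensionality, segment scale, and shrinkage factor are placeholders, not the authors' exact algorithm.

```python
# Sketch: sequential line search over a speaker-embedding space (assumed API).
import numpy as np

def sequential_line_search(synthesize, ask_user, dim=256, n_rounds=10,
                           segment_scale=1.0, n_candidates=7, seed=0):
    """synthesize(embedding) -> audio; ask_user(list_of_audio) -> chosen index."""
    rng = np.random.default_rng(seed)
    current = rng.standard_normal(dim) * 0.1          # initial embedding guess
    for _ in range(n_rounds):
        direction = rng.standard_normal(dim)
        direction /= np.linalg.norm(direction)
        # Present evenly spaced points on a segment through the current estimate.
        ts = np.linspace(-segment_scale, segment_scale, n_candidates)
        candidates = [current + t * direction for t in ts]
        audios = [synthesize(c) for c in candidates]
        choice = ask_user(audios)                     # human feedback in the loop
        current = candidates[choice]
        segment_scale *= 0.8                          # gradually shrink the segment
    return current
```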
We propose an end-to-end empathetic dialogue speech synthesis (DSS) model that considers both the linguistic and prosodic contexts of the dialogue history. Empathy is the active attempt by humans to get closer to the interlocutor in a dialogue, and empathetic DSS is a technology for implementing this behavior in spoken dialogue systems. Our model is conditioned on the history of both linguistic and prosodic features to predict speech appropriate to the dialogue context; it can therefore be regarded as an extension of conventional DSS based on linguistic-feature dialogue-history modeling. To train the empathetic DSS model effectively, we investigate 1) a self-supervised learning model pretrained on a large speech corpus, 2) style-guided training that uses the prosodic embedding of the current utterance to be predicted, 3) cross-modal attention that combines the text and speech modalities, and 4) sentence-wise embedding to achieve prosody modeling at a finer granularity than utterance-wise modeling. The evaluation results show that 1) considering only the prosodic context of the dialogue history does not improve speech quality in empathetic DSS, and 2) introducing style-guided training and sentence-wise embedding modeling achieves higher speech quality than the conventional method.
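A minimal sketch of cross-modal attention over a dialogue history, one of the components listed above, assuming the text-side embeddings attend to the prosody-side embeddings to produce a context vector for conditioning the synthesizer. The dimensions and pooling are placeholders, not the authors' configuration.

```python
# Sketch: cross-modal attention between text and prosody histories (assumed dims).
import torch
import torch.nn as nn

class CrossModalContext(nn.Module):
    def __init__(self, text_dim=768, prosody_dim=256, model_dim=256, heads=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, model_dim)
        self.prosody_proj = nn.Linear(prosody_dim, model_dim)
        self.attn = nn.MultiheadAttention(model_dim, heads, batch_first=True)

    def forward(self, text_hist, prosody_hist):
        # text_hist: [B, N_turns, text_dim], prosody_hist: [B, N_turns, prosody_dim]
        q = self.text_proj(text_hist)
        kv = self.prosody_proj(prosody_hist)
        ctx, _ = self.attn(q, kv, kv)        # text queries attend to prosody
        return ctx.mean(dim=1)               # pooled dialogue-context vector [B, model_dim]

if __name__ == "__main__":
    m = CrossModalContext()
    out = m(torch.randn(2, 5, 768), torch.randn(2, 5, 256))
    print(out.shape)                         # torch.Size([2, 256])
```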
We present STUDIES, a new speech corpus for developing a voice agent that can speak in a friendly manner. Humans naturally control their speech prosody to empathize with each other. By incorporating this "empathetic dialogue" behavior into a spoken dialogue system, we can develop a voice agent that responds to users naturally. We designed the STUDIES corpus to include a speaker who explicitly expresses empathy toward the interlocutor's emotion. We describe our methodology for constructing an empathetic dialogue speech corpus and report the analysis results of the STUDIES corpus. We conducted a text-to-speech experiment as an initial investigation into developing a more natural voice agent that adjusts its speaking style in accordance with the interlocutor's emotion. The results show that using the interlocutor's emotion label and the dialogue-context embedding can produce speech with the same degree of naturalness as that obtained using the agent's emotion label. Our project page for the STUDIES corpus is http://sython.org/corpus/studies.
Many e-commerce marketplaces offer their users fast delivery options for free to meet the increasing needs of users, imposing an excessive burden on city logistics. Understanding e-commerce users' preferences for delivery options is therefore key to designing logistics policies. To this end, this study designs a stated choice survey in which respondents are faced with choice tasks among different delivery options and time slots; the survey was completed by 4,062 users from the three major metropolitan areas in Japan. To analyze the data, we estimated mixed logit models capturing taste heterogeneity as well as flexible substitution patterns. The model estimation results indicate that delivery attributes including fee, time, and time slot size are significant determinants of delivery option choices. Associations between users' preferences and socio-demographic characteristics, such as age, gender, teleworking frequency, and the presence of a delivery box, were also suggested. Moreover, we analyzed two willingness-to-pay measures for delivery, namely, the value of delivery time savings (VODT) and the value of time slot shortening (VOTS), and applied a semi-nonparametric approach to estimate their distributions in a data-oriented manner. Although VODT is highly heterogeneous among respondents, the estimated median VODT is 25.6 JPY/day, implying that more than half of the respondents would wait an additional day if the delivery fee were increased by only 26 JPY; that is, they do not necessarily need a fast delivery option but often request it when it is cheap or almost free. VOTS was found to be low, with a median of 5.0 JPY/hour; that is, users do not highly value the reduction in time slot size in monetary terms. These findings on e-commerce users' preferences can help in designing levels of service for last-mile delivery to significantly improve its efficiency.
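As a quick illustration of the willingness-to-pay measures discussed above, the snippet below computes VODT and VOTS as ratios of marginal utilities from a linear-in-parameters utility, as is standard for logit-type models. The coefficient values are hypothetical placeholders, not the estimates reported in the study.

```python
# Hypothetical example: willingness-to-pay as ratios of utility coefficients.
# Assume a linear utility U = b_fee * fee(JPY) + b_days * delivery_days
#                            + b_slot * slot_size(hours) + ...
b_fee = -0.040    # utility per JPY of delivery fee (hypothetical)
b_days = -1.02    # utility per additional delivery day (hypothetical)
b_slot = -0.20    # utility per additional hour of time-slot size (hypothetical)

# Value of delivery time savings (JPY/day): fee equivalent of one extra day.
vodt = b_days / b_fee
# Value of time-slot shortening (JPY/hour): fee equivalent of one hour of slot size.
vots = b_slot / b_fee

print(f"VODT = {vodt:.1f} JPY/day, VOTS = {vots:.1f} JPY/hour")
# With these placeholder coefficients: VODT = 25.5 JPY/day and VOTS = 5.0 JPY/hour,
# in the same ballpark as the medians reported in the abstract.
```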
In recent years, various service robots have been introduced in stores as recommendation systems. Previous studies attempted to increase the influence of these robots by improving their social acceptance and trust. However, when such service robots recommend a product to customers in real environments, the effect on the customers is influenced not only by the robot itself, but also by the social influence of the surrounding people, such as store clerks. Leveraging the social influence of the clerks may therefore increase the influence of the robots on the customers. Hence, we compared the influence of robots with and without collaborative customer service between the robots and clerks in two bakery stores. The experimental results showed that collaborative customer service increased the purchase rate of the recommended bread and improved customers' impressions of the robot and of their store experience. Because the results also showed that the workload required for the clerks to collaborate with the robot was not high, this study suggests that introducing collaborative customer service could be effective for stores that deploy service robots.
We introduce \textsc{PoliteRewrite} -- a dataset for polite language rewrite, a novel sentence rewrite task. Compared with previous text style transfer tasks, which can mostly be addressed by slight token- or phrase-level edits, polite language rewrite requires deep understanding of an offensive and impolite sentence and extensive sentence-level edits to deliver the same message euphemistically and politely. This makes the task more challenging -- not only for NLP models but also for human annotators. To reduce the human effort needed for efficient annotation, we first propose a novel annotation paradigm in which human annotators collaborate with GPT-3.5 to annotate \textsc{PoliteRewrite}. The released dataset has 10K polite sentence rewrites annotated collaboratively by GPT-3.5 and humans, which can be used as a gold standard for training, validation, and testing, and 100K high-quality polite sentence rewrites generated by GPT-3.5 without human review. We hope this work (the 10K + 100K dataset will be released soon) can contribute to research on more challenging sentence rewriting and provoke further thought on resource annotation paradigms that leverage large pretrained models.
This study targets the mixed-integer black-box optimization (MI-BBO) problem, in which continuous and integer variables must be optimized simultaneously. The CMA-ES, our focus in this study, is a population-based stochastic search method that samples solution candidates from a multivariate Gaussian distribution (MGD) and shows excellent performance in continuous BBO. The parameters of the MGD, the mean and (co)variance, are updated based on the evaluation values of the candidate solutions in the CMA-ES. If the CMA-ES is applied to MI-BBO with straightforward discretization, however, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, which leads to stagnation of the optimization. In particular, when binary variables are included in the problem, this stagnation is more likely to occur because the granularity of the discretization is wider, and the existing modification to the CMA-ES does not address it. To overcome these limitations, we propose a simple extension of the CMA-ES based on lower-bounding the marginal probabilities associated with the generation of integer variables in the MGD. Numerical experiments on MI-BBO benchmark problems demonstrate the efficiency and robustness of the proposed method. Furthermore, to demonstrate the generality of the idea, we incorporate it into the multi-objective CMA-ES in addition to the single-objective case and verify its performance on bi-objective mixed-integer benchmark problems.
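A minimal sketch of one way to read "lower-bounding the marginal probabilities" for integer coordinates: enforce a per-coordinate minimum standard deviation so that samples keep a non-negligible chance of leaving the current discretization bin. The threshold rule and constants below are illustrative assumptions, not the exact correction proposed in the paper.

```python
# Sketch: keep integer coordinates of a Gaussian search distribution "alive"
# by lower-bounding their marginal standard deviations (assumed rule).
import numpy as np
from scipy.stats import norm

def lower_bound_integer_margins(sigma, C, int_mask, step=1.0, p_min=0.05):
    """
    sigma: global step size; C: covariance matrix (d x d);
    int_mask: boolean array marking integer-valued coordinates;
    step: discretization granularity; p_min: minimum probability of
    sampling outside the current bin (split over both tails).
    Returns a corrected copy of C.
    """
    C = C.copy()
    # If the mean sits at a bin center, P(leave bin) = 2 * (1 - Phi(step / (2 s)));
    # requiring this >= p_min gives s >= step / (2 * Phi^{-1}(1 - p_min / 2)).
    s_min = step / (2.0 * norm.ppf(1.0 - p_min / 2.0))
    for i in np.where(int_mask)[0]:
        s_marginal = sigma * np.sqrt(C[i, i])
        if s_marginal < s_min:
            C[i, i] *= (s_min / s_marginal) ** 2   # inflate this marginal variance
    return C

if __name__ == "__main__":
    C = np.diag([1.0, 1e-6, 0.5])                  # second coordinate has collapsed
    C_fixed = lower_bound_integer_margins(sigma=0.3, C=C,
                                          int_mask=np.array([False, True, True]))
    print(np.sqrt(np.diag(C_fixed)) * 0.3)         # corrected marginal std devs
```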
Recent years have seen progress beyond domain-specific sound separation for speech or music towards universal sound separation for arbitrary sounds. Prior work on universal sound separation has investigated separating a target sound out of an audio mixture given a text query. Such text-queried sound separation systems provide a natural and scalable interface for specifying arbitrary target sounds. However, supervised text-queried sound separation systems require costly labeled audio-text pairs for training. Moreover, the audio provided in existing datasets is often recorded in a controlled environment, causing a considerable generalization gap to noisy audio in the wild. In this work, we aim to approach text-queried universal sound separation by using only unlabeled data. We propose to leverage the visual modality as a bridge to learn the desired audio-textual correspondence. The proposed CLIPSep model first encodes the input query into a query vector using the contrastive language-image pretraining (CLIP) model, and the query vector is then used to condition an audio separation model to separate out the target sound. While the model is trained on image-audio pairs extracted from unlabeled videos, at test time we can instead query the model with text inputs in a zero-shot setting, thanks to the joint language-image embedding learned by the CLIP model. Further, videos in the wild often contain off-screen sounds and background noise that may hinder the model from learning the desired audio-textual correspondence. To address this problem, we further propose an approach called noise invariant training for training a query-based sound separation model on noisy data. Experimental results show that the proposed models successfully learn text-queried universal sound separation using only noisy unlabeled videos, even achieving competitive performance against a supervised model in some settings.
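The zero-shot swap at the heart of the approach above (train with CLIP image embeddings, query with CLIP text embeddings at test time) can be sketched as follows. The separator module, its FiLM-style conditioning, and all dimensions are placeholders rather than the actual CLIPSep architecture; the CLIP calls use the Hugging Face transformers interface as an assumed convenience.

```python
# Sketch: condition a mask-based separator on a CLIP query vector (assumed model).
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor

class QueryConditionedSeparator(nn.Module):
    def __init__(self, n_freq=512, query_dim=512, hidden=256):
        super().__init__()
        self.film = nn.Linear(query_dim, 2 * hidden)      # FiLM-style conditioning
        self.enc = nn.Linear(n_freq, hidden)
        self.dec = nn.Linear(hidden, n_freq)

    def forward(self, mixture_spec, query):               # [B, T, n_freq], [B, query_dim]
        h = torch.relu(self.enc(mixture_spec))
        gamma, beta = self.film(query).chunk(2, dim=-1)
        h = h * gamma.unsqueeze(1) + beta.unsqueeze(1)
        mask = torch.sigmoid(self.dec(h))
        return mask * mixture_spec                        # estimated target spectrogram

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
separator = QueryConditionedSeparator()

# At test time, a text prompt replaces the image embedding used during training.
inputs = processor(text=["sound of a dog barking"], return_tensors="pt", padding=True)
with torch.no_grad():
    query_vec = clip.get_text_features(**inputs)          # [1, 512]
mixture = torch.rand(1, 100, 512)                         # stand-in magnitude spectrogram
estimate = separator(mixture, query_vec)
print(estimate.shape)                                     # torch.Size([1, 100, 512])
```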