The Guzheng is a traditional Chinese instrument with a rich variety of playing techniques. Instrument playing techniques (IPTs) play an important role in musical performance. However, most existing IPT detection methods handle variable-length audio inefficiently and offer no guarantee of generalization, since they rely on a single sound bank for both training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using a fully convolutional network that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide the audio into several notes, and its predictions are fused with frame-wise IPT predictions. During fusion, we add up the frame-wise IPT predictions inside each note and take the IPT with the highest probability within that note as the final output of that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% in frame-level accuracy and 80.76% in note-level F1 score, surpassing existing works and demonstrating the effectiveness of the proposed method in IPT detection.
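The fusion step described above can be sketched as follows. This is a minimal illustration of the idea (sum frame-wise IPT probabilities inside each detected note, then pick the class with the highest total), not the authors' code; all function and label names are hypothetical.

```python
# Hypothetical sketch of note-level fusion: frame-wise IPT probabilities are
# accumulated within each note segment produced by the onset detector, and the
# IPT class with the highest accumulated probability labels the note.

def fuse_ipt_predictions(frame_probs, note_boundaries, ipt_labels):
    """frame_probs: list of per-frame probability vectors (one entry per IPT class).
    note_boundaries: (start_frame, end_frame) pairs from the onset detector.
    Returns one IPT label per note."""
    note_labels = []
    for start, end in note_boundaries:
        # Accumulate class probabilities over all frames inside the note.
        totals = [0.0] * len(ipt_labels)
        for frame in frame_probs[start:end]:
            for i, p in enumerate(frame):
                totals[i] += p
        # The class with the highest accumulated probability wins.
        note_labels.append(ipt_labels[totals.index(max(totals))])
    return note_labels

# Toy example: three IPT classes, two notes of two frames each.
probs = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1],   # note 1 frames
         [0.1, 0.1, 0.8], [0.2, 0.2, 0.6]]   # note 2 frames
print(fuse_ipt_predictions(probs, [(0, 2), (2, 4)],
                           ["vibrato", "glissando", "tremolo"]))
# → ['vibrato', 'tremolo']
```

Summing probabilities before the argmax makes the note label robust to a few misclassified frames, which is the point of fusing the onset detector with the frame-wise predictor.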
translated by Google Translate
In this paper, we introduce Jointist, an instrument-aware multi-instrument framework that is capable of transcribing, recognizing, and separating multiple musical instruments from an audio clip. Jointist consists of an instrument recognition module that conditions the other modules: a transcription module that outputs instrument-specific piano rolls, and a source separation module that utilizes instrument information and transcription results. The instrument conditioning is designed for explicit multi-instrument functionality, while the connection between the transcription and source separation modules is designed for better transcription performance. Our challenging problem formulation makes the model highly useful in the real world, as modern popular music typically consists of multiple instruments. Its novelty, however, calls for a new perspective on how such a model should be evaluated. During the experiments, we assess the model from various aspects, providing a new evaluation perspective for multi-instrument transcription. We also argue that transcription models can serve as a preprocessing module for other music analysis tasks. In experiments on several downstream tasks, the symbolic representation provided by our transcription model turns out to be helpful alongside spectrograms in solving downbeat detection, chord recognition, and key estimation.
Musical expression requires control of both what notes are played and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for control. In this work, we introduce MIDI-DDSP, a hierarchical model of musical instruments that enables both realistic neural audio synthesis and detailed user control. Starting from interpretable Differentiable Digital Signal Processing (DDSP) synthesis parameters, we infer musical notes and high-level properties of their expressive performance (such as timbre, vibrato, dynamics, and articulation). This creates a 3-level hierarchy (notes, performance, synthesis) that gives individuals the choice to intervene at each level, or to utilize trained priors (performance given notes, synthesis given performance) for creative assistance. Through quantitative experiments and listening tests, we demonstrate that this hierarchy can reconstruct high-fidelity audio, accurately predict performance attributes of a note sequence, independently manipulate the attributes of a given performance, and, as a complete system, generate realistic audio from a novel note sequence. By utilizing an interpretable hierarchy with multiple levels of granularity, MIDI-DDSP opens the door to assistive tools that empower individuals across a diverse range of musical experience.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
In recent years, the task of Automatic Music Transcription (AMT), whereby various attributes of music notes are estimated from audio, has received increasing attention. At the same time, the related task of Multi-Pitch Estimation (MPE) remains a challenging but necessary component of almost all AMT approaches, even if only implicitly. In the context of AMT, pitch information is typically quantized to the nominal pitches of the Western music scale. Even in more general contexts, MPE systems typically produce pitch predictions with some degree of quantization. In certain applications of AMT, such as Guitar Tablature Transcription (GTT), it is more meaningful to estimate continuous-valued pitch contours. Guitar tablature has the capacity to represent various playing techniques, some of which involve pitch modulation. Contemporary approaches to AMT do not adequately address pitch modulation, and offer only less quantization at the expense of more model complexity. In this paper, we present a GTT formulation that estimates continuous-valued pitch contours, grouping them according to their string and fret of origin. We demonstrate that for this task, the proposed method significantly improves the resolution of MPE and simultaneously yields tablature estimation results competitive with baseline models.
Sound event detection (SED) has gained increasing attention due to its wide applications in surveillance, video indexing, and beyond. Existing SED models mainly generate frame-level predictions, converting the task into a sequential multi-label classification problem. A critical issue with frame-based models is that they pursue the best frame-level predictions rather than the best event-level ones. Besides, they require post-processing and cannot be trained in an end-to-end fashion. This paper first presents the one-dimensional Detection Transformer (1D-DETR), inspired by the Detection Transformer for image object detection. Furthermore, given the characteristics of SED, an audio query branch and a one-to-many matching strategy for fine-tuning are added to 1D-DETR, forming the Sound Event Detection Transformer (SEDT). To our knowledge, SEDT is the first event-based and end-to-end SED model. Experiments are conducted on the URBAN-SED dataset and the DCASE2019 Task 4 dataset, and both show that SEDT can achieve competitive performance.
Audio segmentation and sound event detection are key topics in machine listening that aim to detect acoustic classes and their respective boundaries. They are useful for audio analysis, speech recognition, audio indexing, and music information retrieval. In recent years, most research articles have adopted segmentation-by-classification: the audio is divided into small frames and classification is performed on each frame individually. In this paper, we present a novel approach called You Only Hear Once (YOHO), inspired by the YOLO algorithm popularly adopted in computer vision. We convert the detection of acoustic boundaries into a regression problem instead of frame-based classification. This is done by having separate output neurons to detect the presence of an audio class and to predict its start and end points. Compared with state-of-the-art convolutional recurrent neural networks, YOHO yields a relative improvement in F-measure ranging from 1% to 6% across multiple datasets for audio segmentation and sound event detection. As YOHO's output is more end-to-end and there are fewer neurons to predict, its inference speed is at least 6 times faster than segmentation-by-classification. In addition, since the approach predicts acoustic boundaries directly, post-processing and smoothing are about 7 times faster.
Filler words such as "uh" or "um" are sounds or words people use to signal that they are pausing to think. Finding and removing filler words from recordings is a common and tedious task in media editing. Automatic detection and classification of filler words could greatly aid this task, but to date there has been little research on the problem. A key reason is the absence of a dataset with annotated filler words for model training and evaluation. In this work, we present a novel speech dataset, PodcastFillers, with 35K annotated filler words and 50K annotations of other sounds that commonly occur in podcasts, such as breaths, laughter, and word repetitions. We propose a pipeline that leverages VAD and ASR to detect filler candidates, and a classifier to distinguish filler word types. We evaluate the proposed pipeline on PodcastFillers, compare it against several baselines, and provide a detailed ablation study. In particular, we evaluate the importance of using ASR and how the pipeline compares to a transcription-free approach resembling keyword spotting. We show that our pipeline obtains state-of-the-art results, and that leveraging ASR strongly outperforms keyword spotting. We make PodcastFillers publicly available, in the hope that our work serves as a benchmark for future research.
The analysis of the structure of musical pieces is a task that remains a challenge for artificial intelligence, especially in the field of deep learning. It requires the prior identification of the structural boundaries of the piece. This structural boundary analysis has recently been studied with unsupervised methods and end-to-end techniques, such as convolutional neural networks (CNNs) using mel-scaled log-magnitude spectrograms (MLS), self-similarity matrices (SSM), or self-similarity lag matrices (SSLM) as inputs, trained with human annotations. Several studies, divided into unsupervised and end-to-end methods, have been published in which the preprocessing is done in different ways, using different distance metrics and audio characteristics, so a generalized preprocessing method for computing the model inputs is missing. The objective of this work is to establish a general method for preprocessing these inputs by comparing inputs derived from different pooling strategies, distance metrics, and audio characteristics, also considering the computation time needed to obtain them. We also establish the most effective combination of inputs to be delivered to the CNN in order to provide the most efficient way of extracting the boundaries of the structure of musical pieces. With an adequate combination of input matrices and pooling strategies, we obtain an F-measure of 0.411 that outperforms the current state of the art obtained under the same conditions.
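One of the inputs discussed above, the self-similarity matrix, is straightforward to compute from per-frame feature vectors. The sketch below uses cosine similarity as the distance metric; this is a generic illustration with hypothetical names, not the paper's preprocessing code.

```python
# Minimal sketch of building a self-similarity matrix (SSM) from per-frame
# feature vectors (e.g. MLS frames) using cosine similarity.

import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def self_similarity_matrix(features):
    """features: list of N per-frame feature vectors.
    Returns an N x N matrix of pairwise cosine similarities; repeated
    sections of a piece show up as high-similarity blocks and diagonals."""
    n = len(features)
    return [[cosine(features[i], features[j]) for j in range(n)]
            for i in range(n)]

frames = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
ssm = self_similarity_matrix(frames)
# Identical frames give similarity 1.0; orthogonal frames give 0.0.
```

Swapping `cosine` for another metric (e.g. Euclidean distance) is exactly the kind of preprocessing choice the comparison in this work is about.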
Fall detection for the elderly is a well-researched problem with several proposed solutions, including wearable and non-wearable techniques. Despite the high detection rates of existing techniques, their adoption by the target population is lacking due to the need to wear devices and user privacy concerns. Our paper provides a novel, non-wearable, non-intrusive, and scalable solution for fall detection, deployed on an autonomous mobile robot equipped with a microphone. The proposed method uses ambient sound input recorded in people's homes. We specifically target the bathroom environment, as it is highly prone to falls and existing techniques cannot be deployed there without jeopardizing user privacy. The present work develops a solution based on a Transformer architecture that takes noisy sound input from bathrooms and classifies it into fall/no-fall classes with an accuracy of 0.8673. Moreover, the proposed approach is extendable to other indoor environments besides bathrooms, and is suitable for deployment in elderly homes, hospitals, and rehabilitation facilities without requiring users to wear any device or be constantly "watched" by sensors.
Music discovery services let users identify songs from short mobile recordings. These solutions are often based on Audio Fingerprinting, and rely more specifically on the extraction of spectral peaks in order to be robust to a number of distortions. Few works have been done to study the robustness of these algorithms to background noise captured in real environments. In particular, AFP systems still struggle when the signal to noise ratio is low, i.e when the background noise is strong. In this project, we tackle this problematic with Deep Learning. We test a new hybrid strategy which consists of inserting a denoising DL model in front of a peak-based AFP algorithm. We simulate noisy music recordings using a realistic data augmentation pipeline, and train a DL model to denoise them. The denoising model limits the impact of background noise on the AFP system's extracted peaks, improving its robustness to noise. We further propose a novel loss function to adapt the DL model to the considered AFP system, increasing its precision in terms of retrieved spectral peaks. To the best of our knowledge, this hybrid strategy has not been tested before.
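The spectral peaks that peak-based AFP systems rely on, and which the proposed loss function is designed to preserve, can be illustrated with a generic local-maximum picker. This is a simplified formulation for illustration, not the specific peak extractor of the AFP system studied in the project.

```python
# Illustrative sketch of spectral-peak extraction for audio fingerprinting:
# keep only time-frequency bins that are local maxima over a 3x3
# neighbourhood and exceed a magnitude threshold.

def extract_peaks(spectrogram, threshold=0.0):
    """spectrogram: 2-D list [time][freq] of magnitudes.
    Returns (t, f) coordinates of local maxima."""
    peaks = []
    T, F = len(spectrogram), len(spectrogram[0])
    for t in range(T):
        for f in range(F):
            v = spectrogram[t][f]
            if v <= threshold:
                continue
            # Compare against every neighbour inside the spectrogram bounds.
            neighbours = [spectrogram[i][j]
                          for i in range(max(0, t - 1), min(T, t + 2))
                          for j in range(max(0, f - 1), min(F, f + 2))
                          if (i, j) != (t, f)]
            if all(v > n for n in neighbours):
                peaks.append((t, f))
    return peaks

spec = [[0.1, 0.2, 0.1],
        [0.2, 0.9, 0.2],
        [0.1, 0.2, 0.1]]
print(extract_peaks(spec))  # → [(1, 1)]
```

Because strong background noise introduces spurious local maxima and masks genuine ones, a denoising front-end that restores the clean magnitudes directly improves the set of peaks this kind of extractor retrieves.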
Automatic Music Transcription (AMT), inferring musical notes from raw audio, is a challenging task at the core of music understanding. Unlike Automatic Speech Recognition (ASR), which typically focuses on the words of a single speaker, AMT often requires transcribing multiple instruments simultaneously, all while preserving fine-scale pitch and timing information. Furthermore, many AMT datasets are "low-resource", as even expert musicians find music transcription difficult and time-consuming. Thus, prior work has focused on task-specific architectures, tailored to the individual instruments of each task. In this work, motivated by the promising results of sequence-to-sequence transfer learning for low-resource Natural Language Processing (NLP), we demonstrate that a general-purpose Transformer model can perform multi-task AMT, jointly transcribing arbitrary combinations of musical instruments across several transcription datasets. We show that this unified training framework achieves high-quality transcription results across a range of datasets, dramatically improving performance for low-resource instruments (such as guitar) while preserving strong performance for abundant instruments (such as piano). Finally, by expanding the scope of AMT, we expose the need for more consistent evaluation metrics and better dataset alignment, and provide a strong baseline for this new direction of multi-task AMT.
We propose a neural network model for denoising historical music recordings. Our model transforms its input into a time-frequency representation via the short-time Fourier transform (STFT) and processes the resulting complex spectrogram using a convolutional neural network. The network is trained with both reconstruction and adversarial objectives on a synthetic noisy music dataset, created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples from the synthetic dataset, and qualitatively through human ratings of actual historical recording samples. Our results show that the proposed method is effective at removing noise while preserving the quality and details of the original music.
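The general mechanism such a spectrogram-domain denoiser implements can be reduced to a toy form: attenuating time-frequency bins dominated by noise. The sketch below applies a fixed hand-written mask for illustration; in the actual approach, the network learns the mapping from noisy to clean complex spectrograms from data.

```python
# Toy sketch of spectrogram masking, the basic mechanism behind
# spectrogram-domain denoising: scale each time-frequency bin by a value
# in [0, 1], suppressing noise-dominated bins. The mask here is hand-set
# purely for illustration.

def apply_mask(spectrogram, mask):
    """Element-wise product of a magnitude spectrogram and a [0, 1] mask."""
    return [[s * m for s, m in zip(srow, mrow)]
            for srow, mrow in zip(spectrogram, mask)]

noisy = [[1.0, 0.5],
         [0.8, 0.2]]
mask  = [[1.0, 0.0],
         [1.0, 0.0]]   # keep column 0 (music), drop column 1 (noise)
print(apply_mask(noisy, mask))  # → [[1.0, 0.0], [0.8, 0.0]]
```

Operating on the complex spectrogram, as the model described above does, additionally lets the network correct the phase rather than only scaling magnitudes.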
We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself -the correspondence between the visual and the audio streams, and we introduce a novel "Audio-Visual Correspondence" learning task that makes use of this. Training visual and audio networks from scratch, without any additional supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task, and, more interestingly, result in good visual and audio representations. These features set the new state-of-the-art on two sound classification benchmarks, and perform on par with the state-of-the-art selfsupervised approaches on ImageNet classification. We also demonstrate that the network is able to localize objects in both modalities, as well as perform fine-grained recognition tasks.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 2nd International Workshop on Reading Music Systems, held in Delft on the 2nd of November 2019.
Audio sound recognition and classification is used for many tasks and applications, including human voice recognition, music recognition, and audio tagging. In this paper we apply Mel-Frequency Cepstral Coefficients (MFCC) in combination with a range of machine learning models to identify (Australian) birds from publicly available audio files of their birdsong. We present the approaches used for data processing and augmentation, and compare the results of various state-of-the-art machine learning models. We achieve an overall accuracy of 91% for the top-5 birds from the 30 selected as the case study. Applying the models to more challenging and diverse audio files comprising 152 bird species, we achieve an accuracy of 58%.
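The "top-5" figure quoted above can be computed as follows. This is a minimal sketch of top-k accuracy with hypothetical class names and scores, not the authors' evaluation code.

```python
# Minimal sketch of top-k accuracy: a sample counts as correct if its true
# class appears among the k highest-scoring predictions.

def top_k_accuracy(scores, labels, k=5):
    """scores: per-sample list of (class, score) pairs.
    labels: the true class for each sample.
    Returns the fraction of samples whose true class is in the top k."""
    correct = 0
    for per_class, truth in zip(scores, labels):
        ranked = sorted(per_class, key=lambda cs: cs[1], reverse=True)
        if truth in [cls for cls, _ in ranked[:k]]:
            correct += 1
    return correct / len(labels)

scores = [[("magpie", 0.9), ("lorikeet", 0.5), ("kookaburra", 0.1)],
          [("magpie", 0.2), ("lorikeet", 0.3), ("kookaburra", 0.9)]]
labels = ["lorikeet", "magpie"]
print(top_k_accuracy(scores, labels, k=2))  # → 0.5
```

Top-k metrics are a common choice when many classes are acoustically similar, as with closely related bird species, since they credit the model for ranking the true species highly even when it is not the single best guess.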
The problem of respiratory sound classification has received good attention from the clinical scientist and medical researcher communities in the past year for the diagnosis of COVID-19. To date, various Artificial Intelligence (AI) models have entered the real world to detect COVID-19 from human-generated sounds such as voice/speech, cough, and breathing. Convolutional Neural Network (CNN) models are implemented to solve many real-world problems on AI-based machines. In this context, a one-dimensional (1D) CNN is proposed and implemented to diagnose COVID-19 respiratory disease from human sounds such as voice, cough, and breathing. An augmentation-based mechanism is applied to improve the preprocessing performance on the COVID-19 sound dataset, and the 1D convolutional network is used to automate COVID-19 disease diagnosis. Furthermore, a DDAE (Data De-noising Auto-Encoder) technique is used to generate deep sound features as input, instead of adopting the standard MFCC (Mel-Frequency Cepstral Coefficients) input, and it achieves better accuracy and performance than previous models.
Deep learning techniques for separating audio into different sound sources face several challenges. Standard architectures require training separate models for different types of sound sources. Although some universal separators employ a single model to target multiple sources, they have difficulty generalizing to unseen sources. In this paper, we propose a three-component pipeline for training a universal audio source separator from a large, but weakly labeled, dataset: AudioSet. First, we propose a transformer-based sound event detection system for processing weakly labeled training data. Second, we devise a query-based audio separation model that leverages this data for model training. Third, we design a latent embedding processor to encode queries that specify audio targets for separation, allowing for zero-shot generalization. Our approach uses a single model for source separation of multiple sound types, and relies solely on weakly labeled data for training. In addition, the proposed audio separator can be used in a zero-shot setting, learning to separate types of audio sources that were never seen in training. To evaluate the separation performance, we test our model on MUSDB18 while training on the disjoint AudioSet. We further verify the zero-shot performance by conducting another experiment on audio source types held out from training. The model achieves Source-to-Distortion Ratio (SDR) performance comparable to current supervised models in both scenarios.
The deployment of expert systems running on wireless acoustic sensor networks composed of bioacoustic monitoring devices that recognize bird species from their sounds would enable the automation of many tasks of ecological value, including the analysis of bird population composition and the detection of endangered species in areas of environmental interest. Endowing these devices with accurate audio classification capabilities is possible thanks to the latest advances in artificial intelligence, among which deep learning techniques excel. However, a key issue in making bioacoustic devices affordable is the use of small-footprint deep neural networks that can be embedded in resource- and battery-constrained hardware platforms. Therefore, this work presents a critical comparative analysis between two heavy, large-footprint deep neural networks (VGG16 and ResNet50) and a lightweight alternative, MobileNetV2. Our experimental results reveal that MobileNetV2 achieves an average F1-score less than 5% below that of ResNet50 (0.789 vs. 0.834), while outperforming VGG16 with a footprint size nearly 40 times smaller. Moreover, to compare the models, we have created and made publicly available the Western Mediterranean Wetland Birds dataset, consisting of 201.6 minutes and 5,795 audio excerpts of 20 endemic bird species of the Aiguamolls de l'Empordà Natural Park.
Mitosis nuclei count is one of the important indicators for the pathological diagnosis of breast cancer. Manual annotation requires experienced pathologists and is very time-consuming and inefficient. With the development of deep learning methods, some models with good performance have emerged, but their generalization ability needs further strengthening. In this paper, we propose a two-stage mitosis segmentation and classification method, named SCMitosis. First, segmentation with a high recall rate is achieved by the proposed depthwise separable convolution residual block and channel-spatial attention gate. Then, a classification network is cascaded to further improve the detection performance for mitosis nuclei. The proposed model is verified on the ICPR 2012 dataset, obtaining the highest F-score value of 0.8687 compared with current state-of-the-art algorithms. In addition, the model also achieves good performance on the GZMH dataset, which was prepared by our group and will be released for the first time with the publication of this paper. The code will be available at: https://github.com/antifen/mitosis-nuclei-segmentation.