The neural transducer is now the most popular end-to-end model for speech recognition due to its natural streaming ability. However, it is challenging to adapt with text-only data. The factorized neural transducer (FNT) model was proposed to mitigate this problem, but its improved adaptability on text-only data comes at the cost of lower accuracy compared with the standard neural transducer. We propose several methods to improve the performance of the FNT model: adding a CTC criterion during training, adding a KL divergence loss during adaptation, using a pre-trained language model to seed the vocabulary predictor, and an efficient adaptation approach that interpolates the vocabulary predictor with an n-gram language model. Combining these approaches yields a relative word-error-rate reduction of 9.48\% over the standard FNT model. Furthermore, n-gram interpolation with the vocabulary predictor greatly speeds up adaptation while maintaining satisfactory adaptation performance.
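As a rough sketch of the last two ingredients (the exact formulation is an assumption here, not necessarily the paper's), the vocabulary predictor distribution can be linearly interpolated with the n-gram LM, while a KL term keeps the adapted predictor close to the seed predictor:
\[
P(y_u \mid y_{<u}) = (1-\lambda)\, P_{\mathrm{vocab}}(y_u \mid y_{<u}) + \lambda\, P_{\mathrm{ngram}}(y_u \mid y_{<u}), \qquad
\mathcal{L}_{\mathrm{adapt}} = \mathcal{L}_{\mathrm{CE}} + \beta\, \mathrm{KL}\!\left(P_{\mathrm{vocab}}^{\mathrm{seed}} \,\|\, P_{\mathrm{vocab}}^{\mathrm{adapted}}\right),
\]
where \(\lambda\) and \(\beta\) are hypothetical interpolation and regularization weights.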
Self-supervised learning (SSL) methods such as WavLM have shown promising speech separation (SS) results in small-scale simulation-based experiments. In this work, we extend the exploration of SSL-based SS by massively scaling up both the pre-training data (more than 300K hours) and the fine-tuning data (10K hours). We also investigate various techniques to efficiently integrate the pre-trained model with the SS network under a limited computation budget, including a low-frame-rate SSL model training setup and a fine-tuning scheme that uses only part of the pre-trained model. Compared with a supervised baseline and a WavLM-based SS model using feature embeddings from the previously released WavLM trained on 94K hours, our proposed model obtains relative word error rate (WER) reductions of 15.9% and 11.2%, respectively, on a simulated far-field speech mixture test set. For conversation transcription on real meeting recordings using continuous speech separation, the proposed model achieves relative WER reductions of 6.8% and 10.6% over the purely supervised baseline on the AMI and ICSI evaluation sets, respectively, while reducing the computational cost by 38%.
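A minimal sketch of the "fine-tune only part of the pre-trained model" idea, assuming a WavLM-style stack of transformer layers and a simple per-speaker masking head (names, sizes, and the mask head are illustrative, not the authors' code):

```python
# Keep only the first `num_layers_kept` layers of a pre-trained SSL encoder and
# fine-tune them jointly with a small mask-estimation head for separation.
import torch
import torch.nn as nn

class PartialSSLSeparator(nn.Module):
    def __init__(self, pretrained_layers, feat_dim=768, num_layers_kept=8, num_spk=2):
        super().__init__()
        # Reuse only the lower layers of the pre-trained encoder.
        self.layers = nn.ModuleList(pretrained_layers[:num_layers_kept])
        # Predict one mask per speaker over the SSL feature dimension.
        self.mask_head = nn.Linear(feat_dim, num_spk * feat_dim)
        self.num_spk, self.feat_dim = num_spk, feat_dim

    def forward(self, feats):                          # feats: (batch, frames, feat_dim)
        for layer in self.layers:
            feats = layer(feats)
        masks = torch.sigmoid(self.mask_head(feats))
        masks = masks.view(feats.size(0), feats.size(1), self.num_spk, self.feat_dim)
        return masks * feats.unsqueeze(2)              # (batch, frames, num_spk, feat_dim)

# Stand-in layers in place of real pre-trained weights, just to show the shapes.
dummy_layers = [nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
                for _ in range(12)]
model = PartialSSLSeparator(dummy_layers)
out = model(torch.randn(2, 100, 768))                  # -> (2, 100, 2, 768)
```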
Automatic Speech Recognition (ASR) systems typically yield output in lexical form, whereas humans prefer a written form. To bridge this gap, ASR systems usually employ Inverse Text Normalization (ITN). In previous work, Weighted Finite State Transducers (WFSTs) have been used for ITN. WFSTs are well suited to this task, but their size and run-time costs can make deployment in embedded applications challenging. In this paper, we describe the development of an on-device ITN system that is streaming, lightweight, and accurate. At the core of our system is a streaming transformer tagger that tags lexical tokens from ASR. The tag indicates which ITN category, if any, should be applied. We then apply an ITN-category-specific WFST only to the tagged text to reliably perform the ITN conversion. We show that the proposed ITN solution performs on par with strong baselines while being significantly smaller in size and retaining customization capabilities.
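The tag-then-transduce pipeline can be schematized as follows (function and object names are hypothetical; the category inventory and the WFST interface are assumptions):

```python
# Schematic of the described pipeline: a streaming tagger labels each lexical token
# with an ITN category, and a small category-specific WFST rewrites only the tagged span.
def apply_itn(tokens, tagger, wfsts):
    """tokens: list of lexical ASR tokens; tagger(tokens) -> one category per token
    (e.g. "O" for untagged); wfsts: dict mapping category -> object with .rewrite(text)."""
    tags = tagger(tokens)
    output, span, span_cat = [], [], None

    def flush():
        text = " ".join(span)
        output.append(wfsts[span_cat].rewrite(text) if span_cat != "O" else text)

    for tok, cat in zip(tokens, tags):
        if cat != span_cat and span:      # category changed: rewrite the finished span
            flush()
            span = []
        span.append(tok)
        span_cat = cat
    if span:
        flush()
    return " ".join(output)

# e.g. apply_itn(["twenty", "five", "dollars"], tagger, wfsts) might yield "$25"
```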
With the development of depth sensors in recent years, RGBD object tracking has received significant attention. Compared with traditional RGB object tracking, the additional depth modality can effectively mitigate interference between the target and the background. However, some existing RGBD trackers use the two modalities separately, so particularly useful shared information between them is ignored. On the other hand, some methods attempt to fuse the two modalities by treating them equally, which causes modality-specific features to be lost. To tackle these limitations, we propose a novel Dual-fused Modality-aware Tracker (termed DMTracker), which aims to learn informative and discriminative representations of the target objects for robust RGBD tracking. DMTracker contains two fusion modules: the first focuses on extracting the modality-shared information via cross-modal attention, and the second integrates the RGB-specific and depth-specific information to enhance the fused features. By fusing both modality-shared and modality-specific information in a modality-aware scheme, DMTracker can learn discriminative representations in complex tracking scenes. Experiments show that our proposed tracker achieves very promising results on challenging RGBD benchmarks. Code is available at \url{https://github.com/ShangGaoG/DMTracker}.
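A minimal sketch of the cross-modal-attention fusion step described above (the dimensions and the symmetric two-way attention are assumptions, not the released DMTracker code):

```python
# Modality-shared fusion: RGB tokens attend to depth tokens and vice versa,
# and the two attended streams are summed into a shared representation.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.rgb_to_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_to_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb, depth):                 # both: (batch, tokens, dim)
        shared_rgb, _ = self.rgb_to_depth(query=rgb, key=depth, value=depth)
        shared_depth, _ = self.depth_to_rgb(query=depth, key=rgb, value=rgb)
        return shared_rgb + shared_depth           # modality-shared features

fusion = CrossModalFusion()
shared = fusion(torch.randn(1, 64, 256), torch.randn(1, 64, 256))
```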
End-to-end formulation of automatic speech recognition (ASR) and speech translation (ST) makes it easy to use a single model for both multilingual ASR and many-to-many ST. In this paper, we propose streaming language-agnostic multilingual speech recognition and translation using neural transducers (LAMASSU). To enable multilingual text generation in LAMASSU, we conduct a systematic comparison between specified and unified prediction and joint networks. We leverage a language-agnostic multilingual encoder that substantially outperforms shared encoders. To enhance LAMASSU, we propose to feed the target language identity (LID) to the encoders. We also apply connectionist temporal classification (CTC) regularization to transducer training. Experimental results show that LAMASSU not only drastically reduces the model size but also outperforms monolingual ASR and bilingual ST models.
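One common way to apply CTC regularization, and presumably what is meant here, is an auxiliary CTC loss computed on the shared encoder output and added to the transducer loss (the weight \(\lambda\) is an assumption):
\[
\mathcal{L} = \mathcal{L}_{\mathrm{transducer}} + \lambda\, \mathcal{L}_{\mathrm{CTC}}.
\]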
In this paper, we introduce our work on building a Streaming Multilingual Speech Model (SM2), which can transcribe or translate multiple spoken languages into text in a target language. The backbone of SM2 is the Transformer Transducer, which has strong streaming capability. Instead of human-labeled speech translation (ST) data, SM2 models are trained on weakly supervised data generated by converting the transcriptions in speech recognition corpora with a machine translation service. With 351 thousand hours of anonymized speech training data from 25 languages, SM2 models achieve comparable or even better ST quality than some recent popular large-scale non-streaming speech models. More importantly, we show that SM2 has true zero-shot capability when expanding to new target languages, yielding high-quality ST results for {source-speech, target-text} pairs that are not seen during training.
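A schematic sketch of the weak-supervision data pipeline described above (the `translate` client and the record format are hypothetical):

```python
# ASR transcriptions are machine-translated into each target language to create
# {source-speech, target-text} training pairs without human ST labels.
def build_weak_st_data(asr_corpus, translate, target_langs):
    """asr_corpus: iterable of (audio_path, transcript, source_lang);
    translate(text, src, tgt) -> translated text (external MT service, assumed)."""
    for audio_path, transcript, src_lang in asr_corpus:
        for tgt_lang in target_langs:
            if tgt_lang != src_lang:
                yield {"audio": audio_path,
                       "target_text": translate(transcript, src_lang, tgt_lang),
                       "task": f"{src_lang}->{tgt_lang}"}
```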
This paper presents a novel streaming automatic speech recognition (ASR) framework for multi-talker overlapping speech captured by a distant microphone array with an arbitrary geometry. Our framework, named t-SOT-VA, builds on two recently and independently developed technologies: array-geometry-agnostic continuous speech separation, or VarArray, and streaming multi-talker ASR based on token-level serialized output training (t-SOT). To combine the best of both technologies, we design a t-SOT-based ASR model that generates a serialized multi-talker transcription from the two separated speech signals produced by VarArray. We also propose a pre-training scheme for this ASR model in which we simulate VarArray's output signals from monaural single-talker ASR training data. Conversation transcription experiments on the AMI meeting corpus show that the system based on the proposed framework significantly outperforms conventional ones. Our system achieves state-of-the-art word error rates of 13.7% and 15.5% on the AMI development and evaluation sets, respectively, in the multiple-distant-microphone setting, while retaining streaming inference capability.
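At a high level, the framework chains the two components as sketched below (the interfaces are hypothetical; the actual models are the neural networks trained as described above):

```python
# Geometry-agnostic separation (VarArray) yields two channels; a t-SOT-style ASR
# model consumes both and emits a single serialized multi-talker transcription.
def transcribe_meeting(multichannel_audio, vararray, tsot_asr):
    ch0, ch1 = vararray.separate(multichannel_audio)   # assumed separation API
    return tsot_asr.decode(ch0, ch1)                   # serialized transcript with speaker-change tokens
```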
Multi-modal tracking has gained attention because it is more accurate and robust in complex scenarios than traditional RGB-based tracking. Its key lies in how to fuse multi-modal data and reduce the gap between modalities. However, multi-modal tracking still suffers severely from data scarcity, which leads to insufficient learning of the fusion module. Instead of building such a fusion module, in this paper we provide a new perspective on multi-modal tracking by attaching importance to multi-modal visual prompts. We design a novel multi-modal prompt tracker (ProTrack) that can transfer multi-modal inputs to a single modality through a prompt paradigm. By best exploiting the tracking ability of pre-trained RGB trackers learned at large scale, our ProTrack can achieve high-performance multi-modal tracking simply by changing the inputs, even without any extra training on multi-modal data. Extensive experiments on 5 benchmark datasets demonstrate the effectiveness of the proposed ProTrack.
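One plausible instantiation of the prompt paradigm, given only the description above (the pixel-level weighted combination and the weight \(\lambda\) are assumptions):
\[
x_{\mathrm{prompt}} = \lambda\, x_{\mathrm{RGB}} + (1-\lambda)\, x_{\mathrm{aux}},
\]
so that a frozen, large-scale pre-trained RGB tracker can consume the prompted input without any multi-modal retraining.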
We present a novel dual-flow representation of scene motion that decomposes the optical flow into a static flow field caused by camera motion and a dynamic flow field caused by the motion of objects in the scene. Based on this representation, we propose a dynamic SLAM system, named DeFlowSLAM, that exploits both static and dynamic pixels in the images to solve for the camera pose, rather than simply using static background pixels as other dynamic SLAM systems do. We propose a dynamic update module to train DeFlowSLAM in a self-supervised manner, where a dense bundle adjustment layer takes as input the estimated static flow field and weights controlled by a dynamic mask, and outputs the residuals of the optimized static flow field, camera poses, and inverse depths. The static and dynamic flow fields are estimated by warping the current image to the adjacent image, and the optical flow can be obtained by summing the two fields. Extensive experiments show that DeFlowSLAM generalizes well to both static and dynamic scenes: it performs comparably to the state-of-the-art DROID-SLAM in static and mildly dynamic scenes, while significantly outperforming DROID-SLAM in highly dynamic environments. Code and data are available on the project webpage: \url{https://zju3dv.github.io/deflowslam/}.
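Written out, the dual-flow decomposition stated above is simply
\[
\mathbf{F}_{\mathrm{optical}} = \mathbf{F}_{\mathrm{static}} + \mathbf{F}_{\mathrm{dynamic}},
\]
where the static field is induced by camera motion and the dynamic field by moving objects; the dense bundle adjustment layer operates on the static component.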
Recently, masked prediction pre-training has made remarkable progress in self-supervised learning (SSL) for speech recognition. It usually requires a codebook obtained in an unsupervised way, which makes it less accurate and difficult to interpret. We propose two supervision-guided codebook generation approaches to improve automatic speech recognition (ASR) performance as well as pre-training efficiency: either by decoding with a hybrid ASR system to generate phoneme-level alignments (named PBERT), or by performing clustering on supervised speech features extracted from an end-to-end CTC model (named CTC clustering). Both the hybrid and CTC models are trained on the same small amount of labeled speech used for fine-tuning. Experiments demonstrate a significant advantage of our methods over various SSL and self-training baselines, with a relative word error rate reduction of 17.0%. Our pre-trained models also show good transferability to non-ASR speech tasks.
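A minimal sketch of the CTC-clustering variant, assuming k-means is the clustering method (the feature shapes and number of codes are illustrative):

```python
# Frame-level features from a supervised CTC model are clustered, and the cluster IDs
# serve as the discrete codebook targets for masked-prediction pre-training.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(ctc_features, num_codes=500):
    """ctc_features: list of (frames, dim) arrays extracted from the CTC encoder."""
    stacked = np.concatenate(ctc_features, axis=0)
    kmeans = KMeans(n_clusters=num_codes, n_init=10).fit(stacked)
    return kmeans                     # kmeans.predict(feats) -> pseudo-label targets
```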