To protect tropical forest biodiversity, we need to be able to detect it reliably, cheaply, and at scale. Automated species detection from passively recorded soundscapes via machine learning is a promising technique towards this goal, but it is constrained by the need for large training data sets. Using soundscapes from a tropical forest in Borneo and convolutional neural network models (CNNs) created via transfer learning, we investigate (i) the minimum viable training data set size for accurately predicting call types ('sonotypes') and (ii) the extent to which data augmentation can overcome the problem of small training data sets. We found that even relatively high sample sizes per call type led to mediocre accuracy, which, however, improved significantly with data augmentation, regardless of taxonomic group or call characteristics. Our results suggest that transfer learning and data augmentation make it possible to use CNNs to classify species' vocalizations even for projects with small soundscape data sets and many rare species. Our open-source method has the potential to enable conservation programs to become more evidence-based by using soundscape data in the adaptive management of biodiversity.
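Data augmentation for soundscape CNNs is commonly applied to spectrogram inputs; a minimal SpecAugment-style masking sketch (a generic illustration, not the paper's actual augmentation pipeline; function and parameter names are hypothetical):

```python
import numpy as np

def augment_spectrogram(spec, rng, n_freq_masks=1, n_time_masks=1, max_width=8):
    """Apply frequency and time masking to a (freq_bins, time_steps)
    spectrogram. Returns a masked copy; the input is left untouched."""
    out = spec.copy()
    n_freq, n_time = out.shape
    for _ in range(n_freq_masks):
        w = rng.integers(1, max_width + 1)          # mask width in bins
        f0 = rng.integers(0, max(1, n_freq - w))    # mask start bin
        out[f0:f0 + w, :] = 0.0
    for _ in range(n_time_masks):
        w = rng.integers(1, max_width + 1)          # mask width in frames
        t0 = rng.integers(0, max(1, n_time - w))    # mask start frame
        out[:, t0:t0 + w] = 0.0
    return out
```

Each call produces a different masked variant of the same clip, effectively multiplying the number of training examples per sonotype.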
Self-training has been shown to be helpful in addressing data scarcity for many domains, including vision, speech, and language. Specifically, self-training, or pseudo-labeling, labels unsupervised data and adds that to the training pool. In this work, we investigate and use pseudo-labeling for a recently proposed novel setup: joint transcription and translation of speech, which suffers from an absence of sufficient data resources. We show that under such data-deficient circumstances, the unlabeled data can significantly vary in domain from the supervised data, which results in pseudo-label quality degradation. We investigate two categories of remedies that require no additional supervision and target the domain mismatch: pseudo-label filtering and data augmentation. We show that pseudo-label analysis and processing as such results in additional gains on top of the vanilla pseudo-labeling setup resulting in total improvements of up to 0.6% absolute WER and 2.2 BLEU points.
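Pseudo-label filtering of the kind described can be sketched as a simple confidence threshold over (features, label, confidence) triples; the authors' actual filtering criteria may differ, and the data layout here is hypothetical:

```python
def filter_pseudo_labels(samples, threshold=0.9):
    """Keep only pseudo-labeled samples whose model confidence
    (e.g. mean per-token probability) clears the threshold.

    samples: iterable of (features, pseudo_label, confidence) triples.
    Returns a list of (features, pseudo_label) pairs to add to the
    training pool.
    """
    return [(x, y) for x, y, conf in samples if conf >= threshold]
```

The surviving pairs are then mixed into the supervised training pool, discarding the low-confidence pseudo-labels most likely to be corrupted by domain mismatch.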
Simulating rigid collisions among arbitrary shapes is notoriously difficult due to complex geometry and the strong non-linearity of the interactions. While graph neural network (GNN)-based models are effective at learning to simulate complex physical dynamics, such as fluids, cloth and articulated bodies, they have been less effective and efficient on rigid-body physics, except with very simple shapes. Existing methods that model collisions through the meshes' nodes are often inaccurate because they struggle when collisions occur on faces far from nodes. Alternative approaches that represent the geometry densely with many particles are prohibitively expensive for complex shapes. Here we introduce the Face Interaction Graph Network (FIGNet) which extends beyond GNN-based methods, and computes interactions between mesh faces, rather than nodes. Compared to learned node- and particle-based methods, FIGNet is around 4x more accurate in simulating complex shape interactions, while also 8x more computationally efficient on sparse, rigid meshes. Moreover, FIGNet can learn frictional dynamics directly from real-world data, and can be more accurate than analytical solvers given modest amounts of training data. FIGNet represents a key step forward in one of the few remaining physical domains which have seen little competition from learned simulators, and offers allied fields such as robotics, graphics and mechanical design a new tool for simulation and model-based planning.
Continuous pseudo-labeling (PL) algorithms such as slimIPL have recently emerged as a powerful strategy for semi-supervised learning in speech recognition. In contrast with earlier strategies that alternated between training a model and generating pseudo-labels (PLs) with it, here PLs are generated in an end-to-end manner as training proceeds, improving training speed and the accuracy of the final model. PL shares a common theme with teacher-student models such as distillation, in that a teacher model generates targets that the student model being trained must mimic. Interestingly, however, PL strategies in general use hard labels, whereas distillation uses the distribution over labels as the target to mimic. Inspired by distillation, we expect that specifying the whole distribution (aka soft labels) over sequences as the target for unlabeled data, instead of a single best-pass pseudo-labeled transcript (hard labels), should improve PL performance and convergence. Surprisingly, we find that soft-label targets can lead to training divergence, with the model collapsing to a degenerate token distribution per frame. We hypothesize that the reason this does not happen with hard labels is that the training loss on hard labels imposes sequence-level consistency that keeps the model from collapsing to the degenerate solution. In this paper, we show several experiments that support this hypothesis, and experiment with several regularization approaches that can ameliorate the degenerate collapse when using soft labels. These approaches can bring the accuracy of soft labels closer to that of hard labels, and while they are unable to outperform them yet, they serve as a useful framework for further improvements.
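The hard- vs soft-label distinction can be illustrated per frame; a minimal numpy sketch (not the authors' implementation, and over independent frames rather than full sequences):

```python
import numpy as np

def hard_label_loss(log_probs, teacher_probs):
    """Cross-entropy against the teacher's argmax (hard pseudo-label).
    log_probs, teacher_probs: (frames, vocab)."""
    hard = teacher_probs.argmax(axis=-1)
    return -log_probs[np.arange(len(hard)), hard].mean()

def soft_label_loss(log_probs, teacher_probs):
    """Cross-entropy against the teacher's full distribution (soft
    labels), equal to KL divergence up to the teacher's entropy."""
    return -(teacher_probs * log_probs).sum(axis=-1).mean()
```

When the teacher distribution is one-hot the two losses coincide; the divergence issue the abstract describes arises only when the soft targets carry mass across several tokens per frame.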
Cancer is one of the leading causes of death worldwide. It is caused by a variety of genetic mutations, which makes every instance of the disease unique. Since chemotherapy can have extremely severe side effects, each patient requires a personalized treatment plan. Finding the dosages that maximize the beneficial effects of the drugs and minimize their adverse side effects is vital. Deep neural networks automate and improve drug selection. However, they require a lot of data to be trained on. Therefore, there is a need for machine-learning approaches that require less data. Hybrid quantum neural networks were shown to provide a potential advantage in problems where training data availability is limited. We propose a novel hybrid quantum neural network for drug response prediction, based on a combination of convolutional, graph convolutional, and deep quantum neural layers of 8 qubits with 363 layers. We test our model on the reduced Genomics of Drug Sensitivity in Cancer dataset and show that the hybrid quantum model outperforms its classical analog by 15% in predicting IC50 drug effectiveness values. The proposed hybrid quantum machine learning model is a step towards deep quantum data-efficient algorithms with thousands of quantum gates for solving problems in personalized medicine, where data collection is a challenge.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
This paper proposes a non-data-driven deep neural network for spectral image recovery problems such as denoising, single hyperspectral image super-resolution, and compressive spectral imaging reconstruction. Unlike previous methods, the proposed approach, dubbed Mixture-Net, implicitly learns the prior information through the network. Mixture-Net consists of a deep generative model whose layers are inspired by linear and non-linear low-rank mixture models, where the recovered image is composed of a weighted sum of the linear and non-linear decompositions. Mixture-Net also provides a low-rank decomposition that can be interpreted as the spectral image abundances and endmembers, which is helpful for remote sensing tasks without running additional routines. Experiments show that Mixture-Net outperforms state-of-the-art methods in recovery quality, with the added advantage of architecture interpretability.
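The weighted sum of linear and non-linear decompositions can be sketched as follows (a toy illustration with hypothetical names; in Mixture-Net the abundances, endmembers, and non-linear term are learned by the network):

```python
import numpy as np

def mixture_recovery(abundances, endmembers, nonlinear_term, weight):
    """Recover a spectral image as a weighted sum of a linear low-rank
    mixture (abundances @ endmembers) and a non-linear residual term.

    abundances:     (pixels, r)     per-pixel material fractions
    endmembers:     (r, bands)      spectral signatures
    nonlinear_term: (pixels, bands) non-linear component
    weight:         scalar in [0, 1] balancing the two parts
    """
    linear = abundances @ endmembers
    return weight * linear + (1.0 - weight) * nonlinear_term
```

The `abundances` and `endmembers` factors are what give the model its unmixing interpretation: they can be read off directly after recovery.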
Road rutting is a severe road distress that can cause premature failure of roads and lead to early and costly maintenance. Research on road damage detection using image processing techniques and deep learning has been actively pursued in recent years, but these studies have focused mainly on detecting cracks, potholes, and their variants; very little work exists on detecting road rutting. This paper proposes a novel road rutting dataset comprising 949 images with both object-level and pixel-level annotations. An object detection model and a semantic segmentation model were deployed to detect road rutting on the proposed dataset, and quantitative and qualitative analyses of the model predictions were performed to evaluate model performance and identify the challenges faced in detecting road rutting with the proposed approach. The object detection model YOLOX-s achieved mAP@IoU=0.5 of 61.6%, and the semantic segmentation model PSPNet (ResNet-50) achieved an IoU of 54.69 with an accuracy of 72.67, providing a benchmark for similar future work. The proposed road rutting dataset and our findings will help accelerate research on road rutting detection using deep learning.
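The mAP@IoU=0.5 metric rests on box IoU: a detection counts as a true positive only when its overlap with a ground-truth box reaches 0.5. A standard sketch of the IoU computation (generic, not tied to this paper's evaluation code):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes,
    each given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```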
This data article introduces the road damage dataset RDD2022, which comprises 47,420 road images from six countries: Japan, India, the Czech Republic, Norway, the United States, and China. The images have been annotated with more than 55,000 instances of road damage. Four types of road damage are captured in the dataset: longitudinal cracks, transverse cracks, alligator cracks, and potholes. The annotated dataset is envisioned for developing deep learning-based methods to automatically detect and classify road damage. The dataset has been released as part of the Crowd sensing-based Road Damage Detection Challenge (CRDDC2022). The CRDDC2022 challenge invites researchers from across the globe to propose solutions for automatic road damage detection in multiple countries. Municipalities and road agencies may use the RDD2022 dataset, and models trained on RDD2022, for low-cost automatic monitoring of road conditions. Furthermore, computer vision and machine learning researchers may use the dataset to benchmark the performance of different algorithms for other image-based applications of the same type (classification, object detection, etc.).
Vision-based navigation requires processing complex information to make task-oriented decisions. Applications include autonomous robots, self-driving vehicles, and assistive vision for humans. One of the key elements in this process is extracting and selecting relevant features in pixel space on which to base action selection, a task for which machine learning techniques are well suited. However, deep reinforcement learning agents trained in simulation often show unsatisfactory results when deployed in the real world because of perceptual differences known as the $\textit{reality gap}$. A method not yet explored for bridging this gap is self-attention. In this paper, we (1) perform a systematic exploration of self-attention-based navigation in 3D environments and the behaviors observed across different hyperparameter sets, including their generalization capabilities; (2) present strategies to improve the agents' generalization abilities and navigation behavior; and (3) show how models trained in simulation are able to process real-world images in real time. To our knowledge, this is the first demonstration of a self-attention-based agent successfully navigating a 3D action space using fewer than 4,000 parameters.
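A query/key-only self-attention module is one way such an agent can fit in under 4,000 parameters, since the attention matrix over image patches needs only two small projection matrices; a generic numpy sketch, not the authors' architecture (all names and sizes are illustrative):

```python
import numpy as np

def tiny_attention_params(d_in=4, d_k=8):
    """Parameter count of a single-head, query/key-only self-attention
    block: two projection matrices W_q and W_k, no biases or values."""
    return 2 * d_in * d_k

def self_attention(x, w_q, w_k):
    """Score image patches against each other via
    softmax(Q K^T / sqrt(d_k)).

    x: (n_patches, d_in) patch features.
    Returns the (n_patches, n_patches) row-stochastic attention matrix,
    which can be used to select the most relevant patches.
    """
    q, k = x @ w_q, x @ w_k
    scores = q @ k.T / np.sqrt(k.shape[-1])
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)
```

With patch features of size 4 and key dimension 8, the attention block itself needs only 64 parameters, leaving ample budget for a small controller.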