Some recent studies have described deep convolutional neural networks that diagnose breast cancer in mammograms with performance similar to, or even better than, that of human experts. One of the best techniques performs two transfer learnings: the first uses a model trained on natural images to create a "patch classifier" that classifies small sub-images; the second uses the patch classifier to scan the whole mammogram and create a "single-view whole-image classifier". We propose a third transfer learning to obtain a "two-view classifier" that uses the two mammographic views: bilateral craniocaudal and mediolateral oblique. We use EfficientNet as the basis of the model. We train the entire system "end-to-end" using the CBIS-DDSM dataset. To ensure statistical robustness, we test the system twice, using (a) 5-fold cross-validation and (b) the original training/test division of the dataset. Our technique reaches an AUC of 0.9344 using 5-fold cross-validation (accuracy, sensitivity, and specificity are 85.13% at the equal-error-rate point of the ROC). Using the original dataset division, our technique reaches an AUC of 0.8483, which, as far as we know, is the highest AUC reported for this problem, although the subtle differences in each work's testing conditions do not allow an exact comparison. The inference code and model are available at https://github.com/dpetrini/two-views-classifier
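A minimal sketch (not the released code, which lives at the repository above) of how a two-view classifier can fuse the craniocaudal and mediolateral oblique views with a shared EfficientNet backbone; the backbone variant, the weight sharing, and the fusion-head sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class TwoViewClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = efficientnet_b0()          # would be initialized from the single-view model
        backbone.classifier = nn.Identity()   # keep the 1280-d pooled features
        self.backbone = backbone
        self.head = nn.Sequential(            # hypothetical fusion head
            nn.Linear(2 * 1280, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, num_classes),
        )

    def forward(self, cc_view: torch.Tensor, mlo_view: torch.Tensor) -> torch.Tensor:
        f_cc = self.backbone(cc_view)         # features from the craniocaudal view
        f_mlo = self.backbone(mlo_view)       # features from the mediolateral oblique view
        return self.head(torch.cat([f_cc, f_mlo], dim=1))


# Example forward pass with dummy 224x224 inputs.
model = TwoViewClassifier()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
```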
Affect understanding capability is essential for social robots to autonomously interact with a group of users in an intuitive and reciprocal way. However, the challenge of multi-person affect understanding comes from not only the accurate perception of each user's affective state (e.g., engagement) but also the recognition of the affect interplay between the members (e.g., joint engagement) that presents as complex, but subtle, nonverbal exchanges between them. Here we present a novel hybrid framework for identifying a parent-child dyad's joint engagement by combining a deep learning framework with various video augmentation techniques. Using a dataset of parent-child dyads reading storybooks together with a social robot at home, we first train RGB frame- and skeleton-based joint engagement recognition models on datasets augmented with four video augmentation techniques (General Aug, DeepFake, CutOut, and Mixed) to improve joint engagement classification performance. Second, we demonstrate experimental results on the use of trained models in the robot-parent-child interaction context. Third, we introduce a behavior-based metric for evaluating the learned representation of the models to investigate the model interpretability when recognizing joint engagement. This work serves as the first step toward fully unlocking the potential of end-to-end video understanding models pre-trained on large public datasets and augmented with data augmentation and visualization techniques for affect recognition in multi-person human-robot interaction in the wild.
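A brief sketch of one of the listed augmentations, CutOut, applied consistently across the frames of a clip; the patch size and the (T, H, W, C) clip layout are assumptions rather than the authors' settings.

```python
import numpy as np


def video_cutout(clip: np.ndarray, patch: int = 32, rng=None) -> np.ndarray:
    """Zero out the same random square region in every frame of a clip."""
    rng = rng or np.random.default_rng()
    t, h, w, c = clip.shape
    y = rng.integers(0, max(1, h - patch))
    x = rng.integers(0, max(1, w - patch))
    out = clip.copy()
    out[:, y:y + patch, x:x + patch, :] = 0  # same mask for all frames keeps motion cues consistent
    return out


clip = np.random.rand(16, 224, 224, 3).astype(np.float32)  # 16-frame dummy clip
augmented = video_cutout(clip)
```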
The cone-beam computed tomography (CBCT) provides 3D volumetric imaging of a target with low radiation dose and cost compared with conventional computed tomography, and it is widely used in the detection of paranasal sinus disease. However, it lacks the sensitivity to detect soft tissue lesions owing to reconstruction constraints. Consequently, only physicians with expertise in CBCT reading can distinguish between inherent artifacts or noise and diseases, restricting the use of this imaging modality. The development of artificial intelligence (AI)-based computer-aided diagnosis methods for CBCT to overcome the shortage of experienced physicians has attracted substantial attention. However, advanced AI-based diagnosis addressing intrinsic noise in CBCT has not been devised, discouraging the practical use of AI solutions for CBCT. To address this issue, we propose an AI-based computer-aided diagnosis method using CBCT with a denoising module. This module is implemented before diagnosis to reconstruct the internal ground-truth full-dose scan corresponding to an input CBCT image and thereby improve the diagnostic performance. The external validation results for the unified diagnosis of sinus fungal ball, chronic rhinosinusitis, and normal cases show that the proposed method improves the micro-, macro-average AUC, and accuracy by 7.4, 5.6, and 9.6% (from 86.2, 87.0, and 73.4 to 93.6, 92.6, and 83.0%), respectively, compared with a baseline while improving human diagnosis accuracy by 11% (from 71.7 to 83.0%), demonstrating technical differentiation and clinical effectiveness. This pioneering study on AI-based diagnosis using CBCT indicates denoising can improve diagnostic performance and reader interpretability in images from the sinonasal area, thereby providing a new approach and direction to radiographic image reconstruction regarding the development of AI-based diagnostic solutions.
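A minimal sketch of the "denoise, then diagnose" arrangement described above, with both sub-networks replaced by small placeholders rather than the paper's architectures.

```python
import torch
import torch.nn as nn


class DenoiseThenDiagnose(nn.Module):
    def __init__(self, num_classes: int = 3):  # fungal ball / chronic rhinosinusitis / normal
        super().__init__()
        self.denoiser = nn.Sequential(          # stand-in for the denoising module
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        self.classifier = nn.Sequential(        # stand-in for the diagnosis network
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, cbct_slice: torch.Tensor) -> torch.Tensor:
        restored = self.denoiser(cbct_slice) + cbct_slice  # residual denoising toward a full-dose-like image
        return self.classifier(restored)


logits = DenoiseThenDiagnose()(torch.randn(2, 1, 128, 128))
```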
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
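A short usage sketch, assuming the Hugging Face transformers library and the publicly released checkpoints; the smaller bigscience/bloom-560m variant is used here so the example runs on ordinary hardware (the full model has 176B parameters).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```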
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic enhancement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models was evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 500 FPS rate and 0.2 [Watt / 30 FPS] power consumption. A detailed description of all models developed in the challenge is provided in this paper.
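For illustration only, a sketch of the kind of lightweight 4X super-resolution network such an NPU constraint favors: a few convolutions followed by a pixel-shuffle upsampler. It processes frames independently, and the channel counts are arbitrary; this is not a challenge submission.

```python
import torch
import torch.nn as nn


class TinyVSR(nn.Module):
    def __init__(self, scale: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3 * scale * scale, 3, padding=1),
        )
        self.upsample = nn.PixelShuffle(scale)   # rearranges channels into a 4x larger frame

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.body(frame))


sr_frame = TinyVSR()(torch.randn(1, 3, 180, 320))  # -> (1, 3, 720, 1280)
```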
Face recognition (FR) has been actively studied in the computer vision and pattern recognition communities for the past few decades. Recently, owing to advances in deep learning, FR technology has shown high performance on most benchmark datasets. However, when FR algorithms are applied to real-world scenarios, the performance is still unsatisfactory. This is mainly attributed to the mismatch between the training and testing sets. Among such mismatches, face misalignment between training and testing faces is one of the factors that hinder successful FR. To address this limitation, we propose a face-shape-guided deep feature alignment framework for FR that is robust to face misalignment. Based on a face shape prior (e.g., face keypoints), we train the proposed deep network by introducing alignment processes, i.e., pixel and feature alignment, between well-aligned and misaligned face images. Through the pixel alignment process, which decodes the aggregated feature extracted from a face image and the face shape prior into a well-aligned face image, we add an auxiliary task of reconstructing the well-aligned face image. Since the aggregated features are linked to the face feature extraction network as a guide via the feature alignment process, we train face features that are robust to face misalignment. Although face shape estimation is required in the training stage, the extra face alignment process usually incorporated in conventional FR pipelines is not necessarily needed in the testing stage. Through comparative experiments, we validate the effectiveness of the proposed method for FR under face misalignment on FR datasets.
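A conceptual sketch of the two alignment objectives described above, with all networks reduced to placeholders: pixel alignment reconstructs the well-aligned face from the aggregated (image + shape) feature, and feature alignment pulls the FR feature of a misaligned face toward that aggregated feature. The loss forms and shapes are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def alignment_losses(fr_feat_misaligned, aggregated_feat, decoder, aligned_face):
    # Pixel alignment: auxiliary reconstruction of the well-aligned face image.
    recon = decoder(aggregated_feat)
    pixel_loss = F.l1_loss(recon, aligned_face)
    # Feature alignment: the aggregated feature acts as a guide for the FR feature.
    feature_loss = F.mse_loss(fr_feat_misaligned, aggregated_feat.detach())
    return pixel_loss + feature_loss


# Dummy example: a linear "decoder" producing 112x112 RGB faces from 512-d features.
decoder = torch.nn.Sequential(torch.nn.Linear(512, 3 * 112 * 112),
                              torch.nn.Unflatten(1, (3, 112, 112)))
loss = alignment_losses(torch.randn(4, 512), torch.randn(4, 512),
                        decoder, torch.randn(4, 3, 112, 112))
```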
Lip reading aims to predict speech based only on lip movements. Because it relies on visual information to model speech, its performance is inherently sensitive to personal lip appearance and movements. As a result, lip reading models show degraded performance when applied to unseen speakers, owing to the mismatch between training and testing conditions. Speaker adaptation techniques aim to reduce the mismatch between train and test speakers, guiding a trained model to focus on modeling the speech content without being distracted by speaker variations. In contrast to the efforts made over decades in audio-based speech recognition, speaker adaptation methods have not been well studied in lip reading. In this paper, to remedy the performance degradation of lip reading models on unseen speakers, we propose a speaker-adaptive lip reading method, namely user-dependent padding. The user-dependent padding is a speaker-specific input that participates in the visual feature extraction stage of a pre-trained lip reading model. Therefore, the lip appearance and movement information of different speakers can be taken into account while encoding the visual features, adapting them to an individual speaker. Moreover, the proposed method does not require 1) any additional layers, 2) modification of the learned weights of the pre-trained model, or 3) speaker labels for the training data used during pre-training. It can directly adapt to unseen speakers by learning only the user-dependent padding, in either a supervised or an unsupervised manner. Finally, to alleviate the lack of speaker information in public lip reading databases, we label the speakers of a well-known audio-visual database, LRW, and design an unseen-speaker lip reading scenario named LRW-ID.
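A minimal sketch of the user-dependent padding idea: the border a convolution would normally fill with zeros is instead filled with a small learnable, speaker-specific value, so only the padding is adapted per speaker while the pre-trained weights stay fixed. The constant-per-channel form and the shapes are simplifying assumptions.

```python
import torch
import torch.nn as nn


class UserDependentPaddingConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, num_speakers: int, pad: int = 1):
        super().__init__()
        self.pad = pad
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=0)              # pre-trained weights, kept frozen
        self.user_pad = nn.Parameter(torch.zeros(num_speakers, in_ch))  # the only adapted parameters

    def forward(self, x: torch.Tensor, speaker_id: int) -> torch.Tensor:
        n, c, h, w = x.shape
        fill = self.user_pad[speaker_id].view(1, c, 1, 1)
        canvas = fill.expand(n, c, h + 2 * self.pad, w + 2 * self.pad).clone()
        canvas[:, :, self.pad:self.pad + h, self.pad:self.pad + w] = x  # keep the image, learn the border
        return self.conv(canvas)


layer = UserDependentPaddingConv(8, 16, num_speakers=20)
out = layer(torch.randn(2, 8, 28, 28), speaker_id=3)                    # -> (2, 16, 28, 28)
```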
This paper focuses on designing a noise-robust end-to-end audio-visual speech recognition (AVSR) system. To this end, we propose a visual context-driven audio feature enhancement module (V-CAFE) that enhances the input noisy audio speech with the aid of audio-visual correspondence. The proposed V-CAFE is designed to capture the transition of lip movements, namely the visual context, and to generate a noise-reduction mask by taking the obtained visual context into account. Through context-dependent modeling, the ambiguity in viseme-to-phoneme mapping can be refined during mask generation. The noisy representations are masked with the noise-reduction mask, yielding enhanced audio features. The enhanced audio features are fused with the visual features and fed into an encoder-decoder model composed of a Conformer and a Transformer for speech recognition. We show that an end-to-end AVSR system with the proposed V-CAFE can further improve the noise robustness of AVSR. The effectiveness of the proposed method is evaluated on the two largest audio-visual datasets, LRS2 and LRS3.
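A schematic sketch of the mask-and-fuse flow behind V-CAFE: a mask predicted from the visual context gates the noisy audio features before audio-visual fusion. The two feature encoders are omitted and all dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn


class VisualDrivenEnhancement(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mask_net = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())  # noise-reduction mask
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, audio_feat: torch.Tensor, visual_context: torch.Tensor) -> torch.Tensor:
        mask = self.mask_net(torch.cat([audio_feat, visual_context], dim=-1))
        enhanced_audio = audio_feat * mask                 # suppress noise-dominated components
        return self.fuse(torch.cat([enhanced_audio, visual_context], dim=-1))


fused = VisualDrivenEnhancement()(torch.randn(1, 50, 256), torch.randn(1, 50, 256))  # (batch, time, dim)
```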
The goal of this work is to reconstruct speech from a silent talking-face video. Recent studies have shown impressive performance in synthesizing speech from silent talking-face videos. However, they have not explicitly considered the different identity characteristics of different speakers, which pose a challenge in video-to-speech synthesis and become even more critical in the unseen-speaker setting. Unlike previous methods, our approach separates the speech content and the visage style from a given silent talking-face video. By guiding the model to focus on modeling these two representations independently, we can obtain speech of high intelligibility from the model even when an input video of an unseen subject is given. To this end, we introduce speech-visage selection modules that separate the speech content and the speaker identity from the visual features of the input video. The disentangled representations are jointly incorporated to synthesize speech through a visage-style-based synthesizer, which generates speech by coating the visage style while maintaining the speech content. Thus, the proposed framework brings the advantage of synthesizing speech with the correct content even when a silent talking-face video of an unseen subject is given. We validate the effectiveness of the proposed framework on the GRID, TCD-TIMIT volunteer, and LRW datasets. The synthesized speech can be heard in the supplementary materials.
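A toy sketch of the content/style separation described above: two heads "select" speech-content and visage-style vectors from the visual features, and the style then modulates the content before a stand-in speech decoder. All layer shapes are illustrative; the real selection modules and synthesizer are considerably more involved.

```python
import torch
import torch.nn as nn


class SpeechVisageSplit(nn.Module):
    def __init__(self, dim: int = 256, mel_bins: int = 80):
        super().__init__()
        self.content_head = nn.Linear(dim, dim)     # speech-content selection
        self.style_head = nn.Linear(dim, 2 * dim)   # visage style -> per-channel scale and shift
        self.decoder = nn.Linear(dim, mel_bins)     # stand-in for the speech synthesizer

    def forward(self, visual_feat: torch.Tensor) -> torch.Tensor:
        content = self.content_head(visual_feat)                                       # (B, T, dim)
        scale, shift = self.style_head(visual_feat.mean(dim=1, keepdim=True)).chunk(2, dim=-1)
        styled = content * (1 + scale) + shift                                         # "coat" the visage style
        return self.decoder(styled)                                                    # predicted mel-spectrogram


mel = SpeechVisageSplit()(torch.randn(2, 75, 256))  # 75 video-frame features -> (2, 75, 80)
```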
Scanning transmission electron microscopy (STEM) is an indispensable tool for atomic-resolution structural analysis of a wide variety of materials. Conventional analysis of STEM images is an extensive hands-on process, which limits the efficient handling of high-throughput data. Here we apply a fully convolutional network (FCN) to identify important structural features of two-dimensional crystals. ResUNet, a type of FCN, is utilized to identify sulfur vacancies and polymorph types of ${MoS_2}$ from atomic-resolution STEM images. Efficient models are achieved based on training with simulated images in the presence of different levels of noise, aberrations, and carbon contamination. The accuracy of the FCN models on a wide range of experimental STEM images is comparable to that of careful hands-on analysis. Our work provides guidelines on best practices for training deep learning models for STEM image analysis and demonstrates the application of FCNs for efficiently processing large volumes of STEM data.
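A minimal fully convolutional sketch in the spirit of the ResUNet used above, mapping a STEM image to a per-pixel class map (e.g., background, lattice site, sulfur vacancy); the depth, channel counts, and class set are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TinyFCN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, 1),   # per-pixel class logits
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(image))


label_map = TinyFCN()(torch.randn(1, 1, 256, 256)).argmax(dim=1)  # (1, 256, 256) class indices
```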