Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning, each with unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in an edge-computing-based IoT environment. We also identify some open problems and discuss possible directions for future research in this area, in the hope of further arousing the research community's interest in this emerging field.
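To make the aggregation step concrete, here is a minimal FedAvg-style sketch in Python; it is an illustrative toy (linear least-squares clients, size-weighted averaging), not a method from the survey, and all names and dimensions are assumptions.

```python
import numpy as np

def local_train(weights, data, lr=0.1, epochs=1):
    """One client's local update: plain gradient descent on its private
    least-squares data. (Stand-in for any local training procedure.)"""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """One federated round: each client trains locally on its own data,
    then the server averages the models weighted by dataset size."""
    client_weights, sizes = [], []
    for data in client_datasets:
        client_weights.append(local_train(global_w, data))
        sizes.append(len(data[1]))
    sizes = np.asarray(sizes, dtype=float)
    return sum(w * (n / sizes.sum()) for w, n in zip(client_weights, sizes))

# Toy setup: 3 clients, each holding a private shard of a linear problem.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(50):          # 50 communication rounds
    w = fedavg_round(w, clients)
print(w)  # approaches [2.0, -1.0] without any raw data leaving a client
```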
Non-parallel multi-domain voice conversion methods such as StarGAN-VC have been widely applied in many scenarios. However, training these models usually poses a challenge due to their complex adversarial network architectures. To address this, in this work we leverage state-of-the-art contrastive learning techniques and incorporate an efficient Siamese network structure into the StarGAN discriminator. Our method, called SimSiam-StarGAN-VC, improves training stability and effectively prevents discriminator overfitting during training. We conducted experiments on the Voice Conversion Challenge (VCC 2018) dataset, along with a user study, to validate the performance of our framework. Our experimental results show that SimSiam-StarGAN-VC significantly outperforms existing StarGAN-VC methods in terms of both objective and subjective metrics.
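For readers unfamiliar with the Siamese component, the standard SimSiam objective (stop-gradient plus negative cosine similarity) can be sketched as below; how SimSiam-StarGAN-VC embeds this structure in the StarGAN discriminator is specific to the paper, so the toy encoder/predictor shapes here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def simsiam_loss(p1, z1, p2, z2):
    """Symmetric SimSiam loss: negative cosine similarity between the
    predictor output of one view and the *detached* projection of the other.
    The stop-gradient (detach) is what prevents representational collapse."""
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)

# Toy encoder/predictor over two augmented "views" of acoustic features.
encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 128))
predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))

x1, x2 = torch.randn(16, 80), torch.randn(16, 80)  # two views of a batch
z1, z2 = encoder(x1), encoder(x2)
loss = simsiam_loss(predictor(z1), z1, predictor(z2), z2)
loss.backward()
```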
Deep neural networks can capture the complex interaction history information between queries and documents, thanks to their many complicated nonlinear units, allowing them to deliver correct search recommendations. However, in real-world scenarios, service providers often face more complex obstacles, such as deployment cost constraints and fairness requirements. Knowledge distillation, which transfers the knowledge of a well-trained complex model (the teacher) to a simple model (the student), has been proposed to alleviate the former concern, but the best current distillation methods focus only on making the student model imitate the teacher model's predictions. To better facilitate the application of deep models, we propose a fair information retrieval framework based on knowledge distillation. This framework can improve the exposure-based fairness of models while considerably decreasing model size. Our extensive experiments on three huge datasets show that our proposed framework can reduce the model size to a minimum of 1% of its original size while maintaining its black-box state. It also improves fairness performance by 15%~46% while maintaining a high level of recommendation effectiveness.
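The distillation backbone such a framework builds on can be illustrated with a classic Hinton-style sketch; the exposure-based fairness regularizer is the paper's contribution and is not reproduced here, so everything below is a generic, assumed setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic knowledge distillation: the student matches the teacher's
    temperature-softened distribution (KL term) while still fitting the
    ground-truth labels (CE term). The paper's fairness regularizer would be
    an additional term on top of this and is not reproduced here."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a large teacher's scores guide a tiny student ranker.
teacher_logits = torch.randn(32, 10)
student_logits = torch.randn(32, 10, requires_grad=True)
labels = torch.randint(0, 10, (32,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```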
By augmenting them with low-cost, long-range, maintenance-free wireless sensors, billions of everyday objects could become part of the Internet of Things (IoT). Radio frequency identification (RFID) is a low-cost wireless technology that could enable this vision, but it is constrained by short communication range and a lack of sufficient energy to power auxiliary electronics and sensors. Here, we explore the use of flexible perovskite photovoltaic cells to provide external power to semi-passive RFID tags, increasing their range and the energy available to external electronics such as microcontrollers and digital sensors. Perovskites are intriguing materials that hold the possibility of developing high-performance, low-cost, tunable (absorbing different light spectra), and flexible light-energy harvesters. Our prototype perovskite photovoltaic cells on plastic substrates have an efficiency of 13% and a voltage of 0.88 V under standard testing conditions. We built prototypes of RFID sensors powered by these flexible photovoltaic cells to demonstrate real-world applications. Our evaluation of the prototypes suggests that: i) the flexible PV cells are durable down to a bending radius of 5 mm with only a 20% drop in relative efficiency; ii) RFID communication range increased by 5x and meets the energy needs (10-350 microwatts) to enable self-powered wireless sensors; iii) perovskite-powered wireless sensors enable many battery-less sensing applications (e.g., perishable goods monitoring, warehouse automation).
Although deep neural networks (DNNs) have achieved great success in audio classification tasks, their uncertainty calibration remains underexplored. A well-calibrated model should be accurate when it is certain about its prediction and indicate when it is likely to be inaccurate. In this work, we study the uncertainty calibration of deep audio classifiers. In particular, we empirically investigate the performance of popular calibration methods: (i) Monte Carlo dropout, (ii) ensembles, (iii) focal loss, and (iv) spectral-normalized Gaussian process (SNGP), on audio classification datasets. To this end, we evaluate (i)-(iv) on the tasks of environmental sound and music genre classification. The results show that uncalibrated deep audio classifiers may be over-confident, and that SNGP performs the best, and very efficiently, on the two datasets in this paper.
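As a hedged illustration of method (i), the following sketch shows Monte Carlo dropout inference together with the expected calibration error (ECE) metric commonly used to quantify miscalibration; the toy model and feature sizes are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=20):
    """Monte Carlo dropout: keep dropout active at test time and average
    the softmax outputs over several stochastic forward passes."""
    model.train()  # enables dropout during inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0)  # predictive distribution; probs.std(0) ~ uncertainty

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: average |accuracy - confidence| over confidence bins, weighted
    by bin population. Lower means better calibrated."""
    conf, pred = probs.max(dim=-1)
    ece = torch.zeros(())
    for lo in torch.linspace(0, 1, n_bins + 1)[:-1]:
        mask = (conf > lo) & (conf <= lo + 1.0 / n_bins)
        if mask.any():
            acc = (pred[mask] == labels[mask]).float().mean()
            ece += mask.float().mean() * (acc - conf[mask].mean()).abs()
    return ece

# Toy audio classifier over, say, 64 log-mel features and 10 classes.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Dropout(0.3),
                      nn.Linear(128, 10))
x, y = torch.randn(256, 64), torch.randint(0, 10, (256,))
probs = mc_dropout_predict(model, x)
print(expected_calibration_error(probs, y))
```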
Realizing general inverse design could greatly accelerate the discovery of new materials with user-defined properties. However, state-of-the-art generative models tend to be limited to a specific composition or crystal structure. Here, we present a framework capable of general inverse design (not limited to a given set of elements or crystal structures), featuring a generalized invertible representation that encodes crystals in both real and reciprocal space, and a property-structured latent space from a variational autoencoder (VAE). In three design cases, the framework generates 142 new crystals with user-defined formation energy, band gap, thermoelectric (TE) power factor, and combinations thereof. These generated crystals, absent from the training database, are validated by first-principles calculations. The success rates (number of first-principles-validated crystals achieving the target / number of designed crystals) range between 7.1% and 38.9%. These results represent an important step toward property-driven general inverse design using generative models, although practical challenges remain when coupled with experimental synthesis.
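A property-structured VAE of the kind described can be sketched as follows; the crystal representation (real plus reciprocal space) and the actual property heads are paper-specific, so this toy flat-vector version with an assumed single property target is only a schematic.

```python
import torch
import torch.nn as nn

class PropertyVAE(nn.Module):
    """Minimal sketch of a property-structured VAE: an encoder/decoder pair
    plus a property head on the latent code, so latents organize by the
    target property. The crystal encoding itself is replaced here by a
    flat feature vector (an assumption for illustration)."""
    def __init__(self, x_dim=128, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, z_dim), nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))
        self.prop = nn.Linear(z_dim, 1)  # predicts e.g. formation energy

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), self.prop(z), mu, logvar

def loss_fn(x, x_hat, y, y_hat, mu, logvar, beta=1e-3, gamma=1.0):
    recon = nn.functional.mse_loss(x_hat, x)                  # reconstruction
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()  # prior match
    prop = nn.functional.mse_loss(y_hat.squeeze(-1), y)       # property fit
    return recon + beta * kl + gamma * prop

# Toy usage. Inverse design then amounts to sampling/optimizing z so that
# prop(z) hits a user-defined target and decoding z back into a candidate.
model = PropertyVAE()
x, y = torch.randn(32, 128), torch.randn(32)
x_hat, y_hat, mu, logvar = model(x)
loss = loss_fn(x, x_hat, y, y_hat, mu, logvar)
loss.backward()
```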
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
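As a hedged sketch of finding 1, token-relation distillation can be implemented as a KL divergence between teacher and student token-to-token similarity maps; the exact relation definition, layer choice, and loss weighting in TinyMIM may differ, so the code below is an assumed simplification.

```python
import torch
import torch.nn.functional as F

def token_relation(feats):
    """Token-to-token relation map: softmax-normalized scaled dot-product
    similarity between every pair of patch tokens. feats: (B, tokens, dim).
    The map is (B, tokens, tokens), so teacher/student widths may differ."""
    sim = feats @ feats.transpose(1, 2) / feats.shape[-1] ** 0.5
    return F.softmax(sim, dim=-1)

def relation_distill_loss(student_feats, teacher_feats):
    """KL divergence between student and teacher relation maps, i.e. the
    student mimics how the teacher's tokens relate to one another rather
    than the teacher's raw features or CLS token."""
    s = token_relation(student_feats).clamp_min(1e-8).log()
    t = token_relation(teacher_feats)
    return F.kl_div(s, t, reduction="batchmean")

# Toy: teacher (ViT-Base width 768) vs student (ViT-Tiny width 192) tokens,
# with the teacher features taken from an intermediate layer.
teacher = torch.randn(8, 197, 768)
student = torch.randn(8, 197, 192, requires_grad=True)
loss = relation_distill_loss(student, teacher)
loss.backward()
```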
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
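The style-adaptation idea can be approximated with a short sketch: a style code producing per-channel scale/shift parameters for the feed-forward block (a FiLM-style stand-in). Note that the paper's style-aware adaptive transformer adjusts the FFN weights themselves, so this is a simplified assumption, not the actual module.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    """Sketch of a style-modulated feed-forward block: a style code is mapped
    to per-channel scale/shift parameters that adapt the FFN activations,
    so one decoder can render many speaking styles."""
    def __init__(self, d_model=256, d_ff=1024, d_style=64):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.to_scale = nn.Linear(d_style, d_ff)
        self.to_shift = nn.Linear(d_style, d_ff)

    def forward(self, x, style):  # x: (B, T, d_model), style: (B, d_style)
        h = torch.relu(self.fc1(x))
        scale = self.to_scale(style).unsqueeze(1)  # (B, 1, d_ff), over time
        shift = self.to_shift(style).unsqueeze(1)
        return self.fc2(h * (1 + scale) + shift)

# Toy usage: content features for 50 frames modulated by one style code.
ffn = StyleAdaptiveFFN()
content, style = torch.randn(4, 50, 256), torch.randn(4, 64)
out = ffn(content, style)  # (4, 50, 256) stylized features
```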
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP, which targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
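To illustrate the message-passing mechanism such a GNN relies on, here is a minimal sketch over a toy instruction graph; NeurDP's actual graph construction, node features, and LPL-to-IR decoding head are not described in this abstract, so every detail below is an assumption.

```python
import torch
import torch.nn as nn

class InstrGNNLayer(nn.Module):
    """One round of message passing over a (hypothetical) instruction graph:
    each node, e.g. a lifted low-level instruction, aggregates its neighbors'
    states through shared linear maps. Only the mechanism is sketched here;
    it is not NeurDP's actual architecture."""
    def __init__(self, dim=64):
        super().__init__()
        self.self_map = nn.Linear(dim, dim)
        self.neigh_map = nn.Linear(dim, dim)

    def forward(self, h, adj):  # h: (N, dim), adj: (N, N) 0/1 edge matrix
        deg = adj.sum(-1, keepdim=True).clamp_min(1)
        neigh = adj @ h / deg   # mean over control/data-flow neighbors
        return torch.relu(self.self_map(h) + self.neigh_map(neigh))

# Toy instruction graph: 5 instructions with control/data-flow edges.
h = torch.randn(5, 64)          # initial instruction embeddings
adj = torch.tensor([[0, 1, 0, 0, 0],
                    [0, 0, 1, 1, 0],
                    [0, 0, 0, 0, 1],
                    [0, 0, 0, 0, 1],
                    [0, 0, 0, 0, 0]], dtype=torch.float)
layer = InstrGNNLayer()
h = layer(h, adj)               # refined node states would feed an IR decoder
```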