Specific emitter identification (SEI) is a promising technique for physical-layer authentication and one of the most important complements to upper-layer authentication. SEI is based on radio frequency (RF) features arising from circuit differences rather than on cryptography. These features are inherent characteristics of the hardware circuits and are difficult to forge. Recently, various deep learning (DL)-based conventional SEI methods have been proposed and have achieved advanced performance. However, these methods were proposed for closed-set scenarios with massive RF signal samples for training, and they perform poorly when the training samples are limited. Therefore, we focus on few-shot SEI (FS-SEI) for aircraft identification via automatic dependent surveillance-broadcast (ADS-B) signals, and we propose a novel FS-SEI method based on deep metric ensemble learning (DMEL). Specifically, the proposed method consists of feature embedding and classification. The former is based on metric learning with a complex-valued convolutional neural network (CVCNN) for extracting discriminative features with compact intra-category distance and separable inter-category distance, while the latter is realized by an ensemble classifier. Simulation results show that when the number of samples per category exceeds 5, the average accuracy of our proposed method is higher than 98%. Moreover, feature visualization demonstrates the advantages of our proposed method in both discriminability and generalization. The code for this paper can be downloaded from GitHub (https://github.com/beechburgpiestar/few-shot-specific-emitter-emitter-istifification-via-deep-metric-metric-semble-learning).
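As a rough illustration of the two components the abstract names, the sketch below implements a complex-valued convolution (the CVCNN building block) and a contrastive metric loss that encourages compact intra-category and separable inter-category distances. The layer sizes, loss form, and toy I/Q data are assumptions for illustration, not the authors' actual design.

```python
# Minimal sketch: complex-valued conv + metric-learning loss (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComplexConv1d(nn.Module):
    """Complex conv via two real convs: (a+bi)(c+di) = (ac-bd) + i(ad+bc)."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.real = nn.Conv1d(in_ch, out_ch, kernel_size)
        self.imag = nn.Conv1d(in_ch, out_ch, kernel_size)

    def forward(self, x_re, x_im):
        y_re = self.real(x_re) - self.imag(x_im)
        y_im = self.real(x_im) + self.imag(x_re)
        return y_re, y_im

def metric_loss(emb, labels, margin=1.0):
    """Contrastive-style loss: compact intra-class, separated inter-class."""
    d = torch.cdist(emb, emb)                      # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = d[same].pow(2).mean()                    # pull same class together
    neg = F.relu(margin - d[~same]).pow(2).mean()  # push classes apart
    return pos + neg

# Toy usage on random I/Q data: real and imaginary parts of length-128 signals.
conv = ComplexConv1d(1, 16, kernel_size=7)
x_re, x_im = torch.randn(8, 1, 128), torch.randn(8, 1, 128)
y_re, y_im = conv(x_re, x_im)
emb = torch.cat([y_re, y_im], dim=1).mean(dim=2)   # crude pooled embedding
loss = metric_loss(emb, torch.randint(0, 4, (8,)))
```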
We propose a novel action sequence planner based on the combination of affordance recognition and a neural forward model that predicts the effects of affordance execution. By performing affordance recognition on predicted future states, we avoid relying on explicit affordance-effect definitions for multi-step planning. Because the system learns affordance effects from experience data, it can foresee not only the canonical effects of an affordance but also situation-specific side effects. This enables the system to avoid planning failures caused by such non-canonical effects, and to exploit non-canonical effects to achieve a given goal. We evaluate the system in simulation on a set of test tasks that require taking both canonical and non-canonical affordance effects into account.
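To make the planning idea concrete, here is a minimal sketch of a breadth-first search in which affordance recognition is applied to states predicted by a learned forward model, so that plans account for whatever effects, canonical or not, the model has learned. The `forward_model`, `recognize_affordances`, and `goal_reached` callables are hypothetical placeholders for learned components, not the paper's actual models.

```python
# Minimal sketch: plan over affordances recognized on *predicted* futures.
def plan(state, forward_model, recognize_affordances, goal_reached, horizon=3):
    """Breadth-first search over affordance sequences on predicted futures."""
    frontier = [(state, [])]                    # (predicted state, actions so far)
    for _ in range(horizon):
        next_frontier = []
        for s, seq in frontier:
            for affordance in recognize_affordances(s):   # recognized on predictions
                s_next = forward_model(s, affordance)     # learned effect prediction
                if goal_reached(s_next):
                    return seq + [affordance]
                next_frontier.append((s_next, seq + [affordance]))
        frontier = next_frontier
    return None                                 # no plan within the horizon
```

Because `forward_model` is learned from experience rather than from hand-written effect definitions, side effects it has observed will show up in `s_next` and thus be taken into account by the search.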
Computing Bayesian posteriors and model evidence typically requires numerical integration. Bayesian quadrature (BQ) is a surrogate-model-based numerical integration method capable of remarkable sample efficiency, but its lack of parallelization has hindered its practical application. In this work, we propose a parallelized (batch) BQ method that employs techniques from kernel quadrature with a provably exponential convergence rate. In addition, like nested sampling, our method allows simultaneous inference of both the posterior and the model evidence. Samples drawn from the BQ surrogate model are re-selected into a sparse set of samples via a kernel recombination algorithm, requiring negligible extra time as the batch size grows. Empirically, we find that our method significantly outperforms state-of-the-art BQ techniques and nested sampling in sampling efficiency on a variety of real-world datasets, including lithium-ion battery analytics.
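A minimal sketch of the batch-BQ loop described above: fit a surrogate to integrand evaluations, draw many candidates, compress them into a small batch (the paper uses kernel recombination; a simple uncertainty ranking stands in for it here), evaluate the batch in parallel, and repeat. The toy integrand and all names are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: batch Bayesian quadrature with a GP surrogate (assumed setup).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.exp(-x**2) * np.sin(3 * x)       # toy integrand on [-3, 3]

X = rng.uniform(-3, 3, size=(5, 1))               # initial evaluations
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8)

for _ in range(10):                               # batch BQ iterations
    gp.fit(X, y)
    cand = rng.uniform(-3, 3, size=(500, 1))      # candidates from the prior
    _, std = gp.predict(cand, return_std=True)
    batch = cand[np.argsort(std)[-4:]]            # stand-in for recombination
    X = np.vstack([X, batch])                     # batch evaluated "in parallel"
    y = np.concatenate([y, f(batch).ravel()])

gp.fit(X, y)
# Quadrature estimate: posterior mean integrated by Monte Carlo over the prior.
grid = rng.uniform(-3, 3, size=(20000, 1))
est = 6.0 * gp.predict(grid).mean()               # interval length * average
print(f"BQ estimate of the integral: {est:.4f}")
```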
Differentially private federated learning (DP-FL) has received increasing attention as a way to mitigate the privacy risks of federated learning. Although various schemes for DP-FL have been proposed, a utility gap remains. Employing central differential privacy in FL (CDP-FL) can provide a good balance between privacy and model utility, but it requires a trusted server. Using local differential privacy for FL (LDP-FL) does not require a trusted server, but it suffers from a poor privacy-utility trade-off. Recently proposed shuffle-DP-based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, a utility gap still remains when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits of CDP-FL and LDP-FL by leveraging a Trusted Execution Environment (TEE). Our main technical contributions are the analysis of, and countermeasures against, the vulnerability of the TEE in OLIVE. First, we theoretically analyze the memory access pattern leakage of OLIVE and show that sparsified gradients, which are common in FL, are at risk. Second, we design an inference attack that shows how the memory access pattern can be linked to the training data. Third, we propose oblivious yet efficient algorithms that prevent the memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters, and that it is effective against side-channel attacks on the TEE.
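The sketch below illustrates the general idea behind memory-access-pattern obliviousness, not OLIVE's actual algorithm (which uses far more efficient oblivious primitives): a naive scatter-add of sparsified updates touches only the slots a client sent, leaking its index set to a memory-access side channel, whereas a full linear scan with a branch-free select touches every slot identically regardless of the data.

```python
# Minimal sketch: leaky vs. oblivious aggregation of sparsified updates
# (illustrative pseudologic only; real TEE code needs constant-time primitives).
import numpy as np

def naive_aggregate(global_dim, updates):
    """Leaky: memory accesses reveal each client's sparse index set."""
    acc = np.zeros(global_dim)
    for idx, vals in updates:
        acc[idx] += vals                      # touches only the sent slots
    return acc

def oblivious_aggregate(global_dim, updates):
    """Oblivious: every slot is read and written for every (index, value) pair."""
    acc = np.zeros(global_dim)
    for idx, vals in updates:
        for i, v in zip(idx, vals):
            for j in range(global_dim):       # full scan per pair: O(d) each
                hit = (j == i)                # data-independent select
                acc[j] = acc[j] + hit * v     # adds v only where j == i
    return acc

updates = [(np.array([2, 5]), np.array([0.1, -0.3])),
           (np.array([0, 5]), np.array([0.2, 0.4]))]
assert np.allclose(naive_aggregate(8, updates), oblivious_aggregate(8, updates))
```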