Communication is important in many multi-agent reinforcement learning (MARL) problems, as it allows agents to share information and make good decisions. However, when deploying trained communicative agents in real-world applications where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that has been overlooked. Specifically, if communication messages are manipulated by a malicious attacker, agents relying on untrustworthy communication may take unsafe actions that lead to catastrophic consequences. Therefore, it is crucial to ensure that agents are not misled by corrupted communication while still benefiting from benign communication. In this work, we consider an environment with $n$ agents, where an attacker may arbitrarily alter the communication from any $c < \frac{n-1}{2}$ agents to a victim agent. For this strong threat model, we propose a certifiable defense by constructing a message-ensemble policy that aggregates multiple randomly ablated message sets. Theoretical analysis shows that this message-ensemble policy can utilize benign communication while being certifiably robust to adversarial communication, regardless of the attacking algorithm. Experiments in multiple environments verify that our defense significantly improves the robustness of trained policies against various attacks.
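As a rough illustration of the message-ensemble idea described above, the sketch below computes an action on several randomly ablated message subsets and takes a majority vote. The policy interface, the subset size `k`, and the number of samples are illustrative assumptions, not the paper's exact construction.

```python
import random
from collections import Counter

def message_ensemble_action(policy, observation, messages, k, num_samples=20, seed=0):
    """Aggregate actions computed on randomly ablated message subsets.

    policy(observation, message_subset) -> discrete (hashable) action
    messages: dict mapping sender id -> message received from that sender
    k: number of messages kept in each ablated subset (k <= len(messages))
    """
    rng = random.Random(seed)
    senders = list(messages)
    votes = Counter()
    for _ in range(num_samples):
        kept = rng.sample(senders, k)                 # randomly ablate all but k messages
        subset = {s: messages[s] for s in kept}
        votes[policy(observation, subset)] += 1
    # Majority vote over the ensemble; with few corrupted senders, most sampled
    # subsets contain only benign messages, so their vote tends to dominate.
    return votes.most_common(1)[0][0]
```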
Investigation and analysis of patient outcomes, including in-hospital mortality and length of stay, are crucial for helping clinicians estimate a patient's outcome at the outset of hospitalization and for helping hospitals allocate their resources. This paper proposes an approach that combines the well-known gray wolf algorithm with frequent items extracted by association rule mining. First, the original features are combined with the discriminative extracted frequent items. The gray wolf algorithm is then used to choose the best subset of these features and to tune the parameters of the classification algorithms. The framework was evaluated on a real dataset of 2816 patients from the Imam Ali Kermanshah Hospital in Iran. The study's findings indicate that low ejection fraction, old age, high CPK values, and high creatinine levels are the main contributors to patient mortality. Several significant and interesting rules related to in-hospital mortality and length of stay have also been extracted and presented. Additionally, with the SVM classifier, the accuracy, sensitivity, specificity, and AUROC of the proposed framework for diagnosing in-hospital mortality were 0.9961, 0.9477, 0.9992, and 0.9734, respectively. According to the framework's findings, adding frequent items as features considerably improves classification accuracy.
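A minimal sketch of wrapper-style feature selection with the gray wolf optimizer and an SVM, in the spirit of the framework above; the position encoding, parameter ranges, and fitness definition are assumptions for illustration rather than the authors' exact setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(position, X, y):
    mask = position[:-2] > 0.5                       # binary feature mask
    if not mask.any():
        return 0.0
    C = 10 ** (position[-2] * 4 - 2)                 # assumed range: C in [1e-2, 1e2]
    gamma = 10 ** (position[-1] * 4 - 3)             # assumed range: gamma in [1e-3, 1e1]
    clf = SVC(C=C, gamma=gamma)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def gray_wolf_search(X, y, n_wolves=10, n_iter=30, seed=0):
    """Jointly select a feature subset and SVM hyperparameters with GWO."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] + 2                             # feature mask + (C, gamma)
    wolves = rng.random((n_wolves, dim))
    scores = np.array([fitness(w, X, y) for w in wolves])
    for t in range(n_iter):
        order = np.argsort(scores)[::-1]
        alpha, beta, delta = wolves[order[:3]]       # three leading wolves
        a = 2 - 2 * t / n_iter                       # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C_coef = 2 * a * r1 - a, 2 * r2
                D = np.abs(C_coef * leader - wolves[i])
                new += leader - A * D
            wolves[i] = np.clip(new / 3, 0, 1)
            scores[i] = fitness(wolves[i], X, y)
    best = wolves[scores.argmax()]
    return best[:-2] > 0.5, best[-2:], scores.max()

# Example usage on a toy dataset:
# from sklearn.datasets import make_classification
# X, y = make_classification(n_samples=200, n_features=12, random_state=0)
# mask, svm_params, score = gray_wolf_search(X, y)
```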
The spread of misinformation is a prominent problem in today's society, and many researchers in academia and industry are trying to combat it. Due to the vast amount of misinformation created every day, it is unrealistic to leave this task to human fact-checkers alone. Data scientists and researchers have been working on automated misinformation detection for years, and it remains a challenging problem today. The goal of our research is to add a new level to automated misinformation detection: classifying segments of text by the persuasive writing techniques they use, in order to produce interpretable reasoning for why an article can be marked as misinformation. To accomplish this, we present a novel annotation scheme covering many common persuasive writing tactics, along with a dataset annotated accordingly by humans. For this task, we use a RoBERTa model for text classification, due to its high performance in NLP. We develop several language-model-based baselines and present the results of our persuasive-strategy label predictions, as well as the improvements these intermediate labels bring to detecting misinformation and producing interpretable results.
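As a small, hedged example of the kind of segment-level classifier described above, the snippet below attaches a sequence-classification head to `roberta-base` with Hugging Face transformers. The label set is a placeholder, not the paper's annotation scheme, and the head must be fine-tuned on the annotated dataset before its predictions are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder tactics; the paper's annotation scheme defines its own label set.
LABELS = ["emotional_appeal", "exaggeration", "whataboutism", "none"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)
model.eval()  # untrained head: fine-tune on the annotated segments first

def predict_tactic(segment: str) -> str:
    """Classify one text segment into a persuasive-writing tactic."""
    inputs = tokenizer(segment, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_tactic("Everyone knows this miracle cure works better than anything doctors offer!"))
```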
Developing robust and fair AI systems requires datasets with a comprehensive set of labels that can help ensure the validity and legitimacy of relevant measurements. Recent efforts therefore focus on collecting person-related datasets that have carefully selected labels, including sensitive characteristics, and consent forms in place to use those attributes for model testing and development. Responsible data collection involves several stages, including but not limited to determining use-case scenarios, selecting categories (annotations) such that the data are fit for the purpose of measuring algorithmic bias for subgroups, and, most importantly, ensuring that the selected categories/subcategories are robust to regional diversities and inclusive of as many subgroups as possible. Meta, in a continuation of our efforts to measure AI algorithmic bias and robustness (https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set), is working on collecting a large consent-driven dataset with a comprehensive list of categories. This paper describes our proposed design of such categories and subcategories for Casual Conversations v2.
This work explores the zero-shot compositional learning ability of large pre-trained vision-language models (VLMs) within the prompt-based learning framework and proposes a model (\textit{PromptCompVL}) to solve the compositional zero-shot learning (CZSL) problem. \textit{PromptCompVL} makes two design choices: first, it uses soft-prompting instead of hard-prompting to inject learnable parameters that reprogram VLMs for compositional learning. Second, to address the compositional challenge, it uses a soft-embedding layer to learn primitive concepts in different combinations. By combining soft-embedding and soft-prompting, \textit{PromptCompVL} achieves state-of-the-art performance on the MIT-States dataset. Furthermore, our proposed model achieves consistent improvements over other CLIP-based methods, which shows the effectiveness of the proposed prompting strategies for CZSL.
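The toy module below sketches the combination of a learnable soft prompt with soft attribute/object embeddings in front of a frozen text encoder; a small `nn.TransformerEncoder` stands in for the VLM text encoder, and all names and dimensions are illustrative assumptions rather than PromptCompVL's actual implementation.

```python
import torch
import torch.nn as nn

class SoftPromptComposer(nn.Module):
    """Learnable soft prompt + primitive embeddings fed to a frozen text encoder.

    The encoder here is a stand-in for the VLM text encoder; only the prompt
    and the attribute/object embeddings are trainable, mirroring soft-prompting.
    """

    def __init__(self, attrs, objs, dim=512, prompt_len=4):
        super().__init__()
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.attr_emb = nn.Embedding(len(attrs), dim)   # primitive: attributes
        self.obj_emb = nn.Embedding(len(objs), dim)     # primitive: objects
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.encoder.parameters():             # frozen backbone
            p.requires_grad_(False)

    def forward(self, attr_ids, obj_ids):
        b = attr_ids.shape[0]
        prompt = self.soft_prompt.expand(b, -1, -1)
        tokens = torch.stack([self.attr_emb(attr_ids), self.obj_emb(obj_ids)], dim=1)
        seq = torch.cat([prompt, tokens], dim=1)        # [prompt | attr | obj]
        return self.encoder(seq).mean(dim=1)            # pooled composition embedding

# A composition embedding can then be scored against image features, e.g. by cosine similarity.
model = SoftPromptComposer(attrs=["wet", "dry"], objs=["cat", "road"])
emb = model(torch.tensor([0, 1]), torch.tensor([1, 0]))
print(emb.shape)  # torch.Size([2, 512])
```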
Recent research shows that synthetic data as a source of supervision helps pretrained language models (PLMs) transfer to new target tasks/domains. However, this idea is less explored for spatial language. We provide two new data resources for multiple spatial language processing tasks. The first dataset is synthesized for transfer learning on spatial question answering (SQA) and spatial role labeling (SpRL). Compared to previous SQA datasets, we include a larger variety of spatial relation types and spatial expressions. Our data generation process is easily extendable with new spatial expression lexicons. The second is a real-world SQA dataset with human-generated questions built on an existing corpus with SpRL annotations. This dataset can be used to evaluate spatial language processing models in realistic situations. We show that pretraining with automatically generated data significantly improves the SOTA results on several SQA and SpRL benchmarks, particularly when the training data in the target domain is small.
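To make the idea of lexicon-driven synthesis concrete, here is a toy generator built around a small spatial-expression lexicon; the lexicon, templates, and field names are invented for illustration and are far simpler than the released resource.

```python
import random

# Toy lexicon; the real resource covers many more relation types and
# spatial expressions, and is meant to be extended in the same way.
SPATIAL_LEXICON = {
    "LEFT": ["to the left of", "on the left side of"],
    "ABOVE": ["above", "over"],
    "NEAR": ["near", "next to", "close to"],
}
OBJECTS = ["the red ball", "the small box", "the lamp", "the chair"]

def generate_sqa_example(rng=random):
    """Generate one synthetic (context, question, answer) triple for SQA."""
    a, b = rng.sample(OBJECTS, 2)
    relation = rng.choice(list(SPATIAL_LEXICON))
    expression = rng.choice(SPATIAL_LEXICON[relation])
    context = f"{a.capitalize()} is {expression} {b}."
    question = f"Is {a} {expression} {b}?"
    return {"context": context, "question": question, "answer": "yes", "relation": relation}

print(generate_sqa_example())
```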
Understanding spatial and visual information is essential for a navigation agent that follows natural language instructions. Current transformer-based VLN agents entangle orientation and vision information, which limits the gain from learning each information source. In this paper, we design a neural agent with explicit orientation and vision modules. These modules learn to ground spatial information and landmark mentions in the instructions to the visual environment. To strengthen the agent's spatial reasoning and visual perception, we design specific pre-training tasks that feed and better utilize the corresponding modules in our final navigation model. We evaluate our approach on the Room2Room (R2R) and Room4Room (R4R) datasets and achieve state-of-the-art results on both benchmarks.
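A loose sketch of the disentangled design described above: one attention module grounds the instruction with orientation features and another with visual features, and their outputs are fused to score candidate views. The dimensions, fusion, and scoring head are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DisentangledVLNHead(nn.Module):
    """Separate orientation and vision modules attending over instruction tokens."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.orient_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vision_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.score = nn.Linear(2 * dim, 1)  # one logit per candidate view

    def forward(self, instr_tokens, orient_feat, vision_feat):
        # Each candidate view queries the instruction with its own modality.
        o, _ = self.orient_attn(orient_feat, instr_tokens, instr_tokens)
        v, _ = self.vision_attn(vision_feat, instr_tokens, instr_tokens)
        return self.score(torch.cat([o, v], dim=-1)).squeeze(-1)

head = DisentangledVLNHead()
instr = torch.randn(2, 20, 256)   # (batch, instruction length, dim)
orient = torch.randn(2, 6, 256)   # (batch, candidate views, dim) orientation features
vision = torch.randn(2, 6, 256)   # (batch, candidate views, dim) visual features
print(head(instr, orient, vision).shape)  # torch.Size([2, 6]) action logits over views
```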
This work investigates the challenge of learning and reasoning for question answering given an external source of knowledge in the form of a knowledge graph (KG). We propose a novel graph neural network architecture called the Dynamic Relevance Graph Network (DRGN). DRGN operates on a given KG subgraph based on the question and answer entities and uses the relevance scores between nodes to dynamically establish new edges for learning node representations in the graph network. This explicit use of relevance as graph edges has the following advantages: a) the model can exploit existing relationships, re-scale the node weights, and influence how the representations of neighboring nodes are aggregated in the KG subgraph; b) it can recover edges missing from the KG that are needed for reasoning. Moreover, as a byproduct, our model improves the handling of negative questions by considering the relevance between the question node and the graph entities. Compared with the state-of-the-art published results, our proposed approach shows competitive performance on two QA benchmarks, CommonsenseQA and OpenBookQA.
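A condensed sketch of the relevance-driven edge construction described for DRGN: cosine similarity serves as the relevance score, the top-k most relevant neighbors are added as dynamic edges on top of the existing KG edges, and the scores re-weight neighborhood aggregation. The scoring function and aggregation are simplified assumptions, not DRGN's exact layer.

```python
import torch
import torch.nn.functional as F

def relevance_augmented_layer(node_feats, kg_adj, top_k=2):
    """One graph layer that adds relevance-based edges before aggregation.

    node_feats: (N, d) node embeddings of the KG subgraph
    kg_adj:     (N, N) 0/1 adjacency of the existing KG edges
    """
    # Pairwise relevance scores between nodes (cosine similarity here).
    normed = F.normalize(node_feats, dim=-1)
    relevance = normed @ normed.t()
    relevance.fill_diagonal_(-1.0)

    # Dynamically add the top-k most relevant neighbours per node,
    # potentially recovering edges missing from the KG.
    dynamic_adj = torch.zeros_like(kg_adj)
    idx = relevance.topk(top_k, dim=-1).indices
    dynamic_adj.scatter_(1, idx, 1.0)

    adj = ((kg_adj + dynamic_adj) > 0).float()
    # Relevance scores re-scale the contribution of each neighbour.
    weights = F.softmax(relevance.masked_fill(adj == 0, float("-inf")), dim=-1)
    return weights @ node_feats

feats = torch.randn(5, 16)
kg = torch.eye(5)  # toy subgraph with self-loops only
print(relevance_augmented_layer(feats, kg).shape)  # torch.Size([5, 16])
```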
Language identification is critical for many downstream tasks in automatic speech recognition (ASR), and it is beneficial to integrate it into multilingual end-to-end ASR as an additional task. In this paper, we propose to modify the structure of a cascaded-encoder-based recurrent neural network transducer (RNN-T) model by integrating a per-frame language identification (LID) predictor. RNN-T with cascaded encoders can achieve low-latency streaming ASR using first-pass decoding with no right-context, and lower word error rates (WERs) using second-pass decoding with longer right-context. By leveraging this difference in right-context together with a streaming implementation of statistics pooling, the proposed method can achieve accurate streaming LID prediction with little extra test-time cost. Experimental results on a voice search dataset with 9 language locales show that the proposed method achieves an average LID prediction accuracy of 96.2% and performs the same as the second-pass approach that uses an oracle LID in the input.
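The module below sketches a per-frame LID head with a streaming form of statistics pooling: cumulative mean and standard deviation over the frames seen so far are concatenated to each encoder frame before classification. The dimensions and the exact pooling recipe are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class StreamingLIDPredictor(nn.Module):
    """Per-frame LID head on top of first-pass encoder outputs.

    Running mean/std over the frames seen so far are appended to each frame,
    so a LID posterior is available at every frame without waiting for the
    end of the utterance.
    """

    def __init__(self, enc_dim=512, num_locales=9):
        super().__init__()
        self.classifier = nn.Linear(3 * enc_dim, num_locales)

    def forward(self, enc_frames):                # (batch, time, enc_dim)
        t = torch.arange(1, enc_frames.shape[1] + 1, device=enc_frames.device)
        counts = t.view(1, -1, 1).float()
        cum_sum = enc_frames.cumsum(dim=1)
        cum_sq = (enc_frames ** 2).cumsum(dim=1)
        mean = cum_sum / counts                   # running mean up to each frame
        var = (cum_sq / counts - mean ** 2).clamp_min(0.0)
        pooled = torch.cat([enc_frames, mean, var.sqrt()], dim=-1)
        return self.classifier(pooled)            # per-frame LID logits

lid = StreamingLIDPredictor()
frames = torch.randn(2, 100, 512)                 # streaming encoder outputs
print(lid(frames).shape)                          # torch.Size([2, 100, 9])
```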
Adversarial training (AT) has been proven to be an effective way of introducing strong adversarial robustness into deep neural networks. However, the high computational cost of AT prohibits its use in federated learning (FL) applications with limited computing power and small memory footprints, e.g., deploying large-scale AT on resource-constrained edge devices. Few previous studies have attempted to address these constraints simultaneously. In this paper, we propose a new framework named Federated Adversarial Decoupled Learning (FADE) to enable AT on resource-constrained edge devices in FL. FADE reduces computation and memory usage by applying Decoupled Greedy Learning (DGL) to federated adversarial training, so that each client only needs to perform AT on a small module of the entire model in each communication round. In addition, we improve vanilla DGL by adding an auxiliary weight decay to alleviate objective inconsistency and achieve better performance. FADE provides theoretical guarantees on adversarial robustness and convergence. Experimental results also show that FADE can significantly reduce the computing resources consumed while maintaining almost the same accuracy and robustness as fully joint training.
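A minimal, hedged sketch of one client-side step in the spirit of FADE: the client trains only its assigned module against a local auxiliary head, the inner maximization is approximated here with single-step FGSM, and an auxiliary weight-decay term regularizes the local head. All of these specifics are assumptions for illustration rather than FADE's exact procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_decoupled_at_step(module, aux_head, x, y, epsilon=0.03, aux_wd=1e-3, lr=1e-2):
    """One client update on a single decoupled module with a local adversarial example.

    module:   the sub-network assigned to this client for the current round
    aux_head: the client's auxiliary classifier used as a local objective
    x:        this module's input activations (detached from upstream modules)
    """
    params = list(module.parameters()) + list(aux_head.parameters())
    opt = torch.optim.SGD(params, lr=lr)

    # Craft an adversarial perturbation against the local objective
    # (single-step FGSM as a lightweight stand-in for the inner maximization).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(aux_head(module(x_adv)), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Train the module on the adversarial input; the auxiliary weight-decay
    # term regularizes the local head to reduce objective inconsistency.
    opt.zero_grad()
    adv_loss = F.cross_entropy(aux_head(module(x_adv)), y)
    aux_reg = aux_wd * sum((p ** 2).sum() for p in aux_head.parameters())
    (adv_loss + aux_reg).backward()
    opt.step()
    return float(adv_loss)

# Toy example: a client that owns one linear block of a larger split model.
module = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
aux_head = nn.Linear(64, 10)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(local_decoupled_at_step(module, aux_head, x, y))
```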