Accurate infarct segmentation in non-contrast CT (NCCT) images is a crucial step toward computer-aided assessment of acute ischemic stroke (AIS). In clinical practice, bilateral symmetric comparison of the brain hemispheres is commonly used to localize pathological abnormalities, and recent research has explored asymmetries to assist AIS segmentation. However, most previous symmetry-based work mixed different types of asymmetry when evaluating their contribution to AIS. In this paper, we propose a novel Asymmetry Disentanglement Network (ADN) to automatically separate pathological asymmetries from intrinsic anatomical asymmetries in NCCT for more effective and interpretable AIS segmentation. ADN first performs asymmetry disentanglement based on the input NCCT, producing different types of 3D asymmetry maps. It then generates synthetic, intrinsic-asymmetry-compensated and pathology-asymmetry-salient NCCT volumes, which are later used as inputs to a segmentation network. The training of ADN incorporates domain knowledge and adopts a tissue-type-aware regularization loss function to encourage clinically sensible extraction of pathological asymmetries. Coupled with an unsupervised 3D transformation network, ADN achieves state-of-the-art AIS segmentation performance on a public NCCT dataset. Beyond its strong performance, we believe the learned, clinically interpretable asymmetry maps can also provide insights toward a better understanding of AIS assessment. Our code is available at https://github.com/nihaomiao/miccai22_adn.
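As a rough illustration of the asymmetry-map idea (a minimal sketch under assumptions, not the authors' ADN implementation), the snippet below mirrors a mid-sagittally aligned NCCT volume to obtain a voxel-wise asymmetry map and an intrinsic-asymmetry-compensated volume; the alignment assumption, function names, and toy values are hypothetical.

```python
import numpy as np

def asymmetry_map(volume: np.ndarray) -> np.ndarray:
    """Voxel-wise left-right asymmetry of a mid-sagittally aligned volume.
    Assumes axis 0 is the left-right axis and the mid-sagittal plane sits
    at the array center (real pipelines register the brain first)."""
    mirrored = volume[::-1, :, :]           # flip across the mid-sagittal plane
    return volume - mirrored                # signed hemispheric differences

def compensate_intrinsic(volume: np.ndarray, intrinsic: np.ndarray) -> np.ndarray:
    """Subtract an estimated intrinsic (anatomical) asymmetry component so
    that the remaining, pathological asymmetry becomes more salient."""
    return volume - intrinsic

# Toy example: a 64^3 "brain" with a simulated hypodense lesion in one hemisphere.
vol = np.full((64, 64, 64), 40.0)           # uniform tissue attenuation (HU)
vol[40:50, 20:30, 20:30] -= 8.0             # lesion lowers attenuation on one side
asym = asymmetry_map(vol)
print(asym.min(), asym.max())               # lesion appears as -8/+8 HU asymmetry
```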
Stroke triage or screening in the emergency room (ER) setting is a common challenge. Because of the slow throughput and high cost of MRI, a quick CT is usually performed instead, and clinical tests are commonly referred to during this process, yet the misdiagnosis rate remains high. We propose a novel multimodal deep learning framework, DeepStroke, for computer-aided assessment of stroke presence by recognizing patterns of subtle facial muscle incoordination and speech inability in patients suspected of stroke in an acute setting. DeepStroke takes one-minute facial video and audio data, which are easy to acquire during stroke triage, for local facial paralysis detection and global speech disorder analysis. Transfer learning is adopted to reduce face-attribute bias and improve generalizability. We leverage multimodal lateral fusion to combine low-level and high-level features and to provide mutual regularization for joint training, and we introduce a novel adversarial training scheme to obtain identity-free, stroke-discriminative features. Experiments on a video-audio dataset of actual ER patients show that DeepStroke outperforms state-of-the-art models and achieves better performance than the triage team and ER physicians, attaining 10.94% higher sensitivity and 7.37% higher accuracy than traditional stroke triage when specificity is aligned. Meanwhile, each assessment can be completed in less than six minutes, demonstrating the framework's strong potential for clinical translation.
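To make the lateral-fusion idea concrete, here is a minimal sketch assuming precomputed per-modality feature vectors and invented layer sizes (DeepStroke's actual architecture, adversarial identity branch, and loss weights are not reproduced): each modality keeps its own classification head as mutual regularization, while a fused head combines low- and high-level features.

```python
import torch
import torch.nn as nn

class LateralFusionClassifier(nn.Module):
    """Toy two-stream fusion: per-modality heads regularize each other,
    and a fused head sees both raw (low-level) and encoded (high-level)
    features. All dimensions here are illustrative guesses."""

    def __init__(self, video_dim=512, audio_dim=256, hidden=128):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.video_head = nn.Linear(hidden, 2)          # stroke vs. non-stroke
        self.audio_head = nn.Linear(hidden, 2)
        self.fused_head = nn.Linear(video_dim + audio_dim + 2 * hidden, 2)

    def forward(self, video_feat, audio_feat):
        hv, ha = self.video_enc(video_feat), self.audio_enc(audio_feat)
        fused = torch.cat([video_feat, audio_feat, hv, ha], dim=-1)
        return self.video_head(hv), self.audio_head(ha), self.fused_head(fused)

model = LateralFusionClassifier()
v, a = torch.randn(4, 512), torch.randn(4, 256)
logits_v, logits_a, logits_f = model(v, a)
labels = torch.tensor([0, 1, 0, 1])
ce = nn.CrossEntropyLoss()
# Joint loss: each stream is supervised separately, regularizing the other.
loss = ce(logits_f, labels) + 0.5 * (ce(logits_v, labels) + ce(logits_a, labels))
loss.backward()
```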
Affective behavior analysis has attracted researchers' attention because of its wide range of applications. However, obtaining accurate annotations for large numbers of facial images is laborious. We therefore propose to exploit prior facial information via a masked autoencoder (MAE) pretrained on unlabeled facial images. In addition, we combine the MAE-pretrained Vision Transformer (ViT) with an AffectNet-pretrained CNN to perform multi-task emotion recognition. We observe that expression and action unit (AU) scores are pure and complete features for valence-arousal (VA) regression. Accordingly, we use the AffectNet-pretrained CNN to extract expression scores, concatenate them with the expression and AU scores from the ViT, and obtain the final VA features. Moreover, we propose a co-training framework with two parallel MAE-pretrained ViTs for the expression recognition task. To keep the two views independent, we randomly mask most of the patches during training, and a Jensen-Shannon (JS) divergence is then applied to make the predictions of the two views as consistent as possible. Results on ABAW4 show that our method is effective.
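A minimal sketch of the JS-divergence consistency term between the two masked views (the class count, batch size, and weighting are assumptions, not the authors' exact setup):

```python
import torch
import torch.nn.functional as F

def js_consistency(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence between two views' class distributions,
    used as a consistency loss so both randomly masked views agree."""
    p = F.softmax(logits_a, dim=-1)
    q = F.softmax(logits_b, dim=-1)
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.clamp_min(1e-8).log() - b.clamp_min(1e-8).log())).sum(-1)
    return 0.5 * (kl(p, m) + kl(q, m)).mean()

# Two views of the same batch, e.g., from two ViTs with different random masks.
logits_view1 = torch.randn(8, 8, requires_grad=True)   # 8 expression classes (toy)
logits_view2 = torch.randn(8, 8, requires_grad=True)
loss = js_consistency(logits_view1, logits_view2)
loss.backward()
print(float(loss))
```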
A storyboard is a roadmap for video creation which consists of shot-by-shot images to visualize key plots in a text synopsis. Creating video storyboards, however, remains challenging: it not only requires association between high-level texts and images, but also demands long-term reasoning to make transitions smooth across shots. In this paper, we propose a new task called Text synopsis to Video Storyboard (TeViS) which aims to retrieve an ordered sequence of images to visualize the text synopsis. We construct a MovieNet-TeViS benchmark based on the public MovieNet dataset. It contains 10K text synopses, each paired with keyframes that are manually selected from the corresponding movies by considering both relevance and cinematic coherence. We also present an encoder-decoder baseline for the task. The model uses a pretrained vision-and-language model to improve high-level text-image matching. To improve coherence in long-term shots, we further propose to pre-train the decoder on large-scale movie frames without text. Experimental results demonstrate that our proposed model significantly outperforms other models in creating text-relevant and coherent storyboards. Nevertheless, there is still a large gap compared to human performance, suggesting room for promising future work.
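As a loose, hypothetical illustration of the task's relevance-plus-coherence trade-off (this is not the paper's encoder-decoder baseline), the snippet below greedily selects an ordered frame sequence by mixing synopsis-to-frame relevance with frame-to-frame similarity; both similarity inputs stand in for embeddings from a pretrained vision-and-language model.

```python
import numpy as np

def select_storyboard(text_img_sim, img_img_sim, num_shots, coherence_weight=0.5):
    """Greedy baseline: pick frames that are relevant to the synopsis and
    similar to the previously chosen frame, so transitions stay smooth."""
    chosen = [int(np.argmax(text_img_sim))]
    for _ in range(num_shots - 1):
        prev = chosen[-1]
        score = text_img_sim + coherence_weight * img_img_sim[prev]
        score[chosen] = -np.inf                 # no repeated frames
        chosen.append(int(np.argmax(score)))
    return chosen

rng = np.random.default_rng(0)
n_frames = 20
text_img_sim = rng.random(n_frames)             # synopsis-to-frame relevance (toy)
emb = rng.random((n_frames, 16))
img_img_sim = emb @ emb.T                       # frame-to-frame similarity (toy)
print(select_storyboard(text_img_sim, img_img_sim, num_shots=5))
```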
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, so the analysis of robustness for c-MARL models is profoundly important. However, robustness certification for c-MARL has not yet been explored in the community. In this paper, we propose a novel certification method, which is the first work to leverage a scalable approach for c-MARL to determine actions with guaranteed certified bounds. c-MARL certification poses two key challenges compared with single-agent systems: (i) the accumulated uncertainty as the number of agents increases; (ii) the potentially negligible impact that changing a single agent's action has on the global team reward. These challenges prevent us from directly using existing algorithms. Hence, we employ the false discovery rate (FDR) controlling procedure, considering the importance of each agent, to certify per-state robustness, and propose a tree-search-based algorithm to find a lower bound of the global reward under the minimal certified perturbation. As our method is general, it can also be applied in single-agent environments. We empirically show that our certification bounds are much tighter than state-of-the-art RL certification solutions. We also run experiments on two popular c-MARL algorithms, QMIX and VDN, in two different environments, with two and four agents. The experimental results show that our method produces meaningful guaranteed robustness for all models and environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMA
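For reference, a generic Benjamini-Hochberg FDR controlling procedure is sketched below (the paper's variant additionally weights agents by importance, which is omitted here); each p-value can be read as one agent's per-state certification test.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Standard Benjamini-Hochberg FDR procedure: returns a boolean mask of
    hypotheses that can be rejected at false-discovery rate alpha."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()          # largest rank passing its threshold
        reject[order[: k + 1]] = True
    return reject

# Toy example: one p-value per agent's certification test in a given state.
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.3, 0.8]))
```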
We present Hybrid Infused Reranking for Passages Retrieval (HYRR), a framework for training rerankers based on a hybrid of BM25 and neural retrieval models. Retrievers based on hybrid models have been shown to outperform both BM25 and neural models alone. Our approach exploits this improved performance when training a reranker, leading to a robust reranking model. The reranker, a cross-attention neural model, is shown to be robust to different first-stage retrieval systems, achieving better performance than rerankers simply trained upon the first-stage retrievers in the multi-stage systems. We present evaluations on a supervised passage retrieval task using MS MARCO and zero-shot retrieval tasks using BEIR. The empirical results show strong performance on both evaluations.
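A minimal sketch of the hybrid scoring idea (the normalization scheme and mixing weight are illustrative assumptions, not HYRR's exact recipe): blended BM25 and dense-retriever scores rank candidates, and such hybrid rankings can then supply training examples for the cross-attention reranker.

```python
import numpy as np

def hybrid_scores(bm25_scores, dense_scores, alpha=0.5):
    """Blend min-max-normalized BM25 and dense scores with weight alpha;
    both the normalization and alpha are illustrative choices."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return alpha * norm(bm25_scores) + (1 - alpha) * norm(dense_scores)

# Rank candidate passages for one query.
bm25 = [12.1, 8.4, 15.3, 3.2]
dense = [0.62, 0.71, 0.55, 0.20]
order = np.argsort(-hybrid_scores(bm25, dense))
print(order)                                    # indices of passages, best first
```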
Drug-Drug Interaction (DDI) prediction is an essential issue in the molecular field. Traditional methods of observing DDIs in medical experiments require plenty of resources and labor. In this paper, we present a computational model dubbed MedKGQA, based on Graph Neural Networks, that automatically predicts DDIs after reading multiple medical documents in the form of multi-hop machine reading comprehension. We introduce a knowledge fusion system to obtain the complete properties of drugs and proteins and exploit a graph reasoning system to infer the drugs and proteins contained in the documents. Our model significantly improves performance compared to previous state-of-the-art models on the QANGAROO MedHop dataset, obtaining a 4.5% improvement in DDI prediction accuracy.
Evaluating automatically-generated text summaries is a challenging task. While there have been many interesting approaches, they still fall short of human evaluations. We present RISE, a new approach for evaluating summaries by leveraging techniques from information retrieval. RISE is first trained as a retrieval task using a dual-encoder retrieval setup, and can then be subsequently utilized for evaluating a generated summary given an input document, without gold reference summaries. RISE is especially well suited when working on new datasets where one may not have reference summaries available for evaluation. We conduct comprehensive experiments on the SummEval benchmark (Fabbri et al., 2021) and the results show that RISE has higher correlation with human evaluations compared to many past approaches to summarization evaluation. Furthermore, RISE also demonstrates data-efficiency and generalizability across languages.
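As a minimal sketch of reference-free, dual-encoder scoring (the placeholder embeddings below stand in for outputs of the retrieval-trained encoders; this is not the RISE model itself), a generated summary is scored directly against its input document:

```python
import numpy as np

def dual_encoder_score(doc_embedding, summary_embedding):
    """Reference-free score: cosine similarity between a document encoding
    and a summary encoding from a (hypothetical) trained dual encoder."""
    d = np.asarray(doc_embedding, dtype=float)
    s = np.asarray(summary_embedding, dtype=float)
    return float(d @ s / (np.linalg.norm(d) * np.linalg.norm(s)))

# Placeholder vectors; in practice both sides come from the retrieval-trained model.
rng = np.random.default_rng(0)
doc_vec, summary_vec = rng.random(128), rng.random(128)
print(dual_encoder_score(doc_vec, summary_vec))
```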
Large language models (LLMs) have shown impressive results across a variety of tasks while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in this setting. We propose and study Attributed QA as a key first step in the development of attributed LLMs. We develop a reproducible evaluation framework for the task, using human annotations as a gold standard and a correlated automatic metric that we show is suitable for development settings. We describe and benchmark a broad set of architectures for the task. Our contributions give some concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and give some hints as to how to address a third key question (How to build LLMs with attribution?).
Current methods for few-shot action recognition mainly fall into the metric learning framework following ProtoNet. However, they either ignore the effect of representative prototypes or fail to adequately enhance the prototypes with multimodal information. In this work, we propose a novel Multimodal Prototype-Enhanced Network (MORN) that uses the semantic information of label texts as multimodal information to enhance prototypes, with two modality flows. A CLIP visual encoder is introduced in the visual flow, and visual prototypes are computed by the Temporal-Relational CrossTransformer (TRX) module. A frozen CLIP text encoder is introduced in the text flow, and a semantic-enhanced module is used to enhance text features; after inflation, text prototypes are obtained. The final multimodal prototypes are then computed by a multimodal prototype-enhanced module. Besides, there exist no evaluation metrics to assess the quality of prototypes. To the best of our knowledge, we are the first to propose a prototype evaluation metric, called Prototype Similarity Difference (PRIDE), which is used to evaluate the performance of prototypes in discriminating different categories. We conduct extensive experiments on four popular datasets. MORN achieves state-of-the-art results on HMDB51, UCF101, Kinetics and SSv2. MORN also performs well on PRIDE, and we explore the correlation between PRIDE and accuracy.
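Since PRIDE's exact formulation is not given here, the sketch below shows one plausible reading of a prototype similarity difference: mean similarity of samples to their own class prototype minus mean similarity to the other prototypes; treat the names and the definition itself as assumptions rather than the paper's metric.

```python
import numpy as np

def pride(features, labels, prototypes):
    """One plausible 'prototype similarity difference': average cosine
    similarity of samples to their own class prototype minus the average
    similarity to the other prototypes. The paper's definition may differ."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T                                  # (num_samples, num_classes)
    own = sims[np.arange(len(labels)), labels]
    mask = np.ones_like(sims, dtype=bool)
    mask[np.arange(len(labels)), labels] = False
    other = sims[mask].reshape(len(labels), -1).mean(axis=1)
    return float((own - other).mean())

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 64))                   # toy sample features
labels = rng.integers(0, 5, size=10)                # toy class labels
protos = rng.normal(size=(5, 64))                   # toy class prototypes
print(pride(feats, labels, protos))
```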