Accurate segmentation is a crucial first step for analyzing the semantic information of the cardiac cycle and capturing anomalies with cardiovascular signals. However, in the field of deep semantic segmentation, models are often unilaterally confounded by individual attributes of the data. For cardiovascular signals, quasi-periodicity is the essential characteristic to be learned, regarded as the synthesis of the attributes of morphology (AM) and rhythm (AR). Our key insight is to suppress over-dependence on AM or AR during the generation of deep representations. To address this issue, we establish a structural causal model as the foundation for customizing intervention approaches for AM and AR, respectively. In this paper, we propose contrastive causal intervention (CCI) to form a novel training paradigm under a frame-level contrastive framework. The intervention can eliminate the implicit statistical bias brought by a single attribute and lead to more objective representations. We conduct comprehensive experiments under controlled conditions on QRS location and heart sound segmentation. The results show that our approach clearly improves performance by up to 0.41% for QRS location and 2.73% for heart sound segmentation. The efficiency of the proposed method generalizes to multiple databases and noisy signals.
Understanding causality helps to construct interventions that achieve specific goals and enables predictions under intervention. With the growing importance of learning causality, causal discovery has evolved from inferring latent causal structure from observational data with traditional methods into a pattern-recognition field in which deep learning is involved. The rapid accumulation of massive data has promoted the emergence of causal search methods with excellent scalability. Existing summaries of causal discovery methods focus mainly on traditional constraint-based, score-based, and FCM-based approaches; they lack a thorough classification and elaboration of deep learning-based methods, and they also lack perspectives that consider and explore causal discovery paradigms from the angle of variables. We therefore divide possible causal discovery tasks into three types according to the variable paradigm and give the definition of each task, define and instantiate the relevant datasets for each task together with the final causal model to be constructed, and then review the main causal discovery methods for the different tasks. Finally, we propose roadmaps from several perspectives for addressing the current research gaps in causal discovery and point out future research directions.
Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary assumptions of these models is the independent and identical distribution, which suggests that the train and test data are sampled from the same distribution. However, this assumption seldom holds in the real world due to distribution shifts. As a result, models that rely on this assumption exhibit poor generalization capabilities. Over the recent years, dedicated efforts have been made to improve the generalization capabilities of these models, collectively known as \textit{domain generalization methods}. The primary idea behind these methods is to identify stable features or mechanisms that remain invariant across different distributions. Many generalization approaches employ causal theories to describe invariance since causality and invariance are inextricably intertwined. However, current surveys deal with causality-aware domain generalization methods at a very high level. Furthermore, we argue that it is possible to categorize the methods based on how causality is leveraged in each method and in which part of the model pipeline it is used. To this end, we categorize the causal domain generalization methods into three categories, namely, (i) Invariance via Causal Data Augmentation methods, which are applied during the data pre-processing stage, (ii) Invariance via Causal Representation Learning methods, which are utilized during the representation learning stage, and (iii) Invariance via Transferring Causal Mechanisms methods, which are applied during the classification stage of the pipeline. Furthermore, this survey includes in-depth insights into benchmark datasets and code repositories for domain generalization methods. We conclude the survey with insights and discussions on future directions.
Segmenting the fine structure of the mouse brain on magnetic resonance (MR) images is critical for delineating morphological regions, analyzing brain function, and understanding their relationships. Compared to a single MRI modality, multimodal MRI data provide complementary tissue features that can be exploited by deep learning models, resulting in better segmentation results. However, multimodal mouse brain MRI data is often lacking, making automatic segmentation of mouse brain fine structure a very challenging task. To address this issue, it is necessary to fuse multimodal MRI data to produce distinguished contrasts in different brain structures. Hence, we propose a novel disentangled and contrastive GAN-based framework, named MouseGAN++, to synthesize multiple MR modalities from single ones in a structure-preserving manner, thus improving the segmentation performance by imputing missing modalities and multi-modality fusion. Our results demonstrate that the translation performance of our method outperforms the state-of-the-art methods. Using the subsequently learned modality-invariant information as well as the modality-translated images, MouseGAN++ can segment fine brain structures with averaged dice coefficients of 90.0% (T2w) and 87.9% (T1w), respectively, achieving around +10% performance improvement compared to the state-of-the-art algorithms. Our results demonstrate that MouseGAN++, as a simultaneous image synthesis and segmentation method, can be used to fuse cross-modality information in an unpaired manner and yield more robust performance in the absence of multimodal data. We release our method as a mouse brain structural segmentation tool for free academic usage at https://github.com/yu02019.
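The segmentation quality above is reported with the Dice coefficient; as a point of reference, a minimal NumPy sketch of the standard Dice computation for binary masks is given below (this is the generic definition, not code from the MouseGAN++ repository).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Standard Dice score for two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Example with two toy 2-D masks: 2*|A ∩ B| / (|A| + |B|) = 4 / 6 ≈ 0.667
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(f"Dice = {dice_coefficient(pred, target):.3f}")
```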
Recently, self-supervised representation learning (SSRL) has attracted much attention in computer vision, speech, natural language processing (NLP), and, more recently, other modalities, including time series from sensors. The popularity of self-supervised learning is driven by the fact that traditional models typically require large amounts of annotated data for training. Acquiring annotated data can be a difficult and costly process. Self-supervised methods have been introduced to improve the efficiency of training data by discriminatively pre-training models using supervisory signals obtained freely from the raw data. Unlike existing reviews of SSRL, which focus on methods for a single modality in the CV or NLP domains, we aim to provide the first comprehensive review of multimodal self-supervised learning methods for temporal data. To this end, we 1) provide a comprehensive categorization of existing SSRL methods, 2) introduce a generic pipeline by defining the key components of an SSRL framework, 3) compare existing models in terms of their objective functions, network architectures, and potential applications, and 4) review existing multimodal techniques in each category and across various modalities. Finally, we present existing weaknesses and future opportunities. We believe that our work offers a perspective on the requirements of SSRL in domains that use multimodal and/or temporal data.
Generalizable person re-identification (ReID) has attracted growing attention in the recent computer vision community. In this work, we construct a structural causal model among identity labels, identity-specific factors (clothing/shoe color, etc.), and domain-specific factors (background, viewpoint, etc.). Based on this causal analysis, we propose a novel Domain-Invariant Representation learning framework for generalizable person re-identification (DIR-ReID). Specifically, we first propose to disentangle the identity-specific and domain-specific feature spaces, based on which we propose an effective algorithmic implementation of backdoor adjustment, which is essentially a causal intervention on the SCM. Extensive experiments show that DIR-ReID outperforms state-of-the-art methods on large-scale domain-generalization ReID benchmarks.
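For reference, the backdoor adjustment mentioned above is the standard covariate-adjustment formula from causal inference. Writing $X$ for the identity-specific representation, $Y$ for the identity label, and $D$ for the domain-specific confounder (following the SCM described in the abstract), its generic form is

$$P(Y \mid do(X)) = \sum_{d} P(Y \mid X, D = d)\, P(D = d),$$

with the understanding that how DIR-ReID approximates this summation algorithmically is specific to the paper itself.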
Causal inference is the process of using assumptions, study designs, and estimation strategies to draw conclusions about the causal relationships between variables based on data. This allows researchers to better understand the underlying mechanisms at work in complex systems and make more informed decisions. In many settings, we may not fully observe all the confounders that affect both the treatment and outcome variables, complicating the estimation of causal effects. To address this problem, a growing literature in both causal inference and machine learning proposes to use Instrumental Variables (IV). This paper serves as the first effort to systematically and comprehensively introduce and discuss the IV methods and their applications in both causal inference and machine learning. First, we provide the formal definition of IVs and discuss the identification problem of IV regression methods under different assumptions. Second, we categorize the existing work on IV methods into three streams according to the focus of the proposed methods, including two-stage least squares with IVs, control function with IVs, and evaluation of IVs. For each stream, we present both the classical causal inference methods, and recent developments in the machine learning literature. Then, we introduce a variety of applications of IV methods in real-world scenarios and provide a summary of the available datasets and algorithms. Finally, we summarize the literature, discuss the open problems and suggest promising future research directions for IV methods and their applications. We also develop a toolkit of IV methods reviewed in this survey at https://github.com/causal-machine-learning-lab/mliv.
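As a concrete illustration of the first stream, a minimal NumPy sketch of classical two-stage least squares (2SLS) with a single instrument is given below; the variable names and simulated data are illustrative only and not taken from the surveyed toolkit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated setting: u is an unobserved confounder of treatment x and outcome y,
# z is an instrument (it affects x, but affects y only through x).
u = rng.normal(size=n)
z = rng.normal(size=n)
x = 0.8 * z + 0.6 * u + rng.normal(size=n)
y = 1.5 * x + 0.9 * u + rng.normal(size=n)   # true causal effect of x on y is 1.5

def ols(features: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Ordinary least squares via least squares on a design matrix with intercept."""
    design = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coef

# Naive OLS of y on x is biased because of the confounder u.
print("naive OLS estimate:", ols(x, y)[1])

# Stage 1: regress x on z and keep the fitted values.
stage1 = ols(z, x)
x_hat = stage1[0] + stage1[1] * z

# Stage 2: regress y on the fitted x_hat; the slope estimates the causal effect.
print("2SLS estimate:", ols(x_hat, y)[1])
```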
Self-supervised learning (SSL) has attracted much attention in deep learning research, drawing interest from both the computer vision and remote sensing communities. Despite its great success in computer vision, most of the potential of SSL in the domain of Earth observation remains locked. In this paper, we provide an introduction to and review of the concepts and latest developments in SSL for computer vision in the context of remote sensing. Furthermore, we provide a preliminary benchmark of modern SSL algorithms on popular remote sensing datasets, verifying the potential of SSL in remote sensing, together with an extended study on data augmentation. Finally, we identify promising directions for future research on SSL for Earth observation (SSL4EO) to pave the way for fruitful interaction between the two fields.
Visual representation learning is ubiquitous in various real-world applications, including visual comprehension, video understanding, multimodal analysis, human-computer interaction, and urban computing. Due to the emergence of huge amounts of multimodal heterogeneous spatial/temporal/spatio-temporal data in the big-data era, the lack of interpretability, robustness, and out-of-distribution generalization is becoming a challenge for existing visual models. Most existing methods tend to fit the original data/variable distributions and ignore the essential causal relations behind the multimodal knowledge, and there is a lack of unified guidance and analysis of why modern visual representation learning methods easily collapse into data bias and have limited generalization and cognitive abilities. Therefore, inspired by the strong inference ability of human-level agents, recent years have witnessed great efforts in developing causal reasoning paradigms to realize robust representation and model learning with good cognitive ability. In this paper, we conduct a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets. The limitations of current methods and datasets are also discussed. Moreover, we propose some prospective challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in visual representation learning. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods, publicly available benchmarks, and consensus-building standards for reliable visual representation learning and related real-world applications.
Graph machine learning has been extensively studied in both academia and industry. Although booming with a vast number of emerging methods and techniques, most of the literature is built on the in-distribution hypothesis, i.e., testing and training graph data are identically distributed. However, this in-distribution hypothesis can hardly be satisfied in many real-world graph scenarios where the model performance substantially degrades when there exist distribution shifts between testing and training graph data. To solve this critical problem, out-of-distribution (OOD) generalization on graphs, which goes beyond the in-distribution hypothesis, has made great progress and attracted ever-increasing attention from the research community. In this paper, we comprehensively survey OOD generalization on graphs and present a detailed review of recent advances in this area. First, we provide a formal problem definition of OOD generalization on graphs. Second, we categorize existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy, based on their positions in the graph machine learning pipeline, followed by detailed discussions for each category. We also review the theories related to OOD generalization on graphs and introduce the commonly used graph datasets for thorough evaluations. Finally, we share our insights on future research directions. This paper is the first systematic and comprehensive review of OOD generalization on graphs, to the best of our knowledge.
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Methods for learning general representations have been proposed even in the absence of, or with only limited, supervision. A good general representation can be fine-tuned for new target tasks using modest amounts of data, or used directly in unseen domains to achieve remarkable performance in the corresponding task. This alleviation of the data and annotation requirements offers tantalizing prospects for applications in computer vision and healthcare. In this tutorial paper, we motivate the need for disentangled representations, present key theory, and detail practical building blocks and criteria for learning such representations. We discuss applications in medical imaging and computer vision, emphasizing the choices made in exemplar key works. We conclude by presenting the remaining challenges and opportunities.
We target the task of weakly-supervised video object grounding (WSVOG), where only video-sentence annotations are available during model learning. It aims to localize objects described in the sentence to visual regions in the video, which is a fundamental capability needed in pattern analysis and machine learning. Despite recent progress, existing methods all suffer from the severe problem of spurious association, which harms the grounding performance. In this paper, starting from the definition of WSVOG, we pinpoint the spurious association from two aspects: (1) the association itself is not object-relevant but extremely ambiguous due to weak supervision, and (2) the association is unavoidably confounded by observational bias when the statistics-based matching strategy of existing methods is adopted. With this in mind, we design a unified causal framework to learn the deconfounded object-relevant association for more accurate and robust video object grounding. Specifically, we learn the object-relevant association through causal intervention from the perspective of the video data generation process. To overcome the lack of fine-grained supervision for intervention, we propose a novel spatial adversarial contrastive learning paradigm. To further remove the accompanying confounding effect within the object-relevant association, we pursue the true causality by conducting causal intervention via backdoor adjustment. Finally, the deconfounded object-relevant association is learned and optimized under the unified causal framework in an end-to-end manner. Extensive experiments on both IID and OOD testing sets of three benchmarks demonstrate its accurate and robust grounding performance against state-of-the-art methods.
The electrocardiogram (ECG) is the most common and routine diagnostic tool for monitoring the heart's electrical signals and evaluating its functionality. The human heart can suffer from a variety of diseases, including cardiac arrhythmias. Arrhythmia is an irregular heart rhythm that, in severe cases, can lead to heart stroke and can be diagnosed from ECG recordings. Since early detection of cardiac arrhythmias is of great importance, computerized and automated classification and identification of these abnormal heart signals have received much attention over the past decades. Methods: This paper introduces a light deep learning approach for high-accuracy detection of 8 different cardiac arrhythmias and normal rhythm. To leverage the deep learning method, resampling and baseline wander removal techniques are applied to the ECG signals. In this study, 500-sample ECG segments are used as model inputs. Rhythm classification is performed by an 11-layer network in an end-to-end manner, without any hand-crafted manual feature extraction. Results: To evaluate the proposed technique, ECG signals are chosen from two PhysioNet databases, the MIT-BIH Arrhythmia Database and the Long-Term AF Database. Based on a combination of a convolutional neural network (CNN) and long short-term memory (LSTM), the proposed deep learning framework shows encouraging results compared with most state-of-the-art methods. The proposed method reaches an average diagnostic accuracy of 98.24%. Conclusion: A trained model for arrhythmia classification using diverse ECG signals was successfully developed and tested. Significance: Since this work uses a light classification technique with high diagnostic accuracy compared with other notable methods, it could be successfully implemented in Holter monitor devices for arrhythmia detection.
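To make the described architecture family concrete, the sketch below shows a hypothetical CNN+LSTM classifier for 500-sample single-lead ECG segments in PyTorch; it illustrates the general design (convolutional feature extraction followed by recurrent aggregation and a class head) and is not the paper's exact 11-layer network, nor are the layer sizes taken from it.

```python
import torch
import torch.nn as nn

class CNNLSTMRhythmClassifier(nn.Module):
    """Illustrative CNN+LSTM rhythm classifier for 500-sample ECG segments."""
    def __init__(self, num_classes: int = 9):  # e.g. 8 arrhythmias + normal rhythm
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.BatchNorm1d(128), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 500) single-lead ECG segment
        feats = self.features(x)          # (batch, 128, 62) downsampled feature sequence
        feats = feats.permute(0, 2, 1)    # (batch, 62, 128) as time steps for the LSTM
        _, (h_n, _) = self.lstm(feats)    # h_n: (1, batch, 64) final hidden state
        return self.classifier(h_n[-1])   # (batch, num_classes) class logits

model = CNNLSTMRhythmClassifier()
logits = model(torch.randn(4, 1, 500))
print(logits.shape)  # torch.Size([4, 9])
```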
Causal representation learning is the task of identifying the underlying causal variables and their relations from high-dimensional observations, such as images. Recent work has shown that the causal variables can be recovered from temporal sequences of observations under the assumption that there are no instantaneous causal relations among them. In practical applications, however, our measurement or frame rate might be slower than many of the causal effects. This effectively creates "instantaneous" effects and invalidates previous identifiability results. To address this issue, we propose iCITRIS, a causal representation learning method that can handle instantaneous effects in temporal sequences when given perfect interventions with known intervention targets. iCITRIS identifies the causal factors from temporal observations, while simultaneously using a differentiable causal discovery method to learn their causal graph. In experiments on three video datasets, iCITRIS accurately identifies the causal factors and their causal graph.
Objective: Electrocardiogram (ECG) signals commonly suffer from noise interference, such as baseline wander. High-quality and high-fidelity reconstruction of ECG signals is of great significance for diagnosing cardiovascular diseases. Therefore, this paper proposes a novel ECG baseline wander and noise removal technology. Methods: We extend the diffusion model in a conditional manner specific to ECG signals, namely the Deep Score-Based Diffusion model for ECG baseline wander and noise removal (DeScoD-ECG). Moreover, we deploy a multi-shot averaging strategy that improves signal reconstruction. We conducted experiments on the QT Database and the MIT-BIH Noise Stress Test Database to verify the feasibility of the proposed method. Baseline methods are adopted for comparison, including traditional digital-filter-based and deep-learning-based methods. Results: Quantitative evaluations show that the proposed method achieved outstanding performance on four distance-based similarity metrics (sum of squared distance, maximum absolute square, percentage of root distance, and cosine similarity) with 3.771 $\pm$ 5.713 au, 0.329 $\pm$ 0.258 au, 40.527 $\pm$ 26.258 \%, and 0.926 $\pm$ 0.087, respectively. This consistently leads to at least a 20% overall improvement compared with the best baseline method. Conclusion: This paper demonstrates the state-of-the-art performance of DeScoD-ECG for ECG noise removal, which can better approximate the true data distribution and remains more stable under extreme noise corruption. Significance: This study is one of the first to extend conditional diffusion-based generative models to ECG noise removal, and DeScoD-ECG has the potential to be widely used in biomedical applications.
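For context, minimal NumPy implementations of the common textbook definitions of such distance-based similarity metrics are sketched below; the paper's exact variants (for example, its normalization for the percentage root distance) may differ from these assumed forms.

```python
import numpy as np

def ssd(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Sum of squared distances between the reference and reconstructed signals."""
    return float(np.sum((clean - denoised) ** 2))

def mad(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Maximum absolute deviation between the two signals."""
    return float(np.max(np.abs(clean - denoised)))

def prd(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Percentage root-mean-square difference (one common normalization)."""
    return float(100.0 * np.sqrt(np.sum((clean - denoised) ** 2) / np.sum(clean ** 2)))

def cosine_similarity(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Cosine similarity between the two signals viewed as vectors."""
    return float(np.dot(clean, denoised) / (np.linalg.norm(clean) * np.linalg.norm(denoised)))

# Toy usage: a clean sine wave versus its noisy version.
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(ssd(clean, noisy), mad(clean, noisy), prd(clean, noisy), cosine_similarity(clean, noisy))
```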
Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences comprise representative forms of signals that can be enhanced and analyzed for information extraction and quantification. Recent advances in artificial intelligence and machine learning are shifting research toward intelligent, data-driven signal processing. This roadmap presents a critical overview of state-of-the-art methods and applications, aiming to highlight future challenges and research opportunities for next-generation measurement systems. It covers a broad spectrum of topics, ranging from basic to industrial research, organized in concise thematic sections that reflect the current trends and impact of future developments in each research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.
The ability to reliably estimate physiological signals from video is a powerful tool for low-cost, pre-clinical health monitoring. In this work, we propose a new approach to remote photoplethysmography (rPPG) -- the measurement of blood volume changes from observations of a person's face or skin. Similar to current state-of-the-art methods for rPPG, we apply neural networks to learn deep representations that are invariant to nuisance image variation. In contrast to such methods, we employ a fully self-supervised training approach that has no reliance on expensive ground-truth physiological training data. Our proposed method uses contrastive learning with priors on the frequency and temporal smoothness of the signal of interest. We evaluate our approach on four rPPG datasets, showing that comparable or better results can be achieved compared with recent supervised deep learning methods, but without using any annotations. In addition, we incorporate a learned saliency resampling module into both our unsupervised approach and the supervised baseline. We show that by allowing the model to learn where to sample the input image, we can reduce the need for hand-engineered features while providing some interpretability into the model's behavior and possible failure modes. We release code for our complete training and evaluation pipeline to encourage reproducible progress in this exciting new direction.
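To make the idea of a frequency prior concrete, the sketch below shows one plausible formulation (an assumption for illustration, not the paper's implementation): penalizing the fraction of spectral power that a predicted rPPG waveform places outside a physiologically plausible heart-rate band.

```python
import numpy as np

def bandlimit_penalty(signal: np.ndarray, fs: float,
                      low_hz: float = 0.66, high_hz: float = 3.0) -> float:
    """Fraction of spectral power outside a plausible heart-rate band (~40-180 bpm).

    A self-supervised objective can minimize this quantity so that the predicted
    waveform concentrates its energy at heart-rate frequencies.
    """
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    total = spectrum.sum() + 1e-8
    return float(spectrum[~in_band].sum() / total)

fs = 30.0                                   # camera frame rate in Hz
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)         # ~72 bpm pseudo-pulse
print(bandlimit_penalty(pulse, fs))         # close to 0: power lies inside the band
print(bandlimit_penalty(np.random.default_rng(0).normal(size=t.size), fs))  # much larger
```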
With the introduction of deep learning (DL), the performance of commonly used electrocardiogram (ECG) diagnosis models has improved. However, the effect of various combinations of multiple DL components and/or data augmentation techniques on the diagnosis has not yet been fully studied. This study proposes an ensemble-based multi-view learning approach with an ECG augmentation technique that achieves higher performance than conventional 12-class ECG diagnosis methods. The data analysis results show that the proposed model reports an F1 score of 0.840, which outperforms existing state-of-the-art methods in the literature.
Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a "dog" can be seen, heard, and felt). We investigate the classic hypothesis that a powerful representation is one that models view-invariant factors. We study this hypothesis under the framework of multiview contrastive learning, where we learn a representation that aims to maximize mutual information between different views of the same scene but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics. Our approach achieves state-of-the-art results on image and video unsupervised learning benchmarks.
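A minimal PyTorch sketch of the kind of two-view contrastive (InfoNCE-style) objective underlying this idea is given below; it is a generic formulation for illustration, not the authors' exact loss or released code.

```python
import torch
import torch.nn.functional as F

def infonce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Two-view InfoNCE: embeddings of the same scene are positives, all others negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (batch, batch) cosine-similarity matrix
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    # Symmetrize over the two view orderings.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage: 8 scenes, two views each, 128-dim embeddings from some encoder.
z_view1, z_view2 = torch.randn(8, 128), torch.randn(8, 128)
print(infonce_loss(z_view1, z_view2).item())
```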