Causal learning has attracted much attention in recent years because causality reveals the essential relationships between things and indicates how the world progresses. However, traditional causal learning methods face many problems and bottlenecks, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown interventions, unobserved confounders, selection bias, and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanisms by which deep learning improves causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
The concept of causality plays an important role in human cognition. In the past decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the advancement of deep learning techniques, it has been increasingly used for causal inference on counterfactual data. Typically, deep causal models map the characteristics of covariates to a representation space and then design various objective functions to estimate counterfactual outcomes unbiasedly based on different optimization methods. This paper focuses on a survey of deep causal models, and its core contributions are as follows: 1) we provide relevant metrics under multiple treatments and continuous-dose treatments; 2) we present a comprehensive overview of deep causal models from the perspectives of temporal development and method classification; 3) we provide a detailed and comprehensive classification and analysis of relevant datasets and source code.
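To make the representation-mapping recipe above concrete, here is a minimal sketch of a shared-encoder, two-headed estimator in the spirit of the deep causal models the survey covers (TARNet-style); the layer sizes, names, and toy data are illustrative assumptions, not any specific surveyed model.

```python
# A minimal sketch of the shared-representation idea behind many deep causal
# models: covariates are mapped to a representation space, and two heads
# predict the potential outcomes under treatment and control.
import torch
import torch.nn as nn

class TwoHeadEstimator(nn.Module):
    def __init__(self, x_dim: int, rep_dim: int = 64):
        super().__init__()
        # Shared encoder Phi(x): covariates -> representation space
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, rep_dim), nn.ReLU(),
            nn.Linear(rep_dim, rep_dim), nn.ReLU(),
        )
        # Separate heads for the two potential outcomes mu_0(x), mu_1(x)
        self.head0 = nn.Sequential(nn.Linear(rep_dim, rep_dim), nn.ReLU(), nn.Linear(rep_dim, 1))
        self.head1 = nn.Sequential(nn.Linear(rep_dim, rep_dim), nn.ReLU(), nn.Linear(rep_dim, 1))

    def forward(self, x, t):
        rep = self.encoder(x)
        y0, y1 = self.head0(rep), self.head1(rep)
        # Factual prediction: pick the head matching the observed treatment
        y_factual = torch.where(t.bool(), y1, y0)
        return y_factual, y0, y1

# Training minimizes factual error; the estimated individual effect is y1 - y0.
model = TwoHeadEstimator(x_dim=25)
x, t, y = torch.randn(128, 25), torch.randint(0, 2, (128, 1)), torch.randn(128, 1)
y_hat, y0, y1 = model(x, t)
loss = nn.functional.mse_loss(y_hat, y)
loss.backward()
print("estimated ATE:", (y1 - y0).mean().item())
```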
This invited review discusses causal learning in the context of robotic intelligence. The paper introduces the psychological findings on causal learning in human cognition, then presents the traditional statistical solutions for causal discovery and causal inference. It reviews recent deep causal learning algorithms with a focus on their architectures and the benefits of using deep nets, and discusses the gap between deep causal learning and the needs of robotic intelligence.
Causal inference is the process of using assumptions, study designs, and estimation strategies to draw conclusions about the causal relationships between variables based on data. This allows researchers to better understand the underlying mechanisms at work in complex systems and make more informed decisions. In many settings, we may not fully observe all the confounders that affect both the treatment and outcome variables, complicating the estimation of causal effects. To address this problem, a growing literature in both causal inference and machine learning proposes to use Instrumental Variables (IV). This paper serves as the first effort to systematically and comprehensively introduce and discuss the IV methods and their applications in both causal inference and machine learning. First, we provide the formal definition of IVs and discuss the identification problem of IV regression methods under different assumptions. Second, we categorize the existing work on IV methods into three streams according to the focus of the proposed methods, including two-stage least squares with IVs, control function with IVs, and evaluation of IVs. For each stream, we present both the classical causal inference methods and recent developments in the machine learning literature. Then, we introduce a variety of applications of IV methods in real-world scenarios and provide a summary of the available datasets and algorithms. Finally, we summarize the literature, discuss the open problems, and suggest promising future research directions for IV methods and their applications. We also develop a toolkit of the IV methods reviewed in this survey at https://github.com/causal-machine-learning-lab/mliv.
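As a concrete illustration of the first stream, here is a minimal two-stage least squares (2SLS) sketch on synthetic data; the variable names and data-generating process are invented for illustration, and this is not the API of the linked mliv toolkit.

```python
# 2SLS: stage 1 projects the treatment onto the instrument, stage 2 regresses
# the outcome on the projected treatment, removing confounding bias.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                       # instrument: affects t, not y directly
u = rng.normal(size=n)                       # unobserved confounder
t = 0.8 * z + u + rng.normal(size=n)         # treatment, confounded by u
y = 2.0 * t + 3.0 * u + rng.normal(size=n)   # outcome; true causal effect = 2.0

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Naive OLS of y on t is biased by the confounder u.
beta_ols = ols(np.column_stack([ones, t]), y)[1]
# Stage 1: project treatment onto the instrument.
t_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), t)
# Stage 2: regress outcome on the projected treatment.
beta_2sls = ols(np.column_stack([ones, t_hat]), y)[1]
print(f"OLS (biased): {beta_ols:.2f}, 2SLS: {beta_2sls:.2f}")  # ~3.1 vs ~2.0
```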
Understanding causality helps construct interventions to achieve specific goals and enables prediction under interventions. With the growing importance of learning causality, causal discovery has evolved from inferring potential causal structure from observational data using traditional methods into a pattern-recognition field involving deep learning. The rapid accumulation of massive data has promoted the emergence of causal search methods with excellent scalability. Existing summaries of causal discovery methods focus mainly on traditional constraint-based, score-based, and FCM-based approaches; they lack a complete classification and elaboration of deep learning-based methods, and also lack perspectives that consider and explore causal discovery paradigms from the viewpoint of variables. Therefore, we divide possible causal discovery tasks into three types according to the variable paradigm and give definitions for the three tasks; we define and instantiate the relevant datasets for each task and the final causal model constructed, and then review the main causal discovery methods for the different tasks. Finally, we propose roadmaps from different perspectives to address the current research gaps in the field of causal discovery and point out future research directions.
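As a toy instance of the FCM (functional causal model) family mentioned above, the sketch below decides the direction of a bivariate additive noise model by comparing how independent the regression residuals are from the putative cause; a simple correlation proxy stands in here for a proper independence test such as HSIC.

```python
# Additive noise model idea: for y = f(x) + e with e independent of x, the fit
# in the causal direction leaves residuals (nearly) independent of the input,
# while the anti-causal fit does not.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=2000)
y = x ** 3 + rng.normal(scale=0.3, size=2000)   # ground truth: x -> y

def dependence_score(cause, effect, degree=5):
    resid = effect - np.polyval(np.polyfit(cause, effect, degree), cause)
    # crude dependence proxy: correlation of squared cause and squared residuals
    return abs(np.corrcoef(cause ** 2, resid ** 2)[0, 1])

s_xy, s_yx = dependence_score(x, y), dependence_score(y, x)
print("x->y" if s_xy < s_yx else "y->x", f"(scores: {s_xy:.3f} vs {s_yx:.3f})")
```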
Although understanding and characterizing causal effects have become essential in observational studies, it is challenging when the confounders are high-dimensional. In this article, we develop a general framework, CausalEGM, for estimating causal effects by encoding generative modeling, which can be applied in both binary and continuous treatment settings. Under the potential outcome framework with unconfoundedness, we establish a bidirectional transformation between the high-dimensional confounder space and a low-dimensional latent space where the density is known (e.g., multivariate normal distribution). Through this, CausalEGM simultaneously decouples the dependencies of confounders on both treatment and outcome and maps the confounders to the low-dimensional latent space. By conditioning on the low-dimensional latent features, CausalEGM can estimate the causal effect for each individual or the average causal effect within a population. Our theoretical analysis shows that the excess risk for CausalEGM can be bounded through empirical process theory. Under an assumption on encoder-decoder networks, the consistency of the estimate can be guaranteed. In a series of experiments, CausalEGM demonstrates superior performance over existing methods for both binary and continuous treatments. Specifically, we find CausalEGM to be substantially more powerful than competing methods in the presence of large sample sizes and high-dimensional confounders. The CausalEGM software is freely available at https://github.com/SUwonglab/CausalEGM.
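A toy sketch of the adjust-on-latent idea can use PCA as a stand-in for CausalEGM's learned encoder-decoder map: compress high-dimensional confounders to a low-dimensional summary and estimate the effect conditional on it. This is an assumption-laden simplification for illustration, not the CausalEGM software.

```python
# Compress high-dimensional confounders to a low-dimensional summary, then
# estimate the treatment effect conditioning on that summary.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, d = 5000, 100
latent = rng.normal(size=(n, 2))                      # true low-dim confounder
V = rng.normal(size=(2, d))
X = latent @ V + 0.1 * rng.normal(size=(n, d))        # observed high-dim covariates
t = (latent[:, 0] + rng.normal(size=n) > 0).astype(float)       # confounded treatment
y = 1.5 * t + latent[:, 0] + latent[:, 1] + rng.normal(size=n)  # true ATE = 1.5

z = PCA(n_components=2).fit_transform(X)              # low-dim summary of X
# Outcome regression adjusting for the latent summary; the coefficient on t
# approximates the average treatment effect under unconfoundedness.
model = LinearRegression().fit(np.column_stack([t, z]), y)
print(f"estimated ATE: {model.coef_[0]:.2f}")         # close to 1.5
```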
Causality is a fundamental part of the scientific endeavor to understand the world. Unfortunately, causality remains taboo in much of psychology and social science. Motivated by a growing number of recommendations on the importance of adopting causal approaches to research, we reformulate the typical approach to research in psychology to harmonize inevitably causal theories with the rest of the research pipeline. We present a new process which begins with the incorporation of techniques from the convergence of causal discovery and machine learning for the development, validation, and transparent formal specification of theories. We then present methods for reducing the complexity of the fully specified theoretical model into the fundamental submodel relevant to a given target hypothesis. From here, we establish whether the quantity of interest is estimable from the data, and if so, propose the use of semi-parametric machine learning methods to estimate the causal effects. The overall goal is to introduce a new research pipeline that can (a) facilitate scientific inquiry compatible with the desire to test causal theories, (b) encourage the transparent representation of our theories as unambiguous mathematical objects, (c) tie our statistical models to specific attributes of the theory, thus reducing the under-specification problems frequently arising from the theory-to-model gap, and (d) yield results and estimates that are causally meaningful and reproducible. The process is demonstrated through didactic examples with real-world data, and we conclude with a summary and discussion.
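The "is the quantity estimable?" step can be illustrated with a backdoor-criterion check via d-separation. The graph below is an invented example, and the NetworkX function name is version-dependent (d_separated in NetworkX 2.8-3.x; newer releases name it is_d_separator).

```python
# Backdoor check: remove the treatment's outgoing edges, then test whether the
# candidate adjustment set d-separates treatment from outcome.
import networkx as nx

g = nx.DiGraph([("confounder", "treatment"), ("confounder", "outcome"),
                ("treatment", "mediator"), ("mediator", "outcome")])
g_backdoor = g.copy()
g_backdoor.remove_edges_from(list(g.out_edges("treatment")))
# True: {confounder} blocks all backdoor paths, so the effect is identifiable
# by adjusting for it.
print(nx.d_separated(g_backdoor, {"treatment"}, {"outcome"}, {"confounder"}))
```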
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, and policy making. However, research on causal discovery has evolved separately from inference methods, preventing the direct combination of methods from both fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a single flow-based non-linear additive noise model that takes in observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground-truth causal graph under standard causal discovery assumptions. Motivated by application impact, we extend the model to heterogeneous, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Our results show the competitive performance of DECI when compared to relevant baselines for both causal discovery and (C)ATE estimation, in over a thousand experiments on synthetic datasets and causal machine learning benchmarks across data types and levels of missingness.
Causal inference, which enables the estimation of treatment effects (i.e., the causal effects of treatments on outcomes), benefits decision making in various domains. One fundamental challenge in this research is the treatment bias in observational data. To increase the validity of observational studies for causal inference, representation-based methods, as the state of the art, have shown superior performance in treatment effect estimation. Most representation-based methods assume that all observed covariates are pre-treatment (i.e., not affected by the treatment) and learn balanced representations of these observed covariates to estimate treatment effects. Unfortunately, this assumption is often too strict a requirement in practice, as some covariates are changed by the intervention of the treatment (i.e., they are post-treatment). In contrast, the balanced representations learned from such covariates consequently bias the treatment effect estimation.
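A minimal sketch of the balancing penalty that such representation-based methods add to the factual prediction loss is a (linear-kernel) MMD between the treated and control representations; the form below is illustrative rather than any particular paper's loss.

```python
# A differentiable balancing penalty: squared distance between the mean
# treated and mean control representations. A real method would add this,
# weighted, to the factual prediction loss of the outcome model.
import torch

def linear_mmd(rep: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Squared distance between mean treated and mean control representations."""
    mask = t.squeeze(-1) > 0.5
    treated, control = rep[mask], rep[~mask]
    return ((treated.mean(dim=0) - control.mean(dim=0)) ** 2).sum()

rep = torch.randn(128, 64, requires_grad=True)   # Phi(x) from some encoder
t = torch.randint(0, 2, (128, 1)).float()
penalty = linear_mmd(rep, t)
penalty.backward()                               # differentiable, hence trainable
print(f"imbalance: {penalty.item():.3f}")
```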
Undertaking causal inference with observational data is incredibly useful across a wide range of tasks, including the development of medical treatments, advertising and marketing, and policy making. There are two significant challenges associated with causal inference from observational data: treatment assignment heterogeneity (i.e., differences between the treated and untreated groups) and the absence of counterfactual data (i.e., not knowing what would have happened if an individual who did receive treatment had instead not been treated). We address these two challenges by combining structured inference and targeted learning. In terms of structure, we factorize the joint distribution into risk, confounding, instrumental, and miscellaneous factors; in terms of targeted learning, we apply a regularizer derived from the influence curve in order to reduce residual bias. An ablation study is undertaken, and an evaluation on benchmark datasets demonstrates that TVAE achieves competitive and state-of-the-art performance.
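The influence-curve machinery that targeted learning builds on can be illustrated with the well-known AIPW (doubly robust) estimator of the ATE, which combines outcome models and a propensity model; the sketch below shows only this plug-in quantity on invented data, not TVAE's regularizer itself.

```python
# Efficient influence curve for the ATE: propensity-reweighted residuals plus
# the plug-in difference of outcome predictions, averaged over the sample.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=(n, 5))
e_true = 1 / (1 + np.exp(-x[:, 0]))
t = rng.binomial(1, e_true)
y = 2.0 * t + x[:, 0] + rng.normal(size=n)                  # true ATE = 2.0

e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]   # propensity e(x)
q = LinearRegression().fit(np.column_stack([t, x]), y)      # outcome model
q1 = q.predict(np.column_stack([np.ones(n), x]))            # prediction under t=1
q0 = q.predict(np.column_stack([np.zeros(n), x]))           # prediction under t=0

aipw = q1 - q0 + t * (y - q1) / e - (1 - t) * (y - q0) / (1 - e)
print(f"AIPW ATE estimate: {aipw.mean():.2f}")              # close to 2.0
```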
In many fields of scientific research and real-world applications, unbiased estimation of causal effects from non-experimental data is crucial for understanding the mechanisms underlying the data and for decisions on effective responses or interventions. A great deal of research has been conducted on this challenging problem from different angles. For causal effect estimation from data, assumptions such as the Markov property, faithfulness, and causal sufficiency are always made, and under these assumptions, full knowledge such as a set of covariates or the underlying causal graph is still required. A practical challenge is that in many applications no such full knowledge is available, or only partial knowledge is available. In recent years, research has emerged that uses search strategies based on graphical causal models to discover useful knowledge from data for causal effect estimation, under some mild assumptions, and has shown promise in tackling the practical challenge. In this survey, we review these methods and focus on the challenges faced by data-driven approaches. We discuss the assumptions, strengths, and limitations of the data-driven methods. We hope this review will motivate more researchers to design better data-driven methods based on graphical causal modelling to address the challenging problem of causal effect estimation.
Data-driven societal event forecasting methods exploit relevant historical information to predict future events. These methods rely on historical labeled data and cannot accurately predict events when data are limited or of poor quality. Studying the causal effects between events goes beyond correlation analysis and can contribute to more robust event forecasting. However, incorporating causality analysis into data-driven event forecasting is challenging due to several factors: (i) events occur in a complex and dynamic social environment, where many unobserved variables, i.e., hidden confounders, affect both potential causes and outcomes; and (ii) given spatiotemporal non-independent and identically distributed (non-IID) data, modeling hidden confounders for accurate causal effect estimation is not trivial. In this work, we introduce a deep learning framework that integrates causal effect estimation into event forecasting. We first study the problem of individual treatment effect (ITE) estimation from observational event data with spatiotemporal attributes and propose a novel causal inference model to estimate ITEs. We then incorporate the learned event-related causal information into event prediction as prior knowledge. Two robust learning modules, a feature reweighting module and an approximate constraint loss, are introduced to enable prior knowledge injection. We evaluate the proposed causal inference model on real-world event datasets by feeding the learned causal information into different deep learning methods, and we validate the effectiveness of the proposed robust learning modules in event prediction. The experimental results demonstrate the strengths of the proposed causal inference model for ITE estimation in societal events and showcase the beneficial properties of the robust learning modules in societal event forecasting.
Addressing the problem of fairness is crucial for safely using machine learning algorithms to support decisions with a critical impact on people's lives, such as job hiring, child maltreatment screening, disease diagnosis, and loan granting. Several notions of fairness have been defined and examined in the past decade, such as statistical parity and equalized odds. The most recent fairness notions, however, are causality-based and reflect the now widely accepted idea that using causality is necessary to appropriately address the problem of fairness. This paper examines an exhaustive list of causality-based fairness notions and studies their applicability in real-world scenarios. As most causality-based fairness notions are defined in terms of non-observable quantities (e.g., interventions and counterfactuals), their deployment in practice requires computing or estimating those quantities using observational data. This paper offers a comprehensive report of the different approaches to inferring causal quantities from observational data, including identifiability (Pearl's SCM framework) and estimation (the potential outcome framework). The main contributions of this survey are (1) a guideline to help select a suitable fairness notion given a specific real-world scenario, and (2) a ranking of the fairness notions according to Pearl's ladder of causation, indicating how difficult each notion is to deploy in practice.
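To see why rung-three (counterfactual) notions are the hardest to deploy, the toy abduction-action-prediction example below computes a per-individual counterfactual on an invented linear SCM; real deployments rarely have access to such structural equations, which is exactly the difficulty the ranking captures.

```python
# Abduction-action-prediction on a toy linear SCM with protected attribute A:
# A -> X -> Y and A -> Y, with exogenous noise u_x, u_y.
import numpy as np

rng = np.random.default_rng(0)
a = rng.binomial(1, 0.5, size=1000).astype(float)
u_x, u_y = rng.normal(size=1000), rng.normal(size=1000)
x = 1.0 * a + u_x
y = 2.0 * x + 0.5 * a + u_y

# Counterfactual "Y had A been 0" for every individual:
u_x_hat = x - 1.0 * a            # 1. Abduction: recover each unit's noise
u_y_hat = y - 2.0 * x - 0.5 * a
x_cf = 1.0 * 0 + u_x_hat         # 2. Action: set A := 0
y_cf = 2.0 * x_cf + 0.5 * 0 + u_y_hat   # 3. Prediction: re-run with stored noise
# Counterfactual fairness would compare decisions under y vs. y_cf per unit.
print(f"per-unit effect of A on Y: {(y - y_cf)[a == 1].mean():.2f}")  # exactly 2.5
```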
Discovering new medicines is the quest to find and establish cause-and-effect relationships. As an emerging approach that leverages human knowledge and creativity, data, and machine intelligence, causal inference holds the promise of reducing cognitive bias and improving decision making in drug discovery. Although it has been applied across the value chain, the concepts and practice of causal inference remain obscure to many practitioners. This article provides a non-technical introduction to causal inference, reviews its recent applications, and discusses the opportunities and challenges of adopting the causal language in drug discovery and development.
Massive data and innovative algorithms have made data-driven modeling a popular technique in modern industry. Among various data-driven methods, latent variable models (LVMs) and their counterparts account for a major share and play vital roles in many industrial modeling areas. LVMs can generally be divided into classical LVMs based on statistical learning and deep LVMs (DLVMs) based on neural networks. We first discuss the definitions, theory, and applications of classical LVMs, serving both as a comprehensive tutorial and as a brief application survey of classical LVMs. Then we give a thorough introduction to current mainstream DLVMs, with emphasis on their theories and model architectures, followed shortly by a detailed survey of the industrial applications of DLVMs. These two types of LVMs have distinct advantages and disadvantages. Specifically, classical LVMs have concise principles and good interpretability, but their model capacity cannot address complicated tasks; neural-network-based DLVMs have sufficient model capacity to achieve satisfactory performance in complicated scenarios, but at the expense of model interpretability and efficiency. Aiming to combine the virtues and mitigate the drawbacks of these two types of LVMs, as well as to explore non-neural-network ways of building deep models, we propose a novel concept called the lightweight deep LVM (LDLVM). After presenting this new idea, the article first elaborates on the motivation and connotation of LDLVMs, then provides two novel LDLVMs, thoroughly describing their principles, architectures, and merits. Finally, prospects and opportunities are discussed, including important open questions and possible research directions.
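As a concrete example of a classical LVM of the kind the tutorial part covers, factor analysis infers a low-dimensional latent explanation of observed variables; the sizes and synthetic data below are illustrative.

```python
# Classical LVM example: factor analysis assumes observed = loadings @ latent
# + noise and recovers a low-dimensional latent representation.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 3))                  # latent factors
W = rng.normal(size=(3, 20))                    # loading matrix
x = z @ W + 0.2 * rng.normal(size=(1000, 20))   # observed variables

fa = FactorAnalysis(n_components=3).fit(x)
z_hat = fa.transform(x)                         # inferred latent representation
print(z_hat.shape, fa.components_.shape)        # (1000, 3) (3, 20)
```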
Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary assumptions of these models is independent and identical distribution, which suggests that the train and test data are sampled from the same distribution. However, this assumption seldom holds in the real world due to distribution shifts. As a result, models that rely on this assumption exhibit poor generalization capabilities. Over recent years, dedicated efforts have been made to improve the generalization capabilities of these models, collectively known as domain generalization methods. The primary idea behind these methods is to identify stable features or mechanisms that remain invariant across different distributions. Many generalization approaches employ causal theories to describe invariance, since causality and invariance are inextricably intertwined. However, current surveys deal with causality-aware domain generalization methods at a very high level. Furthermore, we argue that the methods can be categorized based on how causality is leveraged and in which part of the model pipeline it is used. To this end, we categorize the causal domain generalization methods into three categories: (i) invariance via causal data augmentation methods, which are applied during the data pre-processing stage; (ii) invariance via causal representation learning methods, which are utilized during the representation learning stage; and (iii) invariance via transferring causal mechanisms methods, which are applied during the classification stage of the pipeline. Furthermore, this survey includes in-depth insights into benchmark datasets and code repositories for domain generalization methods. We conclude the survey with insights and discussions on future directions.
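A toy sketch of category (i) follows: intervene on a spurious (non-causal) feature while keeping the causal feature and label fixed, so that a classifier's reliance on the spurious feature shrinks. The data-generating process is invented for illustration.

```python
# Causal data augmentation: do(spurious := noise) breaks the spurious
# correlation while leaving the label-causing feature untouched.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
causal = rng.normal(size=n)
y = (causal + 0.5 * rng.normal(size=n) > 0).astype(int)
spurious = y + 0.5 * rng.normal(size=n)          # correlated with y in this environment
x_train = np.column_stack([causal, spurious])

x_aug = np.column_stack([causal, rng.normal(size=n)])   # intervened copies
x_all, y_all = np.vstack([x_train, x_aug]), np.concatenate([y, y])

w_orig = LogisticRegression().fit(x_train, y).coef_[0]
w_aug = LogisticRegression().fit(x_all, y_all).coef_[0]
print(f"spurious-feature weight: {w_orig[1]:.2f} -> {w_aug[1]:.2f}")  # shrinks toward 0
```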
Graph machine learning has been extensively studied in both academia and industry. Although booming with a vast number of emerging methods and techniques, most of the literature is built on the in-distribution hypothesis, i.e., testing and training graph data are identically distributed. However, this in-distribution hypothesis can hardly be satisfied in many real-world graph scenarios where the model performance substantially degrades when there exist distribution shifts between testing and training graph data. To solve this critical problem, out-of-distribution (OOD) generalization on graphs, which goes beyond the in-distribution hypothesis, has made great progress and attracted ever-increasing attention from the research community. In this paper, we comprehensively survey OOD generalization on graphs and present a detailed review of recent advances in this area. First, we provide a formal problem definition of OOD generalization on graphs. Second, we categorize existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy, based on their positions in the graph machine learning pipeline, followed by detailed discussions for each category. We also review the theories related to OOD generalization on graphs and introduce the commonly used graph datasets for thorough evaluations. Finally, we share our insights on future research directions. This paper is the first systematic and comprehensive review of OOD generalization on graphs, to the best of our knowledge.
Developing and implementing AI-based solutions helps state and federal government agencies, research institutions, and commercial companies enhance decision-making processes, automate chains of operations, and reduce the consumption of natural and human resources. At the same time, most AI approaches used in practice can only be represented as "black boxes" and suffer from a lack of transparency, which can eventually lead to unexpected outcomes and undermine trust in such systems. Therefore, it is crucial not only to develop effective and robust AI systems, but also to ensure that their internal processes are explainable and fair. Our goal in this chapter is to introduce the topic of designing assurance methods for AI systems with high-impact decisions, using the example of the technology sector of the US economy. By providing a causal experiment on a technology-economics dataset, we explain how these fields can benefit from revealing causal relationships between the key metrics in the dataset. Several causal inference approaches and AI assurance techniques are reviewed, and the transformation of the data into a graph-structured dataset is demonstrated.
As an important problem in causal inference, we discuss the estimation of treatment effects (TEs). Representing the confounder as a latent variable, we propose Intact-VAE, a new variant of the variational autoencoder (VAE), motivated by the prognostic score that is sufficient for identifying TEs. Our VAE also naturally gives representations balanced for treatment groups, using its prior. Experiments on (semi-)synthetic datasets show state-of-the-art performance under diverse settings, including unobserved confounding. Based on the identifiability of our model, we prove the identification of TEs under unconfoundedness and discuss (possible) extensions to harder settings.
Visual representation learning is ubiquitous in various real-world applications, including visual comprehension, video understanding, multi-modal analysis, human-computer interaction, and urban computing. Due to the emergence of huge amounts of multi-modal heterogeneous spatial/temporal/spatial-temporal data in the big data era, the lack of interpretability, robustness, and out-of-distribution generalization is becoming a challenge for existing visual models. Most existing methods tend to fit the original data/variable distributions while ignoring the essential causal relations behind the multi-modal knowledge, and there is a lack of unified guidance and analysis of why modern visual representation learning methods easily collapse into data bias and have limited generalization and cognitive abilities. Inspired by the strong inference ability of human-level agents, recent years have therefore witnessed great effort in developing causal reasoning paradigms to realize robust representation and model learning with good cognitive ability. In this paper, we conduct a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets. The limitations of current methods and datasets are also discussed. Moreover, we propose some prospective challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in visual representation learning. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods, publicly available benchmarks, and consensus-building standards for reliable visual representation learning and related real-world applications.