A new method for estimating the conditional average treatment effect (CATE) is proposed. It is called TNW-CATE (Trainable Nadaraya-Watson regression for CATE) and is based on the assumption that the number of controls is rather large while the number of treatments is small. TNW-CATE uses Nadaraya-Watson regression to predict outcomes for patients in the control and treatment groups. The main idea behind TNW-CATE is to train the kernels of the Nadaraya-Watson regression by using a weight-sharing neural network of a specific form. The network is trained on controls and replaces the standard kernels with a set of neural subnetworks with shared parameters, such that each subnetwork implements a trainable kernel while the whole network implements the Nadaraya-Watson estimator. The network memorizes how the feature vectors are located in the feature space. The proposed approach is similar to transfer learning in the case where the domains of the source and target data are similar but the tasks are different. Various numerical simulation experiments illustrate TNW-CATE and compare it with the well-known T-learner, S-learner, and X-learner for several types of control and treatment outcome functions. The code implementing TNW-CATE is available at https://github.com/stasychbr/tnw-cate.
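The estimator that TNW-CATE builds on can be sketched in a few lines. Below is a minimal NumPy implementation of the plain Nadaraya-Watson regression with a fixed Gaussian kernel; in the actual method this fixed kernel is replaced by a trained weight-sharing subnetwork, which is not reproduced here. The function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def nadaraya_watson(x_query, X_train, y_train, bandwidth=1.0):
    """Nadaraya-Watson estimate: kernel-weighted average of training outcomes."""
    # Gaussian kernel between the query point and every training point.
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w = k / k.sum()  # attention-like weights that sum to one
    return np.dot(w, y_train)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # "controls": plentiful
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # toy outcome function

pred = nadaraya_watson(np.zeros(2), X, y)      # estimate at the origin
```

As the bandwidth shrinks, the weights concentrate on the nearest training point, which is the degenerate behavior the trainable kernel is meant to improve on.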
A new attention-based model for the gradient boosting machine (GBM), called AGBoost (Attention-based Gradient Boosting), is proposed for solving regression problems. The main idea behind the proposed AGBoost model is to assign attention weights with trainable parameters to the iterations of the GBM, under the condition that decision trees are the base learners. The attention weights are determined by exploiting properties of decision trees and by using Huber's contamination model, which provides an interesting linear dependence between the trainable parameters of the attention and the attention weights. This peculiarity makes it possible to train the attention weights by solving a standard quadratic optimization problem with linear constraints. The attention weights also depend on a discount factor, a tuning parameter that determines how strongly the influence of the weights decreases with the number of iterations. Numerical experiments with two types of base learners, original decision trees and extremely randomized trees, on various regression datasets illustrate the proposed model.
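The contamination-model mixture described above can be sketched directly: the weight of each boosting iteration is a convex combination of a fixed part and a trainable part, so the final prediction is linear in the trainable parameters. The following NumPy fragment is a toy illustration under assumed shapes; the fixed weights `p`, the trainable vector `q`, and the contamination rate `eps` are placeholders for the quantities the paper fits by quadratic optimization.

```python
import numpy as np

def agboost_predict(iter_preds, p, q, eps=0.3):
    """Combine per-iteration GBM predictions with contamination-model weights.

    iter_preds: (T, n) array, prediction of each boosting iteration on n points.
    p: fixed weights derived from tree properties; q: trainable simplex vector.
    The weights are linear in q, so fitting q is a linearly constrained QP.
    """
    w = (1.0 - eps) * p + eps * q  # Huber's epsilon-contamination mixture
    return w @ iter_preds, w

T, n = 5, 8
rng = np.random.default_rng(1)
iter_preds = rng.normal(size=(T, n))
p = np.full(T, 1.0 / T)   # e.g. a uniform prior over iterations
q = np.eye(T)[2]          # trainable part, here an arbitrary one-hot sketch
y_hat, w = agboost_predict(iter_preds, p, q)
```

Because `p` and `q` both lie on the simplex, the mixed weights `w` also sum to one for any `eps` in [0, 1].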
New models of random forests jointly using the attention and self-attention mechanisms are proposed for solving the regression problem. The models can be regarded as extensions of the attention-based random forest, whose idea stems from applying a combination of Nadaraya-Watson kernel regression and Huber's contamination model to the random forest. The self-attention aims to capture dependencies among the tree predictions and to remove noisy or anomalous predictions in the random forest. The self-attention module is trained jointly with the attention module that computes the attention weights. It is shown that the training of the attention weights reduces to solving a single quadratic or linear optimization problem. Three modifications of the general approach are proposed and compared. A specific multi-head self-attention for the random forest is also considered: the heads of the self-attention are obtained by varying its tuning parameters, including the kernel parameters and the contamination parameter of the model. Numerical experiments with various datasets illustrate the proposed models and show that adding self-attention improves the model performance on many datasets.
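The denoising role of self-attention over tree predictions can be illustrated with a Nadaraya-Watson style smoothing step: each tree's prediction is replaced by a kernel-weighted average of all tree predictions, so an anomalous tree is pulled toward the bulk. This is a minimal sketch, not the paper's trained module; the kernel and the temperature `tau` are assumptions, and varying `tau` is one way to obtain multiple heads.

```python
import numpy as np

def self_attention_smooth(tree_preds, tau=1.0):
    """Kernel self-attention over forest predictions: each tree attends to
    all trees with weights based on prediction similarity, damping outliers."""
    z = np.asarray(tree_preds, dtype=float)
    d2 = (z[:, None] - z[None, :]) ** 2
    a = np.exp(-d2 / tau)
    a /= a.sum(axis=1, keepdims=True)  # each row of weights sums to one
    return a @ z

# Last tree is an anomalous prediction; smoothing pulls it toward the others.
smoothed = self_attention_smooth([1.0, 1.1, 0.9, 5.0], tau=10.0)
```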
A new approach called ABRF (attention-based random forest) and its modifications for applying the attention mechanism to the random forest (RF) for regression and classification are proposed. The main idea behind the proposed ABRF models is to assign attention weights with trainable parameters to the decision trees in a specific way. The weights depend on the distance between an instance, which falls into a corresponding leaf of a tree, and the instances that fall into the same leaf. This idea stems from representing Nadaraya-Watson kernel regression in the form of an RF. Three modifications of the general approach are proposed. The first is based on applying Huber's contamination model and computes the attention weights by solving quadratic or linear optimization problems. The second and third modifications use gradient-based algorithms to compute the trainable parameters. Numerical experiments with various regression and classification datasets illustrate the proposed method.
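The distance-based tree weighting can be sketched as a softmax over negative squared distances between the instance and, for each tree, the mean of the training instances in the leaf it falls into. The leaf means, tree predictions, and temperature below are hypothetical toy values; the trainable parameters of the actual ABRF modifications are omitted.

```python
import numpy as np

def abrf_predict(x, leaf_means, tree_preds, tau=1.0):
    """Attention over trees: tree k is weighted by how close x is to the
    mean of the training instances in the leaf of tree k that x falls into."""
    d2 = np.sum((leaf_means - x) ** 2, axis=1)
    logits = -d2 / tau
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w /= w.sum()
    return float(np.dot(w, tree_preds)), w

x = np.array([0.0, 0.0])
leaf_means = np.array([[0.1, 0.0], [2.0, 2.0], [0.0, -0.1]])  # hypothetical
tree_preds = np.array([1.0, 10.0, 1.2])  # tree 2's leaf is far from x
y_hat, w = abrf_predict(x, leaf_means, tree_preds)
```

The far-away leaf (tree 2) receives a near-zero weight, so its prediction of 10.0 barely moves the aggregate.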
A new multi-attention based method for solving the multiple instance learning (MIL) problem, called MAMIL, is proposed; it takes into account the neighboring patches or instances of each analyzed patch in a bag. In the method, one of the attention modules takes the adjacent patches or instances into account, several attention modules are used to obtain diverse feature representations of the patches, and one attention module combines the different feature representations to provide an accurate classification of each patch (instance) and of the whole bag. Thanks to MAMIL, a combined representation of patches and their neighbors is realized in the form of low-dimensional embeddings suitable for simple classification. Moreover, different types of patches are processed efficiently, and diverse feature representations of the patches in a bag are obtained by using several attention modules. A simple approach for explaining the patch classification predictions is also proposed. Numerical experiments with various datasets illustrate the proposed method.
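Two of the ingredients above can be sketched in NumPy: a neighbor-augmented patch representation, and standard attention pooling of instance embeddings into a bag embedding. This is a simplified stand-in with made-up parameter matrices, not the paper's multi-module architecture; the 50/50 neighbor mixing is an assumption.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def neighbor_augment(instances, neighbors_idx):
    """Represent each patch together with the mean of its neighboring patches."""
    return np.stack([
        0.5 * inst + 0.5 * instances[nbrs].mean(axis=0)
        for inst, nbrs in zip(instances, neighbors_idx)
    ])

def mil_attention_pool(instances, V, w):
    """Attention pooling: score each instance embedding, softmax the scores,
    and return the attention-weighted bag embedding."""
    scores = np.tanh(instances @ V) @ w
    a = softmax(scores)
    return a @ instances, a

rng = np.random.default_rng(2)
bag = rng.normal(size=(6, 4))  # 6 patches with 4-dim embeddings
aug = neighbor_augment(bag, [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]])
V, w = rng.normal(size=(4, 3)), rng.normal(size=3)
bag_emb, att = mil_attention_pool(aug, V, w)
```

The attention scores `att` double as a simple per-patch explanation of the bag-level prediction.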
Text classifiers have promising applications in high-stakes tasks such as resume screening and content moderation. These classifiers must be fair and avoid discriminatory decisions by being invariant to perturbations of sensitive attributes such as gender or ethnicity. However, there is a gap between human intuition about these perturbations and the formal similarity specifications capturing them. While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals). This work proposes novel methods for bridging this gap by discovering expressive and intuitive individual fairness specifications. We show how to leverage unsupervised style transfer and GPT-3's zero-shot capabilities to automatically generate expressive candidate pairs of semantically similar sentences that differ along sensitive attributes. We then validate the generated pairs via an extensive crowdsourcing study, which confirms that many of these pairs align with human intuition about fairness in the context of toxicity classification. Finally, we show how limited amounts of human feedback can be leveraged to learn a similarity specification that can be used to train downstream fairness-aware models.
Autonomous driving is an exciting new industry, posing important research questions. Within the perception module, 3D human pose estimation is an emerging technology, which can enable the autonomous vehicle to perceive and understand the subtle and complex behaviors of pedestrians. While hardware systems and sensors have dramatically improved over the decades -- with cars potentially boasting complex LiDAR and vision systems and with a growing expansion of the available body of dedicated datasets for this newly available information -- not much work has been done to harness these novel signals for the core problem of 3D human pose estimation. Our method, which we coin HUM3DIL (HUMan 3D from Images and LiDAR), efficiently makes use of these complementary signals in a semi-supervised fashion and outperforms existing methods by a large margin. It is a fast and compact model for onboard deployment. Specifically, we embed LiDAR points into pixel-aligned multi-modal features, which we pass through a sequence of Transformer refinement stages. Quantitative experiments on the Waymo Open Dataset support these claims, where we achieve state-of-the-art results on the task of 3D pose estimation.
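The core idea of pixel-aligned multi-modal features can be sketched as: project each LiDAR point into the image with the camera intrinsics, sample the image feature map at that pixel, and concatenate the sampled feature with the 3D coordinates. The fragment below is a toy NumPy illustration under assumed shapes (nearest-neighbor sampling, a made-up intrinsics matrix), not the HUM3DIL implementation, which feeds such embeddings into Transformer refinement stages.

```python
import numpy as np

def pixel_aligned_features(points_3d, feat_map, K):
    """Project 3D points with intrinsics K, sample the (H, W, C) image
    feature map at the projected pixels, and concatenate with xyz."""
    uvw = (K @ points_3d.T).T         # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]     # perspective divide
    H, W, _ = feat_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    img_feats = feat_map[v, u]        # nearest-neighbor sampling
    return np.concatenate([points_3d, img_feats], axis=1)

K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])       # toy camera intrinsics
pts = np.array([[0.0, 0.0, 2.0], [0.5, -0.2, 4.0]])  # toy LiDAR points
fmap = np.random.default_rng(3).normal(size=(64, 64, 8))
feats = pixel_aligned_features(pts, fmap, K)  # shape (2, 3 + 8)
```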
We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a parametric, statistical human mesh surface. The 3D points have associated semantics and can move freely in 3D space. This allows for optimal coverage of the person of interest, beyond just the body shape, which, in turn, also helps model accessories, hair, and loose clothing. Owing to this, we present a complete 3D transformer-based attention framework which, given a single image of a person in an unconstrained pose, generates an animatable 3D reconstruction with albedo and illumination decomposition, as a result of a single end-to-end model, trained semi-supervised, and with no additional postprocessing. We show that our S3F model surpasses the previous state-of-the-art on various tasks, including monocular 3D reconstruction, as well as albedo and shading estimation. Moreover, we show that the proposed methodology allows novel view synthesis, relighting, and re-posing the reconstruction, and can naturally be extended to handle multiple input images (e.g. different views of a person, or the same view, in different poses, in video). Finally, we demonstrate the editing capabilities of our model for 3D virtual try-on applications.
Domain adaptation has been vastly investigated in computer vision but still requires access to target images at train time, which might be intractable in some conditions, especially for long-tail samples. In this paper, we propose the task of `Prompt-driven Zero-shot Domain Adaptation', where we adapt a model trained on a source domain using only a general textual description of the target domain, i.e., a prompt. First, we leverage a pretrained contrastive vision-language model (CLIP) to optimize affine transformations of source features, bringing them closer to target text embeddings, while preserving their content and semantics. Second, we show that augmented features can be used to perform zero-shot domain adaptation for semantic segmentation. Experiments demonstrate that our method significantly outperforms CLIP-based style transfer baselines on several datasets for the downstream task at hand. Our prompt-driven approach even outperforms one-shot unsupervised domain adaptation on some datasets, and gives comparable results on others. The code is available at https://github.com/astra-vision/PODA.
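The "affine transformations of source features" in the abstract are per-channel scale-and-shift operations in the spirit of instance normalization. The NumPy sketch below only shows the form of such a transform on a toy feature map; in the actual method the target statistics are optimized so that the CLIP embedding of the transformed features moves toward the target-prompt text embedding, which is not reproduced here.

```python
import numpy as np

def affine_stylize(feats, target_mu, target_sigma):
    """Per-channel affine transform of an (H, W, C) feature map: normalize
    by the source statistics, then re-scale and shift toward target
    statistics, preserving the spatial structure (content) of the features."""
    mu = feats.mean(axis=(0, 1), keepdims=True)
    sigma = feats.std(axis=(0, 1), keepdims=True) + 1e-8
    return target_sigma * (feats - mu) / sigma + target_mu

rng = np.random.default_rng(4)
src = rng.normal(loc=2.0, scale=3.0, size=(16, 16, 4))  # toy source features
styled = affine_stylize(src, target_mu=0.0, target_sigma=1.0)
```

Only the channel statistics change; the relative spatial pattern of the features, and hence the semantic content, is untouched.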
Graph learning problems are typically approached by focusing on learning the topology of a single graph when signals from all nodes are available. However, many contemporary setups involve multiple related networks and, moreover, it is often the case that only a subset of nodes is observed while the rest remain hidden. Motivated by this, we propose a joint graph learning method that takes into account the presence of hidden (latent) variables. Intuitively, the presence of the hidden nodes renders the inference task ill-posed and challenging to solve, so we overcome this detrimental influence by harnessing the similarity of the estimated graphs. To that end, we assume that the observed signals are drawn from a Gaussian Markov random field with latent variables and we carefully model the graph similarity among hidden (latent) nodes. Then, we exploit the structure resulting from the previous considerations to propose a convex optimization problem that solves the joint graph learning task by providing a regularized maximum likelihood estimator. Finally, we compare the proposed algorithm with different baselines and evaluate its performance over synthetic and real-world graphs.
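The shape of the regularized maximum likelihood objective can be sketched for the fully observed part: a Gaussian negative log-likelihood per graph, an l1 sparsity penalty on the off-diagonal entries of each precision matrix, and a coupling term that penalizes differences between the graphs (the similarity prior that makes the problem well posed). The fragment below is a simplified evaluation of such an objective in NumPy; it omits the low-rank component that models the hidden nodes, and the penalty weights are illustrative.

```python
import numpy as np

def joint_penalized_nll(Thetas, Sigmas, lam_sparse=0.1, lam_sim=0.1):
    """Objective for jointly learning K Gaussian Markov random fields:
    per-graph NLL + l1 sparsity + a graph-similarity coupling term."""
    nll = 0.0
    for Th, Si in zip(Thetas, Sigmas):
        sign, logdet = np.linalg.slogdet(Th)
        assert sign > 0, "precision matrices must be positive definite"
        nll += np.trace(Si @ Th) - logdet                    # Gaussian NLL
        nll += lam_sparse * np.abs(Th - np.diag(np.diag(Th))).sum()
    for i in range(len(Thetas)):
        for j in range(i + 1, len(Thetas)):
            nll += lam_sim * np.abs(Thetas[i] - Thetas[j]).sum()
    return nll

I3 = np.eye(3)
val = joint_penalized_nll([I3, I3], [I3, I3])  # identical graphs: no penalty
```

Each term is convex in the precision matrices, which is why the joint estimator can be computed by convex optimization.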