Many causal and policy effects of interest are defined by linear functionals of high-dimensional or nonparametric regression functions. $\sqrt{n}$-consistent and asymptotically normal estimation of the object of interest requires debiasing to reduce the effects of regularization and/or model selection on the object of interest. Debiasing is typically achieved by adding a correction term to the plug-in estimator of the functional, which leads to properties such as semiparametric efficiency, double robustness, and Neyman orthogonality. We implement an automatic debiasing procedure based on automatically learning the Riesz representation of the linear functional using neural nets and random forests. Our method relies only on black-box evaluation oracle access to the linear functional and does not require knowledge of its analytic form. We propose a multi-tasking neural net debiasing method with stochastic gradient descent minimization of a combined Riesz representer and regression loss, while sharing representation layers for the two functions. We also propose a random forest method that learns a locally linear representation of the Riesz function. Even though our method applies to arbitrary functionals, we experimentally find that it performs comparably to the state-of-the-art neural net based algorithm of Shi et al. (2019) for the case of the average treatment effect functional. We also evaluate our method on the problem of estimating average marginal effects with continuous treatments, using semi-synthetic data of gasoline price changes on gasoline demand.
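For illustration, here is a minimal PyTorch-style sketch of the shared-representation, multi-task idea described above, specialized to the average treatment effect functional $m(W; f) = f(1, Z) - f(0, Z)$. Layer sizes, the loss weight, and the overall architecture are assumptions made for the sketch, not the authors' exact design.

```python
# Hypothetical sketch: one shared trunk feeds two heads, an outcome-regression head
# g(T, Z) and a Riesz-representer head alpha(T, Z); both are trained jointly with SGD
# on a combined regression + Riesz loss.
import torch
import torch.nn as nn

class MultiTaskRiesz(nn.Module):
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                   nn.Linear(d_hidden, d_hidden), nn.ReLU())
        self.g_head = nn.Linear(d_hidden, 1)      # outcome regression g(T, Z)
        self.alpha_head = nn.Linear(d_hidden, 1)  # Riesz representer alpha(T, Z)

    def forward(self, x):
        h = self.trunk(x)
        return self.g_head(h).squeeze(-1), self.alpha_head(h).squeeze(-1)

def combined_loss(model, t, z, y, lam=1.0):
    """Regression loss plus Riesz loss for the ATE functional m(W; f) = f(1,Z) - f(0,Z)."""
    x  = torch.cat([t.unsqueeze(1), z], dim=1)
    x1 = torch.cat([torch.ones_like(t).unsqueeze(1), z], dim=1)
    x0 = torch.cat([torch.zeros_like(t).unsqueeze(1), z], dim=1)
    g, alpha = model(x)
    _, alpha1 = model(x1)
    _, alpha0 = model(x0)
    reg_loss = ((y - g) ** 2).mean()
    # Riesz representer loss: E[alpha(X)^2 - 2 m(W; alpha)], minimized by the true representer.
    riesz_loss = (alpha ** 2).mean() - 2 * (alpha1 - alpha0).mean()
    return reg_loss + lam * riesz_loss
```

Given fitted $\hat g$ and $\hat\alpha$, the debiased estimate of the functional would average $m(W_i; \hat g) + \hat\alpha(X_i)\,(Y_i - \hat g(X_i))$ over the sample.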
This paper is about the feasibility and means of root-n consistently estimating linear, mean-square continuous functionals of a high-dimensional, approximately sparse regression. Such objects include a wide variety of interesting parameters, such as regression coefficients, average derivatives, and the average treatment effect. We give lower bounds on the convergence rates of estimators of a regression slope and an average derivative and find that these bounds are substantially larger than in a low-dimensional, semiparametric setting. We also give debiased machine learners that are consistent under a minimal approximate sparsity condition or under rate double robustness. These estimators improve on existing ones by remaining root-n consistent under more general conditions than previously known.
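To make the target of such debiased machine learners concrete, one standard form of the debiased estimator of a linear functional $\theta_0 = \mathrm{E}[m(W; g_0)]$ (a generic doubly robust moment, sketched here for illustration rather than as the paper's exact estimator) combines a regression learner $\hat g$ with an estimate $\hat\alpha$ of the Riesz representer of the functional:

$$
\hat\theta \;=\; \frac{1}{n}\sum_{i=1}^{n}\Big[\, m(W_i;\hat g) \;+\; \hat\alpha(X_i)\,\big(Y_i - \hat g(X_i)\big) \Big].
$$

Root-$n$ consistency of $\hat\theta$ then hinges on the product of the estimation errors of $\hat g$ and $\hat\alpha$ vanishing fast enough, which is where approximate sparsity and rate double robustness conditions enter.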
We extend the idea of automated debiased machine learning to dynamic treatment regimes and, more generally, to nested functionals. We show that the multiply robust formula for dynamic treatment regimes can be re-stated in terms of a recursive Riesz representer characterization of nested mean regressions. We then apply a recursive Riesz representer estimation learning algorithm that estimates the debiasing corrections without needing to characterize what the correction terms look like, e.g. as products of inverse probability weighting terms, as is done in prior work on doubly robust estimation in the dynamic regime. Our approach defines a sequence of loss minimization problems whose minimizers are the debiasing corrections, thus circumventing the need to solve auxiliary propensity models and directly optimizing the mean squared error of the target debiasing correction. We provide a further application to the estimation of dynamic discrete choice models.
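For intuition, the one-period version of the loss minimization referred to above can be sketched as follows (the dynamic case applies an analogous step recursively, period by period). The debiasing correction is learned as

$$
\hat\alpha \;=\; \arg\min_{\alpha \in \mathcal{A}}\; \frac{1}{n}\sum_{i=1}^{n}\Big[\, \alpha(X_i)^2 \;-\; 2\, m(W_i; \alpha) \,\Big],
$$

whose population minimizer is the Riesz representer $\alpha_0$ satisfying $\mathrm{E}[m(W; f)] = \mathrm{E}[\alpha_0(X)\, f(X)]$ for all square-integrable $f$, so no propensity or inverse probability weighting model ever needs to be specified.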
We derive general, yet simple and sharp, bounds on the size of the omitted variable bias for a broad class of causal parameters that can be identified as linear functionals of the conditional expectation function of the outcome. Such functionals encompass many of the traditional targets of investigation in causal inference studies, such as (weighted) averages of potential outcomes, average treatment effects (including subgroup effects, such as the effect on the treated), (weighted) average derivatives, and policy effects from shifts in the covariate distribution, all for general, nonparametric causal models. Our construction relies on the Riesz-Frechet representation of the target functional. Specifically, we show how the bound on the bias depends only on the additional variation that the latent variables create in the outcome and in the Riesz representer for the parameter of interest. Moreover, in many important cases (e.g., average treatment effects in partially linear models, or in nonseparable models with a binary treatment), the bound is shown to depend on two easily interpretable quantities: the nonparametric partial $R^2$ (Pearson's "correlation ratio") of the unobserved variables with the treatment and with the outcome. Therefore, simple plausibility judgments on the maximum explanatory power of the omitted variables (in explaining treatment and outcome variation) are sufficient to place overall bounds on the size of the bias. Finally, leveraging debiased machine learning, we provide flexible and efficient statistical inference methods to estimate the components of the bounds that are identifiable from the observed distribution.
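To sketch the structure of such a bound (using notation not in the abstract: $g$ and $\alpha$ for the long regression and Riesz representer that include the latent variables, and $g_s$ and $\alpha_s$ for their short counterparts based only on observables), the omitted variable bias of the short parameter $\theta_s$ relative to the long parameter $\theta$ satisfies, by the Cauchy-Schwarz inequality,

$$
|\theta_s - \theta|^2 \;\le\; \mathrm{E}\big[(g - g_s)^2\big]\;\mathrm{E}\big[(\alpha - \alpha_s)^2\big],
$$

and each factor on the right can in turn be expressed through a nonparametric partial $R^2$, i.e., the share of residual variation in the outcome and in the Riesz representer that the omitted variables explain.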
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the goals of optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
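The following Python sketch shows the shape of such an index-based adaptive allocation loop. The gittins_index function is a placeholder: a real implementation would use the calibrated Gittins index for exponentially distributed rewards (e.g. a precomputed table over posterior states) rather than the ad hoc bonus used here, and the priors, horizon, and arm parameters are illustrative assumptions.

```python
# Schematic adaptive allocation with an index rule and exponential rewards.
import numpy as np

rng = np.random.default_rng(0)
true_rates = [1.0, 1.5, 2.0]           # unknown rate parameters of the exponential arms
alpha = np.ones(3)                     # Gamma(alpha, beta) posterior on each arm's rate
beta = np.ones(3)

def gittins_index(a, b):
    # Placeholder: approximate posterior mean reward plus a crude exploration bonus.
    # A real design would look up the calibrated Gittins index for this posterior state.
    return b / a + 1.0 / np.sqrt(a)

for participant in range(300):
    arm = int(np.argmax([gittins_index(alpha[k], beta[k]) for k in range(3)]))
    reward = rng.exponential(scale=1.0 / true_rates[arm])
    # Conjugate update: exponential likelihood with a Gamma prior on the rate.
    alpha[arm] += 1
    beta[arm] += reward

print("posterior mean rates:", alpha / beta)
print("allocations per arm:", alpha - 1)
```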
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.
Existing federated classification algorithms typically assume the local annotations at every client cover the same set of classes. In this paper, we aim to lift such an assumption and focus on a more general yet practical non-IID setting where every client can work on non-identical and even disjoint sets of classes (i.e., client-exclusive classes), and the clients have a common goal which is to build a global classification model to identify the union of these classes. Such heterogeneity in client class sets poses a new challenge: how to ensure different clients are operating in the same latent space so as to avoid the drift after aggregation? We observe that the classes can be described in natural languages (i.e., class names) and these names are typically safe to share with all parties. Thus, we formulate the classification problem as a matching process between data representations and class representations and break the classification model into a data encoder and a label encoder. We leverage the natural-language class names as the common ground to anchor the class representations in the label encoder. In each iteration, the label encoder updates the class representations and regulates the data representations through matching. We further use the updated class representations at each round to annotate data samples for locally-unaware classes according to similarity and distill knowledge to local models. Extensive experiments on four real-world datasets show that the proposed method can outperform various classical and state-of-the-art federated learning methods designed for learning with non-IID data.
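A minimal sketch of the matching formulation described above, assuming a PyTorch implementation: a data encoder embeds inputs, a label encoder embeds (pre-computed) text features of the natural-language class names, and classification is similarity matching between the two. The encoder architectures, embedding dimension, and the class-name featurization are illustrative assumptions rather than the paper's exact design.

```python
# Classification as matching between data representations and class-name representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingClassifier(nn.Module):
    def __init__(self, d_in, d_text, d_embed=128):
        super().__init__()
        self.data_encoder = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                                          nn.Linear(256, d_embed))
        self.label_encoder = nn.Sequential(nn.Linear(d_text, d_embed), nn.ReLU(),
                                           nn.Linear(d_embed, d_embed))

    def forward(self, x, class_name_feats):
        z = F.normalize(self.data_encoder(x), dim=-1)                  # data representations
        c = F.normalize(self.label_encoder(class_name_feats), dim=-1)  # class representations
        return z @ c.t()                                               # similarity logits

# Local training on one client: cross-entropy over similarity logits. Logits span the
# union of classes even if the client only annotates a subset, so all clients share
# the same latent space anchored by the class-name representations.
model = MatchingClassifier(d_in=32, d_text=300)
x = torch.randn(8, 32)
class_feats = torch.randn(10, 300)     # stand-in for text embeddings of 10 class names
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x, class_feats), labels)
loss.backward()
```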
This paper is about smooth function approximation by neural networks (NNs). Mathematical or physical functions can be replaced by NN models through regression. In this study, we obtain NNs that generate highly accurate and highly smooth functions, composed of only a few weight parameters, by discussing a few topics about regression. First, we reinterpret the inner workings of NNs for regression; consequently, we propose a new activation function, the integrated sigmoid linear unit (ISLU). Then, the special characteristics of metadata for regression, which differ from other data such as images or sound, are discussed with the aim of improving the performance of neural networks. Finally, a simple hierarchical NN that generates models substituting for mathematical functions is presented, and a new batch concept, the "meta-batch", which improves the performance of NNs several times over, is introduced. The new activation function, the meta-batch method, the features of numerical data, meta-augmentation with metaparameters, and a structure of NN generating a compact multi-layer perceptron (MLP) are essential to this study.
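As a generic illustration of the regression setup described above: the exact form of the ISLU activation is not given in the abstract, so the sketch below uses a standard smooth activation (SiLU) as a stand-in, and the target function, network width, and training schedule are illustrative assumptions.

```python
# Fit a compact MLP with a smooth activation to a smooth 1-D target function by regression.
import torch
import torch.nn as nn

x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(2 * x) + 0.1 * x ** 2           # example smooth target function

model = nn.Sequential(nn.Linear(1, 32), nn.SiLU(),
                      nn.Linear(32, 32), nn.SiLU(),
                      nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()       # mean squared regression error
    loss.backward()
    opt.step()

print("final MSE:", loss.item())
```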
The existing methods for video anomaly detection mostly utilize videos containing identifiable facial and appearance-based features. The use of videos with identifiable faces raises privacy concerns, especially when used in a hospital or community-based setting. Appearance-based features can also be sensitive to pixel-based noise, straining the anomaly detection methods to model the changes in the background and making it difficult to focus on the actions of humans in the foreground. Structural information in the form of skeletons describing the human motion in the videos is privacy-protecting and can overcome some of the problems posed by appearance-based features. In this paper, we present a survey of privacy-protecting deep learning anomaly detection methods using skeletons extracted from videos. We present a novel taxonomy of algorithms based on the various learning approaches. We conclude that skeleton-based approaches for anomaly detection can be a plausible privacy-protecting alternative for video anomaly detection. Lastly, we identify major open research questions and provide guidelines to address them.
The Government of Kerala had increased the frequency of supply of free food kits owing to the pandemic; however, these kits were static and not indicative of the personal preferences of the consumers. This paper conducts a comparative analysis of various clustering techniques on a scaled-down version of a real-world dataset obtained through a conjoint-analysis-based survey. Clustering carried out by centroid-based methods such as k-means is analyzed, the results are plotted along with SVD, and a conclusion is reached as to which of the two is better. Once the clusters have been formed, commodities are also decided upon for each cluster. Clustering is further enhanced by reassignment based on a specific cluster-loss threshold. Thus, the most efficacious clustering technique for designing a food kit tailored to the needs of individuals is finally obtained.
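The sketch below illustrates the clustering-and-reassignment step, assuming scikit-learn's k-means on stand-in data. The number of clusters, the per-point loss (squared distance to the assigned centroid), and the threshold are illustrative assumptions, since the abstract does not specify the exact reassignment rule.

```python
# k-means on scaled survey-like data, then flag points whose cluster loss exceeds a threshold.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # stand-in for scaled survey responses

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
loss = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1) ** 2

threshold = np.percentile(loss, 90)                # example cluster-loss threshold
to_reassign = np.where(loss > threshold)[0]        # candidates for reassignment
print(f"{len(to_reassign)} points exceed the cluster-loss threshold")
```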