Normalizing flows provide an elegant approach to generative modeling, enabling both efficient sampling and exact density evaluation of the data distribution. However, current techniques have significant limitations in their expressivity when the data distribution is supported on a low-dimensional manifold or has a non-trivial topology. We introduce a novel statistical framework for learning a mixture of local normalizing flows as "chart maps" over the data manifold. Our framework augments the expressivity of recent approaches while preserving the signature property of normalizing flows: exact density evaluation. We learn a suitable atlas of charts for the data manifold via a vector-quantized auto-encoder (VQ-AE) and learn the distributions over them with conditional flows. We experimentally validate that our probabilistic framework enables existing approaches to better model data distributions over complex manifolds.
Normalizing flows provide an elegant method for obtaining tractable densities from distributions by using invertible transformations. The main challenge is to improve the expressivity of the models while keeping the invertibility constraint intact. We propose to do so by incorporating localized self-attention. However, conventional self-attention mechanisms do not satisfy the requirements for obtaining invertible flows and cannot be naively incorporated into normalizing flows. To address this, we introduce a novel approach called Attentive Contractive Flow (ACF), which utilizes a special category of flow-based generative models: contractive flows. We demonstrate that ACF can be introduced into a variety of state-of-the-art flow models in a plug-and-play manner. This is shown not only to improve the representation power of these models (improving the bits-per-dim metric), but also to make training them significantly faster. Qualitative results, including interpolations between test images, demonstrate that the samples are more realistic and better capture local correlations in the data. We further evaluate the results through perturbation analysis with additive white Gaussian noise (AWGN), demonstrating that ACF models (particularly the dot-product variant) exhibit better and more consistent resilience to noise.
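The abstract does not spell out ACF's construction, but the contractive flows it builds on have a simple defining property: a residual map y = x + g(x) with Lipschitz constant of g below 1 is always invertible, and the inverse can be computed by fixed-point iteration. A minimal numpy sketch of that property (the specific choice of g below is illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# A residual map y = x + g(x) is invertible whenever g is contractive
# (Lipschitz constant < 1); the inverse is the unique fixed point of
# x = y - g(x), found by Banach fixed-point iteration.
W = rng.normal(size=(4, 4))
W *= 0.5 / np.linalg.norm(W, 2)  # rescale to spectral norm 0.5

def g(x):
    return np.tanh(x @ W.T)  # tanh is 1-Lipschitz, so Lip(g) <= ||W||_2 = 0.5

def forward(x):
    return x + g(x)

def inverse(y, n_iter=50):
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)  # contraction mapping: converges geometrically
    return x

x = rng.normal(size=(3, 4))
x_rec = inverse(forward(x))
print(np.max(np.abs(x - x_rec)))  # near machine precision
```

With contraction factor 0.5, fifty iterations shrink the inversion error by 2^50, which is why such flows admit the exact density evaluation the abstract emphasizes.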
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
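The two most-cited strategies, patch-based training for oversized samples and k-fold cross-validation, can be sketched generically; the patch size, stride, and fold count below are illustrative choices, not values taken from the survey:

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Tile a large 2D image into patches (patch-based training strategy)."""
    h, w = image.shape
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append(image[top:top + patch, left:left + patch])
    return np.stack(patches)

def kfold_indices(n_samples, k, seed=0):
    """Split sample indices into k disjoint folds for cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

image = np.zeros((256, 256))  # stand-in for a large biomedical image
patches = extract_patches(image, patch=64, stride=64)
print(patches.shape)  # (16, 64, 64): a 4x4 grid of non-overlapping tiles
folds = kfold_indices(len(patches), k=5)
print(sum(len(f) for f in folds))  # 16: every sample lands in exactly one fold
```

In each of the k training rounds, one fold serves as the validation set and the remaining k-1 folds are used for training, so every sample is validated against exactly once.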
We present a framework for ranking images within their class based on the strength of spurious cues present. By measuring the gap in accuracy on the highest and lowest ranked images (we call this the spurious gap), we assess spurious feature reliance for $89$ diverse ImageNet models, finding that even the best models underperform in images with weak spurious presence. However, the effect of spurious cues varies far more dramatically across classes, emphasizing the crucial, often overlooked, class-dependence of the spurious correlation problem. While most spurious features we observe are clarifying (i.e. improving test-time accuracy when present, as is typically expected), we surprisingly find many cases of confusing spurious features, where models perform better when they are absent. We then close the spurious gap by training new classification heads on lowly ranked (i.e. without common spurious cues) images, resulting in improved effective robustness to distribution shifts (ObjectNet, ImageNet-R, ImageNet-Sketch). We also propose a second metric to assess feature reliability, finding that spurious features are generally less reliable than non-spurious (core) ones, though again, spurious features can be more reliable for certain classes. To enable our analysis, we annotated $5,000$ feature-class dependencies over {\it all} of ImageNet as core or spurious using minimal human supervision. Finally, we show the feature discovery and spuriosity ranking framework can be extended to other datasets like CelebA and WaterBirds in a lightweight fashion with only linear layer training, leading to discovering a previously unknown racial bias in the CelebA hair classification.
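The abstract does not give the exact formula for the spurious gap, but the idea of comparing accuracy on the extremes of a within-class spuriosity ranking can be illustrated directly; `frac` (the fraction taken from each end of the ranking) is an assumed parameter:

```python
import numpy as np

def spurious_gap(correct, spuriosity_rank, frac=0.25):
    """Accuracy gap between the most- and least-spurious images of one class.

    correct: boolean array, whether the model classified each image correctly.
    spuriosity_rank: per-image score, higher = stronger spurious cues present.
    """
    order = np.argsort(spuriosity_rank)
    k = max(1, int(len(order) * frac))
    low = correct[order[:k]].mean()    # accuracy with weak spurious cues
    high = correct[order[-k:]].mean()  # accuracy with strong spurious cues
    return high - low  # positive gap => the model leans on the spurious cue

# Toy class of 8 images: the model succeeds exactly when the cue is strong.
rank = np.array([0, 1, 2, 3, 4, 5, 6, 7])
correct = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=bool)
print(spurious_gap(correct, rank, frac=0.5))  # 1.0: maximal reliance
```

A "confusing" spurious feature in the abstract's terminology would show up here as a negative gap.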
Recommender systems are ubiquitous in most of our interactions in the current digital world. Whether shopping for clothes, scrolling YouTube for exciting videos, or searching for restaurants in a new city, the recommender systems at the back-end power these services. Most large-scale recommender systems are huge models trained on extensive datasets and are black-boxes to both their developers and end-users. Prior research has shown that providing recommendations along with their reason enhances trust, scrutability, and persuasiveness of the recommender systems. Recent literature in explainability has been inundated with works proposing several algorithms to this end. Most of these works provide item-style explanations, i.e., `We recommend item A because you bought item B.' We propose a novel approach, RecXplainer, to generate more fine-grained explanations based on the user's preference over the attributes of the recommended items. We perform experiments using real-world datasets and demonstrate the efficacy of RecXplainer in capturing users' preferences and using them to explain recommendations. We also propose ten new evaluation metrics and compare RecXplainer to six baseline methods.
Tasks critical to enterprise profitability, such as customer churn prediction, fraudulent account detection or customer lifetime value estimation, are often tackled by models trained on features engineered from customer data in tabular format. Application-specific feature engineering adds development, operationalization and maintenance costs over time. Recent advances in representation learning present an opportunity to simplify and generalize feature engineering across applications. When applying these advancements to tabular data, researchers deal with data heterogeneity, variations in customer engagement history and the sheer volume of enterprise datasets. In this paper, we propose a novel approach to encode tabular data containing customer transactions, purchase history and other interactions into a generic representation of a customer's association with the business. We then evaluate these embeddings as features to train multiple models spanning a variety of applications. CASPR, Customer Activity Sequence-based Prediction and Representation, applies the Transformer architecture to encode activity sequences to improve model performance and avoid bespoke feature engineering across applications. Our experiments at scale validate CASPR for both small and large enterprise applications.
Visual Question Answering (VQA) is a multi-modal task of answering questions about an input image: semantically understanding the image's content and answering in natural language. Given the range of questions a VQA system can answer, using VQA for disaster management is an important line of research. The main challenge, however, is the delay incurred in generating labels for assessing the affected areas. To address this, we deploy a pre-trained CLIP model, which was trained on image-text pairs. However, we empirically observe that the model has poor zero-shot performance. We therefore instead use the pre-trained text and image embeddings from this model for our supervised training, and surpass the previous state-of-the-art results on the FloodNet dataset. We extend this to a continual setting, which is a more realistic scenario. We address the problem of catastrophic forgetting using various experience replay methods. Our training runs are available at: https://wandb.ai/compyle/continual_vqa_final
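The abstract does not say which experience replay variants were compared; one common choice, shown here purely for illustration, is a fixed-size memory filled by reservoir sampling, so every example seen so far has equal probability of being retained for rehearsal:

```python
import random

class ReservoirReplayBuffer:
    """Fixed-size replay memory via reservoir sampling, a standard tool for
    mitigating catastrophic forgetting in continual learning: old examples
    are periodically re-sampled and mixed into training on the new task."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = example  # evict a uniformly random slot

    def sample(self, batch_size):
        return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReservoirReplayBuffer(capacity=100)
for step in range(1000):
    buf.add(step)  # stand-in for a (question, image, answer) example
print(len(buf.buffer))  # 100: memory stays capped regardless of stream length
```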
Mixed Integer Programs (MIPs) are typically solved by the branch-and-bound algorithm. Recently, learning fast approximations that imitate the expert strong branching heuristic has gained attention due to its success in reducing the running time for solving MIPs. However, existing learning-to-branch methods assume that the entire training data is available in a single session of training. This assumption is often not true, and if the training data is supplied in a continual fashion over time, existing techniques suffer from catastrophic forgetting. In this work, we study the hitherto unexplored paradigm of lifelong learning to branch on mixed integer programs. To mitigate catastrophic forgetting, we propose LIMIP, which is powered by the idea of modeling an MIP instance in the form of a bipartite graph, which we map to an embedding space using a bipartite graph attention network. This rich embedding space avoids catastrophic forgetting through the application of knowledge distillation and elastic weight consolidation, wherein we learn which parameters are key to retaining efficacy and therefore protect them from significant drift. We evaluate LIMIP on a series of NP-hard problems and establish that, compared to existing baselines, LIMIP is up to 50% better when confronted with lifelong learning.
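Elastic weight consolidation, one of the two forgetting-mitigation techniques named above, adds a quadratic penalty that anchors each parameter to its value after the previous task, weighted by an importance estimate (typically the diagonal Fisher information). A minimal numpy sketch; the parameter and Fisher values are made up for illustration:

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Elastic weight consolidation penalty:

        L_ewc = (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2

    Parameters with large Fisher information F_i (important to old tasks)
    are heavily penalized for drifting; unimportant ones may move freely.
    """
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

theta_star = np.array([1.0, -2.0, 0.5])  # parameters after the previous task
fisher = np.array([10.0, 0.1, 1.0])      # per-parameter importance estimates
theta = np.array([1.1, -1.0, 0.5])       # candidate parameters on a new task

print(ewc_penalty(theta, theta_star, fisher))        # ~0.1
print(ewc_penalty(theta_star, theta_star, fisher))   # 0.0: no drift, no cost
```

Note how the large drift in the second coordinate (Fisher 0.1) costs as much as the tiny drift in the first (Fisher 10.0): importance weighting, not raw distance, drives the penalty.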
Deep learning networks have demonstrated high performance in a wide variety of applications, such as image classification, speech recognition, and natural language processing. However, they have a major vulnerability that is exploited by adversarial attacks. An adversarial attack alters the input image so slightly that the change is nearly undetectable to the naked eye, yet causes the network to produce a very different classification. This paper explores the projected gradient descent (PGD) attack and the Adaptive Segmentation Mask Attack (ASMA) on the DeepLabV3 image segmentation model with two backbone architectures, MobileNetV3 and ResNet50. We find that PGD is very consistent in changing the segmentation toward its target, while ASMA generalizes less effectively to multi-class targets. The existence of such attacks nevertheless puts all deep learning networks for image classification at risk of exploitation.
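PGD itself is standard: repeatedly step in the sign of the loss gradient with respect to the input, then project back into an L-infinity ball around the original image. A minimal numpy sketch on a stand-in logistic-regression "network" (not DeepLabV3; `w`, `b`, `eps`, and `alpha` are illustrative values):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=20):
    """Untargeted PGD attack on a logistic-regression model p = sigmoid(w.x+b).

    Each step ascends the cross-entropy loss in the input and projects the
    iterate back into the L-infinity ball of radius eps around the original,
    which is what keeps the perturbation nearly imperceptible.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # model's current output
        grad = (p - y) * w       # analytic d(cross-entropy)/d(input)
        x_adv = x_adv + alpha * np.sign(grad)       # gradient-sign ascent
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project into eps-ball
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = rng.normal(size=16)                 # stand-in for an input image
y = 1.0 if x @ w + b > 0 else 0.0       # the model's clean prediction
x_adv = pgd_attack(x, y, w, b)
print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12)  # True: stays inside the ball
```

A targeted variant, as used against segmentation models, instead descends the loss toward a chosen target labeling rather than ascending away from the true one.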
Reducing methane emissions is essential for mitigating global warming. To attribute methane emissions to their sources, a comprehensive dataset of methane source infrastructure is necessary. Recent advances in deep learning on remotely sensed imagery have the potential to identify the locations and characteristics of methane sources, but there is a lack of publicly available data that would enable machine learning researchers and practitioners to build automated mapping approaches. To help fill this gap, we construct a multi-sensor dataset in the U.S. called METER-ML containing 86,625 georeferenced NAIP, Sentinel-1, and Sentinel-2 images labeled for the presence or absence of methane source facilities, including concentrated animal feeding operations, coal mines, landfills, natural gas processing plants, oil refineries and petroleum terminals, and wastewater treatment plants. We experiment with a variety of models that leverage different spatial resolutions, spatial footprints, image products, and spectral bands. On an expert-labeled test set, our best model achieves a strong area under the precision-recall curve for identifying concentrated animal feeding operations, and 0.821 for oil refineries and petroleum terminals, suggesting the potential for large-scale mapping. We make METER-ML freely available at https://stanfordmlgroup.github.io/projects/meter-ml/ to support future work on automated methane source mapping.