Single-cell RNA-seq data allow the quantitative characterization of cell-type differences across a growing set of biological contexts. However, pinpointing a small subset of genomic features that explains this variability can be ill-posed and computationally intractable. Here we introduce MarkerMap, a generative model for selecting minimal gene sets that are maximally informative of cell-type origin and enable whole-transcriptome reconstruction. MarkerMap provides a scalable framework for both supervised marker selection, aimed at identifying specific cell-type populations, and unsupervised marker selection, aimed at gene-expression imputation and reconstruction. We benchmark MarkerMap's competitive performance against previously published approaches on real single-cell gene-expression datasets. MarkerMap is available as a pip-installable package and serves as a community resource aimed at developing explainable machine learning techniques for enhancing interpretability in single-cell studies.
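A minimal sketch of the kind of differentiable marker selection this framing suggests; it is not MarkerMap's actual API, and the names (MarkerSelector, loss_fn) are illustrative. A Gumbel-softmax selector picks k genes, and downstream heads must both classify cell type and reconstruct the full transcriptome from those genes alone:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarkerSelector(nn.Module):
    def __init__(self, n_genes, k_markers, n_cell_types, tau=1.0):
        super().__init__()
        # One categorical distribution over genes per marker slot.
        self.logits = nn.Parameter(torch.zeros(k_markers, n_genes))
        self.tau = tau
        self.classifier = nn.Sequential(
            nn.Linear(k_markers, 64), nn.ReLU(), nn.Linear(64, n_cell_types))
        self.decoder = nn.Sequential(
            nn.Linear(k_markers, 256), nn.ReLU(), nn.Linear(256, n_genes))

    def forward(self, x):
        # Soft one-hot gene selection; anneal tau toward 0 for hard selection.
        mask = F.gumbel_softmax(self.logits, tau=self.tau, hard=False)  # (k, n_genes)
        markers = x @ mask.T                                            # (batch, k)
        return self.classifier(markers), self.decoder(markers)

def loss_fn(model, x, y, alpha=1.0):
    # Supervised selection: cell-type classification plus
    # whole-transcriptome reconstruction from the selected markers.
    y_hat, x_hat = model(x)
    return F.cross_entropy(y_hat, y) + alpha * F.mse_loss(x_hat, x)
```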
Systems of stochastic differential equations define a family of stochastic volatility models. While these models have seen widespread success in domains such as finance and statistical climatology, they typically lack the ability to condition on historical data to produce a true posterior distribution. To address this fundamental limitation, we show how to recast a class of stochastic volatility models as hierarchical Gaussian process (GP) models with specialized covariance functions. These GP models retain the inductive biases of the stochastic volatility models while providing the posterior predictive distribution given by GP inference. Within this framework, we take inspiration from well-studied domains to introduce new models, Volt and Magpie, that significantly outperform baselines in stock and wind-speed forecasting and naturally extend to the multi-task setting.
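A minimal numpy sketch of the mechanism the abstract exploits: once a model is cast as a GP with some covariance function, conditioning on history yields a closed-form posterior predictive. The RBF kernel here is only a placeholder; Volt and Magpie use specialized volatility-based covariances:

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, kernel=rbf, noise=1e-2):
    # Standard exact GP regression: condition on (x_train, y_train).
    K = kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = kernel(x_train, x_test)
    K_ss = kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                 # posterior predictive mean
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                 # posterior predictive covariance
    return mean, cov
```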
There is often a trade-off between building deep learning systems that are expressive enough to capture the nuances of reality and having the right inductive biases for efficient learning. We introduce Residual Pathway Priors (RPPs) as a method for converting hard architectural constraints into soft priors, guiding models toward structured solutions while retaining the ability to capture additional complexity. Using RPPs, we construct neural network priors with inductive biases for equivariance, but without limiting flexibility. We show that RPPs are resilient to approximate or misspecified symmetries, and remain as effective as fully constrained models even when the symmetries are exact. We showcase the broad applicability of RPPs with dynamical systems, tabular data, and reinforcement learning. In MuJoCo locomotion tasks, where contact forces and directional rewards violate strict equivariance assumptions, RPPs outperform baseline model-free RL agents and also improve the learned transition models in model-based RL.
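A hedged sketch of the residual-pathway idea, not the paper's construction: the output is the sum of a constrained pathway and a flexible pathway, with a much tighter Gaussian prior (i.e. a stronger L2 penalty) on the flexible one, so the model prefers the structured solution but can deviate when the symmetry is only approximate. The "equivariant" pathway here is a toy translation-equivariant circular convolution:

```python
import torch
import torch.nn as nn

class ResidualPathwayLayer(nn.Module):
    def __init__(self, n, prior_var_equiv=1.0, prior_var_resid=1e-2):
        super().__init__()
        # Constrained pathway: weight sharing gives translation equivariance.
        self.equiv = nn.Conv1d(1, 1, kernel_size=5, padding=2,
                               padding_mode="circular")
        # Flexible pathway: unconstrained linear map.
        self.resid = nn.Linear(n, n)
        self.prior_var_equiv = prior_var_equiv
        self.prior_var_resid = prior_var_resid

    def forward(self, x):                        # x: (batch, n)
        e = self.equiv(x.unsqueeze(1)).squeeze(1)
        return e + self.resid(x)

    def prior_penalty(self):
        # Negative log of zero-mean Gaussian priors; the residual pathway
        # is penalized ~100x harder, acting as a soft equivariance constraint.
        pe = sum((p ** 2).sum() for p in self.equiv.parameters()) / (2 * self.prior_var_equiv)
        pr = sum((p ** 2).sum() for p in self.resid.parameters()) / (2 * self.prior_var_resid)
        return pe + pr
```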
With a better understanding of the loss surfaces of multilayer networks, we can build more robust and accurate training procedures. It was recently discovered that independently trained SGD solutions can be connected along one-dimensional paths of near-constant training loss. In this paper, we show that there exist mode-connecting simplicial complexes that form multi-dimensional manifolds of low loss, connecting many independently trained models. Inspired by this discovery, we show how to efficiently build simplicial complexes for fast ensembling, outperforming independently trained deep ensembles in accuracy, calibration, and robustness to dataset shift. Notably, our approach requires only a few training epochs to discover a low-loss simplex, starting from a pre-trained solution. Code is available at https://github.com/g-benton/loss-surface-simplexes.
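A minimal sketch of the simplex idea, not the repository's actual code: given flattened parameter vectors for the simplex vertices, interior points are convex combinations with Dirichlet-distributed weights, and ensembling averages predictions from several sampled points:

```python
import torch

def sample_simplex_params(vertices, n_samples=1):
    # vertices: (k, n_params) tensor of flattened vertex parameters.
    k = vertices.shape[0]
    weights = torch.distributions.Dirichlet(torch.ones(k)).sample((n_samples,))
    return weights @ vertices      # (n_samples, n_params) points in the simplex

def simplex_ensemble_logits(model, vertices, x, n_members=5):
    # Load each sampled parameter vector into the model and average logits.
    logits = []
    for theta in sample_simplex_params(vertices, n_members):
        torch.nn.utils.vector_to_parameters(theta, model.parameters())
        logits.append(model(x))
    return torch.stack(logits).mean(0)
```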
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This is the largest collection of lidar sensor data ever released, and it supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with predicting the future motion of "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario carries its own HD map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
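A hypothetical sketch, not the av2 package's API, of the record structure the Motion Forecasting Dataset describes: a per-actor track history capturing location, heading, velocity, and category, which a forecasting model consumes to predict future states:

```python
from dataclasses import dataclass, field

@dataclass
class TrackState:
    x: float           # map-frame position (m)
    y: float
    heading: float     # yaw (rad)
    vx: float          # velocity components (m/s)
    vy: float

@dataclass
class ScoredActorTrack:
    actor_id: str
    category: str                                  # annotated object category
    history: list[TrackState] = field(default_factory=list)

# A forecasting model consumes `history` for each scored actor and predicts
# its future TrackStates within the scenario's local HD map.
```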
The ability to jointly learn from multiple modalities, such as text, audio, and visual data, is a defining feature of intelligent systems. While there have been promising advances in designing neural networks to harness multimodal data, the enormous success of data augmentation currently remains limited to single-modality tasks like image classification. Indeed, it is particularly difficult to augment each modality while preserving the overall semantic structure of the data; for example, a caption may no longer be a good description of an image after standard augmentations have been applied, such as translation. Moreover, it is challenging to specify reasonable transformations that are not tailored to a particular modality. In this paper, we introduce LeMDA, Learning Multimodal Data Augmentation, an easy-to-use method that automatically learns to jointly augment multimodal data in feature space, with no constraints on the identities of the modalities or the relationship between modalities. We show that LeMDA can (1) profoundly improve the performance of multimodal deep learning architectures, (2) apply to combinations of modalities that have not been previously considered, and (3) achieve state-of-the-art results on a wide range of applications comprising image, text, and tabular data.
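A hedged sketch of feature-space multimodal augmentation in the spirit of LeMDA; the names and architecture here are illustrative, not the paper's. An augmentation network perturbs the fused per-modality features, the task loss is applied to both original and augmented features, and a consistency term keeps their predictions aligned:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAugmenter(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        # Residual perturbation of fused features; modality-agnostic by design.
        return z + self.net(z)

def training_losses(task_head, augmenter, z, y):
    # z: fused features from per-modality encoders (any modalities).
    z_aug = augmenter(z)
    logits, logits_aug = task_head(z), task_head(z_aug)
    task = F.cross_entropy(logits, y) + F.cross_entropy(logits_aug, y)
    consistency = F.kl_div(F.log_softmax(logits_aug, -1),
                           F.softmax(logits, -1).detach(),
                           reduction="batchmean")
    return task, consistency
```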
Previous work has shown the potential of deep learning to predict renal obstruction from kidney ultrasound images. However, these image-based classifiers have been trained with single-visit inference in mind. We compare methods from video action recognition (i.e., convolutional pooling, LSTM, TSM) for adapting single-visit convolutional models to multi-visit inference. We demonstrate that incorporating images from a patient's past hospital visits provides only a small benefit for the prediction of obstructive hydronephrosis. Therefore, inclusion of prior ultrasounds is beneficial, but prediction based on the latest ultrasound alone is sufficient for patient risk stratification.
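A minimal sketch of one of the compared strategies (pooling over visits); the encoder is a placeholder, not the paper's single-visit architecture. A shared CNN embeds each visit's ultrasound, and the per-visit embeddings are pooled before classification:

```python
import torch
import torch.nn as nn

class MultiVisitClassifier(nn.Module):
    def __init__(self, encoder, embed_dim, n_classes=2, pool="mean"):
        super().__init__()
        self.encoder = encoder             # shared single-visit CNN
        self.pool = pool
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, x):                  # x: (batch, visits, C, H, W)
        b, v = x.shape[:2]
        z = self.encoder(x.flatten(0, 1)).reshape(b, v, -1)  # embed each visit
        z = z.mean(1) if self.pool == "mean" else z.max(1).values
        return self.head(z)                # obstruction risk from all visits
```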
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This achievement can be ascribed in part to advances in AI subfields such as Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a subfield of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains. As a result, the integration of vision and language has attracted a great deal of attention, and tasks have been designed that properly exemplify deep learning concepts. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and discuss existing datasets, methods, problem formulations, and evaluation measures for VQA and visual reasoning tasks, with the aim of understanding vision-and-language representation learning. We also present some potential future directions in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.
Applying deep learning concepts from image detection and graph theory has greatly advanced protein-ligand binding affinity prediction, a challenge with enormous ramifications for both drug discovery and protein engineering. We build upon these advances by designing a novel deep learning architecture consisting of a 3-dimensional convolutional neural network utilizing channel-wise attention and two graph convolutional networks utilizing attention-based aggregation of node features. HAC-Net (Hybrid Attention-Based Convolutional Neural Network) obtains state-of-the-art results on the PDBbind v.2016 core set, the most widely recognized benchmark in the field. We extensively assess the generalizability of our model using multiple train-test splits, each of which maximizes differences between either protein structures, protein sequences, or ligand extended-connectivity fingerprints. Furthermore, we perform 10-fold cross-validation with a similarity cutoff between SMILES strings of ligands in the training and test sets, and also evaluate the performance of HAC-Net on lower-quality data. We envision that this model can be extended to a broad range of supervised learning problems related to structure-based biomolecular property prediction. All of our software is available as open source at https://github.com/gregory-kyro/HAC-Net/.
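A hedged sketch of the channel-wise attention ingredient (squeeze-and-excitation style) applied to a 3D convolutional feature map; HAC-Net's actual blocks and its attention-based graph aggregation are richer than this illustration:

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, C, D, H, W)
        s = x.mean(dim=(2, 3, 4))                # squeeze: global average pool
        w = self.fc(s).reshape(*s.shape, 1, 1, 1)
        return x * w                             # excite: reweight channels
```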
State-of-the-art performance in electroencephalography (EEG) decoding tasks is currently often achieved with either Deep-Learning- or Riemannian-Geometry-based decoders. Recently, there has been growing interest in Deep Riemannian Networks (DRNs), which may combine the advantages of both classes of methods. However, there remain a range of topics where additional insight is needed to pave the way for more widespread application of DRNs in EEG. These include architecture design questions, such as network size and end-to-end ability, as well as model training questions. How these factors affect model performance has not been explored. Additionally, it is not clear how the data are transformed within these networks, and whether this correlates with traditional EEG decoding. Our study aims to lay the groundwork on these topics through an analysis of DRNs for EEG across a wide range of hyperparameters. Networks were tested on two public EEG datasets and compared with state-of-the-art ConvNets. Here we propose end-to-end EEG SPDNet (EE(G)-SPDNet), and we show that this wide, end-to-end DRN can outperform the ConvNets, and in doing so use physiologically plausible frequency regions. We also show that the end-to-end approach learns more complex filters than the traditional band-pass filters targeting the classical alpha, beta, and gamma frequency bands of the EEG, and that performance can benefit from channel-specific filtering approaches. Additionally, architectural analysis revealed areas for further improvement due to the possible loss of Riemannian-specific information throughout the network. Our study thus shows how to design and train DRNs to infer task-related information from raw EEG without the need for handcrafted filterbanks, and highlights the potential of end-to-end DRNs such as EE(G)-SPDNet for high-performance EEG decoding.
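An illustrative numpy sketch of the classical SPD pipeline that DRNs build on, not EE(G)-SPDNet itself: band-filtered EEG trials are mapped to spatial covariance matrices, and a LogEig step projects them to a vector space where ordinary classifiers apply. SPDNet-style models replace the fixed filterbank and this fixed projection with learned, end-to-end layers:

```python
import numpy as np

def spatial_covariance(trial, eps=1e-6):
    # trial: (channels, timepoints) of already band-filtered EEG.
    c = np.cov(trial)
    return c + eps * np.eye(c.shape[0])    # regularize to keep the matrix SPD

def logeig(spd):
    # Matrix logarithm via eigendecomposition: tangent-space projection.
    vals, vecs = np.linalg.eigh(spd)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def features(trial):
    m = logeig(spatial_covariance(trial))
    iu = np.triu_indices(m.shape[0])
    return m[iu]                           # vectorized upper triangle
```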