This chapter sheds light on the synaptic organization of the brain from the perspective of computational neuroscience. It provides an introductory overview of how to account for empirical data in mathematical models, implement these models in software, and perform simulations that reflect experiments. This path is demonstrated for four key aspects of synaptic signaling: the connectivity of brain networks, synaptic transmission, synaptic plasticity, and the heterogeneity across synapses. Each step and aspect of the modeling and simulation workflow comes with its own challenges and pitfalls, which are highlighted and addressed in detail.
Even though deep neural networks (DNNs) achieve state-of-the-art results for a number of problems involving genomic data, getting DNNs to explain their decision-making process has been a major challenge due to their black-box nature. One way to get DNNs to explain their reasoning for a prediction is via attribution methods, which are assumed to highlight the parts of the input that contribute most to the prediction. Given the existence of numerous attribution methods and a lack of quantitative results on the fidelity of those methods, the selection of an attribution method for sequence-based tasks has mostly been done qualitatively. In this work, we take a step towards identifying the most faithful attribution method by proposing a computational approach that utilizes point mutations. Providing quantitative results on seven popular attribution methods, we find Layerwise Relevance Propagation (LRP) to be the most appropriate one for translation initiation, with LRP identifying two important biological features for translation: the integrity of the Kozak sequence and the detrimental effect of premature stop codons.
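A minimal sketch of the point-mutation idea behind such a fidelity check, using a made-up linear toy model in place of a real DNN (the sequence, weights, and mutation protocol are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

BASES = "ACGT"
rng = np.random.default_rng(0)
# Toy "model": scores a length-10 DNA sequence as a weighted sum over
# one-hot encoded nucleotides. Weights are random, purely for illustration.
W = rng.normal(size=(10, 4))

def predict(seq):
    onehot = np.eye(4)[[BASES.index(b) for b in seq]]
    return float((W * onehot).sum())

def fidelity(seq, attribution, n_mut=3):
    """Point-mutate the most-attributed positions first and record how the
    prediction degrades; a faithful attribution degrades it fastest."""
    order = np.argsort(-np.asarray(attribution))
    base_score = predict(seq)
    s = list(seq)
    drops = []
    for pos in order[:n_mut]:
        # mutate to the nucleotide the model likes least at this position
        s[pos] = BASES[int(np.argmin(W[pos]))]
        drops.append(base_score - predict("".join(s)))
    return drops

seq = "ATGGCCAAGT"
# for a linear model, the exact attribution is the weight of the observed base
attr = [W[i, BASES.index(b)] for i, b in enumerate(seq)]
drops = fidelity(seq, attr)
print(drops)  # non-decreasing: each added mutation can only lower the score
```

Comparing such degradation curves across attribution methods gives a quantitative fidelity ranking instead of a qualitative inspection of saliency maps.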
We present an approach for safe trajectory planning in which a strategic task related to autonomous racing is learned sample-efficiently within a simulation environment. A high-level policy, represented as a neural network, outputs a reward specification that is used within the cost function of a parametric nonlinear model predictive controller (NMPC). By including constraints and vehicle kinematics in the underlying nonlinear program (NLP), we are able to guarantee safe and feasible trajectories with respect to the model used. Compared to classical reinforcement learning (RL), our approach restricts exploration to safe trajectories, starts from a good prior performance, and yields full trajectories that can be passed to a low-level tracking controller. We do not address this low-level controller in this work and assume perfect tracking of feasible trajectories. We show the superior performance of our algorithm on simulated racing tasks that include high-level decision making. The vehicle learns to efficiently overtake slower vehicles and to avoid being overtaken by blocking faster vehicles.
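A minimal sketch of the interface between a high-level policy and a parametric NMPC cost (the features, the stand-in linear policy, and the cost terms are illustrative assumptions; the actual NMPC solve and vehicle model are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical high-level policy: a tiny linear map from race-state features
# (e.g. gap to the car ahead, upcoming curvature) to positive cost weights.
# In the paper this policy is a neural network trained with RL; here it is
# a random stand-in just to show the data flow.
W_pol = rng.normal(scale=0.1, size=(3, 4))

def high_level_policy(features):
    return np.exp(W_pol @ np.asarray(features))  # strictly positive weights

def nmpc_stage_cost(lateral_err, steer, accel, w):
    # Parametric stage cost l(x, u; w). The NMPC (not shown) would minimize
    # the sum of these costs subject to vehicle kinematics and track
    # constraints, so every returned trajectory is feasible by construction.
    return w[0] * lateral_err**2 + w[1] * steer**2 + w[2] * accel**2

w = high_level_policy([0.5, -1.0, 0.2, 0.0])
cost = sum(nmpc_stage_cost(e, s, a, w)
           for e, s, a in [(0.1, 0.02, 1.0), (0.05, 0.01, 0.5)])
print(w, cost)
```

The key design point is that exploration happens only in the weight space `w`; since every candidate trajectory still comes out of the constrained NLP, safety does not depend on the quality of the learned policy.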
We study the fundamental task of outlier-robust mean estimation for heavy-tailed distributions in the presence of sparsity. Specifically, given a small number of corrupted samples from a high-dimensional heavy-tailed distribution whose mean $\mu$ is guaranteed to be sparse, the goal is to efficiently compute a hypothesis that accurately approximates $\mu$ with high probability. Prior work had obtained efficient algorithms for robust sparse mean estimation of light-tailed distributions. In this work, we give the first sample-efficient and polynomial-time robust sparse mean estimator for heavy-tailed distributions under mild moment assumptions. Our algorithm achieves the optimal asymptotic error using a number of samples scaling logarithmically with the ambient dimension. Importantly, the sample complexity of our method is optimal as a function of the failure probability $\tau$, having an additive $\log(1/\tau)$ dependence. Our algorithm leverages the stability-based approach from the algorithmic robust statistics literature, with crucial (and necessary) adaptations required in our setting. Our analysis may be of independent interest, involving the delicate design of a (non-spectral) decomposition for positive semi-definite matrices satisfying certain sparsity properties.
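To make the problem setting concrete, here is a naive baseline (NOT the paper's stability-based algorithm): a coordinate-wise trimmed mean handles heavy tails and a small fraction of outliers, and hard thresholding enforces sparsity. All parameters below are illustrative.

```python
import numpy as np

def trimmed_sparse_mean(samples, k, trim=0.1):
    """Naive robust sparse mean baseline: trim the extreme `trim` fraction
    in each coordinate, average the rest, then keep only the k largest
    coordinates in magnitude."""
    X = np.sort(np.asarray(samples), axis=0)
    n = X.shape[0]
    t = int(n * trim)
    est = X[t:n - t].mean(axis=0)            # trimmed mean per coordinate
    keep = np.argsort(-np.abs(est))[:k]      # top-k magnitudes survive
    out = np.zeros_like(est)
    out[keep] = est[keep]
    return out

rng = np.random.default_rng(2)
d, n, k = 50, 400, 3
mu = np.zeros(d); mu[:k] = [5.0, -4.0, 3.0]   # sparse true mean
X = rng.standard_t(df=3, size=(n, d)) + mu    # heavy-tailed samples
X[:20] += 100.0                               # 5% grossly corrupted rows
mu_hat = trimmed_sparse_mean(X, k)
print(np.linalg.norm(mu_hat - mu))
```

The paper's contribution is precisely that such coordinate-wise heuristics do not achieve the optimal error or the $\log(1/\tau)$ sample dependence; its stability-based estimator does, but is considerably more involved.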
Syntax is a latent hierarchical structure which underpins the robust and compositional nature of human language. An active line of inquiry is whether large pretrained language models (LLMs) are able to acquire syntax by training on text alone; understanding a model's syntactic capabilities is essential to understanding how it processes and makes use of language. In this paper, we propose a new method, SSUD, which allows for the induction of syntactic structures without supervision from gold-standard parses. Instead, we seek to define formalism-agnostic, model-intrinsic syntactic parses by using a property of syntactic relations: syntactic substitutability. We demonstrate both quantitative and qualitative gains on dependency parsing tasks using SSUD, and induce syntactic structures which we hope provide insight into LLMs and linguistic representations alike.
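To illustrate the linguistic notion of syntactic substitutability (SSUD itself scores substitutions with a pretrained LM; the attested-corpus test below is only a crude distributional stand-in on a made-up toy corpus):

```python
# Tiny toy corpus of tokenized, attested sentences.
corpus = [
    "the cat sleeps".split(), "the dog sleeps".split(),
    "the cat eats".split(), "the dog eats".split(),
]

def substitutable(w1, w2, corpus):
    """Treat w2 as substitutable for w1 if every attested sentence
    containing w1 is still attested after swapping in w2: words of the
    same syntactic category tend to pass, words of different categories
    tend to fail."""
    attested = {tuple(s) for s in corpus}
    contexts = [s for s in corpus if w1 in s]
    if not contexts:
        return False
    swapped = [tuple(w2 if w == w1 else w for w in s) for s in contexts]
    return all(t in attested for t in swapped)

nouns = substitutable("cat", "dog", corpus)       # noun for noun
mixed = substitutable("cat", "sleeps", corpus)    # verb for noun
print(nouns, mixed)
```

SSUD exploits the same property, but at the model level: it checks whether a candidate syntactic relation is preserved under substitution as judged by the LLM itself, which is what makes the induced parses model-intrinsic.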
Performance metrics for medical image segmentation models are used to measure the agreement between reference annotations and predictions. In the development of such models, a common set of metrics is used to keep results comparable. However, there is a mismatch between the distributions found in public datasets and the cases encountered in clinical practice. Many common metrics fail to capture the impact of this mismatch, especially for clinical datasets that contain uncertain, small, or empty reference annotations. Models may therefore not be validated for clinically meaningful agreement by such metrics. Dimensions for evaluating clinical value include independence from the volume of the reference annotation, consideration of the uncertainty of reference annotations, reward of volumetric and/or location agreement, and reward of correct classification of empty reference annotations. Unlike common public datasets, our in-house dataset is more representative of clinical practice: it contains uncertain, small, and empty reference annotations. We examine publicly available metrics on the predictions of a deep learning framework in order to identify which settings of common metrics provide meaningful results, and compare against a public benchmark dataset without uncertain, small, or empty reference annotations. The code will be published.
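A small sketch of why empty reference annotations break a common metric, and one possible remedy (the empty-aware convention shown is an assumption for illustration, not necessarily the choice adopted in this work):

```python
import numpy as np

def dice(pred, ref):
    """Standard Dice coefficient on boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2 * inter / denom if denom > 0 else float("nan")

# Empty reference annotation: plain Dice is undefined (0/0) even when
# the model correctly predicts an empty mask.
ref = np.zeros((4, 4), dtype=bool)
pred = np.zeros((4, 4), dtype=bool)
print(dice(pred, ref))  # nan -- uninformative

def dice_empty_aware(pred, ref):
    """Reward correct classification of an empty reference as a perfect
    score instead of leaving it undefined."""
    if ref.sum() == 0 and pred.sum() == 0:
        return 1.0
    return dice(pred, ref)

print(dice_empty_aware(pred, ref))
```

How such undefined cases are scored (or silently dropped) can dominate aggregate results on clinical datasets where empty references are common, which is exactly the mismatch the abstract describes.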
Predict+optimize is a recently proposed framework that combines machine learning and constrained optimization to tackle optimization problems containing parameters that are unknown at solving time. The goal is to predict the unknown parameters and use the estimates to compute an estimated optimal solution to the optimization problem. However, all prior works focus on the case where the unknown parameters appear only in the optimization objective and not in the constraints, for the simple reason that if the constraints are uncertain, the estimated optimal solution may not even be feasible under the true parameters. The contributions of this paper are twofold. First, we propose a novel and practically relevant framework for the predict+optimize setting with unknown parameters in both the objective and the constraints. We introduce the notion of a correction function, modeled as an additional penalty term in the loss function, to capture practical scenarios in which an estimated optimal solution can be modified into a feasible one after the true parameters are revealed, but at an extra cost. Second, we propose a corresponding algorithmic approach for our framework that handles all packing and covering linear programs. Our approach is inspired by the prior work of Mandi and Guns, albeit with crucial modifications and re-derivations for our different setting. Experiments demonstrate the superior empirical performance of our method over classical approaches.
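A toy sketch of the correction-function idea on a single fractional packing LP (the uniform-scaling correction and the linear per-unit penalty are illustrative assumptions; the paper's actual correction function and loss may differ):

```python
import numpy as np

def frac_knapsack(c, a, b):
    """Fractional packing LP: max c.x  s.t.  a.x <= b, 0 <= x <= 1 (greedy)."""
    x = np.zeros(len(c))
    cap = b
    for i in np.argsort(-np.asarray(c) / np.asarray(a)):
        take = min(1.0, cap / a[i])
        x[i] = take
        cap -= take * a[i]
        if cap <= 0:
            break
    return x

def post_hoc_regret(c, a_true, a_est, b, penalty=2.0):
    """Solve with the ESTIMATED constraint; once a_true is revealed, apply a
    simple correction function (uniform scaling back to feasibility) and
    charge the shrinkage at an extra per-unit `penalty`."""
    x_est = frac_knapsack(c, a_est, b)
    used = float(a_true @ x_est)
    scale = min(1.0, b / used) if used > 0 else 1.0
    x_corr = scale * x_est                      # guaranteed feasible
    value = float(c @ x_corr) - penalty * float(c @ (x_est - x_corr))
    best = float(c @ frac_knapsack(c, a_true, b))
    return best - value                         # 0 when a_est == a_true

c = np.array([4.0, 3.0, 2.0])
a_true = np.array([2.0, 1.0, 1.0])
regret_exact = post_hoc_regret(c, a_true, a_true, b=2.0)
regret_bad = post_hoc_regret(c, a_true, np.array([0.5, 0.5, 0.5]), b=2.0)
print(regret_exact, regret_bad)
```

Underestimating the constraint coefficients leads to an infeasible plan that must be corrected at a cost, so the loss penalizes such predictions more than a classical prediction-error loss would.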
The analysis of overhead imagery using computer vision is a problem that has received considerable attention in the academic literature. Most techniques in this space are highly specialized and require expensive manual annotation of large datasets. We address these issues by developing a more generic framework that incorporates advances in representation learning, allowing new image categories to be analyzed more flexibly with limited labeled data. First, a robust representation of an unlabeled aerial image dataset is learned based on the momentum contrast mechanism. It is subsequently specialized to different tasks by building accurate classifiers from as few as five labeled images. The successful low-level detection of urban infrastructure evolution from 60 million unlabeled images demonstrates great potential for advancing quantitative urban research.
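The momentum contrast mechanism hinges on a slowly moving "key" encoder that tracks the "query" encoder by an exponential moving average rather than by gradients. A minimal sketch, with weight matrices standing in for the real deep encoders (the step sizes and momentum value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
theta_q = rng.normal(size=(8, 8))   # query encoder weights (trained by SGD)
theta0 = theta_q.copy()             # remember the starting point
theta_k = theta_q.copy()            # key encoder weights (momentum-updated)
m = 0.999                           # momentum coefficient

for step in range(100):
    # stand-in for a gradient step on the query encoder
    theta_q += 0.01 * rng.normal(size=theta_q.shape)
    # momentum update: the key encoder drifts slowly toward the query encoder
    theta_k = m * theta_k + (1 - m) * theta_q

print(np.linalg.norm(theta_k - theta0), np.linalg.norm(theta_q - theta0))
```

The slowly evolving key encoder keeps the contrastive dictionary consistent across training steps, which is what makes large queues of negatives usable and, in turn, enables strong representations from unlabeled aerial imagery.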
Interpretive scholars generate knowledge from text corpora by manually sampling documents, applying codes, and refining and collating codes into categories until meaningful themes emerge. Given a large corpus, machine learning could help scale up this data sampling and analysis, but prior research shows that experts are generally concerned about algorithms potentially disrupting or driving interpretive scholarship. We take a human-centered design approach to addressing concerns around machine-assisted interpretive research and build Scholastic, which incorporates a machine-in-the-loop clustering algorithm to scaffold interpretive text analysis. As a scholar applies codes to documents and refines them, the resulting coding schema serves as structured metadata that constrains the hierarchical document and word clusters inferred from the corpus. Interactive visualizations of these clusters help scholars strategically sample documents further toward insights. Scholastic demonstrates how human-centered algorithm design and visualizations that employ familiar metaphors can support inductive and interpretive research methodologies through interactive topic modeling and document clustering.
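A generic sketch of how applied codes can constrain clustering (seeded k-means on toy document embeddings is a stand-in here; Scholastic's actual algorithm and its hierarchical clusters differ):

```python
import numpy as np

def seeded_kmeans(X, seeds, n_iter=10):
    """Machine-in-the-loop clustering sketch: documents the scholar has
    already coded are pinned to their code's cluster; uncoded documents
    are assigned freely to the nearest cluster center."""
    k = max(seeds.values()) + 1
    labels = np.zeros(len(X), dtype=int)
    for i, c in seeds.items():
        labels[i] = c
    for _ in range(n_iter):
        centers = np.stack([X[labels == c].mean(axis=0) for c in range(k)])
        for i in range(len(X)):
            if i in seeds:
                continue                      # coded documents stay pinned
            labels[i] = np.argmin(np.linalg.norm(centers - X[i], axis=1))
    return labels

# Tiny document embeddings: two obvious groups plus one ambiguous point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [2.4, 2.4]])
seeds = {0: 0, 2: 1}   # scholar coded doc 0 as theme 0 and doc 2 as theme 1
labels = seeded_kmeans(X, seeds)
print(labels)
```

The coding schema acts as structured metadata: it anchors the clusters to the scholar's emerging categories instead of letting the algorithm drive the interpretation.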
The COVID-19 pandemic has caused devastating economic and social disruption and strained the resources of healthcare institutions worldwide. This has led to nationwide calls for models that predict hospitalization and severe illness in COVID-19 patients to inform the distribution of limited healthcare resources. We respond to one of these calls, which targets the pediatric population. To address this challenge, we study two prediction tasks for the pediatric population using electronic health records: 1) predicting which children are more likely to be hospitalized, and 2) among hospitalized children, which individuals are more likely to develop severe symptoms. We respond to the national Pediatric COVID-19 Data Challenge with a novel machine learning model, MedML. MedML extracts the most predictive features based on medical knowledge and propensity scores from over 6 million medical concepts, and incorporates the inter-feature relationships between heterogeneous medical features via a graph neural network (GNN). We evaluate MedML on 143,605 patients for the hospitalization prediction task and 11,465 patients for the severity prediction task, using data from the National Cohort Collaborative (N3C) dataset. We also report detailed group-level and individual-level feature importance analyses to assess the model's interpretability. Compared to the best baseline machine learning models, MedML achieves up to a 7% higher AUROC and up to a 14% higher AUPRC, and performs well across all nine national geographic regions and over all three-month spans since the start of the pandemic. Our cross-disciplinary research team has developed a method of incorporating clinical domain knowledge as the framework for a new type of machine learning model that is more predictive and explainable than current state-of-the-art data-driven feature selection methods.
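A crude sketch of outcome-driven feature screening over binary medical-concept indicators (a simplified stand-in for MedML's knowledge- and propensity-based feature extraction; the synthetic data and scoring rule are illustrative assumptions, and MedML additionally layers a GNN over the selected features):

```python
import numpy as np

def screen_features(X, y, top_k):
    """Rank each medical-concept indicator by the absolute difference in
    outcome rate between patients with and without the concept, and keep
    the top_k most discriminative concepts."""
    rate_with = (X * y[:, None]).sum(0) / np.maximum(X.sum(0), 1)
    rate_without = ((1 - X) * y[:, None]).sum(0) / np.maximum((1 - X).sum(0), 1)
    score = np.abs(rate_with - rate_without)
    return np.argsort(-score)[:top_k]

rng = np.random.default_rng(4)
n, d = 2000, 30
X = rng.integers(0, 2, size=(n, d))          # binary concept indicators
# synthetic outcome driven by concepts 0 and 1 only
logits = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)
top = screen_features(X, y, top_k=2)
print(sorted(top.tolist()))
```

Reducing millions of candidate concepts to a compact, clinically grounded feature set before the GNN stage is what makes both the prediction and the feature-importance analyses tractable.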