With the growing availability of data across scientific domains, generative models hold enormous potential to accelerate scientific discovery at every step of the scientific method. Perhaps their most valuable application lies in what has traditionally been the slowest and most challenging step: proposing a hypothesis. Powerful representations are now being learned from large volumes of data to generate novel hypotheses, with major impact on scientific-discovery applications ranging from material design to drug discovery. GT4SD (https://github.com/gt4sd/gt4sd-core) is an extensible open-source library that enables scientists, developers, and researchers to train and use state-of-the-art generative models for hypothesis generation in scientific discovery. GT4SD supports a variety of uses of generative models across material science and drug discovery, including molecule discovery and design based on properties such as target proteins, omic profiles, scaffold distances, and binding energies.
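For reference, the library exposes trained generators through a uniform algorithm interface; the snippet below follows the usage pattern shown in the repository's README, though class names and availability may differ across versions, so treat it as a sketch rather than the definitive current API.

```python
from gt4sd.algorithms.conditional_generation.paccmann_rl.core import (
    PaccMannRL,
    PaccMannRLProteinBasedGenerator,
)

# Hypothetical target: a short illustrative protein sequence, not from the paper.
target = "MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSF"

# Instantiate one of the library's conditional generators and sample candidates.
configuration = PaccMannRLProteinBasedGenerator()
algorithm = PaccMannRL(configuration=configuration, target=target)
molecules = list(algorithm.sample(10))
print(molecules)
```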
Despite significant progress of generative models in the natural sciences, their controllability remains challenging. One fundamentally missing aspect of molecular or protein generative models is an inductive bias that can reflect continuous properties of interest. To that end, we propose the Regression Transformer (RT), a novel method that abstracts regression as a conditional sequence modeling problem. This introduces a new paradigm of multitask language models which seamlessly bridge sequence regression and conditional sequence generation. We thoroughly demonstrate that, despite using a nominal-scale training objective, the RT matches or surpasses the performance of conventional regression models in property prediction tasks of small molecules, proteins and chemical reactions. Critically, priming the same model with continuous properties yields a highly competitive conditional generative model that outperforms specialized approaches in a substructure-constrained, property-driven molecule generation benchmark. Our dichotomous approach is facilitated by a novel, alternating training scheme that enables the model to decorate seed sequences by desired properties, e.g., to optimize reaction yield. In sum, the RT is the first report of a multitask model that concurrently excels at predictive and generative tasks in biochemistry. This finds particular application in property-driven, local exploration of the chemical or protein space and could pave the road toward foundation models in material design. The code to reproduce all experiments of the paper is available at: https://github.com/IBM/regression-transformer
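As a rough illustration of abstracting regression as conditional sequence modeling, the toy sketch below shows one plausible tokenization of a property-prefixed sequence and the alternating masking between the regression and generation objectives; the vocabulary and masking details are our assumptions, not the paper's exact scheme.

```python
import random

def tokenize(prop_name: str, value: float, seq: str):
    """Toy RT-style input: a property token, digit-level numeric tokens,
    a separator, then the sequence tokens (illustrative vocabulary)."""
    digits = list(f"{value:.2f}")          # e.g. ['0', '.', '8', '5']
    return [f"<{prop_name}>"] + digits + ["|"] + list(seq)

def alternating_mask(tokens, predict_property: bool):
    """Alternate between two objectives: mask the numeric tokens to train
    regression, or mask sequence tokens to train conditional generation."""
    sep = tokens.index("|")
    masked = tokens.copy()
    if predict_property:
        for i in range(1, sep):            # hide the property value
            masked[i] = "[MASK]"
    else:
        for i in range(sep + 1, len(masked)):
            if random.random() < 0.15:     # hide part of the sequence
                masked[i] = "[MASK]"
    return masked

toks = tokenize("qed", 0.85, "CCO")
print(alternating_mask(toks, predict_property=True))
print(alternating_mask(toks, predict_property=False))
```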
Data-centric artificial intelligence (data-centric AI) represents an emerging paradigm emphasizing that the systematic design and engineering of data is essential for building effective and efficient AI-based systems. The objective of this article is to introduce practitioners and researchers from the field of Information Systems (IS) to data-centric AI. We define relevant terms, provide key characteristics to contrast the data-centric paradigm to the model-centric one, and introduce a framework for data-centric AI. We distinguish data-centric AI from related concepts and discuss its longer-term implications for the IS community.
For improving short-length codes, we demonstrate that classic decoders can also be used with real-valued, neural encoders, i.e., deep-learning-based codeword sequence generators. Here, the classical decoder can be a valuable tool for gaining insights into these neural codes and shedding light on weaknesses. Specifically, the turbo-autoencoder is a recently developed channel coding scheme in which both encoder and decoder are replaced by neural networks. We first show that the limited receptive field of convolutional neural network (CNN)-based codes enables the application of the BCJR algorithm to optimally decode them with feasible computational complexity. These maximum a posteriori (MAP) component decoders are then used to form classical (iterative) turbo decoders for parallel or serially concatenated CNN encoders, offering close-to-maximum-likelihood (ML) decoding of the learned codes. To the best of our knowledge, this is the first time that a classical decoding algorithm has been applied to a non-trivial, real-valued neural code. Furthermore, as the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
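For intuition, the sketch below is a minimal log-domain forward-backward (BCJR) recursion over a generic trellis; the state space, the bit labeling of transitions, and the construction of the branch metrics are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def bcjr_llr(gamma):
    """Minimal log-domain BCJR (forward-backward) over a trellis.

    gamma[t, s, s'] : log-likelihood of moving from state s to s' at step t,
    already combining channel evidence and a priori bit probabilities.
    For illustration we assume transitions into states s' < S/2 correspond
    to bit 0 and the rest to bit 1, as with a shift-register state."""
    T, S, _ = gamma.shape
    alpha = np.full((T + 1, S), -np.inf); alpha[0, 0] = 0.0
    beta = np.full((T + 1, S), -np.inf); beta[T, :] = 0.0
    for t in range(T):                       # forward recursion
        alpha[t + 1] = np.logaddexp.reduce(alpha[t][:, None] + gamma[t], axis=0)
    for t in range(T - 1, -1, -1):           # backward recursion
        beta[t] = np.logaddexp.reduce(gamma[t] + beta[t + 1][None, :], axis=1)
    llr, half = np.empty(T), S // 2
    for t in range(T):                       # per-bit a posteriori LLRs
        m = alpha[t][:, None] + gamma[t] + beta[t + 1][None, :]
        llr[t] = (np.logaddexp.reduce(m[:, :half].ravel())
                  - np.logaddexp.reduce(m[:, half:].ravel()))
    return llr

T, S = 4, 4
gamma = np.log(np.random.default_rng(0).random((T, S, S)))
print(bcjr_llr(gamma))
```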
Time series, sequences of observations in chronological order, are essential data in statistical research, with many forecasting applications. Although many recent Transformer-based models have shown noticeable performance, long multi-horizon time series forecasting remains a very challenging task. Going beyond transformers in sequence translation and transduction research, we observe the effects of down-and-up sampling that can nudge temporal saliency patterns to emerge in time sequences. Motivated by this observation, in this paper we propose a novel architecture, Temporal Saliency Detection (TSD), on top of the attention mechanism, and apply it to multi-horizon time series prediction. We renovate the traditional encoder-decoder architecture by adding a series of deep convolutional blocks that work in tandem with multi-head self-attention. The proposed TSD approach facilitates the multiresolution of saliency patterns upon condensed multi-heads, thus progressively enhancing complex time series forecasting. Experimental results show that our proposed approach significantly outperforms existing state-of-the-art methods across multiple standard benchmark datasets in many far-horizon forecasting settings. Overall, TSD achieves 31% and 46% relative improvement over current state-of-the-art models in multivariate and univariate time series forecasting scenarios on standard benchmarks. The Git repository is available at https://github.com/duongtrung/time-series-temporal-saliency-patterns.
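A minimal sketch of the down-and-up-sampling idea around self-attention follows; the layer sizes, kernel choices, and residual wiring are our guesses, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TSDBlock(nn.Module):
    """Illustrative block: downsample with a convolution, attend at the
    coarser temporal resolution so saliency patterns can emerge, then
    upsample back to the input rate (hyperparameters are assumptions)."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.down = nn.Conv1d(d_model, d_model, kernel_size=4, stride=2, padding=1)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.up = nn.ConvTranspose1d(d_model, d_model, kernel_size=4, stride=2, padding=1)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                                  # x: (batch, time, d_model)
        z = self.down(x.transpose(1, 2)).transpose(1, 2)   # coarser time axis
        z, _ = self.attn(z, z, z)                          # attend at low resolution
        z = self.up(z.transpose(1, 2)).transpose(1, 2)     # back to input rate
        return self.norm(x + z)                            # residual connection

x = torch.randn(8, 96, 64)                                 # (batch, horizon, features)
print(TSDBlock()(x).shape)                                 # torch.Size([8, 96, 64])
```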
A new method for solving the wave equation is presented, called the learned Born series (LBS), which is derived from a convergent Born series but whose components are found through training. The LBS is shown to be significantly more accurate than the convergent Born series for the same number of iterations in the presence of high-contrast scatterers, while maintaining a comparable computational complexity. The LBS is able to generate a reasonable prediction of the global pressure field with a small number of iterations, and the errors decrease with the number of learned iterations.
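As a loose illustration, assuming a classical update of the form u &lt;- u + gamma * (G(V u + s) - u) with learned stand-ins for the Green's operator G and preconditioner gamma, a single learned iteration might look like the following hypothetical sketch.

```python
import torch
import torch.nn as nn

class LearnedBornIteration(nn.Module):
    """One learned-Born-series step: the propagator and preconditioner of
    the classical convergent Born series are replaced by small trainable
    convolutions (a rough illustration, not the paper's architecture)."""
    def __init__(self, channels=2):
        super().__init__()
        self.G = nn.Conv2d(channels, channels, 5, padding=2)   # learned propagator
        self.gamma = nn.Conv2d(channels, channels, 1)          # learned preconditioner

    def forward(self, u, v, s):
        # v: scattering potential (contrast), s: source term
        return u + self.gamma(self.G(v * u + s) - u)

u = torch.zeros(1, 2, 64, 64)         # pressure field (real/imag channels)
v = torch.randn(1, 2, 64, 64) * 0.1   # heterogeneous contrast
s = torch.zeros(1, 2, 64, 64); s[..., 32, 32] = 1.0   # point source
for layer in [LearnedBornIteration() for _ in range(4)]:
    u = layer(u, v, s)                # each learned iteration refines the field
print(u.shape)
```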
Machine learning (ML) has contributed significantly to the development of bioprocess engineering, but its application remains limited, holding back the enormous potential of bioprocess automation. ML for model-building automation can be viewed as a way of introducing another level of abstraction, letting human experts focus on the most cognitive tasks of bioprocess development. First, probabilistic programming is used for the automated construction of predictive models. Second, machine learning automatically evaluates alternative decisions by planning experiments to test hypotheses and by conducting investigations to gather informative data for model selection based on the uncertainty of model predictions. This review provides a comprehensive overview of ML-based automation in bioprocess development. On the one hand, the biotechnology and bioengineering communities should be aware of the limitations of existing ML solutions for applications in biotechnology and biopharma. On the other hand, the missing links must be identified to make ML and artificial intelligence (AI) solutions easily implementable in valuable solutions for the bio-community. We summarize ML implementations across several important bioprocess systems and raise two crucial challenges that remain bottlenecks for bioprocess automation and for reducing uncertainty in biotechnology development. There is no one-fits-all procedure; rather, this review should help identify the potential for automation combining the biotechnology and ML domains.
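To make the second theme concrete, here is a minimal active-learning sketch: a probabilistic model is fit to past runs, and the next experiment is chosen where predictive uncertainty is largest. The variable names and setup are ours, not drawn from any specific bioprocess study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy data: a handful of completed runs mapping feed rate to product titer.
rng = np.random.default_rng(0)
feed_rates = rng.uniform(0.1, 1.0, size=(5, 1))            # experiments run so far
titers = np.sin(3 * feed_rates[:, 0]) + 0.05 * rng.standard_normal(5)

# Fit a probabilistic model and query candidate experiments.
model = GaussianProcessRegressor().fit(feed_rates, titers)
candidates = np.linspace(0.1, 1.0, 100).reshape(-1, 1)     # possible next runs
mean, std = model.predict(candidates, return_std=True)

# Plan the next experiment where the model is least certain.
next_experiment = candidates[np.argmax(std)]
print(f"next feed rate to test: {next_experiment[0]:.2f}")
```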
Finding optimal message quantization is a key requirement for low-complexity belief propagation (BP) decoding. To this end, we propose a floating-point surrogate model that imitates quantization effects as additions of uniform noise whose amplitude is a trainable variable. We verify that the surrogate model closely matches the behavior of a fixed-point implementation, and propose a hand-crafted loss function to realize a trade-off between complexity and error-rate performance. A deep-learning-based method is then applied to optimize the message bitwidths. Moreover, we show that parameter sharing both ensures implementation-friendly solutions and leads to faster training convergence than independent parameters. We provide simulation results for 5G low-density parity-check (LDPC) codes and report an error-rate performance within 0.2 dB of floating-point decoding at an average message quantization bitwidth below 3.1 bits. Furthermore, we show that the learned bitwidths also generalize to other code rates and channels.
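A minimal sketch of the surrogate idea, assuming a uniform quantizer with a trainable step size (the paper's exact parameterization may differ):

```python
import torch
import torch.nn as nn

class QuantizationSurrogate(nn.Module):
    """Floating-point surrogate for message quantization: model quantizing
    to step size delta as additive uniform noise in [-delta/2, delta/2],
    with log2(delta) a trainable parameter (a sketch of the idea)."""
    def __init__(self):
        super().__init__()
        self.log2_delta = nn.Parameter(torch.tensor(-2.0))  # trainable step size

    def forward(self, messages):
        delta = 2.0 ** self.log2_delta
        if self.training:  # differentiable noise injection during training
            noise = (torch.rand_like(messages) - 0.5) * delta
            return messages + noise
        return torch.round(messages / delta) * delta         # hard quantizer at test time

q = QuantizationSurrogate()
msgs = torch.randn(16)
q.train(); print(q(msgs)[:3])   # noisy surrogate; gradients flow to log2_delta
q.eval();  print(q(msgs)[:3])   # true uniform quantization
```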
In recent proposals of quantum circuit models for generative tasks, the discussion of their performance has been limited to their ability to reproduce a known target distribution. For example, expressive model families such as quantum circuit Born machines (QCBMs) have been evaluated almost exclusively on their ability to learn a given target distribution with high accuracy. While this aspect may be ideal for some tasks, it limits the scope of evaluation of generative models to their ability to memorize data rather than to generalize. As a result, little is understood about a model's generalization performance and about the relation between such a capability and resource requirements, e.g., circuit depth and the amount of training data. In this work, we leverage a recently proposed generalization evaluation framework to begin addressing this knowledge gap. We first investigate the QCBM's learning process on a cardinality-constrained distribution and see an increase in generalization performance as the circuit depth is increased. In the 12-qubit example presented here, we observe that with only 30% of the valid patterns as the training set, the QCBM exhibits the best generalization performance in generating unseen and valid patterns. Finally, we assess the QCBM's ability not only to generalize to valid features, but also to generate high-quality bitstrings distributed according to an adequately biased distribution. We see that the QCBM is able to effectively learn the bias and to generate unseen samples of higher quality than those in the training set. To the best of our knowledge, this is the first work in the literature that presents the QCBM's generalization performance as an integral evaluation metric for quantum generative models, and that demonstrates the QCBM's ability to generalize to high-quality, desired novel samples.
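For concreteness, the sketch below computes simplified versions of such generalization metrics from model samples; the definitions are our own simplifications, and the cited framework may define them differently.

```python
# Given model samples, a training set, and a validity predicate, measure how
# often the model produces valid patterns it has never seen during training.
def generalization_metrics(samples, train_set, is_valid):
    unseen = [s for s in samples if s not in train_set]
    unseen_valid = [s for s in unseen if is_valid(s)]
    return {
        "fidelity": len(unseen_valid) / max(len(unseen), 1),   # valid among unseen
        "exploration": len(set(unseen_valid)) / len(samples),  # novel valid patterns
    }

# Cardinality-constrained example: valid bitstrings contain exactly two 1s.
is_valid = lambda s: s.count("1") == 2
train = {"1100", "1010", "0011"}
samples = ["1100", "0101", "0110", "1110", "0101"]
print(generalization_metrics(samples, train, is_valid))
```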
Neural language models (NLMs) have made tremendous progress over the past few years, achieving impressive performance on various language tasks. Capitalizing on this, studies in neuroscience have started to use NLMs to study neural activity in the human brain during language processing. However, many questions remain unanswered regarding which factors determine a neural language model's ability to capture brain activity (i.e., its "brain score"). Here, we take a first step in this direction and examine the impact of test loss, training corpus, and model architecture (comparing GloVe, LSTM, GPT-2, and BERT) on the prediction of functional magnetic resonance imaging timecourses of participants. We find that (1) the untrained version of each model already explains a significant amount of signal in the brain by capturing the similarity of brain responses to the same words, with untrained LSTMs outperforming transformer-based models while being less impacted by context effects; (2) training NLP models improves brain scores in the same brain regions irrespective of the model's architecture; (3) perplexity (test loss) is not a good predictor of brain score; and (4) training data have a strong influence on the outcome, and, in particular, off-the-shelf models may lack the statistical power to detect brain activations. Overall, we outline the impact of model-training choices and suggest good practices for future studies aiming to explain the human language system using neural language models.
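As an illustration of a typical brain-score pipeline consistent with this setup, the sketch below fits a linear encoding model from language-model embeddings to fMRI timecourses and scores held-out correlation per voxel; the shapes, estimator, and scoring choice are illustrative assumptions, not necessarily the paper's.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: one embedding vector per fMRI sample, 10 voxels.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((500, 768))
fmri = embeddings @ rng.standard_normal((768, 10))
fmri += rng.standard_normal(fmri.shape)          # add measurement noise

# Fit a cross-validated ridge encoding model and score held-out predictions.
X_tr, X_te, y_tr, y_te = train_test_split(embeddings, fmri, random_state=0)
pred = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr).predict(X_te)
brain_scores = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(10)]
print(f"mean brain score: {np.mean(brain_scores):.3f}")
```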