Spatio-temporal machine learning is critically needed for a variety of societal applications, such as agricultural monitoring, hydrological forecasting, and traffic management. These applications greatly rely on regional features that characterize spatial and temporal differences. However, spatio-temporal data are often complex and pose several unique challenges for machine learning models: 1) multiple models are needed to handle region-based data patterns that exhibit significant spatial heterogeneity across locations; 2) local models trained on region-specific data have limited ability to adapt to other regions with large diversity and anomalies; 3) entangled spatial and temporal variations increase data complexity, requiring more robust and adaptive models; 4) limited spatio-temporal data in real scenarios (e.g., crop yield data are collected only once a year) makes the problem intrinsically challenging. To bridge these gaps, we propose task-adaptive formulations and a model-agnostic meta-learning framework that ensembles regionally heterogeneous data into location-sensitive meta-tasks. We conduct task adaptation following an easy-to-hard task hierarchy in which different meta-models are adapted to tasks of different difficulty levels. One major advantage of the proposed method is that it improves model adaptation to a large number of heterogeneous tasks. It also enhances model generalization by automatically adapting the meta-model of the corresponding difficulty level to any new task. We demonstrate the superiority of our proposed framework over a diverse set of baselines and state-of-the-art meta-learning frameworks. Our extensive experiments on real crop yield data show the effectiveness of the proposed method in handling spatially heterogeneous tasks in real societal applications.
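Below is a minimal sketch of the difficulty-stratified, MAML-style training the abstract describes (illustrative only: the MSE loss, the three fixed difficulty levels, and the grouping of region tasks are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn as nn
from torch.func import functional_call

class RegionModel(nn.Module):
    """Small regressor standing in for a location-sensitive base learner."""
    def __init__(self, in_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)

def adapt(model, params, xs, ys, inner_lr=0.01):
    """One MAML inner step: adapt shared weights to one region's support set."""
    loss = nn.functional.mse_loss(functional_call(model, params, (xs,)), ys)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

# One meta-model per difficulty level, following the easy-to-hard hierarchy.
levels = {lvl: RegionModel() for lvl in ("easy", "medium", "hard")}
opts = {lvl: torch.optim.Adam(m.parameters(), lr=1e-3) for lvl, m in levels.items()}

def meta_step(level, tasks):
    """Outer loop over a batch of region tasks assigned to one difficulty level."""
    model, opt = levels[level], opts[level]
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for xs, ys, xq, yq in tasks:  # (support, query) split for each region task
        fast = adapt(model, params, xs, ys)
        meta_loss = meta_loss + nn.functional.mse_loss(
            functional_call(model, fast, (xq,)), yq)
    opt.zero_grad()
    meta_loss.backward()
    opt.step()
```

At test time, a new region would be routed to the meta-model of its estimated difficulty level and adapted with a few gradient steps on its support data.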
Knowledge-intensive language tasks (KILT) usually require a large body of information to provide correct answers. A popular paradigm for solving this problem is to combine a search system with a machine reader, where the former retrieves supporting evidence and the latter examines it to produce answers. Recently, the reader component has witnessed significant advances with the help of large-scale pre-trained generative models. Meanwhile, most existing solutions for the search component rely on the traditional "index-retrieve-then-rank" pipeline, which suffers from a large memory footprint and difficulty in end-to-end optimization. Inspired by recent efforts to build model-based IR systems, we propose to replace the traditional multi-step search pipeline with a novel single-step generative model, which can dramatically simplify the search process and be optimized in an end-to-end manner. We show that a strong generative retrieval model can be learned with a set of adequately designed pre-training tasks and adopted to improve a variety of downstream KILT tasks with further fine-tuning. We name the pre-trained generative retrieval model CorpusBrain, as all information about the corpus is encoded in its parameters without the need to construct an additional index. Empirical results show that CorpusBrain can significantly outperform strong baselines for the retrieval task on the KILT benchmark and establish new state-of-the-art downstream performance. We also show that CorpusBrain works well under zero- and low-resource settings.
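As an illustration of single-step generative retrieval (not CorpusBrain's actual code), the sketch below uses an off-the-shelf BART model to map a query directly to document-identifier strings, so no inverted index is built; the model choice and identifier scheme are assumptions.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def retrieve(query, num_docs=5):
    """Generate candidate document identifiers for a query via beam search."""
    inputs = tok(query, return_tensors="pt")
    out = model.generate(**inputs, num_beams=num_docs,
                         num_return_sequences=num_docs, max_length=32)
    return [tok.decode(ids, skip_special_tokens=True) for ids in out]

# Pre-training and fine-tuning then reduce to ordinary seq2seq training on
# (query, document-identifier) pairs, optimizable end to end.
print(retrieve("who wrote the origin of species"))
```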
Accurate protein structure prediction can significantly accelerate the development of the life sciences. The accuracy of AlphaFold2, a frontier end-to-end structure prediction system, is already close to that of experimental determination techniques. Due to its complex model architecture and large memory consumption, implementing the training and inference of AlphaFold2 from scratch requires substantial computational resources and time. The cost of running the original AlphaFold2 is prohibitive for most individuals and institutions, so reducing this cost could accelerate the development of the life sciences. We implement AlphaFold2 using PaddlePaddle, namely HelixFold, to improve training and inference speed and reduce memory consumption. Operator fusion, tensor fusion, and hybrid parallelism improve performance, while memory is optimized with recompute, BFloat16, and in-place memory reads/writes. Compared with the original AlphaFold2 (implemented with JAX) and OpenFold (implemented with PyTorch), HelixFold needs only 7.5 days to complete full end-to-end training, and only 5.3 days when using hybrid parallelism, while both AlphaFold2 and OpenFold take about 11 days; HelixFold thus saves 1x training time. We verified that HelixFold's accuracy is on par with AlphaFold2 on the CASP14 and CAMEO datasets. HelixFold's code is freely available at https://github.com/PaddlePaddle/PaddleHelix/tree/dev/protein_folding/helixfold, and we also provide a stable web service at https://paddlehelix.baidu.com/app/drug/protein/forecast.
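The two memory techniques named above, recompute and BFloat16, are standard; the snippet below shows the analogous pattern in PyTorch (HelixFold itself implements them in PaddlePaddle, so this is an illustration, not its code):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
x = torch.randn(4, 256, requires_grad=True)

# Recompute (activation checkpointing): activations of `block` are not kept
# during the forward pass and are recomputed in backward, trading extra
# compute for a smaller memory footprint.
y = checkpoint(block, x, use_reentrant=False)

# BFloat16 autocasting: run eligible ops in bf16 to shrink activation memory.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    z = block(x)
```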
Reading comprehension is a complex cognitive process involving many human brain activities. Plenty of works have studied reading patterns and attention allocation during reading comprehension in information-retrieval-related scenarios. However, little is known about what happens in the human brain during reading comprehension and how these cognitive activities affect the information retrieval process. Additionally, with advances in brain imaging techniques such as electroencephalography (EEG), brain signals can be collected in almost real time, raising the question of whether they can be used as feedback to improve information acquisition performance. In this paper, we carefully design a lab-based user study to investigate brain activities during reading comprehension. Our findings show that neural responses vary with different types of reading content, i.e., content that can satisfy users' information needs and content that cannot. We suggest that various cognitive activities at a micro time scale during reading comprehension, e.g., cognitive load, semantic-thematic understanding, and inferential processing, underpin these neural responses. From these findings, we draw several insights for information retrieval tasks, such as ranking model construction and interface design. Moreover, we suggest the possibility of detecting reading comprehension status for proactive real-world systems. To this end, we propose a Unified framework for EEG-based Reading Comprehension Modeling (UERCM). To verify its effectiveness, we conduct extensive experiments based on EEG features for two reading comprehension tasks: answer sentence classification and answer extraction. Results show that it is feasible to improve the performance of both tasks with brain signals.
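For concreteness, answer sentence classification from EEG features can be framed as ordinary supervised classification; the toy sketch below is purely illustrative (random data, a simple linear model) and does not reflect UERCM's actual architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # stand-in for per-sentence EEG band-power features
y = rng.integers(0, 2, size=200)  # 1 = the sentence contains the answer

clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```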
Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network from several modules such as CNNs, LSTMs, and attention. Recent methods combine the Transformer with these modules for better performance. However, training a network composed of mixed modules requires tedious optimization skills, making these methods inconvenient to use in practice. In this paper, we propose to design pure Transformer-based networks for deep RL, aiming to provide off-the-shelf backbones for both the online and offline settings. Specifically, the Transformer in Transformer (TIT) backbone is proposed, which cascades two Transformers in a very natural way: the inner one processes a single observation, while the outer one is responsible for processing the observation history; combining both is expected to extract spatio-temporal representations for good decision-making. Experiments show that TIT consistently achieves satisfactory performance across different settings.
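A minimal sketch of the cascaded design, assuming pre-tokenized observations and mean pooling (both assumptions; the authors' exact tokenization and pooling may differ):

```python
import torch
import torch.nn as nn

class TIT(nn.Module):
    """Inner Transformer per observation, outer Transformer over the history."""
    def __init__(self, d_model=64, n_actions=4):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.inner = nn.TransformerEncoder(layer(), num_layers=2)  # one observation
        self.outer = nn.TransformerEncoder(layer(), num_layers=2)  # observation history
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, obs_seq):
        # obs_seq: (batch, history, tokens_per_obs, d_model), already tokenized
        b, t, n, d = obs_seq.shape
        per_obs = self.inner(obs_seq.reshape(b * t, n, d)).mean(dim=1)  # pool tokens
        hist = self.outer(per_obs.reshape(b, t, d))
        return self.head(hist[:, -1])  # act on the latest step's representation

policy = TIT()
q_values = policy(torch.randn(2, 8, 16, 64))  # -> shape (2, 4)
```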
Making sense of multiple modalities can yield a more comprehensive description of real-world phenomena. However, learning the co-representation of diverse modalities remains a long-standing endeavor in emerging machine learning applications and research. Previous generative approaches for multimodal input approximate the joint-modality posterior with uni-modality posteriors, combined as a product-of-experts (PoE) or mixture-of-experts (MoE). We argue that these approximations lead to a defective bound for the optimization process and a loss of semantic connection among modalities. This paper presents a novel variational method on sets, the Set Multimodal VAE (SMVAE), for learning a multimodal latent space while handling the missing-modality problem. By modeling the joint-modality posterior distribution directly, the proposed SMVAE learns to exchange information between multiple modalities and compensate for the drawbacks caused by factorization. On public datasets from various domains, the experimental results demonstrate that the proposed method is applicable to order-agnostic cross-modal generation while achieving outstanding performance compared to state-of-the-art multimodal methods. The source code for our method is available online at https://anonymous.4open.science/r/SMVAE-9B3C/.
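A minimal sketch of a set-based joint encoder (an interpretation, not the authors' code): per-modality features are pooled with a permutation-invariant sum into one joint posterior, instead of multiplying or mixing uni-modal posteriors as PoE/MoE do; a missing modality is simply absent from the input set.

```python
import torch
import torch.nn as nn

class SetJointEncoder(nn.Module):
    """Permutation-invariant encoder for q(z | observed modalities)."""
    def __init__(self, feat_dim=32, z_dim=16):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)

    def forward(self, modality_feats):
        # modality_feats: list of (batch, feat_dim) tensors, one per present modality
        h = torch.stack([self.embed(f) for f in modality_feats]).sum(dim=0)
        return self.mu(h), self.logvar(h)  # joint-posterior parameters

enc = SetJointEncoder()
mu, logvar = enc([torch.randn(4, 32), torch.randn(4, 32)])  # two modalities present
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()        # reparameterization trick
```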
Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed two shortcomings: illogical synthetic SQL queries from independent column sampling and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models have significant accuracy boosts on popular benchmarks, including new state-of-the-art performance on Spider.
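One plausible reading of schema-distance-weighted column sampling (an illustrative sketch, not the paper's exact procedure): columns near an anchor column in the schema's foreign-key graph are sampled more often, discouraging the arbitrary joins that plagued earlier synthetic data.

```python
import math
import random

def sample_columns(anchor, columns, schema_dist, k=2, temperature=1.0):
    """Sample k columns, weighting each by exp(-schema distance / temperature)."""
    others = [c for c in columns if c != anchor]
    weights = [math.exp(-schema_dist[anchor][c] / temperature) for c in others]
    return random.choices(others, weights=weights, k=k)

# Toy schema distances: 1 = same table or direct foreign key, 3 = unrelated table.
dist = {"orders.id": {"orders.total": 1, "users.name": 2, "logs.ip": 3}}
cols = ["orders.id", "orders.total", "users.name", "logs.ip"]
print(sample_columns("orders.id", cols, dist))  # usually picks nearby columns
```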
We present ResNeRF, a novel geometry-guided two-stage framework for indoor-scene novel view synthesis. Since good geometry greatly boosts the performance of novel view synthesis, and to avoid the geometry-ambiguity issue, we propose to characterize the density distribution of the scene with a base density estimated from the scene geometry and a residual density parameterized by that geometry. In the first stage, we focus on geometry reconstruction based on an SDF representation, which yields a good geometric surface of the scene and a sharp density. In the second stage, the residual density is learned on top of the SDF from the first stage to encode more details about the appearance. In this way, our method can better learn the density distribution with the geometry prior for high-fidelity novel view synthesis while preserving the 3D structure. Experiments on large-scale indoor scenes with many less-observed and textureless areas show that, with the good 3D surface, our method achieves state-of-the-art performance for novel view synthesis.
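A minimal sketch of the two-stage density (all specifics assumed: the SDF-to-density mapping below is a common VolSDF-style choice, and forcing the residual to be nonnegative is our simplification, not necessarily the paper's):

```python
import torch
import torch.nn as nn

class ResidualDensity(nn.Module):
    """sigma(x) = base density from a stage-one SDF + learned residual density."""
    def __init__(self, sdf_net, beta=0.1):
        super().__init__()
        self.sdf_net = sdf_net  # stage-one SDF network, kept as the geometry prior
        self.beta = beta
        self.residual = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        s = self.sdf_net(x)                               # signed distance at x
        base = torch.sigmoid(-s / self.beta) / self.beta  # sharp density near surface
        res = self.residual(torch.cat([x, s], dim=-1))    # geometry-conditioned detail
        return base + torch.relu(res)

sdf = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
sigma = ResidualDensity(sdf)(torch.randn(1024, 3))  # densities for 1024 samples
```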
Fully convolutional detectors discard the one-to-many assignment and adopt a one-to-one assigning strategy to achieve end-to-end detection, but suffer from slow convergence. In this paper, we revisit these two assignment methods and find that bringing one-to-many assignment back to end-to-end fully convolutional detectors helps with model convergence. Based on this observation, we propose Dual Assignment for end-to-end fully convolutional deTEction (DATE). Our method constructs two branches with one-to-many and one-to-one assignment during training and speeds up the convergence of the one-to-one assignment branch by providing more supervision signals. DATE uses only the branch with the one-to-one matching strategy for model inference, which introduces no inference overhead. Experimental results show that Dual Assignment yields nontrivial improvements and speeds up model convergence on OneNet and DeFCN. Code: https://github.com/YiqunChen1999/date.
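A minimal sketch of the dual-branch layout (illustrative; the heads, losses, and assignment procedures are placeholders, not DATE's actual modules):

```python
import torch
import torch.nn as nn

class DualAssignmentDetector(nn.Module):
    def __init__(self, feat_dim=256, num_classes=80):
        super().__init__()
        self.backbone = nn.Linear(feat_dim, feat_dim)         # stand-in backbone
        self.o2o_head = nn.Linear(feat_dim, num_classes + 4)  # one-to-one branch
        self.o2m_head = nn.Linear(feat_dim, num_classes + 4)  # one-to-many branch

    def forward(self, feats, training=False):
        h = self.backbone(feats)
        if training:
            # Both branches supervise the shared backbone; the one-to-many
            # branch contributes the extra supervision that speeds convergence.
            return self.o2o_head(h), self.o2m_head(h)
        # Inference keeps only the one-to-one branch, so the auxiliary branch
        # adds no inference overhead.
        return self.o2o_head(h)

det = DualAssignmentDetector()
o2o_out, o2m_out = det(torch.randn(2, 100, 256), training=True)
```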
Contrastive Language-Image Pre-training (CLIP) learns rich representations from readily available natural language supervision. It improves general performance on downstream vision tasks, including but not limited to zero-shot, long-tail, segmentation, retrieval, captioning, and video tasks. However, to the best of our knowledge, the visual explainability of CLIP has not yet been studied. To provide visual explanations for its predictions, we propose the Image-Text Similarity Map (ITSM). Based on it, we surprisingly find that CLIP prefers background regions over the foreground and produces visualizations that are erroneous with respect to human understanding. Experimentally, we find the devil is in the pooling part, where inappropriate pooling methods lead to a phenomenon we call semantic shift. To correct and boost the visualization results, we propose masked max pooling with the attention map from a self-supervised image encoder. Meanwhile, explainability tasks and recognition tasks require different representations. To address this, we propose dual projections to meet this requirement. We integrate the above methods as Interpretable Contrastive Language-Image Pre-training (ICLIP). Experiments show that ICLIP greatly improves interpretability; for example, the nontrivial improvements are 32.85% and 49.10%, respectively, on the VOC 2012 dataset.
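A minimal sketch of an image-text similarity map and max pooling over it (illustrative; ICLIP's masking and self-supervised attention details are omitted):

```python
import torch
import torch.nn.functional as F

def itsm(patch_embeds, text_embed):
    """Cosine similarity between every image patch and the text embedding."""
    p = F.normalize(patch_embeds, dim=-1)  # (n_patches, d)
    t = F.normalize(text_embed, dim=-1)    # (d,)
    return p @ t                           # (n_patches,) similarity map

patches, text = torch.randn(49, 512), torch.randn(512)
sim_map = itsm(patches, text)  # reshape to 7x7 to visualize as a heatmap
score = sim_map.max()          # max pooling keeps the foreground peak, whereas
                               # mean pooling can drift toward background
                               # regions (the "semantic shift" described above)
```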