Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the Antarctic ice sheet. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum-likelihood techniques used in IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR) compared to current IceCube methods. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared to current maximum-likelihood techniques. When run on a GPU, the GNN is able to process IceCube events at a rate of nearly the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
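As a rough illustration of the point-cloud-graph idea (not IceCube's actual pipeline), the sketch below builds a k-nearest-neighbour graph over sensor hits and classifies the event with a tiny message-passing network in PyTorch; the hit features, graph construction, and class labels are all assumptions made for illustration.

```python
# Hypothetical sketch (not IceCube code): represent an event as a point-cloud
# graph of sensor hits and classify it with a tiny message-passing network.
import torch
import torch.nn as nn

def knn_edges(positions, k=4):
    """Connect each hit to its k nearest neighbours; returns a [2, E] edge index."""
    d = torch.cdist(positions, positions)            # pairwise distances
    d.fill_diagonal_(float("inf"))                   # exclude self-loops
    nbrs = d.topk(k, largest=False).indices          # [N, k] neighbour indices
    src = torch.arange(positions.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])

class TinyEventGNN(nn.Module):
    def __init__(self, in_dim=5, hidden=64, n_classes=3):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.message = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index):
        h = self.encode(x)                           # per-hit embeddings
        src, dst = edge_index
        msgs = self.message(torch.cat([h[dst], h[src]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, msgs)  # sum messages per node
        return self.head((h + agg).mean(dim=0))      # global mean pool -> event logits

# Toy event: 20 hits with assumed features (x, y, z, time, charge)
hits = torch.randn(20, 5)
edges = knn_edges(hits[:, :3], k=4)
print(TinyEventGNN()(hits, edges))  # unnormalised scores for illustrative classes
```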
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, child development, mathematics, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
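As an illustration of the evaluation pattern such a benchmark formalizes (not the actual BIG-bench API), here is a minimal few-shot, exact-match scoring loop; `query_model`, the toy task, and the prompt format are placeholders.

```python
# Illustrative only: a minimal few-shot, exact-match evaluation loop of the kind
# large-scale benchmarks formalise. `query_model` is a stand-in for any language
# model API; the task examples are toy placeholders, not real benchmark tasks.
from typing import Callable, List, Tuple

def few_shot_prompt(shots: List[Tuple[str, str]], query: str) -> str:
    demo = "\n".join(f"Q: {q}\nA: {a}" for q, a in shots)
    return f"{demo}\nQ: {query}\nA:"

def evaluate_task(query_model: Callable[[str], str],
                  examples: List[Tuple[str, str]],
                  n_shots: int = 2) -> float:
    """Exact-match accuracy, using the first n_shots examples as demonstrations."""
    shots, test = examples[:n_shots], examples[n_shots:]
    correct = sum(
        query_model(few_shot_prompt(shots, q)).strip() == a for q, a in test
    )
    return correct / max(len(test), 1)

# Toy usage with a dummy "model" that always answers "42":
toy_task = [("2*21?", "42"), ("6*7?", "42"), ("40+2?", "42"), ("50-8?", "42")]
print(evaluate_task(lambda prompt: "42", toy_task))  # -> 1.0
```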
Background: Accurate diagnosis of skull base tumors is essential for providing personalized surgical treatment strategies. Intraoperative diagnosis can be challenging due to tumor diversity and the lack of intraoperative pathology resources. Objective: To develop an independent and parallel intraoperative pathology workflow that can provide rapid and accurate skull base tumor diagnoses using label-free optical imaging and artificial intelligence. Methods: We used a fiber-laser-based, label-free, non-consumptive, high-resolution microscopy method ($<$60 seconds per 1 $\times$ 1 mm$^2$), called stimulated Raman histology (SRH), to image a consecutive, multicenter cohort of patients with skull base tumors. SRH images were then used to train convolutional neural network (CNN) models using three representation learning strategies: cross-entropy, self-supervised contrastive learning, and supervised contrastive learning. Our trained CNN models were tested on a held-out, multicenter SRH dataset. Results: SRH was able to image the diagnostic features of both benign and malignant skull base tumors. Of the three representation learning strategies, supervised contrastive learning most effectively learned the distinctive and diagnostic SRH image features of each skull base tumor type. In our multicenter test set, cross-entropy achieved an overall diagnostic accuracy of 91.5%, self-supervised contrastive learning 83.9%, and supervised contrastive learning 96.6%. Our trained models were able to identify tumor-normal margins and detect regions of microscopic tumor infiltration within whole SRH images. Conclusion: SRH with trained artificial intelligence models can provide rapid and accurate intraoperative analysis of skull base tumor specimens to inform surgical decision-making.
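For readers unfamiliar with supervised contrastive learning, the sketch below gives a minimal PyTorch implementation of a supervised contrastive loss of the kind referenced here; it is not the authors' code, and the batch, embedding size, and temperature are assumed for illustration.

```python
# A minimal sketch of a supervised contrastive loss: embeddings that share a class
# label are pulled together, all others are pushed apart (assumed hyper-parameters).
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    z = F.normalize(embeddings, dim=1)                    # unit-norm embeddings
    logits = z @ z.T / temperature                        # [N, N] similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all samples except the anchor itself
    denom = torch.logsumexp(logits.masked_fill(self_mask, float("-inf")),
                            dim=1, keepdim=True)
    log_prob = logits - denom

    # average the log-probability of each anchor's positives (if it has any)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    per_anchor = -(log_prob * pos_mask).sum(dim=1)[valid] / pos_counts[valid]
    return per_anchor.mean()

# Toy batch: 8 patch embeddings from a CNN backbone, 3 hypothetical tumor classes
features = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(supervised_contrastive_loss(features, labels))
```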
Autonomous experimentation enabled by artificial intelligence (AI) offers a new paradigm for accelerating scientific discovery. Non-equilibrium materials synthesis is emblematic of complex, resource-intensive experimentation, whose acceleration would be a watershed for materials discovery and development. The mapping of non-equilibrium synthesis phase diagrams has recently been accelerated by high-throughput experimentation, but materials research remains limited because the parameter space is too vast to be explored exhaustively. We demonstrate accelerated synthesis and exploration of metastable materials through hierarchical autonomous experimentation governed by the Scientific Autonomous Reasoning Agent (SARA). SARA integrates robotic materials synthesis and characterization with a hierarchy of AI methods that efficiently reveal the structure of processing phase diagrams. SARA designs lateral gradient laser spike annealing (lg-LSA) experiments for parallel materials synthesis and employs optical spectroscopy to rapidly identify phase transitions. Efficient exploration of the multi-dimensional parameter space is achieved with nested active learning (AL) cycles built upon models that incorporate the underlying physics of the experiments as well as end-to-end uncertainty quantification. With this coordination of AL at multiple scales, SARA embodies the harnessing of AI for complex scientific tasks. We demonstrate its performance by autonomously mapping synthesis phase boundaries for the Bi$_2$O$_3$ system, leading to an orders-of-magnitude acceleration in the establishment of a synthesis phase diagram that includes the conditions for kinetically stabilizing $\delta$-Bi$_2$O$_3$ at room temperature, a critical development for electrochemical technologies such as solid oxide fuel cells.
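To make the active-learning idea concrete, the sketch below shows a generic uncertainty-driven acquisition loop with a Gaussian-process surrogate in scikit-learn; the objective function, kernel, and parameter ranges are illustrative assumptions, not SARA's actual models.

```python
# Schematic active-learning loop: fit a surrogate to the measurements so far and
# query the processing condition with the largest predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def pretend_experiment(x):
    """Stand-in for a synthesis + characterisation measurement (e.g. a spectral score)."""
    return np.sin(3 * x[0]) * np.cos(2 * x[1])

# Candidate conditions, e.g. (normalised anneal temperature, dwell time) -- assumed
candidates = np.random.default_rng(0).uniform(0, 1, size=(200, 2))
X = candidates[:5].tolist()
y = [pretend_experiment(x) for x in X]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
for step in range(10):
    gp.fit(np.array(X), np.array(y))
    _, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(std)]          # most uncertain condition
    X.append(nxt.tolist())
    y.append(pretend_experiment(nxt))         # "run" the experiment
    print(f"step {step}: queried {np.round(nxt, 3)}, max std {std.max():.3f}")
```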
Malaria is a life-threatening disease affecting millions. Microscopy-based assessment of thin blood films is the standard method for (i) determining malaria species and (ii) quantitating high-parasitemia infections. Full automation of malaria microscopy by machine learning (ML) is a challenging task because field-prepared slides vary widely in quality and presentation, and artifacts often heavily outnumber the relatively rare parasites. In this work, we describe a complete, fully automated framework for thin-film malaria analysis that applies ML methods, including convolutional neural networks (CNNs), trained on a large and diverse dataset of field-prepared thin blood films. Quantitation and species identification results are close to sufficiently accurate to meet the concrete needs of drug resistance monitoring and clinical use cases. We focus our methods and performance metrics on field use-case requirements, and we discuss key issues and important metrics for applying ML methods to malaria microscopy.
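As a schematic of the patch-classification-and-quantitation pattern described here (assumptions throughout, not the authors' framework), the following PyTorch sketch classifies candidate patches with a tiny CNN and reports the fraction flagged as infected.

```python
# Simplified two-stage pattern: crop candidate objects from a thin-film image,
# classify each patch with a CNN, and quantitate parasitemia as the infected fraction.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Tiny stand-in for a patch classifier: parasite vs. artifact/uninfected."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def quantitate(model, patches, threshold=0.5):
    """Return estimated parasitemia = infected patches / total patches."""
    with torch.no_grad():
        probs = torch.softmax(model(patches), dim=1)[:, 1]
    return (probs > threshold).float().mean().item()

# Toy batch of 64 candidate patches (3x32x32 crops around detected cells)
patches = torch.rand(64, 3, 32, 32)
print(f"estimated parasitemia: {quantitate(PatchCNN(), patches):.2%}")
```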
Traditionally, data analysis and theory have been viewed as separate disciplines, each feeding into fundamentally different types of models. Modern deep learning technology is beginning to unify these two disciplines and will produce a new class of predictively powerful space weather models that combine the physical insights gained by data and theory. We call on NASA to invest in the research and infrastructure necessary for the heliophysics community to take advantage of these advances.
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
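To illustrate what thorough hyper-parameter tuning on held-out validation data can look like in practice, here is a minimal sketch using a linear softmax (cross-entropy) classifier and a small grid; the dataset, model, and grid are stand-ins, not the benchmark's setup.

```python
# Minimal held-out-validation tuning loop for a simple cross-entropy baseline.
from itertools import product
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best, best_val = None, -1.0
for C, max_iter in product([0.01, 0.1, 1.0, 10.0], [200, 1000]):
    clf = LogisticRegression(C=C, max_iter=max_iter).fit(X_train, y_train)
    val_acc = clf.score(X_val, y_val)       # select on held-out validation data only
    if val_acc > best_val:
        best, best_val = clf, val_acc

print(f"val acc {best_val:.3f}, test acc {best.score(X_test, y_test):.3f}")
```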
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is not generally made carefully. In this paper, we conduct a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms, among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability in different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository: https://github.com/amorimlb/scaling_matters
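The experimental pattern, crossing scaling techniques with classifiers and comparing scores, can be sketched in a few lines of scikit-learn; the dataset and model list below are small illustrative stand-ins, not the 82-dataset study.

```python
# Cross a few scaling techniques with a few classifiers and compare CV accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
scalers = {"none": None, "standard": StandardScaler(),
           "minmax": MinMaxScaler(), "robust": RobustScaler()}
models = {"knn": KNeighborsClassifier(), "svm": SVC(),
          "rf": RandomForestClassifier(random_state=0)}

for s_name, scaler in scalers.items():
    for m_name, model in models.items():
        steps = [scaler, model] if scaler is not None else [model]
        acc = cross_val_score(make_pipeline(*steps), X, y, cv=5).mean()
        print(f"{s_name:>8} + {m_name:<3}: {acc:.3f}")
```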
The devastation caused by the coronavirus pandemic makes it imperative to design automated techniques for fast and accurate detection. We propose a novel non-invasive tool, using deep learning and imaging, for delineating COVID-19 infection in lungs. The Ensembling Attention-based Multi-scaled Convolution network (EAMC), employing Leave-One-Patient-Out (LOPO) training, exhibits high sensitivity and precision in outlining infected regions along with assessment of severity. The Attention module combines contextual with local information, at multiple scales, for accurate segmentation. Ensemble learning integrates heterogeneity of decisions through different base classifiers. The superiority of EAMC, even with severe class imbalance, is established through comparison with existing state-of-the-art learning models over four publicly available COVID-19 datasets. The results suggest the relevance of deep learning in providing assistive intelligence to medical practitioners when they are overburdened with patients, as during pandemics. Its clinical significance lies in its unprecedented scope for providing low-cost decision-making for patients lacking specialized healthcare at remote locations.
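As an illustration of the Leave-One-Patient-Out protocol (not the EAMC model itself), the sketch below uses scikit-learn's LeaveOneGroupOut so that no patient contributes to both training and test folds; the features, labels, and classifier are placeholders.

```python
# LOPO evaluation: each patient's slices are held out in turn, so no patient
# appears in both the training and test data. Data here are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))            # e.g. per-slice feature vectors
y = rng.integers(0, 2, size=60)         # e.g. infected vs. not infected
patients = np.repeat(np.arange(6), 10)  # 6 patients, 10 slices each

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patients):
    clf = LogisticRegression(max_iter=500).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"LOPO mean accuracy: {np.mean(scores):.3f}")
```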