Classical machine learning (ML) provides a potentially powerful approach to solving challenging quantum many-body problems in physics and chemistry. However, the advantages of ML over more traditional methods have not been firmly established. In this work, we prove that classical ML algorithms can efficiently predict ground-state properties of gapped Hamiltonians in finite spatial dimensions, after learning from data obtained by measuring other Hamiltonians in the same quantum phase of matter. In contrast, under a widely accepted complexity-theoretic assumption, classical algorithms that do not learn from data cannot achieve the same guarantee. We also prove that classical ML algorithms can efficiently classify a wide range of quantum phases of matter. Our arguments are based on the concept of a classical shadow, a succinct classical description of a many-body quantum state that can be constructed in feasible quantum experiments and used to predict many properties of the state. Extensive numerical experiments corroborate our theoretical results in a variety of scenarios, including Rydberg atom systems, 2D random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
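As a concrete illustration of the classical-shadow estimator underlying these results, the sketch below implements the Pauli-basis (random single-qubit measurement) protocol; the data format and function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def estimate_pauli(snapshots, pauli):
    """Unbiased classical-shadow estimate of a Pauli-string expectation.

    snapshots: list of (bases, outcomes); bases is a string like "XZY" giving
               the random single-qubit measurement basis per qubit, outcomes
               is a +/-1 array of measured eigenvalues.
    pauli:     dict {qubit_index: "X"|"Y"|"Z"} giving the observable's support.
    """
    total = 0.0
    for bases, outcomes in snapshots:
        val = 1.0
        for q, p in pauli.items():
            # A snapshot contributes only when the random basis matched;
            # the factor 3 inverts the depolarizing measurement channel.
            val *= 3.0 * outcomes[q] if bases[q] == p else 0.0
        total += val
    return total / len(snapshots)

# Toy data: 3-qubit snapshots with uniformly random bases and outcomes.
rng = np.random.default_rng(0)
snaps = [("".join(rng.choice(list("XYZ"), 3)), rng.choice([-1, 1], 3))
         for _ in range(10000)]
print(estimate_pauli(snaps, {0: "Z", 1: "Z"}))  # ~0 for this random data
```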
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
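For readers who want to try the released checkpoints, a minimal sketch using the Hugging Face transformers API; the smaller bigscience/bloom-560m variant is assumed here so the example fits on a single device (the full model id is bigscience/bloom).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any BLOOM checkpoint on the Hugging Face Hub works here; the 560M
# variant keeps the sketch runnable on one GPU or CPU.
tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tok("Translate to French: Hello, world!", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```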
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that can replace the standard mobile ISPs and run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and can process Full HD photos in 20-50 milliseconds while achieving high-fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
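A hedged sketch of the deployment path the challenge targets: converting a toy RAW-to-RGB Keras network to TensorFlow Lite. The tiny architecture is a placeholder, not a challenge solution.

```python
import tensorflow as tf

# Placeholder standing in for a challenge ISP network: RAW-to-RGB,
# fully convolutional so it accepts any input resolution.
isp = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 4)),   # packed RAW (RGGB) input
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
])

# Convert to TensorFlow Lite, the deployment format used in the challenge.
converter = tf.lite.TFLiteConverter.from_keras_model(isp)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables e.g. quantization
with open("isp.tflite", "wb") as f:
    f.write(converter.convert())
```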
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
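A minimal sketch of using one of the released Flan-T5 checkpoints via the Hugging Face transformers API; google/flan-t5-base is the smallest public variant, and the prompt is illustrative.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load a released Flan-T5 checkpoint; larger variants (xl, xxl) swap in directly.
tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

prompt = "Answer the following question. What is the capital of France?"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
```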
State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting, without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for a new task requires experimentation: different prompt templates with different wording choices lead to significant differences in accuracy. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that lets users first focus on model feedback using a small amount of data before moving on to a large-data regime, in which quantitative measures of the task allow empirical grounding of promising prompts. The tool then makes it easy to deploy the newly created ad-hoc models. We demonstrate the utility of PromptIDE (http://prompt.vizhub.ai) and our workflow using several real-world use cases.
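The small-data stage of this workflow amounts to scoring a handful of candidate templates on a few labeled examples. A hedged sketch, where classify stands in for any zero-shot LLM call and both templates are invented for illustration:

```python
# Hypothetical sketch of the small-data stage: score several prompt
# templates on a handful of labeled examples before scaling up.
def accuracy(template, examples, classify):
    """Fraction of examples where the model's answer matches the label.

    classify: any callable that sends a prompt to an LLM and returns a
    label string; it is an assumption of this sketch, not a real API.
    """
    hits = 0
    for text, label in examples:
        pred = classify(template.format(text=text))  # e.g. "positive"
        hits += int(pred == label)
    return hits / len(examples)

templates = [
    "Review: {text}\nSentiment (positive/negative):",
    "Is the following review positive or negative?\n{text}\nAnswer:",
]
# scores = [(accuracy(t, small_dev_set, classify), t) for t in templates]
# Promising templates are then re-scored on a larger set before deployment.
```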
Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language model pretraining (Radford et al., 2019). Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping any natural language task into a human-readable prompted form. We convert a large set of supervised datasets, each with multiple prompts using diverse wording. These prompted datasets allow for benchmarking a model's ability to perform completely unseen tasks. We fine-tune a pretrained encoder-decoder model (Raffel et al., 2020; Lester et al., 2021) on this multitask mixture covering a wide variety of tasks. The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size. Furthermore, our approach attains strong performance on a subset of tasks from the BIG-bench benchmark, outperforming models up to 6x its size. All prompts and trained models are available at https://github.com/bigscience-workshop/promptsource and https://huggingface.co/bigscience/t0pp.
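A hedged sketch of inspecting the released prompts with the promptsource library, following the pattern in its README; template names and exact signatures may differ across versions.

```python
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

# Prompts for one of the datasets in the training mixture.
ag_news_prompts = DatasetTemplates("ag_news")
example = load_dataset("ag_news", split="train")[0]

# Each template renders the raw example into an (input, target) text pair.
for name in ag_news_prompts.all_template_names:
    inp, target = ag_news_prompts[name].apply(example)
    print(f"--- {name} ---\n{inp}\n=> {target}")
```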
A 3D deep learning model (OARnet) is developed and used to delineate 28 head-and-neck (H&N) organs-at-risk (OARs) on CT images. OARnet uses a densely connected network to detect the OAR's bounding box and then delineates the OAR within the box. It reuses information from any layer in subsequent layers and uses skip connections to combine information from different dense-block levels to progressively improve delineation accuracy. Training used up to 28 expert manually delineated (MD) OARs from 165 CTs. The Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95) with respect to the MD were evaluated on 70 additional CTs. Mean, maximum, and root-mean-square dose differences with respect to the MD were evaluated on 56 of the 70 CTs. OARnet is compared with UaNet, AnatomyNet, and multi-atlas segmentation (MAS). Wilcoxon signed-rank tests with 95% confidence intervals were used to assess significance. The tests show that, compared with UaNet, OARnet improves (p < 0.05) the DSC (23/28 OARs) and HD95 (17/28). OARnet outperforms AnatomyNet and MAS in DSC (28/28) and HD95 (27/28). Compared with UaNet, OARnet improves the median DSC by up to 0.05 and the HD95 by up to 1.5 mm. Compared with AnatomyNet and MAS, OARnet improves the median (DSC, HD95) by up to (0.08, 2.7 mm) and (0.17, 6.3 mm), respectively. Dosimetrically, OARnet outperforms UaNet (Dmax 7/28; Dmean 10/28), AnatomyNet (Dmax 21/28; Dmean 24/28), and MAS (Dmax 22/28; Dmean 21/28). The DenseNet architecture is optimized using a hybrid approach that performs OAR-specific bounding-box detection followed by feature recognition. Compared with other auto-delineation methods, OARnet is better than or equal to UaNet for all geometric endpoints except one (temporal lobe L, HD95) and all dosimetric endpoints except one (eye L, mean dose) for the 28 H&N OARs, and is better than or equal to AnatomyNet and MAS for all OARs.
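The two geometric endpoints used throughout (DSC and HD95) can be computed from binary masks as in the sketch below; this voxel-based HD95 is a common approximation to the surface-based definition, and the function names are ours, not the paper's.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (mm) between masks."""
    a, b = a.astype(bool), b.astype(bool)
    # Distance from each voxel of one mask to the nearest voxel of the other.
    da = distance_transform_edt(~b, sampling=spacing)[a]
    db = distance_transform_edt(~a, sampling=spacing)[b]
    return np.percentile(np.concatenate([da, db]), 95)
```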
In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this interleaved two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM and KA on multiple tasks, and present detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening the sequence objects and also allows us to operate on significantly larger sequences than existing methods.
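A hedged sketch of the two components as described: a temporal encoder per key (TVM) followed by self-attention across the key-conditioned summaries (KA). Dimensions, pooling, and tokenization are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TVMKA(nn.Module):
    """Sketch of Temporal Value Modeling + Key Aggregation."""
    def __init__(self, n_keys, vocab, d=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        # TVM: encode the value sequence of each key independently.
        self.tvm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), 2)
        # KA: self-attend across the per-key summary vectors.
        self.ka = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), 2)

    def forward(self, values):           # values: (batch, n_keys, seq_len) ids
        b, k, t = values.shape
        x = self.embed(values.view(b * k, t))
        x = self.tvm(x).mean(dim=1)      # one summary vector per key sequence
        x = self.ka(x.view(b, k, -1))    # attend across keys
        return x.mean(dim=1)             # sequence-level representation

# reps = TVMKA(n_keys=8, vocab=1000)(torch.randint(0, 1000, (2, 8, 16)))
```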
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial-scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques, reconstructing lost features using a pixel-to-pixel approach with a modified super-resolution generative adversarial network (SRGAN) architecture, to better aid clinicians in their decision-making and improve patient outcomes.
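A minimal sketch of the degradation step: Gaussian windowing of spectral-domain data to simulate a reduced-bandwidth, lower-axial-resolution A-scan. The window parameterization is an assumption; narrower windows yield broader axial point-spread functions.

```python
import numpy as np

def gaussian_window_ascan(spectrum, rel_bandwidth=0.5):
    """Apply a centered Gaussian window to one A-scan's spectrum.

    spectrum: complex spectral data with DC at the center (fftshifted).
    rel_bandwidth: illustrative width parameter in (0, 1].
    """
    n = len(spectrum)
    k = np.arange(n) - n // 2
    window = np.exp(-0.5 * (k / (rel_bandwidth * n / 4)) ** 2)
    return np.abs(np.fft.ifft(np.fft.ifftshift(spectrum * window)))

# spectrum = np.fft.fftshift(np.fft.fft(a_scan))  # spectral data of an A-scan
# low_res = gaussian_window_ascan(spectrum, rel_bandwidth=0.4)
```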
A step-search sequential quadratic programming method is proposed for solving nonlinear equality constrained stochastic optimization problems. It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed via inexact probabilistic zeroth- and first-order oracles. Under reasonable assumptions, a high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived. Numerical results on standard nonlinear optimization test problems illustrate the advantages and limitations of our proposed method.
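The per-iterate subproblem in such a method follows the standard SQP template; a hedged rendering in our notation (the paper's exact formulation may differ), with $g_k$ a stochastic gradient estimate from the first-order oracle and $H_k$ a symmetric positive-definite Hessian approximation:

```latex
d_k = \arg\min_{d \in \mathbb{R}^n} \; g_k^\top d + \tfrac{1}{2}\, d^\top H_k d
\quad \text{s.t.} \quad c(x_k) + \nabla c(x_k)^\top d = 0,
\qquad x_{k+1} = x_k + \alpha_k d_k,
```

where the step size $\alpha_k$ is chosen by a step search on a merit function such as $\phi_\tau(x) = \tau f(x) + \|c(x)\|$.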