Prevailing methods for assessing and comparing generative AIs incentivize responses that serve a hypothetical representative individual. Evaluating models in these terms presumes homogeneous preferences across the population and engenders selection of agglomerative AIs, which fail to represent the diverse range of interests across individuals. We propose an alternative evaluation method that instead prioritizes inclusive AIs, which provably retain the requisite knowledge not only for subsequent response customization to particular segments of the population but also for utility-maximizing decisions.
We study the compute-optimal trade-off between model and training data set sizes for large neural networks. Our result suggests a linear relation similar to that supported by the empirical analysis of Chinchilla. While that work studies transformer-based large language models trained on the MassiveText corpus (Gopher), as a starting point for the development of a mathematical theory, we focus on a simpler learning model and data generating process, each based on a neural network with a sigmoidal output unit and a single hidden layer of ReLU activation units. We establish an upper bound on the minimal information-theoretically achievable expected error as a function of model and data set sizes. We then derive allocations of computation that minimize this bound. We present empirical results suggesting that this approximation correctly identifies an asymptotic linear compute-optimal scaling. The approximation also generates new insights. Among other things, it suggests that, as the input space dimension or latent space complexity grows, as might be the case for example if a longer history of tokens is taken as input to a language model, a larger fraction of the compute budget should be allocated to growing the learning model rather than to the training data set.
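For concreteness, a minimal sketch of the learning model and data generating process described above, under assumptions the abstract leaves open (Gaussian inputs and Gaussian teacher weights; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_teacher(d, m):
    """A teacher network: one hidden layer of m ReLU units feeding a sigmoidal output unit."""
    W = rng.normal(size=(m, d)) / np.sqrt(d)   # input-to-hidden weights
    a = rng.normal(size=m) / np.sqrt(m)        # hidden-to-output weights
    return W, a

def teacher_prob(W, a, X):
    """P(label = 1 | x) under the teacher: sigmoid of the ReLU layer's output."""
    logits = np.maximum(X @ W.T, 0.0) @ a
    return 1.0 / (1.0 + np.exp(-logits))

# A data set of n examples in d input dimensions with m latent (hidden) units.
d, m, n = 16, 8, 1000
W, a = sample_teacher(d, m)
X = rng.normal(size=(n, d))
Y = rng.random(n) < teacher_prob(W, a, X)      # Bernoulli labels
```

The compute-optimal question is then how a fixed budget should be split between the size of the model fit to this data and the number of examples n.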
We develop an extension of posterior sampling for reinforcement learning (PSRL) that is suited for a continuing agent-environment interface and integrates naturally into agent designs that scale to complex environments. The approach maintains a statistically plausible model of the environment and follows a policy that maximizes expected $\gamma$-discounted return in that model. At each time, with probability $1-\gamma$, the model is replaced by a sample from the posterior distribution over environments. For a suitable schedule of $\gamma$, we establish an $\tilde{O}(\tau S \sqrt{A T})$ bound on the Bayesian regret, where $S$ is the number of environment states, $A$ is the number of actions, and $\tau$ denotes the reward averaging time, which is a bound on the duration required to accurately estimate the average reward of any policy.
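A minimal tabular sketch of the resampling scheme described above (assumptions ours: known rewards, a Dirichlet posterior over transitions, and a fixed $\gamma$ rather than the schedule the regret bound requires):

```python
import numpy as np

rng = np.random.default_rng(0)

def discounted_policy(P, R, gamma, iters=200):
    """Greedy policy from value iteration on a sampled MDP (P: S x A x S, R: S x A)."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        Q = R + gamma * P @ V
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

S, A, gamma, T = 5, 3, 0.99, 10_000
counts = np.ones((S, A, S))                      # Dirichlet(1, ..., 1) prior over transitions
R = rng.random((S, A))                           # rewards assumed known, for brevity
P_true = rng.dirichlet(np.ones(S), size=(S, A))  # the actual (unknown) environment

s, policy = 0, rng.integers(A, size=S)
for t in range(T):
    if rng.random() < 1.0 - gamma:               # replace the model at rate 1 - gamma
        P_sample = np.array([[rng.dirichlet(counts[i, j]) for j in range(A)]
                             for i in range(S)])
        policy = discounted_policy(P_sample, R, gamma)
    a = policy[s]
    s_next = rng.choice(S, p=P_true[s, a])
    counts[s, a, s_next] += 1                    # posterior update
    s = s_next
```

Between resamples, the agent follows the policy that maximizes expected $\gamma$-discounted return in the sampled model, matching the continuing interface described in the abstract.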
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
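A usage sketch, not from the paper itself: the released checkpoints can be loaded through the Hugging Face `transformers` library. The full 176B-parameter checkpoint (`bigscience/bloom`) requires hundreds of gigabytes of memory, so this example uses a smaller released variant:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"   # a small sibling of the 176B model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# BLOOM is a decoder-only model, so generation continues the prompt.
inputs = tokenizer("Translate to French: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```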
Over the past decade, the success of neural networks has established them as effective models for many relevant data generating processes. Statistical theory of neural networks suggests graceful scaling of sample complexity. For example, Jeon & Van Roy (arXiv:2203.00246) demonstrate that, when data is generated by a ReLU teacher network with $W$ parameters, an optimal learner needs only $\tilde{O}(W/\epsilon)$ samples to attain expected error $\epsilon$. However, existing computational theory suggests that, even for single-hidden-layer teacher networks, the computation required to achieve this sample complexity while attaining small error across all such teacher networks is intractable. In this work, we fit a single-hidden-layer neural network to data generated by a single-hidden-layer ReLU teacher network with parameters drawn from a natural distribution. We demonstrate that stochastic gradient descent (SGD) with automated width selection attains small expected error with a number of samples and total number of queries both nearly linear in the input dimension and width. This suggests that SGD nearly achieves the information-theoretic sample complexity bound of Jeon & Van Roy (arXiv:2203.00246) in a computationally efficient manner. An important difference between our positive empirical results and the negative theoretical results is that the latter address the worst-case error of deterministic algorithms, while our analysis centers on the expected error of a randomized algorithm.
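A sketch of the student-teacher setup described above, with assumptions ours: a fixed student width in place of the paper's automated width selection, squared-error loss, and one fresh sample per SGD step:

```python
import numpy as np

rng = np.random.default_rng(1)
d, w_teacher, w_student, steps, lr = 8, 4, 32, 20_000, 0.01

# Teacher: single hidden ReLU layer with parameters drawn from a Gaussian prior.
Wt = rng.normal(size=(w_teacher, d)) / np.sqrt(d)
at = rng.normal(size=w_teacher) / np.sqrt(w_teacher)
teacher = lambda X: np.maximum(X @ Wt.T, 0.0) @ at

# Student: same architecture, wider; width is fixed here for simplicity.
Ws = rng.normal(size=(w_student, d)) / np.sqrt(d)
a_s = rng.normal(size=w_student) / np.sqrt(w_student)

for _ in range(steps):                  # online SGD: one fresh sample per query
    x = rng.normal(size=d)
    y = teacher(x[None])[0]
    h = np.maximum(Ws @ x, 0.0)
    err = h @ a_s - y                   # residual of the squared-error loss
    grad_a = err * h
    grad_W = err * a_s[:, None] * (h > 0)[:, None] * x[None, :]
    a_s -= lr * grad_a
    Ws -= lr * grad_W

X_test = rng.normal(size=(1000, d))
mse = np.mean((np.maximum(X_test @ Ws.T, 0.0) @ a_s - teacher(X_test)) ** 2)
print(f"test MSE: {mse:.4f}")
```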
Recent work introduced the epinet as a new approach to uncertainty modeling in deep learning. An epinet is a small neural network added to a conventional neural network, which, together, can jointly produce predictive distributions. In particular, an epinet can greatly improve the quality of joint predictions across multiple inputs, a measure of how well a neural network knows what it does not know. In this paper, we examine whether epinets can offer similar advantages under distributional shifts. We find that, across ImageNet-A/O/C, epinets generally improve robustness metrics. Moreover, these improvements are more significant than those afforded by even very large ensembles, at orders of magnitude lower computational cost. However, the improvements are relatively small compared to the outstanding issues in distributionally robust deep learning. Epinets may be a useful tool in the toolbox, but they are far from a complete solution.
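A structural sketch of the idea (weights untrained and shapes ours; this shows only how an epistemic index $z$ induces a joint predictive distribution, not the paper's training procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, dz, n_classes = 16, 8, 4, 3

# Base network (a fixed random linear map standing in for a trained model).
W_base = rng.normal(size=(n_classes, d)) / np.sqrt(d)

# Epinet: a small MLP taking the base input/features and an epistemic index z.
W1 = rng.normal(size=(k, d + dz)) / np.sqrt(d + dz)
W2 = rng.normal(size=(n_classes, k)) / np.sqrt(k)

def logits(x, z):
    """Combined prediction: base logits plus an index-dependent correction."""
    h = np.maximum(W1 @ np.concatenate([x, z]), 0.0)
    return W_base @ x + W2 @ h

# Predictions vary with z; averaging over sampled indices yields the marginal
# prediction, while the spread across indices expresses epistemic uncertainty.
x = rng.normal(size=d)
probs = []
for z in rng.normal(size=(100, dz)):
    l = logits(x, z)
    e = np.exp(l - l.max())
    probs.append(e / e.sum())
print("mean prediction:", np.round(np.mean(probs, axis=0), 3))
print("spread across indices:", np.round(np.std(probs, axis=0), 3))
```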
We introduce BenchCLAMP, a Benchmark to evaluate Constrained LAnguage Model Parsing, which evaluates the generation of semantic outputs from input text via constrained decoding of prompted or fine-tuned language models. Developers of pretrained language models currently benchmark on classification, span extraction, and free-text generation tasks. Semantic parsing is neglected in language model evaluation because of the complexity of handling task-specific architectures and representations. Recent work has shown that generation from prompted or fine-tuned language models can perform well at semantic parsing when the output is constrained to be a valid semantic representation. BenchCLAMP includes context-free grammars for six semantic parsing datasets with varied output meaning representations, as well as a constrained decoding interface to generate outputs covered by these grammars. We provide low, medium, and high resource splits for each dataset, allowing accurate comparison of various language models under different data regimes. Our benchmark supports both prompt-based learning and fine-tuning, and provides an easy-to-use toolkit for language model developers to evaluate semantic parsing.
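The core mechanism, grammar-constrained decoding, can be sketched as follows (a toy stand-in, not BenchCLAMP's actual interface: the "grammar" is a balanced-parenthesis check and the "language model" is random scores):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["(", ")", "call", "arg", "<eos>"]

def allowed_next(prefix):
    """Toy completion engine: permit only tokens that can still lead to a
    valid (balanced) expression; a real grammar would be context-free."""
    depth = prefix.count("(") - prefix.count(")")
    if depth == 0:
        return {"<eos>"} if prefix else {"("}
    return {"(", ")", "call", "arg"}

def constrained_decode(max_len=20):
    prefix = []
    for _ in range(max_len):
        scores = rng.random(len(vocab))     # stand-in for LM next-token scores
        for i, tok in enumerate(vocab):     # mask out grammar-invalid tokens
            if tok not in allowed_next(prefix):
                scores[i] = -np.inf
        token = vocab[int(np.argmax(scores))]
        if token == "<eos>":
            break
        prefix.append(token)
    return " ".join(prefix)

print(constrained_decode())
```

Because every decoding step filters the model's distribution through the grammar, any completed output is guaranteed to be a valid semantic representation.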
In machine learning, an agent must estimate uncertainty in order to explore and adapt efficiently and make effective decisions. A common approach to uncertainty estimation maintains an ensemble of models. In recent years, several approaches have been proposed for training ensembles, and differing views prevail on the importance of the various ingredients of these approaches. In this paper, we aim to address the benefits of two ingredients that have come into question: prior functions and bootstrapping. We show that prior functions can significantly improve an ensemble agent's joint predictions across inputs, and that bootstrapping affords additional benefits if the signal-to-noise ratio varies across inputs. Our claims are justified by both theoretical and experimental results.
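A minimal sketch of the two ingredients (our own toy regression setting, with linear models in place of neural networks): each ensemble member combines a fixed random prior function with a trainable correction, and each is fit to its own bootstrap resample of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_models = 200, 5, 10

# Synthetic regression data.
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.5 * rng.normal(size=n)

ensemble = []
for _ in range(n_models):
    prior = rng.normal(size=d)          # fixed random prior function (never trained)
    idx = rng.integers(n, size=n)       # bootstrap resample of the data
    Xb, yb = X[idx], y[idx]
    # Train an additive correction so that (prior + theta) fits the resample.
    theta = np.linalg.lstsq(Xb, yb - Xb @ prior, rcond=None)[0]
    ensemble.append(prior + theta)

# The spread of predictions across members expresses epistemic uncertainty.
x_new = rng.normal(size=d)
preds = np.array([m @ x_new for m in ensemble])
print(f"mean {preds.mean():.3f}, std {preds.std():.3f}")
```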
Thompson sampling has proven effective across a wide range of stationary bandit environments. However, as we demonstrate in this paper, it can perform poorly when applied to nonstationary environments. We show that such failures arise because, when exploring, the algorithm does not differentiate actions based on how quickly the acquired information loses its usefulness due to nonstationarity. Building on this insight, we propose predictive sampling, which extends Thompson sampling to deprioritize acquiring information that quickly loses usefulness. We establish a Bayesian regret bound and show that, in nonstationary bandit environments, the regret incurred by Thompson sampling can far exceed that of predictive sampling. We also present implementations of predictive sampling that scale to complex bandit environments of practical interest in a computationally tractable manner. Through simulations, we demonstrate that predictive sampling outperforms Thompson sampling and other state-of-the-art algorithms across a wide range of nonstationary bandit environments.
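For reference, a sketch of the kind of nonstationary environment at issue, with the Thompson sampling baseline made exact via Kalman-filter posteriors (the environment and constants are ours; predictive sampling itself differs in the distribution the parameter estimate is drawn from, which we do not attempt to reproduce here):

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, a_r, sig_w, sig_obs = 3, 2000, 0.99, 0.1, 0.5

theta = rng.normal(size=K)              # AR(1) Gaussian bandit: arm means drift
mu, var = np.zeros(K), np.ones(K)       # per-arm Gaussian posteriors

total = 0.0
for t in range(T):
    draw = mu + np.sqrt(var) * rng.normal(size=K)   # Thompson sampling draw
    arm = int(np.argmax(draw))
    r = theta[arm] + sig_obs * rng.normal()
    total += theta[arm]

    gain = var[arm] / (var[arm] + sig_obs**2)       # Kalman update, played arm only
    mu[arm] += gain * (r - mu[arm])
    var[arm] *= 1.0 - gain

    theta = a_r * theta + sig_w * rng.normal(size=K)  # environment drifts...
    mu, var = a_r * mu, a_r**2 * var + sig_w**2       # ...and posteriors propagate

print(f"average realized mean reward: {total / T:.3f}")
```

Because information about an arm decays at the AR(1) rate, exploration that pays off under stationarity can be wasted here, which is the failure mode predictive sampling is designed to avoid.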
Each year, deep learning demonstrates new and improved empirical results with deeper and wider neural networks. Meanwhile, with existing theoretical frameworks, it is difficult to analyze networks deeper than two layers without resorting to counting parameters or encountering sample complexity bounds that are exponential in depth. It is perhaps fruitful to analyze modern machine learning under a different lens. In this paper, we propose a novel information-theoretic framework with its own notions of regret and sample complexity for analyzing the data requirements of machine learning. With our framework, we first work through some classical examples, such as scalar estimation and linear regression, to build intuition and introduce general techniques. We then use the framework to study the sample complexity of learning from data generated by deep sign neural networks, deep ReLU neural networks, and deep networks that are infinitely wide but have bounded weights. For sign neural networks, we recover sample complexity bounds that follow from VC-dimension-based arguments. For the latter two neural network settings, we establish new results which show that, under these data generating processes, the sample complexity of learning is at most linear and quadratic, respectively, in network depth.
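To make the framework's regret notion concrete, here is a sketch in our own notation (a standard information-theoretic identity consistent with the setup above, not text reproduced from the paper): for environment parameters $\theta$ and a history $H_T$ of $T$ input-label pairs with inputs independent of $\theta$, the average estimation error of the Bayesian posterior predictive satisfies
$$\mathcal{L}_T \;=\; \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\left[d_{\mathrm{KL}}\big(P(Y_{t+1} \mid \theta, X_{t+1}) \,\big\|\, P(Y_{t+1} \mid H_t, X_{t+1})\big)\right] \;=\; \frac{\mathbb{I}(\theta; H_T)}{T},$$
so depth-dependent sample complexity results of the kind described above amount to bounds on the mutual information $\mathbb{I}(\theta; H_T)$ for each data generating process.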