We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient because it uses discrete tokens and requires fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient because it uses parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, and cardinality. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
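As a rough illustration of the masked modeling objective described above, the sketch below builds a single training example by hiding a random fraction of discrete image tokens; the model would then be trained to predict the tokens at the hidden positions. The `MASK_ID` sentinel, the 4x4 token grid, and the fixed masking ratio are illustrative assumptions, not Muse's actual tokenizer or masking schedule:

```python
import numpy as np

MASK_ID = -1  # hypothetical sentinel for the [MASK] token (illustrative)

def make_masked_example(tokens, mask_ratio, rng):
    """Build one masked-modeling training example: hide a random
    fraction of the discrete image tokens; the model is trained to
    predict the tokens at the hidden positions (conditioned on text
    embeddings, omitted here)."""
    tokens = np.asarray(tokens)
    n = tokens.size
    n_mask = max(1, int(round(mask_ratio * n)))
    masked_pos = rng.choice(n, size=n_mask, replace=False)
    inputs = tokens.copy()
    inputs.flat[masked_pos] = MASK_ID
    targets = tokens.flat[masked_pos]
    return inputs, masked_pos, targets

rng = np.random.default_rng(0)
toks = np.arange(16).reshape(4, 4)  # toy 4x4 grid of VQ token ids
inp, pos, tgt = make_masked_example(toks, mask_ratio=0.5, rng=rng)
```

Parallel decoding would then fill in all masked positions at once and re-mask the least confident predictions for a few further iterations, rather than emitting tokens one at a time.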
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
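Patch-based training, the most common workaround reported above for samples too large to process at once, can be sketched minimally as tiling a large 2-D sample into fixed-size crops (the patch size, stride, and toy image here are illustrative assumptions, not taken from any particular challenge solution):

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Yield square patches from a 2-D sample that is too large to
    process at once (the patch-based training workaround)."""
    h, w = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            yield image[y:y + patch, x:x + patch]

img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a large image
patches = list(extract_patches(img, patch=4, stride=4))  # four 4x4 patches
```

In practice challenge solutions typically sample patches randomly during training and stitch overlapping patch predictions back together at inference time.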
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
As large language models (LLMs) grow larger and more sophisticated, assessing their "reasoning" capabilities in natural language grows more challenging. Recent question answering (QA) benchmarks that attempt to assess reasoning are often limited by a narrow scope of covered situations and subject matters. We introduce WikiWhy, a QA dataset built around a novel auxiliary task: explaining why an answer is true in natural language. WikiWhy contains over 9,000 "why" question-answer-rationale triples, grounded on Wikipedia facts across a diverse set of topics. Each rationale is a set of supporting statements connecting the question to the answer. WikiWhy serves as a benchmark for the reasoning capabilities of LLMs because it demands rigorous explicit rationales for each answer to demonstrate the acquisition of implicit commonsense knowledge, which is unlikely to be easily memorized. GPT-3 baselines achieve only 38.7% human-evaluated correctness in the end-to-end answer & explain condition, leaving significant room for future improvements.
The removal or cancellation of noise has wide-ranging applications in imaging and acoustics. In everyday life, denoising may even include generative aspects that are unfaithful to the ground truth. For scientific applications, however, denoising must reproduce the ground truth accurately. Here, we show how weak signals can emerge with quantitative accuracy by denoising data with deep convolutional neural networks. In particular, we study X-ray diffraction on crystalline materials. We demonstrate that weak signals stemming from charge ordering, insignificant in the noisy data, become visible and accurate in the denoised data. This success is enabled by supervised training of deep neural networks with pairs of measured low-noise and high-noise data. In this way, the neural network learns about the statistical properties of the noise. We demonstrate that using artificial noise (such as Poisson and Gaussian noise) does not yield such quantitatively accurate results. Our approach thus illustrates a practical strategy for noise filtering that can be applied to challenging acquisition problems.
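The abstract states that training uses pairs of measured low-noise and high-noise data. One assumed way to obtain such pairs, sketched below, is to collect repeated acquisitions of the same sample and use each single frame as the noisy input with the mean of the remaining frames as its low-noise target (a Noise2Noise-style pairing; the repeated-acquisition setup and averaging are assumptions of this sketch, not details from the paper):

```python
import numpy as np

def make_training_pairs(acquisitions):
    """Build (noisy input, low-noise target) pairs from repeated
    measurements of the same sample: each single acquisition serves
    as the noisy input, and the mean of the remaining acquisitions
    as its low-noise target."""
    acquisitions = np.asarray(acquisitions, dtype=float)
    pairs = []
    for i in range(len(acquisitions)):
        rest = np.delete(acquisitions, i, axis=0)
        pairs.append((acquisitions[i], rest.mean(axis=0)))
    return pairs

# three repeated 2x2 "detector frames" of the same sample
frames = [np.full((2, 2), v) for v in (1.0, 3.0, 5.0)]
pairs = make_training_pairs(frames)
```

Because both input and target come from real measurements, the network sees the true noise statistics of the instrument, which is what the abstract argues synthetic Poisson or Gaussian noise fails to capture.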
In this paper, we present a new approach to semi-supervised anomaly detection (SSAD). Our classifier is named QMS22, after the year of its inception, 2022, and builds on the framework of quadratic multiform separation (QMS), a recently introduced classification model. QMS22 tackles SSAD by solving a multi-class classification problem involving the training set and the test set of the original problem. The classification problem intentionally includes classes with overlapping samples: one of the classes contains a mixture of normal samples and outliers, while all other classes contain only normal samples. The outcome of the classification problem is then used to compute an outlier score for each sample in the test set. We also evaluate the performance of QMS22 against four other classifiers using 95 benchmark imbalanced datasets from the KEEL repository. These classifiers are BRM (Bagging-Random Miner), OCKRA (One-Class K-means with Randomly-projected features Algorithm), ISOF (Isolation Forest), and OCSVM (One-Class Support Vector Machine). Using the area under the receiver operating characteristic curve as the performance measure, QMS22 significantly outperforms ISOF and OCSVM. Moreover, Wilcoxon signed-rank tests show no statistically significant differences when testing QMS22 against BRM or QMS22 against OCKRA.
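The abstract does not specify how the classification outcome is turned into an outlier score, so the toy illustration below stands in with a generic probabilistic classifier: each test sample is scored by the probability mass assigned to the one class known to mix normal samples and outliers. This is an illustrative stand-in, not the QMS formulation itself:

```python
import numpy as np

def outlier_scores(class_probs, mixed_class):
    """Toy outlier score: the probability mass a classifier assigns
    to the one class known to mix normal samples and outliers,
    relative to the total mass over all classes. Higher = more
    anomalous. (Illustrative only; QMS is a separate model.)"""
    p = np.asarray(class_probs, dtype=float)
    return p[:, mixed_class] / p.sum(axis=1)

probs = np.array([[0.7, 0.2, 0.1],   # mostly in the mixed class
                  [0.1, 0.6, 0.3]])  # mostly in a normal-only class
scores = outlier_scores(probs, mixed_class=0)
```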
As machine learning (ML) models gain traction in clinical applications, understanding the impact of clinician and societal biases on ML models is increasingly important. While biases can arise in the labels used for model training, the many sources of these biases remain underexplored. In this paper, we highlight differential censorship (i.e., differences in testing rates across patient groups) as a source of label bias that clinical ML models may amplify, potentially causing harm. Many patient risk-stratification models are trained using the results of clinician-ordered diagnostic and laboratory tests as labels. Patients without test results are often assigned a negative label, which assumes that untested patients did not experience the outcome. Since test orders are affected by clinical and resource considerations, testing may not be uniform across the patient population, resulting in differential censorship. Differential censorship of patients at equivalent risk leads to undertesting in certain groups and, in turn, to biased labels for such groups. Using such biased labels in standard ML pipelines could contribute to gaps in model performance across patient groups. Here, we theoretically and empirically characterize the conditions under which differential censorship or undertesting affects model performance across subgroups. Our findings call attention to differential censorship as a source of label bias in clinical ML models.
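The label-construction mechanism described above can be made concrete in a few lines: untested patients are assigned a negative label regardless of their true outcome, so any group that is tested less often accumulates more false-negative labels. This is a minimal sketch of the mechanism, not the paper's formal model:

```python
import numpy as np

def observed_labels(tested, outcome):
    """Label construction under censoring: untested patients receive
    a negative label regardless of their true outcome, so groups
    tested less often accumulate more false-negative labels."""
    tested = np.asarray(tested, dtype=bool)
    outcome = np.asarray(outcome, dtype=int)
    return np.where(tested, outcome, 0)

# patients 1 and 3 experienced the outcome, but only patient 1 was tested,
# so patient 3's positive outcome is censored into a negative label
labels = observed_labels(tested=[1, 1, 0, 0], outcome=[1, 0, 1, 0])
```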
Focal loss has gained incredible popularity because it uses a simple technique to identify and utilize hard examples to achieve better performance on classification. However, this method does not easily generalize beyond classification tasks, such as to keypoint detection. In this paper, we propose a novel adaptation of focal loss for the keypoint detection task, called Adversarial Focal Loss (AFL). AFL is not only semantically analogous to focal loss, but also works as a plug-in upgrade for arbitrary loss functions. Whereas focal loss requires the output of a classifier, AFL leverages a separate adversarial network to produce a difficulty score for each input. This difficulty score can then be used to prioritize learning on hard examples, even in the absence of a classifier. In this work, we demonstrate AFL's effectiveness in enhancing existing methods for keypoint detection and verify its ability to re-weight examples based on difficulty.
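The core re-weighting idea can be sketched as follows: per-input difficulty scores from a separate network scale an arbitrary per-sample loss so that hard examples dominate the gradient. Normalizing the scores over the batch is an assumption of this sketch, not a detail taken from the paper:

```python
import numpy as np

def afl_weighted_loss(per_sample_loss, difficulty):
    """Reweight an arbitrary per-sample loss by per-input difficulty
    scores from a separate (adversarial) network, so hard examples
    dominate the gradient -- analogous to focal loss, but with no
    classifier required. Batch-normalizing the scores is an
    assumption of this sketch."""
    d = np.asarray(difficulty, dtype=float)
    w = d / d.sum()
    return float((w * np.asarray(per_sample_loss, dtype=float)).sum())

# the harder example (difficulty 3.0) contributes 3x the weight
loss = afl_weighted_loss(per_sample_loss=[1.0, 2.0], difficulty=[1.0, 3.0])
```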
Iterative refinement — starting with a random guess, then iteratively improving the guess — is a useful paradigm for representation learning because it offers a way to break symmetries among equally plausible explanations for the data. This property enables the application of such methods to infer representations of sets of entities, such as objects in physical scenes, structurally resembling clustering algorithms in latent space. However, most prior work differentiates through the unrolled refinement process, which can make optimization challenging. We observe that such methods can be made differentiable by means of the implicit function theorem, and develop an implicit differentiation approach that improves the stability and tractability of training by decoupling the forward and backward passes. This connection enables us to apply advances in optimizing implicit layers to not only improve the optimization of the slot attention module in SLATE, a state-of-the-art method for learning entity representations, but to do so with constant space and time complexity in backpropagation and with only one additional line of code.
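The decoupling described above can be illustrated on a one-dimensional fixed point: the forward pass iterates the refinement map to convergence, while the backward pass uses the implicit function theorem at the fixed point instead of differentiating through the unrolled iterations. The toy map below is a stand-in for slot attention, which is what the paper actually treats as the refinement procedure:

```python
def fixed_point(f, x, z0=0.0, iters=50):
    """Forward pass: run the iterative refinement to (near) convergence."""
    z = z0
    for _ in range(iters):
        z = f(z, x)
    return z

def implicit_grad(df_dz, df_dx, z_star, x):
    """Backward pass via the implicit function theorem: for z* = f(z*, x),
    dz*/dx = (df/dx) / (1 - df/dz) evaluated at the fixed point -- no
    differentiation through the unrolled iterations is needed."""
    return df_dx(z_star, x) / (1.0 - df_dz(z_star, x))

f = lambda z, x: 0.5 * z + x          # toy 1-D refinement map
z_star = fixed_point(f, x=3.0)        # converges to z* = 2x = 6
grad = implicit_grad(lambda z, x: 0.5, lambda z, x: 1.0, z_star, 3.0)  # = 2
```

Because the backward pass touches only the fixed point, its memory and compute cost is independent of the number of forward iterations, which is the constant-complexity property the abstract claims.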
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.