In the past few years, convolutional neural networks (CNNs) have dominated the field of computer vision, thanks to their ability to extract features and their excellent performance in classification problems, for example in the automatic analysis of X-rays. Unfortunately, these networks are considered black-box algorithms, i.e., it is impossible to understand how the algorithm reaches its final result. To apply these algorithms in different fields and test how the methodology works, we need to use explainable AI techniques. Most of the work in the medical field focuses on binary or multi-class classification problems. However, in many real-life situations, such as chest X-rays, radiological signs of different diseases can appear at the same time. This gives rise to the so-called multi-label classification problem. A downside of these tasks is class imbalance, i.e., different labels do not have the same number of samples. The main contribution of this paper is a deep learning approach for imbalanced, multi-label chest X-ray datasets. It establishes a baseline for the currently underutilized PadChest dataset and an explainable AI technique based on heatmaps. The technique also includes probabilities and inter-model matching. The results of our system are promising, especially considering the number of labels used. Moreover, the heatmaps match the expected regions, i.e., they mark the regions an expert would use to reach a decision.
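To make the described setup concrete, here is a minimal PyTorch sketch of a multi-label classification head with class-imbalance weighting. The backbone, label count, and `pos_weight` values are illustrative assumptions, not the paper's actual PadChest configuration.

```python
# Hypothetical sketch: multi-label chest X-ray classifier with imbalance
# weighting. Labels and weights are placeholders, not PadChest's real setup.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 14  # illustrative; PadChest defines many more labels

backbone = models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_LABELS)

# pos_weight > 1 amplifies the loss on under-represented positive labels,
# a common mitigation for class imbalance in multi-label problems.
pos_weight = torch.full((NUM_LABELS,), 5.0)  # e.g., negative/positive ratio
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

x = torch.randn(2, 3, 224, 224)                        # batch of X-ray images
target = torch.randint(0, 2, (2, NUM_LABELS)).float()  # one sigmoid per label
loss = criterion(backbone(x), target)
loss.backward()
```

A Grad-CAM-style heatmap could then be derived from the gradients of a chosen label's logit with respect to the last convolutional feature map, in the spirit of the heatmap technique the abstract describes.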
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
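Two of the practices the survey quantifies, patch-based training for oversized samples and k-fold cross-validation on the training set, can be sketched as follows; the patch size, stride, and data are placeholder assumptions.

```python
# Illustrative sketch of patch-based training and 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import KFold

def extract_patches(image: np.ndarray, size: int = 256, stride: int = 256):
    """Tile a large 2D image into fixed-size patches (patch-based training)."""
    h, w = image.shape
    return [image[i:i + size, j:j + size]
            for i in range(0, h - size + 1, stride)
            for j in range(0, w - size + 1, stride)]

train_images = np.random.rand(20, 1024, 1024)  # stand-in for a training set
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(train_images)):
    # train on patches from train_idx images, validate on val_idx images
    n_patches = sum(len(extract_patches(img)) for img in train_images[train_idx])
```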
Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, when people agree with an AI, even when it is incorrect. Surprisingly, overreliance does not reduce when the AI produces explanations for its predictions, compared to only providing predictions. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing overreliance to an inevitability of human cognition. By contrast, our paper argues that people strategically choose whether or not to engage with an AI explanation, demonstrating empirically that there are scenarios where AI explanations reduce overreliance. To achieve this, we formalize this strategic choice in a cost-benefit framework, where the costs and benefits of engaging with the task are weighed against the costs and benefits of relying on the AI. We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze. Through 5 studies (N = 731), we find that costs such as task difficulty (Study 1), explanation difficulty (Study 2, 3), and benefits such as monetary compensation (Study 4) affect overreliance. Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify the utility of different explanations, providing further support for our framework. Our results suggest that some of the null effects found in literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
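One plausible way to write down that comparison (the symbols below are ours, not the paper's notation) is:

```latex
% Engage with the task/explanation at cost C_verify for a guaranteed-correct
% answer worth B, or rely on the AI, which is correct with probability p_AI.
\[
  U_{\text{engage}} = B - C_{\text{verify}}, \qquad
  U_{\text{rely}} = p_{\text{AI}}\, B
\]
\[
  \text{engage} \iff C_{\text{verify}} < (1 - p_{\text{AI}})\, B
\]
```

Under this reading, anything that lowers the verification cost, such as an easier-to-check explanation, or raises the stakes, such as monetary compensation, tips the choice toward engagement and away from overreliance, consistent with the manipulations in Studies 1-4.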
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
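Because the models are publicly released, a BLOOM checkpoint can be loaded with the Hugging Face `transformers` library. The sketch below uses the small `bigscience/bloom-560m` variant; the full 176B model (`bigscience/bloom`) requires sharding across many GPUs.

```python
# Minimal sketch: greedy generation with a small public BLOOM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is an open-access language model that",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```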
Fighting online hate speech is a challenge that is usually addressed using Natural Language Processing via automatic detection and removal of hate content. Besides this approach, counter narratives have emerged as an effective tool employed by NGOs to respond to online hate on social media platforms. For this reason, Natural Language Generation is currently being studied as a way to automatize counter narrative writing. However, the existing resources necessary to train NLG models are limited to 2-turn interactions (a hate speech and a counter narrative as response), while in real life, interactions can consist of multiple turns. In this paper, we present a hybrid approach for dialogical data collection, which combines the intervention of human expert annotators over machine-generated dialogues obtained using 19 different configurations. The result of this work is DIALOCONAN, the first dataset comprising over 3000 fictitious multi-turn dialogues between a hater and an NGO operator, covering 6 targets of hate.
Achieving at least some degree of explainability requires complex analyses for many machine learning systems, such as the common black-box models. We recently proposed SupRB, a new rule-based learning system that builds compact, interpretable, and transparent models by using separate optimizers for the model selection tasks of rule discovery and rule set composition. This allows users to specifically tailor their model structure to fulfil use-case-specific explainability requirements. From an optimization perspective, this lets us define clearer goals, and we find that, in contrast to many state-of-the-art systems, it allows us to keep rule fitnesses independent. In this paper we thoroughly investigate the system's performance on a set of regression problems and compare it against XCSF, a rule-based learning system. We find SupRB's overall evaluation results comparable to XCSF's, while it allows easier control of the model structure and shows less sensitivity to random seeds and data splits. This increased control can help in subsequently providing explanations for both the training and the final structure of the model.
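The abstract does not expose SupRB's API, but the flavor of an interpretable rule-based regression model can be sketched as follows; the `Rule` structure and the mixing scheme are our own illustrative assumptions, with rule discovery and rule set composition left to the separate optimizers the paper describes.

```python
# Illustrative sketch (not SupRB's actual API): interval rules that predict
# constants; the compact rule list itself serves as the explanation.
from dataclasses import dataclass

@dataclass
class Rule:
    lower: float        # matching interval [lower, upper]
    upper: float
    prediction: float   # constant local model

    def matches(self, x: float) -> bool:
        return self.lower <= x <= self.upper

def predict(rules: list[Rule], x: float, default: float = 0.0) -> float:
    """Mix the predictions of all matching rules by simple averaging."""
    matching = [r.prediction for r in rules if r.matches(x)]
    return sum(matching) / len(matching) if matching else default

rules = [Rule(0.0, 0.5, 1.0), Rule(0.4, 1.0, 2.0)]
print(predict(rules, 0.45))  # 1.5: the mean of both matching rules
```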
Given the current global social-distancing restrictions, most people now use social media as their primary medium of communication. Consequently, millions of people suffering from mental illness are isolated and unable to get help in person. They increasingly rely on online venues to express themselves and to seek advice on dealing with their mental disorders. According to the World Health Organization (WHO), approximately 450 million people are affected. Mental illnesses such as depression and anxiety are very common and affect an individual's physical health. Artificial intelligence (AI) methods have recently been proposed to support mental health providers, including psychiatrists and psychologists, based on patients' authentic information (e.g., medical records, behavioral data, social media usage, etc.). AI innovations have demonstrated strong performance in numerous real-world applications, from computer vision to healthcare. This study analyzes unstructured user data on the Reddit platform and classifies five common mental disorders: depression, anxiety, bipolar disorder, ADHD, and PTSD. We trained traditional machine learning, deep learning, and transfer learning multi-class models to detect individuals' mental disorders. This work can benefit the public health system by automating the detection process and informing the appropriate authorities about people who need emergency assistance.
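A hedged sketch of the traditional machine-learning baseline in such a setup, TF-IDF features feeding a multi-class classifier over the five disorder labels, is shown below; the example posts are fabricated placeholders, not Reddit data.

```python
# Sketch of a five-class text classifier; the posts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["i cannot focus on anything lately",
         "my mood swings between extremes every week",
         "i feel hopeless and empty every day",
         "crowds make my heart race with worry",
         "the flashbacks keep me awake at night"]
labels = ["ADHD", "bipolar", "depression", "anxiety", "PTSD"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(posts, labels)
print(clf.predict(["i worry constantly about everything"]))
```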
We present the results of a large-scale experiment on pretraining encoders with parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M to 170M parameters, and their application to the natural language understanding (NLU) component of a virtual assistant system. Although we train using 70% spoken-form data, our teacher models perform comparably to XLM-R and mT5 when evaluated on the written-form Cross-lingual Natural Language Inference (XNLI) corpus. We performed a second stage of pretraining on our teacher models using in-domain data from our system, improving error rates by 3.86% relative for intent classification and 7.01% relative for slot filling. We find that even a 170M-parameter model distilled from our Stage-2 teacher model has 2.88% better intent classification and 7.69% better slot-filling error rates when compared to a 2.3B-parameter teacher trained only on public data (Stage 1), emphasizing the importance of in-domain data for pretraining. When evaluated offline using labeled NLU data, our 17M-parameter Stage-2 distilled model outperforms both XLM-R Base (85M params) and DistilBERT (42M params) by 4.23% to 6.14% relative. Finally, we present results from a full virtual assistant experimentation platform, where we find that models trained with our pretraining and distillation pipeline outperform models distilled from an 85M-parameter teacher by 3.74%-4.91% on an automatic measurement of full-system user dissatisfaction.
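The abstract does not give the distillation objective, but pipelines of this kind typically build on the standard teacher-student loss; a generic sketch (temperature and tensor shapes are illustrative) follows.

```python
# Generic knowledge-distillation step: match the student's distribution to
# the teacher's temperature-softened distribution via KL divergence.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

teacher_logits = torch.randn(8, 32)  # e.g., intent-classification logits
student_logits = torch.randn(8, 32, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```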
Data is central to the development and evaluation of machine learning (ML) models. However, the use of problematic or inappropriate datasets can result in harms when the resulting models are deployed. To encourage responsible practices through more deliberate reflection on datasets and transparency around dataset creation processes, researchers and practitioners have begun advocating for increased data documentation and have proposed several data documentation frameworks. However, there is little research on whether these data documentation frameworks meet the needs of the ML practitioners who create and consume datasets. To address this gap, we set out to understand ML practitioners' data documentation perceptions, needs, challenges, and desiderata, with the goal of deriving design requirements to inform future data documentation frameworks. We conducted a series of semi-structured interviews with 14 ML practitioners at a large international technology company. We had them answer a list of questions taken from Datasheets for Datasets (Gebru et al., 2021). Our findings show that current data documentation approaches are largely ad hoc and myopic in nature. Participants expressed needs for data documentation frameworks to be adaptable to their contexts, integrated into their existing tools and workflows, and automated wherever possible. Despite the fact that data documentation frameworks are often motivated from a responsible AI perspective, participants did not make the connection between the questions they were asked to answer and responsible AI implications. In addition, participants often prioritized the needs of dataset consumers, providing the information that someone unfamiliar with their datasets might need to know. Based on these findings, we derive seven design requirements for future data documentation frameworks.
The spectacular successes of recurrent neural network models where key parameters are adjusted via backpropagation-based gradient descent have inspired much thought as to how biological neuronal networks might solve the corresponding synaptic credit assignment problem. There is so far little agreement, however, as to how biological networks could implement the necessary backpropagation through time, given widely recognized constraints of biological synaptic network signaling architectures. Here, we propose that extra-synaptic diffusion of local neuromodulators such as neuropeptides may afford an effective mode of backpropagation lying within the bounds of biological plausibility. Going beyond existing temporal truncation-based gradient approximations, our approximate gradient-based update rule, ModProp, propagates credit information through arbitrary time steps. ModProp suggests that modulatory signals can act on receiving cells by convolving their eligibility traces via causal, time-invariant and synapse-type-specific filter taps. Our mathematical analysis of ModProp learning, together with simulation results on benchmark temporal tasks, demonstrate the advantage of ModProp over existing biologically-plausible temporal credit assignment rules. These results suggest a potential neuronal mechanism for signaling credit information related to recurrent interactions over a longer time horizon. Finally, we derive an in-silico implementation of ModProp that could serve as a low-complexity and causal alternative to backpropagation through time.
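As a conceptual sketch of our reading of that mechanism (not the authors' released code), filtered credit signals can be formed by causally convolving per-synapse eligibility traces with synapse-type-specific filter taps and gating them with a modulatory signal:

```python
# Conceptual sketch: credit = (eligibility trace convolved with a causal,
# type-specific filter), gated by a modulatory signal m(t). Values synthetic.
import numpy as np

T, n_syn = 100, 4
eligibility = np.random.rand(n_syn, T)        # per-synapse traces e_j(t)
taps = {0: np.array([1.0, 0.5, 0.25]),        # filter taps per synapse type
        1: np.array([1.0, 0.8])}
syn_type = np.array([0, 1, 0, 1])

credit = np.zeros_like(eligibility)
for j in range(n_syn):
    h = taps[syn_type[j]]
    # Truncating np.convolve to T keeps the filter causal and time-invariant:
    # each output uses only past and present trace values.
    credit[j] = np.convolve(eligibility[j], h)[:T]

m = np.random.rand(T)                         # modulatory learning signal
delta_w = (credit * m).sum(axis=1)            # per-synapse weight update
```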