The acquisition of high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. Annotation quality can be affected by many factors, such as the annotation instructions, the Human Intelligence Task (HIT) design, and the wages paid to annotators. To avoid potentially low-quality annotations that could mislead the evaluation of automatic summarization system outputs, we investigate the recruitment of high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations and obtain high-quality annotations while optimizing the use of resources. This paper can serve as a basis for recruiting qualified annotators for other challenging annotation tasks.
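The paper does not include code, but the gating step of such a pipeline can be sketched with MTurk's standard requester API (boto3): workers who pass the qualification stages are granted a custom qualification score, and the actual evaluation HITs are only discoverable by workers holding that score. The qualification name, worker ID, task URL, and reward below are illustrative assumptions, not the authors' setup.

```python
import boto3

# Connect to the MTurk sandbox (assumes AWS credentials are configured).
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Step 1: create a custom qualification marking workers who passed the pipeline.
qual = mturk.create_qualification_type(
    Name="summarization-eval-qualified",  # illustrative name
    Description="Workers who passed the multi-step qualification pipeline.",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# Step 2: after reviewing a worker's qualification HITs, grant them a score.
mturk.associate_qualification_with_worker(
    QualificationTypeId=qual_id,
    WorkerId="A1EXAMPLEWORKERID",  # illustrative worker ID
    IntegerValue=100,
    SendNotification=False,
)

# Step 3: restrict the actual evaluation HITs to qualified workers only.
external_question = (
    '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/'
    'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">'
    "<ExternalURL>https://example.org/eval-task</ExternalURL>"
    "<FrameHeight>600</FrameHeight></ExternalQuestion>"
)
mturk.create_hit(
    Title="Evaluate a system summary",
    Description="Rate the quality of an automatically generated summary.",
    Reward="1.00",
    MaxAssignments=3,
    AssignmentDurationInSeconds=1800,
    LifetimeInSeconds=86400,
    Question=external_question,
    QualificationRequirements=[{
        "QualificationTypeId": qual_id,
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [100],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }],
)
```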
Existing distantly supervised relation extractors usually rely on noisy data for both model training and evaluation, which may lead to garbage-in-garbage-out systems. To alleviate the problem, we study whether a small clean dataset can help improve the quality of distantly supervised models. We show that, besides enabling a more convincing evaluation of models, a small clean dataset also helps us build more robust denoising models. Specifically, we propose a new criterion for clean instance selection based on influence functions. It collects sample-level evidence for recognizing good instances (which is more informative than loss-level evidence). We also propose a teacher-student mechanism for controlling the purity of intermediate results when bootstrapping the clean set. The whole approach is model-agnostic and demonstrates strong performance on both the real-world (NYT) and synthetic noisy datasets.
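The abstract leaves the selection criterion at a high level; the PyTorch sketch below illustrates the general flavor of sample-level evidence with a first-order influence approximation. It scores each candidate instance by the alignment of its gradient with the clean-set gradient and omits the inverse-Hessian term of full influence functions, so it is a simplification rather than the paper's actual criterion.

```python
import torch

def influence_scores(model, loss_fn, clean_batch, candidates):
    """Rank candidate (noisy) instances by a first-order influence approximation.

    A positive score means the candidate's gradient points in the same direction
    as the gradient that lowers the loss on the trusted clean set.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the loss on the trusted clean set.
    clean_x, clean_y = clean_batch
    clean_loss = loss_fn(model(clean_x), clean_y)
    clean_grad = torch.autograd.grad(clean_loss, params)

    scores = []
    for x, y in candidates:  # each candidate is a single (input, label) pair
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grad = torch.autograd.grad(loss, params)
        # Sample-level evidence: alignment between candidate and clean gradients.
        score = sum((g1 * g2).sum() for g1, g2 in zip(grad, clean_grad))
        scores.append(score.item())
    return scores
```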
Evaluation in machine learning is usually informed by past choices, for example which datasets or metrics to use. This standardization enables comparison on an equal footing using leaderboards, but the evaluation choices become sub-optimal as better alternatives arise. This problem is especially pertinent in natural language generation, which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims. To make following best model evaluation practices easier, we introduce GEMv2. The new version of the Generation, Evaluation, and Metrics benchmark provides a modular infrastructure for dataset, model, and metric developers to benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages. Models for all datasets can be evaluated online, and our interactive data card creation and rendering tools make it easier to add new datasets to the living benchmark.
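The abstract does not spell out how a developer consumes the benchmark; as a rough sketch, the documented datasets are distributed through the Hugging Face Hub, so loading one looks like the snippet below. The "GEM/common_gen" identifier and split name are assumptions about the current hub layout, not an official API reference.

```python
from datasets import load_dataset

# Load one documented GEM dataset from the Hugging Face Hub.
# "GEM/common_gen" is an assumed hub path; other datasets may need a config name.
dataset = load_dataset("GEM/common_gen")

# Each split is a standard `datasets.Dataset`; inspect one example to see the
# fields documented in the dataset's data card before plugging it into a model.
print(dataset["validation"][0])
```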
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Publication rates are skyrocketing across many fields of science, and it is difficult to stay up to date with the latest research. This makes automatically summarizing the latest findings and helping scholars synthesize related work in a given area an attractive research objective. In this paper we study the problem of citation text generation, where, given a set of cited papers and a citing context, the model should generate a citation text. While citation text generation has been tackled in prior work, existing studies use different datasets and task definitions, which makes it hard to study citation text generation systematically. To address this, we propose CiteBench: a benchmark for citation text generation that unifies the previous datasets and enables standardized evaluation of citation text generation models across task settings and domains. Using the new benchmark, we investigate the performance of multiple strong baselines, test their transferability between the datasets, and deliver new insights into task definition and evaluation to guide future research in citation text generation. We make CiteBench publicly available at https://github.com/UKPLab/citebench.
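CiteBench's own loaders and evaluation scripts live in the linked repository; as a rough illustration of the unified task format, the sketch below evaluates a trivial extractive baseline (copying the first sentence of the first cited abstract) against reference citation texts with ROUGE. The instance structure and the `rouge_score` dependency are assumptions for illustration, not the benchmark's actual interface.

```python
from rouge_score import rouge_scorer

# Illustrative task format: cited-paper abstracts, the citing context, and the
# reference citation text (an assumed structure, not CiteBench's API).
instances = [
    {
        "cited_abstracts": ["We propose a transformer model for summarization ..."],
        "citing_context": "Prior work has explored neural summarization.",
        "reference": "Previous studies proposed transformer-based summarizers.",
    },
]

def first_sentence_baseline(instance):
    """Trivial extractive baseline: copy the first sentence of the first cited abstract."""
    return instance["cited_abstracts"][0].split(". ")[0] + "."

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
for inst in instances:
    prediction = first_sentence_baseline(inst)
    scores = scorer.score(inst["reference"], prediction)
    print({name: round(score.fmeasure, 3) for name, score in scores.items()})
```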
NECE is an event-based text analysis toolkit for narrative documents. NECE aims to give users open and easy access to event-based summaries and abstractions of long narrative documents through a graphical interface and a Python package, which can readily be used for narrative analysis, understanding, or other advanced purposes. Our work addresses the challenge of handling long documents through event extraction and temporal ordering of key events; meanwhile, it offers options to select and view events related to narrative entities, such as main characters and gender groups. We conduct human evaluations to demonstrate the quality of the event chain extraction system and the character feature mining algorithms. Finally, we illustrate the toolkit's potential downstream applications by demonstrating its use in gender bias analysis and question answering tasks.
The rapid on-site evaluation (ROSE) technique can significantly accelerate the diagnosis of pancreatic cancer by properly analyzing fast-stained cytopathological images. Computer-aided diagnosis (CAD) could potentially alleviate the shortage of pathologists for ROSE. However, cancerous patterns vary greatly between different samples, which makes the CAD task extremely challenging. Moreover, due to differing staining qualities and various types of acquisition devices, ROSE images exhibit complicated perturbations in color distribution, brightness, and contrast. To address these challenges, we propose a shuffle instance-based vision transformer (SI-ViT) approach, which can reduce the perturbations and enhance the modeling across instances. With the re-assembled shuffled instances and their bag-level soft labels, the approach uses a regression head to make the model focus on the cells rather than the various perturbations. Combined with a classification head, the model can also effectively recognize the general distribution patterns across different instances. The results demonstrate significant improvements in classification accuracy together with more accurate attention regions, indicating that the diverse patterns of ROSE images are effectively extracted and the complicated perturbations are substantially reduced. This also suggests that SI-ViT has great potential for analyzing cytopathological images. The code and experimental results are available at https://github.com/sagizty/mil-si.
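As a rough illustration of the two-head training setup described above, the PyTorch sketch below pairs a bag-level regression head (fit to soft labels of shuffled instance bags) with a standard classification head on top of a shared encoder. The placeholder MLP encoder, feature dimensions, loss weights, and soft-label semantics are assumptions; the actual model uses a vision transformer backbone (see the linked repository).

```python
import torch
import torch.nn as nn

class DualHeadSketch(nn.Module):
    """Shared encoder feeding a bag-level regression head and a classification head."""

    def __init__(self, in_dim=768, hidden=256, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.reg_head = nn.Linear(hidden, 1)             # bag-level soft-label regression
        self.cls_head = nn.Linear(hidden, num_classes)   # image/instance classification

    def forward(self, x):
        h = self.encoder(x)
        return self.reg_head(h).squeeze(-1), self.cls_head(h)

# Joint objective: MSE on the soft bag labels plus cross-entropy on class labels.
model = DualHeadSketch()
features = torch.randn(8, 768)            # e.g. pooled tokens of shuffled instances
soft_labels = torch.rand(8)               # assumed bag-level soft labels in [0, 1]
class_labels = torch.randint(0, 2, (8,))
reg_out, cls_out = model(features)
loss = nn.functional.mse_loss(reg_out, soft_labels) + \
       nn.functional.cross_entropy(cls_out, class_labels)
loss.backward()
```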
IoT devices are increasingly implemented with neural network models to enable smart applications. Energy harvesting (EH) technology, which harvests energy from the ambient environment, is a promising alternative to batteries for powering these devices, due to lower maintenance cost and the wide availability of energy sources. However, the power provided by energy harvesters is low and has the intrinsic drawback of instability, since it varies with the ambient environment. This paper proposes EVE, an automated machine learning (AutoML) co-exploration framework that searches for the desired multi-models with shared weights for energy-harvesting IoT devices. The shared models significantly reduce the memory footprint while offering different levels of model sparsity, latency, and accuracy to adapt to environmental changes. An efficient on-device implementation architecture is further developed to execute each model efficiently. A run-time model extraction algorithm is proposed to retrieve individual models with negligible overhead when a specific model mode is triggered. Experimental results show that the neural networks generated by EVE are on average 2.5x faster than baseline models without pruning and shared weights.
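The run-time model extraction idea can be illustrated with a small sketch: all modes share one set of weights, and each power mode keeps only the weights selected by its own sparsity mask, so switching modes is just a cheap masking step. The mask-based representation below is an assumption for illustration, not the paper's actual data layout or algorithm.

```python
import numpy as np

def extract_submodel(shared_weights, masks, mode):
    """Retrieve the sub-model for a given power mode by masking shared weights.

    `shared_weights` maps layer names to arrays shared by all modes; `masks[mode]`
    maps the same names to binary masks (one sparsity level per mode).
    """
    return {name: w * masks[mode][name] for name, w in shared_weights.items()}

# Example: two modes sharing one weight tensor, with different sparsity levels.
shared = {"fc1": np.random.randn(4, 4)}
masks = {
    "high_power": {"fc1": np.ones((4, 4))},                             # dense model
    "low_power":  {"fc1": (np.random.rand(4, 4) > 0.7).astype(float)},  # ~70% sparse
}
low_power_model = extract_submodel(shared, masks, "low_power")
```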
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of the multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
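As a rough sketch of the implicit alignment idea, the snippet below projects image tokens and point-cloud tokens into a shared width, adds an encoding of the 3D coordinates to the point tokens, concatenates everything into one sequence, and runs a standard transformer encoder. The dimensions, the simple linear positional encoding, and the omission of the detection decoder are simplifications for illustration, not CMT's actual architecture.

```python
import torch
import torch.nn as nn

class TokenFusionSketch(nn.Module):
    """Fuse image and point-cloud tokens in one sequence, with no explicit view transform."""

    def __init__(self, img_dim=256, pts_dim=128, d_model=256, nhead=8, layers=3):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.pts_proj = nn.Linear(pts_dim, d_model)
        # Encode 3D coordinates so both modalities meet in a shared feature space.
        self.pos_enc = nn.Linear(3, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)

    def forward(self, img_tokens, pts_tokens, pts_xyz):
        img = self.img_proj(img_tokens)                           # (B, N_img, d_model)
        pts = self.pts_proj(pts_tokens) + self.pos_enc(pts_xyz)   # (B, N_pts, d_model)
        return self.encoder(torch.cat([img, pts], dim=1))         # fused token sequence

fused = TokenFusionSketch()(torch.randn(2, 100, 256),
                            torch.randn(2, 200, 128),
                            torch.randn(2, 200, 3))
```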
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
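The distinction between the two attacks can be made concrete with a small sketch of the simpler one: NAIVEATTACK-style poisoning stamps a fixed trigger onto a fraction of the raw images and relabels them to the target class before distillation runs, whereas DOORPING's iterative trigger optimization during distillation is not shown here. The patch shape, location, poison rate, and target class below are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def add_patch_trigger(images, labels, target_class, poison_frac=0.1, patch_size=3):
    """Stamp a small white patch onto a fraction of the raw images and relabel them.

    The poisoned set would then be handed to the dataset distillation algorithm
    in place of the clean training set.
    """
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_frac * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0   # bottom-right white patch
    labels[idx] = target_class
    return images, labels

# Example on dummy CIFAR-shaped data.
x = torch.rand(100, 3, 32, 32)
y = torch.randint(0, 10, (100,))
x_poisoned, y_poisoned = add_patch_trigger(x, y, target_class=0)
```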