Event Detection (ED) is the task of identifying and classifying trigger words of event mentions in text. Despite considerable research effort in recent years on English text, ED in other languages has been significantly less explored. For non-English languages, important research questions include how well existing ED models perform across languages, how challenging ED is in other languages, and how well ED knowledge and annotation can be transferred across languages. Answering these questions requires multilingual ED datasets that provide consistent event annotation across languages. Some multilingual ED datasets exist; however, they tend to cover only a handful of mostly popular languages, leaving many languages unsupported. In addition, the current datasets are often small and not publicly accessible. To overcome these shortcomings, we introduce MINION, a new large-scale multilingual ED dataset that consistently annotates events in 8 different languages, 5 of which are not supported by existing multilingual datasets. We also perform extensive experiments and analysis to demonstrate the challenges and transferability of ED across the languages in MINION, which together call for more research effort in this area.
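ED is commonly cast as token-level classification: a tagger assigns each token a BIO label, and trigger spans are decoded from the label sequence. A minimal sketch of the decoding step, with an invented event type and sentence (not actual MINION annotations):

```python
# Decode (trigger phrase, event type) pairs from per-token BIO labels
# produced by a hypothetical event-detection tagger. The tokens, tags,
# and event type below are illustrative only.

def decode_triggers(tokens, bio_tags):
    """Turn tags like B-Conflict:Attack / I-Conflict:Attack into (phrase, type) pairs."""
    triggers, current, etype = [], [], None
    for tok, tag in zip(tokens, bio_tags):
        if tag.startswith("B-"):
            if current:
                triggers.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                triggers.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        triggers.append((" ".join(current), etype))
    return triggers

tokens = ["Rebels", "attacked", "the", "convoy", "yesterday"]
tags   = ["O", "B-Conflict:Attack", "O", "O", "O"]
print(decode_triggers(tokens, tags))  # [('attacked', 'Conflict:Attack')]
```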
Event Extraction (EE) is one of the fundamental tasks in Information Extraction (IE), aiming to recognize event mentions and their arguments (i.e., participants) in text. Due to its importance, extensive methods and resources have been developed for EE. However, one limitation of current EE research is the under-exploration of non-English languages, for which the lack of high-quality multilingual EE datasets for model training and evaluation has been the main hindrance. To address this limitation, we propose a novel Multilingual Event Extraction dataset (MEE) that provides annotation for more than 50K event mentions in 8 typologically different languages. MEE comprehensively annotates data for entity mentions, event triggers, and event arguments. We conduct extensive experiments on the proposed dataset to reveal challenges and opportunities for multilingual EE.
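A fully annotated event mention ties together the three layers the abstract names: entity mentions, a trigger, and arguments linking entities to roles. The record below sketches that shape; the field names, sentence, and role labels are illustrative assumptions, not the actual MEE schema:

```python
# Illustrative structure of one annotated event mention combining
# entity mentions, an event trigger, and event arguments. All field
# names and labels here are invented for illustration.
mention = {
    "sentence": "The company hired two engineers in Hanoi.",
    "entities": [
        {"text": "The company", "type": "ORG"},
        {"text": "two engineers", "type": "PER"},
        {"text": "Hanoi", "type": "LOC"},
    ],
    "trigger": {"text": "hired", "event_type": "Personnel:Start-Position"},
    "arguments": [
        {"text": "The company", "role": "Entity"},
        {"text": "two engineers", "role": "Person"},
        {"text": "Hanoi", "role": "Place"},
    ],
}

# Argument extraction evaluates (entity, role) pairs against gold ones.
roles = [a["role"] for a in mention["arguments"]]
print(roles)  # ['Entity', 'Person', 'Place']
```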
Streamed videos are one of the ways creators share their creative work with viewers. In these videos, streamers show how to achieve a final goal by using various tools in one or more programs for a creative project, discussing the steps required along the way. These videos can therefore provide a large amount of educational content for learning how to use the tools the streamers employ. One drawback, however, is that streamers may not provide sufficient detail for every step, making it difficult for learners to follow all of them. To alleviate this problem, one solution is to link the streamed video to relevant tutorials available for the tools used in it. More specifically, a system can analyze the content of a live streaming video and recommend the most relevant tutorials. Since existing document recommendation models cannot handle this scenario, in this work we present a novel dataset and model for tutorial recommendation for live streamed videos. Our extensive analysis of the proposed dataset and model reveals the challenging nature of this task.
Keyphrase extraction is one of the important tasks for document understanding in NLP. While most prior work has focused on formal settings such as books, news, or web blogs, informal text such as video transcripts is less explored. To address this limitation, in this work we present a novel corpus and method for keyphrase extraction from the transcripts of videos streamed on the Behance platform. More specifically, we propose a novel data augmentation method that enriches the model with background knowledge for the keyphrase extraction task drawn from other domains. Extensive experiments on the proposed dataset show the effectiveness of the introduced method.
Large pre-trained transformer language models such as BERT have dramatically changed the field of natural language processing (NLP). We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training, prompting, or text generation approaches. We also present approaches that use pre-trained language models to generate data for training augmentation or other purposes. We conclude with a discussion of limitations and suggested directions for future research.
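The prompting approach surveyed here casts a task as text completion: labeled demonstrations are concatenated before the query, and the model is asked to continue the text. A minimal sketch of prompt construction, with an invented sentiment task and examples:

```python
# Build a few-shot prompt for text completion. The task (sentiment),
# demonstrations, and template are illustrative assumptions; real
# prompting work varies templates and demonstration selection.
def build_prompt(demos, query):
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("Great acting and a moving story.", "positive"),
    ("Dull plot, I walked out.", "negative"),
]
prompt = build_prompt(demos, "A delightful surprise from start to finish.")
print(prompt)
```

The model's continuation after the final "Sentiment:" is read off as the predicted label.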
Semantic communication (SemCom) and edge computing are two disruptive solutions to the emerging requirements of massive data communication, bandwidth efficiency, and low-latency data processing in the Metaverse. However, edge computing resources are often provided by computing service providers, so it is essential to design appealing incentive mechanisms for the provision of these limited resources. A deep learning (DL)-based auction has recently been proposed as an incentive mechanism that maximizes revenue while holding important economic properties, i.e., individual rationality and incentive compatibility. Therefore, in this work, we introduce the design of a DL-based auction for computing resource allocation in the SemCom-enabled Metaverse. First, we briefly introduce the fundamentals and challenges of the Metaverse. Second, we present the preliminaries of SemCom and edge computing. Third, we review various incentive mechanisms for edge computing resource trading. Fourth, we present the design of the DL-based auction for edge resource allocation in the SemCom-enabled Metaverse. Simulation results demonstrate that the DL-based auction improves revenue while nearly satisfying the individual rationality and incentive compatibility constraints.
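The DL-based auction *learns* allocation and payment rules; the classic hand-crafted mechanism that exactly satisfies the two cited properties (individual rationality and incentive compatibility) is the second-price (Vickrey) auction, sketched below for a single unit of edge computing resource. The bidder names are illustrative:

```python
# Second-price auction: the highest bidder wins and pays the
# second-highest bid. Truthful bidding is a dominant strategy
# (incentive compatibility), and the winner never pays more than
# its bid (individual rationality). Bidder names are invented.
def second_price_auction(bids):
    """bids: {bidder: bid}. Returns (winner, payment)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment

winner, pay = second_price_auction({"vsp_a": 9.0, "vsp_b": 7.5, "vsp_c": 4.0})
print(winner, pay)  # vsp_a 7.5
```

A learned auction trades the guaranteed properties of this rule for higher revenue, enforcing the properties only approximately via training constraints, which is why the abstract says they are "nearly" satisfied.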
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
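A decoder-only model like BLOOM generates text autoregressively: each next token is predicted from the tokens produced so far. The loop can be sketched with a toy deterministic "model" in place of the Transformer (the bigram table below is invented for illustration):

```python
# Autoregressive generation sketch: repeatedly predict the next token
# from the sequence so far. Here a toy bigram lookup stands in for a
# decoder-only Transformer; the vocabulary is invented.
BIGRAMS = {
    "<s>": "the", "the": "model", "model": "writes",
    "writes": "text", "text": "</s>",
}

def generate(start="<s>", max_len=10):
    out = [start]
    while out[-1] != "</s>" and len(out) < max_len:
        out.append(BIGRAMS[out[-1]])  # greedy next-token step
    return out[1:-1]  # strip start/end markers

print(" ".join(generate()))  # the model writes text
```

In the real model, the lookup is replaced by a forward pass over all previous tokens followed by sampling or greedy selection from the output distribution.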
3D inference from monocular vision using neural networks is an important research area of computer vision. The area has diverse applications, and many proposed solutions have shown remarkable performance. Although much effort has been invested, some questions remain unanswered, some of them fundamental. In this paper, I discuss a problem that I hope will come to be known as a generalization of the Blind Perspective-n-Point (Blind PnP) problem for object-driven 3D inference based on 2D representations. The vital difference between this fundamental problem and the Blind PnP problem is that in the fundamental problem, 3D inference parameters are attached directly to 3D points, and the camera concept is represented through the sharing of the parameters of these points. By providing an explainable and robust gradient-descent solution based on 2D representations for an important special case of the problem, the paper opens up a new approach to using available information-based learning methods to solve problems related to 3D object pose estimation from 2D images.
Robots have been brought to work close to humans in many scenarios. For coexistence and collaboration, robots should be safe and pleasant for humans to interact with. To this end, robots can be made physically soft with multimodal sensing/perception, so that they have better awareness of the surrounding environment and can respond properly to humans' actions and intentions. This paper introduces a novel soft robotic link, named ProTac, that possesses multiple sensing modes, tactile and proximity sensing, based on computer vision and a functional material. These modalities come from a layered structure of a soft transparent silicone skin, a polymer dispersed liquid crystal (PDLC) film, and reflective markers. The PDLC film can actively switch between opaque and transparent states, from which tactile and proximity sensing can be obtained using cameras built solely inside the ProTac link. In this paper, inference algorithms for tactile and proximity perception are introduced. Evaluation results for the two sensing modalities demonstrate that, with a simple activation strategy, the ProTac link can effectively perceive useful information from both approaching and in-contact obstacles. The proposed sensing device is expected to support solutions for the design of robots with softness, whole-body multimodal sensing, and safe control strategies.
This paper discusses facial expression recognition models and description generation models for building descriptive sentences about images and the facial expressions of people in them. Our study shows that YOLOv5 achieves better results than a traditional CNN for all emotions on the KDEF dataset. In particular, the accuracies of the CNN and YOLOv5 models are 0.853 and 0.938, respectively. A model for image description based on a merged architecture is proposed, using VGG16 with descriptions encoded by an LSTM model. YOLOv5 is also used to identify the dominant colors of objects in the image and, when necessary, to correct color words in the generated description. If the description contains words referring to a person, we recognize the emotion of the person in the image. Finally, we combine the results of all models to create sentences describing the visual content and human emotions in the image. Experimental results on the Flickr8K dataset in Vietnamese achieve BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.628, 0.425, 0.280, and 0.174, respectively.
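The BLEU-1 component reported above is built on modified (clipped) unigram precision of the candidate caption against the reference. A minimal sketch of that quantity, with an invented caption pair (real BLEU also folds in higher-order n-grams and a brevity penalty):

```python
from collections import Counter

# Modified unigram precision: each candidate word is credited at most
# as many times as it appears in the reference. Captions are invented.
def bleu1_precision(candidate, reference):
    cand, ref = Counter(candidate), Counter(reference)
    clipped = sum(min(count, ref[word]) for word, count in cand.items())
    return clipped / max(1, sum(cand.values()))

cand = "a man is smiling happily".split()
ref  = "a young man is smiling".split()
print(round(bleu1_precision(cand, ref), 2))  # 0.8 (4 of 5 words matched)
```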