The Resource Description Framework (RDF) and Property Graph (PG) are two of the most commonly used data models for representing, storing, and querying graph data. We present Expressive Reasoning Graph Store (ERGS), a graph store built on top of JanusGraph (a property graph store) that also allows storing and querying RDF datasets. We first describe how RDF data can be translated into a property graph representation, and then describe a query translation module that converts SPARQL queries into a series of Gremlin traversals. The converter and translator thus developed allow any Apache TinkerPop-compliant graph database to store and query RDF datasets. We demonstrate the effectiveness of the proposed approach using JanusGraph as the underlying property graph store and compare its performance with standard RDF systems.
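A minimal sketch of the kind of rewrite such a translator performs, assuming resources are stored as vertices keyed by an `iri` property and triples with IRI objects become edges labelled by the predicate; the mapping details, endpoint, and example IRIs are assumptions for illustration, not the ERGS implementation:

```python
# Illustrative sketch: one SPARQL triple pattern rewritten as a Gremlin traversal
# over a property-graph encoding of RDF. Assumes a Gremlin Server (e.g. JanusGraph)
# is reachable at the given endpoint.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# SPARQL: SELECT ?city WHERE { <http://example.org/Alice> <http://example.org/livesIn> ?city }
SUBJECT = "http://example.org/Alice"
PREDICATE = "http://example.org/livesIn"

def query_cities():
    conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
    g = traversal().withRemote(conn)
    try:
        # Resources are vertices keyed by an 'iri' property; triples whose object
        # is an IRI become edges labelled with the predicate IRI.
        return (g.V().has("iri", SUBJECT)
                 .out(PREDICATE)
                 .values("iri")
                 .toList())
    finally:
        conn.close()
```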
With the advent of Neural Style Transfer (NST), stylizing an image has become quite popular. A convenient way to extend stylization techniques to videos is to apply them on a per-frame basis. However, such per-frame application usually lacks temporal consistency, which manifests as undesirable flickering artifacts. Most of the existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks. They (1) are only suitable for a limited range of stylization techniques, (2) can only be applied in an offline fashion requiring the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not provide interactive consistency-control. Note that existing consistent video-filtering approaches aim to completely remove flickering artifacts and thus do not respect any specific consistency-control aspect. For stylization tasks, however, consistency-control is an essential requirement, since a certain amount of flickering can add to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To achieve the above requirements, we propose an approach that can stylize video streams while providing interactive consistency-control. Apart from stylization, our approach also supports various other image processing filters. To achieve interactive performance, we develop a lite optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy. We show that the final consistent video output using our flow network is comparable to that obtained using a state-of-the-art optical-flow network. Further, we employ an adaptive combination of local and global consistent features and enable interactive selection between the two. Through objective and subjective evaluation, we show that our method is superior to state-of-the-art approaches.
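As a rough illustration of flow-based consistency-control (a generic blending scheme, not necessarily the paper's exact formulation), the previous stylized output can be warped along backward optical flow and blended with the current per-frame stylization, with a single user-adjustable weight playing the role of the interactive control:

```python
# Minimal temporal-consistency sketch: blend the freshly stylized frame with the
# previous output warped along backward optical flow (flow[y, x] gives, for each
# pixel of the current frame, the displacement to its location in the previous frame).
import cv2
import numpy as np

def warp(prev_output: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp the previous stylized frame along the optical flow."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_output, map_x, map_y, cv2.INTER_LINEAR)

def consistent_frame(stylized: np.ndarray,
                     prev_output: np.ndarray,
                     flow: np.ndarray,
                     consistency: float = 0.5) -> np.ndarray:
    """consistency = 0 keeps pure per-frame stylization (maximum flicker);
    consistency = 1 maximizes temporal smoothness."""
    warped = warp(prev_output, flow)
    blended = (1.0 - consistency) * stylized + consistency * warped
    return blended.astype(stylized.dtype)
```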
Large language models have ushered in a golden age of semantic parsing. The seq2seq paradigm allows for open-schema and abstractive attribute and relation extraction given only small amounts of finetuning data. Language model pretraining has simultaneously enabled great strides in natural language inference, reasoning about entailment and implication in free text. These advances motivate us to construct ImPaKT, a dataset for open-schema information extraction, consisting of around 2500 text snippets from the C4 corpus in the shopping domain (product buying guides), professionally annotated with extracted attributes, types, attribute summaries (attribute schema discovery from idiosyncratic text), many-to-one relations between compound and atomic attributes, and implication relations. We release this data in the hope that it will be useful for fine-tuning semantic parsers for information extraction and knowledge base construction across a variety of domains. We evaluate the power of this approach by fine-tuning the open source UL2 language model on a subset of the dataset, extracting a set of implication relations from a corpus of product buying guides, and conducting human evaluations of the resulting predictions.
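A hypothetical record layout (field names are our own, not the released schema) that captures the annotation types listed above:

```python
# Illustrative data structures for one annotated snippet: extracted attributes with
# types and summaries, compound-to-atomic attribute links, and implication relations.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    span: str                  # text span the attribute was extracted from
    attr_type: str             # discovered attribute type / schema slot
    summary: str               # normalized attribute summary
    atomic_parts: List[str] = field(default_factory=list)  # compound -> atomic links

@dataclass
class Implication:
    premise: str               # attribute summary that implies another
    conclusion: str            # attribute summary that is implied

@dataclass
class Snippet:
    text: str                  # C4 snippet from a product buying guide
    attributes: List[Attribute]
    implications: List[Implication]
```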
Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the state-of-the-art on many knowledge-intensive NLP tasks. However, FiD suffers from very expensive inference. We show that the majority of inference time results from memory bandwidth constraints in the decoder, and propose two simple changes to the FiD architecture to speed up inference by 7x. The faster decoder inference then allows for a much larger decoder. We denote FiD with the above modifications as FiDO, and show that it strongly improves performance over existing FiD models for a wide range of inference budgets. For example, FiDO-Large-XXL performs faster inference than FiD-Base and achieves better performance than FiD-Large.
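A back-of-the-envelope calculation, with assumed T5-Large-scale hyperparameters, of why the decoder is memory-bandwidth bound: every generated token must stream the cross-attention key/value cache built from all retrieved passages, plus the decoder weights, through the memory system.

```python
# Our own illustration with assumed numbers, not figures from the paper.
n_passages   = 100        # retrieved passages per question (assumed)
passage_len  = 256        # tokens per encoded passage (assumed)
d_model      = 1024       # model width (assumed)
n_dec_layers = 24         # decoder layers (assumed)
bytes_per_el = 2          # bf16

# Cross-attention K and V are cached per decoder layer over all encoder tokens.
kv_cache_bytes = n_dec_layers * 2 * n_passages * passage_len * d_model * bytes_per_el
print(f"K/V cache read per decoding step: {kv_cache_bytes / 1e9:.1f} GB")
# ~2.5 GB of memory traffic per generated token in this configuration, far more
# time on a modern accelerator than the corresponding matmul FLOPs require, which
# is why reducing decoder memory traffic yields large end-to-end speedups.
```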
Digital twin technology is considered an integral part of modern industrial development. With the rapid growth of Internet of Things (IoT) technologies and the increasing trend toward automation, the integration of the virtual and physical worlds now makes it possible to build practical digital twins for production. However, existing definitions of the digital twin are incomplete and sometimes ambiguous. Here, we conduct a historical review and analyze modern, widely held views of the digital twin to formulate a new, extended definition. We also review and discuss existing work on digital twins in safety-critical robotics applications. In particular, the use of digital twins in industrial applications requires automated and remote operation due to environmental challenges. However, uncertainty in the environment may require careful monitoring and rapid adaptation of the robots, which must remain safe and cost-effective. We present a case study developing a framework for a safety-critical robotic arm application, report system performance to demonstrate its advantages, and discuss future challenges and scope.
Digital twin technology plays a pivotal role in modern industrial development. In particular, with technological advances in the Internet of Things (IoT) and the growing trend toward autonomy, multi-sensor-equipped robotics can create practical digital twins, which are especially useful for operation, maintenance, and safety in industrial applications. Here, we demonstrate a real-world digital twin of a safety-critical robotics application with a Franka-Emika-Panda robot arm. We develop and showcase an edge-assisted, collaborative digital twin for dynamic obstacle avoidance, which allows the robot to adapt in real time when operating in uncertain and dynamic environments in the industrial IoT.
Owing to recent advances in computer vision, traffic video data has become a key factor in limiting traffic congestion. This work presents a unique technique that applies a color-coding scheme to traffic data before training a deep convolutional neural network. First, the video data is converted into an image dataset. Vehicle detection is then performed using the You Only Look Once (YOLO) algorithm. A color-coding scheme is applied to convert the image dataset into a binary image dataset. These binary images are fed into a deep convolutional neural network. Using the UCSD dataset, we obtain a classification accuracy of 98.2%.
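A sketch of the pipeline as described (our reconstruction, with a placeholder standing in for the YOLO detector and one plausible reading of the color-coding step, namely marking detected vehicle regions as white on a black background):

```python
# Pipeline sketch: video frames -> vehicle detections -> binary images for a CNN.
import cv2
import numpy as np
from typing import List, Tuple

def detect_vehicles(frame: np.ndarray) -> List[Tuple[int, int, int, int]]:
    """Placeholder: a real implementation would return (x, y, w, h) boxes from YOLO."""
    return []

def frame_to_binary(frame: np.ndarray) -> np.ndarray:
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for (x, y, w, h) in detect_vehicles(frame):
        mask[y:y + h, x:x + w] = 255   # vehicle region -> white
    return mask

def video_to_binary_dataset(path: str, stride: int = 5) -> List[np.ndarray]:
    cap = cv2.VideoCapture(path)
    images, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % stride == 0:            # sample every `stride`-th frame
            images.append(frame_to_binary(frame))
        i += 1
    cap.release()
    return images
```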
In this work, we propose a novel approach that uses a position-bias model to perform off-policy evaluation (OPE) for deterministic logging policies. The technique significantly broadens the class of policies for which OPE can be used. We validate the technique with two different experiments on industry-scale data. The OPE results clearly correlate closely with online results, with some persistent bias. The estimator requires that the examination model be a reasonably accurate approximation of real user behavior.
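One standard way to build such an estimator from a position-bias model (shown as an illustration, not necessarily the paper's exact formulation) is to reweight logged clicks by the ratio of examination probabilities at an item's rank under the target ranking versus its rank under the deterministic logged ranking:

```python
# Sketch of a position-bias-model (PBM) reweighting estimator.
from typing import Dict, List

def pbm_ope_value(logged_ranking: List[str],
                  clicks: Dict[str, int],
                  target_ranking: List[str],
                  exam_prob: List[float]) -> float:
    """Estimate expected clicks of `target_ranking` from logs of `logged_ranking`.

    exam_prob[k] is the PBM examination probability at rank k (0-indexed),
    assumed to be fitted separately from the logs.
    """
    logged_rank = {item: k for k, item in enumerate(logged_ranking)}
    value = 0.0
    for k_new, item in enumerate(target_ranking):
        if item not in logged_rank:          # item never logged: no feedback available
            continue
        k_old = logged_rank[item]
        if exam_prob[k_old] == 0.0:
            continue
        # Reweight the logged click by how examination changes between ranks.
        value += clicks.get(item, 0) * exam_prob[k_new] / exam_prob[k_old]
    return value
```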
Most programmers make mistakes when writing code. Some of these mistakes are small and require few edits to the original program — such errors have recently been termed last-mile mistakes. These errors break the flow of experienced developers and can stump novice programmers. Existing automated repair techniques targeting such errors are domain-specific and do not transfer easily to new domains. Transferring symbolic approaches requires substantial engineering, and neural approaches require data and retraining. We introduce RING, a multilingual repair engine powered by a large language model trained on code (such as Codex). Such a multilingual engine enables a flipped model for programming assistance, in which the programmer writes code and the AI assistance suggests fixes, compared to traditional code suggestion technology. Taking inspiration from the way programmers manually fix bugs, we show that a prompt-based strategy that conceptualizes repair as localization, transformation, and candidate ranking can successfully repair programs across multiple domains with minimal effort. We present the first results for such a multilingual repair engine by evaluating it on 6 different domains and comparing its performance against domain-specific repair engines. We show that RING can outperform the domain-specific repair engines in 3 of these domains. We also identify directions for future research on multilingual repair with LLMs.
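A sketch of the prompt-based localization/transformation/ranking loop described above; `complete` is a hypothetical stand-in for a code LLM such as Codex, and the prompt wording and ranking criterion are our own assumptions:

```python
# Illustrative last-mile repair loop built around a code LLM.
from typing import Callable, List, Tuple

def repair(buggy_program: str,
           error_message: str,
           complete: Callable[[str, int], List[Tuple[str, float]]],
           n_candidates: int = 8) -> str:
    # 1. Localization: point the model at the failing program and its diagnostic.
    prompt = (
        "### Buggy program\n"
        f"{buggy_program}\n"
        f"### Error\n{error_message}\n"
        "### Fixed program\n"
    )
    # 2. Transformation: sample several candidate fixes from the model.
    candidates = complete(prompt, n_candidates)   # [(fixed_code, log_prob), ...]
    # 3. Ranking: keep the highest-likelihood candidate.
    return max(candidates, key=lambda c: c[1])[0]
```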
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters. We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and resource requirements. We posit that a more efficient alternative is to provide the model explicit access to contextually relevant structured knowledge and train it to use that knowledge. We present LM-CORE — a general framework to achieve this — that allows decoupling of language model training from the external knowledge source and allows the latter to be updated without affecting the already trained model. Experimental results show that LM-CORE, having access to external knowledge, achieves significant and robust outperformance over state-of-the-art knowledge-enhanced language models on knowledge probing tasks; can effectively handle knowledge updates; and performs well on two downstream tasks. We also present a thorough error analysis highlighting the successes and failures of LM-CORE.
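An illustrative sketch of the decoupling idea (not LM-CORE's exact architecture): the trained model is left untouched while an external, swappable knowledge store supplies contextually relevant facts that are linearized into the input. `retrieve` and `lm_generate` are hypothetical stand-ins.

```python
# Knowledge lives outside the model, so updating the store requires no retraining.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]   # (subject, relation, object)

def knowledge_conditioned_answer(query: str,
                                 retrieve: Callable[[str, int], List[Triple]],
                                 lm_generate: Callable[[str], str],
                                 k: int = 5) -> str:
    triples = retrieve(query, k)                          # contextually relevant facts
    context = " ".join(f"{s} {r} {o}." for s, r, o in triples)
    prompt = f"knowledge: {context} question: {query} answer:"
    return lm_generate(prompt)
```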