This paper proposes a novel training method for end-to-end scene text recognition. End-to-end scene text recognition offers high recognition accuracy, especially when using Transformer-based encoder-decoder models. Training a highly accurate end-to-end model requires a large image-to-text paired dataset for the target language. However, such data are difficult to collect, particularly for resource-poor languages. To overcome this difficulty, the proposed method utilizes well-prepared large datasets in resource-rich languages such as English to train an encoder-decoder model for a resource-poor language. Our key idea is to build a model whose encoder reflects knowledge of multiple languages while the decoder specializes in the resource-poor language. To this end, the proposed method pre-trains the encoder with a multilingual dataset that combines the resource-poor language's dataset and resource-rich languages' datasets, so that it learns language-invariant knowledge for scene text recognition. The proposed method also pre-trains the decoder with the resource-poor language's dataset, making the decoder better suited to the resource-poor language. Experiments on Japanese scene text recognition using a small public dataset demonstrate the effectiveness of the proposed method.
This paper proposes a novel knowledge distillation method for dialogue sequence labeling. Dialogue sequence labeling is a supervised learning task that estimates a label for each utterance in a target dialogue document, and it is useful for many applications such as dialogue act estimation. Accurate labeling is often achieved by hierarchically structured large models consisting of utterance-level and dialogue-level networks that capture the contexts within an utterance and between utterances, respectively. However, owing to their large model size, such models cannot be deployed on resource-constrained devices. To overcome this difficulty, we focus on knowledge distillation, which trains a small model by distilling the knowledge of a large and high-performance teacher model. Our key idea is to distill the knowledge while keeping the complex contexts captured by the teacher model. To this end, the proposed method, hierarchical knowledge distillation, trains the small model by distilling the knowledge of both the utterance-level and dialogue-level contexts learned in the teacher model, so that the student mimics the teacher's output at each level. Experiments on dialogue act estimation and call scene segmentation demonstrate the effectiveness of the proposed method.
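The level-wise distillation idea above can be sketched as a weighted sum of KL-divergence terms, one per hierarchy level. This is a minimal illustrative sketch, not the paper's implementation: the function name `hierarchical_kd_loss`, the logit shapes, and the weighting scheme are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-9):
    # KL divergence between teacher (p) and student (q) distributions
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def hierarchical_kd_loss(teacher_utt, student_utt,
                         teacher_dlg, student_dlg, alpha=0.5):
    """Distill utterance-level and dialogue-level teacher outputs separately,
    so the student mimics the context captured at each level of the hierarchy."""
    l_utt = kl(softmax(teacher_utt), softmax(student_utt))
    l_dlg = kl(softmax(teacher_dlg), softmax(student_dlg))
    return alpha * l_utt + (1 - alpha) * l_dlg
```

When the student's logits match the teacher's at both levels, the loss is zero; any mismatch at either level contributes a positive penalty, which is what keeps both kinds of context in the small model.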
Video understanding is a growing field and a subject of intense research, which includes many interesting tasks for understanding both spatial and temporal information, e.g., action detection, action recognition, video captioning, and video retrieval. One of the most challenging problems in video understanding is feature extraction, i.e., extracting a contextual visual representation from a given untrimmed video, owing to the long and complicated temporal structure of unconstrained videos. Different from existing approaches, which apply a pre-trained backbone network as a black box to extract visual representations, our approach aims to extract the most contextual information with an explainable mechanism. As we observe, humans typically perceive a video through the interactions between three main factors: the actors, the relevant objects, and the surrounding environment. Therefore, it is crucial to design a contextual, explainable video representation extraction that can capture each of these factors and model the relationships between them. In this paper, we discuss approaches that incorporate the human perception process into modeling actors, objects, and the environment. We choose video paragraph captioning and temporal action detection to illustrate the effectiveness of human-perception-based contextual representation in video understanding. Source code is publicly available at https://github.com/UARK-AICV/Video_Representation.
Video anomaly detection (VAD) -- commonly formulated as a multiple-instance learning problem in a weakly supervised manner due to its labor-intensive nature -- is a challenging problem in video surveillance, where the anomalous frames need to be localized in an untrimmed video. In this paper, we first propose to utilize the ViT-encoded visual features from CLIP, in contrast with the conventional C3D or I3D features in the domain, to efficiently extract discriminative representations. We then model long- and short-range temporal dependencies and nominate the snippets of interest by leveraging our proposed Temporal Self-Attention (TSA). The ablation study conducted on each component confirms its effectiveness in the problem, and the extensive experiments show that our proposed CLIP-TSA outperforms the existing state-of-the-art (SOTA) methods by a large margin on two commonly used benchmark datasets in the VAD problem (UCF-Crime and ShanghaiTech Campus). The source code will be made publicly available upon acceptance.
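The core of snippet nomination via temporal self-attention can be illustrated with a plain scaled dot-product attention over per-snippet features. This is a minimal sketch under stated assumptions, not the paper's TSA module: the function name `temporal_self_attention`, the single-head formulation without learned projections, and the mean-attention saliency score are all simplifications for illustration.

```python
import numpy as np

def temporal_self_attention(feats):
    """feats: (T, D) array of snippet features for one video.
    Returns temporally attended features and a per-snippet saliency
    score that can be used to nominate snippets of interest."""
    d = feats.shape[-1]
    scores = feats @ feats.T / np.sqrt(d)           # (T, T) pairwise temporal affinities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)        # each row is a distribution over time
    attended = attn @ feats                         # context-aware snippet features
    saliency = attn.mean(axis=0)                    # how strongly each snippet is attended
    return attended, saliency
```

Snippets with high saliency receive attention from many other time steps, which is one simple proxy for "of interest" in a weakly supervised setting.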
Telework "avatar work," in which people with disabilities can engage in physical work such as customer service, is being implemented in society. In order to enable avatar work in a variety of occupations, we propose a mobile sales system using a mobile frozen-drink machine and an avatar robot, "OriHime," focusing on mobile customer service such as peddling. The effect of the system's peddling on customers is examined based on the results of video annotation.
Video paragraph captioning aims to generate a multi-sentence description of an untrimmed video with several temporal event locations in coherent storytelling. Following the human perception process, where a scene is effectively understood by decomposing it into visual (e.g., human, animal) and non-visual components (e.g., action, relations) under the mutual influence of vision and language, we first propose a visual-linguistic (VL) feature. In the proposed VL feature, the scene is modeled by three modalities: (i) a global visual environment; (ii) local visual main agents; and (iii) linguistic scene elements. We then introduce an autoregressive Transformer-in-Transformer (TinT) to simultaneously capture the semantic coherence of intra- and inter-event contents within a video. Finally, we present a new VL contrastive loss function to guarantee that the learnt embedding features match the caption semantics. Comprehensive experiments and extensive ablation studies on the ActivityNet Captions and YouCookII datasets show that the proposed Visual-Linguistic Transformer-in-Transformer (VLTinT) outperforms prior state-of-the-art methods on accuracy and diversity.
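A contrastive loss that pulls matched video-caption embedding pairs together and pushes mismatched in-batch pairs apart is commonly realized as a symmetric InfoNCE objective. The sketch below is an assumption-laden illustration of that general idea, not the paper's exact VL contrastive loss: the function name `vl_contrastive_loss`, the temperature value, and the symmetric formulation are generic choices.

```python
import numpy as np

def vl_contrastive_loss(video_emb, text_emb, tau=0.07):
    """video_emb, text_emb: (N, D) batches where row i of each is a
    matched (video event, caption) pair. Symmetric InfoNCE: diagonal
    pairs are pulled together, off-diagonal pairs pushed apart."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / tau                      # (N, N) cosine similarities / temperature

    def ce_diag(l):
        # cross-entropy where the correct class for row i is column i
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    return 0.5 * (ce_diag(logits) + ce_diag(logits.T))
```

The loss is near zero when each video embedding is closest to its own caption and grows when pairings are scrambled, which is the property that keeps learnt embeddings aligned with caption semantics.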
The success of neural fields in 3D vision tasks is now indisputable. Following this trend, several methods aiming at visual localization (e.g., SLAM) have been proposed that estimate distance or density fields using neural fields. However, it is difficult to achieve high localization performance with density-field-based methods alone, such as Neural Radiance Fields (NeRF), because they do not provide density gradients in most empty regions. On the other hand, distance-field-based methods such as Neural Implicit Surfaces (NeuS) have limitations in the object surface shapes they can represent. This paper proposes the Neural Density-Distance Field (NeDDF), a novel 3D representation that reciprocally constrains the distance and density fields. We extend the distance-field formulation to shapes with no explicit boundary surface, such as fur or smoke, which enables explicit conversion from the distance field to the density field. The consistent distance and density fields realized by this explicit conversion provide both robustness to initial values and high-quality registration. Furthermore, the consistency between the fields allows fast convergence from sparse point clouds. Experiments show that NeDDF achieves high localization performance while providing comparable results in novel view synthesis. The code is available at https://github.com/ueda0319/neddf.
Humans use robots not only as tools but also as partners that interact with, assist, and cooperate with humans, forming human-robot interaction. In these interactions, feedback loops can cause unstable force interactions in which force escalation puts humans at risk. Previous studies have analyzed the stability of voluntary interactions but have neglected involuntary behaviors in the interactions. In contrast to previous studies, this study considers an involuntary behavior, the human force reproduction bias, in discrete-event human-robot interactions. We derived asymptotic stability conditions based on a mathematical model of the bias and found that the bias stabilizes the interaction toward the implicit equilibrium point when far from that point and destabilizes it near that point. The bias model, the convergence of the interaction toward the implicit equilibrium point, and the divergence around that point were validated through behavioral experiments on three interactions using three different body parts: the fingers, the wrists, and the feet. Our results suggest that humans, through their involuntary behaviors, implicitly ensure stable and close relationships between themselves and robots.
In this paper, we leverage the human perception process, which involves vision and language interaction, to generate a coherent paragraph description of untrimmed videos. We propose visual-linguistic (VL) features consisting of two modalities: (i) a vision modality to capture the global visual content of the entire scene, and (ii) a language modality to extract scene-element descriptions of both human and non-human objects (e.g., animals, vehicles) and visual and non-visual elements (e.g., relations, activities). Furthermore, we propose to train our proposed VLCap under a contrastive VL loss. Experiments and ablation studies on the ActivityNet Captions and YouCookII datasets show that our VLCap outperforms existing SOTA methods on both accuracy and diversity metrics.
We propose a novel action-sequence planner based on a combination of affordance recognition and a neural forward model that predicts the effects of affordance execution. By performing affordance recognition on predicted futures, we avoid relying on explicit affordance-effect definitions for multi-step planning. Because the system learns affordance effects from experience data, it can foresee not only the canonical effects of an affordance but also situation-specific side effects. This enables the system to avoid planning failures due to such non-canonical effects, and to exploit non-canonical effects to achieve a given goal. We evaluate the system in simulation on a set of test tasks that require considering both canonical and non-canonical affordance effects.