The problem of comparing two corpora and identifying words whose usage differs between them arises frequently in the digital humanities and computational social science. This is commonly approached by training word embeddings on each corpus, aligning the vector spaces, and looking for words whose cosine distance in the aligned space is large. However, these methods often require extensive vocabulary filtering to perform well and, as we demonstrate in this work, result in unstable, and hence less reliable, results. We propose an alternative approach that does not use vector space alignment and instead considers the neighbors of each word. The method is simple, interpretable and stable. We demonstrate its effectiveness in 9 different setups, considering different corpus-splitting criteria (age, gender and profession of tweet authors, time of tweet) and different languages (English, French and Hebrew).
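A minimal sketch of the neighbor-based idea (our own assumptions, using gensim; the function names, hyperparameters, and exact scoring are ours, not necessarily the paper's): train one embedding per corpus, then rank each shared word by how much its nearest-neighbor sets overlap across the two spaces, a small overlap suggesting a usage change.

```python
from gensim.models import Word2Vec

def train_embeddings(sentences):
    # One embedding space per corpus; no alignment step is needed.
    return Word2Vec(sentences=sentences, vector_size=100, min_count=20, workers=4).wv

def rank_usage_change(kv_a, kv_b, k=100):
    # Score each word shared by both vocabularies by the overlap of its
    # top-k neighbor sets; sort so the most-changed words come first.
    shared = set(kv_a.index_to_key) & set(kv_b.index_to_key)
    overlap = {
        w: len({x for x, _ in kv_a.most_similar(w, topn=k)}
               & {x for x, _ in kv_b.most_similar(w, topn=k)})
        for w in shared
    }
    return sorted(shared, key=overlap.get)
```

Because the score depends only on discrete neighbor lists, it is directly interpretable (the non-shared neighbors show how the usage differs) and sidesteps the instability of alignment-based cosine distances.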
The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
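A schematic sketch of how such a detector-classifier combination could be wired (an illustrative assumption, not the paper's released code): an off-the-shelf detector proposes vehicle boxes, and a hierarchical classifier such as HRN assigns each crop a three-level fine-grained label.

```python
def fine_grained_detect(image, detector, hrn_classifier):
    # detector(image) is assumed to yield (box, score) pairs, e.g. from YOLOv5;
    # hrn_classifier(crop) is assumed to return one label per hierarchy level.
    results = []
    for box, score in detector(image):
        crop = image.crop(box)                          # PIL-style crop
        level1, level2, level3 = hrn_classifier(crop)   # e.g. type / make / model
        results.append((box, (level1, level2, level3), score))
    return results
```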
Long-term OCR services aim to provide high-quality output to their users at competitive costs. It is essential to upgrade the models because of the complex data loaded by the users. The service providers encourage users who provide data where the OCR model fails by rewarding them based on data complexity, readability, and the available budget. Hitherto, OCR work has focused on preparing models on standard datasets without considering the end users. We propose a strategy of consistently upgrading an existing Handwritten Hindi OCR model three times on a dataset of 15 users, with a fixed budget of 4 users for each iteration. For the first iteration, the model trains directly on the dataset from the first four users. For each subsequent iteration, all remaining users write a page each, which the service provider then analyzes to select the 4 (new) best users based on the quality of predictions on the human-readable words. The selected users write 23 more pages for upgrading the model. We upgrade the model with Curriculum Learning (CL) on the data available in the current iteration and compare it with a subset from previous iterations. The upgraded model is tested on a held-out set of one page each from all 23 users. We provide insights from our investigations into the effect of CL, user selection, and especially data from unseen writing styles. Our work can be used for long-term OCR services in crowd-sourcing scenarios, for both service providers and end users.
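As a hypothetical sketch of the selection step (the scoring rule and names are our assumptions, not the authors' code), candidate users could be ranked by the model's word accuracy on their sample page, keeping the users whose pages the current model reads worst, since that is where the OCR model fails:

```python
def word_accuracy(model, page):
    # Fraction of human-readable (word_image, ground_truth) pairs on the
    # page that the current model predicts correctly.
    return sum(model.predict(img) == gt for img, gt in page) / max(len(page), 1)

def select_users(model, candidates, budget=4):
    # candidates: list of (user_id, page) pairs, one sample page per user.
    ranked = sorted(candidates, key=lambda c: word_accuracy(model, c[1]))
    return [user_id for user_id, _ in ranked[:budget]]
```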
Handwritten Text Recognition (HTR) is more interesting and challenging than printed text recognition due to uneven variations in the handwriting styles of writers, content, and time. HTR becomes more challenging for the Indic languages because of (i) multiple characters combining to form conjuncts, which increases the number of characters in the respective languages, and (ii) nearly 100 unique basic Unicode characters in each Indic script. Recently, many recognition methods based on the encoder-decoder framework have been proposed to handle such problems. They still face many challenges, such as image blur and incomplete characters due to varying writing styles and ink density. We argue that most encoder-decoder methods are based on local visual features without explicit global semantic information. In this work, we enhance the performance of Indic handwritten text recognizers using global semantic information. We use a semantic module in an encoder-decoder framework to extract global semantic information for recognizing Indic handwritten texts. The semantic information is used both in the encoder, for supervision, and in the decoder, for initialization. It is predicted from the word embedding of a pre-trained language model. Extensive experiments demonstrate that the proposed framework achieves state-of-the-art results on handwritten texts of ten Indic languages.
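A schematic PyTorch sketch of such a semantic module (the dimensions and wiring are our assumptions): the encoder's global visual feature is projected into the embedding space of a pre-trained language model, supervised against the target word's embedding, and reused to initialize the decoder state.

```python
import torch
import torch.nn as nn

class SemanticModule(nn.Module):
    def __init__(self, feat_dim=512, emb_dim=300, dec_dim=512):
        super().__init__()
        self.to_semantic = nn.Linear(feat_dim, emb_dim)
        self.to_decoder_init = nn.Linear(emb_dim, dec_dim)

    def forward(self, enc_feats, target_word_emb=None):
        # enc_feats: (B, T, feat_dim) sequence of encoder features.
        global_feat = enc_feats.mean(dim=1)
        semantic = self.to_semantic(global_feat)     # predicted global semantics
        loss = None
        if target_word_emb is not None:              # supervision in the encoder
            loss = nn.functional.mse_loss(semantic, target_word_emb)
        h0 = torch.tanh(self.to_decoder_init(semantic))  # decoder initialization
        return h0, loss
```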
Reinforcement Learning (RL) algorithms are known to scale poorly to environments with many available actions, requiring numerous samples to learn an optimal policy. The traditional approach of considering the same fixed action space in every possible state implies that the agent, while learning to maximize its reward, must also learn to ignore irrelevant actions such as $\textit{inapplicable actions}$ (i.e., actions that have no effect on the environment when performed in a given state). Knowing this information can help reduce the sample complexity of RL algorithms by masking the inapplicable actions from the policy distribution so that only actions relevant to finding an optimal policy are explored. This is typically done in an ad hoc manner, with hand-crafted domain logic added to the RL algorithm. In this paper, we propose a more systematic approach to introducing this knowledge into the algorithm. We (i) standardize the way knowledge can be manually specified to the agent; and (ii) present a new framework to autonomously learn these state-dependent action constraints jointly with the policy. We show experimentally that learning inapplicable actions greatly improves the sample efficiency of the algorithm by providing a reliable signal to mask out irrelevant actions. Moreover, we demonstrate that, thanks to the transferability of the knowledge acquired, it can be reused in other tasks to make the learning process more efficient.
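A minimal sketch of the masking mechanism itself (standard practice, not this paper's learning framework): setting the logits of inapplicable actions to negative infinity gives them zero probability, so the policy only explores actions that can affect the environment.

```python
import torch
from torch.distributions import Categorical

def sample_masked_action(logits, applicable):
    # applicable[i] is True when action i can affect the environment in the
    # current state, whether hand-specified or learned as proposed above.
    masked = logits.masked_fill(~applicable, float("-inf"))
    return Categorical(logits=masked).sample()

# Example: 5 actions, of which actions 1 and 3 are inapplicable in this state.
logits = torch.randn(5)
applicable = torch.tensor([True, False, True, False, True])
action = sample_masked_action(logits, applicable)  # never samples 1 or 3
```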
Video Question Answering methods focus on commonsense reasoning and visual cognition of objects or persons and their interactions over time. Current VideoQA approaches ignore the textual information present in the video. Instead, we argue that textual information is complementary to the action and provides essential contextualisation cues to the reasoning process. To this end, we propose a novel VideoQA task that requires reading and understanding the text in the video. To explore this direction, we focus on news videos and require QA systems to comprehend and answer questions about the topics presented by combining visual and textual cues in the video. We introduce the ``NewsVideoQA'' dataset that comprises more than $8,600$ QA pairs on $3,000+$ news videos obtained from diverse news channels from around the world. We demonstrate the limitations of current Scene Text VQA and VideoQA methods and propose ways to incorporate scene text information into VideoQA methods.
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
Pose graph optimization (PGO) is a special case of the simultaneous localization and mapping problem, where the only variables to be estimated are pose variables and the only measurements are inter-pose constraints. The vast majority of PGO techniques are vertex-based (the variables are robot poses), but recent work has parameterized the pose graph optimization problem in a relative fashion (the variables are the transformations between poses), utilizing a minimum cycle basis to maximize the sparsity of the problem. We explore the construction of a cycle basis in an incremental manner while maximizing sparsity. We validate an algorithm that incrementally constructs a sparse cycle basis and compare its performance to that of a minimum cycle basis. Additionally, we present an algorithm to approximate the minimum cycle basis of two combined graphs, a situation that arises commonly in multi-agent scenarios. Finally, the relative parameterization of pose graph optimization has been limited to using rigid-body transformations on SE(2) or SE(3) as the constraints between poses. We introduce a method to allow the use of lower-degree-of-freedom measurements in the relative pose graph optimization problem. We extensively validate our algorithms on standard benchmarks, simulated datasets, and custom hardware.
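As a toy illustration of the cycle-basis idea (not the paper's incremental algorithm), a pose graph can be treated as an undirected graph whose loop closures create cycles; networkx can extract a minimum cycle basis whose short cycles are what the relative parameterization exploits for sparsity:

```python
import networkx as nx

# Odometry chain 0-1-2-3 plus two loop closures (3,0) and (1,3).
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])

basis = nx.minimum_cycle_basis(G)
print(basis)  # e.g. [[0, 1, 3], [1, 2, 3]]: short cycles keep the problem sparse
```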
In this work, we address the problem of generating speech from silent lip videos for any speaker in the wild. In stark contrast to previous works, our method (i) is not restricted to a fixed number of speakers, (ii) does not explicitly impose constraints on the domain or the vocabulary, and (iii) deals with videos recorded in the wild, as opposed to lab settings. The task presents a host of challenges, the key one being that many features of the desired target speech (such as voice, pitch and linguistic content) cannot be entirely inferred from the silent face video. To handle these stochastic variations, we propose a new VAE-GAN architecture that learns to associate the lip and speech sequences amidst the variations. With the help of multiple powerful discriminators that guide the training process, our generator learns to synthesize speech sequences in any voice for the lip movements of any person. Extensive experiments on multiple datasets show that we outperform all baselines by a large margin. Further, our network can be fine-tuned on videos of specific identities to achieve performance comparable to single-speaker models trained with $4\times$ more data. We conduct numerous ablation studies to analyze the effect of the different modules of our architecture. We also provide a demo video demonstrating several qualitative results, along with code and trained models, on our website: -synthesis}}
Many people with some form of hearing loss consider lipreading their primary mode of day-to-day communication. However, finding resources to learn or improve one's lipreading skills can be challenging. This was further exacerbated during the COVID-19 pandemic due to restrictions on direct interaction with peers and speech therapists. Today, online MOOC platforms like Coursera and Udemy have become the most effective form of training for many skills. However, online lipreading resources are scarce, as creating them is an extensive process requiring months of manual effort to record hired actors. Because of the manual pipeline, such platforms are also limited in vocabulary, supported languages, accents, and speakers, and are expensive to use. In this work, we investigate the possibility of replacing videos of real human speakers with synthetically generated ones. Synthetic data can easily incorporate larger vocabularies, accents, even local languages, and many more speakers. We propose an end-to-end automated pipeline to develop such a platform using state-of-the-art talking-head video generator networks, text-to-speech models, and computer vision techniques. We then perform an extensive human evaluation using carefully designed lipreading exercises to validate the quality of our platform against existing lipreading platforms. Our studies concretely point to the potential of our approach for developing a large-scale lipreading MOOC platform that can impact millions of people with hearing loss.