Scene text recognition for low-resource Indian languages is challenging because of complexities such as multiple scripts, fonts, text sizes, and orientations. In this work, we investigate the power of transfer learning for all the layers of deep scene text recognition networks from English to two common Indian languages. We perform experiments on the conventional CRNN model and STAR-Net to ensure generality. To study the effect of change in different scripts, we initially run our experiments on synthetic word images rendered using Unicode fonts. We show that the transfer of English models to simple synthetic datasets of Indian languages is not practical. Instead, we propose to apply transfer learning techniques among Indian languages due to the similarity of their n-gram distributions and visual features such as vowels and conjunct characters. We then study transfer learning among six Indian languages with varying complexities in fonts and word-length statistics. We also demonstrate that the learned features of the models transferred from other Indian languages are visually closer (and sometimes even better) than those transferred from English. We finally achieve gains of 6%, 5%, 2%, and 23% in word recognition rates (WRRs) over previous works on the IIIT-ILST Hindi, Telugu, and Malayalam datasets and MLT-17 Bangla. We further improve the MLT-17 Bangla results by plugging a novel correction BiLSTM into our model. We additionally release a dataset of around 440 scene images containing 500 Gujarati and 2535 Tamil words. WRRs improve over the baselines by 8%, 4%, 5%, and 3% on the MLT-19 Hindi and Bangla datasets and our Gujarati and Tamil datasets.
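To make the layer-transfer recipe concrete, here is a minimal PyTorch-style sketch (not the paper's code; the `cnn` and `classifier` attribute names are hypothetical): every weight is copied from a source-language checkpoint, and only the output projection is re-initialized for the target charset.

```python
import torch
import torch.nn as nn

def transfer_crnn(source_ckpt: str, crnn: nn.Module, target_charset_size: int,
                  freeze_convs: bool = False) -> nn.Module:
    """Initialize a CRNN for a new language from a source-language checkpoint."""
    state = torch.load(source_ckpt, map_location="cpu")
    # Copy every layer except the final classifier, whose shape depends on the charset.
    state = {k: v for k, v in state.items() if not k.startswith("classifier")}
    crnn.load_state_dict(state, strict=False)
    # Re-initialize the output projection for the target alphabet (+1 for the CTC blank).
    hidden = crnn.classifier.in_features
    crnn.classifier = nn.Linear(hidden, target_charset_size + 1)
    if freeze_convs:  # optionally keep low-level visual features fixed
        for p in crnn.cnn.parameters():
            p.requires_grad = False
    return crnn
```

Transferring from a related Indic language rather than English then amounts to choosing a different `source_ckpt`.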
Scene-text recognition is remarkably better in Latin languages than in non-Latin languages due to several factors such as multiple fonts, simplistic vocabulary statistics, updated data-generation tools, and writing systems. This paper examines the possible reasons for low accuracy by comparing English datasets with non-Latin languages. We compare various features such as the size (width and height) of word images and word-length statistics. Over the last decade, generating synthetic datasets with powerful deep learning techniques has tremendously improved scene-text recognition. Several controlled experiments are performed on English by varying the number of (i) fonts used to create the synthetic data and (ii) created word images. We discover that these factors are critical for scene-text recognition systems. The English synthetic datasets utilize over 1400 fonts, while Arabic and other non-Latin datasets use fewer than 100 fonts for data generation. Since some of these languages are spoken in different regions, we garner additional fonts through a region-based search to improve the scene-text recognition models for Arabic and Devanagari. We improve the word recognition rates (WRRs) on the Arabic MLT-17 and MLT-19 datasets by 24.54% and 2.32%, respectively, compared to previous works or baselines. We achieve WRR gains of 7.88% and 3.72% for the IIIT-ILST and MLT-19 Devanagari datasets.
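A minimal sketch of the kind of font-driven synthetic word-image generation discussed above, using Pillow; the font directory, canvas margins, and glyph size are assumptions, and real pipelines add backgrounds, noise, and geometric distortions on top of this.

```python
import random
from pathlib import Path
from PIL import Image, ImageDraw, ImageFont

def render_word(word: str, font_dir: str = "fonts/", size: int = 48) -> Image.Image:
    """Render one synthetic word image with a randomly chosen font."""
    font_path = random.choice(list(Path(font_dir).glob("*.ttf")))
    font = ImageFont.truetype(str(font_path), size)
    # Measure the rendered word to size the canvas with a small margin.
    left, top, right, bottom = font.getbbox(word)
    img = Image.new("L", (right - left + 20, bottom - top + 20), color=255)
    ImageDraw.Draw(img).text((10 - left, 10 - top), word, font=font, fill=0)
    return img
```

The number of `.ttf` files placed in `font_dir` directly controls the font diversity that the paper identifies as critical.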
Long-term OCR services aim to provide high-quality output to their users at competitive costs. It is essential to upgrade the models because of the complex data uploaded by the users. The service providers encourage users who provide data on which the OCR model fails by rewarding them based on data complexity, readability, and the available budget. Hitherto, OCR works have prepared models on standard datasets without considering the end users. We propose a strategy of consistently upgrading an existing handwritten Hindi OCR model three times on a dataset of 15 users. We fix a budget of 4 users for each iteration. For the first iteration, the model trains directly on the dataset from the first four users. For the remaining iterations, all remaining users write a page each, which the service providers later analyze to select the 4 (new) best users based on the quality of predictions on the human-readable words. The selected users write 23 more pages for upgrading the model. We upgrade the model with Curriculum Learning (CL) on the data available in the current iteration and compare against subsets from previous iterations. The upgraded model is tested on a held-out set of one page each from all 23 users. We provide insights from our investigations into the effects of CL, user selection, and especially data from unseen writing styles. Our work can be used for long-term OCR services in crowd-sourcing scenarios for both service providers and end users.
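A hedged sketch of a simple curriculum schedule of the sort described (the difficulty measure and the three-stage split are assumptions, not the paper's exact procedure): samples are ranked easy-to-hard and fed to training in growing subsets.

```python
def curriculum_stages(samples, difficulty, n_stages=3):
    """Yield growing easy-to-hard training subsets (a simple CL schedule).

    samples    : list of (image, transcript) pairs
    difficulty : callable scoring a sample, e.g. the current model's CER on it
    """
    ranked = sorted(samples, key=difficulty)  # easy pages first
    for s in range(1, n_stages + 1):
        # Each stage adds the next harder slice while retaining the easier data.
        yield ranked[: max(1, s * len(ranked) // n_stages)]
```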
Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for challenging recognition rather than pure visual classification. However, how to effectively model the linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model operating on noisy input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting. First, the autonomous design enforces explicit language modeling by decoupling the recognizer into a vision model and a language model and blocking gradient flow between the two models. Second, a novel bidirectional cloze network (BCN) is proposed as the language model, based on bidirectional feature representation. Third, we propose an iterative correction scheme for the language model, which can effectively alleviate the impact of noisy input. Finally, to polish ABINet++ in long text recognition, we propose to aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position and content attention module which integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, which consistently demonstrates the superiority of our method in various environments, especially on low-quality images. Besides, extensive experiments on both English and Chinese also prove that a text spotter incorporating our language modeling method can significantly improve its performance in both accuracy and speed compared with commonly used attention-based recognizers.
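A conceptual sketch of the autonomous/iterative design (the interfaces are hypothetical; `vision_model`, `language_model`, and `fusion` merely stand in for ABINet++'s actual modules): the vision prediction is repeatedly refined by an explicitly decoupled language model, with gradients blocked between the two.

```python
import torch

def iterative_recognition(image, vision_model, language_model, fusion, n_iters=3):
    """ABINet-style loop: a vision prediction refined by an explicit language model.

    vision_model(image)        -> per-character probability sequence
    language_model(probs)      -> refined probabilities given (noisy) text input
    fusion(visual, linguistic) -> fused prediction used for the next iteration
    """
    probs = vision_model(image)
    visual = probs
    for _ in range(n_iters):
        # detach() blocks gradient flow between the two models, keeping the
        # language model "autonomous" as the abstract argues.
        refined = language_model(probs.detach())
        probs = fusion(visual, refined)
    return probs
```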
Image-based sequence recognition has been a longstanding research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences of arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performance in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies its generality.
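A minimal sketch of the three-stage design the abstract describes (convolutional feature extraction, recurrent sequence modeling, CTC transcription); the layer sizes here are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CRNN: conv features -> BiLSTM sequence model -> per-timestep logits."""

    def __init__(self, n_classes: int, hidden: int = 256):
        super().__init__()
        self.cnn = nn.Sequential(                       # input: (B, 1, 32, W)
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),            # collapse height -> (B, 256, 1, W/4)
        )
        self.rnn = nn.LSTM(256, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes + 1)  # +1 for the CTC blank

    def forward(self, x):
        f = self.cnn(x).squeeze(2).permute(0, 2, 1)     # (B, W', 256)
        f, _ = self.rnn(f)
        return self.fc(f).log_softmax(-1)               # per-timestep class log-probs

# Training uses CTC, so no character-level segmentation is needed, e.g.:
# loss = nn.CTCLoss(blank=n_classes)(logits.permute(1, 0, 2), targets,
#                                    input_lengths, target_lengths)
```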
Handwritten Text Recognition (HTR) is more interesting and challenging than printed text recognition due to uneven variations in the handwriting style of the writers, content, and time. HTR becomes more challenging for the Indic languages because of (i) multiple characters combined to form conjuncts, which increase the number of characters of the respective languages, and (ii) nearly 100 unique basic Unicode characters in each Indic script. Recently, many recognition methods based on the encoder-decoder framework have been proposed to handle such problems. They still face many challenges, such as image blur and incomplete characters due to varying writing styles and ink density. We argue that most encoder-decoder methods are based on local visual features without explicit global semantic information. In this work, we enhance the performance of Indic handwritten text recognizers using global semantic information. We use a semantic module in an encoder-decoder framework for extracting global semantic information to recognize the Indic handwritten texts. The semantic information is used in both the encoder for supervision and the decoder for initialization. The semantic information is predicted from the word embedding of a pre-trained language model. Extensive experiments demonstrate that the proposed framework achieves state-of-the-art results on handwritten texts of ten Indic languages.
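A hedged sketch of how such a semantic module could look (module and dimension names are assumptions): a global descriptor is predicted from the encoder features, supervised against a pre-trained word embedding, and projected into the decoder's initial state.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticModule(nn.Module):
    """Predict a global semantic vector from encoder features (a sketch)."""

    def __init__(self, feat_dim: int, emb_dim: int, dec_hidden: int):
        super().__init__()
        self.to_semantic = nn.Linear(feat_dim, emb_dim)
        self.to_decoder_init = nn.Linear(emb_dim, dec_hidden)

    def forward(self, enc_feats, target_emb=None):
        # Pool the encoder feature sequence (B, T, feat_dim) into one descriptor.
        semantic = self.to_semantic(enc_feats.mean(dim=1))
        loss = None
        if target_emb is not None:  # supervise with a pre-trained word embedding
            loss = 1 - F.cosine_similarity(semantic, target_emb).mean()
        return self.to_decoder_init(semantic), loss
```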
Over the past decades, scene text recognition has gained worldwide attention from both academia and actual users due to its importance in a wide range of applications. Despite achievements in optical character recognition, scene text recognition remains challenging due to inherent problems such as distortions or irregular layout. Most existing approaches mainly leverage recurrence- or convolution-based neural networks. However, while recurrent neural networks (RNNs) usually suffer from slow training speed due to sequential computation and encounter problems such as vanishing gradients or bottlenecks, CNNs trade off between complexity and performance. In this paper, we introduce SAFL, a self-attention-based neural network model with focal loss for scene text recognition, to overcome the limitations of existing approaches. Using focal loss instead of negative log-likelihood helps the model focus more on low-frequency samples during training. Moreover, to deal with distorted and irregular text, we exploit a spatial transformer network (STN) to rectify text before passing it to the recognition network. We perform experiments to compare the performance of the proposed model with seven benchmarks. The numerical results show that our model achieves the best performance.
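A standard focal-loss formulation over per-character logits, as a sketch of the loss substitution the abstract describes (`gamma = 2` is a common default, not necessarily the paper's setting):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0):
    """Focal loss over per-character predictions.

    logits : (N, C) raw class scores; targets : (N,) ground-truth class indices
    """
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of the truth
    pt = log_pt.exp()
    # (1 - pt)^gamma is near 0 for confident (easy) predictions, so rare,
    # hard characters dominate the gradient instead of frequent easy ones.
    return -((1 - pt) ** gamma * log_pt).mean()
```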
Automatic identification of scripts is an essential component of a multilingual OCR engine. In this paper, we present an efficient, lightweight, real-time, and on-device spatial-attention-based CNN-LSTM network for scene text script identification, feasible for deployment on resource-constrained mobile devices. Our network consists of a CNN equipped with a spatial attention module, which helps reduce the spatial distortions present in natural images. This allows the feature extractor to generate rich image representations while ignoring deformities, thereby boosting the performance of this fine-grained classification task. The network also employs residual convolutional blocks to build a deep network that focuses on the discriminative features of a script. The CNN learns text-feature representations by identifying each character as belonging to a particular script, and long-term spatial dependencies within the text are captured using the sequence-learning capabilities of the LSTM layers. Combining the spatial attention mechanism with residual convolutional blocks, we are able to enhance the performance of the baseline CNN to build an end-to-end trainable network for script identification. Experimental results on several standard benchmarks demonstrate the effectiveness of our method. The network achieves accuracy competitive with state-of-the-art methods and is superior in terms of network size, with a total of just 1.1 million parameters and an inference time of 2.7 milliseconds.
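A minimal sketch of a spatial attention module of the kind described (a common design; the paper's exact architecture may differ): a learned per-location mask re-weights the CNN feature maps so text regions dominate.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weight CNN feature maps with a learned spatial mask."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 8, 1), nn.ReLU(),
            nn.Conv2d(channels // 8, 1, 1), nn.Sigmoid(),  # (B, 1, H, W) in [0, 1]
        )

    def forward(self, x):
        # Emphasize text regions and suppress distorted background.
        return x * self.mask(x)
```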
Handwritten Chinese text recognition (HCTR) has been an active research topic for decades. However, most previous studies focus only on the recognition of cropped text-line images, ignoring the errors caused by text-line detection in real-world applications. Although some approaches aimed at page-level text recognition have been proposed in recent years, they are either limited to simple layouts or require very detailed annotations, including expensive line-level or even character-level bounding boxes. To this end, we propose PageNet for end-to-end weakly supervised page-level HCTR. PageNet detects and recognizes characters and predicts the reading order among them, which is more robust and flexible when dealing with complex layouts, including multi-directional and curved text lines. Utilizing the proposed weakly supervised learning framework, PageNet requires only transcript annotations for real data; nevertheless, it can still output detection and recognition results at both the character and line levels, avoiding the labor and cost of labeling bounding boxes for characters and text lines. Extensive experiments on five datasets demonstrate the superiority of PageNet over existing weakly supervised and fully supervised page-level methods. These experimental results may spark further research beyond the realm of existing methods based on connectionist temporal classification or attention. The source code is available at https://github.com/shannanyinxiang/pagenet.
Leveraging the advances of natural language processing, most recent scene text recognizers adopt an encoder-decoder architecture where text images are first converted to representative features and then a sequence of characters via `sequential decoding'. However, scene text images suffer from rich noise of different sources such as complex backgrounds and geometric distortions, which often confuse the decoder and lead to incorrect alignment of visual features at noisy decoding time steps. This paper presents I2C2W, a novel scene text recognition technique that is tolerant to geometric and photometric degradation by decomposing scene text recognition into two inter-connected tasks. The first task focuses on image-to-character (I2C) mapping, which detects a set of character candidates from images based on different alignments of visual features in a non-sequential way. The second task tackles character-to-word (C2W) mapping, which recognizes scene text by decoding words from the detected character candidates. The direct learning from character semantics (instead of noisy image features) corrects falsely detected character candidates effectively, which improves the final text recognition accuracy greatly. Extensive experiments over nine public datasets show that the proposed I2C2W outperforms the state-of-the-art by large margins for challenging scene text datasets with various curvature and perspective distortions. It also achieves very competitive recognition performance over multiple normal scene text datasets.
This work proposes an attention-based sequence-to-sequence model for handwritten word recognition and explores transfer learning for data-efficient training of HTR systems. To overcome training data scarcity, this work leverages models pre-trained on scene text images as a starting point for tuning the handwriting recognition models. ResNet feature extraction and bidirectional-LSTM-based sequence modeling stages together form the encoder. The prediction stage consists of a decoder and a content-based attention mechanism. The effectiveness of the proposed end-to-end HTR system has been empirically evaluated on the novel multi-writer dataset Imgur5K and the IAM dataset. The experimental results evaluate the performance of the HTR framework, further supported by an in-depth analysis of the error cases. Source code and pre-trained models are available at https://github.com/dmitrijsk/attentionhtr.
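A sketch of one content-based (Bahdanau-style) attention step, the mechanism named in the prediction stage above; the dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ContentAttention(nn.Module):
    """One content-based attention step over encoder features."""

    def __init__(self, enc_dim: int, dec_dim: int, attn_dim: int = 128):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim)
        self.w_dec = nn.Linear(dec_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, enc_feats, dec_state):
        # enc_feats: (B, T, enc_dim); dec_state: (B, dec_dim)
        e = self.score(torch.tanh(self.w_enc(enc_feats) +
                                  self.w_dec(dec_state).unsqueeze(1)))
        alpha = e.softmax(dim=1)                 # where to look for the next character
        return (alpha * enc_feats).sum(dim=1)    # context vector fed to the decoder
```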
Unconstrained handwritten text recognition is a challenging computer vision task. It is traditionally handled by a two-step approach, combining line segmentation followed by text-line recognition. For the first time, we propose an end-to-end segmentation-free architecture for the task of handwritten document recognition: the Document Attention Network. In addition to text recognition, the model is trained to label text parts using begin and end tags in an XML-like fashion. The model is made up of an FCN encoder for feature extraction and a stack of transformer decoder layers for a recurrent token-by-token prediction process. It takes whole text documents as input and sequentially outputs characters as well as logical layout tokens. Contrary to existing segmentation-based approaches, the model is trained without using any segmentation labels. We achieve competitive results on the READ 2016 dataset at the page level, as well as at the double-page level, with CERs of 3.43% and 3.70%, respectively. We also provide page-level results for the RIMES 2009 dataset, reaching a CER of 4.54%. We provide all source code and pre-trained model weights at https://github.com/factodeeplearning/dan.
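A hedged sketch of the decoding loop such a model implies: characters and layout tags share one output vocabulary, so a single autoregressive pass emits both text and structure (the `encoder`/`decoder` interfaces and the `<sot>`/`<eot>` token names are assumptions, not the paper's code).

```python
import torch

@torch.no_grad()
def greedy_decode(encoder, decoder, image, vocab, max_len=1000):
    """Greedy token-by-token decoding over a joint character + layout-tag vocabulary."""
    memory = encoder(image)                  # FCN feature map used by cross-attention
    tokens = [vocab["<sot>"]]
    while len(tokens) < max_len:
        logits = decoder(torch.tensor([tokens]), memory)
        nxt = logits[0, -1].argmax().item()  # most likely next token
        tokens.append(nxt)
        if nxt == vocab["<eot>"]:            # stop at end-of-transcription
            break
    return tokens                            # mix of characters and XML-like layout tags
```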
The flourishing blossom of deep learning has witnessed the rapid development of text recognition in recent years. However, existing text recognition methods are mainly designed for English text, ignoring the pivotal role of Chinese text. As another widely spoken language, Chinese text recognition has extensive application markets in all its forms. Based on our observations, we attribute the scarce attention to Chinese text recognition to the lack of reasonable dataset construction standards, unified evaluation methods, and results of the existing baselines. To fill this gap, we manually collect Chinese text datasets from publicly available competitions, projects, and papers, and then divide them into four categories, including scene, web, document, and handwriting datasets. Furthermore, we evaluate a series of representative text recognition methods on these datasets with unified evaluation methods to provide experimental results. By analyzing the experimental results, we surprisingly observe that state-of-the-art baselines for recognizing English text cannot perform well on Chinese scenarios. We consider that there still remain numerous challenges under exploration due to the characteristics of Chinese text, which are quite different from English text. The code and datasets are publicly available at https://github.com/fudanvi/benchmarking-chinese-text-recognition.
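Two standard metrics that such a unified evaluation typically relies on, as a minimal sketch (word accuracy and character error rate via Levenshtein distance); the benchmark's exact protocol may differ.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            # deletion, insertion, substitution (free when characters match)
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def evaluate(preds, truths):
    """Unified metrics: exact word accuracy and character error rate (CER)."""
    acc = sum(p == t for p, t in zip(preds, truths)) / len(truths)
    cer = sum(edit_distance(p, t) for p, t in zip(preds, truths)) \
        / sum(len(t) for t in truths)
    return acc, cer
```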
Online and offline handwritten Chinese text recognition (HCTR) has been studied for decades. Early methods adopted over-segmentation-based strategies but suffered from low speed, insufficient accuracy, and the high cost of character-segmentation annotations. Recently, segmentation-free methods based on connectionist temporal classification (CTC) and attention mechanisms have dominated the field of HCTR. However, people actually read text character by character, especially for ideograms such as Chinese. This raises the question: is the segmentation-free strategy really the best solution for HCTR? To explore this issue, we propose a new segmentation-based method for recognizing handwritten Chinese text that is implemented using a simple yet effective fully convolutional network. A novel weakly supervised learning method is proposed so that the network can be trained using only transcript annotations; thus, the expensive character-segmentation annotations required by previous segmentation-based methods can be avoided. Owing to the lack of context modeling in fully convolutional networks, we propose a contextual regularization method to integrate contextual information into the network during the training stage, which can further improve recognition performance. Extensive experiments on four widely used benchmarks, namely CASIA-HWDB, CASIA-OLHWDB, ICDAR2013, and SCUT-HCCDoc, show that our method significantly surpasses existing methods on both online and offline HCTR, and exhibits a considerably higher inference speed than CTC/attention-based methods.
The lack of large-scale annotated real datasets makes transfer learning a necessity for video activity understanding. We aim to develop an effective method for few-shot transfer learning for first-person action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain to a different target domain using only a few examples. The visual cues we employ include object-object interactions, hand grasps, and motion within regions that are a function of hand locations. We employ a meta-learning-based framework to extract the distinctive and domain-invariant components of the deployed visual cues. This enables transferring action classification models across public datasets captured with diverse scene and action configurations. We present comparative results of our transfer-learning approach against state-of-the-art action classification approaches for both inter-class and inter-dataset transfer.
Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named Mask TextSpotter, is inspired by the newly published work Mask R-CNN. Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter takes advantage of a simple and smooth end-to-end learning procedure, in which precise text detection and recognition are acquired via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks.
This paper presents a systematic literature review of image datasets for document image analysis, focusing on historical documents such as handwritten manuscripts and early prints. Finding appropriate datasets for historical document analysis is a crucial prerequisite for facilitating research using different machine learning algorithms. However, because of the wide variety of actual data (e.g., scripts, tasks, dates, support systems, and amounts of degradation), the different formats of data and label representations, and the different evaluation procedures and benchmarks, finding appropriate datasets is a difficult task. This work fills this gap by presenting a meta-study of existing datasets. After a systematic selection process (following the PRISMA guidelines), we select 56 studies chosen according to different factors, such as the year of publication, the number of methods implemented in the article, the reliability of the chosen algorithms, the dataset size, and the publication venue. We summarize each study by assigning it to one of three predefined tasks: document classification, layout structure, or semantic analysis. For every dataset, we provide statistics, document types, languages, tasks, input visual aspects, and ground-truth information. Additionally, we provide the benchmark tasks and results from these papers or recent competitions. We further discuss gaps and challenges in this field. We advocate providing conversion tools to common formats (e.g., COCO format for computer vision tasks) and always providing a set of evaluation metrics, rather than just one, to make results comparable across studies.
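A minimal sketch of the kind of conversion tool advocated above, writing annotations into COCO's JSON layout (the field contents and single `text_region` category are illustrative assumptions):

```python
import json

def to_coco(images, annotations, out_path="dataset_coco.json"):
    """Write dataset annotations in COCO layout.

    images      : list of dicts like {"id": 1, "file_name": "...", "width": w, "height": h}
    annotations : list of dicts like {"id": 1, "image_id": 1, "category_id": 1,
                                      "bbox": [x, y, w, h]}
    """
    coco = {
        "images": images,
        "annotations": annotations,
        "categories": [{"id": 1, "name": "text_region"}],  # assumed single class
    }
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(coco, f, ensure_ascii=False, indent=2)
```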
Automatic license plate recognition (ALPR) systems have shown remarkable performance on license plates (LPs) from multiple regions owing to advances in deep learning and the growing availability of datasets. The evaluation of deep ALPR systems is usually done within each dataset; therefore, it is questionable whether such results are reliable indicators of generalization ability. In this paper, we propose a traditional-split versus leave-one-dataset-out experimental setup to empirically assess the cross-dataset generalization of 12 optical character recognition (OCR) models applied to LP recognition on nine public datasets with a good variety in several aspects (e.g., acquisition settings, image resolution, and LP layouts). We also introduce a public dataset for end-to-end ALPR that is the first to contain images of vehicles with Mercosur LPs and the one with the highest number of motorcycle images. The experimental results shed light on the limitations of the traditional-split protocol for evaluating approaches in the ALPR context, as there are significant performance drops on most datasets when training and testing the models in a leave-one-dataset-out fashion.
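A minimal sketch of the leave-one-dataset-out protocol (the `train_fn` and `eval_fn` callables are placeholders): each dataset is held out in turn, and the model is trained only on the others.

```python
def leave_one_dataset_out(datasets: dict, train_fn, eval_fn):
    """Cross-dataset protocol: train on all datasets but one, test on the held-out one.

    datasets : {name: (train_split, test_split)}
    """
    results = {}
    for held_out in datasets:
        train_data = [s for name, (tr, _) in datasets.items()
                      if name != held_out for s in tr]
        model = train_fn(train_data)
        # Testing on a dataset never seen during training measures generalization.
        results[held_out] = eval_fn(model, datasets[held_out][1])
    return results
```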
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Unconstrained handwritten text recognition remains challenging for computer vision systems. Paragraph recognition is traditionally achieved with two models: a first one for line segmentation and a second one for text-line recognition. We propose a unified end-to-end model using hybrid attention to tackle this task. The model aims at iteratively processing a paragraph image line by line. It can be split into three modules. An encoder generates feature maps from the whole paragraph image. Then, an attention module recurrently generates a vertical weighted mask enabling focus on the current text-line features. In this way, it performs a kind of implicit line segmentation. For each text-line feature, a decoder module recognizes the associated character sequence, leading to the recognition of the whole paragraph. We achieve state-of-the-art character error rates at the paragraph level on three popular datasets: 1.91% for RIMES, 4.45% for IAM, and 3.59% for READ 2016. Our code and trained model weights are available at https://github.com/FactoDeepLearning/VerticalAttentionOCR.
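A hedged sketch of the vertical attention idea: a recurrent state tracks which lines have already been read, and a softmax over feature-map rows highlights the current line (dimensions and state handling are simplified, not the paper's implementation).

```python
import torch
import torch.nn as nn

class VerticalAttention(nn.Module):
    """Recurrent vertical mask over feature-map rows (implicit line segmentation)."""

    def __init__(self, channels: int, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTMCell(channels, hidden)   # tracks which lines were read
        self.score = nn.Linear(channels + hidden, 1)

    def forward(self, feats, state=None):
        # feats: (B, C, H, W) -> per-row descriptors (B, H, C)
        rows = feats.mean(dim=3).permute(0, 2, 1)
        h, c = self.rnn(rows.mean(dim=1), state)   # update the "reading" state
        e = self.score(torch.cat(
            [rows, h.unsqueeze(1).expand(-1, rows.size(1), -1)], dim=-1))
        alpha = e.softmax(dim=1)                   # (B, H, 1): one text line highlighted
        line_feats = (feats * alpha.permute(0, 2, 1).unsqueeze(3)).sum(dim=2)
        return line_feats, (h, c)                  # (B, C, W) features for the decoder
```

Calling this module once per text line, feeding the returned state back in, yields the iterative line-by-line processing the abstract describes.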