The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We introduce a new class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns, whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over an order of magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art.
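As a concrete illustration of the encoder-decoder variant, the sketch below stacks temporal convolutions with max-pooling in the encoder and upsampling in the decoder, producing per-frame class logits. Layer widths, kernel size, and depth are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class EDTCN(nn.Module):
    """Minimal encoder-decoder temporal convolutional network (sketch).

    Input:  frame features of shape (batch, feat_dim, T)
    Output: per-frame class logits of shape (batch, num_classes, T)
    """
    def __init__(self, feat_dim, num_classes, hidden=(64, 96), kernel=25):
        super().__init__()
        pad = kernel // 2
        # Encoder: temporal conv + max-pooling halves the sequence length per level.
        self.enc1 = nn.Sequential(nn.Conv1d(feat_dim, hidden[0], kernel, padding=pad),
                                  nn.ReLU(), nn.MaxPool1d(2))
        self.enc2 = nn.Sequential(nn.Conv1d(hidden[0], hidden[1], kernel, padding=pad),
                                  nn.ReLU(), nn.MaxPool1d(2))
        # Decoder: upsampling + temporal conv restores the original length.
        self.dec1 = nn.Sequential(nn.Upsample(scale_factor=2),
                                  nn.Conv1d(hidden[1], hidden[1], kernel, padding=pad), nn.ReLU())
        self.dec2 = nn.Sequential(nn.Upsample(scale_factor=2),
                                  nn.Conv1d(hidden[1], hidden[0], kernel, padding=pad), nn.ReLU())
        self.classifier = nn.Conv1d(hidden[0], num_classes, 1)

    def forward(self, x):
        x = self.enc2(self.enc1(x))
        x = self.dec2(self.dec1(x))
        return self.classifier(x)

# Example: 2048-d per-frame features for a 256-frame clip, 10 action classes.
logits = EDTCN(2048, 10)(torch.randn(2, 2048, 256))   # -> (2, 10, 256)
```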
Temporally locating and classifying action segments in long untrimmed videos is of particular interest to many applications like surveillance and robotics. While traditional approaches follow a two-step pipeline, by generating framewise probabilities and then feeding them to high-level temporal models, recent approaches use temporal convolutions to directly classify the video frames. In this paper, we introduce a multi-stage architecture for the temporal action segmentation task. Each stage features a set of dilated temporal convolutions to generate an initial prediction that is refined by the next one. This architecture is trained using a combination of a classification loss and a proposed smoothing loss that penalizes over-segmentation errors. Extensive evaluation shows the effectiveness of the proposed model in capturing long-range dependencies and recognizing action segments. Our model achieves state-of-the-art results on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and the Breakfast dataset.
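A minimal sketch of the multi-stage idea follows: a first stage of dilated residual temporal convolutions predicts per-frame logits, and later stages refine the previous stage's probabilities; the loss combines frame-wise cross-entropy with a smoothing term on frame-to-frame log-probability differences. The exact form of the smoothing penalty and all hyperparameters here are assumptions for illustration, not the paper's definitive implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedStage(nn.Module):
    """One stage: a stack of dilated residual temporal convolutions (sketch)."""
    def __init__(self, in_dim, num_classes, hidden=64, layers=10):
        super().__init__()
        self.inp = nn.Conv1d(in_dim, hidden, 1)
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv1d(hidden, hidden, 3, padding=2 ** i, dilation=2 ** i),
                          nn.ReLU(), nn.Conv1d(hidden, hidden, 1))
            for i in range(layers)])
        self.out = nn.Conv1d(hidden, num_classes, 1)

    def forward(self, x):
        x = self.inp(x)
        for block in self.blocks:
            x = x + block(x)          # residual connection
        return self.out(x)

class MultiStageTCN(nn.Module):
    """Later stages take the previous stage's class probabilities as input."""
    def __init__(self, feat_dim, num_classes, stages=4):
        super().__init__()
        self.stage1 = DilatedStage(feat_dim, num_classes)
        self.refine = nn.ModuleList(
            [DilatedStage(num_classes, num_classes) for _ in range(stages - 1)])

    def forward(self, x):
        outs = [self.stage1(x)]
        for stage in self.refine:
            outs.append(stage(F.softmax(outs[-1], dim=1)))
        return outs                   # one logit tensor per stage

def segmentation_loss(logits, labels, lam=0.15, tau=4.0):
    """Cross-entropy plus a truncated frame-to-frame smoothing penalty (assumed form)."""
    ce = F.cross_entropy(logits, labels)
    logp = F.log_softmax(logits, dim=1)
    diff = (logp[:, :, 1:] - logp[:, :, :-1].detach()).clamp(-tau, tau)
    return ce + lam * (diff ** 2).mean()

# Example: sum the loss over all stages of the multi-stage model.
model = MultiStageTCN(feat_dim=2048, num_classes=10)
feats, labels = torch.randn(2, 2048, 500), torch.randint(0, 10, (2, 500))
loss = sum(segmentation_loss(out, labels) for out in model(feats))
```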
Temporal action segmentation assigns an action label to every frame of an input untrimmed video containing a sequence of multiple actions. For this task, we propose an encoder-decoder-style architecture named C2F-TCN featuring a "coarse-to-fine" ensemble of decoder outputs. The C2F-TCN framework is enhanced with a novel model-agnostic temporal feature augmentation strategy based on the computationally inexpensive stochastic max-pooling of segments. It produces more accurate and well-calibrated supervised results on three benchmark action segmentation datasets. We show that the architecture is flexible for both supervised and representation learning. In line with this, we present a novel unsupervised way to learn frame-wise representations from C2F-TCN. Our unsupervised learning approach hinges on the clustering capabilities of the input features and the formation of multi-resolution features from the decoder's implicit structure. Further, we provide the first semi-supervised temporal action segmentation results by merging representation learning with conventional supervised learning. Our semi-supervised learning scheme, called "Iterative-Contrastive-Classify (ICC)", progressively improves in performance with more labeled data. ICC semi-supervised learning in C2F-TCN, with 40% labeled videos, performs similar to the fully supervised counterpart.
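One plausible reading of the stochastic max-pooling augmentation is sketched below: the frame sequence is split at random temporal boundaries and each segment is max-pooled, so every call yields a different temporal view of the same video. The exact procedure in C2F-TCN may differ; this is an assumption for illustration.

```python
import torch

def stochastic_segment_maxpool(features, num_segments):
    """Temporal augmentation sketch: max-pool randomly sized segments.

    features: (feat_dim, T) frame features; returns (feat_dim, num_segments).
    Random segment boundaries make each call a different temporal 'view' of
    the same video (assumed reading of stochastic max-pooling of segments).
    """
    T = features.shape[1]
    # Random interior boundaries, sorted, plus the two endpoints.
    cuts = torch.sort(torch.randint(1, T, (num_segments - 1,))).values
    bounds = torch.cat([torch.tensor([0]), cuts, torch.tensor([T])])
    pooled = [features[:, s:max(e, s + 1)].max(dim=1).values
              for s, e in zip(bounds[:-1].tolist(), bounds[1:].tolist())]
    return torch.stack(pooled, dim=1)

# Example: two augmented 32-segment views of the same 1000-frame video.
feats = torch.randn(2048, 1000)
view_a = stochastic_segment_maxpool(feats, 32)
view_b = stochastic_segment_maxpool(feats, 32)
```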
This paper introduces a unified framework for temporal action segmentation via sequence-to-sequence (seq2seq) translation in both the fully and the timestamp supervised settings. In contrast to current state-of-the-art frame-level prediction methods, we view action segmentation as a seq2seq translation task, i.e., mapping a sequence of video frames to a sequence of action segments. Our proposed method involves a series of modifications and auxiliary loss functions on the standard Transformer seq2seq translation model to cope with long input sequences opposed to short output sequences and the relatively small number of videos. We incorporate an auxiliary supervision signal for the encoder via a frame-wise loss and propose a separate alignment decoder for implicit duration prediction. Finally, we extend our framework to the timestamp supervised setting via a proposed constrained k-medoids algorithm that generates pseudo-segmentations. Our proposed framework performs consistently in both the fully and the timestamp supervised settings, outperforming or competing with the state of the art on several datasets.
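A minimal sketch of the frames-to-segments translation view is shown below using the standard nn.Transformer. For brevity it decodes all segment slots in parallel from learned queries rather than autoregressively, and it omits the paper's alignment decoder and auxiliary losses; the dimensions and the query-based decoding are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class Frames2Segments(nn.Module):
    """Sketch: translate a long frame sequence into a short segment-label sequence."""
    def __init__(self, feat_dim, num_classes, d_model=256, max_segments=50):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)            # frame features -> model dim
        self.queries = nn.Embedding(max_segments, d_model)  # one learned query per output slot
        self.transformer = nn.Transformer(d_model=d_model, nhead=8,
                                          num_encoder_layers=4, num_decoder_layers=4,
                                          batch_first=True)
        self.classify = nn.Linear(d_model, num_classes + 1)  # +1 for an end-of-sequence token

    def forward(self, frame_feats):
        # frame_feats: (batch, T, feat_dim); returns (batch, max_segments, num_classes + 1)
        batch = frame_feats.shape[0]
        src = self.proj(frame_feats)
        tgt = self.queries.weight.unsqueeze(0).expand(batch, -1, -1)
        return self.classify(self.transformer(src, tgt))

# Example: 1000 frames of 2048-d features -> logits for up to 50 action segments.
model = Frames2Segments(feat_dim=2048, num_classes=19)
segment_logits = model(torch.randn(1, 1000, 2048))
```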
Hand hygiene is a standard six-step hand-washing procedure proposed by the World Health Organization (WHO). However, there is no good way to supervise medical staff in performing hand hygiene, which carries a potential risk of disease spread. In this work, we propose a new computer vision task called hand hygiene assessment to provide informed supervision of hand hygiene for medical staff. Existing action assessment works usually make an overall quality prediction for an entire video. However, the internal structure of the hand hygiene action is important for hand hygiene assessment. We therefore propose a novel fine-grained learning framework that performs step segmentation and key-action scoring jointly for accurate hand hygiene assessment. Existing temporal segmentation methods usually adopt multi-stage convolutional networks to improve segmentation robustness, but they easily lead to over-segmentation due to the lack of long-range dependencies. To address this issue, we design a multi-stage convolution-transformer network for step segmentation. Based on the observation that each hand-washing step involves several key actions that determine hand-washing quality, we design a set of key-action scorers to evaluate the quality of the key actions in each step. Furthermore, a unified dataset for hand hygiene assessment is lacking, so under the supervision of medical staff we contribute a video dataset containing 300 video sequences with fine-grained annotations. Extensive experiments on this dataset show that our method assesses hand hygiene videos effectively and achieves outstanding performance.
Designing activity detection systems that can be successfully deployed in daily-living environments requires datasets that pose the challenges typical of real-world scenarios. In this paper, we introduce a new untrimmed daily-living dataset that features several real-world challenges: Toyota Smarthome Untrimmed (TSU). TSU contains a wide variety of activities performed in a spontaneous manner. The dataset contains dense annotations including elementary and composite activities as well as activities involving interactions with objects. We provide an analysis of the real-world challenges featured by the dataset, highlighting the open issues for detection algorithms. We show that current state-of-the-art methods fail to achieve satisfactory performance on the TSU dataset. We therefore propose a new baseline method to address the novel challenges provided by the dataset. This method leverages one modality (i.e., optical flow) to generate attention weights that guide another modality (i.e., RGB) to better detect activity boundaries. This is particularly beneficial for detecting activities characterized by high temporal variance. We show that our proposed method outperforms state-of-the-art methods on TSU and on another popular challenging dataset, Charades.
Recently, there has been growing development of video-based applications for surgical purposes. Some of these applications can work offline after the procedure ends, while others must respond immediately. There are also cases where the response should come during the procedure but some delay is acceptable. In the literature, an online-versus-offline performance gap is known. Our goal in this study is to learn the performance-delay trade-off and design an MS-TCN++-based algorithm that can exploit this trade-off. To this end, we used an open surgery simulation dataset containing videos of 96 participants performing a suturing task on a variable tissue simulator; in this study we used video data captured from the side view. The networks were trained to recognize the performed surgical gestures. A naive approach is to reduce the depth of MS-TCN++, which shrinks the receptive field and also reduces the number of required future frames. We show that this method is suboptimal, mainly in the small-delay regime. A second approach is to limit the accessible future in each temporal convolution. This gives flexibility in the network design and, as a result, achieves much better performance than the naive approach.
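The second approach can be sketched as a dilated temporal convolution whose padding is shifted so that each output position sees at most a fixed number of future frames; the kernel size and the way the remaining context is taken from the past are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RestrictedFutureConv1d(nn.Module):
    """Dilated temporal convolution that may look at most `future` frames ahead (sketch).

    With kernel size 3 and dilation d, a symmetric convolution looks d frames into
    the future; setting future < d shifts the padding so the extra context comes
    from the past instead, bounding the added latency.
    """
    def __init__(self, channels, dilation, future):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, dilation=dilation)
        receptive_future = dilation            # frames ahead seen by a symmetric kernel-3 conv
        self.right_pad = min(future, receptive_future)
        self.left_pad = 2 * dilation - self.right_pad

    def forward(self, x):
        # x: (batch, channels, T); output keeps the same temporal length.
        x = F.pad(x, (self.left_pad, self.right_pad))
        return self.conv(x)

# Example: a dilation-8 layer restricted to 2 future frames instead of 8.
layer = RestrictedFutureConv1d(channels=64, dilation=8, future=2)
out = layer(torch.randn(1, 64, 200))           # -> (1, 64, 200)
```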
Human action recognition is an important application domain in computer vision. Its primary aim is to accurately describe human actions and their interactions from previously unseen data sequences acquired by sensors. The ability to recognize, understand, and predict complex human actions enables many important applications, such as intelligent surveillance systems, human-computer interfaces, healthcare, security, and military applications. In recent years, deep learning has received particular attention from the computer vision community. This paper presents an overview of the current state of the art in action recognition from video analysis using deep learning techniques. We present the most important deep learning models for recognizing human actions and analyze them to report the current progress of deep learning algorithms applied to the human action recognition problem, highlighting their advantages and disadvantages. Based on a quantitative analysis of the recognition accuracies reported in the literature, our study identifies the state-of-the-art deep architectures in action recognition and then presents current trends and open problems for future work in this field.
In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significant gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block, "R(2+1)D", which produces CNNs that achieve results comparable or superior to the state of the art on Sports-1M, Kinetics, UCF101, and HMDB51.
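The factorization can be sketched as a spatial 1xkxk convolution followed by a temporal kx1x1 convolution with a nonlinearity in between; the intermediate channel count below is a simple assumption, whereas the paper chooses it to match the parameter count of the full 3D convolution.

```python
import torch
import torch.nn as nn

class R2Plus1dConv(nn.Module):
    """(2+1)D factorized convolution: spatial 2D conv, ReLU, then temporal 1D conv (sketch)."""
    def __init__(self, in_ch, out_ch, mid_ch=None):
        super().__init__()
        # A simple choice for the intermediate width; the paper instead picks it so the
        # factorized block matches the parameter count of a full 3x3x3 convolution.
        mid_ch = mid_ch or (in_ch + out_ch) // 2
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.relu = nn.ReLU(inplace=True)
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        return self.temporal(self.relu(self.spatial(x)))

# Example: an 8-frame 112x112 clip.
block = R2Plus1dConv(3, 64)
out = block(torch.randn(1, 3, 8, 112, 112))   # -> (1, 64, 8, 112, 112)
```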
We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns a one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and achieve high temporal localization accuracy. In the end, only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach achieves significantly superior performance compared with other state-of-the-art systems: mAP increases from 1.7% to 7.4% on MEXaction2 and from 15.0% to 19.0% on THUMOS 2014.
Convolutional neural networks (CNNs) have been extensively applied to image recognition problems, giving state-of-the-art results on recognition, detection, segmentation, and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full-length videos. The first method explores various convolutional temporal feature pooling architectures, examining the design choices that need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 datasets with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 73.0%).
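A minimal sketch of the second method (frames as an ordered sequence) is given below: per-frame CNN features are fed to an LSTM and the clip is classified from the final state. The backbone and layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CNNLSTMClassifier(nn.Module):
    """Sketch: per-frame CNN features fed to an LSTM, classified from the last state."""
    def __init__(self, num_classes, hidden=512):
        super().__init__()
        backbone = resnet18(weights=None)                  # no pretrained weights, for a self-contained example
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip):
        # clip: (batch, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).flatten(1)    # (b*t, 512) per-frame features
        out, _ = self.lstm(feats.view(b, t, -1))           # (b, t, hidden)
        return self.fc(out[:, -1])                         # classify from the last time step

# Example: a batch of two 16-frame 224x224 clips, 101 classes (e.g., UCF-101).
logits = CNNLSTMClassifier(num_classes=101)(torch.randn(2, 16, 3, 224, 224))
```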
Automatic action identification from video and kinematic data is an important machine learning problem with applications ranging from robotics to smart health. Most existing works focus on identifying coarse actions, such as running, climbing, or cutting vegetables, which have relatively long durations. This is an important limitation for applications that require the identification of subtle motions at high temporal resolution. For example, in stroke recovery, quantifying rehabilitation dose requires differentiating motions with sub-second durations. Our goal is to bridge this gap. To this end, we introduce a large-scale, multimodal dataset, StrokeRehab, as a new action recognition benchmark that includes subtle short-duration actions labeled at high temporal resolution. These short-duration actions are called functional primitives and consist of reaches, transports, repositions, stabilizations, and idles. The dataset consists of high-quality inertial measurement unit (IMU) sensor and video data of 41 stroke-impaired patients performing activities of daily living such as feeding and brushing teeth. We show that current state-of-the-art models based on segmentation produce noisy predictions when applied to these data, which often leads to over-counting of actions. To address this, we propose a novel approach for high-resolution action identification, inspired by speech recognition techniques, which is based on a sequence-to-sequence model that directly predicts the sequence of actions. This approach outperforms current state-of-the-art methods on the StrokeRehab dataset as well as on the standard benchmark datasets 50Salads, Breakfast, and JIGSAWS.
We introduce the task of spotting temporally precise, fine-grained events in video (detecting the exact moment in time at which an event occurs). Precise spotting requires models to reason globally about the full temporal scale of actions and locally to identify subtle frame-level appearance and motion differences that distinguish events during those actions. Surprisingly, we find that top-performing solutions to prior video understanding tasks, such as action detection and segmentation, do not simultaneously meet both requirements. In response, we propose E2E-Spot, a compact end-to-end model that performs well on the precise spotting task and can be trained quickly on a single GPU. We demonstrate that E2E-Spot significantly outperforms recent baselines adapted from the video action detection, segmentation, and spotting literature to the precise spotting task. Finally, we contribute new annotations and splits for several fine-grained sports action datasets to make them suitable for future work on precise spotting.
Temporal action segmentation classifies the action of each frame in (long) video sequences. Due to the high cost of frame-wise labeling, we propose the first semi-supervised method for temporal action segmentation. Our method hinges on unsupervised representation learning, which, for temporal action segmentation, poses unique challenges. Actions in untrimmed videos vary in length and have unknown labels and start/end times; the ordering of actions may also differ across videos. We propose a novel way to learn frame-wise representations from a temporal convolutional network (TCN) by clustering the input features with added temporal proximity conditions and multi-resolution similarity. By merging representation learning with conventional supervised learning, we develop an "Iterative-Contrastive-Classify (ICC)" semi-supervised learning scheme. With more labeled data, ICC progressively improves in performance; ICC semi-supervised learning, with 40% labeled videos, performs similar to the fully supervised counterpart. With 100% labeled videos, our ICC further improves performance by {+1.8, +5.6, +2.5}%, respectively.
Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial amount of work aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarities, strengths, and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system plays a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are the complex poses, the differing viewpoints, and the environmental scenarios in which the action takes place. To address such complexities, in this paper we propose a novel Sparse Weighted Temporal Attention (SWTA) module that utilizes sparsely sampled video frames to obtain global weighted temporal attention. The proposed SWTA comprises two parts: first, a temporal segment network that sparsely samples a given set of frames; second, weighted temporal attention, which fuses attention maps derived from optical flow with the raw RGB images. This is followed by a basenet network, comprising a convolutional neural network (CNN) module and fully connected layers, that performs activity recognition. The SWTA network can be used as a plug-in module to existing deep CNN architectures, enabling them to learn temporal information and eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves an accuracy of 72.76%, 92.56%, and 78.86% on the respective datasets, surpassing the previous state-of-the-art performance by margins of 25.26%, 18.56%, and 2.94%, respectively.
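The weighted temporal attention idea can be sketched as follows: a small network turns the optical flow of each sparsely sampled frame into a spatial attention map that modulates the corresponding RGB frame before it reaches the basenet. How the attention is computed and fused here is an assumption for illustration, not the paper's exact module.

```python
import torch
import torch.nn as nn

class FlowWeightedAttention(nn.Module):
    """Sketch: weight sparsely sampled RGB frames with flow-derived attention maps."""
    def __init__(self):
        super().__init__()
        # Map the 2-channel optical flow to a single-channel spatial attention map.
        self.attn = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, rgb, flow):
        # rgb:  (batch, T, 3, H, W) sparsely sampled frames
        # flow: (batch, T, 2, H, W) optical flow aligned with those frames
        b, t = rgb.shape[:2]
        a = self.attn(flow.flatten(0, 1)).view(b, t, 1, *rgb.shape[-2:])
        return rgb * a + rgb      # residual fusion so attention modulates, not replaces

# Example: 8 sparsely sampled frames; the weighted frames then go to any 2D CNN backbone.
module = FlowWeightedAttention()
weighted = module(torch.randn(1, 8, 3, 112, 112), torch.randn(1, 8, 2, 112, 112))
```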
Advances in machine learning and contactless sensors have made it possible to understand complex human behaviors in healthcare settings. In particular, several deep learning systems have been introduced to enable comprehensive analysis of neurodevelopmental conditions such as Autism Spectrum Disorder (ASD). This condition affects children in their early developmental stages, and diagnosis relies entirely on observing the child's behavior and detecting behavioral cues. However, the diagnosis process is time-consuming, as it requires long-term behavioral observation, and specialists are scarce. We demonstrate the effect of a region-based computer vision system that helps clinicians and parents analyze a child's behavior. For this purpose, we adopt and enhance a dataset for analyzing autism-related actions using videos of children captured in uncontrolled environments (e.g., videos collected with consumer-grade cameras in a variety of settings). The data are pre-processed by detecting the target child in the video to reduce the impact of background noise. Motivated by the effectiveness of temporal convolutional models, we propose models capable of extracting action features from video frames and classifying autism-related behaviors by analyzing the relationships between frames in a video. Through an extensive evaluation of feature extraction and learning strategies, we demonstrate that the best performance is achieved with an Inflated 3D ConvNet and a multi-stage temporal convolutional network, reaching a weighted F1-score of 0.83 for classifying three autism-related actions and outperforming existing methods. We also propose a lightweight solution by employing an ESNet backbone within the same system, achieving a competitive result of 0.71 weighted F1-score and enabling potential deployment on embedded systems.
Future activity anticipation is a challenging problem in egocentric vision. As the standard future activity anticipation paradigm, recursive sequence prediction suffers from error accumulation. To address this problem, we propose a simple and effective self-regulated learning framework that aims to regulate the intermediate representation consecutively to produce a representation that (a) emphasizes the novel information in the frame of the current timestamp in contrast to previously observed content, and (b) reflects its correlation with previously observed frames. The former is achieved by minimizing a contrastive loss, and the latter by a dynamic reweighting mechanism that attends to informative frames in the observed content, based on a similarity comparison between the features of the current frame and the observed frames. The learned final video representation can be further enhanced by multi-task learning, which performs joint feature learning on the target activity labels and automatically detected action and object class tokens. SRL sharply outperforms the existing state of the art in most cases on egocentric video datasets and two third-person video datasets. It is also shown experimentally that the action and object concepts supporting the activity semantics can be accurately identified.
In videos, human actions are three-dimensional (3D) signals, and such videos carry spatiotemporal knowledge of human behavior. The promise of studying this with 3D convolutional neural networks (CNNs) has been investigated, yet 3D CNNs have not achieved the high performance that their well-established two-dimensional (2D) counterparts achieve on still images. The memory cost of 3D convolutions and the difficulty of training spatiotemporal fusion have prevented 3D CNNs from delivering outstanding results. In this paper, we implement a hybrid deep learning architecture that combines STIP and 3D CNN features to effectively enhance performance on 3D video. More detailed and deeper representations are trained at each spatiotemporal fusion stage, and the trained model further improves the results after handling the model's more complex evaluations. A video classification model is used in this implementation, and an intelligent 3D network protocol for multimedia data classification using deep learning is introduced to further understand spatiotemporal associations in human activities. The performance of the proposed hybrid technique is evaluated on the well-known UCF101 dataset, where the hybrid technique substantially outperforms the original 3D CNN. The results are compared with state-of-the-art frameworks from the literature for action recognition on UCF101, with an accuracy of 95%.
Unhealthy dietary habits are considered the primary cause of multiple chronic diseases such as obesity and diabetes. Automatic food intake monitoring has the potential to improve the quality of life (QoL) of people with diet-related diseases through dietary assessment. In this work, we propose a novel contactless, radar-based food intake monitoring approach. Specifically, a Frequency Modulated Continuous Wave (FMCW) radar sensor is employed to recognize fine-grained eating and drinking gestures. A fine-grained eating/drinking gesture contains a series of movements, from raising the hand to the mouth until moving the hand away from the mouth. A 3D temporal convolutional network (3D-TCN) is developed to detect and segment eating and drinking gestures in meal sessions by processing the Range-Doppler Cube (RD Cube). Unlike previous radar-based research, this work collects data in continuous meal sessions. We create a public dataset that contains 48 meal sessions (3121 eating gestures and 608 drinking gestures) from 48 participants with a total duration of 783 minutes. Four eating styles (fork & knife, chopsticks, spoon, hand) are included in this dataset. To validate the performance of the proposed approach, an 8-fold cross-validation method is applied. Experimental results show that our proposed 3D-TCN outperforms a model that combines a convolutional neural network and a long short-term memory network (CNN-LSTM), as well as the CNN-Bidirectional LSTM model (CNN-BiLSTM), in eating and drinking gesture detection. The 3D-TCN model achieves segmental F1-scores of 0.887 and 0.844 for eating and drinking gestures, respectively. These results indicate the feasibility of using radar for fine-grained eating and drinking gesture detection and segmentation in meal sessions.
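One plausible layout of a 3D-TCN over Range-Doppler Cubes is sketched below: a small 3D convolutional stem collapses the range-doppler dimensions per frame, and dilated temporal convolutions then produce per-frame gesture logits. All layer sizes and the class set are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class RD3DTCN(nn.Module):
    """Sketch: 3D conv stem over Range-Doppler frames + dilated temporal convolutions."""
    def __init__(self, num_classes=3, hidden=64):    # assumed classes: eat / drink / background
        super().__init__()
        self.stem = nn.Sequential(                    # input: (batch, 1, T, range, doppler)
            nn.Conv3d(1, 32, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)))       # collapse range-doppler, keep time
        self.tcn = nn.Sequential(*[
            nn.Sequential(nn.Conv1d(32 if i == 0 else hidden, hidden, 3,
                                    padding=2 ** i, dilation=2 ** i), nn.ReLU())
            for i in range(6)])
        self.head = nn.Conv1d(hidden, num_classes, 1)

    def forward(self, rd_cube):
        x = self.stem(rd_cube).squeeze(-1).squeeze(-1)   # (batch, 32, T)
        return self.head(self.tcn(x))                    # per-frame gesture logits

# Example: 1 channel, 300 radar frames, 64 range bins, 64 doppler bins.
logits = RD3DTCN()(torch.randn(1, 1, 300, 64, 64))       # -> (1, num_classes, 300)
```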