Conventional approaches to identifying depression do not scale, and public awareness of mental health is limited, especially in developing countries. Recent studies make it clear that social media has the potential to play a greater role in mental health screening. A large volume of chronologically ordered first-person narrative posts can provide insight into people's thoughts, feelings, behaviors, or moods over time, enabling a better understanding of depressive symptoms as reflected in the online space. In this paper, we propose SERCNN, which improves the user representation by (1) stacking two pretrained embeddings from different domains and (2) reintroducing the embedding context to the MLP classifier. Our SERCNN performs strongly against state-of-the-art and other baselines, achieving 93.7% accuracy in a 5-fold cross-validation setting. Since not all users share the same level of online activity, we introduce the concept of a fixed observation window, which quantifies the observation period in a predefined number of posts. SERCNN still performs remarkably well, with accuracy comparable to a BERT model while having 98% fewer parameters. Our findings open up a promising direction for detecting depression on social media with a smaller number of posts at inference time, toward a cost-effective and timely intervention solution. We hope our work can bring this research area closer to real-world adoption in existing clinical practice.
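To make the stacked-embedding idea concrete, here is a minimal PyTorch sketch; the embedding sources, layer sizes, and classifier head are illustrative assumptions rather than the authors' released SERCNN.

```python
# Hypothetical sketch of "stack two pretrained embeddings + feed context to an MLP".
import torch
import torch.nn as nn

class StackedEmbeddingClassifier(nn.Module):
    def __init__(self, vocab_size, general_dim=300, domain_dim=200,
                 n_filters=100, kernel_size=3, n_classes=2):
        super().__init__()
        # Two embedding tables, e.g. one general-domain and one mental-health-domain,
        # both assumed to be loaded from pretrained weights and concatenated per token.
        self.general_emb = nn.Embedding(vocab_size, general_dim)
        self.domain_emb = nn.Embedding(vocab_size, domain_dim)
        stacked_dim = general_dim + domain_dim
        self.conv = nn.Conv1d(stacked_dim, n_filters, kernel_size, padding=1)
        # The MLP sees both pooled convolutional features and the mean stacked
        # embedding ("re-introducing the embedding context").
        self.mlp = nn.Sequential(
            nn.Linear(n_filters + stacked_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, token_ids):                               # (batch, seq_len)
        x = torch.cat([self.general_emb(token_ids),
                       self.domain_emb(token_ids)], dim=-1)     # (batch, seq, stacked_dim)
        feats = torch.relu(self.conv(x.transpose(1, 2))).max(dim=-1).values
        context = x.mean(dim=1)
        return self.mlp(torch.cat([feats, context], dim=-1))

model = StackedEmbeddingClassifier(vocab_size=10000)
logits = model(torch.randint(0, 10000, (4, 128)))               # 4 users, 128 tokens each
```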
High-fidelity facial avatar reconstruction from a monocular video is a significant research problem in computer graphics and computer vision. Recently, Neural Radiance Field (NeRF) has shown impressive novel view rendering results and has been considered for facial avatar reconstruction. However, the complex facial dynamics and missing 3D information in monocular videos raise significant challenges for faithful facial reconstruction. In this work, we propose a new method for NeRF-based facial avatar reconstruction that utilizes a 3D-aware generative prior. Unlike existing works that depend on a conditional deformation field for dynamic modeling, we propose to learn a personalized generative prior, which is formulated as a local and low-dimensional subspace in the latent space of a 3D-GAN. We propose an efficient method to construct the personalized generative prior based on a small set of facial images of a given individual. After learning, it allows for photo-realistic rendering with novel views, and face reenactment can be realized by performing navigation in the latent space. Our proposed method is applicable to different driving signals, including RGB images, 3DMM coefficients, and audio. Compared with existing works, we obtain superior novel view synthesis results and faithful face reenactment performance.
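One way to picture the personalized prior described above is as a local basis fitted to a handful of inverted latent codes; the sketch below uses PCA over stand-in 3D-GAN latent codes and is an assumption-laden illustration, not the paper's construction.

```python
# Minimal numpy sketch of a "personalized low-dimensional subspace" in a latent space.
# The GAN inversion step and the 3D-GAN itself are assumed to exist elsewhere;
# dimensions and the PCA construction are illustrative assumptions.
import numpy as np

def build_personal_subspace(latent_codes, k=8):
    """latent_codes: (n_images, latent_dim) codes inverted from one individual."""
    mean = latent_codes.mean(axis=0)
    centered = latent_codes - mean
    # Principal directions span the local, low-dimensional personalized prior.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]                       # mean code, (k, latent_dim) basis

def navigate(mean, basis, coords):
    """Map low-dimensional coordinates back to a full latent code."""
    return mean + coords @ basis

codes = np.random.randn(20, 512)              # stand-in for inverted latent codes
mean, basis = build_personal_subspace(codes, k=8)
z = navigate(mean, basis, np.zeros(8))        # vary coords to drive reenactment
```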
Compared with joint positions, the accuracy of joint rotation and shape estimation has received relatively little attention in skinned multi-person linear model (SMPL)-based human mesh reconstruction from multi-view images. Work in this field is broadly divided into two categories. The first approach performs joint estimation and then produces SMPL parameters by fitting SMPL to the resulting joints. The second approach regresses SMPL parameters directly from the input images with a convolutional neural network (CNN)-based model. However, these approaches suffer from a lack of information for resolving the ambiguity of joint rotation and shape reconstruction, and from the difficulty of network learning. To solve these problems, we propose a two-stage method. The proposed method first estimates the coordinates of the mesh vertices with a CNN-based model from the input images, and then obtains SMPL parameters by fitting the SMPL model to the estimated vertices. The estimated mesh vertices provide sufficient information for determining joint rotation and shape, and are easier to learn than SMPL parameters. According to experiments using the Human3.6M and MPI-INF-3DHP datasets, the proposed method significantly outperforms previous works in terms of joint rotation and shape estimation, and achieves competitive performance in terms of joint location estimation.
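The second stage can be illustrated as a small optimization that fits SMPL pose and shape parameters to the CNN-predicted vertices; the SMPL layer below is a toy linear stand-in, and all hyperparameters are assumptions.

```python
# Hedged sketch of stage two: fit pose/shape so a differentiable SMPL-like layer
# reproduces the vertices predicted in stage one.
import torch

def fit_smpl_to_vertices(smpl_forward, target_vertices, n_iters=200, lr=0.05):
    pose = torch.zeros(72, requires_grad=True)     # axis-angle for 24 joints
    shape = torch.zeros(10, requires_grad=True)    # shape (beta) coefficients
    opt = torch.optim.Adam([pose, shape], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        pred = smpl_forward(pose, shape)           # (6890, 3) mesh vertices
        loss = ((pred - target_vertices) ** 2).mean()
        loss.backward()
        opt.step()
    return pose.detach(), shape.detach()

# Toy linear stand-in so the sketch runs without a real SMPL implementation.
W = torch.randn(6890 * 3, 82)
def toy_smpl(pose, shape):
    return (W @ torch.cat([pose, shape])).view(6890, 3)

target = toy_smpl(torch.rand(72), torch.rand(10)).detach()
fitted_pose, fitted_shape = fit_smpl_to_vertices(toy_smpl, target)
```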
Recent studies have shown that deep learning (DL)-based MRI reconstruction outperforms conventional methods, such as parallel imaging and compressed sensing (CS), in multiple applications. Unlike CS, which is typically implemented with predetermined linear representations for regularization, DL inherently uses nonlinear representations learned from large databases. Another line of work uses transform learning (TL) to bridge the gap between these two approaches by learning linear representations from data. In this work, we combine ideas from CS, TL, and DL reconstruction to learn deep linear convolutional transforms as part of an algorithm unrolling approach. Using end-to-end training, our results show that the proposed technique can reconstruct MR images to a level comparable to DL methods, while supporting uniform undersampling patterns, unlike conventional CS methods. Our proposed method relies on convex sparse image reconstruction with a linear representation at inference time, which may be beneficial for characterizing robustness, stability, and generalizability.
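A rough picture of one unrolled iteration, under the assumption of a masked-FFT forward operator: a learned linear convolutional transform, soft-thresholding, and a data-consistency gradient step. Filter counts, step sizes, and thresholds below are illustrative, not the paper's settings.

```python
# Sketch of a single unrolled step with a learned linear convolutional transform.
import torch
import torch.nn as nn

class UnrolledStep(nn.Module):
    def __init__(self, n_filters=32, kernel=3):
        super().__init__()
        # Learned linear convolutional transform W and an adjoint-like decoder.
        self.W = nn.Conv2d(1, n_filters, kernel, padding=1, bias=False)
        self.Wt = nn.ConvTranspose2d(n_filters, 1, kernel, padding=1, bias=False)
        self.threshold = nn.Parameter(torch.tensor(0.01))
        self.step = nn.Parameter(torch.tensor(0.5))

    def soft(self, z):
        return torch.sign(z) * torch.clamp(z.abs() - self.threshold, min=0.0)

    def forward(self, x, y, mask):
        # Sparse code in the transform domain, then a denoised image estimate.
        z = self.soft(self.W(x))
        x_denoised = self.Wt(z)
        # Data-consistency gradient step for the model y = mask * FFT(x).
        residual = mask * torch.fft.fft2(x_denoised.squeeze(1)) - y
        grad = torch.fft.ifft2(mask * residual).real.unsqueeze(1)
        return x_denoised - self.step * grad

x0 = torch.zeros(1, 1, 64, 64)
mask = (torch.rand(64, 64) > 0.5).float()
y = mask * torch.fft.fft2(torch.rand(64, 64))
x1 = UnrolledStep()(x0, y, mask)
```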
Monocular simultaneous localization and mapping (SLAM) is emerging in advanced driver-assistance systems and autonomous driving, because a single camera is cheap and easy to install. Conventional monocular SLAM has two major challenges that lead to inaccurate localization and mapping. First, it is challenging to estimate scale in localization and mapping. Second, conventional monocular SLAM uses inappropriate mapping factors, such as dynamic objects and low-parallax areas, in mapping. This paper proposes an improved real-time monocular SLAM that resolves these challenges by efficiently using deep learning-based semantic segmentation. To achieve real-time execution of the proposed method, we apply semantic segmentation only to downsampled keyframes, in parallel with the mapping process. In addition, the proposed method corrects the scale of camera poses and three-dimensional (3D) points using a ground plane estimated from road-marking 3D points and the real camera height. The proposed method also removes inappropriate corner features labeled as moving objects or located in low-parallax areas. Experiments on eight video sequences show that the proposed monocular SLAM system achieves significantly improved trajectory-tracking accuracy compared with existing state-of-the-art monocular SLAM systems, and accuracy comparable to stereo SLAM systems. The proposed system can achieve real-time tracking on a standard CPU with standard GPU support, whereas existing segmentation-aided monocular SLAM cannot.
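The scale-correction step can be sketched as fitting a plane to road-marking points and comparing the camera-to-plane distance with the known camera height; the SVD plane fit and all values below are illustrative assumptions, not the paper's pipeline.

```python
# Hedged numpy sketch of ground-plane-based scale correction for monocular SLAM.
import numpy as np

def fit_plane(points):
    """Least-squares plane through road-marking points; returns (unit normal, d)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid

def correct_scale(camera_position, road_points, real_camera_height):
    normal, d = fit_plane(road_points)
    estimated_height = abs(normal @ camera_position + d)   # distance to the plane
    return real_camera_height / estimated_height            # factor for poses and map points

road_points = np.random.rand(100, 3) * [5.0, 0.02, 5.0]     # near-planar stand-in points
scale = correct_scale(np.array([0.0, 0.6, 0.0]), road_points, real_camera_height=1.5)
```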
Dual-energy computed tomography (DECT) has been widely used in many applications that require material decomposition. Image-domain methods directly decompose material images from high- and low-energy attenuation images, and are therefore susceptible to noise and artifacts in the attenuation images. The purpose of this study is to develop an improved iterative neural network (INN) for high-quality image-domain material decomposition in DECT, and to study its properties. We propose a new INN architecture for DECT material decomposition. The proposed INN architecture uses distinct cross-material convolutional neural networks (CNNs) in its image refining modules and uses image decomposition physics in its image reconstruction modules. The distinct cross-material CNN refiners incorporate distinct encoding-decoding filters and a cross-material model that captures the correlations between different materials. We study the distinct cross-material CNN refiners with a patch-based reformulation and a tight-frame condition. Numerical experiments on the extended cardiac-torso (XCAT) phantom and clinical data show that the proposed INN significantly improves image quality over several image-domain material decomposition methods, including a conventional model-based image decomposition (MBID) method using an edge-preserving regularizer, a recent MBID method using pre-learned material-wise sparsifying transforms, and a noniterative deep CNN method. Our study with the patch-based reformulation shows that the learned filters of the distinct cross-material CNN refiners can approximately satisfy the tight-frame condition.
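The image-domain decomposition physics referenced above reduces, per pixel, to inverting a 2x2 mixing between two basis materials; the sketch below illustrates that direct decomposition (the noise-susceptible baseline the INN improves on), with made-up coefficients rather than calibrated attenuation values.

```python
# Minimal numpy sketch of direct image-domain two-material decomposition.
import numpy as np

A = np.array([[0.26, 0.18],     # high-energy response of (water, bone) -- illustrative
              [0.31, 0.48]])    # low-energy  response of (water, bone) -- illustrative

def direct_decomposition(x_high, x_low):
    """Stack the attenuation images and invert the 2x2 mixing matrix per pixel."""
    measurements = np.stack([x_high, x_low], axis=-1)       # (H, W, 2)
    materials = measurements @ np.linalg.inv(A).T           # (H, W, 2) material maps
    return materials[..., 0], materials[..., 1]             # water, bone

x_high = np.random.rand(64, 64)
x_low = np.random.rand(64, 64)
water, bone = direct_decomposition(x_high, x_low)
```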
We present a simple, fully-convolutional model for realtime instance segmentation that achieves 29.8 mAP on MS COCO at 33.5 fps evaluated on a single Titan Xp, which is significantly faster than any previous competitive approach. Moreover, we obtain this result after training on only one GPU. We accomplish this by breaking instance segmentation into two parallel subtasks: (1) generating a set of prototype masks and (2) predicting per-instance mask coefficients. Then we produce instance masks by linearly combining the prototypes with the mask coefficients. We find that because this process doesn't depend on repooling, this approach produces very high-quality masks and exhibits temporal stability for free. Furthermore, we analyze the emergent behavior of our prototypes and show they learn to localize instances on their own in a translation variant manner, despite being fully-convolutional. Finally, we also propose Fast NMS, a drop-in 12 ms faster replacement for standard NMS that only has a marginal performance penalty.
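The prototype/coefficient assembly step described above is a linear combination followed by a sigmoid; the sketch below shows only that step with assumed shapes, and omits the per-box cropping and thresholding the method also performs.

```python
# Sketch of assembling instance masks from prototypes and per-instance coefficients.
import torch

def assemble_masks(prototypes, coefficients):
    """prototypes: (H, W, k) shared masks; coefficients: (n_instances, k)."""
    masks = torch.einsum('hwk,nk->nhw', prototypes, coefficients)
    return torch.sigmoid(masks)              # (n_instances, H, W) soft masks

protos = torch.rand(138, 138, 32)            # k = 32 prototypes (size is an assumption)
coeffs = torch.randn(10, 32)                 # 10 detected instances
instance_masks = assemble_masks(protos, coeffs)
```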
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
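The first insight can be sketched as masked average pooling of support features into a dynamic class center that re-weights query features; the dimensions and the cosine-similarity weighting below are assumptions for illustration, not the paper's exact module.

```python
# Hedged sketch of mask-based dynamic class centers re-weighting query features.
import torch
import torch.nn.functional as F

def dynamic_class_center(support_feats, support_masks):
    """support_feats: (n_support, C, H, W); support_masks: (n_support, H, W) in {0, 1}."""
    masks = support_masks.unsqueeze(1)                                 # (n, 1, H, W)
    pooled = (support_feats * masks).sum(dim=(2, 3)) / masks.sum(dim=(2, 3)).clamp(min=1e-6)
    return pooled.mean(dim=0)                                          # (C,) class center

def reweight_query(query_feats, center):
    """query_feats: (C, H, W); scale each location by its similarity to the center."""
    sim = F.cosine_similarity(query_feats, center[:, None, None], dim=0)   # (H, W)
    return query_feats * sim.clamp(min=0).unsqueeze(0)

support_feats = torch.rand(5, 256, 32, 32)
support_masks = (torch.rand(5, 32, 32) > 0.5).float()
center = dynamic_class_center(support_feats, support_masks)
query = reweight_query(torch.rand(256, 32, 32), center)
```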
For Prognostics and Health Management (PHM) of Lithium-ion (Li-ion) batteries, many models have been established to characterize their degradation process. Existing empirical or physical models can reveal important information about the degradation dynamics. However, there are no general and flexible methods to fuse the information represented by those models. The Physics-Informed Neural Network (PINN) is an efficient tool for fusing empirical or physical dynamic models with data-driven models. To take full advantage of various information sources, we propose a model fusion scheme based on PINN. It is implemented by developing a semi-empirical, semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries. When there is little prior knowledge about the dynamics, we leverage the data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic models. The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework. Moreover, an uncertainty-based adaptive weighting method is employed to balance the multiple learning tasks when training the PINN. The proposed methods are verified on a public dataset of Li-ion Phosphate (LFP)/graphite batteries.
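Two of the ingredients above, a PDE residual computed with autograd and uncertainty-based adaptive weighting of the data and physics losses, can be sketched on a toy problem; the PDE u_t = a * u_xx, the weighting form (learned log-variances), and all hyperparameters are assumptions, not the paper's semi-empirical degradation model.

```python
# Minimal PINN-style training loop with uncertainty-based adaptive loss weighting.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))   # surrogate u(t, x)
log_var_data = torch.zeros((), requires_grad=True)
log_var_pde = torch.zeros((), requires_grad=True)
a = 0.1                                     # assumed-known coefficient of the toy PDE

def pde_residual(tx):
    tx = tx.requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:]
    return u_t - a * u_xx

opt = torch.optim.Adam(list(net.parameters()) + [log_var_data, log_var_pde], lr=1e-3)
tx_data, u_data = torch.rand(64, 2), torch.rand(64, 1)   # stand-in measurements
tx_colloc = torch.rand(256, 2)                           # collocation points
for _ in range(200):
    opt.zero_grad()
    loss_data = ((net(tx_data) - u_data) ** 2).mean()
    loss_pde = (pde_residual(tx_colloc) ** 2).mean()
    # Adaptive weighting: each task scaled by a learned precision plus a log-variance penalty.
    loss = (torch.exp(-log_var_data) * loss_data + log_var_data
            + torch.exp(-log_var_pde) * loss_pde + log_var_pde)
    loss.backward()
    opt.step()
```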
New architecture GPUs like A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but efficiently utilizing it can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
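The kind of per-instance benchmarking such a tool automates can be sketched by enumerating MIG devices with `nvidia-smi -L` and pinning a PyTorch inference workload to one instance through CUDA_VISIBLE_DEVICES; this is not MIGPerf's code, and the model choice, UUID parsing, and timing loop are illustrative assumptions.

```python
# Hedged sketch: run a small inference benchmark on one MIG instance.
import os
import re
import subprocess
import time

def list_mig_uuids():
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout
    # MIG devices are listed with UUIDs such as "MIG-xxxx"; the regex assumes that format.
    return re.findall(r"UUID:\s*(MIG-[\w-]+)", out)

def benchmark_on(mig_uuid, iters=100):
    os.environ["CUDA_VISIBLE_DEVICES"] = mig_uuid     # must be set before CUDA is initialized
    import torch
    import torchvision
    model = torchvision.models.resnet50().cuda().eval()
    x = torch.rand(8, 3, 224, 224, device="cuda")
    with torch.no_grad():
        for _ in range(10):                            # warm-up
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return iters * x.shape[0] / (time.time() - start)  # images per second

if __name__ == "__main__":
    uuids = list_mig_uuids()
    if uuids:
        print(uuids[0], benchmark_on(uuids[0]), "img/s")
```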