Creating artificial social intelligence, i.e., algorithms that can understand the nuances of multi-person interactions, is an exciting emerging challenge for processing facial expressions and gestures in multimodal videos. Recent multimodal methods have set the state of the art on many tasks, but have difficulty modeling the complex face-to-face conversational dynamics of social interactions, particularly in a self-supervised setting. In this paper, we propose Face-to-Face Contrastive Learning (F2F-CL), a graph neural network designed to model social interactions using factorized nodes that contextualize multimodal face-to-face interaction along the boundaries of speaking turns. With the F2F-CL model, we propose contrastive learning between the factorized nodes of different speaking turns within the same video. We experimentally evaluate on the challenging Social-IQ dataset and show state-of-the-art results.
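The turn-level contrastive objective can be illustrated with a generic InfoNCE-style loss. This is a minimal numpy sketch of contrastive learning in general, not the paper's actual F2F-CL objective; the embeddings, dimensions, and the noise-based "positive" below are illustrative assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor embedding: pull the positive close,
    push the negatives away, with cosine similarity as the score."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # Softmax cross-entropy with the positive at index 0.
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

rng = np.random.default_rng(0)
turn_a = rng.normal(size=16)                      # node from one speaking turn
turn_a_view = turn_a + 0.01 * rng.normal(size=16) # another view of the same turn
other_turns = [rng.normal(size=16) for _ in range(5)]  # nodes from other turns

loss_matched = info_nce(turn_a, turn_a_view, other_turns)
loss_mismatched = info_nce(turn_a, other_turns[0], other_turns[1:])
print(f"matched: {loss_matched:.3f}  mismatched: {loss_mismatched:.3f}")
```

A well-matched pair yields a loss near zero, while treating an unrelated turn as the positive yields a substantially larger loss.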
translated by Google Translate
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
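Patch-based training, the survey's most common workaround for oversized samples, amounts to cropping fixed-size windows from each image and batching those instead of the full volume. A minimal sketch of the cropping step (sizes and counts are arbitrary choices, not taken from the survey):

```python
import numpy as np

def sample_patches(image, patch_size, n_patches, rng):
    """Randomly crop fixed-size patches from an image that is too large
    to be processed at once, so a model can train on the patches."""
    h, w = image.shape[:2]
    ph, pw = patch_size
    patches = []
    for _ in range(n_patches):
        y = rng.integers(0, h - ph + 1)   # top-left corner of the crop
        x = rng.integers(0, w - pw + 1)
        patches.append(image[y:y + ph, x:x + pw])
    return np.stack(patches)

rng = np.random.default_rng(42)
large_scan = rng.random((2048, 2048))     # stand-in for a large 2D slice
batch = sample_patches(large_scan, (256, 256), n_patches=8, rng=rng)
print(batch.shape)   # (8, 256, 256)
```

The same idea extends to 3D volumes by adding a depth coordinate, or, as the survey notes, by treating a 3D task as a series of 2D slices.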
Conventional cameras capture image irradiance on a sensor and convert it to RGB images using an image signal processor (ISP). The images can then be used for photography or visual computing tasks in a variety of applications, such as public safety surveillance and autonomous driving. One can argue that since RAW images contain all the captured information, the conversion of RAW to RGB using an ISP is not necessary for visual computing. In this paper, we propose a novel $\rho$-Vision framework to perform high-level semantic understanding and low-level compression using RAW images, without the ISP subsystem used for decades. Considering the scarcity of available RAW image datasets, we first develop an unpaired CycleR2R network based on unsupervised CycleGAN to train modular unrolled ISP and inverse ISP (invISP) models using unpaired RAW and RGB images. We can then flexibly generate simulated RAW images (simRAW) using any existing RGB image dataset and finetune different models originally trained for the RGB domain to process real-world camera RAW images. We demonstrate object detection and image compression capabilities in the RAW domain using a RAW-domain YOLOv3 and a RAW image compressor (RIC) on snapshots from various cameras. Quantitative results reveal that RAW-domain task inference provides better detection accuracy and compression compared to RGB-domain processing. Furthermore, the proposed $\rho$-Vision generalizes across various camera sensors and different task-specific models. An additional advantage of the proposed $\rho$-Vision framework, which eliminates the ISP, is the potential reduction in computation and processing time.
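The ISP/invISP pairing can be pictured with a toy, modular "unrolled" pipeline in which each stage is invertible in closed form. This is only a conceptual sketch: the paper's stages and parameters are learned from unpaired data via CycleR2R, whereas the white-balance gains and gamma below are made-up constants.

```python
import numpy as np

WB_GAINS = np.array([2.0, 1.0, 1.5])   # illustrative per-channel gains
GAMMA = 2.2                            # illustrative tone-curve exponent

def isp(raw):
    """Toy forward ISP: white balance, then gamma compression."""
    srgb_linear = np.clip(raw * WB_GAINS, 0.0, 1.0)
    return srgb_linear ** (1.0 / GAMMA)

def inv_isp(rgb):
    """Toy inverse ISP (invISP): undo gamma, then undo white balance."""
    srgb_linear = rgb ** GAMMA
    return srgb_linear / WB_GAINS

raw = np.array([[0.1, 0.2, 0.3],
                [0.4, 0.5, 0.2]])      # two pixels, RGB-like RAW triplets
round_trip = inv_isp(isp(raw))
print(np.allclose(round_trip, raw))
```

In this miniature, invISP recovers the RAW values exactly as long as no channel was clipped, which is the property that lets simRAW images be synthesized from existing RGB datasets.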
Emotion-cause pair extraction (ECPE), which aims to extract emotion clauses and their corresponding cause clauses, has recently received growing attention. Previous methods sequentially encode features in a specified order: they first encode the emotion and cause features for clause extraction and then combine them for pair extraction. This leads to an imbalance in inter-task feature interaction, where features extracted later have no direct contact with the former. To address this issue, we propose a novel Pair-Based Joint Encoding (PBJE) network, which generates pair and clause features simultaneously in a joint feature-encoding manner to model the causal relationship between clauses. PBJE can balance the information flow among emotion clauses, cause clauses, and pairs. From a multi-relational perspective, we construct a heterogeneous undirected graph and apply a Relational Graph Convolutional Network (RGCN) to capture the various relationships between clauses and between pairs and clauses. Experimental results show that PBJE achieves state-of-the-art performance on the Chinese benchmark corpus.
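The key trait of an RGCN is that each relation type (e.g., clause-clause vs. pair-clause) gets its own weight matrix before aggregation. A minimal numpy sketch of one such layer, assuming a tiny made-up graph rather than the paper's actual PBJE architecture:

```python
import numpy as np

def rgcn_layer(x, edges_by_relation, weights, self_weight):
    """One relational graph convolution step: messages are transformed by
    a relation-specific weight matrix, mean-aggregated per destination
    node, added to a self-loop term, and passed through a ReLU."""
    out = x @ self_weight
    for rel, edges in edges_by_relation.items():
        w = weights[rel]
        for dst in range(x.shape[0]):
            srcs = [s for s, d in edges if d == dst]
            if srcs:
                out[dst] += np.mean([x[s] @ w for s in srcs], axis=0)
    return np.maximum(out, 0.0)

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))          # hypothetical: 2 clause + 2 pair nodes
weights = {"clause-clause": rng.normal(size=(8, 8)),
           "pair-clause":   rng.normal(size=(8, 8))}
edges = {"clause-clause": [(0, 1), (1, 0)],
         "pair-clause":   [(2, 0), (3, 1)]}
h = rgcn_layer(x, edges, weights, self_weight=np.eye(8))
print(h.shape)   # (4, 8)
```

Separating the weight matrices per relation is what lets one layer treat "a pair talking to its clauses" differently from "a clause talking to another clause".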
Pessimism is of great importance in offline reinforcement learning (RL). One broad category of offline RL algorithms fulfills pessimism by explicit or implicit behavior regularization. However, most of them consider only policy divergence as behavior regularization, ignoring how the offline state distribution differs from that of the learning policy, which may lead to under-pessimism for some states and over-pessimism for others. To account for this problem, we propose a principled algorithmic framework for offline RL, called \emph{State-Aware Proximal Pessimism} (SA-PP). The key idea of SA-PP is to leverage the discounted stationary state distribution ratios between the learning policy and the offline dataset to modulate the degree of behavior regularization in a state-wise manner, so that pessimism can be implemented in a more appropriate way. We first provide theoretical justification for the superiority of SA-PP over previous algorithms, demonstrating that SA-PP produces a lower suboptimality upper bound in a broad range of settings. Furthermore, we propose a new algorithm named \emph{State-Aware Conservative Q-Learning} (SA-CQL) by building SA-PP upon the representative CQL algorithm, with the help of DualDICE for estimating the discounted stationary state distribution ratios. Extensive experiments on standard offline RL benchmarks show that SA-CQL outperforms the popular baselines on a large portion of the benchmarks and attains the highest average return.
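The state-wise modulation can be sketched in a few lines: scale the regularization strength at each state by the ratio of the learning policy's discounted state distribution to the dataset's. This is the SA-PP idea in miniature only; in the paper the ratio is estimated with DualDICE rather than given exactly, and the 4-state distributions below are hypothetical.

```python
import numpy as np

def state_aware_penalty(d_pi, d_data, base_alpha=1.0):
    """Per-state behavior-regularization strength, scaled by the
    distribution ratio d_pi(s) / d_data(s): states the policy visits
    often but the dataset covers poorly get a larger penalty."""
    ratio = d_pi / np.maximum(d_data, 1e-8)   # guard against zero coverage
    return base_alpha * ratio

# Hypothetical 4-state MDP: state 2 is rarely covered by the dataset but
# visited often by the learning policy; state 3 is the opposite.
d_pi   = np.array([0.25, 0.25, 0.40, 0.10])
d_data = np.array([0.30, 0.30, 0.05, 0.35])
alpha = state_aware_penalty(d_pi, d_data)
print(alpha)   # largest penalty at state 2, smallest at state 3
```

A uniform penalty would treat all four states alike, which is exactly the under-/over-pessimism the abstract describes; the ratio-weighted version concentrates pessimism where dataset coverage is thin.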
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g., geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by, and receiving contributions from, research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
Parametric 3D body models such as SMPL represent only minimally clothed people and are hard to extend to clothing because they have a fixed mesh topology and resolution. To address these limitations, recent work uses implicit surfaces or point clouds to model clothing. While not limited by topology, such methods still struggle to model clothing that deviates far from the body, such as skirts and dresses. This is because they rely on the body to canonicalize the clothed surface by reposing it to a reference shape. Unfortunately, this process is poorly defined when clothing is far from the body. Additionally, they use linear blend skinning to pose the garment and tie the skinning weights to the underlying body parts. In contrast, we model clothing deformation in a local coordinate space without canonicalization. We also relax the skinning weights so that multiple body parts can influence the surface. Specifically, we extend point-based methods with a coarse stage that replaces canonicalization with a learned, pose-independent "coarse shape" that can capture the rough surface geometry of clothing such as skirts. We then refine this with a network that infers linear-blend-skinning weights and pose-dependent displacements from the coarse representation. The approach works for garments that both conform to and deviate from the body. We demonstrate its usefulness by learning person-specific avatars from examples and then showing how they can be animated in new poses and motions. We also show that the method can learn directly from raw scans with missing data, greatly simplifying the process of creating realistic avatars. The code is available for research purposes at {\small \url{https://qianlim.github.io/skirt}}.
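The linear blend skinning step that the method relaxes can be written compactly: each point's posed position is a weighted blend of per-bone rigid transforms. A minimal numpy sketch with made-up bones and weights (the paper's weights are learned and spread across multiple body parts, which is the relaxation being described):

```python
import numpy as np

def linear_blend_skinning(points, skin_weights, bone_transforms):
    """Pose points by blending per-bone 4x4 rigid transforms with
    per-point skinning weights, then applying the blended transform."""
    n, _ = skin_weights.shape
    homog = np.concatenate([points, np.ones((n, 1))], axis=1)      # (n, 4)
    blended = np.einsum("nb,bij->nij", skin_weights, bone_transforms)
    posed = np.einsum("nij,nj->ni", blended, homog)
    return posed[:, :3]

def translation(t):
    m = np.eye(4)
    m[:3, 3] = t
    return m

bones = np.stack([translation([0.0, 0.0, 0.0]),    # bone 0: stays put
                  translation([1.0, 0.0, 0.0])])   # bone 1: shifts +x
points = np.array([[0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
weights = np.array([[1.0, 0.0],     # point 0 follows bone 0 only
                    [0.5, 0.5]])    # point 1 blends both bones equally
posed = linear_blend_skinning(points, weights, bones)
print(posed)   # point 1 moves halfway: [0.5, 1.0, 0.0]
```

When a garment point far from the body is hard-assigned to a single bone, it inherits that bone's full motion; letting several bones share the weight, as here for point 1, is what allows loose clothing like skirts to move more plausibly.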
IceCube, a cubic-kilometer array of optical sensors deployed 1.45 km to 2.45 km below the surface of the Antarctic ice sheet, detects atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with a graph neural network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum-likelihood techniques used in IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to current IceCube methods. Alternatively, the GNN reduces the FPR by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%-20% compared to current maximum-likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
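Representing an event as a point-cloud graph typically means treating each sensor hit as a node and connecting nearby hits. A generic k-nearest-neighbour construction is sketched below; the positions are random stand-ins, and IceCube's actual graph-building choices may differ.

```python
import numpy as np

def knn_edges(positions, k):
    """Build a directed k-nearest-neighbour edge list over hit positions,
    the kind of graph a GNN can then consume for an event."""
    n = positions.shape[0]
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-edges
    nbrs = np.argsort(d, axis=1)[:, :k]    # k closest hits per node
    return [(i, int(j)) for i in range(n) for j in nbrs[i]]

rng = np.random.default_rng(7)
hits = rng.normal(size=(6, 3))             # 6 photon hits with x, y, z
edges = knn_edges(hits, k=2)
print(len(edges))    # 6 nodes * 2 neighbours = 12 directed edges
```

Node features in practice would carry more than position (e.g., hit time and charge); the graph structure is what lets the network cope with the detector's irregular geometry.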
To exploit the high temporal correlation among video frames of the same scene, the current frame is predicted from already-encoded reference frames using block-based motion estimation and compensation techniques. While this approach can efficiently exploit the translational motion of moving objects, it is susceptible to other types of affine motion and to object occlusion/deocclusion. Recently, deep learning has been used to model the high-level structure of human pose in specific actions from short videos, and then the pose is predicted at future times by generating virtual frames using generative adversarial networks (GANs). Modeling the high-level structure of human pose is thus able to exploit semantic correlation by predicting human behavior and determining its trajectory. Video surveillance applications will benefit, as large volumes of stored surveillance data can be compressed by estimating human pose trajectories and generating future frames through semantic correlation. This paper explores a new way of video coding by modeling human pose from already-encoded frames and using the generated frame at the current time. The proposed approach is expected to overcome the limitations of conventional backward-referenced frames by predicting the blocks containing moving objects with lower residuals. Experimental results show that the proposed approach can achieve, on average, up to 2.83 dB PSNR gain and 25.93% bit-rate savings for high-motion video sequences.
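The two reported figures are standard codec metrics, and both are simple to compute. A short sketch of PSNR and percent bit-rate saving; the sample pixel values and bit counts below are made up for illustration, not taken from the paper's experiments.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, the quality metric behind the
    reported 2.83 dB gain."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def bitrate_saving(bits_baseline, bits_proposed):
    """Percent bit-rate saving of a proposed codec over a baseline."""
    return 100.0 * (bits_baseline - bits_proposed) / bits_baseline

ref = np.full((4, 4), 128, dtype=np.uint8)  # tiny 4x4 gray test image
rec = ref.copy()
rec[0, 0] = 138                             # one pixel off by 10
print(round(psnr(ref, rec), 2))             # about 40.17 dB
print(bitrate_saving(1000, 740.7))          # about 25.93% saving
```

Higher PSNR at the same bit rate, or fewer bits at the same PSNR, are the two ways a pose-predicted reference frame can pay off.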
High-speed, high-resolution stereoscopic (H2-Stereo) video allows us to perceive dynamic 3D content at fine granularity. However, acquiring H2-Stereo video with commodity cameras remains challenging. Existing spatial super-resolution or temporal frame-interpolation methods provide compromised solutions that lack temporal or spatial details, respectively. To alleviate this problem, we propose a dual-camera system, in which one camera captures high-spatial-resolution low-frame-rate (HSR-LFR) videos with rich spatial details, and the other captures low-spatial-resolution high-frame-rate (LSR-HFR) videos with smooth temporal details. We then devise a Learned Information Fusion network (LIFnet) that exploits the cross-camera redundancies to enhance both camera views and thereby reconstruct the H2-Stereo video effectively. We utilize a disparity network to transfer spatiotemporal information across views, even in large-disparity scenes, based on which we propose disparity-guided flow-based warping for the LSR-HFR view and complementary warping for the HSR-LFR view. A multi-scale fusion method in the feature domain is proposed to minimize occlusion-induced warping ghosts and holes in the HSR-LFR view. LIFnet is trained in an end-to-end manner using our collected high-quality stereo video dataset from YouTube. Extensive experiments demonstrate that our model outperforms existing state-of-the-art methods on both synthetic data and camera-captured real data. Ablation studies explore various aspects, including spatiotemporal resolution, camera baseline, camera desynchronization, long/short exposures, and applications, to fully understand its capability for potential applications.
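The warping operation underlying the fusion can be pictured as backward sampling with a per-pixel flow field. This is only a pixel-level, nearest-neighbour sketch of the generic operation; the actual LIFnet warps features rather than pixels and uses learned, disparity-guided flows.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp an image: each output pixel samples the source
    location offset by its flow vector (nearest-neighbour, clamped at
    the border)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return image[src_y, src_x]

image = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0            # every output pixel samples one column right
warped = warp_with_flow(image, flow)
print(warped[0])              # row 0 shifted left by one: [1. 2. 3. 3.]
```

Pixels whose source location falls outside the frame, or behind an occluder, are exactly the "ghosts and holes" that the multi-scale feature-domain fusion is designed to suppress.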