The emergence of machine-learning models that surpass human decision-making ability in complex domains has initiated a movement toward building AI systems that interact with humans. Many building blocks are essential to this activity, with a central one being the algorithmic characterization of human behavior. While much of the existing work focuses on aggregate human behavior, an important long-range goal is to develop behavioral models that specialize to individual people and can differentiate among them. To formalize this process, we study the problem of behavioral stylometry, in which the task is to identify a decision-maker from their decisions alone. We present a transformer-based approach to behavioral stylometry in the context of chess, where the task is to identify the player who played a set of games. Our method operates in a few-shot classification framework and can correctly identify a player from among thousands of candidates with 98% accuracy given only 100 labeled games. Even when trained on amateur games, our method generalizes to out-of-distribution samples of Grandmaster players, despite the vast skill difference between amateurs and world-class players. Finally, we consider more broadly what our resulting embeddings reveal about human style in chess, as well as the potential ethical implications of powerful methods for identifying individuals from behavioral data.
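To make the few-shot setup concrete, here is a minimal sketch of prototype-style identification: embed each game, average a player's reference-game embeddings into a prototype, and match a query set by cosine similarity. The `GameEncoder`, its dimensions, and the random inputs are illustrative stand-ins, not the paper's actual architecture or training objective.

```python
import torch
import torch.nn.functional as F

class GameEncoder(torch.nn.Module):
    """Toy stand-in for the paper's transformer game encoder."""
    def __init__(self, vocab_size=4096, dim=128):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        layer = torch.nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = torch.nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, moves):                        # moves: (n_games, seq_len) tokens
        h = self.encoder(self.embed(moves))
        return F.normalize(h.mean(dim=1), dim=-1)    # one unit vector per game

def player_prototype(encoder, games):
    """Average a player's labeled reference-game embeddings into a prototype."""
    with torch.no_grad():
        return F.normalize(encoder(games).mean(dim=0), dim=-1)

def identify(encoder, query_games, prototypes):
    """Return the index of the candidate player closest to the query games."""
    with torch.no_grad():
        q = F.normalize(encoder(query_games).mean(dim=0), dim=-1)
    return (prototypes @ q).argmax().item()          # cosine-similarity match

# Toy usage: 3 candidate players, 100 random reference "games" of 40 moves each.
enc = GameEncoder().eval()
protos = torch.stack([player_prototype(enc, torch.randint(0, 4096, (100, 40)))
                      for _ in range(3)])
print(identify(enc, torch.randint(0, 4096, (10, 40)), protos))
```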
Diagnosis and treatment management of head and neck squamous cell carcinoma (HNSCC) is guided by routine diagnostic head and neck computed tomography (CT) scans to identify tumor and lymph node features. Extracapsular extension (ECE) is a strong predictor of patient survival outcomes in HNSCC. Detecting the occurrence of ECE is essential, as it changes staging and management for the patient. Current clinical ECE detection relies on visual identification by radiologists and pathological confirmation. Machine learning (ML)-based ECE diagnosis has shown high potential in recent years. However, in most ML-based ECE diagnosis studies, manual annotation of lymph node regions is a required data preprocessing step, and this manual annotation process is time-consuming, labor-intensive, and error-prone. Therefore, in this paper we propose a Gradient Mapping Guided Explainable Network (GMGenet) framework that performs ECE identification automatically without requiring annotated lymph node region information. The gradient-weighted class activation mapping (Grad-CAM) technique is applied to guide the deep-learning algorithm to focus on the regions highly related to ECE, and informative volumes of interest (VOIs) are extracted without labeled lymph node region information. In evaluation, the proposed method is trained and tested using cross-validation, achieving test accuracy and AUC of 90.2% and 91.1%, respectively. The presence or absence of ECE has been analyzed and correlated with gold-standard histopathological findings.
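As a rough illustration of the Grad-CAM step the abstract mentions, the sketch below computes a class activation map for a generic 2D CNN. The ResNet backbone, target layer, and random input are assumed stand-ins; GMGenet's actual backbone and VOI-extraction pipeline are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2).eval()     # assumed stand-in for an ECE classifier
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)            # stand-in for a CT slice
model(x)[0, 1].backward()                  # gradient of the "ECE present" logit

w = grads["g"].mean(dim=(2, 3), keepdim=True)           # per-channel importance
cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True))  # weighted activation map
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
# High-activation regions of `cam` could then seed the volumes of interest.
```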
Temporally consistent depth estimation is crucial for real-time applications such as augmented reality. While stereo depth estimation has received substantial attention that has led to improvements on a per-frame basis, relatively little work has focused on temporal consistency across frames. Indeed, based on our analysis, current stereo depth estimation techniques still suffer from poor temporal consistency. Stabilizing depth in dynamic scenes is challenging due to concurrent object and camera motion, and in an online setting this process is further aggravated because only past frames are available. In this paper, we present a technique for producing temporally consistent depth estimates in dynamic scenes in an online setting. Our network augments current per-frame stereo networks with novel motion and fusion networks. The motion network accounts for object and camera motion by predicting a per-pixel SE3 transformation, and the fusion network improves consistency by aggregating the current and previous predictions with regressed weights. We conduct extensive experiments across varied datasets (synthetic, outdoor, indoor, and medical). In both zero-shot generalization and domain fine-tuning, we demonstrate that our proposed approach outperforms competing methods in temporal stability and per-frame accuracy, both quantitatively and qualitatively. Our code will be made available online.
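The fusion step described above can be illustrated with a minimal sketch: a small network regresses a per-pixel weight that blends the current depth estimate with the motion-compensated previous one. The architecture below is an assumption for illustration, not the paper's actual fusion network.

```python
import torch

class FusionNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Input channels: current depth, warped previous depth, their difference.
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 1, 3, padding=1), torch.nn.Sigmoid(),
        )

    def forward(self, depth_curr, depth_prev_warped):
        feats = torch.cat(
            [depth_curr, depth_prev_warped, depth_curr - depth_prev_warped], dim=1)
        w = self.net(feats)                       # per-pixel weight in (0, 1)
        return w * depth_curr + (1 - w) * depth_prev_warped

fusion = FusionNet()
d_t = torch.rand(1, 1, 120, 160)       # current per-frame estimate (stand-in)
d_prev = torch.rand(1, 1, 120, 160)    # previous estimate, assumed already
                                       # warped by the motion network's SE3 field
print(fusion(d_t, d_prev).shape)       # torch.Size([1, 1, 120, 160])
```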
AI systems that can capture human-like behavior are becoming increasingly useful in situations where humans may want to learn from these systems, collaborate with them, or engage with them as partners. In order to develop human-oriented AI systems, the problem of predicting human actions (as opposed to predicting optimal actions) has received considerable attention. Existing work has focused on capturing human behavior in an aggregate sense, which potentially limits the benefit any particular individual can gain from interacting with these systems. We extend this line of work by developing highly accurate predictive models of human behavior in chess. Chess is a rich domain for exploring human-AI interaction because it combines a unique set of properties: AI systems achieved superhuman performance years ago, yet humans still interact closely with them, both as opponents and as preparation tools, and there is a large corpus of recorded data on individual players' games. Starting with Maia, a version of AlphaZero trained on a population of human players, we demonstrate that we can significantly improve the accuracy of predicting a particular player's moves by applying a series of fine-tuning methods. Furthermore, our personalized models can be used to perform stylometry (predicting who made a given set of moves), showing that they capture human decision-making at the individual level. Our work demonstrates a way to make AI systems better aligned with the behavior of individuals, which could lead to substantial improvements in human-AI interaction.
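A minimal sketch of the fine-tuning recipe: start from a population-level move-prediction model and continue training on a single player's games with a small learning rate, optionally freezing shared layers. `PolicyNet`, the feature and move-vocabulary sizes, and the random data are hypothetical stand-ins, not Maia's actual code.

```python
import torch

class PolicyNet(torch.nn.Module):
    """Toy move-prediction model: board features -> distribution over moves."""
    def __init__(self, n_features=773, n_moves=1968):   # sizes are assumptions
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(n_features, 512), torch.nn.ReLU())
        self.head = torch.nn.Linear(512, n_moves)

    def forward(self, x):
        return self.head(self.body(x))

model = PolicyNet()                # pretend this is loaded from a Maia checkpoint
for p in model.body.parameters():  # optionally freeze population-level layers
    p.requires_grad = False

opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# One player's positions and the moves they actually chose (random stand-ins).
positions = torch.randn(256, 773)
moves = torch.randint(0, 1968, (256,))

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(positions), moves)
    loss.backward()
    opt.step()
```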
[Figure 1 caption; example query: "The little girl jumps back up after falling."] We consider localizing moments in video with natural language and demonstrate that incorporating local and global video features is important for this task. To train and evaluate our model, we collect the Distinct Describable Moments (DiDeMo) dataset, which consists of over 40,000 pairs of localized video moments and corresponding natural language.
Purpose: Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. Methods: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of the patient skull and the surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints at the object level. Results: We validate TAToo both on simulation data, where ground truth motion is available, and on anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for the skull and the drill, respectively, with rotation errors below 1°. We further illustrate how TAToo may be used in a surgical navigation setting. Conclusion: We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo directly predicts the motion from surgical videos, without the need for any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach the 1 mm clinical accuracy goal desired for surgical applications in the skull base.
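To ground the object-level rigid-motion idea, here is a classical sketch: given corresponded 3D points on an object (e.g., skull or drill) in two consecutive frames, the Kabsch/Procrustes solution recovers the SE(3) transform in closed form. TAToo embeds a probabilistic, end-to-end differentiable variant of this inside its pipeline; the code below is only the textbook building block.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that dst ~= src @ R.T + t (both Nx3)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation + translation from noisy correspondences.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5]) \
        + rng.normal(scale=1e-3, size=(500, 3))
R, t = rigid_transform(pts, moved)
print(np.allclose(R, R_true, atol=1e-2), t.round(2))
```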
We present temporally layered architecture (TLA), a biologically inspired system for temporally adaptive distributed control. TLA layers a fast and a slow controller together to achieve temporal abstraction, allowing each layer to focus on a different time-scale. Our design draws on the architecture of the human brain, which executes actions at different timescales depending on the environment's demands. Such distributed control design is widespread across biological systems because it increases survivability and accuracy in both certain and uncertain environments. We demonstrate that TLA offers many advantages over existing approaches, including persistent exploration, adaptive control, explainable temporal behavior, compute efficiency, and distributed control. We present two different algorithms for training TLA: (a) closed-loop control, where the fast controller is trained over a pre-trained slow controller and decides whether to "act-or-not" at each timestep, allowing better exploration for the fast controller; and (b) partially open-loop control, where the slow controller is trained over a pre-trained fast controller and either picks a temporally extended action or defers the next n actions to the fast controller. We evaluate our method on a suite of continuous control tasks and demonstrate the advantages of TLA over several strong baselines.
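A minimal sketch of the closed-loop "act-or-not" pattern, assuming random stand-in policies: the slow controller proposes an action on a coarse clock, and the fast controller decides at every timestep whether to override it. TLA's actual training procedures are not shown.

```python
import random

def slow_policy(obs):
    """Stand-in slow controller: proposes a temporally extended action."""
    return random.uniform(-1.0, 1.0)

def fast_policy(obs, held_action):
    """Stand-in fast controller: 'act-or-not' gate at every timestep."""
    act = random.random() < 0.2                # decide whether to intervene
    return random.uniform(-1.0, 1.0) if act else held_action

held = None
for t in range(20):
    obs = t                                    # placeholder observation
    if t % 5 == 0:                             # slow controller's coarse clock
        held = slow_policy(obs)
    action = fast_policy(obs, held)            # fast controller may override
    # env.step(action) would go here
```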
The xView2 competition and xBD dataset spurred significant advancements in overhead building damage detection, but the competition's pixel-level scoring can lead to reduced solution performance in areas with tight clusters of buildings or uninformative context. We seek to advance automatic building damage assessment for disaster relief by proposing an auxiliary challenge to the original xView2 competition. This new challenge involves a new dataset and metrics indicating solution performance when damage is more local and limited than in xBD. Our challenge measures a network's ability to identify individual buildings and their damage level without excessive reliance on the buildings' surroundings. Methods that succeed on this challenge will provide more fine-grained, precise damage information than original xView2 solutions. The performance of the best xView2 networks dropped noticeably on our new limited/local damage detection task. The common causes of failure we observed are that (1) building objects and their classifications are not separated well, and (2) when they are, the classification is strongly biased by surrounding buildings and other damage context. We therefore release our augmented version of the dataset with additional object-level scoring metrics (https://gitlab.kitware.com/dennis.melamed/xfbd) to test the independence and separability of building objects, alongside the pixel-level performance metrics of the original competition. We also experiment with new baseline models that improve the independence and separability of building damage predictions. Our results indicate that building damage detection is not a fully solved problem, and we invite others to use and build on our dataset augmentations and metrics.
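One plausible reading of object-level scoring, sketched under assumptions: match predicted building boxes to ground truth by IoU, then score damage labels only on matched objects. The released metrics at the GitLab link above are the authoritative implementation; this toy version only illustrates the idea.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def object_level_score(preds, gts, iou_thresh=0.5):
    """preds/gts: lists of (box, label). Returns (detection recall, label accuracy)."""
    matched, correct, used = 0, 0, set()
    for g_box, g_label in gts:
        best, best_i = None, 0.0
        for i, (p_box, p_label) in enumerate(preds):
            if i in used:
                continue
            v = iou(g_box, p_box)
            if v > best_i:
                best, best_i = i, v
        if best is not None and best_i >= iou_thresh:
            used.add(best)
            matched += 1
            correct += preds[best][1] == g_label
    return matched / max(len(gts), 1), correct / max(matched, 1)

gts = [((0, 0, 10, 10), "minor"), ((20, 20, 30, 30), "destroyed")]
preds = [((1, 1, 10, 10), "minor"), ((21, 19, 31, 30), "major")]
print(object_level_score(preds, gts))   # (1.0, 0.5)
```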
Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry. The RNN discriminator's accuracy for each candidate reveals how symmetric the system is under that transformation. This information can be used to create high-level representations that are invariant to all symmetries on a dataset level and to communicate properties of the RL behavior to users. We show in experiments on two simulated RL use cases (a pusher robot and a UAV flying in wind) that our method can determine the symmetries underlying both the environment physics and the trained RL policy.
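A minimal sketch of the discriminator test, with synthetic noise trajectories and a mirror transformation as the candidate symmetry: train a GRU to distinguish original from transformed trajectories; accuracy near chance suggests the system is symmetric under that transformation. Since Gaussian noise is mirror-symmetric, this discriminator should stay near 50%.

```python
import torch

class RNNDiscriminator(torch.nn.Module):
    def __init__(self, state_dim=4, hidden=32):
        super().__init__()
        self.rnn = torch.nn.GRU(state_dim, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, traj):                    # traj: (batch, T, state_dim)
        _, h = self.rnn(traj)
        return self.head(h[-1]).squeeze(-1)     # logit: transformed or not

def mirror(traj):                               # candidate symmetry: flip dim 0
    out = traj.clone()
    out[..., 0] = -out[..., 0]
    return out

disc = RNNDiscriminator()
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
loss_fn = torch.nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 25, 4)               # stand-in trajectory batch
    fake = mirror(torch.randn(64, 25, 4))
    x = torch.cat([real, fake])
    y = torch.cat([torch.zeros(64), torch.ones(64)])
    opt.zero_grad()
    loss = loss_fn(disc(x), y)
    loss.backward()
    opt.step()
# Final accuracy near 0.5 => the candidate transformation is (approximately) a symmetry.
```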
Hidden parameters are latent variables in reinforcement learning (RL) environments that are constant over the course of a trajectory. Understanding what, if any, hidden parameters affect a particular environment can aid both the development and appropriate usage of RL systems. We present an unsupervised method to map RL trajectories into a feature space where distance represents the relative difference in system behavior due to hidden parameters. Our approach disentangles the effects of hidden parameters by leveraging a recurrent neural network (RNN) world model as used in model-based RL. First, we alter the standard world model training algorithm to isolate the hidden parameter information in the world model memory. Then, we use a metric learning approach to map the RNN memory into a space with a distance metric approximating a bisimulation metric with respect to the hidden parameters. The resulting disentangled feature space can be used to meaningfully relate trajectories to each other and analyze the hidden parameter. We demonstrate our approach on four hidden parameters across three RL environments. Finally, we present two methods to help identify and understand the effects of hidden parameters on systems.
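A minimal sketch of the metric-learning step, with the world model replaced by a random stand-in: map a memory vector to a feature space whose pairwise distances are regressed toward a target metric over the hidden parameter (a simplified proxy for the bisimulation metric).

```python
import torch

embed = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8))
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

for step in range(500):
    # Pretend `mem` is the RNN memory after a trajectory with hidden param `theta`.
    theta = torch.rand(128, 1)                       # e.g., unknown mass or friction
    mem = torch.cat([theta.repeat(1, 32), torch.randn(128, 32)], dim=1)

    z = embed(mem)
    d_pred = torch.cdist(z, z)                       # distances in feature space
    d_target = torch.cdist(theta, theta)             # proxy bisimulation target
    loss = ((d_pred - d_target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, distances between embedded trajectories track the hidden-parameter
# difference, so clusters in this space expose distinct underlying dynamics.
```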