Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empower human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to determine if an early cardiac arrest warning system is ready to be integrated into a care setting is very different from the type of explanation required for a loan applicant to help determine the actions they might need to take to make their application successful. Unfortunately, there is a lack of standardization when it comes to properties of explanations: different papers may use the same term to mean different quantities, and different terms to mean the same quantity. This lack of a standardized terminology and categorization of the properties of ML explanations prevents us from both rigorously comparing interpretable machine learning methods and identifying what properties are needed in what contexts. In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties. In doing so, we enable more informed selection of task-appropriate formulations of explanation properties as well as standardization for future work in interpretable machine learning.
Personality refers to the combination of conduct, emotion, motivation, and thinking. To shortlist candidates more effectively, many organizations rely on personality prediction. By grouping applicants according to the personality preferences a role requires, a firm can hire or select the best candidate for the desired job description. A model is created to identify applicants' personality types so that employers may find qualified candidates by examining a person's facial expressions, speech intonation, and resume. The paper also emphasizes detecting changes in employee behavior: employee attitudes and behavior toward each set of questions are examined and analyzed. Here, the K-Modes clustering method is used to predict employee well-being, covering job pressure, the working environment, and relationships with peers, utilizing the OCEAN Model and a CNN algorithm in the AVI-AI administrative system. The findings imply that AVIs can be used for efficient candidate screening with an AI decision agent. Study of this field extends beyond the current exploration and needs to be expanded with deeper models and new configurations that can handle extremely complex operations.
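As a rough illustration of the clustering step, the sketch below implements K-Modes over categorical survey answers; the feature names, data, and cluster count are hypothetical stand-ins, not the study's actual questionnaire or configuration.

```python
# Minimal K-Modes sketch for grouping categorical survey responses.
# Feature names and data are hypothetical, not from the study.
import numpy as np

def k_modes(X, k, n_iters=100, seed=0):
    """Cluster rows of a categorical array X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    modes = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Matching dissimilarity: number of mismatched categorical attributes.
        dist = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = dist.argmin(axis=1)
        new_modes = modes.copy()
        for c in range(k):
            members = X[labels == c]
            if len(members) == 0:
                continue
            # New center: per-attribute mode within the cluster.
            for j in range(X.shape[1]):
                vals, counts = np.unique(members[:, j], return_counts=True)
                new_modes[c, j] = vals[counts.argmax()]
        if np.array_equal(new_modes, modes):
            break
        modes = new_modes
    return labels, modes

# Hypothetical encoded answers: job pressure, work environment, peer relations.
X = np.array([
    ["high", "noisy", "tense"],
    ["low",  "quiet", "friendly"],
    ["high", "noisy", "friendly"],
    ["low",  "quiet", "friendly"],
])
labels, modes = k_modes(X, k=2)
print(labels, modes)
```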
3D tooth segmentation is an important task in digital orthodontics. Several deep learning methods have been proposed for automatic tooth segmentation from 3D dental models or intraoral scans. These methods require annotated 3D intraoral scans, and manually annotating 3D intraoral scans is a laborious task. One approach is to design self-supervised methods that reduce the manual labeling effort. Compared with other types of point cloud data, such as scene point clouds or shape point clouds, 3D tooth point cloud data has a very regular structure and a strong shape prior. We examine how much representative information can be learned from a single 3D intraoral scan. We evaluate this quantitatively with ten different methods, of which six are generic point cloud segmentation methods and the other four are specific to tooth segmentation. Surprisingly, we find that when trained on a single 3D intraoral scan, the Dice score can be as high as 0.86, compared with 0.94 for the full training set. We conclude that segmentation methods can learn a substantial amount of information from a single 3D tooth point cloud scan, for example with the help of data augmentation. We are the first to quantitatively evaluate and demonstrate the representation capability of deep learning methods from a single 3D intraoral scan. This makes it possible to build self-supervised methods for tooth segmentation under extreme data-limitation scenarios by making maximal use of the available data.
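For reference, the Dice scores quoted above can be computed per scan as in the minimal sketch below; the label arrays are toy stand-ins for per-point tooth labels, not the paper's evaluation code.

```python
# Minimal sketch of a mean Dice score over per-point segmentation labels.
import numpy as np

def mean_dice(pred, gt, labels):
    """Mean Dice over the given label set for per-point predictions."""
    scores = []
    for c in labels:
        p, g = pred == c, gt == c
        denom = p.sum() + g.sum()
        if denom == 0:  # label absent in both prediction and ground truth
            continue
        scores.append(2.0 * np.logical_and(p, g).sum() / denom)
    return float(np.mean(scores))

pred = np.array([0, 1, 1, 2, 2, 2])   # hypothetical predicted tooth ids
gt   = np.array([0, 1, 2, 2, 2, 2])   # hypothetical ground-truth ids
print(mean_dice(pred, gt, labels=[0, 1, 2]))
```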
Automatic object recognition in medical images can facilitate medical diagnosis and treatment. In this paper, we automatically segment the supraclavicular nerve in ultrasound images to aid in the injection of peripheral nerve blocks. Nerve blocks are commonly used for pain treatment after surgery, where ultrasound guidance is used to inject a local anesthetic next to the target nerve. Such treatment can block the transmission of pain signals to the brain, which can help improve the rate of recovery from surgery and significantly reduce postoperative opioid requirements. However, ultrasound-guided regional anesthesia (UGRA) requires the anesthesiologist to visually identify the actual nerve position in the ultrasound image. This is a complex task given the low visibility of nerves in ultrasound images and their visual similarity to many adjacent tissues. In this study, we used an automatic nerve detection system for UGRA nerve block treatment. The system can identify the nerve position in ultrasound images using deep learning techniques. We developed a model to capture the features of the nerve by training two deep neural networks with skip connections: two extended U-Net architectures, with and without dilated convolutions. This solution could lead to reliable blockade of the targeted nerve in regional anesthesia.
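To make the architecture concrete, here is a minimal sketch of an encoder-decoder with skip connections and a dilated bottleneck; the channel widths, depth, and input size are illustrative assumptions rather than the paper's exact networks.

```python
# Minimal sketch of a U-Net-style network with skip connections and an
# optional dilated bottleneck. Sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, dilation=1):
    pad = dilation  # keeps spatial size for 3x3 kernels
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=pad, dilation=dilation),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=pad, dilation=dilation),
        nn.ReLU(inplace=True),
    )

class TinyDilatedUNet(nn.Module):
    def __init__(self, dilation=2):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        # Dilated bottleneck enlarges the receptive field without more pooling.
        self.bottleneck = conv_block(32, 64, dilation=dilation)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 upsampled + 32 from skip
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 upsampled + 16 from skip
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel nerve logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

logits = TinyDilatedUNet()(torch.randn(1, 1, 64, 64))  # e.g., a 64x64 ultrasound patch
print(logits.shape)  # torch.Size([1, 1, 64, 64])
```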
For responsible decision making in safety-critical settings, machine learning models must effectively detect and handle edge-case data. Although existing work shows that predictive uncertainty is useful for these tasks, it is not obvious from the literature which uncertainty-aware models are best suited to a given dataset. We therefore compare six uncertainty-aware deep learning models on a set of edge-case tasks: robustness to adversarial attacks as well as out-of-distribution and adversarial detection. We find that the geometry of the data sub-manifold is an important factor in determining the success of the various models. Our findings suggest an interesting direction for research on uncertainty-aware deep learning models.
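One common recipe such comparisons cover is flagging inputs as out-of-distribution when the predictive entropy of an ensemble exceeds a threshold; the sketch below illustrates this with hypothetical probabilities and an arbitrary threshold, not the paper's models or results.

```python
# Minimal sketch of entropy-based out-of-distribution flagging.
# Probabilities and threshold are hypothetical.
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the mean class distribution over ensemble members.

    member_probs: (n_members, n_classes) softmax outputs for one input.
    """
    mean_p = member_probs.mean(axis=0)
    return -np.sum(mean_p * np.log(mean_p + 1e-12))

# Members agree -> low entropy -> treated as in-distribution.
in_dist = np.array([[0.95, 0.03, 0.02],
                    [0.92, 0.05, 0.03]])
# Members disagree -> high entropy -> flagged as an edge case / OOD.
edge    = np.array([[0.80, 0.10, 0.10],
                    [0.05, 0.15, 0.80]])

threshold = 0.8  # chosen for illustration only
for name, probs in [("in_dist", in_dist), ("edge", edge)]:
    h = predictive_entropy(probs)
    print(f"{name}: entropy={h:.2f} ood={h > threshold}")
```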