The National Health and Nutritional Status Survey (NHANSS) is conducted annually by the Ministry of Health in Negara Brunei Darussalam to assess the health and nutritional patterns and characteristics of the population. The main aim of this study was to discover meaningful patterns (groups) in the obese sample of the NHANSS data by applying data reduction and interpretation techniques. The mixed nature of the variables (qualitative and quantitative) in the data set added novelty to the study. Accordingly, the Categorical Principal Component Analysis (CATPCA) technique was chosen to interpret the results. The relationships between obesity and lifestyle factors such as demography, socioeconomic status, physical activity, dietary behavior, and history of blood pressure and diabetes were determined from the principal components generated by CATPCA. The results were validated with a split-sample technique to verify the authenticity of the generated groups. Based on the analysis, two subgroups were found in the data set, and their salient features are reported. These results can inform improvements in healthcare services.
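CATPCA's optimal scaling is typically run in statistical packages such as SPSS; as a rough, minimal Python analogue for mixed qualitative/quantitative data, one can one-hot encode the categorical columns and standardize the numeric ones before an ordinary PCA (a FAMD-style approximation, not true optimal scaling). All column names and values below are hypothetical, not taken from NHANSS.

```python
# A minimal FAMD-style approximation of mixed-data PCA with scikit-learn.
# CATPCA's optimal scaling is not reproduced here; this is only a sketch.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical column layout for an obesity subsample.
categorical = ["sex", "smoker", "activity_level"]
numeric = ["age", "bmi", "systolic_bp"]

pipeline = Pipeline([
    ("encode", ColumnTransformer([
        ("cat", OneHotEncoder(sparse_output=False), categorical),
        ("num", StandardScaler(), numeric),
    ])),
    ("pca", PCA(n_components=2)),
])

df = pd.DataFrame({
    "sex": ["m", "f", "f", "m"],
    "smoker": ["y", "n", "n", "y"],
    "activity_level": ["low", "high", "low", "low"],
    "age": [34, 51, 42, 29],
    "bmi": [31.2, 33.5, 30.1, 35.0],
    "systolic_bp": [128, 141, 135, 150],
})
components = pipeline.fit_transform(df)
print(components)
```

A split-sample check in the spirit of the abstract could then fit the pipeline on each half of the data and compare the resulting component loadings.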
Hyperspectral imaging has become a recent trend in the field of optical imaging systems. Among various other applications, it has been widely used to analyze printed and handwritten documents. This paper proposes an efficient technique for estimating the number of distinct but visually similar inks present in a hyperspectral document image. Our approach is based on unsupervised learning and requires no prior knowledge of the data set. The algorithm was tested on the iVision HHID dataset and achieved results comparable to the state-of-the-art algorithms in the literature. This work can be effective when used in the early stages of forgery detection in hyperspectral document images.
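The abstract does not specify the clustering algorithm; a minimal sketch of one plausible unsupervised approach is to cluster per-pixel spectra for several candidate ink counts and keep the count with the best silhouette score. The spectra below are synthetic stand-ins.

```python
# Estimating the number of inks by clustering pixel spectra (a sketch;
# the paper's actual method may differ).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 300 pixel spectra over 31 bands, drawn from 2 "inks".
ink_a = rng.normal(0.3, 0.02, size=(150, 31))
ink_b = rng.normal(0.6, 0.02, size=(150, 31))
spectra = np.vstack([ink_a, ink_b])

best_k, best_score = None, -1.0
for k in range(2, 6):  # candidate ink counts
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(spectra)
    score = silhouette_score(spectra, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"estimated number of inks: {best_k}")
```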
A major obstacle to the successful deployment of AI-based computer-aided diagnosis (CAD) systems in clinical workflows is their lack of transparent decision-making. While commonly used explainable AI methods provide some insight into opaque algorithms, such explanations are usually convoluted and not readily comprehensible except to highly trained experts. The explanation of decisions regarding the malignancy of skin lesions from dermoscopic images demands particular clarity, as the underlying medical problem definition is itself ambiguous. This work presents ExAID (Explainable AI for Dermatology), a novel framework for biomedical image analysis that provides multimodal concept-based explanations, consisting of easy-to-understand textual explanations supplemented by visual maps justifying the predictions. ExAID relies on Concept Activation Vectors to map human concepts to those learned by an arbitrary deep learning model in latent space, and on Concept Localization Maps to highlight those concepts in the input space. This identification of relevant concepts is then used to construct fine-grained textual explanations supplemented by concept-wise location information, yielding comprehensive and coherent multimodal explanations. All information is presented comprehensively in a diagnostic interface for use in clinical routine. An educational mode provides dataset-level explanation statistics and tools for data and model exploration to aid medical research and education. Through rigorous quantitative and qualitative evaluation of ExAID, we show the utility of multimodal explanations for CAD-assisted scenarios, even in cases of wrong predictions. We believe ExAID will provide dermatologists with an effective screening tool that they both understand and trust. Moreover, it will be the basis for similar applications in other biomedical imaging fields.
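As a minimal sketch of the Concept Activation Vector idea ExAID builds on (following the standard TCAV formulation, not ExAID's actual code), a CAV is the normal vector of a linear probe separating a concept's activations from random activations; the activation arrays below are synthetic stand-ins.

```python
# Computing a Concept Activation Vector (CAV): the weight vector of a
# linear probe separating concept activations from random ones (a sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
dim = 128  # width of the chosen hidden layer (assumed)
concept_acts = rng.normal(0.5, 1.0, size=(200, dim))   # e.g. "pigment network" images
random_acts = rng.normal(-0.5, 1.0, size=(200, dim))   # random counterexamples

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 200 + [0] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])  # unit concept direction

# Concept sensitivity of one input: project its activation onto the CAV.
activation = rng.normal(size=dim)
print("concept score:", float(activation @ cav))
```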
3D hand shape and pose estimation from a single depth map is a new and challenging computer vision problem with many applications. Existing methods regress the hand mesh directly through a 2D convolutional neural network, which leads to artifacts due to perspective distortions in the images. To address the limitations of existing methods, we develop HandVoxNet++, a voxel-based deep network with 3D and graph convolutions trained in a fully supervised manner. The input to our network is a 3D voxelized depth map based on the truncated signed distance function (TSDF). HandVoxNet++ relies on two hand shape representations. The first is a 3D voxelized grid of the hand shape, which does not preserve the mesh topology and is the most accurate representation. The second is the hand surface, which preserves the mesh topology. We synthesize the hand surface by aligning it to the voxelized hand shape either with a new neural graph-convolution-based mesh registration (GCN-MeshReg) or with a classical segment-wise non-rigid gravitational approach (NRGA++) that does not rely on training data. In extensive evaluations on three public benchmarks, i.e., SynHand5M, the depth-based HANDS19 challenge, and HO-3D, the proposed HandVoxNet++ achieves state-of-the-art performance. In this journal extension of our previous approach presented at CVPR 2020, we gain 41.09% and 13.7% higher shape alignment accuracy on the SynHand5M and HANDS19 datasets, respectively. As of the submission of our results to the portal in August 2020, our method ranked first on the HANDS19 challenge dataset (Task 1: Depth-Based 3D Hand Pose Estimation).
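A minimal sketch of TSDF voxelization of a depth map may help make the network input concrete (a generic formulation; the grid size, truncation band, and orthographic projection here are assumptions, not the paper's settings): each voxel stores its signed depth distance to the observed surface along the viewing direction, clipped to a truncation band.

```python
# Generic TSDF voxelization of a depth map (a sketch; grid resolution,
# truncation distance and the orthographic projection are assumptions).
import numpy as np

def depth_to_tsdf(depth, grid=32, trunc=0.05, z_near=0.2, z_far=0.8):
    """Map an HxW depth image (meters) to a grid^3 TSDF volume."""
    h, w = depth.shape
    # Sample the depth map at the voxel grid's (x, y) locations.
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid).astype(int)
    sampled = depth[np.ix_(ys, xs)]                      # (grid, grid)
    # Depth value represented by each voxel slab along z.
    z = np.linspace(z_near, z_far, grid)                 # (grid,)
    # Signed distance from voxel depth to observed surface, truncated.
    sdf = sampled[None, :, :] - z[:, None, None]         # (grid, grid, grid)
    return np.clip(sdf / trunc, -1.0, 1.0)

depth = np.full((240, 320), 0.5)   # flat synthetic "hand" at 0.5 m
tsdf = depth_to_tsdf(depth)
print(tsdf.shape, tsdf.min(), tsdf.max())
```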
The performance of Deep Learning (DL) models depends on the quality of labels. In some areas, the involvement of human annotators may introduce noise into the data, and when these corrupted labels are blindly regarded as the ground truth (GT), DL models suffer from degraded performance. This paper presents a method that aims to learn a confident model in the presence of noisy labels, done in conjunction with estimating the uncertainty of multiple annotators. We robustly estimate the predictions given only the noisy labels by adding an entropy- or information-based regularizer to the classifier network. We conduct our experiments on noisy versions of the MNIST, CIFAR-10, and FMNIST datasets. Our empirical results demonstrate the robustness of our method, which outperforms or performs comparably to other state-of-the-art (SOTA) methods. In addition, we evaluated the proposed method on a curated dataset in which the noise type and level of the various annotators depend on the input image style, and show that our approach performs well and is adept at learning annotators' confusion. Moreover, we demonstrate that our model is more confident in predicting the GT than other baselines. Finally, we assess our approach on a segmentation problem and showcase its effectiveness with experiments.
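The abstract does not give the exact objective; what follows is a minimal PyTorch sketch of the general idea, cross-entropy on the noisy labels plus an entropy penalty that pushes the classifier toward confident predictions, with a hypothetical weight `beta`.

```python
# Cross-entropy on noisy labels plus a confidence-encouraging entropy
# penalty (a sketch of the regularization idea, not the paper's exact loss).
import torch
import torch.nn.functional as F

def noisy_label_loss(logits, noisy_targets, beta=0.1):
    """beta weights the entropy regularizer (hypothetical value)."""
    ce = F.cross_entropy(logits, noisy_targets)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1).mean()
    return ce + beta * entropy  # minimizing entropy -> more confident predictions

logits = torch.randn(8, 10, requires_grad=True)   # batch of 8, 10 classes
noisy_targets = torch.randint(0, 10, (8,))
loss = noisy_label_loss(logits, noisy_targets)
loss.backward()
print(float(loss))
```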
Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible way to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has commonly been applied to two human partners working together to achieve a task, such as singing or moving a table, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for improving these systems by increasing the collaborative communication between the partners.
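As a toy illustration of the simplest controller compared here, proportional EMG control with sequential switching, a single antagonist pair drives the active joint's velocity and a co-contraction event cycles to the next joint. This is a schematic sketch; the thresholds, joint list, and switching rule are assumptions, and real controllers use calibrated, filtered EMG envelopes.

```python
# Toy proportional-EMG controller with sequential switching (a sketch).
JOINTS = ["hand_open_close", "wrist_rotate", "elbow_flex"]

class SequentialSwitchingController:
    def __init__(self, gain=1.0, switch_threshold=0.8):
        self.active = 0                  # index into JOINTS
        self.gain = gain                 # proportional gain (assumed)
        self.switch_threshold = switch_threshold

    def step(self, emg_a, emg_b):
        """emg_a/emg_b: normalized antagonist-pair amplitudes in [0, 1]."""
        if emg_a > self.switch_threshold and emg_b > self.switch_threshold:
            # Co-contraction: cycle to the next joint, no motion this step.
            self.active = (self.active + 1) % len(JOINTS)
            return JOINTS[self.active], 0.0
        # Otherwise drive the active joint proportionally to the net signal.
        velocity = self.gain * (emg_a - emg_b)
        return JOINTS[self.active], velocity

ctrl = SequentialSwitchingController()
print(ctrl.step(0.6, 0.1))   # moves the active joint
print(ctrl.step(0.9, 0.9))   # co-contraction: switches joints
```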
Current neural network models of dialogue generation (chatbots) show great promise for generating responses for conversational agents, but they are short-sighted: they predict utterances one at a time while disregarding their impact on future outcomes. Modeling a dialogue's future direction is critical for generating coherent, interesting dialogues, a need that has led traditional NLP dialogue models to rely on reinforcement learning. In this article, we explain how to combine these objectives by using deep reinforcement learning to predict future rewards in chatbot dialogue. The model simulates conversations between two virtual agents, with policy gradient methods used to reward sequences that exhibit three useful conversational properties: informativity, coherence, and ease of answering (related to the forward-looking function). We assess our model on diversity, length, and complexity relative to humans. In dialogue simulation, evaluations demonstrate that the proposed model generates more interactive responses and encourages more sustained, successful conversations. This work marks a preliminary step toward developing a neural conversational model based on the long-term success of dialogues.
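A minimal sketch of the policy-gradient (REINFORCE) update underlying this kind of setup may be useful; here a trivial token policy is rewarded with a mock diversity score, whereas the actual system would reward whole simulated dialogues for informativity, coherence, and ease of answering.

```python
# Toy REINFORCE update for a sequence policy (a sketch; the real system
# rewards simulated dialogues, not random tokens as here).
import torch

vocab_size, seq_len = 50, 5
logits = torch.zeros(vocab_size, requires_grad=True)  # trivial "policy"
optimizer = torch.optim.Adam([logits], lr=0.01)

def mock_reward(tokens):
    # Stand-in for dialogue-level rewards (informativity, coherence, ...).
    return float(len(set(tokens.tolist()))) / seq_len  # favor diverse tokens

for step in range(100):
    dist = torch.distributions.Categorical(logits=logits)
    tokens = dist.sample((seq_len,))          # sample an "utterance"
    log_prob = dist.log_prob(tokens).sum()    # log pi(utterance)
    reward = mock_reward(tokens)
    loss = -reward * log_prob                 # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

sample = torch.distributions.Categorical(logits=logits).sample((seq_len,))
print("final reward:", mock_reward(sample))
```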
In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph. HNN derives multiple embeddings per node, each conditioned on a specific hyperedge incident to that node. Notably, HNN is accurate, data-efficient, flexible with many interchangeable components, and useful for a wide range of hypergraph learning tasks. We evaluate the effectiveness of the HNN framework on hyperedge prediction and hypergraph node classification, finding that HNN achieves an overall mean gain of 7.72% and 11.37%, respectively, across all baseline models and graphs.
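A schematic numpy sketch of what "hyperedge-dependent node embeddings" can look like (our illustration of the concept, not HNN's actual update equations): each (node, hyperedge) incidence gets its own vector, formed here by combining the node's base features with the mean of its hyperedge's members.

```python
# Schematic hyperedge-dependent node embeddings (an illustration of the
# concept, not the HNN paper's update equations).
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim = 5, 8
X = rng.normal(size=(n_nodes, dim))            # base node features
hyperedges = [[0, 1, 2], [2, 3], [1, 3, 4]]    # toy hypergraph

W_self = rng.normal(size=(dim, dim)) * 0.1
W_edge = rng.normal(size=(dim, dim)) * 0.1

# One embedding per (node, hyperedge) incidence.
embeddings = {}
for e_id, members in enumerate(hyperedges):
    edge_repr = X[members].mean(axis=0)        # hyperedge summary
    for v in members:
        embeddings[(v, e_id)] = np.tanh(X[v] @ W_self + edge_repr @ W_edge)

# Node 2 has different embeddings in hyperedge 0 and hyperedge 1.
print(np.allclose(embeddings[(2, 0)], embeddings[(2, 1)]))  # False
```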
A "heart attack" or myocardial infarction (MI), occurs when an artery supplying blood to the heart is abruptly occluded. The "gold standard" method for imaging MI is Cardiovascular Magnetic Resonance Imaging (MRI), with intravenously administered gadolinium-based contrast (late gadolinium enhancement). However, no "gold standard" fully automated method for the quantification of MI exists. In this work, we propose an end-to-end fully automatic system (MyI-Net) for the detection and quantification of MI in MRI images. This has the potential to reduce the uncertainty due to the technical variability across labs and inherent problems of the data and labels. Our system consists of four processing stages designed to maintain the flow of information across scales. First, features from raw MRI images are generated using feature extractors built on ResNet and MoblieNet architectures. This is followed by the Atrous Spatial Pyramid Pooling (ASPP) to produce spatial information at different scales to preserve more image context. High-level features from ASPP and initial low-level features are concatenated at the third stage and then passed to the fourth stage where spatial information is recovered via up-sampling to produce final image segmentation output into: i) background, ii) heart muscle, iii) blood and iv) scar areas. New models were compared with state-of-art models and manual quantification. Our models showed favorable performance in global segmentation and scar tissue detection relative to state-of-the-art work, including a four-fold better performance in matching scar pixels to contours produced by clinicians.
The increasing popularity of deep-learning-powered applications raises the issue of the vulnerability of neural networks to adversarial attacks: hardly perceptible changes in the input data cause output errors, hindering the use of neural networks in applications that involve security-critical decisions. A number of previous works have already thoroughly evaluated the most commonly used configuration, Convolutional Neural Networks (CNNs), against different types of adversarial attacks, and recent works have demonstrated the transferability of some adversarial examples across different neural network models. This paper studies the robustness of newly emerging models, such as SpinalNet-based neural networks and Compact Convolutional Transformers (CCT), on the CIFAR-10 image classification problem. Each architecture was tested against four white-box attacks and three black-box attacks. Unlike the VGG and SpinalNet models, the attention-based CCT configuration demonstrated a large span between strong robustness and vulnerability to adversarial examples. Finally, we studied transferability between the VGG, VGG-inspired SpinalNet, and pretrained CCT 7/3x1 models, and showed that the high effectiveness of an attack on a particular individual model does not guarantee its transferability to other models.
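As a minimal sketch of one classic white-box attack from the family evaluated here, FGSM (the abstract does not list the exact attack set, and the model and epsilon below are placeholders):

```python
# Fast Gradient Sign Method (FGSM) white-box attack (a sketch; the paper
# does not specify its exact attacks, and this model is a placeholder).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy CIFAR-10 net

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

x = torch.rand(4, 3, 32, 32)          # stand-in CIFAR-10 batch
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())        # perturbation bounded by eps
```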