This paper constructs a novel clustering technique based on a hypergraph convolutional neural network. The technique is applied to the clustering problem on the Citeseer and CORA datasets. Each dataset consists of a feature matrix and a hypergraph incidence matrix (constructed from the feature matrix). The proposed clustering method exploits both matrices: first, a hypergraph autoencoder transforms the incidence matrix and the feature matrix from a high-dimensional space to a low-dimensional space; then, the k-means clustering technique is applied to the transformed matrices. In our experiments, the clustering technique based on the hypergraph convolutional neural network (CNN) achieved better results than other classical clustering techniques.
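The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the trained hypergraph autoencoder is stood in for by a fixed random projection, and all dataset shapes are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Illustrative shapes: 100 nodes, 32 features, 20 hyperedges, 3 clusters.
n_nodes, n_features, n_hyperedges, latent_dim, n_clusters = 100, 32, 20, 8, 3

X = rng.normal(size=(n_nodes, n_features))             # feature matrix
H = rng.integers(0, 2, size=(n_nodes, n_hyperedges))   # hypergraph incidence matrix

# Stage 1: map both matrices to a low-dimensional space. A trained
# hypergraph autoencoder would supply this encoder; here a random linear
# map with a tanh non-linearity stands in for it.
Z = np.hstack([X, H])
W = rng.normal(size=(Z.shape[1], latent_dim))
embedding = np.tanh(Z @ W)

# Stage 2: apply k-means clustering to the transformed matrix.
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embedding)
```

Real datasets such as Citeseer or CORA would additionally supply ground-truth labels for evaluating the clustering.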
With the development of technology, more and more cases of brain disease are being detected, more treatment methods have been proposed, and positive results have been achieved. For brain tumors, however, early diagnosis improves the chance of successful treatment and helps patients recover better. For this reason, brain tumor classification is one of the most actively discussed topics in medical image analysis today. With improvements in network architectures, many methods have been proposed and have obtained competitive scores. In this paper, we propose a technique that applies EfficientNet to 3D images, in particular EfficientNet-B0, to the brain tumor classification task, and it achieves a competitive score. In addition, we propose a method that uses a multi-scale EfficientNet to classify slices of MRI data.
One of the most actively discussed research areas in medical image processing is 3D CT scanning. With the rapid spread of COVID-19, the ability of CT scans to diagnose the disease correctly and quickly has become crucial, and it has a positive impact on preventing infection. Many diagnostic tasks rely on CT-scan images, including COVID-19 detection. In this paper, we propose a method that uses stacked deep neural networks to detect COVID-19 from a series of 3D CT-scan images. In our method, we experiment with two backbones, DenseNet-121 and ResNet-101. The method achieves competitive performance on several evaluation metrics.
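Combining two backbone classifiers over a series of CT slices can be sketched as below. This is a simplified illustration under our own assumptions: the DenseNet-121 and ResNet-101 outputs are faked with random softmax scores, and the two backbones are merged by simple averaging rather than a trained stacking meta-learner.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slices, n_classes = 16, 2   # e.g. COVID / non-COVID, per CT slice


def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


# Stand-ins for per-slice class probabilities from the two backbones;
# real DenseNet-121 / ResNet-101 models would produce these.
p_densenet = softmax(rng.normal(size=(n_slices, n_classes)))
p_resnet = softmax(rng.normal(size=(n_slices, n_classes)))

# Merge the backbones per slice, then pool over the whole scan.
p_scan = np.mean((p_densenet + p_resnet) / 2, axis=0)
prediction = int(np.argmax(p_scan))
```

A stacking approach as described in the abstract would replace the fixed average with a second-level model trained on the backbones' outputs.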
Hardware-based acceleration is a widely pursued approach to speeding up many compute-intensive mathematical operations. This paper proposes an FPGA-based architecture to accelerate the convolution operation, a complex and expensive computational step that appears in many convolutional neural network models. We target the design at the standard convolution operation, intending to launch the product as an edge-AI solution. The goal of the project is to produce an FPGA IP core that can process one convolutional layer at a time. System developers can deploy the IP core, which uses Verilog HDL as the main design language of the architecture. Experimental results show that a single compute core synthesized on a simple edge-computing FPGA board can deliver 0.224 GOPS. When the board is fully utilized, 4.48 GOPS can be achieved.
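A back-of-the-envelope check of the throughput figures quoted above: the per-core and full-board numbers come from the abstract, while the implied core count and the example convolution layer are our own assumptions for illustration.

```python
# Quoted figures from the abstract.
single_core_gops = 0.224   # one synthesized convolution core
full_board_gops = 4.48     # board fully utilized

# Number of parallel cores implied by the ratio of the two figures.
implied_cores = full_board_gops / single_core_gops   # ≈ 20

# Operation count of a hypothetical 3x3 convolution layer with 3 input
# channels, 16 output channels, and a 32x32 output map (multiplies and
# adds counted separately, hence the factor of 2).
k, c_in, c_out, h, w = 3, 3, 16, 32, 32
layer_ops = 2 * k * k * c_in * c_out * h * w

# Estimated latency of that layer on a single core, in milliseconds.
latency_ms = layer_ops / (single_core_gops * 1e9) * 1e3
```

Under these assumptions, one core would process the example layer in roughly 4 ms, so the roughly twentyfold parallelism accounts for the full-board figure.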
This paper presents a novel version of the hypergraph neural network method, used to solve the noisy-label learning problem. First, we apply the PCA dimensionality-reduction technique to the feature matrices of the image datasets in order to reduce the "noise" and redundant features they contain. Then, the classical semi-supervised learning method, the classical hypergraph-based semi-supervised learning method, the graph neural network, the hypergraph neural network, and our proposed hypergraph neural network are used to solve the noisy-label learning problem. The accuracies of these five methods are evaluated and compared. Experimental results show that the hypergraph neural network methods achieve the best performance as the noise level increases. Moreover, the hypergraph neural network methods perform at least as well as the graph neural network.
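The PCA preprocessing step described above can be sketched as follows. This is a hedged illustration with invented shapes, not the authors' code: PCA projects the feature matrix onto its leading principal components, discarding low-variance directions treated as noise or redundancy before the graph-based learners are run.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for an image dataset's feature matrix: 200 samples, 64 features.
X = rng.normal(size=(200, 64))

# Keep the 16 highest-variance directions; the rest are dropped as
# "noisy" / redundant features.
pca = PCA(n_components=16, random_state=0)
X_reduced = pca.fit_transform(X)
```

The reduced matrix `X_reduced` would then be the input to each of the five compared methods; the number of retained components here is an arbitrary choice for illustration.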
Here, we demonstrate how machine learning enables the prediction of comonomer reactivity ratios based on the molecular structure of the monomers. We combine multi-task learning, multiple inputs, and a Graph Attention Network to build a model capable of predicting reactivity ratios from the monomers' chemical structures.
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game play. Surprisingly, these complex systems with massive numbers of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon, known as "Neural Collapse," was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions of the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove the occurrence of Neural Collapse for deep linear networks under the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our analysis to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse under this setting.
Machine Reading Comprehension has become one of the most advanced and popular research topics in Natural Language Processing in recent years. The classification of answerability questions is a fairly significant sub-task in machine reading comprehension, yet it has received relatively little study. Retro-Reader is one of the works that has solved this problem effectively. However, the encoders of most traditional machine reading comprehension models in general, and of Retro-Reader in particular, have not been able to fully exploit the contextual semantic information of the passage. Inspired by SemBERT, we use semantic role labels from the SRL task to add semantics to pre-trained language models such as mBERT, XLM-R, and PhoBERT. This experiment was conducted to compare the influence of semantics on answerability classification for Vietnamese machine reading comprehension. Additionally, we hope this experiment will enhance the encoder of the Retro-Reader model's Sketchy Reading Module. The improved Retro-Reader encoder with semantics was first applied to the Vietnamese machine reading comprehension task and obtained positive results.
RTE (Recognizing Textual Entailment) is a significant problem with a reasonably active research community. The proposed approaches to this problem are quite diverse, spanning many different directions. For Vietnamese, the RTE problem is relatively new, but it plays a vital role in natural language understanding systems. Currently, methods based on contextual word representation learning models have given outstanding results. However, Vietnamese is a semantically rich language. Therefore, in this paper, we present an experiment combining semantic word representation, obtained through the SRL task, with the contextual representation of BERT-related models for the RTE problem. The experimental results lead to conclusions about the influence and role of semantic representation in Vietnamese natural language understanding. They show that the semantics-aware contextual representation model performs about 1% better than the model without semantic representation. In addition, the effect across data domains is also larger for Vietnamese than for English. This result further shows the positive influence of SRL on the RTE problem in Vietnamese.
To the best of our knowledge, this paper makes the first attempt to answer whether word segmentation is necessary for Vietnamese sentiment classification. To do this, we present five pre-trained monolingual S4-based language models for Vietnamese: one model without word segmentation and four models using the RDRsegmenter, uitnlp, pyvi, or underthesea toolkits in the data pre-processing phase. Based on comprehensive experimental results on two corpora, the VLSP2016-SA corpus of technical article reviews from news and social media and the UIT-VSFC corpus of educational surveys, we make two suggestions. First, with traditional classifiers such as Naive Bayes or Support Vector Machines, word segmentation may not be necessary for a Vietnamese sentiment classification corpus that comes from the social domain. Second, word segmentation is necessary for Vietnamese sentiment classification when it is applied before the BPE method and before feeding the data into the deep learning model. In that setting, RDRsegmenter is the most stable word segmentation toolkit among RDRsegmenter, uitnlp, pyvi, and underthesea.