Recent advanced research on connected vehicles has targeted the integration of vehicle-to-everything (V2X) networks with machine learning (ML) tools and distributed decision making. Federated learning (FL) is emerging as a new paradigm for training ML models, with the vehicles in a V2X network acting as local learners. Instead of sharing and uploading training data to a server, only model parameter updates (e.g., neural network weights and biases) are exchanged by the large population of interconnected vehicles. Despite these benefits, a limitation of existing approaches is centralized optimization, which relies on a server to aggregate and fuse the local parameters; this introduces the drawbacks of a single point of failure and scaling problems as the V2X network size increases. Meanwhile, in intelligent transportation scenarios, the data collected from on-board sensors are redundant, which degrades the performance of aggregation. To address these problems, we explore the novel idea of decentralized data processing and introduce a federated learning framework for in-network vehicles, C-DFL (Consensus-based Decentralized Federated Learning), to tackle federated learning on connected vehicles and improve learning quality. Extensive simulations have been implemented to evaluate the performance of C-DFL, showing that C-DFL outperforms conventional methods in all cases.
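The serverless aggregation idea behind consensus-based decentralized federated learning can be illustrated with a minimal sketch: each vehicle repeatedly mixes its local model parameters with those of its neighbors via a doubly stochastic mixing matrix, so all nodes converge toward the network-wide average without a central server. The ring topology, mixing weights, and function names below are illustrative assumptions, not C-DFL's exact protocol.

```python
import numpy as np

def consensus_round(params, W):
    """One consensus step: params[i] <- sum_j W[i, j] * params[j]."""
    return W @ params

# Ring of 4 vehicles, each averaging with its two neighbors.
# W is doubly stochastic, so the network-wide average is preserved.
W = np.array([
    [0.5 , 0.25, 0.0 , 0.25],
    [0.25, 0.5 , 0.25, 0.0 ],
    [0.0 , 0.25, 0.5 , 0.25],
    [0.25, 0.0 , 0.25, 0.5 ],
])

params = np.array([[1.0], [2.0], [3.0], [4.0]])  # one scalar weight per vehicle
for _ in range(50):
    params = consensus_round(params, W)

# All vehicles converge toward the global average (2.5) with no server,
# so there is no single point of failure in the aggregation step.
print(np.round(params.ravel(), 3))
```

In a full FL round, each vehicle would first take local gradient steps on its own data and then run a few such consensus steps on the updated parameters.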
Translated by Google Translate (谷歌翻译)
Magnetic resonance spectroscopic imaging (MRSI) is an essential tool for quantifying metabolites in vivo, but low spatial resolution limits its clinical applications. Deep learning-based super-resolution methods have provided promising results for improving the spatial resolution of MRSI, but the super-resolved images are often blurry compared with experimentally acquired high-resolution images. Attempts have been made with generative adversarial networks to improve image visual quality. In this work, we consider another type of generative model, the flow-based model, whose training is more stable and interpretable compared with adversarial networks. Specifically, we propose a flow-based enhancer network to improve the visual quality of super-resolution MRSI. Different from previous flow-based models, our enhancer network incorporates anatomical information from an additional image modality (MRI) and uses a learnable base distribution. In addition, we impose a guide loss and a data-consistency loss to encourage the network to generate images with high visual quality while maintaining high fidelity. Experiments on a 1H-MRSI dataset acquired from 25 high-grade glioma patients indicate that our enhancer network outperforms the adversarial networks and the baseline flow-based method. Our method also allows visual-quality adjustment and uncertainty estimation.
Magnetic resonance spectroscopic imaging (MRSI) is a valuable tool for studying metabolic activities in the human body, but current applications are limited to low spatial resolution. Existing deep learning-based MRSI super-resolution methods require training a separate network for each upscaling factor, which is time-consuming and memory-inefficient. We tackle this multi-scale super-resolution problem using a filter-scaling strategy that modulates the convolutional filters based on the upscaling factor, so that a single network can be used for various upscaling factors. Observing that each metabolite has distinct spatial characteristics, we also modulate the network based on the specific metabolite. Furthermore, our network is conditioned on the weight of the adversarial loss, so that the perceptual sharpness of the super-resolved metabolic maps can be adjusted within a single network. We incorporate these network conditionings using a novel multi-conditional module. Experiments were conducted on a 1H-MRSI dataset from 15 high-grade glioma patients. Results show that the proposed network achieves the best performance among several multi-scale super-resolution methods and can provide super-resolved metabolic maps with adjustable sharpness.
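The filter-scaling idea can be sketched minimally: a single shared convolution filter bank is modulated by a condition vector (derived from the upscaling factor, the metabolite, and the adversarial-loss weight), so one set of stored parameters yields many effective filter sets. The per-channel embeddings below are illustrative placeholders, not the paper's learned conditioning module.

```python
import numpy as np

def modulate_filters(filters, condition):
    """Scale each output channel of a shared filter bank by a condition factor.

    filters:   (out_ch, in_ch, k, k) shared weights
    condition: (out_ch,) per-channel modulation, e.g. produced by a small MLP
               from [upscaling factor, metabolite id, adversarial weight].
    """
    return filters * condition[:, None, None, None]

rng = np.random.default_rng(0)
shared = rng.standard_normal((8, 1, 3, 3))  # one filter bank stored once

w_x2 = modulate_filters(shared, np.full(8, 1.0))  # condition for 2x upscaling
w_x3 = modulate_filters(shared, np.full(8, 1.5))  # condition for 3x upscaling

# Same stored parameters, two effective filter sets.
print(w_x2.shape, np.allclose(w_x3, 1.5 * w_x2))
```

This is why a single network suffices: memory holds one filter bank, and the condition vector specializes it per scale and per metabolite at run time.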
Many medical datasets have recently been created for medical image segmentation tasks, and it is natural to ask whether we can use them to sequentially train a single model that (1) performs better on all these datasets and (2) generalizes well and transfers better to unknown target-site domains. Previous works have achieved this goal by jointly training one model on multi-site datasets, which achieves competitive performance on average, but such methods rely on the assumption that all training data are available, limiting their effectiveness in practical deployment. In this paper, we propose a novel multi-site segmentation framework called incremental transfer learning (ITL), which learns a model from multi-site datasets in an end-to-end sequential fashion. Specifically, "incremental" refers to training on sequentially constructed datasets, while "transfer" is achieved by leveraging useful information from the linear combination of embedded features on each dataset. In addition, we introduce our ITL framework, in which we train a network comprising a site-agnostic encoder with pretrained weights and at most two segmentation decoder heads. We also design a novel site-level incremental loss to generalize well to the target domain. Furthermore, we show for the first time that leveraging our ITL training scheme is able to alleviate catastrophic forgetting, a challenging problem in incremental learning. We conduct experiments on five challenging benchmark datasets to validate the effectiveness of our incremental transfer learning approach. Our method makes minimal assumptions about computational resources and domain-specific expertise, and hence constitutes a strong starting point for multi-site medical image segmentation.
Transformers have made remarkable progress towards modeling long-range dependencies within the medical image analysis domain. However, current transformer-based models suffer from several disadvantages: (1) existing methods fail to capture the important features of the images due to the naive tokenization scheme; (2) the models suffer from information loss because they only consider single-scale feature representations; and (3) the segmentation label maps generated by the models are not accurate enough without considering rich semantic contexts and anatomical textures. In this work, we present CASTformer, a novel type of adversarial transformers, for 2D medical image segmentation. First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations. We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures. Lastly, we utilize an adversarial training strategy that boosts segmentation accuracy and correspondingly allows a transformer-based discriminator to capture high-level semantically correlated contents and low-level anatomical features. Our experiments demonstrate that CASTformer dramatically outperforms previous state-of-the-art transformer-based approaches on three benchmarks, obtaining 2.54%-5.88% absolute improvements in Dice over previous models. Further qualitative experiments provide a more detailed picture of the model's inner workings, shed light on the challenges in improved transparency, and demonstrate that transfer learning can greatly improve performance and reduce the size of medical image datasets in training, making CASTformer a strong starting point for downstream medical image analysis tasks.
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes these GNNs computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a light-weighted model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
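The distillation objective such frameworks build on can be sketched as follows: the student matches the teacher's temperature-softened class distribution via a KL term. This is a minimal sketch of standard KD only; RELIANT's fairness-specific regularizers are not reproduced here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax along the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[4.0, 1.0, 0.0]])
print(kd_loss(teacher.copy(), teacher))                   # identical logits -> 0
print(kd_loss(np.array([[0.0, 1.0, 4.0]]), teacher) > 0)  # mismatch -> positive
```

A fairness-aware variant would add a debiasing term to this loss so the student does not simply inherit the teacher's biased soft labels.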
Despite significant progress in object categorization in recent years, a number of important challenges remain: mainly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address problems of supervised, zero-shot, generalized zero-shot and open set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to 310K class vocabulary on the Animals with Attributes and ImageNet datasets.
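The distance constraint described above can be sketched as a hinge penalty: an embedded sample should be closer to its correct class prototype than to any other vocabulary atom, by a margin. This simplified, unweighted formulation is an illustrative assumption; the paper's weighted maximum-margin framework on the semantic manifold is more elaborate.

```python
import numpy as np

def margin_violation(x, prototypes, y, margin=1.0):
    """Sum of hinge penalties for vocabulary atoms that beat the true prototype.

    x:          (d,) embedded sample
    prototypes: (K, d) vocabulary atoms (supervised and unsupervised classes)
    y:          index of the correct prototype
    """
    d = np.linalg.norm(prototypes - x, axis=1)
    # penalize every wrong atom closer than (true distance + margin)
    penalties = np.maximum(0.0, margin + d[y] - np.delete(d, y))
    return float(penalties.sum())

protos = np.array([[0.0, 0.0], [5.0, 0.0]])
print(margin_violation(np.array([0.0, 0.0]), protos, y=0))  # well separated: 0.0
print(margin_violation(np.array([2.6, 0.0]), protos, y=0))  # near the boundary: > 0
```

Minimizing such violations over labeled data pulls embeddings toward their prototypes while the unsupervised vocabulary atoms shape the open portion of the label space.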
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weight of data adaptively, a lack of research on explicitly quantifying the data quality of each view when fusing them renders these models inexplicable, and they perform unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value which describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
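The uncertainty computation at the heart of this approach can be sketched with subjective-logic quantities: non-negative network outputs are treated as Dirichlet evidence e, so with K classes alpha = e + 1, S = sum(alpha), and the view's uncertainty is u = K / S. The simple (1 - u)-weighted fusion below is a stand-in assumption for the paper's decision-level strategy, and the evidence values are illustrative.

```python
import numpy as np

def view_uncertainty(evidence):
    """Return (uncertainty, expected class probabilities) for one view."""
    K = evidence.shape[-1]
    alpha = evidence + 1.0          # Dirichlet parameters
    S = alpha.sum(-1)               # Dirichlet strength
    return K / S, alpha / S[..., None]

aerial_evidence = np.array([9.0, 1.0, 0.0])  # confident aerial view
ground_evidence = np.array([0.4, 0.3, 0.3])  # weak, ambiguous ground view

u_a, p_a = view_uncertainty(aerial_evidence)
u_g, p_g = view_uncertainty(ground_evidence)

# Decision-level fusion: the lower-risk view receives more weight.
fused = (1 - u_a) * p_a + (1 - u_g) * p_g
fused /= fused.sum()

print(u_a < u_g)       # aerial view has lower decision risk
print(fused.argmax())  # decision dominated by the confident view
```

Because u shrinks as total evidence grows, a view that produces little evidence automatically contributes less to the fused decision, which is what makes the fusion both credible and interpretable.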
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the ability of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and improving ICL in future work.
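The ICL setup described above can be made concrete with a minimal sketch: predictions come from a prompt that concatenates a few demonstrations with the query, with no parameter updates to the LLM. The template, task, and examples below are illustrative assumptions; real ICL work studies how demonstration selection, ordering, and formatting affect accuracy.

```python
def build_icl_prompt(demonstrations, query):
    """Concatenate (input, label) demonstrations with the unlabeled query."""
    lines = []
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")  # model completes the label
    return "\n".join(lines)

demos = [
    ("A delightful film from start to finish.", "positive"),
    ("Two hours I will never get back.", "negative"),
]
prompt = build_icl_prompt(demos, "An instant classic.")
print(prompt)
```

The resulting string would then be sent to an LLM, whose completion after the final "Sentiment:" serves as the prediction; nothing in the model's weights changes between queries.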