Recently, Transformer-based image restoration networks have achieved promising improvements over convolutional neural networks due to parameter-independent global interactions. To lower computational cost, existing works generally limit self-attention computation within non-overlapping windows. However, each group of tokens is always drawn from a dense area of the image. This can be considered a dense attention strategy, since token interactions are restricted to dense regions. Such a strategy inevitably leads to restricted receptive fields. To address this issue, we propose the Attention Retractable Transformer (ART) for image restoration, which incorporates both dense and sparse attention modules in the network. The sparse attention module allows tokens from sparse areas to interact, thus providing a wider receptive field. Furthermore, alternating dense and sparse attention modules greatly enhances the representation ability of the Transformer while providing retractable attention over the input image. We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks. Experimental results validate that our proposed ART outperforms state-of-the-art methods on various benchmark datasets both quantitatively and visually. Code and models are available at https://github.com/gladzhang/ART.
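To make the dense/sparse distinction concrete, below is a minimal sketch (our illustration, not the authors' code) of the two token-grouping schemes: dense groups hold contiguous window tokens, while sparse groups hold tokens sampled at a fixed interval so that each group spans the whole feature map. Standard multi-head self-attention would then be applied within each group, alternating the two partitions across blocks.

```python
import torch

def dense_partition(x, window):
    # Group contiguous window x window tokens: each attention group
    # covers one dense local region of the image.
    B, H, W, C = x.shape
    x = x.view(B, H // window, window, W // window, window, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window * window, C)

def sparse_partition(x, interval):
    # Group tokens sampled every `interval` positions: each attention group
    # spans the entire feature map, widening the receptive field.
    B, H, W, C = x.shape
    x = x.view(B, H // interval, interval, W // interval, interval, C)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(
        -1, (H // interval) * (W // interval), C)

# Example: a 16x16 map with window=4 and interval=4 yields 16 groups either
# way, but sparse groups gather tokens from across the whole map.
x = torch.randn(1, 16, 16, 64)
print(dense_partition(x, 4).shape, sparse_partition(x, 4).shape)
```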
The detection of mitotic cells is a key feature for tumor diagnosis. However, due to the variability of mitotic cell morphology, detecting mitotic cells in tumor tissue is a highly challenging task. At the same time, although advanced deep learning methods have achieved great success in cell detection, their performance is often unsatisfactory when the test data come from a different domain (i.e., different tumor types and different scanners). Therefore, it is necessary to develop algorithms for detecting mitotic cells that are robust to domain shift. Our work further proposes foreground detection and tumor classification tasks on top of a baseline (RetinaNet), and leverages data augmentation to improve the model's domain generalization performance. We achieve state-of-the-art performance (F1 score: 0.5809) on the challenging preliminary test dataset.
Neural networks have been expanding rapidly in recent years, with novel strategies and applications. However, challenges such as interpretability, explainability, robustness, safety, trust, and sensibility remain unresolved in neural network technologies, despite the fact that they will inevitably have to be addressed for critical applications. Attempts have been made to overcome the challenges in neural network computing by representing and embedding domain knowledge in terms of symbolic representations. Thus, the concept of neuro-symbolic learning (NeSyL) has emerged, which incorporates aspects of symbolic representation and brings common sense into neural networks. In domains where interpretability, reasoning, and explainability are crucial, such as video and image captioning, question answering and reasoning, health informatics, and genomics, NeSyL has shown promising results. This review presents a comprehensive survey of state-of-the-art NeSyL approaches, their principles, advances in machine and deep learning algorithms, applications such as ophthalmology, and, most importantly, future perspectives of this emerging field.
With the development of computational pathology, deep learning methods for Gleason grading from whole slide images (WSIs) have promising prospects. Since the size of WSIs is extremely large, image annotations usually contain only slide-level labels or limited pixel-level labels. Current mainstream approaches adopt multiple instance learning to predict Gleason grades. However, some methods only consider slide-level labels, ignoring the limited pixel-level labels that contain rich local information. Furthermore, methods that additionally consider pixel-level labels ignore their inaccuracy. To address these problems, we propose a mixed-supervision Transformer based on the multiple instance learning framework. The model simultaneously uses slide-level labels and instance-level labels to achieve more accurate Gleason grading at the slide level. The influence of inaccurate instance-level labels is further reduced by introducing an efficient random masking strategy into the mixed-supervision training process. We achieve state-of-the-art performance on the SICAPv2 dataset, and visual analysis shows accurate prediction results at the instance level. The source code is available at https://github.com/bianhao123/mixed_supervision.
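The random masking idea lends itself to a short sketch. Below is a guess (not the paper's code) at how the influence of noisy instance-level supervision might be limited: each training step, a random subset of the instance labels is dropped from the loss; `mask_ratio` is a hypothetical hyperparameter.

```python
import torch
import torch.nn.functional as F

def masked_instance_loss(instance_logits, instance_labels, mask_ratio=0.5):
    # Drop a random fraction of the (possibly inaccurate) instance-level
    # labels each training step, limiting their influence on the model.
    n = instance_labels.shape[0]
    keep = torch.rand(n, device=instance_labels.device) >= mask_ratio
    if not keep.any():                        # keep at least one label
        keep[torch.randint(n, (1,))] = True
    return F.cross_entropy(instance_logits[keep], instance_labels[keep])
```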
Hyperspectral imaging is an essential imaging modality for a wide range of applications, especially in remote sensing, agriculture, and medicine. Since existing hyperspectral cameras can be slow, expensive, or bulky, reconstructing hyperspectral images (HSIs) from a low-budget snapshot measurement has drawn wide attention. By mapping a truncated numerical optimization algorithm into a network with a fixed number of phases, recent deep unfolding networks (DUNs) for spectral snapshot compressive sensing (SCI) have achieved remarkable success. However, DUNs are far from reaching the scope of industrial applications, limited by the lack of cross-phase interaction and adaptive parameter adjustment. In this paper, we propose a novel hyperspectral explicable reconstruction and optimal sampling deep network for SCI, dubbed HerosNet, which includes several phases under the ISTA-unfolding framework. Each phase can flexibly simulate the sensing matrix and contextually adjust the step size in the gradient descent step, and hierarchically fuse and interact with the hidden states of previous phases to effectively recover the current HSI frame in the proximal mapping step. Simultaneously, a hardware-friendly optimal binary mask is learned end-to-end to further improve the reconstruction performance. Finally, our HerosNet is validated to outperform state-of-the-art methods on both simulation and real datasets by large margins.
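The gradient descent and proximal mapping steps the abstract refers to come from the classic ISTA iteration x^(k+1) = prox(x^(k) - ρ Φ^T(Φ x^(k) - y)). A minimal sketch of one unfolded phase follows; it shows only this generic structure and omits HerosNet's cross-phase state fusion and context-adaptive step sizes.

```python
import torch
import torch.nn as nn

class ISTAPhase(nn.Module):
    """One generic unfolded ISTA phase: gradient step + learned proximal map.
    Illustrative sketch only; HerosNet adds cross-phase fusion and adaptive
    step sizes on top of this structure."""
    def __init__(self, bands):
        super().__init__()
        self.rho = nn.Parameter(torch.tensor(0.5))   # learnable step size
        self.prox = nn.Sequential(                   # learned proximal operator
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, bands, 3, padding=1),
        )

    def forward(self, x, y, Phi):
        # x: (B, bands, H, W) current estimate; y: (B, H, W) snapshot
        # measurement; Phi: (bands, H, W) per-pixel coded-aperture mask.
        residual = (Phi * x).sum(dim=1) - y              # Phi x - y
        z = x - self.rho * Phi * residual.unsqueeze(1)   # gradient step
        return z + self.prox(z)                          # proximal step
```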
Multiple instance learning (MIL) is a powerful tool for solving weakly supervised classification in whole slide image (WSI) based pathology diagnosis. However, current MIL methods are usually based on the independent and identically distributed assumption, thus neglecting the correlations among different instances. To address this problem, we propose a new framework, called correlated MIL, and provide a proof of convergence. Based on this framework, we devise a Transformer-based MIL (TransMIL), which explores both morphological and spatial information. The proposed TransMIL can effectively handle unbalanced/balanced and binary/multiple classification with good visualization and interpretability. We conducted various experiments on three different computational pathology problems and achieved better performance and faster convergence compared with state-of-the-art methods. The test AUC for binary tumor classification reaches up to 93.09% on the CAMELYON16 dataset. The AUC for cancer subtype classification reaches up to 96.03% and 98.82% on the TCGA-NSCLC and TCGA-RCC datasets, respectively. The implementation is available at: https://github.com/szc19990412/TransMIL.
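As a rough sketch of why self-attention suits correlated MIL: letting instance embeddings attend to one another replaces the i.i.d. pooling of classic MIL with explicit instance interactions. The toy head below is our simplification; it uses a class token for the slide-level prediction and omits TransMIL's spatial encoding module.

```python
import torch
import torch.nn as nn

class ToyTransformerMIL(nn.Module):
    # Minimal Transformer MIL head: instances interact via self-attention,
    # and a class token aggregates them into a slide-level prediction.
    def __init__(self, dim=512, heads=8, num_classes=2):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, instances):            # (B, N, dim) patch features
        cls = self.cls_token.expand(instances.shape[0], -1, -1)
        tokens = self.encoder(torch.cat([cls, instances], dim=1))
        return self.head(tokens[:, 0])       # slide-level logits
```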
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
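Finding 1), preferring token relations over CLS/feature targets, can be sketched as matching softmax-normalized token-to-token affinity maps between student and teacher. The snippet below is an illustrative guess at such a loss; the tensor shapes and the exact relation set (e.g., Q·K^T versus V·V^T) are assumptions.

```python
import torch
import torch.nn.functional as F

def relation_distill_loss(student_q, student_k, teacher_q, teacher_k):
    # Match token-to-token affinities (here Q.K^T relations) between a
    # student layer and an intermediate teacher layer via a KL objective.
    # Inputs: (B, heads, N, head_dim) attention projections.
    s_rel = student_q @ student_k.transpose(-2, -1) / student_q.shape[-1] ** 0.5
    t_rel = teacher_q @ teacher_k.transpose(-2, -1) / teacher_q.shape[-1] ** 0.5
    return F.kl_div(F.log_softmax(s_rel, dim=-1),
                    F.softmax(t_rel, dim=-1), reduction="batchmean")
```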
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
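One plausible reading of the implicit alignment: both image tokens and point-cloud tokens receive position embeddings computed from 3D coordinates (e.g., points sampled along each pixel's camera ray, or voxel centers), so cross-modal attention can match them without an explicit view transform. The encoder below is our hypothetical illustration of that idea, not CMT's actual module.

```python
import torch
import torch.nn as nn

class Points3DPosEncoder(nn.Module):
    # Map a set of 3D points attached to each token to a shared embedding
    # space, giving image and point-cloud tokens comparable position codes.
    def __init__(self, num_points=8, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * num_points, dim), nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
        )

    def forward(self, points):               # (N_tokens, num_points, 3)
        return self.mlp(points.flatten(1))   # (N_tokens, dim)
```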
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
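To illustrate where the two attacks act: NAIVEATTACK stamps a fixed trigger onto raw images once, before distillation begins, whereas DOORPING re-optimizes the trigger throughout distillation. The sketch below shows only the NAIVEATTACK-style poisoning step; the patch placement, poison rate, and function name are illustrative assumptions.

```python
import torch

def poison_before_distillation(images, labels, trigger, target_class,
                               poison_rate=0.1):
    # Stamp a fixed trigger patch onto a random fraction of the raw data and
    # relabel it, *before* the dataset distillation procedure runs.
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(len(images))[:int(poison_rate * len(images))]
    th, tw = trigger.shape[-2:]
    images[idx, :, -th:, -tw:] = trigger     # bottom-right corner trigger
    labels[idx] = target_class
    return images, labels
```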
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate distortion patterns across different scales and aggravate the difficulty of the regression problem for BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression task following the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the experimental results indicate that PMT-IQA outperforms the comparison approaches, and that both the MS and PMT modules improve the model's performance.
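The easy-to-hard idea can be sketched as a loss schedule that starts on an easier auxiliary task and gradually shifts weight to the harder quality-score regression. The linear schedule and the task pair below are our assumptions, not the paper's exact scheme.

```python
def progressive_task_weights(epoch, total_epochs):
    # Shift loss weight from an easy auxiliary task (e.g., coarse quality
    # classification) to the hard score-regression task as training proceeds.
    t = epoch / max(total_epochs - 1, 1)
    return 1.0 - t, t   # (easy-task weight, hard-task weight)

# usage: w_easy, w_hard = progressive_task_weights(epoch, 100)
#        loss = w_easy * aux_loss + w_hard * regression_loss
```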