In this paper, we consider the problem of bit allocation in neural video compression (NVC). Due to the frame reference structure, current NVC methods that use the same rate-distortion (R-D) trade-off parameter $\lambda$ for all frames are suboptimal, which brings the need for bit allocation. Unlike previous methods based on heuristic and empirical R-D models, we propose to solve this problem through gradient-based optimization. Specifically, we first propose a continuous bit implementation method based on semi-amortized variational inference (SAVI). Then, by changing the SAVI target, we propose a pixel-level implicit bit allocation method using iterative optimization. Furthermore, we derive a precise R-D model based on the differentiable nature of NVC. We show the optimality of our method by proving its equivalence to bit allocation with the precise R-D model. Experimental results show that our method significantly improves NVC methods and outperforms existing bit allocation approaches. Our approach is a plug-in for all differentiable NVC methods and can be adopted directly on existing pre-trained models.
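To make the idea concrete, below is a minimal sketch of gradient-based (SAVI-style) bit allocation over a group of frames. It assumes a hypothetical differentiable codec exposing `encode`, `decode`, and `rate`; the interface, optimizer, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def savi_bit_allocation(codec, frames, lam=0.01, steps=100, lr=1e-3):
    """Refine per-frame latents by descending the total R-D cost of a GOP.

    `codec` is assumed (hypothetically) to provide:
      encode(frame, ref)   -> initial latent from amortized inference
      rate(latent)         -> differentiable bit-rate estimate
      decode(latent, ref)  -> reconstruction, used as the next frame's reference
    Optimizing the whole group jointly lets bits spent on one frame account
    for their effect on later, dependent frames.
    """
    latents, ref = [], None
    for frame in frames:  # amortized initialization
        y = codec.encode(frame, ref).detach().requires_grad_(True)
        latents.append(y)
        ref = codec.decode(y, ref)

    opt = torch.optim.Adam(latents, lr=lr)
    for _ in range(steps):  # semi-amortized refinement of the latents
        opt.zero_grad()
        cost, ref = 0.0, None
        for frame, y in zip(frames, latents):
            recon = codec.decode(y, ref)
            cost = cost + codec.rate(y) + lam * torch.mean((recon - frame) ** 2)
            ref = recon  # later frames reference the refined reconstruction
        cost.backward()
        opt.step()
    return latents
```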
Recovering high-quality surfaces from noisy point clouds, known as point cloud denoising, is a fundamental yet challenging problem in geometry processing. Most existing methods either directly denoise the noisy input or filter the raw normals and then update the point positions. Motivated by the essential interplay between point cloud denoising and normal filtering, we revisit point cloud denoising from a multi-task perspective and propose an end-to-end network, named PCDNF, that denoises point clouds via joint normal filtering. In particular, we introduce an auxiliary normal filtering task to help the overall network remove noise more effectively while preserving geometric features more accurately. In addition to the overall architecture, our network has two novel modules. On the one hand, to improve denoising performance, we design a shape-aware selector that constructs point-specific latent tangent space representations by comprehensively considering learned point features, normal features, and geometric priors. On the other hand, point features are better suited to describing geometric details, while normal features are more conducive to representing geometric structures (e.g., edges and corners). Combining point and normal features allows us to overcome their respective weaknesses; we therefore design a feature refinement module that fuses point and normal features to better recover geometric information. Extensive evaluations, comparisons, and ablation studies demonstrate that the proposed method outperforms state-of-the-art approaches in both point cloud denoising and normal filtering.
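As a rough illustration of the multi-task idea, the sketch below combines a point-position term with an auxiliary normal term in a single training loss. The plain L2 and cosine terms and the weight `alpha` are assumptions for illustration, not PCDNF's exact objective.

```python
import torch

def joint_denoise_loss(pred_points, gt_points, pred_normals, gt_normals, alpha=0.5):
    """Point-position term plus an auxiliary normal-filtering term.

    pred_points/gt_points:   (B, N, 3) point coordinates.
    pred_normals/gt_normals: (B, N, 3) unit normals.
    """
    # Position term: squared distance between denoised and clean points
    # (a stand-in for more elaborate matching losses such as Chamfer distance).
    pos_loss = torch.mean(torch.sum((pred_points - gt_points) ** 2, dim=-1))
    # Normal term: penalize angular deviation via 1 - |cosine similarity|.
    cos = torch.sum(pred_normals * gt_normals, dim=-1).abs()
    normal_loss = torch.mean(1.0 - cos)
    return pos_loss + alpha * normal_loss
```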
Learned image compression has achieved extraordinary rate-distortion performance in terms of PSNR and MS-SSIM compared with traditional methods. However, it suffers from intensive computation, which is intolerable for real-world applications and currently limits its industrial adoption. In this paper, we introduce neural architecture search (NAS) to design a more efficient network with lower latency, and leverage quantization to accelerate inference. Meanwhile, engineering efforts have been made to improve efficiency. Optimized with a hybrid loss of PSNR and MS-SSIM for better visual quality, we obtain a much higher MS-SSIM than JPEG, JPEG XL, and AVIF at all bit rates, and a PSNR between that of JPEG XL and AVIF. The software implementation of our LIC achieves comparable or even faster inference speed than JPEG-Turbo, while being several times faster than JPEG XL and AVIF. Furthermore, our LIC implementation attains a striking throughput of 145 fps for encoding and 208 fps for decoding 1080p images on a Tesla T4 GPU. On CPU, the latency of our implementation is comparable with JPEG XL.
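A minimal sketch of a hybrid rate-distortion loss mixing an MSE (PSNR-oriented) term with an MS-SSIM term is shown below. It relies on the third-party `pytorch_msssim` package, and the mixing weights `lmbda` and `alpha` are assumptions, not the authors' training configuration.

```python
import torch
from pytorch_msssim import ms_ssim  # third-party package, used here for illustration

def hybrid_rd_loss(x, x_hat, bpp, lmbda=0.01, alpha=0.5):
    """Rate plus a weighted mix of MSE and (1 - MS-SSIM) distortion terms.

    x, x_hat: images in [0, 1] of shape (B, C, H, W); bpp: estimated bits per pixel.
    lmbda trades rate against distortion; alpha mixes the two distortion terms.
    """
    mse = torch.mean((x - x_hat) ** 2)
    msssim_term = 1.0 - ms_ssim(x_hat, x, data_range=1.0, size_average=True)
    distortion = (1.0 - alpha) * mse + alpha * msssim_term
    return bpp + lmbda * distortion
```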
Thanks to its powerful feature learning capability and high efficiency, deep hashing has achieved great success in large-scale image retrieval. Meanwhile, extensive works have demonstrated that deep neural networks (DNNs) are susceptible to adversarial examples, and exploring adversarial attacks against deep hashing has attracted many research efforts. However, backdoor attacks, another well-known threat to DNNs, have not yet been studied in depth for deep hashing. Although various backdoor attacks have been proposed in the field of image classification, existing methods fail to realize a truly imperceptible backdoor attack that enjoys invisible triggers and a clean-label setting simultaneously, and they also fail to meet the intrinsic demands of image retrieval backdoors. In this paper, we propose BadHash, the first generative-based imperceptible backdoor attack against deep hashing, which can effectively generate invisible, input-specific poisoned images with clean labels. Specifically, we first propose a new conditional generative adversarial network (cGAN) pipeline to effectively generate poisoned samples. For any given benign image, it seeks to generate a natural-looking poisoned counterpart with a unique invisible trigger. To improve attack effectiveness, we introduce a label-based contrastive learning network, LabCLN, to exploit the semantic features of different labels, which are subsequently used to confuse and mislead the target model into learning the embedded trigger. We finally explore the mechanism of backdoor attacks on image retrieval in hash space. Extensive experiments on multiple benchmark datasets verify that BadHash can generate imperceptible poisoned samples with strong attack ability and transferability against state-of-the-art deep hashing schemes. Primary Subject Area: [Engagement] Multimedia Search and Recommendation.
This paper proposes a semi-automatic system based on the quantitative characterization of specific image patterns in lung ultrasound (LUS) images, to assess the lung condition of patients with COVID-19 pneumonia and to distinguish severe from non-severe cases. Specifically, four parameters are extracted from each LUS image, namely the thickness (TPL) and roughness (RPL) of the pleural line, and the accumulated width (AWBL) and acoustic coefficient (ACBL) of B-lines. Twenty-seven patients were enrolled in this study and divided into 13 moderate patients, 7 severe patients, and 7 critical patients. The severe and critical patients were regarded as severe cases, and the moderate patients were regarded as non-severe cases. Biomarkers were compared between the different groups. Each single biomarker, as well as a classifier taking all biomarkers as input, was used for the binary diagnosis of severe versus non-severe cases. The classifier achieved the best classification performance among all compared methods (area under the receiver operating characteristic curve = 0.93, sensitivity = 0.93, specificity = 0.85). The proposed image analysis system could potentially be applied to the grading and prognostic evaluation of patients with COVID-19 pneumonia.
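The classification step can be illustrated with a small sketch that feeds the four biomarkers (TPL, RPL, AWBL, ACBL) to a binary classifier and reports the area under the ROC curve. Logistic regression, 5-fold cross-validation, and the synthetic placeholder data are assumptions, since the abstract does not specify the classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Feature matrix: one row per LUS image, columns = [TPL, RPL, AWBL, ACBL].
# Synthetic placeholder values; real features come from the image analysis stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)  # 1 = severe case, 0 = non-severe case

clf = LogisticRegression(max_iter=1000)
# With a 27-patient cohort a leave-one-out scheme would be natural; 5-fold is shown here.
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, scores))
```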
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the goals of optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g., statistical power) but substantially better in earning (e.g., direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
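A hedged simulation sketch of index-based adaptive allocation with exponential rewards is shown below. The index used is a simple stand-in (posterior mean reward plus an exploration bonus) rather than the actual Gittins index the paper modifies, and the Gamma prior and horizon are assumptions.

```python
import numpy as np

def adaptive_allocation(true_rates, horizon=300, seed=0):
    """Allocate participants one at a time to the arm with the highest index.

    Rewards are exponential with arm-specific rates; a Gamma(a, b) posterior on
    each rate is updated after every observation. The index here is a crude
    placeholder, NOT the Gittins index addressed in the paper.
    """
    rng = np.random.default_rng(seed)
    k = len(true_rates)
    a = np.ones(k)   # Gamma shape (prior)
    b = np.ones(k)   # Gamma rate (prior)
    n = np.zeros(k)  # allocation counts
    total_reward = 0.0
    for t in range(1, horizon + 1):
        post_mean_reward = b / a                       # rough estimate of E[reward]
        bonus = np.sqrt(2 * np.log(t) / np.maximum(n, 1))
        arm = int(np.argmax(post_mean_reward + bonus))
        reward = rng.exponential(scale=1.0 / true_rates[arm])
        a[arm] += 1.0
        b[arm] += reward
        n[arm] += 1
        total_reward += reward
    return total_reward, n

print(adaptive_allocation([1.0, 0.5, 2.0]))  # a 3-armed example
```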
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer with limited medical data, we propose an auxiliary difficulty ranking task, in which the Transformer must identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer is encouraged to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
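The online/target mechanism can be sketched generically as below, assuming an arbitrary backbone. The exponential-moving-average target update and negative-cosine loss follow the standard BYOL recipe; BOLT's Transformer-specific token perturbations and difficulty-ranking head are omitted, so this is only a structural sketch.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnlineTarget(nn.Module):
    """Generic online/target (BYOL-style) pair; BOLT's backbone, token
    perturbations, and difficulty-ranking head are not reproduced here."""

    def __init__(self, backbone: nn.Module, dim: int, momentum: float = 0.99):
        super().__init__()
        self.online = backbone
        self.predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.target = copy.deepcopy(backbone)
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.m = momentum

    @torch.no_grad()
    def update_target(self):
        # Exponential moving average of online weights into the target branch.
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.mul_(self.m).add_(po, alpha=1.0 - self.m)

    def loss(self, view_a, view_b):
        # Online branch predicts the target branch's representation of the
        # differently perturbed view; negative cosine similarity.
        # Callers can symmetrize by also evaluating loss(view_b, view_a).
        p = F.normalize(self.predictor(self.online(view_a)), dim=-1)
        with torch.no_grad():
            z = F.normalize(self.target(view_b), dim=-1)
        return 2.0 - 2.0 * (p * z).sum(dim=-1).mean()
```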
Text clustering and topic extraction are two important tasks in text mining. Usually, these two tasks are performed separately. For topic extraction to facilitate clustering, we can first project texts into a topic space and then apply a clustering algorithm to obtain clusters. To promote topic extraction by clustering, we can first obtain clusters with a clustering algorithm and then extract cluster-specific topics. However, this naive strategy ignores the fact that text clustering and topic extraction are strongly correlated and follow a chicken-and-egg relationship; performing them separately fails to let them mutually benefit each other and achieve the best overall performance. In this paper, we propose an unsupervised text clustering and topic extraction framework (ClusTop) which integrates text clustering and topic extraction into a unified framework and can achieve high-quality clustering results while extracting topics from each cluster simultaneously. Our framework includes four components: enhanced language model training, dimensionality reduction, clustering, and topic extraction, where the enhanced language model can be viewed as a bridge between clustering and topic extraction. On the one hand, it provides text embeddings with a strong cluster structure, which facilitates effective text clustering; on the other hand, thanks to its self-attention architecture, it pays close attention to topic-related words, which benefits topic extraction. Moreover, the training of the enhanced language model is unsupervised. Experiments on two datasets demonstrate the effectiveness of our framework and provide benchmarks for different model combinations within this framework.
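A simplified pipeline in the spirit of the four components is sketched below, with TF-IDF features standing in for the enhanced language model, truncated SVD for dimensionality reduction, KMeans for clustering, and per-cluster mean TF-IDF weights for topic words. Every component choice here is an assumption for illustration, not the ClusTop implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def cluster_and_extract_topics(texts, n_clusters=5, n_topic_words=10):
    """Embed -> reduce -> cluster -> extract cluster-specific topic words."""
    vec = TfidfVectorizer(stop_words="english", max_features=20000)
    X = vec.fit_transform(texts)                       # stand-in for the language model
    Z = TruncatedSVD(n_components=50, random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(Z)

    terms = np.array(vec.get_feature_names_out())
    topics = {}
    for c in range(n_clusters):
        rows = np.where(labels == c)[0]
        centroid = np.asarray(X[rows].mean(axis=0)).ravel()
        topics[c] = terms[np.argsort(centroid)[::-1][:n_topic_words]].tolist()
    return labels, topics
```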
This paper presents the technologies behind user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain that models the historical behaviors of users, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.
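A generic sketch of next-intent prediction over a sequence of behavior/concept IDs is given below. The model, vocabulary, and the way knowledge-graph concepts enter (simply as token IDs) are assumptions for illustration and do not reflect AlipayKG's production architecture or its expert-rule integration.

```python
import torch
import torch.nn as nn

class NextIntentModel(nn.Module):
    """Transformer encoder over a user's recent behavior/concept token IDs,
    predicting the next intent from the final position's representation."""

    def __init__(self, vocab_size: int, n_intents: int, dim: int = 128, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, n_intents)

    def forward(self, behavior_ids):            # (B, T) integer token IDs
        h = self.encoder(self.embed(behavior_ids))
        return self.head(h[:, -1])               # logits over next-intent classes

# Usage: logits = NextIntentModel(10000, 300)(torch.randint(0, 10000, (8, 20)))
```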
Capturing feature information effectively is of great importance in vision tasks. With the development of convolutional neural networks (CNNs), concepts like residual connections and multiple scales have promoted continual performance gains on diverse deep learning vision tasks. However, existing methods do not organically combine the advantages of these effective ideas. In this paper, we propose a novel CNN architecture called GoogLe2Net, which consists of residual feature-reutilization inceptions (ResFRI) or split residual feature-reutilization inceptions (Split-ResFRI). These create transverse passages between adjacent groups of convolutional layers so that features can flow to later processing branches, and possess residual connections to better process information. Our GoogLe2Net is able to reutilize information captured by preceding groups of convolutional layers and express multi-scale features at a fine-grained level, which improves performance in image classification. The proposed inception can be embedded directly into inception-like networks without any migration cost. Moreover, in experiments on popular vision datasets, such as CIFAR10 (97.94%), CIFAR100 (85.91%), and Tiny ImageNet (70.54%), we obtain better results on the image classification task compared with other modern models.
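A hedged sketch of a residual feature-reutilization block is given below: parallel convolutional groups with transverse passages that feed each group's output into the next group, plus a residual shortcut around the whole block. The channel splits and layer choices are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResFRIBlock(nn.Module):
    """Sketch of a residual feature-reutilization inception-style block."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0, "channels must be divisible by groups"
        self.groups = groups
        self.branch_ch = channels // groups
        self.split = nn.Conv2d(channels, channels, kernel_size=1)
        self.branches = nn.ModuleList()
        for i in range(groups):
            # Later branches also receive the previous branch's output
            # (the "transverse passage"), doubling their input channels.
            in_ch = self.branch_ch if i == 0 else 2 * self.branch_ch
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_ch, self.branch_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(self.branch_ch),
                nn.ReLU(inplace=True),
            ))
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        feats = torch.chunk(self.split(x), self.groups, dim=1)
        outs, prev = [], None
        for i, branch in enumerate(self.branches):
            inp = feats[i] if prev is None else torch.cat([feats[i], prev], dim=1)
            prev = branch(inp)
            outs.append(prev)
        # Residual shortcut over the fused multi-branch features.
        return x + self.fuse(torch.cat(outs, dim=1))
```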