Asynchronous event sequences are ubiquitous in nature and human activity, for example earthquake records and user activity on social media. How to distill knowledge from such seemingly chaotic data has long been a focus of research. One of the most useful tools is the point process model, on the basis of which researchers have obtained many notable results. In recent years, neural point process models, especially those based on recurrent neural networks (RNNs), have been proposed and have substantially outperformed classical models. Inspired by the Transformer, which learns sequential data effectively without recurrent or convolutional structures, the Transformer Hawkes Process (THP) emerged and achieved state-of-the-art performance. However, several studies have shown that introducing recursive computation into the Transformer can further improve its performance. We therefore propose a new Transformer Hawkes process model, the Universal Transformer Hawkes Process (UTHP), which combines a recurrence mechanism with self-attention and, to improve the model's local perception, introduces a convolutional neural network (CNN) into the position-wise feed-forward part. We conduct experiments on several datasets to validate the effectiveness of UTHP and to explore the changes brought about by the recurrence mechanism. These experiments on multiple datasets show that our proposed model achieves a clear improvement over previous state-of-the-art models.
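For reference, the classical Hawkes process that these neural models generalize defines a conditional intensity as a base rate plus exponentially decaying excitation from past events. A minimal sketch of that textbook form (not the UTHP architecture itself; parameter values are illustrative):

```python
import math

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.0):
    """Conditional intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    mu is the base rate, alpha the excitation jump per event, beta the decay rate."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

# Each past event temporarily raises the intensity above the base rate mu.
events = [1.0, 2.5, 3.0]
print(hawkes_intensity(4.0, events))  # > 0.5, since recent events still excite
```

Neural Hawkes models such as UTHP replace this fixed parametric form with an intensity computed from a learned hidden representation of the history.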
In recent years, learning from the asynchronous sequences generated by Hawkes processes has attracted much attention, and neural Hawkes processes, especially those based on recurrent neural networks (RNNs), have gradually become one of the most actively studied areas. However, these models still inherit some intrinsic shortcomings of RNNs, such as vanishing and exploding gradients and the long-term dependency problem. Meanwhile, the self-attention-based Transformer has achieved great success in sequential modeling tasks such as text processing and speech recognition. Although the Transformer Hawkes Process (THP) has brought substantial performance gains, it does not make effective use of the temporal information in asynchronous events: in such sequences, the occurrence time of an event is as important as its type, yet the conventional THP simply converts the time information into positional encodings and adds them to the Transformer's input. With this in mind, we propose a new Transformer-based Hawkes process model, the Temporal Attention Augmented Transformer Hawkes Process (TAA-THP). We modify the conventional dot-product attention structure and introduce temporal encodings into the attention structure. We conduct numerous experiments on a wide range of synthetic and real-life datasets to validate the performance of our proposed TAA-THP model, achieving significant improvements over existing baseline models on different metrics, including log-likelihood on the test datasets and the accuracy of predicting event types and occurrence times. In addition, through an ablation study, we vividly demonstrate the merit of introducing the additional temporal attention by comparing the performance of the model with and without it.
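A toy sketch of the general idea of injecting temporal information into the attention scores themselves rather than only into the input: here the scores are dot products of queries/keys plus dot products of temporal encodings. The exact TAA-THP formulation differs; names and shapes below are illustrative assumptions.

```python
import numpy as np

def temporal_attention(Q, K, V, T_q, T_k):
    """Toy dot-product attention augmented with a temporal term:
    scores = (Q K^T + T_q T_k^T) / sqrt(d), then row-wise softmax.
    T_q, T_k are temporal encodings of the query/key event times."""
    d = Q.shape[-1]
    scores = (Q @ K.T + T_q @ T_k.T) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))             # 3 query events, model dim 4
K = rng.normal(size=(5, 4)); V = rng.normal(size=(5, 4))
T_q = rng.normal(size=(3, 4)); T_k = rng.normal(size=(5, 4))
out = temporal_attention(Q, K, V, T_q, T_k)
print(out.shape)  # (3, 4)
```

Adding the temporal term to the scores lets event timing directly modulate which past events each query attends to, instead of being washed into the input embedding.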
In this paper, we use Hawkes processes to model failure sequences, i.e., failure events at compressor stations, and perform survival analysis on the various failure events of the compressor stations. However, almost all Hawkes point processes in the related literature so far assume that the base intensity of the conditional intensity function is time-invariant. This assumption is clearly too restrictive to hold in practice. For example, in real applications including financial analysis, reliability analysis, survival analysis, and social network analysis, the base intensity of the true conditional intensity function is very likely to be time-varying; a constant base intensity cannot reflect a base probability of failure that changes over time. To address this problem, we propose a new time-varying base intensity, drawn, for example, from the Weibull distribution. We first introduce the Weibull-based base intensity and then propose an effective learning algorithm based on the maximum likelihood estimator. Experiments on synthetic data with constant base intensity, synthetic data with time-varying base intensity, and real-world data show that our method can simultaneously and robustly learn both the triggering patterns of the Hawkes process and the time-varying base intensity. The experiments on real-world data reveal the Granger causality between different kinds of failures and the base probabilities of failure changing over time.
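The time-varying base intensity described above can be sketched with the standard Weibull hazard form, combined with an exponential triggering kernel. This is a minimal illustration of the modeling idea, not the paper's estimator; parameter values are assumptions.

```python
import math

def weibull_base_intensity(t, shape=1.5, scale=10.0):
    """Weibull hazard-style base intensity mu(t) = (k/s) * (t/s)^(k-1).
    shape k > 1 gives a failure rate that increases with time (aging);
    k < 1 gives a decreasing one (infant mortality); k = 1 is constant."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def hawkes_intensity_tv(t, history, alpha=0.3, beta=1.0, shape=1.5, scale=10.0):
    """Hawkes conditional intensity with the time-varying Weibull base."""
    excitation = sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)
    return weibull_base_intensity(t, shape, scale) + excitation

# With shape > 1, the base failure probability grows over time even
# without any triggering events in the history.
print(weibull_base_intensity(5.0) < weibull_base_intensity(20.0))  # True
```

Setting shape = 1 recovers the constant base intensity assumed by most prior Hawkes models, which is why the Weibull form strictly generalizes them.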
Most real-world data we encounter are asynchronous event sequences, so the past few decades have seen various point processes applied in the fields of social networks, electronic medical records, and financial transactions. Early on, the Hawkes process and its variants, which can model the self-triggering and mutual-triggering patterns among different events in complex sequences in a clear and quantitative way, were the more popular choice. Later on, with the development of neural networks, neural Hawkes processes were proposed one after another and gradually became a research hotspot. The Transformer Hawkes Process (THP) brought a great leap in performance and thus set off a new wave of Transformer-based neural Hawkes processes. However, THP does not make full use of the information on occurrence times and event types in asynchronous event sequences; it merely adds the encoding of event types and the positional encoding converted from time to the source encoding. At the same time, a learner built from a single Transformer will inevitably be biased. To alleviate these problems, we propose a Tri-Transformer Hawkes Process (Tri-THP) model, in which event and time information are added to the dot-product attention as auxiliary information to form a new multi-head attention. The effectiveness of Tri-THP is demonstrated by a series of well-designed experiments on both real-world and synthetic data.
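An illustrative sketch of the two ingredients named above: dot-product attention scores augmented with an additive auxiliary term (event-type or time encodings), and predictions from several such learners combined to reduce single-model bias. The averaging rule and all names here are assumptions, not the paper's exact formulation.

```python
import numpy as np

def attention_with_aux(Q, K, V, aux_q=None, aux_k=None):
    """Dot-product attention whose scores optionally include an
    additive auxiliary term (e.g. event-type or time encodings)."""
    d = Q.shape[-1]
    scores = Q @ K.T
    if aux_q is not None and aux_k is not None:
        scores = scores + aux_q @ aux_k.T
    scores /= np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 8))                       # 4 query events, dim 8
K = rng.normal(size=(6, 8)); V = rng.normal(size=(6, 8))
E_q, E_k = rng.normal(size=(4, 8)), rng.normal(size=(6, 8))  # event-type encodings
T_q, T_k = rng.normal(size=(4, 8)), rng.normal(size=(6, 8))  # time encodings
heads = [
    attention_with_aux(Q, K, V),            # plain dot-product
    attention_with_aux(Q, K, V, E_q, E_k),  # event-type-augmented
    attention_with_aux(Q, K, V, T_q, T_k),  # time-augmented
]
ensemble = np.mean(heads, axis=0)           # combine the three learners
print(ensemble.shape)  # (4, 8)
```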
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
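A minimal sketch of what "distilling token relations" can look like: build row-softmaxed token-to-token similarity maps for teacher and student and penalize their divergence. The soft cross-entropy loss and temperature are illustrative choices, not necessarily TinyMIM's exact objective.

```python
import numpy as np

def token_relation_loss(teacher_tokens, student_tokens, tau=1.0):
    """Distill token relations: compare the teacher's and student's
    row-softmaxed token-to-token similarity matrices with a soft
    cross-entropy (illustrative choice of divergence)."""
    def relation(tokens):
        sim = tokens @ tokens.T / (np.sqrt(tokens.shape[-1]) * tau)
        e = np.exp(sim - sim.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    t, s = relation(teacher_tokens), relation(student_tokens)
    return -np.mean(np.sum(t * np.log(s + 1e-8), axis=-1))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(16, 32))  # 16 tokens, teacher feature dim 32
student = rng.normal(size=(16, 32))  # student features projected to the same dim
print(token_relation_loss(teacher, student) >= 0.0)  # True (cross-entropy)
```

Matching relation maps rather than raw features sidesteps the dimension mismatch between teacher and student and transfers *how tokens attend to each other*, which the paper's findings favor over CLS- or feature-based targets.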
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT retains strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
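The NAIVEATTACK variant described above amounts to stamping a fixed trigger into the raw data before distillation begins. A minimal sketch of that injection step (patch size, value, and placement are illustrative; DOORPING would instead keep updating the trigger inside the distillation loop):

```python
import numpy as np

def add_trigger(images, trigger_value=1.0, size=3):
    """NAIVEATTACK-style poisoning: stamp a small square patch into the
    bottom-right corner of each image prior to dataset distillation.
    Returns a poisoned copy; the original batch is left untouched."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = trigger_value
    return poisoned

imgs = np.zeros((4, 32, 32))  # toy grayscale batch
poisoned = add_trigger(imgs)
print(poisoned[0, -1, -1], imgs[0, -1, -1])  # 1.0 0.0 (original untouched)
```

Because the trigger enters before distillation, it gets baked into the synthetic dataset itself, which is what lets the backdoor survive into any model later trained on the distilled data.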
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate distortion patterns across different scales and aggravate the difficulty of the regression problem for BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, mirroring the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the experimental results indicate that the performance of PMT-IQA is superior to the comparison approaches, and that both the MS and PMT modules improve the model's performance.
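One simple way to realize an easy-to-hard progressive multi-task schedule is to shift loss weight over training from an easier auxiliary task toward the main quality-regression task. The linear schedule below is purely an assumption for illustration, not the exact PMT rule from the paper.

```python
def progressive_weights(epoch, total_epochs):
    """Illustrative easy-to-hard schedule: weight starts on the easier
    auxiliary task and shifts linearly to the main regression task."""
    ratio = epoch / max(total_epochs - 1, 1)
    w_main = ratio        # grows toward 1.0
    w_aux = 1.0 - ratio   # decays toward 0.0
    return w_main, w_aux

def total_loss(main_loss, aux_loss, epoch, total_epochs):
    """Combine the two task losses under the progressive schedule."""
    w_main, w_aux = progressive_weights(epoch, total_epochs)
    return w_main * main_loss + w_aux * aux_loss

print(progressive_weights(0, 10))  # (0.0, 1.0): start fully on the easy task
print(progressive_weights(9, 10))  # (1.0, 0.0): end fully on regression
```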
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, even more so for drum grooves, which have little precedent in the literature. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
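The first insight above, dynamic class centers from support masks re-weighting query features, can be sketched as masked average pooling followed by similarity-based re-weighting. This is a simplified illustration; the actual RefT modules are more elaborate, and all names here are assumptions.

```python
import numpy as np

def masked_class_center(support_feats, support_mask):
    """Masked average pooling: average support features inside the
    object mask to obtain a dynamic class center vector."""
    m = support_mask.reshape(-1, 1)                          # (HW, 1) binary mask
    f = support_feats.reshape(-1, support_feats.shape[-1])   # (HW, C)
    return (m * f).sum(axis=0) / max(m.sum(), 1)

def reweight_query(query_feats, center):
    """Feature-level enhancement sketch: boost query locations whose
    features are cosine-similar to the class center."""
    q = query_feats.reshape(-1, query_feats.shape[-1])
    sim = q @ center / (np.linalg.norm(q, axis=-1) * np.linalg.norm(center) + 1e-8)
    return q * (1.0 + sim[:, None])  # amplify center-like locations

rng = np.random.default_rng(0)
support = rng.normal(size=(8, 8, 16))        # H x W x C support features
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1  # toy support object mask
center = masked_class_center(support, mask)
query = rng.normal(size=(8, 8, 16))
print(reweight_query(query, center).shape)   # (64, 16)
```

The second, instance-level reference (linking support object queries to query-image object queries via cross-attention) would then operate on the outputs of a DETR-style decoder rather than on dense feature maps.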